WorldWideScience

Sample records for intarna efficient prediction

  1. IntaRNA 2.0: enhanced and customizable prediction of RNA-RNA interactions.

    Science.gov (United States)

    Mann, Martin; Wright, Patrick R; Backofen, Rolf

    2017-07-03

    The IntaRNA algorithm enables fast and accurate prediction of RNA-RNA hybrids by incorporating seed constraints and interaction site accessibility. Here, we introduce IntaRNAv2, which enables enhanced parameterization as well as fully customizable control over the prediction modes and output formats. Based on up-to-date benchmark data, the enhanced predictive quality is shown and further improvements due to more restrictive seed constraints are highlighted. The extended web interface provides visualizations of the new minimal energy profiles for RNA-RNA interactions. These allow a detailed investigation of interaction alternatives and can reveal potential interaction site multiplicity. IntaRNAv2 is freely available (source and binary), and distributed via the conda package manager. Furthermore, it has been included in the Galaxy workflow framework and its already established web interface enables ad hoc usage. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. IntaRNA 2.0: enhanced and customizable prediction of RNA–RNA interactions

    Science.gov (United States)

    Mann, Martin; Wright, Patrick R.

    2017-01-01

    The IntaRNA algorithm enables fast and accurate prediction of RNA–RNA hybrids by incorporating seed constraints and interaction site accessibility. Here, we introduce IntaRNAv2, which enables enhanced parameterization as well as fully customizable control over the prediction modes and output formats. Based on up-to-date benchmark data, the enhanced predictive quality is shown and further improvements due to more restrictive seed constraints are highlighted. The extended web interface provides visualizations of the new minimal energy profiles for RNA–RNA interactions. These allow a detailed investigation of interaction alternatives and can reveal potential interaction site multiplicity. IntaRNAv2 is freely available (source and binary), and distributed via the conda package manager. Furthermore, it has been included in the Galaxy workflow framework and its already established web interface enables ad hoc usage. PMID:28472523

  3. Freiburg RNA Tools: a web server integrating INTARNA, EXPARNA and LOCARNA.

    Science.gov (United States)

    Smith, Cameron; Heyne, Steffen; Richter, Andreas S; Will, Sebastian; Backofen, Rolf

    2010-07-01

    The Freiburg RNA tools web server integrates three tools for the advanced analysis of RNA in a common web-based user interface. The tools IntaRNA, ExpaRNA and LocARNA support the prediction of RNA-RNA interaction, exact RNA matching and alignment of RNA, respectively. The Freiburg RNA tools web server and the software packages of the stand-alone tools are freely accessible at http://rna.informatik.uni-freiburg.de.

  4. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  5. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  6. Relationship between efficiency and predictability in stock price change

    Science.gov (United States)

    Eom, Cheoljun; Oh, Gabjin; Jung, Woo-Sung

    2008-09-01

    In this study, we evaluate the relationship between efficiency and predictability in the stock market. The efficiency, which is the issue addressed by the weak-form efficient market hypothesis, is calculated using the Hurst exponent and the approximate entropy (ApEn). The predictability corresponds to the hit-rate; this is the rate of consistency between the direction of the actual price change and that of the predicted price change, as calculated via the nearest neighbor prediction method. We determine that the Hurst exponent and the ApEn value are negatively correlated. However, predictability is positively correlated with the Hurst exponent.
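
    As a concrete illustration of the rescaled-range calculation behind the Hurst exponent used above, here is a minimal Python sketch; the synthetic return series and window sizes are arbitrary choices for illustration, not data or settings from the study.

    ```python
    import numpy as np

    def hurst_rs(series, window_sizes=(16, 32, 64, 128, 256)):
        """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S) analysis."""
        x = np.asarray(series, dtype=float)
        log_n, log_rs = [], []
        for n in window_sizes:
            rs = []
            for start in range(0, len(x) - n + 1, n):
                w = x[start:start + n]
                dev = np.cumsum(w - w.mean())      # cumulative deviation from the window mean
                r = dev.max() - dev.min()          # range of the cumulative deviation
                s = w.std(ddof=1)                  # sample standard deviation
                if s > 0:
                    rs.append(r / s)
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs)))
        # The slope of log(R/S) versus log(n) is the Hurst exponent estimate.
        return np.polyfit(log_n, log_rs, 1)[0]

    rng = np.random.default_rng(42)
    returns = rng.normal(size=4096)                # uncorrelated returns -> H close to 0.5
    print(f"Hurst exponent estimate: {hurst_rs(returns):.3f}")
    ```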

  7. Towards Predicting Efficient and Anonymous Tor Circuits

    OpenAIRE

    Barton, Armon; Imani, Mohsen; Ming, Jiang; Wright, Matthew

    2018-01-01

    The Tor anonymity system provides online privacy for millions of users, but it is slower than typical web browsing. To improve Tor performance, we propose PredicTor, a path selection technique that uses a Random Forest classifier trained on recent measurements of Tor to predict the performance of a proposed path. If the path is predicted to be fast, then the client builds a circuit using those relays. We implemented PredicTor in the Tor source code and show through live Tor experiments and Sh...
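
    The prediction step described above maps onto a standard Random Forest classifier. The sketch below is only a hedged illustration with invented per-path features and a toy labelling rule; it is not the PredicTor implementation or its feature set.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Invented per-path features: total relay bandwidth, bottleneck bandwidth, recent latency.
    n_paths = 2000
    X = np.column_stack([
        rng.gamma(2.0, 5.0, n_paths),      # total bandwidth (MB/s)
        rng.gamma(2.0, 2.0, n_paths),      # bottleneck bandwidth (MB/s)
        rng.exponential(0.4, n_paths),     # mean recent latency (s)
    ])
    # Toy labelling rule: a path is "fast" when its bottleneck is wide and latency is low.
    y = ((X[:, 1] > 3.0) & (X[:, 2] < 0.5)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

    # A client would only build a circuit from a candidate path predicted to be fast.
    candidate = [[25.0, 6.0, 0.2]]
    print("build circuit" if clf.predict(candidate)[0] == 1 else "resample path")
    ```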

  8. Computationally Efficient Prediction of Ionic Liquid Properties

    DEFF Research Database (Denmark)

    Chaban, V. V.; Prezhdo, O. V.

    2014-01-01

    Due to fundamental differences, room-temperature ionic liquids (RTIL) are significantly more viscous than conventional molecular liquids and require long simulation times. At the same time, RTILs remain in the liquid state over a much broader temperature range than the ordinary liquids. We exploit...... to ambient temperatures. We numerically prove the validity of the proposed concept for density and ionic diffusion of four different RTILs. This simple method enhances the computational efficiency of the existing simulation approaches as applied to RTILs by more than an order of magnitude....

  9. Numerical prediction of Pelton turbine efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Jošt, D; Mežnar, P; Lipej, A, E-mail: dragicajost@turboinstitut.s [Turboinštitut, Rovšnikova 7, Ljubljana, 1210 (Slovenia)]

    2010-08-15

    This paper presents a numerical analysis of flow in a 2 jet Pelton turbine with horizontal axis. The analysis was done for the model at several operating points in different operating regimes. The results were compared to the results of a test of the model. Analysis was performed using ANSYS CFX-12.1 computer code. A k-ω SST turbulent model was used. Free surface flow was modelled by two-phase homogeneous model. At first, a steady state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided us with data on flow energy losses in the distributor and the shape and velocity of jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from pressure distribution data. Averaged torque values are smaller than measured ones. Consequently, calculated turbine efficiency is also smaller than the measured values, the difference is about 4 %. The shape of the efficiency diagram conforms well to the measurements.

  10. Numerical prediction of Pelton turbine efficiency

    Science.gov (United States)

    Jošt, D.; Mežnar, P.; Lipej, A.

    2010-08-01

    This paper presents a numerical analysis of flow in a 2 jet Pelton turbine with horizontal axis. The analysis was done for the model at several operating points in different operating regimes. The results were compared to the results of a test of the model. Analysis was performed using ANSYS CFX-12.1 computer code. A k-ω SST turbulent model was used. Free surface flow was modelled by two-phase homogeneous model. At first, a steady state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided us with data on flow energy losses in the distributor and the shape and velocity of jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from pressure distribution data. Averaged torque values are smaller than measured ones. Consequently, calculated turbine efficiency is also smaller than the measured values, the difference is about 4 %. The shape of the efficiency diagram conforms well to the measurements.

  11. Numerical prediction of Pelton turbine efficiency

    International Nuclear Information System (INIS)

    Jošt, D; Mežnar, P; Lipej, A

    2010-01-01

    This paper presents a numerical analysis of flow in a 2 jet Pelton turbine with horizontal axis. The analysis was done for the model at several operating points in different operating regimes. The results were compared to the results of a test of the model. Analysis was performed using ANSYS CFX-12.1 computer code. A k-ω SST turbulent model was used. Free surface flow was modelled by two-phase homogeneous model. At first, a steady state analysis of flow in the distributor with two injectors was performed for several needle strokes. This provided us with data on flow energy losses in the distributor and the shape and velocity of jets. The second step was an unsteady analysis of the runner with jets. Torque on the shaft was then calculated from pressure distribution data. Averaged torque values are smaller than measured ones. Consequently, calculated turbine efficiency is also smaller than the measured values, the difference is about 4 %. The shape of the efficiency diagram conforms well to the measurements.

  12. Genomic Prediction of Manganese Efficiency in Winter Barley

    Directory of Open Access Journals (Sweden)

    Florian Leplat

    2016-07-01

    Manganese efficiency is a quantitative abiotic stress trait controlled by several genes, each with a small effect. Manganese deficiency leads to yield reduction in winter barley (Hordeum vulgare L.). Breeding new cultivars for this trait remains difficult because of the lack of visual symptoms and the polygenic features of the trait. Hence, Mn efficiency is a potentially suitable trait for a genomic selection (GS) approach. A collection of 248 winter barley varieties was screened for Mn efficiency using chlorophyll (Chl) fluorescence in six environments prone to induce Mn deficiency. Two models for genomic prediction were implemented to predict future performance and breeding value of untested varieties. Predictions were obtained using multivariate mixed models: best linear unbiased predictor (BLUP) and genomic best linear unbiased predictor (G-BLUP). In the first model, predictions were based on the phenotypic evaluation, whereas both phenotypic and genomic marker data were included in the second model. Accuracy of predicting future phenotypes and accuracy of predicting true breeding values were calculated and compared for both models using six cross-validation (CV) schemes designed to mimic plant breeding programs. Overall, the CVs showed that prediction accuracies increased when using the G-BLUP model compared with the BLUP model. Furthermore, breeding values were predicted more accurately than future phenotypes. The study confirms that genomic data may enhance prediction accuracy and indicates that GS is a suitable breeding approach for quantitative abiotic stress traits.
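
    G-BLUP, as used in this record, amounts to ridge-type prediction with a genomic relationship matrix. A minimal sketch on simulated marker data follows (not the barley data); the variance-ratio shrinkage parameter lambda is an assumed value.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_ind, n_mark = 200, 1000

    # Simulated marker matrix (individuals x markers, coded -1/0/1) and phenotypes.
    M = rng.integers(-1, 2, size=(n_ind, n_mark)).astype(float)
    y = M[:, :25].sum(axis=1) + rng.normal(scale=2.0, size=n_ind)   # 25 causal markers + noise

    Z = M - M.mean(axis=0)               # centre the marker codes
    G = Z @ Z.T / n_mark                 # genomic relationship matrix (up to scaling)

    # G-BLUP: solve (G + lambda*I) alpha = y - mean(y); GEBVs are G @ alpha.
    lam = 1.0                            # residual-to-genetic variance ratio (assumed value)
    alpha = np.linalg.solve(G + lam * np.eye(n_ind), y - y.mean())
    gebv = G @ alpha                     # genomic estimated breeding values

    print("correlation of GEBV with phenotype:", round(np.corrcoef(gebv, y)[0, 1], 3))
    ```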

  13. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-03-16

    Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To-date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.
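
    The path-based scoring idea can be sketched with networkx on a toy heterogeneous graph. The edge weights, path-length cutoff and damping factor below are illustrative assumptions, not the published DASPfind parameters.

    ```python
    import networkx as nx

    # Toy graph mixing drug-drug similarity, target-target similarity and known DTIs.
    G = nx.Graph()
    G.add_edge("drugA", "drugB", weight=0.8)      # drug similarity (invented value)
    G.add_edge("targetX", "targetY", weight=0.6)  # target similarity (invented value)
    G.add_edge("drugB", "targetX", weight=1.0)    # known interaction
    G.add_edge("drugB", "targetY", weight=1.0)    # known interaction

    def path_score(graph, drug, target, cutoff=3, damping=0.5):
        """Sum damped products of edge weights over simple paths of up to `cutoff` edges."""
        score = 0.0
        for path in nx.all_simple_paths(graph, source=drug, target=target, cutoff=cutoff):
            w = 1.0
            for u, v in zip(path, path[1:]):
                w *= graph[u][v]["weight"]
            score += w * damping ** (len(path) - 2)   # longer paths contribute less
        return score

    # Rank candidate targets for a drug that has no known interactions of its own.
    for t in ("targetX", "targetY"):
        print(t, round(path_score(G, "drugA", t), 4))
    ```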

  14. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    There is increasing demand for improved service provisioning in hospital resource management. Hospitals work under strict budget constraints while assuring quality care; achieving both requires an efficient prediction model. Various time-series-based prediction models have recently been proposed to manage hospital resources such as ambulance monitoring and emergency care, but they are inefficient because they do not consider the nature of the scenario, such as climate conditions. Artificial intelligence has been adopted to address this; however, training of existing prediction models suffers from local-optima errors, which induce overhead and reduce prediction accuracy. To overcome the local-minima problem, this work presents a patient inflow prediction model based on a resilient backpropagation neural network. Experiments were conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcomes show that the proposed model reduces RMSE and MAPE compared with an existing backpropagation-based artificial neural network, improving the accuracy of prediction and thereby aiding the quality of health care management.

  15. Tools for Predicting Cleaning Efficiency in the LHC

    CERN Document Server

    Assmann, R W; Brugger, M; Hayes, M; Jeanneret, J B; Kain, V; Kaltchev, D I; Schmidt, F

    2003-01-01

    The computer codes SIXTRACK and DIMAD have been upgraded to include realistic models of proton scattering in collimator jaws, mechanical aperture restrictions, and time-dependent fields. These new tools complement long-existing simplified linear tracking programs used up to now for tracking with collimators. Scattering routines from STRUCT and K2 have been compared with one another and the results have been cross-checked to the FLUKA Monte Carlo package. A systematic error is assigned to the predictions of cleaning efficiency. Now, predictions of the cleaning efficiency are possible with a full LHC model, including chromatic effects, linear and nonlinear errors, beam-beam kicks and associated diffusion, and time-dependent fields. The beam loss can be predicted around the ring, both for regular and irregular beam losses. Examples are presented.

  16. Specialization does not predict individual efficiency in an ant.

    Directory of Open Access Journals (Sweden)

    Anna Dornhaus

    2008-11-01

    The ecological success of social insects is often attributed to an increase in efficiency achieved through division of labor between workers in a colony. Much research has therefore focused on the mechanism by which a division of labor is implemented, i.e., on how tasks are allocated to workers. However, the important assumption that specialists are indeed more efficient at their work than generalist individuals--the "Jack-of-all-trades is master of none" hypothesis--has rarely been tested. Here, I quantify worker efficiency, measured as work completed per time, in four different tasks in the ant Temnothorax albipennis: honey and protein foraging, collection of nest-building material, and brood transports in a colony emigration. I show that individual efficiency is not predicted by how specialized workers were on the respective task. Worker efficiency is also not consistently predicted by that worker's overall activity or delay to begin the task. Even when only the worker's rank relative to nestmates in the same colony was used, specialization did not predict efficiency in three out of the four tasks, and more specialized workers actually performed worse than others in the fourth task (collection of sand grains). I also show that the above relationships, as well as median individual efficiency, do not change with colony size. My results demonstrate that in an ant species without morphologically differentiated worker castes, workers may nevertheless differ in their ability to perform different tasks. Surprisingly, this variation is not utilized by the colony--worker allocation to tasks is unrelated to their ability to perform them. What, then, are the adaptive benefits of behavioral specialization, and why do workers choose tasks without regard for whether they can perform them well? We are still far from an understanding of the adaptive benefits of division of labor in social insects.

  17. DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail; Soufan, Othman; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.

    2016-01-01

    DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.

  18. FIRE BEHAVIOR PREDICTING MODELS EFFICIENCY IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Knowing how a wildfire will behave is extremely important in order to assist in fire suppression and prevention operations. Since the 1940s, mathematical models to estimate how a fire will behave have been developed worldwide; however, none of them, until now, had their efficiency tested in Brazilian commercial eucalypt plantations or in other vegetation types in the country. This study aims to verify the accuracy of the Rothermel (1972) fire spread model, the Byram (1959) flame length model, and the fire spread and length equations derived from the McArthur (1962) control burn meters. To meet these objectives, 105 experimental laboratory fires were conducted and their results compared with the predicted values from the models tested. The Rothermel and Byram models predicted better than McArthur's; nevertheless, all of them underestimated the fire behavior aspects evaluated and were statistically different from the experimental data.
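
    For reference, Byram's (1959) flame-length relation evaluated above is commonly cited in metric form as L = 0.0775 · I^0.46, with fireline intensity I in kW/m and flame length L in metres; the intensity values in the sketch are arbitrary examples.

    ```python
    def byram_flame_length(intensity_kw_per_m):
        """Byram (1959) flame length, commonly cited metric form: L = 0.0775 * I**0.46 (metres)."""
        return 0.0775 * intensity_kw_per_m ** 0.46

    for intensity in (100.0, 500.0, 2000.0):   # fireline intensity in kW/m (example values)
        print(f"I = {intensity:6.0f} kW/m  ->  flame length ~ {byram_flame_length(intensity):.2f} m")
    ```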

  19. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: a few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction; implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models; the MPC algorithms based on neural multi-models (inspired by the idea of predictive control); the MPC algorithms with neural approximation with no on-line linearization; the MPC algorithms with guaranteed stability and robustness; and cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  20. A Traffic Prediction Algorithm for Street Lighting Control Efficiency

    Directory of Open Access Journals (Sweden)

    POPA Valentin

    2013-01-01

    This paper presents the development of a traffic prediction algorithm that can be integrated in a street lighting monitoring and control system. The prediction algorithm must enable the reduction of energy costs and improve energy efficiency by decreasing the light intensity depending on the traffic level. The algorithm analyses and processes the information received at the command center based on the traffic level at different moments. The data is collected by means of the Doppler vehicle detection sensors integrated within the system. Thus, two methods are used for the implementation of the algorithm: a neural network and a k-NN (k-Nearest Neighbor) prediction algorithm. For 500 training cycles, the mean square error of the neural network is 9.766, and for 500,000 training cycles the error amounts to 0.877. In the case of the k-NN algorithm, the error increases from 8.24 for k=5 to 12.27 for 50 neighbors. In terms of the root mean square error parameter, the use of a neural network ensures the highest performance level and can be integrated in a street lighting control system.
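
    A k-NN regressor of the kind compared above can be set up in a few lines. The features (hour of day, day of week), the synthetic traffic counts and the dimming rule below are assumptions for illustration; the paper's inputs come from Doppler vehicle-detection sensors.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(3)

    # Synthetic history: traffic level as a function of hour of day and day of week.
    hours = rng.integers(0, 24, 3000)
    days = rng.integers(0, 7, 3000)
    traffic = 10 + 90 * np.sin(np.pi * hours / 24) + 10 * (days < 5) + rng.normal(0, 5, 3000)

    X = np.column_stack([hours, days])
    knn = KNeighborsRegressor(n_neighbors=5).fit(X, traffic)

    # The predicted traffic level drives the dimming decision at the command centre.
    level = knn.predict([[2, 6]])[0]           # 02:00 on a Sunday
    dim_to = 30 if level < 40 else 100         # percentage of full lamp intensity (toy rule)
    print(f"predicted traffic {level:.1f} -> set lamps to {dim_to}%")
    ```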

  1. Prediction of Protein Thermostability by an Efficient Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Jalal Rezaeenour

    2016-10-01

    significantly improves the accuracy of ELM in prediction of thermostable enzymes. ELM tends to require more neurons in the hidden-layer than conventional tuning-based learning algorithms. To overcome these, the proposed approach uses a GA which optimizes the structure and the parameters of the ELM. In summary, optimization of ELM with GA results in an efficient prediction method; numerical experiments proved that our approach yields excellent results.
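
    An extreme learning machine of the kind being optimized here can be sketched in a few lines of NumPy: random hidden-layer weights, a sigmoid activation, and output weights fitted by least squares. The hidden-layer size and the synthetic data are placeholders; in the paper, such choices are what the genetic algorithm tunes.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def elm_fit(X, y, n_hidden=40):
        """Basic ELM: random input weights, sigmoid hidden layer, least-squares output weights."""
        W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
        b = rng.normal(size=n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer activations
        beta = np.linalg.pinv(H) @ y                  # Moore-Penrose least-squares solution
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

    # Toy binary problem standing in for thermostable vs. non-thermostable feature vectors.
    X = rng.normal(size=(300, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    W, b, beta = elm_fit(X[:200], y[:200])
    pred = (elm_predict(X[200:], W, b, beta) > 0.5).astype(float)
    print("test accuracy:", (pred == y[200:]).mean())
    # A GA, as in the paper, would search over n_hidden and the random weights W and b.
    ```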

  2. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    The Joint Collaborative Team on Video Coding (JCT-VC) is developing the next-generation video coding standard, called High Efficiency Video Coding (HEVC). In HEVC, there are three units in the block structure: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU performs recursive splitting into four blocks of equal size, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For the 2N×2N PU, the proposed method compares the rate-distortion (RD) cost and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time-saving factor of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed without significant loss of image quality.

  3. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    Science.gov (United States)

    Růžek, B.; Kolář, P.

    2009-04-01

    The solution of inverse problems represents a meaningful task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computer facilities are making great technical progress. Therefore the development of new and efficient algorithms and computer codes for both forward and inverse modeling is still relevant. ANNIT contributes to this stream, since it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p, the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and generally it may not exist at all. Technically, both forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, resp., are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made by using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good
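
    Step (iii) above, predicting a candidate solution from a numerical approximation of the inverse mapping, can be illustrated with SciPy's RBF interpolator. The two-parameter forward model below is invented for the sketch and is not a geophysical forward problem.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def forward(p):
        """Hypothetical forward mapping F(p) = d for a 2-parameter model (not geophysical)."""
        x, y = p[..., 0], p[..., 1]
        return np.stack([x ** 2 + y, np.sin(x) + 0.5 * y ** 2], axis=-1)

    rng = np.random.default_rng(0)

    # (i) sample a population of models covering the model space and evaluate F on it
    P = rng.uniform(-2.0, 2.0, size=(400, 2))
    D = forward(P)

    # (ii) approximate the inverse mapping d -> p with radial basis functions
    inverse_rbf = RBFInterpolator(D, P, smoothing=1e-6)

    # (iii) predict a candidate model for an observed data vector
    d_obs = forward(np.array([0.7, -1.2]))
    p_candidate = inverse_rbf(d_obs[None, :])[0]
    print("candidate model:", p_candidate,
          "data residual:", np.linalg.norm(forward(p_candidate) - d_obs))
    ```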

  4. Predicting the efficiency of deposit removal during filter backwash

    African Journals Online (AJOL)

    The long-term performance of granular media filters used in drinking water treatment is ultimately limited by the efficiency ... efficiency) within each set of experiments appeared to affect the efficiency of backwash in addition to the parameters varied ... mechanisms involved in filter cleaning (Amirtharajah, 1978; ...).

  5. Efficient buyer groups for prediction-of-use electricity tariffs

    OpenAIRE

    Robu, V; Vinyals, M; Rogers, A; Jennings, NR

    2014-01-01

    Copyright © 2014, Association for the Advancement of Artificial Intelligence. Current electricity tariffs do not reflect the real cost that customers incur to suppliers, as units are charged at the same rate, regardless of how predictable each customer's consumption is. A recent proposal to address this problem is prediction-of-use tariffs. In such tariffs, a customer is asked in advance to predict her future consumption, and is charged based both on her actual consumption and the deviation fr...

  6. An efficient attack identification and risk prediction algorithm for ...

    African Journals Online (AJOL)

    Social media is a highly utilized cloud platform for storing huge amounts of data. ... However, the adversarial scenario was not designed properly to maintain the privacy of the ... Information Retrieval, Security Evaluation, Efficient Attack Identification and ...

  7. An Efficient Deterministic Approach to Model-based Prediction Uncertainty

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the...

  8. Two-phased DEA-MLA approach for predicting efficiency of NBA players

    Directory of Open Access Journals (Sweden)

    Radovanović Sandro

    2014-01-01

    In sports, the calculation of efficiency is considered to be one of the most challenging tasks. In this paper, DEA is used to evaluate the efficiency of NBA players, based on multiple inputs and multiple outputs. The efficiency is evaluated for 26 NBA players at the guard position based on existing data. However, if we want to generate the efficiency for a new player, we would have to re-run the DEA analysis. Therefore, to predict the efficiency of a new player, machine learning algorithms are applied. The DEA results are incorporated as an input for the learning algorithms, thereby defining an efficiency frontier function form with high reliability. In this paper, linear regression, a neural network, and support vector machines are used to predict the efficiency frontier. The results have shown that neural networks can predict the efficiency with an error of less than 1%, and linear regression with an error of less than 2%.
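
    The DEA stage can be reproduced with an input-oriented CCR model solved as a linear program; the toy inputs (minutes, field-goal attempts) and outputs (points, assists) below are invented. In the paper, the resulting scores then become training targets for the regression and neural-network models.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Rows = players (DMUs). X = inputs (minutes, FG attempts), Y = outputs (points, assists).
    X = np.array([[34.0, 18.0], [30.0, 15.0], [36.0, 20.0], [28.0, 12.0]])
    Y = np.array([[22.0,  6.0], [18.0,  7.0], [25.0,  4.0], [15.0,  5.0]])

    def ccr_efficiency(X, Y, j0):
        """Input-oriented CCR efficiency of DMU j0 (multiplier form, solved as an LP)."""
        n, m = X.shape
        s = Y.shape[1]
        # Decision variables: output weights u (s of them), then input weights v (m), all >= 0.
        c = np.concatenate([-Y[j0], np.zeros(m)])             # maximise u.y0 -> minimise -u.y0
        A_ub = np.hstack([Y, -X])                             # u.y_j - v.x_j <= 0 for every DMU j
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[j0]])[None, :]  # v.x0 = 1 (normalisation)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
        return -res.fun                                       # efficiency score in (0, 1]

    scores = [ccr_efficiency(X, Y, j) for j in range(len(X))]
    print([round(s, 3) for s in scores])
    # A regressor trained on (inputs, outputs) -> score can then predict new players' efficiency.
    ```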

  9. MULTIPLE LINEAR REGRESSION ANALYSIS FOR PREDICTION OF BOILER LOSSES AND BOILER EFFICIENCY

    OpenAIRE

    Chayalakshmi C.L

    2018-01-01

    Calculation of boiler efficiency is essential if its parameters need to be controlled for either maintaining or enhancing its efficiency. However, determination of boiler efficiency using conventional methods is time consuming and very expensive. Hence, it is not recommended to determine boiler efficiency frequently. The work presented in this paper deals with establishing the statistical mo...

  10. An efficient link prediction index for complex military organization

    Science.gov (United States)

    Fan, Changjun; Liu, Zhong; Lu, Xin; Xiu, Baoxin; Chen, Qing

    2017-03-01

    Quality of information is crucial for decision-makers to judge battlefield situations and design the best operation plans; however, real intelligence data are often incomplete and noisy. Missing-link prediction methods and spurious-link identification algorithms can be applied if the complex military organization is modeled as a complex network in which nodes represent functional units and edges denote communication links. Traditional link prediction methods usually work well on homogeneous networks, but few do for heterogeneous ones, and a military network is a typical heterogeneous network with different types of nodes and edges. In this paper, we propose a combined link prediction index considering both node-type effects and node structural similarities, and demonstrate that it is remarkably superior to all 25 existing similarity-based methods, both in predicting missing links and in identifying spurious links, on a real military network dataset. We also investigate the algorithms' robustness under a noisy environment and find that mistaken information is more misleading than incomplete information in military settings, which differs from recommendation systems; our method maintained the best performance under conditions of small noise. Since real military network intelligence must be carefully checked at first, owing to its significance, and link prediction methods are then adopted to purify the network of the remaining latent noise, the method proposed here is applicable in real situations. Finally, as the FINC-E model, used here to describe complex military organizations, also suits many other social organizations, such as criminal networks and business organizations, our method has prospects in these areas for tasks such as detecting underground relationships between terrorists and predicting potential business markets for decision-makers.
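
    One way to combine structural similarity with node-type information, in the spirit of the index described above, is a type-weighted common-neighbours score. The node types and compatibility weights below are illustrative assumptions, not the published index.

    ```python
    import networkx as nx

    # Toy organisation: C = command, I = intelligence, F = force units (types are illustrative).
    G = nx.Graph()
    G.add_nodes_from([("C1", {"kind": "C"}), ("I1", {"kind": "I"}), ("I2", {"kind": "I"}),
                      ("F1", {"kind": "F"}), ("F2", {"kind": "F"})])
    G.add_edges_from([("C1", "I1"), ("C1", "I2"), ("I1", "F1"), ("I2", "F1"), ("I2", "F2")])

    # Assumed plausibility of a link between two node types, used to weight shared neighbours.
    TYPE_WEIGHT = {frozenset(("C", "I")): 1.0, frozenset(("I", "F")): 0.8,
                   frozenset(("C", "F")): 0.3, frozenset(("C",)): 0.5,
                   frozenset(("I",)): 0.6, frozenset(("F",)): 0.4}

    def typed_cn_score(g, u, v):
        """Common-neighbour count scaled by how plausible a u-v link is for their node types."""
        common = len(set(g[u]) & set(g[v]))
        w = TYPE_WEIGHT[frozenset((g.nodes[u]["kind"], g.nodes[v]["kind"]))]
        return w * common

    # Score every non-edge; the highest scores are candidate missing links.
    candidates = [(u, v, typed_cn_score(G, u, v)) for u, v in nx.non_edges(G)]
    for u, v, s in sorted(candidates, key=lambda t: -t[2]):
        print(u, v, round(s, 2))
    ```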

  11. Efficient and Invariant Convolutional Neural Networks for Dense Prediction

    OpenAIRE

    Gao, Hongyang; Ji, Shuiwang

    2017-01-01

    Convolutional neural networks have shown great success on feature extraction from raw input data such as images. Although convolutional neural networks are invariant to translations on the inputs, they are not invariant to other transformations, including rotation and flip. Recent attempts have been made to incorporate more invariance in image recognition applications, but they are not applicable to dense prediction tasks, such as image segmentation. In this paper, we propose a set of methods...

  12. Predicting Efficient Antenna Ligands for Tb(III) Emission

    Energy Technology Data Exchange (ETDEWEB)

    Samuel, Amanda P.S.; Xu, Jide; Raymond, Kenneth

    2008-10-06

    A series of highly luminescent Tb(III) complexes of para-substituted 2-hydroxyisophthalamide ligands (5LI-IAM-X) has been prepared (X = H, CH₃, (C=O)NHCH₃, SO₃⁻, NO₂, OCH₃, F, Cl, Br) to probe the effect of substituting the isophthalamide ring on ligand and Tb(III) emission, in order to establish a method for predicting the effects of chromophore modification on Tb(III) luminescence. The energies of the ligand singlet and triplet excited states are found to increase linearly with the π-withdrawing ability of the substituent. The experimental results are supported by time-dependent density functional theory (TD-DFT) calculations performed on model systems, which predict ligand singlet and triplet energies within ≈5% of the experimental values. The quantum yield (Φ) values of the Tb(III) complexes increase with the triplet energy of the ligand, which is in part due to the decreased non-radiative deactivation caused by thermal repopulation of the triplet. Together, the experimental and theoretical results serve as a predictive tool that can be used to guide the synthesis of ligands used to sensitize lanthanide luminescence.

  13. Hurst exponent and prediction based on weak-form efficient market hypothesis of stock markets

    Science.gov (United States)

    Eom, Cheoljun; Choi, Sunghoon; Oh, Gabjin; Jung, Woo-Sung

    2008-07-01

    We empirically investigated the relationships between the degree of efficiency and the predictability in financial time-series data. The Hurst exponent was used as the measurement of the degree of efficiency, and the hit rate calculated from the nearest-neighbor prediction method was used for the prediction of the directions of future price changes. We used 60 market indexes of various countries. We empirically discovered that the relationship between the degree of efficiency (the Hurst exponent) and the predictability (the hit rate) is strongly positive. That is, a market index with a higher Hurst exponent tends to have a higher hit rate. These results suggested that the Hurst exponent is useful for predicting future price changes. Furthermore, we also discovered that the Hurst exponent and the hit rate are useful as standards that can distinguish emerging capital markets from mature capital markets.
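
    The hit rate used here, the fraction of steps on which a nearest-neighbour forecast gets the direction of the next price change right, can be computed as below. The embedding length and the synthetic return series are assumptions for illustration.

    ```python
    import numpy as np

    def nn_direction_hit_rate(returns, embed=5):
        """Predict the sign of the next return from the single nearest past pattern."""
        r = np.asarray(returns, dtype=float)
        hits, total = 0, 0
        for t in range(2 * embed, len(r)):
            query = r[t - embed:t]
            # library of all earlier patterns of the same length and their next-step returns
            past = np.array([r[s - embed:s] for s in range(embed, t)])
            nxt = r[embed:t]
            nearest = np.argmin(np.linalg.norm(past - query, axis=1))
            predicted, actual = np.sign(nxt[nearest]), np.sign(r[t])
            if actual != 0:
                hits += int(predicted == actual)
                total += 1
        return hits / total

    rng = np.random.default_rng(0)
    print("hit rate on i.i.d. returns:", round(nn_direction_hit_rate(rng.normal(size=600)), 3))
    ```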

  14. Efficient predictive model-based and fuzzy control for green urban mobility

    NARCIS (Netherlands)

    Jamshidnejad, A.

    2017-01-01

    In this thesis, we develop efficient predictive model-based control approaches, including model-predictive control (MPC) andmodel-based fuzzy control, for application in urban traffic networks with the aim of reducing a combination of the total time spent by the vehicles within the network and the

  15. Wind Turbine Generator Efficiency Based on Powertrain Combination and Annual Power Generation Prediction

    Directory of Open Access Journals (Sweden)

    Dongmyung Kim

    2018-05-01

    Wind turbine generators are eco-friendly generators that produce electric energy using wind energy. In this study, wind turbine generator efficiency is examined using a powertrain combination and annual power generation prediction by employing an analysis model. Performance testing was conducted in order to analyze the efficiency of a hydraulic pump and a motor, which are key components, and to verify the analysis model. The annual wind speed occurrence frequency for the expected installation areas was used to predict the annual power generation of the wind turbine generators. It was found that the parallel combination of the induction motors exhibited a higher efficiency when the wind speed was low and the serial combination showed higher efficiency when the wind speed was high. The results of predicting the annual power generation considering the regional characteristics showed that the power generation was the highest when the hydraulic motors were designed in parallel and the induction motors were designed in series.
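
    An annual-generation prediction of this kind combines the site's wind-speed frequency distribution with the generator's power curve. The Weibull parameters and the simplified power curve below are invented placeholders, not the paper's measured data.

    ```python
    import numpy as np

    # Assumed site statistics: Weibull wind-speed distribution (shape k, scale c in m/s).
    k_shape, c_scale = 2.0, 7.5
    v = np.linspace(0.0, 30.0, 301)                  # wind-speed bins (m/s)
    pdf = (k_shape / c_scale) * (v / c_scale) ** (k_shape - 1) * np.exp(-(v / c_scale) ** k_shape)

    def power_curve(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_kw=100.0):
        """Simplified turbine power curve: cubic ramp between cut-in and rated speed."""
        p = np.where((v >= cut_in) & (v < rated_v),
                     rated_kw * ((v - cut_in) / (rated_v - cut_in)) ** 3, 0.0)
        return np.where((v >= rated_v) & (v <= cut_out), rated_kw, p)

    # Expected power = sum of P(v) * f(v) * dv; annual energy = expected power * 8760 h.
    dv = v[1] - v[0]
    expected_kw = float(np.sum(power_curve(v) * pdf) * dv)
    print(f"expected output ~ {expected_kw:.1f} kW, annual generation ~ {expected_kw * 8760:.0f} kWh")
    ```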

  16. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC such as: ·         the implementation of ...

  17. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    Science.gov (United States)

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible in trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80 % space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to profit even stronger than "standard" MFE folding. SparseMFEFold is free
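
    For orientation, the recursion being sparsified is a dynamic program over subsequences; the classic, non-sparsified Nussinov base-pair maximization below shows the cubic-time, quadratic-space structure that candidate lists and trace arrows then compress. It is a simplified stand-in, not the Turner-energy MFE recursion of SparseMFEFold.

    ```python
    def nussinov_pairs(seq, min_loop=3):
        """O(n^3)-time / O(n^2)-space base-pair maximisation (the non-sparsified baseline)."""
        pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
        n = len(seq)
        dp = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):          # distance j - i
            for i in range(n - span):
                j = i + span
                best = dp[i][j - 1]                  # case: j unpaired
                for k in range(i, j - min_loop):     # case: j paired with some k
                    if (seq[k], seq[j]) in pairs:
                        left = dp[i][k - 1] if k > i else 0
                        best = max(best, left + dp[k + 1][j - 1] + 1)
                dp[i][j] = best
        return dp[0][n - 1]

    print(nussinov_pairs("GGGAAAUCC"))   # small example sequence
    ```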

  18. Energy-Efficient Integration of Continuous Context Sensing and Prediction into Smartwatches

    Directory of Open Access Journals (Sweden)

    Reza Rawassizadeh

    2015-09-01

    As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which makes it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access and demonstrate an increase in accuracy of prediction through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras.

  19. Energy-Efficient Integration of Continuous Context Sensing and Prediction into Smartwatches.

    Science.gov (United States)

    Rawassizadeh, Reza; Tomitsch, Martin; Nourizadeh, Manouchehr; Momeni, Elaheh; Peery, Aaron; Ulanova, Liudmila; Pazzani, Michael

    2015-09-08

    As the availability and use of wearables increases, they are becoming a promising platform for context sensing and context analysis. Smartwatches are a particularly interesting platform for this purpose, as they offer salient advantages, such as their proximity to the human body. However, they also have limitations associated with their small form factor, such as processing power and battery life, which makes it difficult to simply transfer smartphone-based context sensing and prediction models to smartwatches. In this paper, we introduce an energy-efficient, generic, integrated framework for continuous context sensing and prediction on smartwatches. Our work extends previous approaches for context sensing and prediction on wrist-mounted wearables that perform predictive analytics outside the device. We offer a generic sensing module and a novel energy-efficient, on-device prediction module that is based on a semantic abstraction approach to convert sensor data into meaningful information objects, similar to human perception of a behavior. Through six evaluations, we analyze the energy efficiency of our framework modules, identify the optimal file structure for data access and demonstrate an increase in accuracy of prediction through our semantic abstraction method. The proposed framework is hardware independent and can serve as a reference model for implementing context sensing and prediction on small wearable devices beyond smartwatches, such as body-mounted cameras.

  20. Predictability of Exchange Rates in Sri Lanka: A Test of the Efficient Market Hypothesis

    OpenAIRE

    Guneratne B Wickremasinghe

    2007-01-01

    This study examined the validity of the weak and semi-strong forms of the efficient market hypothesis (EMH) for the foreign exchange market of Sri Lanka. Monthly exchange rates for four currencies during the floating exchange rate regime were used in the empirical tests. Using a battery of tests, empirical results indicate that the current values of the four exchange rates can be predicted from their past values. Further, the tests of semi-strong form efficiency indicate that exchange rate pa...

  1. Computational Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste forms

    International Nuclear Information System (INIS)

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-01-01

    This study evaluated different upscaling methods to predict thermal conductivity in loaded nuclear waste forms, a heterogeneous material system, and compared their efficiency and accuracy. Thermal conductivity in a loaded nuclear waste form is an important property for the waste form Integrated Performance and Safety Code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivity of the different components, is critical in predicting the life and performance of the waste form during storage: the heat generated during storage is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance and aging performance. Several methods, including the Taylor model, Sachs model, self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, prediction results from the finite element method (FEM) were used as a reference to determine the accuracy of the different upscaling models, and micrographs from different waste loadings were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the boundary models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste forms.
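
    For thermal conductivity, the boundary models referred to above reduce to simple volume-fraction averages: the Taylor (Voigt, arithmetic) and Sachs (Reuss, harmonic) estimates. The phase fractions and conductivities below are placeholders, not waste-form data.

    ```python
    def taylor_bound(fractions, conductivities):
        """Arithmetic (Voigt/Taylor) mixture estimate of effective conductivity."""
        return sum(f * k for f, k in zip(fractions, conductivities))

    def sachs_bound(fractions, conductivities):
        """Harmonic (Reuss/Sachs) mixture estimate of effective conductivity."""
        return 1.0 / sum(f / k for f, k in zip(fractions, conductivities))

    # Hypothetical two-phase waste form: 70 % glass matrix, 30 % loaded waste phase (W/m/K).
    fractions = [0.7, 0.3]
    conductivities = [1.1, 0.4]
    print("Taylor (upper) estimate:", round(taylor_bound(fractions, conductivities), 3))
    print("Sachs (lower) estimate: ", round(sachs_bound(fractions, conductivities), 3))
    ```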

  2. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
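
    The unscented transform itself is compact: propagate 2n+1 sigma points through the nonlinear function and recombine them with fixed weights to approximate the output mean and covariance. The EOL-style mapping and state statistics below are made up for illustration.

    ```python
    import numpy as np

    def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
        """Approximate mean/covariance of f(x) for x ~ N(mean, cov) from 2n+1 sigma points."""
        n = len(mean)
        lam = alpha ** 2 * (n + kappa) - n
        sqrt_cov = np.linalg.cholesky((n + lam) * cov)
        sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])   # (2n+1, n)
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
        y = np.array([f(s) for s in sigma])
        y_mean = wm @ y
        y_cov = sum(w * np.outer(yi - y_mean, yi - y_mean) for w, yi in zip(wc, y))
        return y_mean, y_cov

    # Hypothetical "EOL" mapping from a 2-D state (e.g. wear level, load) to remaining life.
    def eol(x):
        return np.array([100.0 / (1.0 + x[0]) - 2.0 * x[1]])

    mu = np.array([0.3, 1.5])          # current state estimate (made up)
    P = np.diag([0.01, 0.04])          # its covariance (made up)
    m, v = unscented_transform(mu, P, eol)
    print("predicted EOL mean:", m, "variance:", v)
    ```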

  3. Methodology for predicting market transformation due to implementation of energy efficiency standards and labels

    International Nuclear Information System (INIS)

    Mahlia, T.M.I.

    2004-01-01

    There are many papers that have been published on energy efficiency standards and labels. However, a very limited number of articles on the subject have discussed the transformation of appliance energy efficiency in the market after the programs are implemented. This paper is an attempt to investigate the market transformation due to implementation of minimum energy efficiency standards and energy labels. Even though the paper only investigates room air conditioners as a case study, the method is also applicable for predicting market transformation for other household electrical appliances

  4. Prediction and design of efficient exciplex emitters for high-efficiency, thermally activated delayed-fluorescence organic light-emitting diodes.

    Science.gov (United States)

    Liu, Xiao-Ke; Chen, Zhan; Zheng, Cai-Jun; Liu, Chuan-Lin; Lee, Chun-Sing; Li, Fan; Ou, Xue-Mei; Zhang, Xiao-Hong

    2015-04-08

    High-efficiency, thermally activated delayed-fluorescence organic light-emitting diodes based on exciplex emitters are demonstrated. The best device, based on a TAPC:DPTPCz emitter, shows a high external quantum efficiency of 15.4%. Strategies for predicting and designing efficient exciplex emitters are also provided. This approach allows the prediction and design of efficient exciplex emitters for achieving high-efficiency organic light-emitting diodes, for future use in displays and lighting applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models

    International Nuclear Information System (INIS)

    Jošt, D; Škerlavaj, A; Lipej, A

    2012-01-01

    Numerical prediction of an efficiency of a 6-blade Kaplan turbine is presented. At first, the results of steady state analysis performed by different turbulence models for different operating regimes are compared to the measurements. For small and optimal angles of runner blades the efficiency was quite accurately predicted, but for maximal blade angle the discrepancy between calculated and measured values was quite large. By transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the efficiency was significantly improved. The improvement was at all operating points, but it was the largest for maximal discharge. The reason was better flow simulation in the draft tube. Details about turbulent structure in the draft tube obtained by SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the reasons for differences in flow energy losses obtained by different turbulence models.

  6. Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models

    Science.gov (United States)

    Jošt, D.; Škerlavaj, A.; Lipej, A.

    2012-11-01

    Numerical prediction of an efficiency of a 6-blade Kaplan turbine is presented. At first, the results of steady state analysis performed by different turbulence models for different operating regimes are compared to the measurements. For small and optimal angles of runner blades the efficiency was quite accurately predicted, but for maximal blade angle the discrepancy between calculated and measured values was quite large. By transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the efficiency was significantly improved. The improvement was at all operating points, but it was the largest for maximal discharge. The reason was better flow simulation in the draft tube. Details about turbulent structure in the draft tube obtained by SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the reasons for differences in flow energy losses obtained by different turbulence models.

  7. Automated Irrigation System using Weather Prediction for Efficient Usage of Water Resources

    Science.gov (United States)

    Susmitha, A.; Alakananda, T.; Apoorva, M. L.; Ramesh, T. K.

    2017-08-01

    In agriculture the major problem which farmers face is the water scarcity, so to improve the usage of water one of the irrigation system using drip irrigation which is implemented is “Automated irrigation system with partition facility for effective irrigation of small scale farms” (AISPF). But this method has some drawbacks which can be improved and here we are with a method called “Automated irrigation system using weather prediction for efficient usage of water resources’ (AISWP), it solves the shortcomings of AISPF process. AISWP method helps us to use the available water resources more efficiently by sensing the moisture present in the soil and apart from that it is actually predicting the weather by sensing two parameters temperature and humidity thereby processing the measured values through an algorithm and releasing the water accordingly which is an added feature of AISWP so that water can be efficiently used.

  8. Design of artificial neural networks using a genetic algorithm to predict collection efficiency in venturi scrubbers.

    Science.gov (United States)

    Taheri, Mahboobeh; Mohebbi, Ali

    2008-08-30

    In this study, a new approach for the auto-design of neural networks, based on a genetic algorithm (GA), has been used to predict collection efficiency in venturi scrubbers. The experimental input data, including particle diameter, throat gas velocity, liquid to gas flow rate ratio, throat hydraulic diameter, pressure drop across the venturi scrubber and collection efficiency as an output, have been used to create a GA-artificial neural network (ANN) model. The testing results from the model are in good agreement with the experimental data. Comparison of the results of the GA optimized ANN model with the results from the trial-and-error calibrated ANN model indicates that the GA-ANN model is more efficient. Finally, the effects of operating parameters such as liquid to gas flow rate ratio, throat gas velocity, and particle diameter on collection efficiency were determined.

  9. Supplementary Material for: DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail

    2016-01-01

    Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To-date, many computational methods have been proposed for this purpose, but they suffer the drawback of a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery

  10. An efficient ray tracing method for propagation prediction along a mobile route in urban environments

    Science.gov (United States)

    Hussain, S.; Brennan, C.

    2017-07-01

    This paper presents an efficient ray tracing algorithm for propagation prediction in urban environments. The work presented in this paper builds upon previous work in which the maximum coverage area where rays can propagate after interaction with a wall or vertical edge is described by a lit polygon. The shadow regions formed by buildings within the lit polygon are described by shadow polygons. In this paper, the lit polygons of images are mapped to a coarse grid superimposed over the coverage area. This mapping reduces the active image tree significantly for a given receiver point to accelerate the ray finding process. The algorithm also presents an efficient method of quickly determining the valid ray segments for a mobile receiver moving along a linear trajectory. The validation results show considerable computation time reduction with good agreement between the simulated and measured data for propagation prediction in large urban environments.

  11. Reduction efficiency prediction of CENIBRA's recovery boiler by direct minimization of Gibbs free energy

    Directory of Open Access Journals (Sweden)

    W. L. Silva

    2008-09-01

    Full Text Available The reduction efficiency is an important variable in the black liquor burning process in the Kraft recovery boiler. Its value is obtained through slow experimental routines, and the delay in this measurement disturbs the customary control of the pulp and paper industry. This paper describes an optimization approach for determining the reduction efficiency in the furnace bottom of the recovery boiler, based on the minimization of the Gibbs free energy. The industrial data used in this study were obtained directly from CENIBRA's data acquisition system. The resulting approach is able to predict the steady-state behavior of the chemical composition of the recovery-boiler furnace, especially the reduction efficiency under different operational conditions. This result confirms the potential of this approach in the analysis of the daily operation of the recovery boiler.
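    A generic version of the underlying technique, constrained minimization of the Gibbs free energy, can be sketched in a few lines of Python with scipy. The species set, dimensionless chemical potentials and feed composition below are hypothetical placeholders rather than the recovery-boiler chemistry used in the paper.

      # Hedged sketch: ideal-gas Gibbs free-energy minimization with element balances,
      # solved by SLSQP. Species, potentials and feed are made-up placeholders.
      import numpy as np
      from scipy.optimize import minimize

      species = ["CO", "CO2", "H2", "H2O"]
      mu0_RT = np.array([-10.0, -20.0, 0.0, -15.0])      # dimensionless mu_i^0 / RT (hypothetical)
      # element balance matrix (rows: C, O, H), columns follow `species`
      A = np.array([[1, 1, 0, 0],      # carbon
                    [1, 2, 0, 1],      # oxygen
                    [0, 0, 2, 2]])     # hydrogen
      n_feed = np.array([1.0, 0.0, 1.0, 1.0])            # initial moles
      b = A @ n_feed                                     # conserved element totals

      def gibbs(n):
          n = np.clip(n, 1e-10, None)                    # keep logarithms finite
          return float(np.sum(n * (mu0_RT + np.log(n / n.sum()))))

      res = minimize(gibbs, n_feed, method="SLSQP",
                     bounds=[(1e-10, None)] * len(species),
                     constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])

      for name, moles in zip(species, res.x):
          print(f"{name}: {moles:.4f} mol")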

  12. Oxygen uptake efficiency slope and peak oxygen consumption predict prognosis in children with tetralogy of Fallot.

    Science.gov (United States)

    Tsai, Yun-Jeng; Li, Min-Hui; Tsai, Wan-Jung; Tuan, Sheng-Hui; Liao, Tin-Yun; Lin, Ko-Long

    2016-07-01

    Oxygen uptake efficiency slope (OUES) and peak oxygen consumption (VO2peak) are exercise parameters that can predict cardiac morbidity in patients with numerous heart diseases. But the predictive value in patients with tetralogy of Fallot is still undetermined, especially in children. We evaluated the prognostic value of OUES and VO2peak in children with total repair of tetralogy of Fallot. Retrospective cohort study. Forty tetralogy of Fallot patients younger than 12 years old were recruited. They underwent a cardiopulmonary exercise test during the follow-up period after total repair surgery. The results of the cardiopulmonary exercise test were used to predict the cardiac related hospitalization in the following two years after the test. OUES normalized by body surface area (OUES/BSA) and the percentage of predicted VO2peak appeared to be predictive for two-year cardiac related hospitalization. Receiver operating characteristic curve analysis demonstrated that the best threshold value for OUES/BSA was 1.029 (area under the curve = 0.70, p = 0.03), and for VO2peak was 74% of age prediction (area under the curve = 0.72, p = 0.02). The aforementioned findings were confirmed by Kaplan-Meier plots and log-rank test. OUES/BSA and VO2peak are useful predictors of cardiac-related hospitalization in children with total repair of tetralogy of Fallot. © The European Society of Cardiology 2015.
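    For readers unfamiliar with the parameter, OUES is commonly obtained as the slope of oxygen uptake regressed on the base-10 logarithm of minute ventilation. The short Python sketch below illustrates that calculation on synthetic test data; the numbers and the body-surface-area value are illustrative assumptions only.

      # Hedged sketch: OUES as the slope a in VO2 = a*log10(VE) + b; data are synthetic.
      import numpy as np

      def oues(ve_l_min, vo2_ml_min):
          """Slope of VO2 (mL/min) versus log10 of minute ventilation (L/min)."""
          slope, _intercept = np.polyfit(np.log10(ve_l_min), vo2_ml_min, deg=1)
          return slope

      ve = np.linspace(10, 80, 60)                              # VE ramp during exercise
      rng = np.random.default_rng(1)
      vo2 = 1400 * np.log10(ve) + 150 + rng.normal(0, 30, ve.size)
      body_surface_area = 1.1                                   # m^2, hypothetical child
      print("OUES     ≈", round(oues(ve, vo2), 1))
      print("OUES/BSA ≈", round(oues(ve, vo2) / body_surface_area, 1))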

  13. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    Science.gov (United States)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variation of experimental data sets, such as UH60-A data, DNW test data and HART II test data.

  14. Real-time prediction models for output power and efficiency of grid-connected solar photovoltaic systems

    International Nuclear Information System (INIS)

    Su, Yan; Chan, Lai-Cheong; Shu, Lianjie; Tsui, Kwok-Leung

    2012-01-01

    Highlights: ► We develop online prediction models for solar photovoltaic system performance. ► The proposed prediction models are simple but with reasonable accuracy. ► The maximum monthly average minutely efficiency varies 10.81–12.63%. ► The average efficiency tends to be slightly higher in winter months. - Abstract: This paper develops new real time prediction models for output power and energy efficiency of solar photovoltaic (PV) systems. These models were validated using measured data of a grid-connected solar PV system in Macau. Both time frames based on yearly average and monthly average are considered. It is shown that the prediction model for the yearly/monthly average of the minutely output power fits the measured data very well with a high value of R². The online prediction model for system efficiency is based on the ratio of the predicted output power to the predicted solar irradiance. This ratio model is shown to be able to fit the intermediate phase (9 am to 4 pm) very well but is not accurate for the growth and decay phases where the system efficiency is near zero. However, it can still serve a useful purpose for practitioners as most PV systems work in the most efficient manner over this period. It is shown that the maximum monthly average minutely efficiency varies over a small range of 10.81% to 12.63% in different months with slightly higher efficiency in winter months.

  15. Toward an Efficient Prediction of Solar Flares: Which Parameters, and How?

    Directory of Open Access Journals (Sweden)

    Manolis K. Georgoulis

    2013-11-01

    Full Text Available Solar flare prediction has become a forefront topic in contemporary solar physics, with numerous published methods relying on numerous predictive parameters that can even be divided into parameter classes. Attempting further insight, we focus on two popular classes of flare-predictive parameters, namely multiscale (i.e., fractal and multifractal) and proxy (i.e., morphological) parameters, and we complement our analysis with a study of the predictive capability of fundamental physical parameters (i.e., magnetic free energy and relative magnetic helicity). Rather than applying the studied parameters to a comprehensive statistical sample of flaring and non-flaring active regions, which was the subject of our previous studies, the novelty of this work is their application to an exceptionally long and high-cadence time series of the intensely eruptive National Oceanic and Atmospheric Administration (NOAA) active region (AR) 11158, observed by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. Aiming for a detailed study of the temporal evolution of each parameter, we seek distinctive patterns that could be associated with the four largest flares in the AR in the course of its five-day observing interval. We find that proxy parameters only tend to show preflare impulses that are practical enough to warrant subsequent investigation with sufficient statistics. Combining these findings with previous results, we conclude that: (i) carefully constructed, physically intuitive proxy parameters may be our best asset toward efficient future flare-forecasting; and (ii) the time series of promising parameters may be as important as their instantaneous values. Value-based prediction is the only approach followed so far. Our results call for novel signal and/or image processing techniques to efficiently utilize combined amplitude and temporal-profile information to optimize the inferred solar-flare probabilities.

  16. A polynomial chaos ensemble hydrologic prediction system for efficient parameter inference and robust uncertainty assessment

    Science.gov (United States)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Huang, W.

    2015-11-01

    This paper presents a polynomial chaos ensemble hydrologic prediction system (PCEHPS) for an efficient and robust uncertainty assessment of model parameters and predictions, in which possibilistic reasoning is infused into probabilistic parameter inference with simultaneous consideration of randomness and fuzziness. The PCEHPS is developed through a two-stage factorial polynomial chaos expansion (PCE) framework, which consists of an ensemble of PCEs to approximate the behavior of the hydrologic model, significantly speeding up the exhaustive sampling of the parameter space. Multiple hypothesis testing is then conducted to construct an ensemble of reduced-dimensionality PCEs with only the most influential terms, which is meaningful for achieving uncertainty reduction and further acceleration of parameter inference. The PCEHPS is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability. A detailed comparison between the HYMOD hydrologic model, the ensemble of PCEs, and the ensemble of reduced PCEs is performed in terms of accuracy and efficiency. Results reveal temporal and spatial variations in parameter sensitivities due to the dynamic behavior of hydrologic systems, and the effects (magnitude and direction) of parametric interactions depending on different hydrological metrics. The case study demonstrates that the PCEHPS is capable not only of capturing both expert knowledge and probabilistic information in the calibration process, but also of achieving a speed-up of more than 10 times relative to the hydrologic model without compromising the predictive accuracy.

  17. Prediction of strontium bromide laser efficiency using cluster and decision tree analysis

    Directory of Open Access Journals (Sweden)

    Iliev Iliycho

    2018-01-01

    Full Text Available The subject of investigation is a new high-powered strontium bromide (SrBr2) vapor laser emitting in a multiline region of wavelengths. The laser is an alternative to atomic strontium lasers and free-electron lasers, especially at the 6.45 μm line, which is used in surgery for the processing of biological tissues and bones with minimal damage. In this paper the experimental data from measurements of the operational and output characteristics of the laser are statistically processed by means of cluster analysis and tree-based regression techniques. The aim is to extract from the available data the more important relationships and dependences that influence the increase of the overall laser efficiency. A set of cluster models is constructed and analyzed. It is shown, using different cluster methods, that the seven investigated operational characteristics (laser tube diameter, length, supplied electrical power, and others) and the laser efficiency combine into 2 clusters. Regression tree models built with the Classification and Regression Trees (CART) technique yield dependences that predict the values of efficiency, and especially the maximum efficiency, with over 95% accuracy.
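    A minimal, hedged Python sketch of the regression-tree step is shown below using scikit-learn's CART implementation; the operational parameters and efficiency values are synthetic placeholders, not the measured laser data.

      # Hedged sketch: a CART regression tree relating operational parameters to laser
      # efficiency; the training data below are synthetic.
      import numpy as np
      from sklearn.tree import DecisionTreeRegressor, export_text

      rng = np.random.default_rng(0)
      # columns stand in for: tube diameter, active length, supplied electrical power
      X = rng.uniform([1.0, 40.0, 1.0], [6.0, 120.0, 5.0], size=(80, 3))
      efficiency = 0.5 + 0.1 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 80)  # made-up %

      tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=5, random_state=0)
      tree.fit(X, efficiency)
      print(export_text(tree, feature_names=["diameter", "length", "power"]))
      print("R^2 on training data:", round(tree.score(X, efficiency), 3))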

  18. Interrelationships between trait anxiety, situational stress and mental effort predict phonological processing efficiency, but not effectiveness.

    Science.gov (United States)

    Edwards, Elizabeth J; Edwards, Mark S; Lyvers, Michael

    2016-08-01

    Attentional control theory (ACT) describes the mechanisms associated with the relationship between anxiety and cognitive performance. We investigated the relationship between cognitive trait anxiety, situational stress and mental effort on phonological performance using a simple (forward-) and complex (backward-) word span task. Ninety undergraduate students participated in the study. Predictor variables were cognitive trait anxiety, indexed using questionnaire scores; situational stress, manipulated using ego threat instructions; and perceived level of mental effort, measured using a visual analogue scale. Criterion variables (a) performance effectiveness (accuracy) and (b) processing efficiency (accuracy divided by response time) were analyzed in separate multiple moderated-regression analyses. The results revealed (a) no relationship between the predictors and performance effectiveness, and (b) a significant 3-way interaction on processing efficiency for both the simple and complex tasks, such that at higher effort, trait anxiety and situational stress did not predict processing efficiency, whereas at lower effort, higher trait anxiety was associated with lower efficiency at high situational stress, but not at low situational stress. Our results were in full support of the assumptions of ACT and implications for future research are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Efficient first-principles prediction of solid stability: Towards chemical accuracy

    Science.gov (United States)

    Zhang, Yubo; Kitchaev, Daniil A.; Yang, Julia; Chen, Tina; Dacek, Stephen T.; Sarmiento-Pérez, Rafael A.; Marques, Maguel A. L.; Peng, Haowei; Ceder, Gerbrand; Perdew, John P.; Sun, Jianwei

    2018-03-01

    The question of material stability is of fundamental importance to any analysis of system properties in condensed matter physics and materials science. The ability to evaluate chemical stability, i.e., whether a stoichiometry will persist in some chemical environment, and structure selection, i.e. what crystal structure a stoichiometry will adopt, is critical to the prediction of materials synthesis, reactivity and properties. Here, we demonstrate that density functional theory, with the recently developed strongly constrained and appropriately normed (SCAN) functional, has advanced to a point where both facets of the stability problem can be reliably and efficiently predicted for main group compounds, while transition metal compounds are improved but remain a challenge. SCAN therefore offers a robust model for a significant portion of the periodic table, presenting an opportunity for the development of novel materials and the study of fine phase transformations even in largely unexplored systems with little to no experimental data.

  20. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    Science.gov (United States)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is

  1. Internalizing and externalizing traits predict changes in sleep efficiency in emerging adulthood: An actigraphy study

    Directory of Open Access Journals (Sweden)

    Ashley Yaugher

    2015-10-01

    Full Text Available Research on psychopathology and experimental studies of sleep restriction support a relationship between sleep disruption and both internalizing and externalizing disorders. The objective of the current study was to extend this research by examining sleep, impulsivity, antisocial personality traits, and internalizing traits in a university sample. Three hundred and eighty-six individuals (161 males) between the ages of 18 and 27 years (M = 18.59, SD = 0.98) wore actigraphs for 7 days and completed established measures of disorder-linked personality traits and sleep quality (i.e., Personality Assessment Inventory, Triarchic Psychopathy Measure, Barratt Impulsiveness Scale-11, and the Pittsburgh Sleep Quality Index). As expected, sleep measures and questionnaire scores fell within the normal range of values and sex differences in sleep and personality were consistent with previous research results. Similar to findings in predominantly male forensic psychiatric settings, higher levels of impulsivity predicted poorer subjective sleep quality in both women and men. Consistent with well-established associations between depression and sleep, higher levels of depression in both sexes predicted poorer subjective sleep quality. Bidirectional analyses showed that better sleep efficiency decreases depression. Finally, moderation analyses showed that gender does have a primary role in sleep efficiency and marginal effects were found. The observed relations between sleep and personality traits in a typical university sample add to converging evidence of the relationship between sleep and psychopathology and may inform our understanding of the development of psychopathology in young adulthood.

  2. Genome-wide prediction of traits with different genetic architecture through efficient variable selection.

    Science.gov (United States)

    Wimmer, Valentin; Lehermeier, Christina; Albrecht, Theresa; Auinger, Hans-Jürgen; Wang, Yu; Schön, Chris-Carolin

    2013-10-01

    In genome-based prediction there is considerable uncertainty about the statistical model and method required to maximize prediction accuracy. For traits influenced by a small number of quantitative trait loci (QTL), predictions are expected to benefit from methods performing variable selection [e.g., BayesB or the least absolute shrinkage and selection operator (LASSO)] compared to methods distributing effects across the genome [ridge regression best linear unbiased prediction (RR-BLUP)]. We investigate the assumptions underlying successful variable selection by combining computer simulations with large-scale experimental data sets from rice (Oryza sativa L.), wheat (Triticum aestivum L.), and Arabidopsis thaliana (L.). We demonstrate that variable selection can be successful when the number of phenotyped individuals is much larger than the number of causal mutations contributing to the trait. We show that the sample size required for efficient variable selection increases dramatically with decreasing trait heritabilities and increasing extent of linkage disequilibrium (LD). We contrast and discuss contradictory results from simulation and experimental studies with respect to superiority of variable selection methods over RR-BLUP. Our results demonstrate that due to long-range LD, medium heritabilities, and small sample sizes, superiority of variable selection methods cannot be expected in plant breeding populations even for traits like FRIGIDA gene expression in Arabidopsis and flowering time in rice, assumed to be influenced by a few major QTL. We extend our conclusions to the analysis of whole-genome sequence data and infer upper bounds for the number of causal mutations which can be identified by LASSO. Our results have major impact on the choice of statistical method needed to make credible inferences about genetic architecture and prediction accuracy of complex traits.
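    The contrast between variable-selection and ridge-type methods can be sketched, under simplifying assumptions, with scikit-learn on simulated marker data containing only a few causal loci. The simulation below is illustrative and does not reproduce BayesB, RR-BLUP or the rice, wheat and Arabidopsis data sets.

      # Hedged sketch: LASSO (variable selection) vs. ridge regression on simulated
      # SNP dosages with few causal loci; a stand-in for the comparison discussed above.
      import numpy as np
      from sklearn.linear_model import LassoCV, RidgeCV
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_individuals, n_markers, n_qtl = 400, 1000, 10
      X = rng.binomial(2, 0.3, size=(n_individuals, n_markers)).astype(float)  # SNP dosages
      beta = np.zeros(n_markers)
      beta[rng.choice(n_markers, n_qtl, replace=False)] = rng.normal(0, 1, n_qtl)
      genetic = X @ beta
      h2 = 0.6                                                  # assumed trait heritability
      noise_sd = np.sqrt(genetic.var() * (1 - h2) / h2)
      y = genetic + rng.normal(0, noise_sd, n_individuals)

      for name, model in [("LASSO", LassoCV(cv=5)), ("Ridge", RidgeCV())]:
          r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          print(f"{name}: mean CV R^2 = {r2:.2f}")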

  3. Efficient prediction of ground noise from helicopters and parametric studies based on acoustic mapping

    Directory of Open Access Journals (Sweden)

    Fei WANG

    2018-02-01

    Full Text Available Based on the acoustic mapping, a prediction model for the ground noise radiated from an in-flight helicopter is established. For the enhancement of calculation efficiency, a high-efficiency second-level acoustic radiation model capable of taking the influence of atmosphere absorption on noise into account is first developed by the combination of the point-source idea and the rotor noise radiation characteristics. The comparison between the present model and the direct computation method of noise is done and the high efficiency of the model is validated. Rotor free-wake analysis method and Ffowcs Williams-Hawkings (FW-H equation are applied to the aerodynamics and noise prediction in the present model. Secondly, a database of noise spheres with the characteristic parameters of advance ratio and tip-path-plane angle is established by the helicopter trim model together with a parametric modeling approach. Furthermore, based on acoustic mapping, a method of rapid simulation for the ground noise radiated from an in-flight helicopter is developed. The noise footprint for AH-1 rotor is then calculated and the influence of some parameters including advance ratio and flight path angle on ground noise is deeply analyzed using the developed model. The results suggest that with the increase of advance ratio and flight path angle, the peak noise levels on the ground first increase and then decrease, in the meantime, the maximum Sound Exposure Level (SEL noise on the ground shifts toward the advancing side of rotor. Besides, through the analysis of the effects of longitudinal forces on miss-distance and rotor Blade-Vortex Interaction (BVI noise in descent flight, some meaningful results for reducing the BVI noise on the ground are obtained. Keywords: Acoustic mapping, Helicopter, Noise footprint, Rotor noise, Second-level acoustic radiation model

  4. Driving Green: Toward the Prediction and Influence of Efficient Driving Behavior

    Science.gov (United States)

    Newsome, William D.

    Sub-optimal efficiency in activities involving the consumption of fossil fuels, such as driving, contribute to a miscellany of negative environmental, political, economic and social externalities. Demonstrations of the effectiveness of feedback interventions can be found in countless organizational settings, as can demonstrations of individual differences in sensitivity to feedback interventions. Mechanisms providing feedback to drivers about fuel economy are becoming standard equipment in most new vehicles, but vary considerably in their constitution. A keystone of Radical Behaviorism is the acknowledgement that verbal behavior appears to play a role in mediating apparent susceptibility to influence by contingencies of varying delay. In the current study, samples of verbal behavior (rules) were collected in the context of a feedback intervention to improve driving efficiency. In an analysis of differences in individual responsiveness to the feedback intervention, the rate of novel rules per week generated by drivers is revealed to account for a substantial proportion of the variability in relative efficiency gains across participants. The predictive utility of conceptual tools, such as the basic distinction among contingency-shaped and rule governed behavior, the elaboration of direct-acting and indirect-acting contingencies, and the psychological flexibility model, is bolstered by these findings.

  5. Incremental validity of positive orientation: predictive efficiency beyond the five-factor model

    Directory of Open Access Journals (Sweden)

    Łukasz Roland Miciuk

    2016-05-01

    Full Text Available Background The relation of positive orientation (a basic predisposition to think positively of oneself, one’s life and one’s future and personality traits is still disputable. The purpose of the described research was to verify the hypothesis that positive orientation has predictive efficiency beyond the five-factor model. Participants and procedure One hundred and thirty participants (at the mean age M = 24.84 completed the following questionnaires: the Self-Esteem Scale (SES, the Satisfaction with Life Scale (SWLS, the Life Orientation Test-Revised (LOT-R, the Positivity Scale (P-SCALE, the NEO Five Factor Inventory (NEO-FFI, the Self-Concept Clarity Scale (SCC, the Generalized Self-Efficacy Scale (GSES and the Life Engagement Test (LET. Results The introduction of positive orientation as an additional predictor in the second step of regression analyses led to better prediction of the following variables: purpose in life, self-concept clarity and generalized self-efficacy. This effect was the strongest for predicting purpose in life (i.e. 14% increment of the explained variance. Conclusions The results confirmed our hypothesis that positive orientation can be characterized by incremental validity – its inclusion in the regression model (in addition to the five main factors of personality increases the amount of explained variance. These findings may provide further evidence for the legitimacy of measuring positive orientation and personality traits separately.

  6. An Efficient Semi-supervised Learning Approach to Predict SH2 Domain Mediated Interactions.

    Science.gov (United States)

    Kundu, Kousik; Backofen, Rolf

    2017-01-01

    The Src homology 2 (SH2) domain is an important subclass of modular protein domains that plays an indispensable role in several biological processes in eukaryotes. SH2 domains specifically bind to the phosphotyrosine residue of their binding peptides to facilitate various molecular functions. For determining the subtle binding specificities of SH2 domains, it is very important to understand the intriguing mechanisms by which these domains recognize their target peptides in a complex cellular environment. Several attempts have been made to predict SH2-peptide interactions using high-throughput data. However, these high-throughput data are often affected by a low signal-to-noise ratio. Furthermore, the prediction methods have several additional shortcomings, such as the linearity problem, high computational complexity, etc. Thus, computational identification of SH2-peptide interactions using high-throughput data remains challenging. Here, we propose a machine learning approach based on an efficient semi-supervised learning technique for the prediction of 51 SH2 domain mediated interactions in the human proteome. In our study, we have successfully employed several strategies to tackle the major problems in computational identification of SH2-peptide interactions.
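    A generic semi-supervised set-up of the kind alluded to above can be sketched with scikit-learn's self-training wrapper around an SVM; this is a stand-in under stated assumptions, not the authors' algorithm or their SH2 interaction data.

      # Hedged sketch: self-training around an SVM on synthetic peptide-like features;
      # unlabeled samples are flagged with -1, as scikit-learn expects.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.semi_supervised import SelfTrainingClassifier
      from sklearn.datasets import make_classification

      X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                                 random_state=0)
      rng = np.random.default_rng(0)
      y_semi = y.copy()
      unlabeled = rng.random(y.size) < 0.8          # hide 80% of the labels
      y_semi[unlabeled] = -1

      base = SVC(kernel="rbf", probability=True)
      model = SelfTrainingClassifier(base, threshold=0.9).fit(X, y_semi)

      # evaluate on the held-back (originally unlabeled) examples
      acc = (model.predict(X[unlabeled]) == y[unlabeled]).mean()
      print("accuracy on originally unlabeled examples:", round(acc, 3))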

  7. Resource competition model predicts zonation and increasing nutrient use efficiency along a wetland salinity gradient

    Science.gov (United States)

    Schoolmaster, Donald; Stagg, Camille L.

    2018-01-01

    A trade-off between competitive ability and stress tolerance has been hypothesized and empirically supported to explain the zonation of species across stress gradients for a number of systems. Since stress often reduces plant productivity, one might expect a pattern of decreasing productivity across the zones of the stress gradient. However, this pattern is often not observed in coastal wetlands that show patterns of zonation along a salinity gradient. To address the potentially complex relationship between stress, zonation, and productivity in coastal wetlands, we developed a model of plant biomass as a function of resource competition and salinity stress. Analysis of the model confirms the conventional wisdom that a trade-off between competitive ability and stress tolerance is a necessary condition for zonation. It also suggests that a negative relationship between salinity and production can be overcome if (1) the supply of the limiting resource increases with greater salinity stress or (2) nutrient use efficiency increases with increasing salinity. We fit the equilibrium solution of the dynamic model to data from Louisiana coastal wetlands to test its ability to explain patterns of production across the landscape gradient and derive predictions that could be tested with independent data. We found support for a number of the model predictions, including patterns of decreasing competitive ability and increasing nutrient use efficiency across a gradient from freshwater to saline wetlands. In addition to providing a quantitative framework to support the mechanistic hypotheses of zonation, these results suggest that this simple model is a useful platform to further build upon, simulate and test mechanistic hypotheses of more complex patterns and phenomena in coastal wetlands.

  8. Energy-Efficient Control with Harvesting Predictions for Solar-Powered Wireless Sensor Networks.

    Science.gov (United States)

    Zou, Tengyue; Lin, Shouying; Feng, Qijie; Chen, Yanlian

    2016-01-04

    Wireless sensor networks equipped with rechargeable batteries are useful for outdoor environmental monitoring. However, the severe energy constraints of the sensor nodes present major challenges for long-term applications. To achieve sustainability, solar cells can be used to acquire energy from the environment. Unfortunately, the energy supplied by the harvesting system is generally intermittent and considerably influenced by the weather. To improve the energy efficiency and extend the lifetime of the networks, we propose algorithms for harvested energy prediction using environmental shadow detection. Thus, the sensor nodes can adjust their scheduling plans accordingly to best suit their energy production and residual battery levels. Furthermore, we introduce clustering and routing selection methods to optimize the data transmission, and a Bayesian network is used for warning notifications of bottlenecks along the path. The entire system is implemented on a real-time Texas Instruments CC2530 embedded platform, and the experimental results indicate that these mechanisms sustain the networks' activities in an uninterrupted and efficient manner.

  9. An efficient hybrid technique in RCS predictions of complex targets at high frequencies

    Science.gov (United States)

    Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe

    2017-09-01

    Most computer codes in Radar Cross Section (RCS) prediction use Physical Optics (PO) and Physical theory of Diffraction (PTD) combined with Geometrical Optics (GO) and Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but not applicable for the computation of the RCS of all surfaces of a complex object due to the presence of caustic problems in the analysis of concave surfaces or flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques: GTD and PO, considering the advantages and avoiding the disadvantages of each of them. A very efficient and accurate method to analyze the RCS of complex structures at high frequencies is obtained with the new combination. The proposed new method has been validated comparing RCS results obtained for some simple cases using the proposed approach and RCS using the rigorous technique of Method of Moments (MoM). Some complex cases have been examined at high frequencies contrasting the results with PO. This study shows the accuracy and the efficiency of the hybrid method and its suitability for the computation of the RCS at really large and complex targets at high frequencies.

  10. A Calibrated Lumped Element Model for the Prediction of PSJ Actuator Efficiency Performance

    Directory of Open Access Journals (Sweden)

    Matteo Chiatto

    2018-03-01

    Full Text Available Among the various active flow control techniques, Plasma Synthetic Jet (PSJ) actuators, or Sparkjets, represent a very promising technology, especially because of their high velocities and short response times. A practical tool, employed for design and manufacturing purposes, consists of the definition of a low-order model, the lumped element model (LEM), which is able to predict the dynamic response of the actuator in a relatively quick way and with reasonable fidelity and accuracy. After a brief description of an innovative lumped model, this work addresses the experimental investigation of a home-designed and manufactured PSJ actuator, for different frequencies and energy discharges. Particular attention has been paid to the power supply system design. A specific home-made Pitot tube has allowed the detection of velocity profiles along the jet radial direction, for various energy discharges, as well as the tuning of the lumped model with experimental data, where the total device efficiency has been assumed as a fitting parameter. The best fitting value not only contains information on the actual device efficiency, but also includes some modeling and experimental uncertainties, related to the measurement technique used.

  11. Energy-Efficient Control with Harvesting Predictions for Solar-Powered Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tengyue Zou

    2016-01-01

    Full Text Available Wireless sensor networks equipped with rechargeable batteries are useful for outdoor environmental monitoring. However, the severe energy constraints of the sensor nodes present major challenges for long-term applications. To achieve sustainability, solar cells can be used to acquire energy from the environment. Unfortunately, the energy supplied by the harvesting system is generally intermittent and considerably influenced by the weather. To improve the energy efficiency and extend the lifetime of the networks, we propose algorithms for harvested energy prediction using environmental shadow detection. Thus, the sensor nodes can adjust their scheduling plans accordingly to best suit their energy production and residual battery levels. Furthermore, we introduce clustering and routing selection methods to optimize the data transmission, and a Bayesian network is used for warning notifications of bottlenecks along the path. The entire system is implemented on a real-time Texas Instruments CC2530 embedded platform, and the experimental results indicate that these mechanisms sustain the networks’ activities in an uninterrupted and efficient manner.

  12. In Search of a Time Efficient Approach to Crack and Delamination Growth Predictions in Composites

    Science.gov (United States)

    Krueger, Ronald; Carvalho, Nelson

    2016-01-01

    Analysis benchmarking was used to assess the accuracy and time efficiency of algorithms suitable for automated delamination growth analysis. First, the Floating Node Method (FNM) was introduced and its combination with a simple exponential growth law (Paris Law) and Virtual Crack Closure technique (VCCT) was discussed. Implementation of the method into a user element (UEL) in Abaqus/Standard(Registered TradeMark) was also presented. For the assessment of growth prediction capabilities, an existing benchmark case based on the Double Cantilever Beam (DCB) specimen was briefly summarized. Additionally, the development of new benchmark cases based on the Mixed-Mode Bending (MMB) specimen to assess the growth prediction capabilities under mixed-mode I/II conditions was discussed in detail. A comparison was presented, in which the benchmark cases were used to assess the existing low-cycle fatigue analysis tool in Abaqus/Standard(Registered TradeMark) in comparison to the FNM-VCCT fatigue growth analysis implementation. The low-cycle fatigue analysis tool in Abaqus/Standard(Registered TradeMark) was able to yield results that were in good agreement with the DCB benchmark example. Results for the MMB benchmark cases, however, only captured the trend correctly. The user element (FNM-VCCT) always yielded results that were in excellent agreement with all benchmark cases, at a fraction of the analysis time. The ability to assess the implementation of two methods in one finite element code illustrated the value of establishing benchmark solutions.

  13. Labour-efficient in vitro lymphocyte population tracking and fate prediction using automation and manual review.

    Directory of Open Access Journals (Sweden)

    Rajib Chakravorty

    Full Text Available Interest in cell heterogeneity and differentiation has recently led to increased use of time-lapse microscopy. Previous studies have shown that cell fate may be determined well in advance of the event. We used a mixture of automation and manual review of time-lapse live cell imaging to track the positions, contours, divisions, deaths and lineage of 44 B-lymphocyte founders and their 631 progeny in vitro over a period of 108 hours. Using this data to train a Support Vector Machine classifier, we were retrospectively able to predict the fates of individual lymphocytes with more than 90% accuracy, using only time-lapse imaging captured prior to mitosis or death of 90% of all cells. The motivation for this paper is to explore the impact of labour-efficient assistive software tools that allow larger and more ambitious live-cell time-lapse microscopy studies. After training on this data, we show that machine learning methods can be used for realtime prediction of individual cell fates. These techniques could lead to realtime cell culture segregation for purposes such as phenotype screening. We were able to produce a large volume of data with less effort than previously reported, due to the image processing, computer vision, tracking and human-computer interaction tools used. We describe the workflow of the software-assisted experiments and the graphical interfaces that were needed. To validate our results we used our methods to reproduce a variety of published data about lymphocyte populations and behaviour. We also make all our data publicly available, including a large quantity of lymphocyte spatio-temporal dynamics and related lineage information.
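    A hedged Python sketch of the classification step, an SVM trained on per-cell track features to predict fate, is given below; the feature set and labels are synthetic placeholders rather than the published imaging data.

      # Hedged sketch: SVM on per-cell track features to predict fate (divide vs. die);
      # features and labels are synthetic stand-ins for the time-lapse measurements.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n_cells = 300
      # hypothetical track features: mean area, area growth rate, mean speed, eccentricity
      X = rng.normal(size=(n_cells, 4))
      fate = (0.9 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 0.7, n_cells) > 0).astype(int)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      scores = cross_val_score(clf, X, fate, cv=5)
      print("cross-validated accuracy: %.2f ± %.2f" % (scores.mean(), scores.std()))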

  14. Cell population structure prior to bifurcation predicts efficiency of directed differentiation in human induced pluripotent cells.

    Science.gov (United States)

    Bargaje, Rhishikesh; Trachana, Kalliopi; Shelton, Martin N; McGinnis, Christopher S; Zhou, Joseph X; Chadick, Cora; Cook, Savannah; Cavanaugh, Christopher; Huang, Sui; Hood, Leroy

    2017-02-28

    Steering the differentiation of induced pluripotent stem cells (iPSCs) toward specific cell types is crucial for patient-specific disease modeling and drug testing. This effort requires the capacity to predict and control when and how multipotent progenitor cells commit to the desired cell fate. Cell fate commitment represents a critical state transition or "tipping point" at which complex systems undergo a sudden qualitative shift. To characterize such transitions during iPSC to cardiomyocyte differentiation, we analyzed the gene expression patterns of 96 developmental genes at single-cell resolution. We identified a bifurcation event early in the trajectory when a primitive streak-like cell population segregated into the mesodermal and endodermal lineages. Before this branching point, we could detect the signature of an imminent critical transition: increase in cell heterogeneity and coordination of gene expression. Correlation analysis of gene expression profiles at the tipping point indicates transcription factors that drive the state transition toward each alternative cell fate and their relationships with specific phenotypic readouts. The latter helps us to facilitate small molecule screening for differentiation efficiency. To this end, we set up an analysis of cell population structure at the tipping point after systematic variation of the protocol to bias the differentiation toward mesodermal or endodermal cell lineage. We were able to predict the proportion of cardiomyocytes many days before cells manifest the differentiated phenotype. The analysis of cell populations undergoing a critical state transition thus affords a tool to forecast cell fate outcomes and can be used to optimize differentiation protocols to obtain desired cell populations.

  15. Labour-efficient in vitro lymphocyte population tracking and fate prediction using automation and manual review.

    Science.gov (United States)

    Chakravorty, Rajib; Rawlinson, David; Zhang, Alan; Markham, John; Dowling, Mark R; Wellard, Cameron; Zhou, Jie H S; Hodgkin, Philip D

    2014-01-01

    Interest in cell heterogeneity and differentiation has recently led to increased use of time-lapse microscopy. Previous studies have shown that cell fate may be determined well in advance of the event. We used a mixture of automation and manual review of time-lapse live cell imaging to track the positions, contours, divisions, deaths and lineage of 44 B-lymphocyte founders and their 631 progeny in vitro over a period of 108 hours. Using this data to train a Support Vector Machine classifier, we were retrospectively able to predict the fates of individual lymphocytes with more than 90% accuracy, using only time-lapse imaging captured prior to mitosis or death of 90% of all cells. The motivation for this paper is to explore the impact of labour-efficient assistive software tools that allow larger and more ambitious live-cell time-lapse microscopy studies. After training on this data, we show that machine learning methods can be used for realtime prediction of individual cell fates. These techniques could lead to realtime cell culture segregation for purposes such as phenotype screening. We were able to produce a large volume of data with less effort than previously reported, due to the image processing, computer vision, tracking and human-computer interaction tools used. We describe the workflow of the software-assisted experiments and the graphical interfaces that were needed. To validate our results we used our methods to reproduce a variety of published data about lymphocyte populations and behaviour. We also make all our data publicly available, including a large quantity of lymphocyte spatio-temporal dynamics and related lineage information.

  16. Predictable quantum efficient detector based on n-type silicon photodiodes

    Science.gov (United States)

    Dönsberg, Timo; Manoocheri, Farshid; Sildoja, Meelis; Juntunen, Mikko; Savin, Hele; Tuovinen, Esa; Ronkainen, Hannu; Prunnila, Mika; Merimaa, Mikko; Tang, Chi Kwong; Gran, Jarle; Müller, Ingmar; Werner, Lutz; Rougié, Bernard; Pons, Alicia; Smîd, Marek; Gál, Péter; Lolli, Lapo; Brida, Giorgio; Rastello, Maria Luisa; Ikonen, Erkki

    2017-12-01

    The predictable quantum efficient detector (PQED) consists of two custom-made induced junction photodiodes that are mounted in a wedged trap configuration for the reduction of reflectance losses. Until now, all manufactured PQED photodiodes have been based on a structure where a SiO2 layer is thermally grown on top of p-type silicon substrate. In this paper, we present the design, manufacturing, modelling and characterization of a new type of PQED, where the photodiodes have an Al2O3 layer on top of n-type silicon substrate. Atomic layer deposition is used to deposit the layer to the desired thickness. Two sets of photodiodes with varying oxide thicknesses and substrate doping concentrations were fabricated. In order to predict recombination losses of charge carriers, a 3D model of the photodiode was built into Cogenda Genius semiconductor simulation software. It is important to note that a novel experimental method was developed to obtain values for the 3D model parameters. This makes the prediction of the PQED responsivity a completely autonomous process. Detectors were characterized for temperature dependence of dark current, spatial uniformity of responsivity, reflectance, linearity and absolute responsivity at the wavelengths of 488 nm and 532 nm. For both sets of photodiodes, the modelled and measured responsivities were generally in agreement within the measurement and modelling uncertainties of around 100 parts per million (ppm). There is, however, an indication that the modelled internal quantum deficiency may be underestimated by a similar amount. Moreover, the responsivities of the detectors were spatially uniform within 30 ppm peak-to-peak variation. The results obtained in this research indicate that the n-type induced junction photodiode is a very promising alternative to the existing p-type detectors, and thus give additional credibility to the concept of modelled quantum detector serving as a primary standard. Furthermore, the manufacturing of

  17. The assessment of different models to predict solar module temperature, output power and efficiency for Nis, Serbia

    International Nuclear Information System (INIS)

    Pantic, Lana S.; Pavlović, Tomislav M.; Milosavljević, Dragana D.; Radonjic, Ivana S.; Radovic, Miodrag K.; Sazhko, Galina

    2016-01-01

    Five different models for calculating solar module temperature, output power and efficiency for sunny days with different solar radiation intensities and ambient temperatures are assessed in this paper. Thereafter, modeled values are compared to the experimentally obtained values for the horizontal solar module in Nis, Serbia. The criterion for determining the best model was based on the statistical analysis and the agreement between the calculated and the experimental values. The calculated values of solar module temperature are in good agreement with the experimentally obtained ones, with some variations over and under the measured values. The best agreement between calculated and experimentally obtained values was for summer months with high solar radiation intensity. The nonlinear model for calculating the output power is much better than the linear model and at the same time better predicts the total electrical energy generated by the solar module during the day. The nonlinear model for calculating the solar module efficiency predicts the efficiency higher than the STC (Standard Test Conditions) value of solar module efficiency for all conditions, while the linear model predicts the solar module efficiency very well. This paper provides a simple and efficient guideline to estimate relevant parameters of a monocrystalline silicon solar module under moderate-continental climate conditions. - Highlights: • Linear model for solar module temperature gives accurate predictions for August. • The nonlinear model better predicts the solar module power than the linear model. • For calculating solar module power for Nis we propose the nonlinear model. • For calculating solar module efficiency for Nis we propose adoption of the linear model. • The adopted models can be used for calculations throughout the year.

  18. Birth weight predicted baseline muscular efficiency, but not response of energy expenditure to calorie restriction: An empirical test of the predictive adaptive response hypothesis.

    Science.gov (United States)

    Workman, Megan; Baker, Jack; Lancaster, Jane B; Mermier, Christine; Alcock, Joe

    2016-07-01

    Aiming to test the evolutionary significance of relationships linking prenatal growth conditions to adult phenotypes, this study examined whether birth size predicts energetic savings during fasting. We specifically tested a Predictive Adaptive Response (PAR) model that predicts greater energetic saving among adults who were born small. Data were collected from a convenience sample of young adults living in Albuquerque, NM (n = 34). Indirect calorimetry quantified changes in resting energy expenditure (REE) and active muscular efficiency that occurred in response to a 29-h fast. Multiple regression analyses linked birth weight to baseline and postfast metabolic values while controlling for appropriate confounders (e.g., sex, body mass). Birth weight did not moderate the relationship between body size and energy expenditure, nor did it predict the magnitude of change in REE or muscular efficiency observed from baseline to after fasting. Alternative indicators of birth size were also examined (e.g., low v. normal birth weight, comparison of tertiles), with no effects found. However, baseline muscular efficiency improved by 1.1% per 725 g (S.D.) increase in birth weight (P = 0.037). Birth size did not influence the sensitivity of metabolic demands to fasting, neither at rest nor during activity. Moreover, small birth size predicted a reduction in the efficiency with which muscles convert energy expended into work accomplished. These results do not support the ascription of adaptive function to phenotypes associated with small birth size. Am. J. Hum. Biol. 28:484-492, 2016. © 2015 Wiley Periodicals, Inc.

  19. A Robust Model Predictive Control for efficient thermal management of internal combustion engines

    International Nuclear Information System (INIS)

    Pizzonia, Francesco; Castiglione, Teresa; Bova, Sergio

    2016-01-01

    Highlights: • A Robust Model Predictive Control for ICE thermal management was developed. • The proposed control is effective in decreasing the warm-up time. • The control system reduces coolant flow rate under fully warmed conditions. • The control strategy operates the cooling system around onset of nucleate boiling. • Little on-line computational effort is required. - Abstract: Optimal thermal management of modern internal combustion engines (ICE) is one of the key factors for reducing fuel consumption and CO_2 emissions. These are measured by using standardized driving cycles, like the New European Driving Cycle (NEDC), during which the engine does not reach thermal steady state; engine efficiency and emissions are therefore penalized. Several techniques for improving ICE thermal efficiency were proposed, which range from the use of empirical look-up tables to pulsed pump operation. A systematic approach to the problem is however still missing and this paper aims to bridge this gap. The paper proposes a Robust Model Predictive Control of the coolant flow rate, which makes use of a zero-dimensional model of the cooling system of an ICE. The control methodology incorporates explicitly the model uncertainties and achieves the synthesis of a state-feedback control law that minimizes the “worst case” objective function while taking into account the system constraints, as proposed by Kothare et al. (1996). The proposed control strategy is to adjust the coolant flow rate by means of an electric pump, in order to bring the cooling system to operate around the onset of nucleate boiling: across it during warm-up and above it (nucleate or saturated boiling) under fully warmed conditions. The computationally heavy optimization is carried out off-line, while during the operation of the engine the control parameters are simply picked-up on-line from look-up tables. Owing to the little computational effort required, the resulting control strategy is suitable for

  20. Pulmonary hypertension in patients with idiopathic pulmonary fibrosis - the predictive value of exercise capacity and gas exchange efficiency.

    Directory of Open Access Journals (Sweden)

    Sven Gläser

    Full Text Available Exercise capacity and survival of patients with IPF are potentially impaired by pulmonary hypertension. This study aims to investigate diagnostic and prognostic properties of gas exchange during exercise and lung function in IPF patients with or without pulmonary hypertension. In a multicentre setting, patients with IPF underwent right heart catheterization, cardiopulmonary exercise and lung function testing during their initial evaluation. Mortality follow-up was evaluated. Seventy-three of 135 patients [82 males; median age of 64 (56; 72) years] with IPF had pulmonary hypertension as assessed by right heart catheterization [median mean pulmonary arterial pressure 34 (27; 43) mmHg]. The presence of pulmonary hypertension was best predicted by gas exchange efficiency for carbon dioxide (cut-off ≥152% predicted; area under the curve 0.94) and peak oxygen uptake (≤56% predicted; 0.83), followed by diffusing capacity. Resting lung volumes did not predict pulmonary hypertension. Survival was best predicted by the presence of pulmonary hypertension, followed by peak oxygen uptake [HR 0.96 (0.93; 0.98)]. Pulmonary hypertension in IPF patients is best predicted by gas exchange efficiency during exercise and peak oxygen uptake. In addition to invasively measured pulmonary arterial pressure, oxygen uptake at peak exercise predicts survival in this patient population.
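    The threshold-finding step can be illustrated with a short, hedged Python sketch: a ROC analysis of an exercise-derived predictor (e.g., gas exchange efficiency in % predicted) against catheterization-confirmed pulmonary hypertension, with the cut-off chosen by the Youden index. All numbers are synthetic assumptions, not the study data.

      # Hedged sketch: ROC analysis of a synthetic exercise-derived predictor against
      # the presence of pulmonary hypertension, with a Youden-index cut-off.
      import numpy as np
      from sklearn.metrics import roc_curve, auc

      rng = np.random.default_rng(0)
      has_ph = rng.integers(0, 2, 135)
      # synthetic predictor: higher "% predicted" values in the PH group
      predictor = 130 + 40 * has_ph + rng.normal(0, 20, has_ph.size)

      fpr, tpr, thresholds = roc_curve(has_ph, predictor)
      best = np.argmax(tpr - fpr)                   # Youden's J statistic
      print("AUC ≈ %.2f, best cut-off ≈ %.0f%% predicted" % (auc(fpr, tpr), thresholds[best]))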

  1. Efficient multi-scenario Model Predictive Control for water resources management with ensemble streamflow forecasts

    Science.gov (United States)

    Tian, Xin; Negenborn, Rudy R.; van Overloop, Peter-Jules; María Maestre, José; Sadowska, Anna; van de Giesen, Nick

    2017-11-01

    Model Predictive Control (MPC) is one of the most advanced real-time control techniques that has been widely applied to Water Resources Management (WRM). MPC can manage the water system in a holistic manner and has a flexible structure to incorporate specific elements, such as setpoints and constraints. Therefore, MPC has shown its versatile performance in many branches of WRM. Nonetheless, with the in-depth understanding of stochastic hydrology in recent studies, MPC also faces the challenge of how to cope with hydrological uncertainty in its decision-making process. A possible way to embed the uncertainty is to generate an Ensemble Forecast (EF) of hydrological variables, rather than a deterministic one. The combination of MPC and EF results in a more comprehensive approach: Multi-scenario MPC (MS-MPC). In this study, we will first assess the model performance of MS-MPC, considering an ensemble streamflow forecast. Noticeably, the computational inefficiency may be a critical obstacle that hinders applicability of MS-MPC. In fact, with more scenarios taken into account, the computational burden of solving an optimization problem in MS-MPC accordingly increases. To deal with this challenge, we propose the Adaptive Control Resolution (ACR) approach as a computationally efficient scheme to practically reduce the number of control variables in MS-MPC. In brief, the ACR approach uses a mixed-resolution control time step from the near future to the distant future. The ACR-MPC approach is tested on a real-world case study: an integrated flood control and navigation problem in the North Sea Canal of the Netherlands. Such an approach reduces the computation time by 18% and up in our case study. At the same time, the model performance of ACR-MPC remains close to that of conventional MPC.
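    A toy version of multi-scenario MPC can be written in a few lines with cvxpy: each ensemble inflow scenario gets its own state trajectory, the first control move is shared across scenarios (non-anticipativity), and the expected cost is minimized. The single-reservoir model below is an illustrative assumption; the North Sea Canal system and the ACR time-stepping of the paper are not reproduced.

      # Hedged sketch: multi-scenario MPC for a toy reservoir with an ensemble
      # streamflow forecast; only the first release decision is shared across scenarios.
      import cvxpy as cp
      import numpy as np

      horizon, n_scenarios = 12, 5
      rng = np.random.default_rng(0)
      inflow = rng.uniform(0.5, 2.0, size=(n_scenarios, horizon))   # ensemble forecast
      x0, setpoint, u_max = 10.0, 10.0, 3.0                         # storage, target, max release

      u = cp.Variable((n_scenarios, horizon), nonneg=True)          # releases per scenario
      x = cp.Variable((n_scenarios, horizon + 1))                   # storages per scenario

      constraints = [x[:, 0] == x0, u <= u_max]
      constraints += [u[s, 0] == u[0, 0] for s in range(1, n_scenarios)]  # shared first move
      cost = 0
      for s in range(n_scenarios):
          for k in range(horizon):
              constraints.append(x[s, k + 1] == x[s, k] + inflow[s, k] - u[s, k])  # mass balance
              cost += cp.square(x[s, k + 1] - setpoint) + 0.1 * cp.square(u[s, k])
      cost = cost / n_scenarios                                     # average over scenarios

      cp.Problem(cp.Minimize(cost), constraints).solve()
      print("release to apply now:", round(float(u.value[0, 0]), 3))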

  2. DDR: Efficient computational method to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.; Ashoor, Haitham; Bajic, Vladimir B.

    2017-01-01

    but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 novel DDR predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs.

  3. ARCH Models Efficiency Evaluation in Prediction and Poultry Price Process Formation

    Directory of Open Access Journals (Sweden)

    Behzad Fakari Sardehae

    2016-09-01

    This study shows that heteroscedastic variance is present in the error term, as indicated by an LM test. Results and Discussion: The stationarity test showed that the poultry price series has a unit root and becomes stationary after first differencing, so the first-differenced price was used in the study. The main results showed that ARCH is the best model for predicting price fluctuations. Moreover, news has an asymmetric effect on poultry price fluctuations: good news has a stronger effect than bad news, and no leverage effect exists in the poultry price. Current fluctuations do not transmit to the future. One of the main assumptions of time-series models is constant variance of the error term. If this assumption does not hold, the estimated coefficients describing the serial correlation in the data are biased and lead to wrong interpretations. The results showed that ARCH effects exist in the error terms of the poultry price, so the ARCH family with a Student's t distribution should be used. Normality tests of the error term and checks for heteroscedasticity are needed, and neglecting them leads to false conclusions. ARCH models showed good predictive power, and ARMA models are less efficient than ARCH models, indicating that non-linear predictions outperform linear ones. According to the results, the Student's t distribution should be used as the target distribution in the estimated models. Conclusion: The large demand for poultry requires infrastructure to respond to it. The results showed that poultry price volatility changes over time and may intensify at any time. The asymmetric effect of good and bad news on the poultry price leads to consumer reactions. Good news had significant effects on the poultry market and produced a positive change in the poultry price, whereas bad news did not have significant effects. In fact, because poultry is an essential product in the household portfolio, it should not
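
    A minimal sketch of the kind of asymmetric ARCH-family fit described above, assuming the Python arch package and a synthetic price series; the lag orders, the GJR-style asymmetry term and the Student's t errors are illustrative choices, not the study's exact specification.

```python
# Illustrative sketch: fit an asymmetric ARCH-family model with Student's t errors
# to first-differenced prices, as described in the record. The data here are synthetic.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
price = 100 + np.cumsum(rng.normal(0, 1, 500))   # synthetic price level
returns = 100 * np.diff(np.log(price))           # first-differenced (log) prices

# GJR-GARCH(1,1,1) nests a leverage/asymmetry term (o=1); dist="t" uses Student's t.
model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.summary())

# Out-of-sample volatility forecast for the next 5 periods.
print(result.forecast(horizon=5).variance.iloc[-1])
```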

  4. Predictive information speeds up visual awareness in an individuation task by modulating threshold setting, not processing efficiency.

    Science.gov (United States)

    De Loof, Esther; Van Opstal, Filip; Verguts, Tom

    2016-04-01

    Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Multiple regression models for the prediction of the maximum obtainable thermal efficiency of organic Rankine cycles

    DEFF Research Database (Denmark)

    Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit

    2014-01-01

    Much attention is focused on increasing the energy efficiency to decrease fuel costs and CO2 emissions throughout industrial sectors. The ORC (organic Rankine cycle) is a relatively simple but efficient process that can be used for this purpose by converting low and medium temperature waste heat ...

  6. An Entropy-Based Kernel Learning Scheme toward Efficient Data Prediction in Cloud-Assisted Network Environments

    Directory of Open Access Journals (Sweden)

    Xiong Luo

    2016-07-01

    With the recent emergence of wireless sensor networks (WSNs) in the cloud computing environment, it is now possible to monitor and gather physical information via large numbers of sensor nodes to meet the requirements of cloud services. Generally, those sensor nodes collect data and send them to the sink node, where end-users can query the information and run cloud applications. Currently, one of the main limitations of sensor nodes is their restricted physical capability, namely little memory for storage and a limited power supply. To work around this limitation, an efficient data prediction method for WSNs is needed. To serve this purpose, by reducing redundant data transmission between the sensor nodes and the sink node while keeping errors within an acceptable range, this article proposes an entropy-based learning scheme for data prediction using the kernel least mean square (KLMS) algorithm. The proposed scheme, called E-KLMS, develops a mechanism to keep the predicted data synchronized at both sides. Specifically, the kernel-based method adjusts the coefficients adaptively with every input, which achieves better performance with smaller prediction errors, while information entropy is employed to remove data that may cause relatively large errors. E-KLMS can effectively solve the trade-off between prediction accuracy and computational effort while greatly simplifying the training structure compared with some other data prediction approaches. Moreover, the kernel-based method and the entropy technique together ensure the prediction quality by both improving accuracy and reducing errors. Experiments with real data sets have been carried out to validate the efficiency and effectiveness of the E-KLMS learning scheme, and the results show the advantages of our method in prediction accuracy and computational time.
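
    A minimal sketch of the kernel least mean square (KLMS) core of such a scheme, on a synthetic sensor signal; the step size, kernel width and the entropy-based sample rejection of E-KLMS are not reproduced here.

```python
# Illustrative sketch of a kernel least mean square (KLMS) predictor with a Gaussian
# kernel, in the spirit of the scheme described above. Parameters (step size, kernel
# width, the toy signal) are assumptions for illustration only.
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def klms_online(inputs, targets, step=0.5, sigma=1.0):
    """Online KLMS: store inputs as centres, update coefficients from prediction errors."""
    centres, coeffs, predictions = [], [], []
    for x, d in zip(inputs, targets):
        y_hat = sum(a * gaussian_kernel(c, x, sigma) for a, c in zip(coeffs, centres))
        e = d - y_hat                     # prediction error drives the update
        centres.append(x)
        coeffs.append(step * e)
        predictions.append(y_hat)
    return np.array(predictions)

# Toy sensor signal: predict the next sample from the two previous ones.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.normal(size=400)
X = np.stack([signal[:-2], signal[1:-1]], axis=1)
y = signal[2:]
pred = klms_online(X, y)
print("mean squared prediction error:", np.mean((pred - y) ** 2))
```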

  7. Relationship among performance, carcass, and feed efficiency characteristics, and their ability to predict economic value in the feedlot.

    Science.gov (United States)

    Retallick, K M; Faulkner, D B; Rodriguez-Zas, S L; Nkrumah, J D; Shike, D W

    2013-12-01

    A 4-yr study was conducted using 736 steers of known Angus, Simmental, or Simmental × Angus genetics to determine performance, carcass, and feed efficiency factors that explained variation in economic performance. Steers were pen fed and individual DMI was recorded using a GrowSafe automated feeding system (GrowSafe Systems Ltd., Airdrie, Alberta, Canada). Steers consumed a similar diet and received similar management each year. The objectives of this study were to: 1) determine the current economic value of feed efficiency and 2) identify performance, carcass, and feed efficiency characteristics that predict carcass value, profit, cost of gain, and feed costs. Economic data used were from 2011 values. Feed efficiency values investigated were: feed conversion ratio (FCR; feed to gain), residual feed intake (RFI), residual BW gain (RG), and residual intake and BW gain (RIG). Dependent variables were carcass value ($/steer), profit ($/steer), feed costs ($/steer per day), and cost of gain ($/kg). Independent variables were year, DMI, ADG, HCW, LM area, marbling, yield grade, dam breed, and sire breed. A 10% improvement in RG increased profit, and profit likewise increased with a 10% improvement in feed efficiency. Eighty-five percent of the variation in cost of gain was explained by ADG, DMI, HCW, and year. Prediction equations were developed that excluded ADG and DMI, and included feed efficiency values. Using these equations, cost of gain was explained primarily by FCR (R^2 = 0.71). Seventy-three percent of profitability was explained, with 55% being accounted for by RG and marbling. These prediction equations represent the relative importance of factors contributing to economic success in feedlot cattle based on current prices.

  8. An efficient numerical target strength prediction model: Validation against analysis solutions

    NARCIS (Netherlands)

    Fillinger, L.; Nijhof, M.J.J.; Jong, C.A.F. de

    2014-01-01

    A decade ago, TNO developed RASP (Rapid Acoustic Signature Prediction), a numerical model for the prediction of the target strength of immersed underwater objects. The model is based on Kirchhoff diffraction theory. It is currently being improved to model refraction, angle dependent reflection and

  9. Efficient prediction of human protein-protein interactions at a global scale.

    Science.gov (United States)

    Schoenrock, Andrew; Samanfar, Bahram; Pitre, Sylvain; Hooshyar, Mohsen; Jin, Ke; Phillips, Charles A; Wang, Hui; Phanse, Sadhna; Omidi, Katayoun; Gui, Yuan; Alamgir, Md; Wong, Alex; Barrenäs, Fredrik; Babu, Mohan; Benson, Mikael; Langston, Michael A; Green, James R; Dehne, Frank; Golshani, Ashkan

    2014-12-10

    Our knowledge of global protein-protein interaction (PPI) networks in complex organisms such as humans is hindered by technical limitations of current methods. On the basis of short co-occurring polypeptide regions, we developed a tool called MP-PIPE capable of predicting a global human PPI network within 3 months. With a recall of 23% at a precision of 82.1%, we predicted 172,132 putative PPIs. We demonstrate the usefulness of these predictions through a range of experiments. The speed and accuracy associated with MP-PIPE can make this a potential tool to study individual human PPI networks (from genomic sequences alone) for personalized medicine.

  10. Supplementary Material for: DASPfind: new efficient method to predict drug–target interactions

    KAUST Repository

    Ba Alawi, Wail; Soufan, Othman; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.

    2016-01-01

    Abstract Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational

  11. Human Factors of Automated Driving : Predicting the Effects of Authority Transitions on Traffic Flow Efficiency

    NARCIS (Netherlands)

    Varotto, S.F.; Hoogendoorn, R.G.; Van Arem, B.; Hoogendoorn, S.P.

    2014-01-01

    Automated driving potentially has a significant impact on traffic flow efficiency. Automated vehicles, which possess cooperative capabilities, are expected to reduce congestion levels for instance by increasing road capacity, by anticipating traffic conditions further downstream and also by

  12. Development of a prediction model for the cost saving potentials in implementing the building energy efficiency rating certification

    International Nuclear Information System (INIS)

    Jeong, Jaewook; Hong, Taehoon; Ji, Changyoon; Kim, Jimin; Lee, Minhyun; Jeong, Kwangbok; Koo, Choongwan

    2017-01-01

    Highlights: • This study evaluates the building energy efficiency rating (BEER) certification. • A prediction model was developed for the cost saving potentials of the BEER certification. • The prediction model was developed using LCC analysis, ROV, and Monte Carlo simulation. • The cost saving potential was predicted to be 2.78–3.77% of the construction cost. • The cost saving potential can be used for estimating the investment value of BEER. - Abstract: The building energy efficiency rating (BEER) certification is an energy performance certificate (EPC) in South Korea. It is critical to examine the cost saving potentials of the BEER-certification in advance. This study aimed to develop a prediction model for the cost saving potentials in implementing the BEER-certification, in which the cost saving potentials included the energy cost savings of the BEER-certification and the relevant CO_2 emissions reduction as well as the additional construction cost for the BEER-certification. The prediction model was developed using data mining, life cycle cost analysis, real option valuation, and Monte Carlo simulation. The database was established with 437 multi-family housing complexes (MFHCs), including 116 BEER-certified MFHCs and 321 non-certified MFHCs. A case study was conducted to validate the developed prediction model using the 321 non-certified MFHCs, considering a 20-year life cycle. As a result, compared to the additional construction cost, the average cost saving potentials of the 1st-BEER-certified MFHCs in Groups 1, 2, and 3 were predicted to be 3.77%, 2.78%, and 2.87%, respectively. The cost saving potentials can be used as a guideline for the additional construction cost of the BEER-certification in the early design phase.
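
    A minimal sketch of a Monte Carlo life-cycle-cost comparison of the kind the record describes; the distributions, discount rate and cost figures are assumptions for illustration, and the real option valuation step is not included.

```python
# Illustrative sketch of a Monte Carlo life-cycle-cost comparison. All distributions and
# figures (energy savings, CO2 value, discount rate, extra construction cost) are
# assumptions for illustration, not values from the study.
import numpy as np

rng = np.random.default_rng(42)
n_sim, years, discount = 10_000, 20, 0.03

annual_energy_saving = rng.normal(2_000, 400, n_sim)     # currency units per year
annual_co2_saving = rng.normal(300, 80, n_sim)           # value of CO2 reduction per year
extra_construction_cost = rng.normal(40_000, 5_000, n_sim)

# Present value of 20 years of savings minus the up-front certification cost.
pv_factor = sum(1 / (1 + discount) ** t for t in range(1, years + 1))
net_saving = (annual_energy_saving + annual_co2_saving) * pv_factor - extra_construction_cost

print("mean net saving:", net_saving.mean())
print("probability the certification pays off:", (net_saving > 0).mean())
```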

  13. A theoretical model for prediction of deposition efficiency in cold spraying

    International Nuclear Information System (INIS)

    Li Changjiu; Li Wenya; Wang Yuyue; Yang Guanjun; Fukanuma, H.

    2005-01-01

    The deposition behavior of a spray particle stream with a particle size distribution was theoretically examined for cold spraying in terms of deposition efficiency as a function of particle parameters and spray angle. A theoretical relation was established between the deposition efficiency and the spray angle. Experiments were conducted by measuring the deposition efficiency at different driving gas conditions and different spray angles using gas-atomized copper powder. It was found that the theoretically estimated results agreed reasonably well with the experimental ones. Based on the theoretical model and experimental results, it was revealed that the distribution of particle velocity resulting from the particle size distribution significantly influences the deposition efficiency in cold spraying. It is necessary for the majority of particles to achieve a velocity higher than the critical velocity in order to improve the deposition efficiency. The normal component of particle velocity contributes to the deposition of the particle under off-normal spray conditions. The deposition efficiency of sprayed particles decreased owing to the decrease of the normal velocity component when spraying was performed at an off-normal angle.
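
    A minimal sketch of the deposition-efficiency idea described above, assuming an illustrative particle velocity distribution and critical velocity: only particles whose velocity component normal to the substrate exceeds the critical velocity are counted as deposited.

```python
# Illustrative sketch: a particle deposits only if the component of its velocity normal
# to the substrate exceeds a critical velocity. The velocity distribution and the
# critical velocity are assumed values, not measurements from the study.
import numpy as np

rng = np.random.default_rng(7)

def deposition_efficiency(spray_angle_deg, v_mean=600.0, v_std=80.0,
                          v_crit=550.0, n_particles=100_000):
    """Fraction of particles whose normal velocity component exceeds v_crit.
    spray_angle_deg is measured from the substrate normal (0 = normal incidence)."""
    v = rng.normal(v_mean, v_std, n_particles)          # particle impact speeds, m/s
    v_normal = v * np.cos(np.radians(spray_angle_deg))  # normal component only
    return np.mean(v_normal >= v_crit)

for angle in (0, 10, 20, 30):
    print(f"spray angle {angle:2d} deg -> deposition efficiency {deposition_efficiency(angle):.2f}")
```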

  14. A Pedestrian Approach to Indoor Temperature Distribution Prediction of a Passive Solar Energy Efficient House

    Directory of Open Access Journals (Sweden)

    Golden Makaka

    2015-01-01

    With the increase in the energy consumed by buildings to keep the indoor environment within comfort levels, and the ever-increasing price of energy, there is a need to design buildings that require minimal energy to keep the indoor environment comfortable. There is also a need to predict the indoor temperature during the design stage. In this paper a statistical indoor temperature prediction model was developed. A passive solar house was constructed and its thermal behaviour was simulated using the ECOTECT and DOE computer software. The thermal behaviour of the house was monitored for a year. The indoor temperature was observed to be within the comfort level for 85% of the total time monitored. The simulation results were compared with the measured results and with those from the prediction model. The statistical prediction model was found to agree (95%) with the measured results. Simulation results were observed to agree (96%) with the statistical prediction model. The modelled indoor temperature was most sensitive to variations in outdoor temperature. The daily mean peaks were found to be more pronounced in summer (5%) than in winter (4%). The developed model can be used to predict the instantaneous indoor temperature for a specific house design.

  15. Background matching and camouflage efficiency predict population density in four-eyed turtle (Sacalia quadriocellata).

    Science.gov (United States)

    Xiao, Fanrong; Yang, Canchao; Shi, Haitao; Wang, Jichao; Sun, Liang; Lin, Liu

    2016-10-01

    Background matching is an important form of camouflage and is widespread among animals. In the field, however, few studies have addressed background matching, and camouflage efficiency has not previously been reported for freshwater turtles. Background matching and camouflage efficiency of the four-eyed turtle, Sacalia quadriocellata, among three microhabitat sections of Hezonggou stream were investigated by measuring carapace components in CIE L*a*b* (International Commission on Illumination; lightness, red/green and yellow/blue) color space, and by scoring camouflage efficiency using humans as predators. The results showed that the color difference (ΔE), lightness difference (ΔL*), and chroma difference (Δa*b*) between the carapace and the substrate background in midstream were significantly lower than those upstream and downstream, indicating that the four-eyed turtle carapace color most closely matched the midstream substrate. In line with these findings, camouflage efficiency was best for the turtles inhabiting the midstream section. These results suggest that four-eyed turtles may enhance camouflage efficiency by selecting the microhabitat that best matches their carapace color. This finding may explain the high population density of the four-eyed turtle in the midstream section of Hezonggou stream. To the best of our knowledge, this study is among the first to quantify camouflage of freshwater turtles in the wild, laying the groundwork for further study of the function and mechanisms of turtle camouflage. Copyright © 2016. Published by Elsevier B.V.
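
    A minimal sketch of the colour-difference (ΔE) computation underlying the background-matching analysis, using the CIE76 Euclidean distance in L*a*b* space; the carapace and substrate values are invented for illustration.

```python
# Illustrative sketch: colour difference between carapace and substrate in CIE L*a*b*
# space, computed as the Euclidean distance (CIE76 Delta E). The sample values are
# made up for illustration; the study's actual measurements are not reproduced here.
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two L*a*b* triplets."""
    lab1, lab2 = np.asarray(lab1, float), np.asarray(lab2, float)
    return float(np.linalg.norm(lab1 - lab2))

carapace = (32.0, 8.5, 14.0)          # hypothetical turtle carapace L*, a*, b*
substrates = {
    "upstream":   (55.0, 2.0, 10.0),
    "midstream":  (35.0, 7.0, 15.0),
    "downstream": (48.0, -1.0, 20.0),
}

for section, lab in substrates.items():
    print(f"{section:10s} Delta E = {delta_e_cie76(carapace, lab):.1f}")
# The smallest Delta E indicates the closest background match (here, midstream).
```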

  16. On the predictability of extreme events in records with linear and nonlinear long-range memory: Efficiency and noise robustness

    Science.gov (United States)

    Bogachev, Mikhail I.; Bunde, Armin

    2011-06-01

    We study the predictability of extreme events in records with linear and nonlinear long-range memory in the presence of additive white noise using two different approaches: (i) the precursory pattern recognition technique (PRT), which exploits solely the information about short-term precursors, and (ii) the return interval approach (RIA), which exploits the long-range memory incorporated in the elapsed time after the last extreme event. We find that the PRT always performs better when only linear memory is present. In the presence of nonlinear memory, both methods demonstrate comparable efficiency in the absence of white noise. When additional white noise is present in the record (which is the case in most observational records), the efficiency of the PRT decreases monotonically with increasing noise level. In contrast, the RIA shows an abrupt transition between a phase of low-level noise, where the prediction is as good as in the absence of noise, and a phase of high-level noise, where the prediction becomes poor. In the phase of low and intermediate noise the RIA predicts considerably better than the PRT, which explains our recent findings in physiological and financial records.

  17. Quantitative property-property relationship (QPPR) approach in predicting flotation efficiency of chelating agents as mineral collectors.

    Science.gov (United States)

    Natarajan, R; Nirdosh, I; Venuvanalingam, P; Ramalingam, M

    2002-07-01

    The QPPR approach has been used to model cupferrons as mineral collectors. Separation efficiencies (Es) of these chelating agents have been correlated with property parameters, namely log P, log Koc, the substituent constant sigma, and Mulliken and ESP-derived charges, using multiple regression analysis. The Es of substituted cupferrons in the flotation of a uranium ore could be predicted within experimental error from either log P or log Koc together with an electronic parameter. However, when a halo, methoxy or phenyl substituent was para to the chelating group, the experimental Es was greater than the predicted values. Inclusion of a Boolean-type indicator parameter significantly improved the predictive power. The approach has been extended to 2-aminothiophenols that were used to float a zinc ore, and the correlations were found to be reasonably good.

  18. Tailored high-resolution numerical weather forecasts for energy efficient predictive building control

    Science.gov (United States)

    Stauch, V. J.; Gwerder, M.; Gyalistras, D.; Oldewurtel, F.; Schubiger, F.; Steiner, P.

    2010-09-01

    The high proportion of the total primary energy consumption by buildings has increased the public interest in the optimisation of buildings' operation and is also driving the development of novel control approaches for the indoor climate. In this context, the use of weather forecasts presents an interesting and - thanks to advances in information and predictive control technologies and the continuous improvement of numerical weather prediction (NWP) models - an increasingly attractive option for improved building control. Within the research project OptiControl (www.opticontrol.ethz.ch) predictive control strategies for a wide range of buildings, heating, ventilation and air conditioning (HVAC) systems, and representative locations in Europe are being investigated with the aid of newly developed modelling and simulation tools. Grid point predictions for radiation, temperature and humidity of the high-resolution limited area NWP model COSMO-7 (see www.cosmo-model.org) and local measurements are used as disturbances and inputs into the building system. The control task considered consists in minimizing energy consumption whilst maintaining occupant comfort. In this presentation, we use the simulation-based OptiControl methodology to investigate the impact of COSMO-7 forecasts on the performance of predictive building control and the resulting energy savings. For this, we have selected building cases that were shown to benefit from a prediction horizon of up to 3 days and therefore, are particularly suitable for the use of numerical weather forecasts. We show that the controller performance is sensitive to the quality of the weather predictions, most importantly of the incident radiation on differently oriented façades. However, radiation is characterised by a high temporal and spatial variability in part caused by small scale and fast changing cloud formation and dissolution processes being only partially represented in the COSMO-7 grid point predictions. On the

  19. STUDY OF SOLUTION REPRESENTATION LANGUAGE INFLUENCE ON EFFICIENCY OF INTEGER SEQUENCES PREDICTION

    Directory of Open Access Journals (Sweden)

    A. S. Potapov

    2015-01-01

    Methods based on genetic programming for the extrapolation of integer sequences are studied in this paper. In order to test the hypothesis that the expressiveness of the program representation language influences prediction effectiveness, a genetic programming method based on several restricted languages for recurrent sequences was developed. On a sample of single sequences, the implemented method using the more complete language showed results significantly better than those of a current method from the literature based on artificial neural networks. Analysis of the experimental comparison of the method across different languages showed that extending the language makes it harder to find regularities that are already expressible in a simpler language, although it makes new classes of sequences accessible for prediction. This effect can be reduced, but not eliminated completely, by extending the language with constructions that make solutions more compact. The research leads to the conclusion that the choice of an adequate solution representation language alone is not enough to fully solve the problem of integer sequence prediction (and, all the more, the universal prediction problem). However, practically applicable methods can be obtained using genetic programming.

  20. More Gamma More Predictions: Gamma-Synchronization as a Key Mechanism for Efficient Integration of Classical Receptive Field Inputs with Surround Predictions

    Science.gov (United States)

    Vinck, Martin; Bosman, Conrado A.

    2016-01-01

    During visual stimulation, neurons in visual cortex often exhibit rhythmic and synchronous firing in the gamma-frequency (30–90 Hz) band. Whether this phenomenon plays a functional role during visual processing is not fully clear and remains heavily debated. In this article, we explore the function of gamma-synchronization in the context of predictive and efficient coding theories. These theories hold that sensory neurons utilize the statistical regularities in the natural world in order to improve the efficiency of the neural code, and to optimize the inference of the stimulus causes of the sensory data. In visual cortex, this relies on the integration of classical receptive field (CRF) data with predictions from the surround. Here we outline two main hypotheses about gamma-synchronization in visual cortex. First, we hypothesize that the precision of gamma-synchronization reflects the extent to which CRF data can be accurately predicted by the surround. Second, we hypothesize that different cortical columns synchronize to the extent that they accurately predict each other’s CRF visual input. We argue that these two hypotheses can account for a large number of empirical observations made on the stimulus dependencies of gamma-synchronization. Furthermore, we show that they are consistent with the known laminar dependencies of gamma-synchronization and the spatial profile of intercolumnar gamma-synchronization, as well as the dependence of gamma-synchronization on experience and development. Based on our two main hypotheses, we outline two additional hypotheses. First, we hypothesize that the precision of gamma-synchronization shows, in general, a negative dependence on RF size. In support, we review evidence showing that gamma-synchronization decreases in strength along the visual hierarchy, and tends to be more prominent in species with small V1 RFs. Second, we hypothesize that gamma-synchronized network dynamics facilitate the emergence of spiking output that

  1. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  2. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

    A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...

  3. Efficient Implementation of Solvers for Linear Model Predictive Control on Embedded Devices

    DEFF Research Database (Denmark)

    Frison, Gianluca; Kwame Minde Kufoalor, D.; Imsland, Lars

    2014-01-01

    This paper proposes a novel approach for the efficient implementation of solvers for linear MPC on embedded devices. The main focus is to explain in detail the approach used to optimize the linear algebra for selected low-power embedded devices, and to show how the high-performance implementation...

  4. Genetic parameters and predicted selection results for maternal traits related to lactation efficiency in sows

    NARCIS (Netherlands)

    Bergsma, R.; Kanis, E.; Verstegen, M.W.A.

    2008-01-01

    The increased productivity of sows increases the risk of a more pronounced negative energy balance during lactation. One possibility to prevent this is to increase the lactation efficiency (LE) genetically and thereby increase milk output for a given feed intake and mobilization of body tissue. The

  5. Efficient network disintegration under incomplete information: the comic effect of link prediction

    Science.gov (United States)

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-01-01

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments on both synthetic and real networks suggest that, by using link prediction to recover part of the missing links in advance, the method can largely improve network disintegration performance. Moreover, to our surprise, we find that when the amount of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the “comic effect” of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized. PMID:26960247

  6. Introducing etch kernels for efficient pattern sampling and etch bias prediction

    Science.gov (United States)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka

    2018-01-01

    Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, purely empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to get a robust etch model. This work proposes to define a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to represent the finest details of the resist geometry and to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch bias derived from scanning electron microscope contours to train artificial neural networks to predict etch bias. The method, applied to contact and line/space layers, shows an improvement in etch model prediction accuracy over the standard etch model. This work emphasizes the importance of the etch kernel definition to characterize and predict complex etch effects.
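
    A minimal sketch of the final step described above, assuming synthetic stand-ins for the kernel evaluations and SEM-derived etch biases: a small neural network regressor is trained to map kernel signatures to an etch bias.

```python
# Illustrative sketch: train a small neural network to map etch-kernel evaluations to an
# etch bias. The feature columns, data and network size are assumptions; real kernel
# signatures and SEM-derived biases would be used in practice.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 500
# Columns stand in for kernel evaluations: internal, external, curvature, Gaussian, z_profile.
X = rng.normal(size=(n, 5))
etch_bias_nm = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.2, n)

X_train, X_test, y_train, y_test = train_test_split(X, etch_bias_nm, random_state=0)
scaler = StandardScaler().fit(X_train)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("R^2 on held-out structures:", model.score(scaler.transform(X_test), y_test))
```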

  7. Tidal influence on offshore wind fields and resource predictions [Efficient Development of Offshore Windfarms]

    Energy Technology Data Exchange (ETDEWEB)

    Khan, D. [Entec UK Ltd., Doherty Innovation Centre, Penicuik (United Kingdom); Infield, D. [Loughborough Univ., Centre for Renewable Energy Systems Tecnology, Loughborough (United Kingdom)

    2002-03-01

    The rise and fall of the sea surface due to tides effectively moves an offshore wind turbine hub through the wind shear profile. This effect is quantified using measured data from 3 offshore UK sites. Statistical evidence of the influence of tide on mean wind speed and turbulence is presented. The implications of this effect for predicting offshore wind resource are outlined. (au)

  8. Efficient network disintegration under incomplete information: the comic effect of link prediction

    Science.gov (United States)

    Tan, Suo-Yi; Wu, Jun; Lü, Linyuan; Li, Meng-Jun; Lu, Xin

    2016-03-01

    The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments on both synthetic and real networks suggest that, by using link prediction to recover part of the missing links in advance, the method can largely improve network disintegration performance. Moreover, to our surprise, we find that when the amount of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the “comic effect” of link prediction, which means that the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, where the important parts are emphasized.

  9. Efficient CRISPR/Cas9-Mediated Versatile, Predictable, and Donor-Free Gene Knockout in Human Pluripotent Stem Cells.

    Science.gov (United States)

    Liu, Zhongliang; Hui, Yi; Shi, Lei; Chen, Zhenyu; Xu, Xiangjie; Chi, Liankai; Fan, Beibei; Fang, Yujiang; Liu, Yang; Ma, Lin; Wang, Yiran; Xiao, Lei; Zhang, Quanbin; Jin, Guohua; Liu, Ling; Zhang, Xiaoqing

    2016-09-13

    Loss-of-function studies in human pluripotent stem cells (hPSCs) require efficient methodologies for lesion of genes of interest. Here, we introduce a donor-free paired gRNA-guided CRISPR/Cas9 knockout strategy (paired-KO) for efficient and rapid gene ablation in hPSCs. Through paired-KO, we succeeded in targeting all genes of interest with high biallelic targeting efficiencies. More importantly, during paired-KO the cleaved DNA was repaired mostly through direct end joining without insertions/deletions (precise ligation), which makes the lesion product predictable. Paired-KO remained highly efficient for one-step targeting of multiple genes and was also efficient for targeting of microRNAs, while for long non-coding RNAs over 8 kb, cleavage of a short fragment of the core promoter region was sufficient to eradicate downstream gene transcription. This work suggests that the paired-KO strategy is a simple and robust system for loss-of-function studies of both coding and non-coding genes in hPSCs. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  10. Predicting the oral uptake efficiency of chemicals in mammals: Combining the hydrophilic and lipophilic range

    Energy Technology Data Exchange (ETDEWEB)

    O' Connor, Isabel A., E-mail: i.oconnor@science.ru.nl [Radboud University Nijmegen, Institute for Water and Wetland Research, Department of Environmental Science, P.O. Box 9010, NL-6500 GL, Nijmegen (Netherlands); Huijbregts, Mark A.J., E-mail: m.huijbregts@science.ru.nl [Radboud University Nijmegen, Institute for Water and Wetland Research, Department of Environmental Science, P.O. Box 9010, NL-6500 GL, Nijmegen (Netherlands); Ragas, Ad M.J., E-mail: a.ragas@science.ru.nl [Radboud University Nijmegen, Institute for Water and Wetland Research, Department of Environmental Science, P.O. Box 9010, NL-6500 GL, Nijmegen (Netherlands); Open University, School of Science, P.O. Box 2960,6401 DL Heerlen (Netherlands); Hendriks, A. Jan, E-mail: a.j.hendriks@science.ru.nl [Radboud University Nijmegen, Institute for Water and Wetland Research, Department of Environmental Science, P.O. Box 9010, NL-6500 GL, Nijmegen (Netherlands)

    2013-01-01

    Environmental risk assessment requires models for estimating the bioaccumulation of untested compounds. So far, bioaccumulation models have focused on lipophilic compounds, and only a few have included hydrophilic compounds. Our aim was to extend an existing bioaccumulation model to estimate the oral uptake efficiency of pollutants in mammals for compounds over a wide K_ow range, with an emphasis on hydrophilic compounds, i.e. compounds in the lower K_ow range. Usually, most models use octanol as a single surrogate for the membrane and thus neglect the bilayer structure of the membrane. However, compounds with polar groups can have different affinities for the different membrane regions. Therefore, an existing bioaccumulation model was extended by dividing the diffusion resistance through the membrane into an outer and an inner membrane resistance, where the solvents octanol and heptane were used as surrogates for these membrane regions, respectively. The model was calibrated with uptake efficiencies of environmental pollutants measured in different mammals during feeding studies, combined with human oral uptake efficiencies of pharmaceuticals. The new model estimated the uptake efficiency of neutral (RMSE = 14.6) and dissociating (RMSE = 19.5) compounds with log K_ow ranging from −10 to +8. The inclusion of K_hw improved uptake estimation for 33% of the hydrophilic compounds (log K_ow < 0) (r^2 = 0.51, RMSE = 22.8) compared with the model based on K_ow only (r^2 = 0.05, RMSE = 34.9), while hydrophobic compounds (log K_ow > 0) were estimated equally well by both model versions, with RMSE = 15.2 (K_ow and K_hw) and RMSE = 15.7 (K_ow only). The model can be used to estimate the oral uptake efficiency for both hydrophilic and hydrophobic compounds. -- Highlights: ► A mechanistic model was developed to estimate oral uptake efficiency. ► Model covers a wide log K_ow range (−10 to +8) and several mammalian

  11. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    Science.gov (United States)

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed the accuracy of the cell processor's predicted platelet (PLT) yields with the goal of better predicting DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, the software-predicted yield and the actual PLT yield were statistically evaluated. The software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP was assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions the donors were men and on 31 (10.3%) occasions women. Pre-donation PLT count had the best direct correlation with the actual PLT yield (r = 0.486). A simple correction derived from the linear regression analysis accurately corrected the underestimation in the software-predicted yield, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
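
    A minimal sketch of the two statistical steps described above, on synthetic data: a linear regression correction of the software-predicted yield, followed by a ROC-derived cut-off for predicting a double product; the DP threshold used here is an assumption.

```python
# Illustrative sketch: (1) a linear regression that maps the software-predicted yield to
# the actual yield, and (2) a ROC-derived cut-off on the corrected prediction for
# obtaining a double product (DP). Data and the DP threshold are synthetic/assumed.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
n = 302
predicted = rng.normal(6.0, 1.2, n)                      # software-predicted yield (x10^11)
actual = 0.9 * predicted + 1.0 + rng.normal(0, 0.5, n)   # actual yield, systematically offset

# Step 1: correct the software prediction with a linear regression.
reg = LinearRegression().fit(predicted.reshape(-1, 1), actual)
corrected = reg.predict(predicted.reshape(-1, 1))

# Step 2: find the cut-off on the corrected prediction that best identifies a DP.
is_dp = (actual >= 6.0).astype(int)                      # assumed DP threshold
fpr, tpr, thresholds = roc_curve(is_dp, corrected)
best = np.argmax(tpr - fpr)                              # Youden's J statistic
print("AUC:", round(roc_auc_score(is_dp, corrected), 3),
      "optimal cut-off:", round(thresholds[best], 2))
```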

  12. A machine learning approach for predicting CRISPR-Cas9 cleavage efficiencies and patterns underlying its mechanism of action.

    Science.gov (United States)

    Abadi, Shiran; Yan, Winston X; Amar, David; Mayrose, Itay

    2017-10-01

    The adaptation of the CRISPR-Cas9 system as a genome editing technique has generated much excitement in recent years owing to its ability to manipulate targeted genes and genomic regions that are complementary to a programmed single guide RNA (sgRNA). However, the efficacy of a specific sgRNA is not uniquely defined by exact sequence homology to the target site, thus unintended off-targets might additionally be cleaved. Current methods for sgRNA design are mainly concerned with predicting off-targets for a given sgRNA using basic sequence features and employ elementary rules for ranking possible sgRNAs. Here, we introduce CRISTA (CRISPR Target Assessment), a novel algorithm within the machine learning framework that determines the propensity of a genomic site to be cleaved by a given sgRNA. We show that the predictions made with CRISTA are more accurate than other available methodologies. We further demonstrate that the occurrence of bulges is not a rare phenomenon and should be accounted for in the prediction process. Beyond predicting cleavage efficiencies, the learning process provides inferences regarding patterns that underlie the mechanism of action of the CRISPR-Cas9 system. We discover that attributes that describe the spatial structure and rigidity of the entire genomic site as well as those surrounding the PAM region are a major component of the prediction capabilities.

  13. A machine learning approach for predicting CRISPR-Cas9 cleavage efficiencies and patterns underlying its mechanism of action.

    Directory of Open Access Journals (Sweden)

    Shiran Abadi

    2017-10-01

    The adaptation of the CRISPR-Cas9 system as a genome editing technique has generated much excitement in recent years owing to its ability to manipulate targeted genes and genomic regions that are complementary to a programmed single guide RNA (sgRNA). However, the efficacy of a specific sgRNA is not uniquely defined by exact sequence homology to the target site, thus unintended off-targets might additionally be cleaved. Current methods for sgRNA design are mainly concerned with predicting off-targets for a given sgRNA using basic sequence features and employ elementary rules for ranking possible sgRNAs. Here, we introduce CRISTA (CRISPR Target Assessment), a novel algorithm within the machine learning framework that determines the propensity of a genomic site to be cleaved by a given sgRNA. We show that the predictions made with CRISTA are more accurate than other available methodologies. We further demonstrate that the occurrence of bulges is not a rare phenomenon and should be accounted for in the prediction process. Beyond predicting cleavage efficiencies, the learning process provides inferences regarding patterns that underlie the mechanism of action of the CRISPR-Cas9 system. We discover that attributes that describe the spatial structure and rigidity of the entire genomic site as well as those surrounding the PAM region are a major component of the prediction capabilities.

  14. DDR: Efficient computational method to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-11-23

    Motivation: Finding drug-target interactions (DTIs) computationally is a convenient strategy to identify new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from a high false-positive prediction rate. Results: We developed DDR, a novel method that improves the DTI prediction accuracy. DDR is based on the use of a heterogeneous graph that contains known DTIs with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine different similarities. Before fusion, DDR performs a pre-processing step in which a subset of similarities is selected in a heuristic process to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using five repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR score error relative to the next-best state-of-the-art method for predicting DTIs by 34% when the drugs are new, by 23% when the targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 out of the top 25 novel DDR predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs.
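
    A minimal sketch of the supervised step described above, assuming synthetic stand-ins for the graph-based features of drug-target pairs; the heterogeneous-graph construction and similarity fusion of DDR are not reproduced.

```python
# Illustrative sketch: a random forest trained on graph-based features of (drug, target)
# pairs to classify interactions. Feature construction from a real heterogeneous DTI
# graph is not shown; the features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n_pairs, n_features = 2000, 12          # e.g. path counts, fused similarity scores, degrees
X = rng.random((n_pairs, n_features))
# Synthetic labels: interaction more likely when a few "path-based" features are large.
y = (X[:, 0] + X[:, 3] + 0.3 * rng.normal(size=n_pairs) > 1.1).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="average_precision")  # AUPR-like metric
print("mean AUPR over 10-fold CV:", scores.mean().round(3))
```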

  15. Prediction of the new efficient permanent magnet SmCoNiFe3

    Science.gov (United States)

    Söderlind, P.; Landa, A.; Locht, I. L. M.; Åberg, D.; Kvashnin, Y.; Pereiro, M.; Däne, M.; Turchi, P. E. A.; Antropov, V. P.; Eriksson, O.

    2017-09-01

    We propose a new efficient permanent magnet, SmCoNiFe3, which is a development of the well-known SmCo5 prototype. More modern neodymium magnets of the Nd-Fe-B type have an advantage over SmCo5 because of their greater maximum energy products due to their iron-rich stoichiometry. Our new magnet, however, removes most of this disadvantage of SmCo5 while preserving its superior high-temperature efficiency over the neodymium magnets. We show by means of first-principles electronic-structure calculations that SmCoNiFe3 has very favorable magnetic properties and could therefore potentially replace SmCo5 or Nd-Fe-B types in various applications.

  16. Spatial extrapolation of light use efficiency model parameters to predict gross primary production

    Directory of Open Access Journals (Sweden)

    Karsten Schulz

    2011-12-01

    To capture the spatial and temporal variability of gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has been shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land-class-dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained from meteorological measurements, ecological estimates and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapturing the variability of gross primary production across the study sites.
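
    A minimal sketch of the extrapolation scheme described above, on synthetic site data: cross-validated feature selection followed by support vector regression of a light-use-efficiency parameter; the attribute names and the parameter are placeholders.

```python
# Illustrative sketch: select relevant site characteristics by cross-validation and
# regress a light-use-efficiency model parameter on them with support vector regression.
# Site data are synthetic; "max_lue" and the listed characteristics are placeholders.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(21)
n_sites = 80
# Columns stand in for site characteristics: mean temperature, precipitation, LAI, elevation, ...
site_features = rng.normal(size=(n_sites, 6))
noise = 0.05 * rng.normal(size=n_sites)
max_lue = 1.2 + 0.3 * site_features[:, 0] - 0.2 * site_features[:, 2] + noise

# Feature selection with a linear SVR (RFECV needs coefficients), then a final RBF-SVR fit.
selector = RFECV(SVR(kernel="linear"), cv=KFold(5, shuffle=True, random_state=0))
selector.fit(site_features, max_lue)
selected = site_features[:, selector.support_]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(selected, max_lue)
print("selected features:", selector.support_)
print("R^2:", round(model.score(selected, max_lue), 3))
```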

  17. [Prediction of the efficiency of endoscopic lung volume reduction by valves in severe emphysema].

    Science.gov (United States)

    Bocquillon, V; Briault, A; Reymond, E; Arbib, F; Jankowski, A; Ferretti, G; Pison, C

    2016-11-01

    In severe emphysema, endoscopic lung volume reduction with valves is an alternative to surgery with less morbidity and mortality. In 2015, selection of patients who will respond to this technique is based on emphysema heterogeneity, a complete fissure visible on the CT-scan and absence of collateral ventilation between lobes. Our case report highlights that individualized prediction is possible. A 58-year-old woman had severe, disabling pulmonary emphysema. A high resolution thoracic computed tomography scan showed that the emphysema was heterogeneous, predominantly in the upper lobes, integrity of the left greater fissure and no collateral ventilation with the left lower lobe. A valve was inserted in the left upper lobe bronchus. At one year, clinical and functional benefits were significant with complete atelectasis of the treated lobe. The success of endoscopic lung volume reduction with a valve can be predicted, an example of personalized medicine. Copyright © 2016 SPLF. Published by Elsevier Masson SAS. All rights reserved.

  18. Model predictive control technologies for efficient and flexible power consumption in refrigeration systems

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Edlund, Kristian

    2012-01-01

    In this paper we describe a novel economic-optimizing Model Predictive Control (MPC) scheme that reduces operating costs by utilizing the thermal storage capabilities. A nonlinear optimization tool to handle a non-convex cost function is utilized for simulations with validated scenarios. In this way we explicitly address the advantages arising from daily variations in outdoor temperature and electricity prices. Secondly, we formulate a new cost function that enables the refrigeration system to contribute ancillary services to the balancing power market. This involvement can be economically beneficial ... of the system models allows us to describe and handle model as well as prediction uncertainties in this framework. This means we can demonstrate means for robustifying the performance of the controller.

  19. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    Science.gov (United States)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

    Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown in a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM, considering its internal functioning as well as forecasts for load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown, and future actions for application of these methods to system control are addressed.

  20. Efficient Multivariable Generalized Predictive Control for Autonomous Underwater Vehicle in Vertical Plane

    OpenAIRE

    Yao, Xuliang; Yang, Guangyi

    2016-01-01

    This paper presents the design and simulation validation of a multivariable GPC (generalized predictive control) scheme for an AUV (autonomous underwater vehicle) in the vertical plane. The control approach has been designed for the case of an AUV navigating at low speed near the water surface, in order to restrain wave disturbance effectively and improve pitch and heave motion stability. The proposed controller guarantees compliance with rudder manipulation, AUV output constraints, and driving energy consumption ...

  1. Efficient rolling texture predictions and texture-sensitive thermomechanical properties of α-uranium foils

    Science.gov (United States)

    Steiner, Matthew A.; Klein, Robert W.; Calhoun, Christopher A.; Knezevic, Marko; Garlea, Elena; Agnew, Sean R.

    2017-11-01

    Finite element (FE) analysis was used to simulate the strain history of an α-uranium foil during cold straight-rolling, with the sheet modeled as an isotropic elastoplastic continuum. The resulting strain history was then used as input for a viscoplastic self-consistent (VPSC) polycrystal plasticity model to simulate crystallographic texture evolution. Mid-plane textures predicted via the combined FE→VPSC approach show alignment of the (010) poles along the rolling direction (RD), and the (001) poles along the normal direction (ND) with a symmetric splitting along RD. The surface texture is similar to that of the mid-plane, but with a shear-induced asymmetry that favors one of the RD split features of the (001) pole figure. Both the mid-plane and surface textures predicted by the FE→VPSC approach agree with published experimental results for cold straight-rolled α-uranium plates, as well as predictions made by a more computationally intensive full-field crystal plasticity based finite element model. α-uranium foils produced by cold-rolling must typically undergo a recrystallization anneal to restore ductility prior to their final application, resulting in significant texture evolution from the cold-rolled plate deformation texture. Using the texture measured from a foil in the final recrystallized state, coefficients of thermal expansion and the elastic stiffness tensors were calculated using a thermo-elastic self-consistent model, and the anisotropic yield loci and flow curves along the RD, TD, and ND were predicted using the VPSC code.

  2. Efficient Prediction of Progesterone Receptor Interactome Using a Support Vector Machine Model

    Directory of Open Access Journals (Sweden)

    Ji-Long Liu

    2015-03-01

    Protein-protein interaction (PPI) is essential for almost all cellular processes and identification of PPIs is a crucial task for biomedical researchers. So far, most computational studies of PPI are intended for pair-wise prediction. Theoretically, predicting protein partners for a single protein is likely a simpler problem. Given enough data for a particular protein, the results can be more accurate than those of general PPI predictors. In the present study, we assessed the potential of using a support vector machine (SVM) model with selected features centered on a particular protein for PPI prediction. As a proof-of-concept study, we applied this method to identify the interactome of the progesterone receptor (PR), a protein which is essential for coordinating female reproduction in mammals by mediating the actions of ovarian progesterone. We achieved an accuracy of 91.9%, sensitivity of 92.8% and specificity of 91.2%. Our method is generally applicable to any other protein and therefore may be of help in guiding biomedical experiments.
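
    A minimal sketch of a protein-centred SVM classifier of the kind described above, on synthetic features of candidate partner proteins; feature selection and the actual PR-specific features are not reproduced.

```python
# Illustrative sketch: features describe candidate partner proteins and the label is
# whether they interact with the protein of interest. Features and data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(8)
n_candidates, n_features = 1000, 20      # e.g. sequence/domain/expression-derived features
X = rng.normal(size=(n_candidates, n_features))
y = (X[:, 0] + 0.8 * X[:, 5] + 0.4 * rng.normal(size=n_candidates) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("accuracy   :", round((tp + tn) / len(y_te), 3))
print("sensitivity:", round(tp / (tp + fn), 3))
print("specificity:", round(tn / (tn + fp), 3))
```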

  3. Efficient Prediction of Low-Visibility Events at Airports Using Machine-Learning Regression

    Science.gov (United States)

    Cornejo-Bueno, L.; Casanova-Mateo, C.; Sanz-Justo, J.; Cerro-Prada, E.; Salcedo-Sanz, S.

    2017-11-01

    We address the prediction of low-visibility events at airports using machine-learning regression. The proposed model successfully forecasts low-visibility events in terms of the runway visual range at the airport, with the use of support-vector regression, neural networks (multi-layer perceptrons and extreme-learning machines) and Gaussian-process algorithms. We assess the performance of these algorithms based on real data collected at the Valladolid airport, Spain. We also propose a study of the atmospheric variables measured at a nearby tower related to low-visibility atmospheric conditions, since they are considered as the inputs of the different regressors. A pre-processing procedure of these input variables with wavelet transforms is also described. The results show that the proposed machine-learning algorithms are able to predict low-visibility events well. The Gaussian process is the best algorithm among those analyzed, obtaining over 98% correct classification of low-visibility events when the runway visual range is above 1000 m, and about 80% below this threshold. The performance of all the machine-learning algorithms tested is clearly affected in extreme low-visibility conditions. Algorithm performance in daytime and nighttime conditions, and for different prediction time horizons, is also analyzed.
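
    As an illustration of the regression comparison described above, the following hedged sketch fits three of the model families named in the record (support-vector regression, a multi-layer perceptron and a Gaussian process; the extreme-learning machine is omitted) on synthetic stand-ins for tower measurements and runway visual range. All variable names and data are illustrative assumptions.

    ```python
    # Hedged sketch: compare three regressor families on placeholder data
    # (X = tower measurements, y = runway visual range in metres).
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 6))                        # e.g. wind, humidity, temperature ...
    y = 1000 + 300 * X[:, 0] - 200 * X[:, 1] + rng.normal(scale=50, size=500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
    models = {
        "SVR": SVR(C=10.0),
        "MLP": MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=1),
        "GP": GaussianProcessRegressor(),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, mean_absolute_error(y_te, model.predict(X_te)))
    ```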

  4. An efficient model for predicting mixing lengths in serial pumping of petroleum products

    Energy Technology Data Exchange (ETDEWEB)

    Baptista, Renan Martins [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas. Div. de Explotacao]. E-mail: renan@cenpes.petrobras.com.br; Rachid, Felipe Bastos de Freitas [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Engenharia Mecanica]. E-mail: rachid@mec.uff.br; Araujo, Jose Henrique Carneiro de [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Ciencia da Computacao]. E-mail: jhca@dcc.ic.uff.br

    2000-07-01

    This paper presents a new model for estimating the mixing volumes which arise in batch transfers in multi-product pipelines. The novel features of the model are the incorporation of the flow rate variation with time and the use of a more precise effective dispersion coefficient, which is considered to depend on the concentration. The governing equation of the model forms a nonlinear initial value problem that is solved by using a predictor-corrector finite difference method. A comparison among the theoretical predictions of the proposed model, a field test and other classical procedures shows that it exhibits the best estimate over the whole range of admissible concentrations investigated. (author)
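
    The numerical ingredient described above, a predictor-corrector scheme for a dispersion equation with a concentration-dependent coefficient, can be sketched as follows. This is a generic Heun-type explicit scheme for a 1-D advection-dispersion equation with placeholder u(t) and D(C) laws; it is not the authors' calibrated field model.

    ```python
    # Hedged sketch: explicit predictor-corrector (Heun) step for
    # dC/dt + u(t) dC/dx = d/dx( D(C) dC/dx ), with placeholder u(t), D(C).
    import numpy as np

    nx, L = 200, 1000.0                  # grid points, pipe segment length (m)
    dx = L / (nx - 1)
    dt = 0.05                            # time step (s), small for stability
    x = np.linspace(0.0, L, nx)
    C = np.where(x < L / 2, 1.0, 0.0)    # initial step between two products

    def D(C):                            # hypothetical concentration-dependent dispersion
        return 0.5 + 2.0 * C * (1.0 - C)

    def u(t):                            # hypothetical time-varying mean velocity
        return 1.0 + 0.2 * np.sin(0.01 * t)

    def rhs(C, t):
        dCdx = np.gradient(C, dx)
        return -u(t) * dCdx + np.gradient(D(C) * dCdx, dx)

    t = 0.0
    for _ in range(2000):
        k1 = rhs(C, t)                   # predictor (forward Euler)
        k2 = rhs(C + dt * k1, t + dt)    # corrector
        C = C + 0.5 * dt * (k1 + k2)
        t += dt

    width = dx * np.count_nonzero((C > 0.05) & (C < 0.95))
    print("mixing-zone width (0.05 < C < 0.95):", round(width, 1), "m")
    ```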

  5. Neural and hybrid modeling: an alternative route to efficiently predict the behavior of biotechnological processes aimed at biofuels obtainment.

    Science.gov (United States)

    Curcio, Stefano; Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele

    2014-01-01

    The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step for the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of systems behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to be achieved.

  6. Neural and Hybrid Modeling: An Alternative Route to Efficiently Predict the Behavior of Biotechnological Processes Aimed at Biofuels Obtainment

    Directory of Open Access Journals (Sweden)

    Stefano Curcio

    2014-01-01

    The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step for the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of systems behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to be achieved.

  7. A new approach for the prediction of thermal efficiency in solar receivers

    International Nuclear Information System (INIS)

    Barbero, Rubén; Rovira, Antonio; Montes, María José; Martínez Val, José María

    2016-01-01

    Highlights: • A new model for the thermal efficiency calculation of solar collectors is developed. • It is derived from the complete differential equation for any technology. • It accurately captures the results of numerical models while avoiding iterative calculations. • Two new critical parameters are defined to be considered in design. • Some relevant aspects for design arise from its application to PTC. - Abstract: Optimization of solar concentration receiver designs requires models that characterize the thermal balance at the receiver wall. This problem depends on external heat transfer coefficients that are a function of the third power of the temperature at the absorber wall. This nonlinearity makes it difficult to obtain analytical solutions for the balance differential equations, so several current approximations treat these heat transfer coefficients as constant or assume a linear dependence. These hypotheses impose an important limitation on their applicability. This paper describes a new approach that allows the use of an analytical expression obtained from the heat balance differential equation. Two simplifications based on this model can be made in order to obtain other, much simpler equations that adequately characterize collector performance for the majority of solar technologies. These new equations allow the explicit calculation of the efficiency as a function of some characteristic parameters of the receiver. This explicit calculation introduces advantages into the receiver optimization process because iteration is avoided during the calculations. Validation of the proposed models was performed using the experimental measurements reported by Sandia National Laboratories (SNL) for the trough collector design LS-2.

  8. The problems and solutions of predicting participation in energy efficiency programs

    International Nuclear Information System (INIS)

    Davis, Alexander L.; Krishnamurti, Tamar

    2013-01-01

    Highlights: • Energy efficiency pilot studies suffer from severe volunteer bias. • We formulate an approach for accommodating volunteer bias. • A short questionnaire and classification trees can control for the bias. - Abstract: This paper discusses volunteer bias in residential energy efficiency studies. We briefly evaluate the bias in existing studies. We then show how volunteer bias can be corrected when not avoidable, using an on-line study of intentions to enroll in an in-home display trial as an example. We found that the best predictor of intentions to enroll was expected benefit from the in-home display. Constraints on participation, such as time in the home and trust in scientists, were also associated with enrollment intentions. Using Breiman’s classification tree algorithm we found that the best model of intentions to enroll contained only five variables: expected enjoyment of the program, presence in the home during morning hours, trust (in friends and in scientists), and perceived ability to handle unexpected problems. These results suggest that a short questionnaire, that takes at most 1 min to complete, would allow better control of volunteer bias than a more extensive questionnaire. This paper should allow researchers who employ field studies involving human behavior to be better equipped to address volunteer bias
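
    A hedged sketch of the kind of classification tree described above, restricted to the five reported predictors, is shown below. The Likert-scale responses and enrollment labels are synthetic placeholders, and the tree printed by `export_text` merely stands in for the published model.

    ```python
    # Hedged sketch: a shallow classification tree over five questionnaire items
    # (expected enjoyment, morning presence at home, trust in friends, trust in
    # scientists, perceived ability to handle problems). Data are synthetic.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(2)
    cols = ["enjoyment", "home_morning", "trust_friends", "trust_scientists", "self_efficacy"]
    X = rng.integers(1, 6, size=(300, 5)).astype(float)                 # 1-5 Likert answers
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 5.5).astype(int)  # intends to enroll?

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=cols))
    ```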

  9. Kernel based machine learning algorithm for the efficient prediction of type III polyketide synthase family of proteins

    Directory of Open Access Journals (Sweden)

    Mallika V

    2010-03-01

    Type III polyketide synthases (PKS) are a family of proteins considered to have a significant role in the biosynthesis of various polyketides in plants, fungi and bacteria. As these proteins show positive effects on human health, they are the subject of ongoing research. A tool that estimates the probability that a given sequence is a type III polyketide synthase will reduce time and manpower requirements. In this approach, we have designed and implemented PKSIIIpred, a high-performance prediction server for type III PKS in which the classifier is a Support Vector Machine (SVM). Based on the limited training dataset, the tool efficiently predicts the type III PKS superfamily of proteins with high sensitivity and specificity. PKSIIIpred is available at http://type3pks.in/prediction/. We expect that this tool may serve as a useful resource for type III PKS researchers. Work is currently in progress to further improve prediction accuracy by including more sequence features in the training dataset.

  10. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as from parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  11. Experimental and Numerical Simulations Predictions Comparison of Power and Efficiency in Hydraulic Turbine

    Directory of Open Access Journals (Sweden)

    Laura Castro

    2011-01-01

    On-site power and mass flow rate measurements were conducted in a hydroelectric power plant (Mexico). The mass flow rate was obtained using Gibson's water-hammer-based method. A numerical counterpart was carried out by using commercial CFD software, and flow simulations were performed for the principal components of a hydraulic turbine: the runner and the draft tube. Inlet boundary conditions for the runner were obtained from a previous simulation conducted in the spiral case. The computed results at the runner's outlet were used to conduct the subsequent draft tube simulation. The numerical results from the runner's flow simulation provided data to compute the torque and the turbine's power. Power-versus-efficiency curves were built, and very good agreement was found between experimental and numerical data.

  12. AN EFFICIENT ROBUST IMAGE WATERMARKING BASED ON AC PREDICTION TECHNIQUE USING DCT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Gaurav Gupta

    2015-08-01

    The expansion of technology has made available several simple ways to manipulate original content, raising concern for the security of content that is easily available on open networks. Digital watermarking is the most suitable solution for this issue. Digital watermarking is the art of inserting a logo into a multimedia object so as to have proof of ownership whenever it is required. The proposed algorithm is useful in authorized distribution and ownership verification. The algorithm uses the concept of AC prediction using the DCT to embed the watermark in the image. The algorithm has excellent robustness against all the attacks considered and outperforms similar work, with admirable performance in terms of Normalized Correlation (NC), Peak Signal-to-Noise Ratio (PSNR) and Tamper Assessment Function (TAF).

  13. The efficiency of the RULES-4 classification learning algorithm in predicting the density of agents

    Directory of Open Access Journals (Sweden)

    Ziad Salem

    2014-12-01

    Learning is the act of obtaining new, or modifying existing, knowledge, behaviours, skills or preferences. The ability to learn is found in humans, other organisms and some machines. Learning is always based on some sort of observations or data, such as examples, direct experience or instruction. This paper presents a classification algorithm to learn the density of agents in an arena based on the measurements of the six proximity sensors of a combined actuator-sensor unit (CASU). Rules are presented that were induced by the learning algorithm, which was trained with datasets based on the CASU's sensor data streams collected during a number of experiments with “Bristlebots” (agents) in the arena (environment). It was found that a set of rules generated by the learning algorithm is able to predict the number of bristlebots in the arena from the CASU's sensor readings with satisfying accuracy.

  14. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline based solver for gas injection processes that is computationally very attractive: as compared to traditional Eulerian solvers in use by industry it computes solutions with a computational speed orders of magnitude higher and a comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells. This allows more efficient coverage and avoids

  15. Working memory load predicts visual search efficiency: Evidence from a novel pupillary response paradigm.

    Science.gov (United States)

    Attar, Nada; Schneps, Matthew H; Pomplun, Marc

    2016-10-01

    An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process.

  16. Early language processing efficiency predicts later receptive vocabulary outcomes in children born preterm.

    Science.gov (United States)

    Marchman, Virginia A; Adams, Katherine A; Loi, Elizabeth C; Fernald, Anne; Feldman, Heidi M

    2016-01-01

    As rates of prematurity continue to rise, identifying which preterm children are at increased risk for learning disabilities is a public health imperative. Identifying continuities between early and later skills in this vulnerable population can also illuminate fundamental neuropsychological processes that support learning in all children. At 18 months adjusted age, we used socioeconomic status (SES), medical variables, parent-reported vocabulary, scores on the Bayley Scales of Infant and Toddler Development (third edition) language composite, and children's lexical processing speed in the looking-while-listening (LWL) task as predictor variables in a sample of 30 preterm children. Receptive vocabulary as measured by the Peabody Picture Vocabulary Test (fourth edition) at 36 months was the outcome. Receptive vocabulary was correlated with SES, but uncorrelated with degree of prematurity or a composite of medical risk. Importantly, lexical processing speed was the strongest predictor of receptive vocabulary (r = -.81), accounting for 30% unique variance. Individual differences in lexical processing efficiency may be able to serve as a marker for information processing skills that are critical for language learning.

  17. Comparison Of Human Modelling Tools For Efficiency Of Prediction Of EVA Tasks

    Science.gov (United States)

    Dischinger, H. Charles, Jr.; Loughead, Tomas E.

    1998-01-01

    Construction of the International Space Station (ISS) will require extensive extravehicular activity (EVA, spacewalks), and estimates of the actual time needed continue to rise. As recently as September 1996, the amount of time to be spent in EVA was believed to be about 400 hours, excluding spacewalks on the Russian segment. This estimate has recently risen to over 1100 hours, and it could go higher before assembly begins in the summer of 1998. These activities are extremely expensive and hazardous, so any design tools which help assure mission success and improve the efficiency of the astronaut in task completion can pay off in reduced design and EVA costs and increased astronaut safety. The tasks which astronauts can accomplish in EVA are limited by spacesuit mobility. They are therefore relatively simple from an ergonomic standpoint, requiring gross movements rather than fine motor skills. The actual tasks include driving bolts, mating and demating electrical and fluid connectors, and actuating levers; the important characteristics to be considered in design improvement include the ability of the astronaut to see and reach the item to be manipulated and the clearance required to accomplish the manipulation. This makes the tasks amenable to simulation in a Computer-Assisted Design (CAD) environment. For EVA, the spacesuited astronaut must have his or her feet attached to a work platform called a foot restraint to obtain a purchase against which work forces may be actuated. An important component of the design is therefore the proper placement of foot restraints.

  18. Predictive control strategy of a gas turbine for improvement of combined cycle power plant dynamic performance and efficiency.

    Science.gov (United States)

    Mohamed, Omar; Wang, Jihong; Khalil, Ashraf; Limhabrash, Marwan

    2016-01-01

    This paper presents a novel strategy for implementing model predictive control (MPC) on a large gas turbine power plant, as part of our research progress towards improving plant thermal efficiency and load-frequency control performance. A generalized state-space model for a large gas turbine covering the whole steady operational range is identified using a subspace identification method with closed-loop data as input to the identification algorithm. The model is then used to develop an MPC that is integrated into the plant's existing control strategy. The principle of the strategy is to feed the reference signals of the pilot valve, the natural gas valve, and the compressor pressure ratio controller with the optimized decisions given by the MPC instead of applying the control signals directly. If the set points for the compressor controller and turbine valves are sent in a timely manner, more kinetic energy is available in the plant to deliver faster output responses, and the overall system efficiency is improved. Simulation results have illustrated the feasibility of the proposed application, which achieves significant improvement in frequency variations and load-following capability, translating into an improvement in the overall combined cycle thermal efficiency of around 1.1 % compared to the existing strategy.
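
    The receding-horizon principle underlying the strategy above (optimize a sequence of future moves against a reference, apply only the first) can be illustrated with a minimal unconstrained MPC sketch on a small state-space model. The matrices, horizon and weights below are assumptions for illustration; the identified plant model and the constraints used in the paper are not reproduced.

    ```python
    # Hedged sketch: unconstrained receding-horizon control of a toy
    # state-space model x[k+1] = A x[k] + B u[k], y = C x[k].
    import numpy as np

    A = np.array([[0.9, 0.1], [0.0, 0.8]])   # hypothetical identified model
    B = np.array([[0.0], [0.5]])
    C = np.array([[1.0, 0.0]])
    N, lam = 20, 0.1                         # horizon, input weight

    # Stack the prediction equations y_{1..N} = G x0 + H u_{0..N-1}
    G = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    H = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            H[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

    def mpc_step(x0, r):
        """Return the first optimal input for reference r from state x0."""
        R = np.full(N, r)
        U = np.linalg.solve(H.T @ H + lam * np.eye(N), H.T @ (R - (G @ x0).ravel()))
        return U[0]

    x = np.array([0.0, 0.0])
    for k in range(30):                      # closed-loop simulation toward r = 1
        u = mpc_step(x, 1.0)
        x = A @ x + (B * u).ravel()
    print("final output:", float(C @ x))
    ```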

  19. Sensory Gain Outperforms Efficient Readout Mechanisms in Predicting Attention-Related Improvements in Behavior

    Science.gov (United States)

    Ester, Edward F.; Deering, Sean

    2014-01-01

    Spatial attention has been postulated to facilitate perceptual processing via several different mechanisms. For instance, attention can amplify neural responses in sensory areas (sensory gain), mediate neural variability (noise modulation), or alter the manner in which sensory signals are selectively read out by postsensory decision mechanisms (efficient readout). Even in the context of simple behavioral tasks, it is unclear how well each of these mechanisms can account for the relationship between attention-modulated changes in behavior and neural activity because few studies have systematically mapped changes between stimulus intensity, attentional focus, neural activity, and behavioral performance. Here, we used a combination of psychophysics, event-related potentials (ERPs), and quantitative modeling to explicitly link attention-related changes in perceptual sensitivity with changes in the ERP amplitudes recorded from human observers. Spatial attention led to a multiplicative increase in the amplitude of an early sensory ERP component (the P1, peaking ∼80–130 ms poststimulus) and in the amplitude of the late positive deflection component (peaking ∼230–330 ms poststimulus). A simple model based on signal detection theory demonstrates that these multiplicative gain changes were sufficient to account for attention-related improvements in perceptual sensitivity, without a need to invoke noise modulation. Moreover, combining the observed multiplicative gain with a postsensory readout mechanism resulted in a significantly poorer description of the observed behavioral data. We conclude that, at least in the context of relatively simple visual discrimination tasks, spatial attention modulates perceptual sensitivity primarily by modulating the gain of neural responses during early sensory processing PMID:25274817

  20. Age-related changes of frontal-midline theta is predictive of efficient memory maintenance.

    Science.gov (United States)

    Kardos, Z; Tóth, B; Boha, R; File, B; Molnár, M

    2014-07-25

    Frontal areas are thought to be the coordinators of working memory processes, controlling other brain areas through oscillatory activities such as frontal-midline theta (4-7 Hz). With aging, substantial changes can be observed in the frontal brain areas, presumably leading to age-associated changes in the cortical correlates of cognitive functioning. The present study aimed to test whether altered frontal-midline theta dynamics during working memory maintenance may underlie the capacity deficits observed in older adults. 33-channel EEG was recorded in young (18-26 years, N=20) and old (60-71 years, N=16) adults during the retention period of a visual delayed match-to-sample task, in which they had to maintain arrays of 3 or 5 colored squares. An additional visual odd-ball task was used to measure the electrophysiological indices of sustained attentional processes. Old participants showed reduced frontal theta activity during both tasks compared to the young group. In the young group, memory maintenance-related frontal-midline theta activity was sensitive both to increased memory demands and to efficient subsequent memory performance, whereas the old adults showed no such task-related difference in frontal theta activity. The decrease of frontal-midline theta activity in the old group indicates that cerebral aging may alter the cortical circuitry of theta dynamics, thereby leading to an age-associated decline in working memory maintenance. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
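
    Computationally, the frontal-midline theta measure discussed above is a 4-7 Hz band-power estimate from frontal electrodes. A hedged sketch using Welch's method on a synthetic signal follows; channel selection, epoching and artifact handling from the actual study are omitted.

    ```python
    # Hedged sketch: theta-band (4-7 Hz) power from one EEG segment via Welch's
    # method. The signal is synthetic; in practice pass the Fz/FCz channel
    # recorded during the retention period.
    import numpy as np
    from scipy.signal import welch

    fs = 500.0                                   # sampling rate (Hz)
    t = np.arange(0, 4.0, 1.0 / fs)              # a 4-s retention interval
    eeg = 2.0 * np.sin(2 * np.pi * 6.0 * t) + np.random.default_rng(3).normal(size=t.size)

    f, psd = welch(eeg, fs=fs, nperseg=1024)
    band = (f >= 4.0) & (f <= 7.0)
    theta_power = psd[band].sum() * (f[1] - f[0])    # approximate band power (a.u.)
    print(f"theta-band power: {theta_power:.2f} (a.u.)")
    ```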

  1. Influence of the radial-inflow turbine efficiency prediction on the design and analysis of the Organic Rankine Cycle (ORC) system

    International Nuclear Information System (INIS)

    Song, Jian; Gu, Chun-wei; Ren, Xiaodong

    2016-01-01

    Highlights: • The efficiency prediction is based on the velocity triangle and loss models. • The efficiency selection has a big influence on the working fluid selection. • The efficiency selection has a big influence on system parameter determination. - Abstract: The radial-inflow turbine is a common choice for the power output in the Organic Rankine Cycle (ORC) system. Its efficiency is related to the working fluid property and the system operating condition. Generally, the radial-inflow turbine efficiency is assumed to be a constant value in the conventional ORC system analysis. Few studies focus on the influence of the radial-inflow turbine efficiency selection on the system design and analysis. Actually, the ORC system design and the radial-inflow turbine design are coupled with each other. Different thermal parameters of the ORC system would lead to different radial-inflow turbine design and then different turbine efficiency, and vice versa. Therefore, considering the radial-inflow turbine efficiency prediction in the ORC system design can enhance its reliability and accuracy. In this paper, a one-dimensional analysis model for the radial-inflow turbine in the ORC system is presented. The radial-inflow turbine efficiency prediction in this model is based on the velocity triangle and loss models, rather than a constant efficiency assumption. The influence of the working fluid property and the system operating condition on the turbine performance is evaluated. The thermodynamic analysis of the ORC system with a model predicted turbine efficiency and a constant turbine efficiency is conducted and the results are compared with each other. It indicates that the turbine efficiency selection has a significant influence on the working fluid selection and the system parameter determination.

  2. Model predictive control-based efficient energy recovery control strategy for regenerative braking system of hybrid electric bus

    International Nuclear Information System (INIS)

    Li, Liang; Zhang, Yuanbo; Yang, Chao; Yan, Bingjie; Marina Martinez, C.

    2016-01-01

    Highlights: • A 7-degree-of-freedom model of hybrid electric vehicle with regenerative braking system is built. • A modified nonlinear model predictive control strategy is developed. • The particle swarm optimization algorithm is employed to solve the optimization problem. • The proposed control strategy is verified by simulation and hardware-in-loop tests. • Test results verify the effectiveness of the proposed control strategy. - Abstract: As one of the main working modes, the energy recovered with regenerative braking system provides an effective approach so as to greatly improve fuel economy of hybrid electric bus. However, it is still a challenging issue to ensure braking stability while maximizing braking energy recovery. To solve this problem, an efficient energy recovery control strategy is proposed based on the modified nonlinear model predictive control method. Firstly, combined with the characteristics of the compound braking process of single-shaft parallel hybrid electric bus, a 7 degrees of freedom model of the vehicle longitudinal dynamics is built. Secondly, considering nonlinear characteristic of the vehicle model and the efficiency of regenerative braking system, the particle swarm optimization algorithm within the modified nonlinear model predictive control is adopted to optimize the torque distribution between regenerative braking system and pneumatic braking system at the wheels. So as to reduce the computational time of modified nonlinear model predictive control, a nearest point method is employed during the braking process. Finally, the simulation and hardware-in-loop test are carried out on road conditions with different tire–road adhesion coefficients, and the proposed control strategy is verified by comparing it with the conventional control method employed in the baseline vehicle controller. The simulation and hardware-in-loop test results show that the proposed strategy can ensure vehicle safety during emergency braking
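
    A hedged sketch of the particle-swarm step used above to split braking torque between the regenerative and pneumatic systems follows. The demanded-torque profile, motor limit, battery-power limit and cost weights are invented placeholders; the real strategy embeds this search inside the modified nonlinear MPC described in the record.

    ```python
    # Hedged sketch: particle-swarm search for a regenerative-torque sequence
    # over a short horizon, with the friction brakes filling the remaining demand.
    import numpy as np

    rng = np.random.default_rng(4)
    H = 5                                                    # horizon length (steps)
    T_dem = np.array([1200., 1100., 900., 700., 500.])       # demanded braking torque (N*m)
    T_regen_max, P_batt_max, wheel_speed = 800.0, 60e3, 40.0 # hypothetical limits, rad/s

    def cost(T_reg):
        T_reg = np.clip(T_reg, 0.0, T_regen_max)
        T_pneu = T_dem - T_reg                               # friction brakes make up the rest
        power = T_reg * wheel_speed                          # recuperated power per step (W)
        penalty = np.maximum(power - P_batt_max, 0.0).sum() * 1e-2
        return (T_pneu ** 2).sum() * 1e-3 + penalty          # favour regenerative share

    n_particles, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
    pos = rng.uniform(0.0, T_regen_max, size=(n_particles, H))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n_particles, H)), rng.random((n_particles, H))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, T_regen_max)
        vals = np.array([cost(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()

    print("regenerative torque sequence:", np.round(gbest, 1))
    ```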

  3. Predicting academic performance and clinical competency for international dental students: seeking the most efficient and effective measures.

    Science.gov (United States)

    Stacey, D Graham; Whittaker, John M

    2005-02-01

    Measures used in the selection of international dental students to a U.S. D.D.S. program were examined to identify the grouping that most effectively and efficiently predicted academic performance and clinical competency. Archival records from the International Dental Program (IDP) at Loma Linda University provided data on 171 students who had trained in countries outside the United States. The students sought admission to the D.D.S. degree program, successful completion of which qualified them to sit for U.S. licensure. As with most dental schools, competition is high for admission to the D.D.S. program. The study's goal was to identify what measures contributed to a fair and accurate selection process for dental school applicants from other nations. Multiple regression analyses identified National Board Part II and dexterity measures as significant predictors of academic performance and clinical competency. National Board Part I, TOEFL, and faculty interviews added no significant additional help in predicting eventual academic performance and clinical competency.

  4. Virtual quantification of metabolites by capillary electrophoresis-electrospray ionization-mass spectrometry: predicting ionization efficiency without chemical standards.

    Science.gov (United States)

    Chalcraft, Kenneth R; Lee, Richard; Mills, Casandra; Britz-McKibbin, Philip

    2009-04-01

    A major obstacle in metabolomics remains the identification and quantification of a large fraction of unknown metabolites in complex biological samples when purified standards are unavailable. Herein we introduce a multivariate strategy for de novo quantification of cationic/zwitterionic metabolites using capillary electrophoresis-electrospray ionization-mass spectrometry (CE-ESI-MS) based on fundamental molecular, thermodynamic, and electrokinetic properties of an ion. Multivariate calibration was used to derive a quantitative relationship between the measured relative response factor (RRF) of polar metabolites with respect to four physicochemical properties associated with ion evaporation in ESI-MS, namely, molecular volume (MV), octanol-water distribution coefficient (log D), absolute mobility (mu(o)), and effective charge (z(eff)). Our studies revealed that a limited set of intrinsic solute properties can be used to predict the RRF of various classes of metabolites (e.g., amino acids, amines, peptides, acylcarnitines, nucleosides, etc.) with reasonable accuracy and robustness provided that an appropriate training set is validated and ion responses are normalized to an internal standard(s). The applicability of the multivariate model to quantify micromolar levels of metabolites spiked in red blood cell (RBC) lysates was also examined by CE-ESI-MS without significant matrix effects caused by involatile salts and/or major co-ion interferences. This work demonstrates the feasibility for virtual quantification of low-abundance metabolites and their isomers in real-world samples using physicochemical properties estimated by computer modeling, while providing deeper insight into the wide disparity of solute responses in ESI-MS. New strategies for predicting ionization efficiency in silico allow for rapid and semiquantitative analysis of newly discovered biomarkers and/or drug metabolites in metabolomics research when chemical standards do not exist.
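
    The multivariate calibration idea above can be sketched as an ordinary least-squares fit of the (log) relative response factor on the four descriptors named in the abstract. The log-linear form and the training values below are assumptions for illustration, not the published calibration.

    ```python
    # Hedged sketch: calibrate RRF against molecular volume, log D, absolute
    # mobility and effective charge, then predict RRF for an unknown metabolite.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 40
    descriptors = np.column_stack([
        rng.uniform(80, 400, n),      # molecular volume (placeholder units)
        rng.uniform(-5, 1, n),        # log D
        rng.uniform(20, 60, n),       # absolute mobility (placeholder units)
        rng.uniform(0.5, 2.0, n),     # effective charge
    ])
    rrf = rng.uniform(0.2, 3.0, n)    # measured relative response factors (placeholder)

    X = np.column_stack([np.ones(n), descriptors])            # add intercept
    coef, *_ = np.linalg.lstsq(X, np.log(rrf), rcond=None)    # assumed log-linear model

    def predict_rrf(mv, logd, mu0, zeff):
        """Predict RRF of an unknown metabolite from its estimated properties."""
        return float(np.exp(coef @ np.array([1.0, mv, logd, mu0, zeff])))

    print(predict_rrf(180.0, -2.3, 35.0, 1.0))
    ```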

  5. Technical note: Nitrogen isotopic fractionation can be used to predict nitrogen-use efficiency in dairy cows fed temperate pasture.

    Science.gov (United States)

    Cheng, L; Sheahan, A J; Gibbs, S J; Rius, A G; Kay, J K; Meier, S; Edwards, G R; Dewhurst, R J; Roche, J R

    2013-12-01

    The objective of this study was to investigate the relationship between nitrogen isotopic fractionation (δ15N) and nitrogen-use efficiency (milk nitrogen/nitrogen intake; NUE) in pasture-fed dairy cows supplemented with increasing levels of urea to mimic high rumen degradable protein pastures in spring. Fifteen cows were randomly assigned to freshly cut pasture and either supplemented with 0, 250, or 336 g urea/d. Feed, milk, and plasma were analyzed for δ15N, milk and plasma for urea nitrogen concentration, and plasma for ammonia concentration. Treatment effects were tested using ANOVA and relationships between variables were established by linear regression. Lower dry matter intake (P = 0.002) and milk yield (P = 0.002) occurred with the highest urea supplementation (336 g urea/d) compared with the other two treatments. There was a strong linear relationship between (milk δ15N − feed δ15N) and NUE: NUE (%) = 58.9 − 10.17 × (milk δ15N − feed δ15N) (‰) (r² = 0.83, P < 0.001, SE = 1.67), and between (plasma δ15N − feed δ15N) and NUE: NUE (%) = 52.4 − 8.61 × (plasma δ15N − feed δ15N) (‰) (r² = 0.85, P < 0.001, SE = 1.56). This study confirmed the potential use of δ15N to predict NUE in cows consuming different levels of rumen degradable protein.
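
    The two regression equations reported in the abstract translate directly into small helper functions (inputs in ‰, output in %); the example values in the final line are invented and only illustrate the call.

    ```python
    # The regression equations reported in the abstract, valid only within the
    # range of the original study.
    def nue_from_milk(milk_d15n, feed_d15n):
        return 58.9 - 10.17 * (milk_d15n - feed_d15n)

    def nue_from_plasma(plasma_d15n, feed_d15n):
        return 52.4 - 8.61 * (plasma_d15n - feed_d15n)

    print(nue_from_milk(5.8, 2.5), nue_from_plasma(6.0, 2.5))   # example values (‰)
    ```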

  6. Predicting multiprocessing efficiency on the Cray multiprocessors in a (CTSS) time-sharing environment/application to a 3-D magnetohydrodynamics code

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    A formula is derived for predicting multiprocessing efficiency on Cray supercomputers equipped with the Cray Time-Sharing System (CTSS). The model is applicable to an intensive time-sharing environment. The actual efficiency estimate depends on three factors: the code size, task length, and job mix. The implementation of multitasking in a three-dimensional plasma magnetohydrodynamics (MHD) code, TEMCO, is discussed. TEMCO solves the primitive one-fluid compressible MHD equations and includes resistive and Hall effects in Ohm's law. Virtually all segments of the main time-integration loop are multitasked. The multiprocessing efficiency model is applied to TEMCO. Excellent agreement is obtained between the actual multiprocessing efficiency and the theoretical prediction
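
    The record does not reproduce the derived CTSS formula, so the sketch below only illustrates the generic Amdahl-style efficiency that such models refine: speedup limited by a serial fraction, divided by the processor count. The time-sharing, code-size and job-mix corrections discussed above are not included.

    ```python
    # Hedged illustration only: generic Amdahl-style multiprocessing efficiency,
    # E(p) = S(p)/p with S(p) = 1/(s + (1-s)/p) for serial fraction s.
    def amdahl_efficiency(p, serial_fraction):
        speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)
        return speedup / p

    for p in (1, 2, 4):
        print(p, round(amdahl_efficiency(p, 0.05), 3))
    ```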

  7. Predicting High or Low Transfer Efficiency of Photovoltaic Systems Using a Novel Hybrid Methodology Combining Rough Set Theory, Data Envelopment Analysis and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lee-Ing Tong

    2012-02-01

    Solar energy has become an important energy source in recent years as it generates less pollution than other energy sources. A photovoltaic (PV) system, which typically has many components, converts solar energy into electrical energy. With the development of advanced engineering technologies, the transfer efficiency of PV systems has increased from low to high. The combination of components in a PV system influences its transfer efficiency. Therefore, when predicting the transfer efficiency of a PV system, one must consider the relationships among system components. This work accurately predicts whether the transfer efficiency of a PV system is high or low using a novel hybrid model that combines rough set theory (RST), data envelopment analysis (DEA), and genetic programming (GP). Finally, real data sets are utilized to demonstrate the accuracy of the proposed method.

  8. Leveraging Open Standard Interfaces in Providing Efficient Discovery, Retrieval, and Information of NASA-Sponsored Observations and Predictions

    Science.gov (United States)

    Cole, M.; Alameh, N.; Bambacus, M.

    2006-05-01

    The Applied Sciences Program at NASA focuses on extending the results of NASA's Earth-Sun system science research beyond the science and research communities to contribute to national priority applications with societal benefits. By employing a systems engineering approach, supporting interoperable data discovery and access, and developing partnerships with federal agencies and national organizations, the Applied Sciences Program facilitates the transition from research to operations in national applications. In particular, the Applied Sciences Program identifies twelve national applications, listed at http://science.hq.nasa.gov/earth-sun/applications/, which can be best served by the results of NASA aerospace research and development of science and technologies. The ability to use and integrate NASA data and science results into these national applications results in enhanced decision support and significant socio-economic benefits for each of the applications. This paper focuses on leveraging the power of interoperability and specifically open standard interfaces in providing efficient discovery, retrieval, and integration of NASA's science research results. Interoperability (the ability to access multiple, heterogeneous geoprocessing environments, either local or remote by means of open and standard software interfaces) can significantly increase the value of NASA-related data by increasing the opportunities to discover, access and integrate that data in the twelve identified national applications (particularly in non-traditional settings). Furthermore, access to data, observations, and analytical models from diverse sources can facilitate interdisciplinary and exploratory research and analysis. To streamline this process, the NASA GeoSciences Interoperability Office (GIO) is developing the NASA Earth-Sun System Gateway (ESG) to enable access to remote geospatial data, imagery, models, and visualizations through open, standard web protocols. The gateway (online

  9. Leveraging Web Services in Providing Efficient Discovery, Retrieval, and Integration of NASA-Sponsored Observations and Predictions

    Science.gov (United States)

    Bambacus, M.; Alameh, N.; Cole, M.

    2006-12-01

    The Applied Sciences Program at NASA focuses on extending the results of NASA's Earth-Sun system science research beyond the science and research communities to contribute to national priority applications with societal benefits. By employing a systems engineering approach, supporting interoperable data discovery and access, and developing partnerships with federal agencies and national organizations, the Applied Sciences Program facilitates the transition from research to operations in national applications. In particular, the Applied Sciences Program identifies twelve national applications, listed at http://science.hq.nasa.gov/earth-sun/applications/, which can be best served by the results of NASA aerospace research and development of science and technologies. The ability to use and integrate NASA data and science results into these national applications results in enhanced decision support and significant socio-economic benefits for each of the applications. This paper focuses on leveraging the power of interoperability and specifically open standard interfaces in providing efficient discovery, retrieval, and integration of NASA's science research results. Interoperability (the ability to access multiple, heterogeneous geoprocessing environments, either local or remote by means of open and standard software interfaces) can significantly increase the value of NASA-related data by increasing the opportunities to discover, access and integrate that data in the twelve identified national applications (particularly in non-traditional settings). Furthermore, access to data, observations, and analytical models from diverse sources can facilitate interdisciplinary and exploratory research and analysis. To streamline this process, the NASA GeoSciences Interoperability Office (GIO) is developing the NASA Earth-Sun System Gateway (ESG) to enable access to remote geospatial data, imagery, models, and visualizations through open, standard web protocols. The gateway (online

  10. Higher energy efficiency and better water quality by using model predictive flow control at water supply systems

    NARCIS (Netherlands)

    Bakker, M.; Verberk, J.Q.J.C.; Palmen, L.J.; Sperber, V.; Bakker, G.

    2011-01-01

    Half of all water supply systems in the Netherlands are controlled by model predictive flow control; the other half are controlled by conventional level based control. The differences between conventional level based control and model predictive control were investigated in experiments at five full

  11. First and Second-Law Efficiency Analysis and ANN Prediction of a Diesel Cycle with Internal Irreversibility, Variable Specific Heats, Heat Loss, and Friction Considerations

    Directory of Open Access Journals (Sweden)

    M. M. Rashidi

    2014-04-01

    The variability of specific heats, internal irreversibility, and heat and frictional losses are neglected in the air-standard analysis of different internal combustion engine cycles. In this paper, the performance of an air-standard Diesel cycle is investigated using finite-time thermodynamics, with internal irreversibility described by the compression and expansion efficiencies, variable specific heats, and losses due to heat transfer and friction. An artificial neural network (ANN) is proposed for predicting the thermal efficiency and power output as functions of the minimum and maximum temperatures of the cycle and the compression ratio. Results show that the first-law efficiency and the output power reach their maximum at a critical compression ratio for specific fixed parameters. The first-law efficiency increases as the heat leakage decreases; however, the heat leakage has no direct effect on the output power. The results also show that irreversibilities have depressing effects on the performance of the cycle. Finally, a comparison between the results of the thermodynamic analysis and the ANN prediction shows a maximum difference of 0.181% and 0.194% in estimating the thermal efficiency and the output power, respectively. The results obtained in this paper can be useful for evaluating and improving the performance of practical Diesel engines.
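
    The ANN component described above maps (minimum temperature, maximum temperature, compression ratio) to (thermal efficiency, power output). A hedged sketch with a one-hidden-layer network follows; the training targets come from a crude placeholder trend, not from the paper's finite-time thermodynamic model.

    ```python
    # Hedged sketch: a small network regressing (efficiency, power) on
    # (T_min, T_max, compression ratio). Targets are synthetic placeholders.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(6)
    n = 500
    T_min = rng.uniform(290, 330, n)          # K
    T_max = rng.uniform(1600, 2200, n)        # K
    r_c = rng.uniform(14, 22, n)              # compression ratio

    # crude placeholder targets (ideal air-standard trend plus noise), for shape only
    eta = 1 - r_c ** (-0.35) + rng.normal(scale=0.01, size=n)
    power = 1e-2 * (T_max - T_min) * eta + rng.normal(scale=0.5, size=n)

    X = np.column_stack([T_min, T_max, r_c])
    Y = np.column_stack([eta, power])
    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000,
                                     random_state=0)).fit(X, Y)
    print(net.predict([[300.0, 1900.0, 18.0]]))   # [predicted efficiency, power]
    ```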

  12. Comparison of particle-wall interaction boundary conditions in the prediction of cyclone collection efficiency in computational fluid dynamics (CFD) modeling

    International Nuclear Information System (INIS)

    Valverde Ramirez, M.; Coury, J.R.; Goncalves, J.A.S.

    2009-01-01

    In recent years, many computational fluid dynamics (CFD) studies have appeared attempting to predict cyclone pressure drop and collection efficiency. While these studies have been able to predict pressure drop well, they have been only moderately successful in predicting collection efficiency. Part of the reason for this failure has been attributed to the relatively simple wall boundary conditions implemented in the commercially available CFD software, which are not capable of accurately describing the complex particle-wall interaction present in a cyclone. Accordingly, researchers have proposed a number of different boundary conditions in order to improve the model performance. This work implemented the critical velocity boundary condition through a user-defined function (UDF) in the Fluent software and compared its predictions both with experimental data and with the predictions obtained when using Fluent's built-in boundary conditions. Experimental data were obtained from eight laboratory-scale cyclones with varying geometric ratios. The CFD simulations were made using the software Fluent 6.3.26. (author)

  13. Moderate efficiency of clinicians' predictions decreased for blurred clinical conditions and benefits from the use of BRASS index. A longitudinal study on geriatric patients' outcomes.

    Science.gov (United States)

    Signorini, Giulia; Dagani, Jessica; Bulgari, Viola; Ferrari, Clarissa; de Girolamo, Giovanni

    2016-01-01

    Accurate prognosis is an essential aspect of good clinical practice and efficient health services, particularly for chronic and disabling diseases, as in geriatric populations. This study aims to examine the accuracy of clinical prognostic predictions and to devise prediction models combining clinical variables and clinicians' prognosis for a geriatric patient sample. In a sample of 329 consecutive older patients admitted to 10 geriatric units, we evaluated the accuracy of clinicians' prognoses regarding three outcomes at discharge: global functioning, length of stay (LoS) in hospital, and destination at discharge (DD). A comprehensive set of sociodemographic, clinical, and treatment-related information was also collected. Moderate predictive performance was found for all three outcomes: areas under the receiver operating characteristic curve of 0.79 and 0.78 for functioning and LoS, respectively, and moderate concordance, Cohen's K = 0.45, between predicted and observed DD. Predictive models showed that the Blaylock Risk Assessment Screening Score, together with clinicians' judgment, was relevant for improving predictions of all outcomes (absolute improvement in adjusted and pseudo-R² of up to 19%). Although the clinicians' estimates were important factors in predicting global functioning, LoS, and DD, more research is needed regarding both methodological aspects and clinical measurements to improve prognostic clinical indices. Copyright © 2016 Elsevier Inc. All rights reserved.
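
    The comparison made above, clinicians' judgment alone versus judgment plus the BRASS index, is essentially a question of incremental discrimination. A hedged sketch with logistic regression and ROC AUC on synthetic data follows (fitted and scored in-sample for brevity); the variable names and effect sizes are assumptions, not the study's data.

    ```python
    # Hedged sketch: does adding a BRASS-like index to clinician judgment
    # improve discrimination of a binary discharge outcome?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)
    n = 329
    clinician = rng.normal(size=n)                  # standardized clinician prognosis
    brass = rng.normal(size=n)                      # standardized BRASS-like index
    outcome = (0.8 * clinician + 0.6 * brass + rng.normal(size=n) > 0).astype(int)

    clin_only = clinician[:, None]
    auc_clin = roc_auc_score(outcome, LogisticRegression().fit(clin_only, outcome)
                             .predict_proba(clin_only)[:, 1])
    both = np.column_stack([clinician, brass])
    auc_both = roc_auc_score(outcome, LogisticRegression().fit(both, outcome)
                             .predict_proba(both)[:, 1])
    print(f"clinician only AUC={auc_clin:.2f}, clinician + index AUC={auc_both:.2f}")
    ```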

  14. Predicting the losses and the efficiency of a 1-3 piezo-composite transducer through a partial homogenization method

    Directory of Open Access Journals (Sweden)

    Pastor, J.

    2002-02-01

    The topic of this work is the theoretical study of a disc-shaped acoustic transducer made of a 1-3 piezoelectric composite material. This material consists of PZT ceramic rods embedded in a polymer matrix. A model of this ideal transversally periodic structure is proposed, based on a finite element approach derived from the homogenization techniques mainly used in composite material studies. The analysis focuses on a representative unit cell with specific boundary conditions on the lateral surfaces that accurately take into account the periodicity of the structure. The first step is the development of a three-dimensional Fortran code with complex variables, especially adapted to this problem. Using the Lee-Mandel correspondence principle, this technique allows the prediction of the damping properties of the transducer from the complex moduli of the constituents. Both the versatility of the method and the rigorous character of the model are pointed out through various boundary conditions and mixed loadings. An interesting result is that, despite the lossy polymer matrix, a 1-3 composite can advantageously replace a much heavier massive transducer in terms of efficiency and loss factor.

  15. Prediction efficiency of the hydrographical parameters as related to distribution patterns of the Pleuromamma species in the Indian Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Jayalakshmy, K.V.; Saraswathy, M.

    A multiple regression model of P. indica abundance on the parameters temperature, salinity, dissolved oxygen and phosphate-phosphorus could explain more than 85% of the variation in the predicted abundance, while those of 8 species obtained from...

  16. An integrative approach to CTL epitope prediction: A combined algorithm integrating MHC class I binding, TAP transport efficiency, and proteasomal cleavage predictions

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Lundegaard, Claus; Lamberth, K

    2005-01-01

    Reverse immunogenetic approaches attempt to optimize the selection of candidate epitopes, and thus minimize the experimental effort needed to identify new epitopes. When predicting cytotoxic T cell epitopes, the main focus has been on the highly specific MHC class I binding event. Methods have al... The method is available at http://www.cbs.dtu.dk/services/NetCTL. Supplementary material is available at http://www.cbs.dtu.dk/suppl/immunology/CTL.php.
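
    An integrated predictor of this kind typically combines the three component scores (MHC class I binding, TAP transport efficiency, proteasomal cleavage) as a weighted sum dominated by the binding term. The sketch below uses arbitrary placeholder weights and assumes pre-computed per-peptide component scores; it is not the published NetCTL parameterization.

    ```python
    # Hedged sketch: rank candidate 9-mer peptides by a weighted combination of
    # three pre-computed component scores. Weights and peptides are placeholders.
    def combined_epitope_score(mhc_binding, tap_transport, cleavage,
                               w_tap=0.05, w_cle=0.15):
        return mhc_binding + w_tap * tap_transport + w_cle * cleavage

    peptides = {"KLGGALQAK": (0.75, 1.1, 0.95),   # (binding, TAP, cleavage)
                "AAAWYLWEV": (0.42, 0.3, 0.60)}
    ranked = sorted(peptides, key=lambda p: combined_epitope_score(*peptides[p]),
                    reverse=True)
    print(ranked)
    ```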

  17. Prediction of geomagnetic storm using neural networks: Comparison of the efficiency of the Satellite and ground-based input parameters

    International Nuclear Information System (INIS)

    Stepanova, Marina; Antonova, Elizavieta; Munos-Uribe, F A; Gordo, S L Gomez; Torres-Sanchez, M V

    2008-01-01

    Different kinds of neural networks have established themselves as an effective tool in the prediction of various geomagnetic indices, the Dst index being the most important for determining the impact of space weather on human life. Feed-forward networks with one hidden layer are used to forecast the Dst variation, using separately the solar wind parameters, the polar cap index, and the auroral electrojet index as input parameters. It was found that in all three cases the storm-time intervals were predicted much more precisely than quiet-time intervals. The majority of cross-correlation coefficients between predicted and observed Dst for strong geomagnetic storms lie between 0.8 and 0.9. Changes in the neural network architecture, including the number of nodes in the input and hidden layers and the transfer functions between them, led to improvements in network performance of up to 10%.

  18. Efficient computing procedures and impossibility to solve the problem of exact prediction of events in the quantum world

    International Nuclear Information System (INIS)

    Namiot, V.A.; Chernavskii, D.S.

    2003-01-01

    It is well known that dynamic chaos is possible in classical mechanics. When it takes place, exact prediction of future events becomes impossible. In quantum theory, by contrast, dynamic chaos (connected with perturbations of the initial conditions) is formally absent. Nevertheless, as shown in this Letter, in the quantum theory there are other reasons, related directly to so-called paradoxes of formal logic, which do not allow one to predict the future precisely.

  19. Efficiency of a clinical prediction model for selective rapid testing in children with pharyngitis: A prospective, multicenter study

    NARCIS (Netherlands)

    Cohen, Jérémie F.; Cohen, Robert; Bidet, Philippe; Elbez, Annie; Levy, Corinne; Bossuyt, Patrick M.; Chalumeau, Martin

    2017-01-01

    There is controversy whether physicians can rely on signs and symptoms to select children with pharyngitis who should undergo a rapid antigen detection test (RADT) for group A streptococcus (GAS). Our objective was to evaluate the efficiency of signs and symptoms in selectively testing children with

  20. Modelling impacts of performance on the probability of reproducing, and thereby on productive lifespan, allow prediction of lifetime efficiency in dairy cows.

    Science.gov (United States)

    Phuong, H N; Blavy, P; Martin, O; Schmidely, P; Friggens, N C

    2016-01-01

    Reproductive success is a key component of lifetime efficiency - the ratio of energy in milk (MJ) to energy intake (MJ) over the lifespan - of cows. At the animal level, breeding and feeding management can substantially impact the milk yield, body condition and energy balance of cows, which are known as major contributors to reproductive failure in dairy cattle. This study extended an existing lifetime performance model to incorporate the impacts that performance changes due to changing breeding and feeding strategies have on the probability of reproducing, and thereby on the productive lifespan, thus allowing the prediction of a cow's lifetime efficiency. The model is dynamic and stochastic, with an individual cow being the unit modelled and one day being the unit of time. To evaluate the model, data from a French study including Holstein and Normande cows fed high-concentrate diets and data from a Scottish study including Holstein cows selected for high and average genetic merit for fat plus protein that were fed high- v. low-concentrate diets were used. Generally, the model consistently simulated the productive and reproductive performance of various genotypes of cows across feeding systems. In the French data, the model adequately simulated the reproductive performance of Holsteins but significantly under-predicted that of Normande cows. In the Scottish data, conception to first service was comparably simulated, whereas interval traits were slightly under-predicted. Selection for greater milk production impaired reproductive performance and lifespan but not lifetime efficiency. The definition of lifetime efficiency used in this model did not include associated costs or herd-level effects. Further work should include such economic indicators to allow more accurate simulation of lifetime profitability in different production scenarios.

  1. Laccase from Pycnoporus cinnabarinus and phenolic compounds: can the efficiency of an enzyme mediator for delignifying kenaf pulp be predicted?

    Science.gov (United States)

    Andreu, Glòria; Vidal, Teresa

    2013-03-01

    In this work, kenaf pulp was delignified by using laccase in combination with various redox mediators, and the efficiency of the different laccase–mediator systems was assessed in terms of the changes in pulp properties after bleaching. The oxidative ability of the individual mediators used (acetosyringone, syringaldehyde, p-coumaric acid, vanillin and acetovanillone) and the laccase–mediator systems was determined by monitoring the oxidation–reduction potential (ORP) during the process. The results confirmed the production of phenoxy radicals of variable reactivity and stressed the significant role of lignin structure in the enzymatic process. Although changes in ORP were correlated with the oxidative ability of the mediators, pulp properties as determined after the bleaching stage were also influenced by condensation and grafting reactions. As shown here, ORP measurements provide a first estimation of the delignification efficiency of a laccase–mediator system. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. An Efficient Computational Model to Predict Protonation at the Amide Nitrogen and Reactivity along the C–N Rotational Pathway

    Science.gov (United States)

    Szostak, Roman; Aubé, Jeffrey

    2015-01-01

    N-protonation of amides is critical in numerous biological processes, including amide bond proteolysis and protein folding, as well as in organic synthesis as a method to activate amide bonds towards unconventional reactivity. A computational model enabling prediction of protonation at the amide bond nitrogen atom along the C–N rotational pathway is reported. Notably, this study provides a blueprint for the rational design and application of amides with a controlled degree of rotation in synthetic chemistry and biology. PMID:25766378

  3. Efficient and accurate two-scale FE-FFT-based prediction of the effective material behavior of elasto-viscoplastic polycrystals

    Science.gov (United States)

    Kochmann, Julian; Wulfinghoff, Stephan; Ehle, Lisa; Mayer, Joachim; Svendsen, Bob; Reese, Stefanie

    2017-09-01

    Recently, two-scale FE-FFT-based methods (e.g., Spahn et al. in Comput Methods Appl Mech Eng 268:871-883, 2014; Kochmann et al. in Comput Methods Appl Mech Eng 305:89-110, 2016) have been proposed to predict the microscopic and overall mechanical behavior of heterogeneous materials. The purpose of this work is the extension of these methods to elasto-viscoplastic polycrystals, the development of efficient and robust Fourier solvers, and the prediction of micromechanical fields during macroscopic deformation processes. Assuming scale separation, the macroscopic problem is solved using the finite element method. The solution of the microscopic problem, which is embedded as a periodic unit cell (UC) in each macroscopic integration point, is found by employing fast Fourier transforms, fixed-point and Newton-Krylov methods. The overall material behavior is defined by the mean UC response. In order to ensure spatially converged micromechanical fields as well as feasible overall CPU times, an efficient but simple solution strategy for two-scale simulations is proposed. As an example, the constitutive behavior of 42CrMo4 steel is predicted during macroscopic three-point bending tests.

  4. Assessing the Efficiency of Phenotyping Early Traits in a Greenhouse Automated Platform for Predicting Drought Tolerance of Soybean in the Field.

    Science.gov (United States)

    Peirone, Laura S; Pereyra Irujo, Gustavo A; Bolton, Alejandro; Erreguerena, Ignacio; Aguirrezábal, Luis A N

    2018-01-01

    Conventional field phenotyping for drought tolerance, the most important factor limiting yield at a global scale, is labor-intensive and time-consuming. Automated greenhouse platforms can increase the precision and throughput of plant phenotyping and contribute to a faster release of drought tolerant varieties. The aim of this work was to establish a framework of analysis to identify early traits which could be efficiently measured in a greenhouse automated phenotyping platform, for predicting the drought tolerance of field grown soybean genotypes. A group of genotypes was evaluated, which showed variation in their drought susceptibility index (DSI) for final biomass and leaf area. A large number of traits were measured before and after the onset of a water deficit treatment, which were analyzed under several criteria: the significance of the regression with the DSI, phenotyping cost, earliness, and repeatability. The most efficient trait was found to be transpiration efficiency measured at 13 days after emergence. This trait was further tested in a second experiment with different water deficit intensities, and validated using a different set of genotypes against field data from a trial network in a third experiment. The framework applied in this work for assessing traits under different criteria could be helpful for selecting those most efficient for automated phenotyping.

  5. Assessing the Efficiency of Phenotyping Early Traits in a Greenhouse Automated Platform for Predicting Drought Tolerance of Soybean in the Field

    Directory of Open Access Journals (Sweden)

    Laura S. Peirone

    2018-05-01

    Full Text Available Conventional field phenotyping for drought tolerance, the most important factor limiting yield at a global scale, is labor-intensive and time-consuming. Automated greenhouse platforms can increase the precision and throughput of plant phenotyping and contribute to a faster release of drought tolerant varieties. The aim of this work was to establish a framework of analysis to identify early traits which could be efficiently measured in a greenhouse automated phenotyping platform, for predicting the drought tolerance of field grown soybean genotypes. A group of genotypes was evaluated, which showed variation in their drought susceptibility index (DSI) for final biomass and leaf area. A large number of traits were measured before and after the onset of a water deficit treatment, which were analyzed under several criteria: the significance of the regression with the DSI, phenotyping cost, earliness, and repeatability. The most efficient trait was found to be transpiration efficiency measured at 13 days after emergence. This trait was further tested in a second experiment with different water deficit intensities, and validated using a different set of genotypes against field data from a trial network in a third experiment. The framework applied in this work for assessing traits under different criteria could be helpful for selecting those most efficient for automated phenotyping.

  6. Capability of the "Ball-Berry" model for predicting stomatal conductance and water use efficiency of potato leaves under different irrigation regimes

    DEFF Research Database (Denmark)

    Liu, Fulai; Andersen, Mathias N.; Jensen, Christian Richardt

    2009-01-01

    The capability of the 'Ball-Berry' model (BB-model) in predicting stomatal conductance (gs) and water use efficiency (WUE) of potato (Solanum tuberosum L.) leaves under different irrigation regimes was tested using data from two independent pot experiments in 2004 and 2007. Data obtained from 2004 was used for model parameterization, where measurements of midday leaf gas exchange of potted potatoes were done during progressive soil drying for 2 weeks at tuber initiation and earlier bulking stages. The measured photosynthetic rate (An) was used as an input for the model. To account for the effects of soil water deficits on gs, a simple equation modifying the slope (m) based on the mean soil water potential (Ψs) in the soil columns was incorporated into the original BB-model. Compared with the original BB-model, the modified BB-model showed better predictability for both gs and WUE of potato leaves...
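
    A minimal sketch of the Ball-Berry relation with a soil-water-adjusted slope, in the spirit of the modification described above, is given below. The linear stress function, parameter values and units are illustrative assumptions, not the study's fitted equation.

```python
# Ball-Berry stomatal conductance with a soil-water-adjusted slope (a sketch).
# The linear stress function and all parameter values are illustrative assumptions,
# not the equation or parameters fitted in the study above.
def soil_adjusted_slope(m, psi_s, psi_wet=-0.05, psi_dry=-1.5):
    """Scale the BB slope m linearly between wet (psi_wet) and dry (psi_dry) soil water potential (MPa)."""
    stress = (psi_s - psi_dry) / (psi_wet - psi_dry)
    return m * max(0.0, min(1.0, stress))

def ball_berry(An, hs, Cs, m, g0=0.01):
    """Original Ball-Berry relation: gs = g0 + m * An * hs / Cs."""
    return g0 + m * An * hs / Cs

# Example midday values (assumed): An in umol m-2 s-1, Cs in umol mol-1, so gs is in mol m-2 s-1.
An, hs, Cs, psi_s = 12.0, 0.65, 380.0, -0.8
m_mod = soil_adjusted_slope(9.0, psi_s)
gs = ball_berry(An, hs, Cs, m_mod)
print(f"modified slope = {m_mod:.2f}, gs = {gs:.3f} mol m-2 s-1")
```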

  7. Role of wing color and seasonal changes in ambient temperature and solar irradiation on predicted flight efficiency of the Albatross.

    Science.gov (United States)

    Hassanalian, M; Throneberry, G; Ali, M; Ben Ayed, S; Abdelkefi, A

    2018-01-01

    Drag reduction of the wings of migrating birds is crucial to their flight efficiency. Wing color impacts absorption of solar irradiation which may affect drag but there is little known in this area. To this end, the drag reduction induced by the thermal effect of the wing color of migrating birds with unpowered flight modes is presented in this study. Considering this natural phenomenon in the albatross as an example of migrating birds, and applying an energy balance for this biological system, a thermal analysis is performed on the wings during the summer and winter to obtain different ranges of air density, viscosity, and wing surface temperature brought about from a range of ambient temperatures and climatic conditions seen in different seasons and to study their effects. The exact shape of the albatross wing is used and nine different wing colors are considered in order to gain a better understanding of the effect different colors' absorptivities make on the change in aerodynamic performances. The thermal effect is found to be more important during the summer than during the winter due to the higher values of solar irradiation and a maximum drag reduction of 7.8% is found in summer changing the wing color from light white to dark black. The obtained results show that albatrosses with darker colored wings are more efficient (constant lift to drag ratio and drag reduction) and have better endurance due to this drag reduction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Predicting core losses and efficiency of SRM in continuous current mode of operation using improved analytical technique

    International Nuclear Information System (INIS)

    Parsapour, Amir; Dehkordi, Behzad Mirzaeian; Moallem, Mehdi

    2015-01-01

    In applications in which high torque per ampere at low speed and rated power at high speed are required, the continuous current method is the best solution. However, there is no report on calculating the core loss of SRM in continuous current mode of operation. Efficiency and iron loss calculations, which are complex tasks in the conventional mode of operation, are even more involved in continuous current mode of operation. In this paper, the Switched Reluctance Motor (SRM) is modeled using the finite element method, and the core loss and copper loss of the SRM in discontinuous and continuous current modes of operation are calculated using improved analytical techniques to include the minor loop losses in continuous current mode of operation. Motor efficiency versus speed in both operation modes is obtained and compared. - Highlights: • Continuous current method for Switched Reluctance Motor (SRM) is explained. • An improved analytical technique is presented for SRM core loss calculation. • SRM losses in discontinuous and continuous current operation modes are presented. • Effect of mutual inductances on SRM performance is investigated
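
    The record above does not reproduce the analytical expressions; as a rough, hedged illustration of how core loss is commonly split into hysteresis and eddy-current terms, with an extra contribution for minor loops in continuous current operation, consider the sketch below. The coefficients and the minor-loop weighting are generic assumptions, not the paper's improved technique.

```python
# Hedged sketch: classical core-loss split (hysteresis + eddy current) with a simple
# additive term for minor loops. Coefficients k_h, alpha, k_e and the minor-loop
# weighting are generic assumptions, not the improved analytical technique of the paper.
def core_loss(f, B_major, minor_loops, k_h=0.02, alpha=1.8, k_e=5e-5):
    """f: fundamental frequency (Hz); B_major: peak flux density of the major loop (T);
    minor_loops: list of (delta_B, f_loop) tuples describing minor loops on the major loop."""
    p_hyst = k_h * f * B_major**alpha                 # hysteresis loss of the major loop
    p_eddy = k_e * (f * B_major)**2                   # classical eddy-current loss
    p_minor = sum(k_h * f_l * (dB / 2)**alpha for dB, f_l in minor_loops)  # minor-loop add-on
    return p_hyst + p_eddy + p_minor                  # specific loss (W/kg for these units)

print(core_loss(f=200.0, B_major=1.4, minor_loops=[(0.3, 800.0), (0.2, 800.0)]))
```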

  9. Predicting core losses and efficiency of SRM in continuous current mode of operation using improved analytical technique

    Energy Technology Data Exchange (ETDEWEB)

    Parsapour, Amir, E-mail: amirparsapour@gmail.com [Department of Electrical Engineering, University of Isfahan, Isfahan (Iran, Islamic Republic of); Dehkordi, Behzad Mirzaeian, E-mail: mirzaeian@eng.ui.ac.ir [Department of Electrical Engineering, University of Isfahan, Isfahan (Iran, Islamic Republic of); Moallem, Mehdi, E-mail: moallem@cc.iut.ac.ir [Department of Electrical Engineering, Isfahan University of Technology, Isfahan (Iran, Islamic Republic of)

    2015-03-15

    In applications in which high torque per ampere at low speed and rated power at high speed are required, the continuous current method is the best solution. However, there is no report on calculating the core loss of SRM in continuous current mode of operation. Efficiency and iron loss calculations, which are complex tasks in the conventional mode of operation, are even more involved in continuous current mode of operation. In this paper, the Switched Reluctance Motor (SRM) is modeled using the finite element method, and the core loss and copper loss of the SRM in discontinuous and continuous current modes of operation are calculated using improved analytical techniques to include the minor loop losses in continuous current mode of operation. Motor efficiency versus speed in both operation modes is obtained and compared. - Highlights: • Continuous current method for Switched Reluctance Motor (SRM) is explained. • An improved analytical technique is presented for SRM core loss calculation. • SRM losses in discontinuous and continuous current operation modes are presented. • Effect of mutual inductances on SRM performance is investigated.

  10. Only 7% of the variation in feed efficiency in veal calves can be predicted from variation in feeding motivation, digestion, metabolism, immunology, and behavioral traits in early life.

    Science.gov (United States)

    Gilbert, M S; van den Borne, J J G C; van Reenen, C G; Gerrits, W J J

    2017-10-01

    High interindividual variation in growth performance is commonly observed in veal calf production and appears to depend on milk replacer (MR) composition. Our first objective was to examine whether variation in growth performance in healthy veal calves can be predicted from early life characterization of these calves. Our second objective was to determine whether these predictions differ between calves that are fed a high- or low-lactose MR in later life. A total of 180 male Holstein-Friesian calves arrived at the facilities at 17 ± 3.4 d of age, and blood samples were collected before the first feeding. Subsequently, calves were characterized in the following 9 wk (period 1) using targeted challenges related to traits within each of 5 categories: feeding motivation, digestion, postabsorptive metabolism, behavior and stress, and immunology. In period 2 (wk 10-26), 130 calves were equally divided over 2 MR treatments: a control MR that contained lactose as the only carbohydrate source and a low-lactose MR in which 51% of the lactose was isocalorically replaced by glucose, fructose, and glycerol (2:1:2 ratio). Relations between early life characteristics and growth performance in later life were assessed in 117 clinically healthy calves. Average daily gain (ADG) in period 2 tended to be greater for control calves (1,292 ± 111 g/d) than for calves receiving the low-lactose MR (1,267 ± 103 g/d). Observations in period 1 were clustered per category using principal component analysis, and the resulting principal components were used to predict performance in period 2 using multiple regression procedures. Variation in observations in period 1 predicted 17% of variation in ADG in period 2. However, this was mainly related to variation in solid feed refusals. When ADG was adjusted to equal solid feed intake, only 7% of the variation in standardized ADG in period 2, in fact reflecting feed efficiency, could be explained by early life measurements. This indicates that >90
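
    The record above describes clustering the early-life observations with principal component analysis and regressing later-life gain on the resulting components; the outline below sketches that workflow. The file names, column layout and number of components are placeholders, not the study's variables.

```python
# Outline of the PCA-then-regression workflow described above. The CSV files, column
# names and the number of components are placeholders, not the study's measurements.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

early = pd.read_csv("early_life_traits.csv")          # assumed: one row of traits per calf
adg = pd.read_csv("period2_performance.csv")["adg"]    # assumed: standardized ADG in period 2

X = StandardScaler().fit_transform(early.values)
components = PCA(n_components=5).fit_transform(X)      # cluster traits into principal components

reg = LinearRegression().fit(components, adg)
print("fraction of ADG variance explained:", reg.score(components, adg))  # cf. the 7%/17% figures
```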

  11. Predicting the efficiencies of 2-mercaptobenzothiazole collectors used as chelating agents in flotation processes: a density-functional study.

    Science.gov (United States)

    Yekeler, Hülya; Yekeler, Meftuni

    2006-09-01

    In recent years, several new chelating reagents have been synthesized and tested for their collecting power in sulfide and non-sulfide minerals flotation. Many researchers have indicated that chelating reagents have the advantage of offering better selectivity and specificity as flotation collectors. Therefore, density functional theory (DFT) calculations at the B3LYP/6-31G(d,p) level were performed to investigate the observed activities of 2-mercaptobenzothiazole, 6-methyl-2-mercaptobenzothiazole and 6-methoxy-2-mercaptobenzothiazole as the most popular flotation collectors. The molecular properties and activity relationships were determined by the HOMO localizations, the HOMO energies, Mulliken charges and the electrostatic potentials at the thioamide functional group, which is the key site in the forming efficiency of the collectors studied. It is concluded that these quantities can be used successfully for understanding the collecting abilities of 2-mercaptobenzothiazoles. The results obtained theoretically are consistent with the experimental data reported in the literature.

  12. An efficient heuristic method for active feature acquisition and its application to protein-protein interaction prediction

    Directory of Open Access Journals (Sweden)

    Thahir Mohamed

    2012-11-01

    Full Text Available Abstract Background Machine learning approaches for classification learn the pattern of the feature space of different classes, or learn a boundary that separates the feature space into different classes. The features of the data instances are usually available, and it is only the class-labels of the instances that are unavailable. For example, to classify text documents into different topic categories, the words in the documents are features and they are readily available, whereas the topic is what is predicted. However, in some domains obtaining features may be resource-intensive, because of which not all features may be available. An example is that of protein-protein interaction prediction, where not only are the labels ('interacting' or 'non-interacting') unavailable, but so are some of the features. It may be possible to obtain at least some of the missing features by carrying out a few experiments as permitted by the available resources. If only a few experiments can be carried out to acquire missing features, which proteins should be studied and which features of those proteins should be determined? From the perspective of machine learning for PPI prediction, it would be desirable to acquire those features which, when used in training the classifier, improve its accuracy the most. That is, the utility of the feature-acquisition is measured in terms of how much the acquired features contribute to improving the accuracy of the classifier. Active feature acquisition (AFA) is a strategy to preselect such instance-feature combinations (i.e. protein and experiment combinations) for maximum utility. The goal of AFA is the creation of an optimal training set that would result in the best classifier, and not in determining the best classification model itself. Results We present a heuristic method for active feature acquisition to calculate the utility of acquiring a missing feature. This heuristic takes into account the change in
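
    A generic outline of an active feature acquisition scoring step is sketched below. The scoring rule (classifier uncertainty on an instance times the importance of the missing feature) is a simplified stand-in for the paper's heuristic, not a reimplementation of it.

```python
# Generic active-feature-acquisition ranking: score each missing (instance, feature) pair
# by classifier uncertainty on the instance times the feature's importance. This scoring
# rule is a simplified stand-in for the paper's heuristic, not the heuristic itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_acquisitions(X, y, missing, impute=0.0):
    """X contains np.nan for unacquired features; `missing` lists (row, col) candidates."""
    X_filled = np.nan_to_num(X, nan=impute)
    clf = RandomForestClassifier(random_state=0).fit(X_filled, y)
    proba = clf.predict_proba(X_filled)
    uncertainty = 1.0 - proba.max(axis=1)              # low-confidence instances are worth improving
    importance = clf.feature_importances_               # influential features are worth acquiring
    scored = [((i, j), uncertainty[i] * importance[j]) for i, j in missing]
    return sorted(scored, key=lambda t: -t[1])          # acquire the top-scoring pairs first

# Usage (illustrative): top_pairs = rank_acquisitions(X_train, labels, missing_pairs)[:budget]
```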

  13. Efficiency Improvement of Kalman Filter for GNSS/INS through One-Step Prediction of P Matrix

    Directory of Open Access Journals (Sweden)

    Qingli Li

    2015-01-01

    Full Text Available To meet the real-time and low power consumption demands in the MEMS navigation and guidance field, an improved Kalman filter algorithm for GNSS/INS, named one-step prediction of the P matrix, is proposed in this paper. Quantitative analysis of field test datasets was made to compare the navigation accuracy with the standard algorithm, which indicated that the degradation caused by the simplified algorithm is small enough compared to the navigation errors of the GNSS/INS system itself. Meanwhile, the computation load and time consumption of the algorithm decreased by over 50% with the improved algorithm. The work has special significance for navigation applications that require low power consumption and strict real-time response, such as cellphones, wearable devices, and deeply coupled GNSS/INS systems.
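
    For readers unfamiliar with where the P matrix enters, the sketch below shows a standard Kalman predict/update pair; the covariance propagation line is the step the paper's one-step prediction simplifies. The matrices here form a generic constant-velocity toy example, not the paper's GNSS/INS error-state model.

```python
# Standard Kalman filter predict/update step. The covariance propagation (P = F P F^T + Q)
# is the part addressed by the paper's "one-step prediction of P" simplification; here it is
# kept in full form, and the matrices below are a generic toy model, not the GNSS/INS model.
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    # prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # measurement update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: 2-state constant-velocity model with a position measurement.
F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, z=np.array([1.2]), F=F, Q=Q, H=H, R=R)
print(x, np.diag(P))
```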

  14. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, in Cultural Heritage research quite different actors are involved (archaeologists, curators, conservators, simple users) each of diverse needs. All these statements advocate that a 5D modelling (3D geometry plus time plus levels of details) is ideally required for preservation and assessment of outdoor large scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different time and levels of details. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to be validated in real life conditions. In this paper, a cost effective and affordable framework for 5D modelling is proposed based on a spatial-temporal dependent aggregation of 3D digital models, by incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of details at next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needed further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, to localize surfaces within the objects where a high accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  15. The ability of in vitro antioxidant assays to predict the efficiency of a cod protein hydrolysate and brown seaweed extract to prevent oxidation in marine food model systems.

    Science.gov (United States)

    Jónsdóttir, Rósa; Geirsdóttir, Margrét; Hamaguchi, Patricia Y; Jamnik, Polona; Kristinsson, Hordur G; Undeland, Ingrid

    2016-04-01

    The ability of different in vitro antioxidant assays to predict the efficiency of cod protein hydrolysate (CPH) and Fucus vesiculosus ethyl acetate extract (EA) towards lipid oxidation in haemoglobin-fortified washed cod mince and iron-containing cod liver oil emulsion was evaluated. The progression of oxidation was followed by sensory analysis, lipid hydroperoxides and thiobarbituric acid-reactive substances (TBARS) in both systems, as well as loss of redness and protein carbonyls in the cod system. The in vitro tests revealed high reducing capacity, high DPPH radical scavenging properties and a high oxygen radical absorbance capacity (ORAC) value of the EA which also inhibited lipid and protein oxidation in the cod model system. The CPH had a high metal chelating capacity and was efficient against oxidation in the cod liver oil emulsion. The results indicate that the F. vesiculosus extract has a potential as an excellent natural antioxidant against lipid oxidation in fish muscle foods while protein hydrolysates are more promising for fish oil emulsions. The usefulness of in vitro assays to predict the antioxidative properties of new natural ingredients in foods thus depends on the knowledge about the food systems, particularly the main pro-oxidants present. © 2015 Society of Chemical Industry.

  16. Exceptional influenza morbidity in summer season of 2017 in Israel may predict the vaccine efficiency in the coming winter.

    Science.gov (United States)

    Pando, Rakefet; Sharabi, Sivan; Mandelboim, Michal

    2018-03-07

    Influenza infections are the leading cause of respiratory viral infections worldwide and are most common in the winter season. The seasonal influenza vaccine is currently the most effective preventive modality against influenza infection. Immediately following each winter season the World Health Organization (WHO) announces the vaccine composition for the following winter. Unexpectedly, during the summer of 2017 in Israel, we observed an exceptionally high number of influenza-positive cases in hospitalized patients. The majority of the influenza B infections were caused by the influenza B/Yamagata lineage, which did not circulate in Israel in the previous winter, and most of the influenza A infections were caused by influenza A/H3N2, a strain similar to the strain that circulated in Israel in the previous winter. We therefore predict that these two viruses will circulate in the coming winter of 2017/18 and that the trivalent vaccine, which includes antigenically different viruses, will be inefficient. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. An efficient method for the prediction of deleterious multiple-point mutations in the secondary structure of RNAs using suboptimal folding solutions

    Directory of Open Access Journals (Sweden)

    Barash Danny

    2008-04-01

    Full Text Available Abstract Background RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects only those mutations, based on stability considerations, which are likely to be conformationally rearranging. The approach is best examined using the dot plot representation for RNA secondary structure. Results Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary
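
    To make the combinatorial cost concrete, the short calculation below counts the m-point mutants an exhaustive scan would have to fold (each of the m positions chosen from n can change to 3 other bases), which is the O(n^m) growth the RNAsubopt-based pre-selection avoids.

```python
# Number of distinct m-point mutants of a sequence of length n: C(n, m) * 3**m,
# i.e. O(n^m) growth -- the cost that the RNAsubopt-based pre-selection avoids.
from math import comb

def num_mutants(n, m):
    return comb(n, m) * 3**m

for m in (1, 2, 3):
    print(f"n=100, m={m}: {num_mutants(100, m):,} candidate mutants to fold")
# n=100, m=1: 300   n=100, m=2: 44,550   n=100, m=3: 4,365,900
```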

  18. Combining Microbial Enzyme Kinetics Models with Light Use Efficiency Models to Predict CO2 and CH4 Ecosystem Exchange from Flooded and Drained Peatland Systems

    Science.gov (United States)

    Oikawa, P. Y.; Jenerette, D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Baldocchi, D. D.

    2014-12-01

    Under California's Cap-and-Trade program, companies are looking to invest in land-use practices that will reduce greenhouse gas (GHG) emissions. The Sacramento-San Joaquin River Delta is a drained cultivated peatland system and a large source of CO2. To slow soil subsidence and reduce CO2 emissions, there is growing interest in converting drained peatlands to wetlands. However, wetlands are large sources of CH4 that could offset CO2-based GHG reductions. The goal of our research is to provide accurate measurements and model predictions of the changes in GHG budgets that occur when drained peatlands are restored to wetland conditions. We have installed a network of eddy covariance towers across multiple land use types in the Delta and have been measuring CO2 and CH4 ecosystem exchange for multiple years. In order to upscale these measurements through space and time we are using these data to parameterize and validate a process-based biogeochemical model. To predict gross primary productivity (GPP), we are using a simple light use efficiency (LUE) model which requires estimates of light, leaf area index and air temperature and can explain 90% of the observed variation in GPP in a mature wetland. To predict ecosystem respiration we have adapted the Dual Arrhenius Michaelis-Menten (DAMM) model. The LUE-DAMM model allows accurate simulation of half-hourly net ecosystem exchange (NEE) in a mature wetland (r2=0.85). We are working to expand the model to pasture, rice and alfalfa systems in the Delta. To predict methanogenesis, we again apply a modified DAMM model, using simple enzyme kinetics. However CH4 exchange is complex and we have thus expanded the model to predict not only microbial CH4 production, but also CH4 oxidation, CH4 storage and the physical processes regulating the release of CH4 to the atmosphere. The CH4-DAMM model allows accurate simulation of daily CH4 ecosystem exchange in a mature wetland (r2=0.55) and robust estimates of annual CH4 budgets. The LUE
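
    As a rough illustration of how the two pieces fit together, the sketch below couples a light use efficiency estimate of GPP with a Dual Arrhenius Michaelis-Menten style respiration term. Parameter values are placeholders, and the study's LUE-DAMM model is considerably more detailed (CH4 production, oxidation, storage and transport are not sketched here).

```python
# Rough illustration of coupling a light use efficiency (LUE) GPP estimate with a
# Dual Arrhenius Michaelis-Menten (DAMM) style respiration term. All parameter values
# are placeholders; the study's LUE-DAMM model is more detailed than this sketch.
import math

R_GAS = 8.314  # J mol-1 K-1

def gpp_lue(par, fpar, eps_max=1.8, scalar=1.0):
    """GPP (g C m-2 d-1) = eps_max * fPAR * PAR, optionally down-scaled by `scalar`."""
    return eps_max * fpar * par * scalar

def reco_damm(temp_k, substrate, o2, alpha=1e12, ea=62e3, km_s=0.02, km_o2=0.10):
    """DAMM-style respiration: Arrhenius temperature response times two Michaelis-Menten terms."""
    vmax = alpha * math.exp(-ea / (R_GAS * temp_k))
    return vmax * substrate / (km_s + substrate) * o2 / (km_o2 + o2)

nee = reco_damm(293.15, substrate=0.05, o2=0.2) - gpp_lue(par=8.0, fpar=0.7)
print(f"NEE ~ {nee:.2f} g C m-2 d-1 (illustrative numbers only)")
```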

  19. Condensational Growth of Combination Drug-Excipient Submicrometer Particles for Targeted High Efficiency Pulmonary Delivery: Comparison of CFD Predictions with Experimental Results

    Science.gov (United States)

    Hindle, Michael

    2011-01-01

    Purpose The objective of this study was to investigate the hygroscopic growth of combination drug and excipient submicrometer aerosols for respiratory drug delivery using in vitro experiments and a newly developed computational fluid dynamics (CFD) model. Methods Submicrometer combination drug and excipient particles were generated experimentally using both the capillary aerosol generator and the Respimat inhaler. Aerosol hygroscopic growth was evaluated in vitro and with CFD in a coiled tube geometry designed to provide residence times and thermodynamic conditions consistent with the airways. Results The in vitro results and CFD predictions both indicated that the initially submicrometer particles increased in mean size to a range of 1.6–2.5 µm for the 50:50 combination of a non-hygroscopic drug (budesonide) and different hygroscopic excipients. CFD results matched the in vitro predictions to within 10% and highlighted gradual and steady size increase of the droplets, which will be effective for minimizing extrathoracic deposition and producing deposition deep within the respiratory tract. Conclusions Enhanced excipient growth (EEG) appears to provide an effective technique to increase pharmaceutical aerosol size, and the developed CFD model will provide a powerful design tool for optimizing this technique to produce high efficiency pulmonary delivery. PMID:21948458

  20. Condensational growth of combination drug-excipient submicrometer particles for targeted high efficiency pulmonary delivery: comparison of CFD predictions with experimental results.

    Science.gov (United States)

    Longest, P Worth; Hindle, Michael

    2012-03-01

    The objective of this study was to investigate the hygroscopic growth of combination drug and excipient submicrometer aerosols for respiratory drug delivery using in vitro experiments and a newly developed computational fluid dynamics (CFD) model. Submicrometer combination drug and excipient particles were generated experimentally using both the capillary aerosol generator and the Respimat inhaler. Aerosol hygroscopic growth was evaluated in vitro and with CFD in a coiled tube geometry designed to provide residence times and thermodynamic conditions consistent with the airways. The in vitro results and CFD predictions both indicated that the initially submicrometer particles increased in mean size to a range of 1.6-2.5 μm for the 50:50 combination of a non-hygroscopic drug (budesonide) and different hygroscopic excipients. CFD results matched the in vitro predictions to within 10% and highlighted gradual and steady size increase of the droplets, which will be effective for minimizing extrathoracic deposition and producing deposition deep within the respiratory tract. Enhanced excipient growth (EEG) appears to provide an effective technique to increase pharmaceutical aerosol size, and the developed CFD model will provide a powerful design tool for optimizing this technique to produce high efficiency pulmonary delivery.

  1. A hydrological prediction system based on the SVS land-surface scheme: efficient calibration of GEM-Hydro for streamflow simulation over the Lake Ontario basin

    Directory of Open Access Journals (Sweden)

    É. Gaborit

    2017-09-01

    Full Text Available This work explores the potential of the distributed GEM-Hydro runoff modeling platform, developed at Environment and Climate Change Canada (ECCC) over the last decade. More precisely, the aim is to develop a robust implementation methodology to perform reliable streamflow simulations with a distributed model over large and partly ungauged basins, in an efficient manner. The latest version of GEM-Hydro combines the SVS (Soil, Vegetation and Snow) land-surface scheme and the WATROUTE routing scheme. SVS has never been evaluated from a hydrological point of view, which is done here for all major rivers flowing into Lake Ontario. Two established hydrological models are confronted to GEM-Hydro, namely MESH and WATFLOOD, which share the same routing scheme (WATROUTE) but rely on different land-surface schemes. All models are calibrated using the same meteorological forcings, objective function, calibration algorithm, and basin delineation. GEM-Hydro is shown to be competitive with MESH and WATFLOOD: the NSE √ (Nash–Sutcliffe criterion computed on the square root of the flows) is for example equal to 0.83 for MESH and GEM-Hydro in validation on the Moira River basin, and to 0.68 for WATFLOOD. A computationally efficient strategy is proposed to calibrate SVS: a simple unit hydrograph is used for routing instead of WATROUTE. Global and local calibration strategies are compared in order to estimate runoff for ungauged portions of the Lake Ontario basin. Overall, streamflow predictions obtained using a global calibration strategy, in which a single parameter set is identified for the whole basin of Lake Ontario, show accuracy comparable to the predictions based on local calibration: the average NSE √ in validation and over seven subbasins is 0.73 and 0.61, respectively for local and global calibrations. Hence, global calibration provides spatially consistent parameter values, robust performance at gauged locations, and reduces the

  2. A hydrological prediction system based on the SVS land-surface scheme: efficient calibration of GEM-Hydro for streamflow simulation over the Lake Ontario basin

    Science.gov (United States)

    Gaborit, Étienne; Fortin, Vincent; Xu, Xiaoyong; Seglenieks, Frank; Tolson, Bryan; Fry, Lauren M.; Hunter, Tim; Anctil, François; Gronewold, Andrew D.

    2017-09-01

    This work explores the potential of the distributed GEM-Hydro runoff modeling platform, developed at Environment and Climate Change Canada (ECCC) over the last decade. More precisely, the aim is to develop a robust implementation methodology to perform reliable streamflow simulations with a distributed model over large and partly ungauged basins, in an efficient manner. The latest version of GEM-Hydro combines the SVS (Soil, Vegetation and Snow) land-surface scheme and the WATROUTE routing scheme. SVS has never been evaluated from a hydrological point of view, which is done here for all major rivers flowing into Lake Ontario. Two established hydrological models are confronted to GEM-Hydro, namely MESH and WATFLOOD, which share the same routing scheme (WATROUTE) but rely on different land-surface schemes. All models are calibrated using the same meteorological forcings, objective function, calibration algorithm, and basin delineation. GEM-Hydro is shown to be competitive with MESH and WATFLOOD: the NSE √ (Nash-Sutcliffe criterion computed on the square root of the flows) is for example equal to 0.83 for MESH and GEM-Hydro in validation on the Moira River basin, and to 0.68 for WATFLOOD. A computationally efficient strategy is proposed to calibrate SVS: a simple unit hydrograph is used for routing instead of WATROUTE. Global and local calibration strategies are compared in order to estimate runoff for ungauged portions of the Lake Ontario basin. Overall, streamflow predictions obtained using a global calibration strategy, in which a single parameter set is identified for the whole basin of Lake Ontario, show accuracy comparable to the predictions based on local calibration: the average NSE √ in validation and over seven subbasins is 0.73 and 0.61, respectively for local and global calibrations. Hence, global calibration provides spatially consistent parameter values, robust performance at gauged locations, and reduces the complexity and computation burden of the
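
    The NSE √ criterion quoted above is the Nash-Sutcliffe efficiency evaluated on square-root-transformed flows, which damps the influence of peak flows; a minimal implementation with made-up example flows is sketched below.

```python
# Nash-Sutcliffe efficiency computed on square-root-transformed flows (the NSE sqrt
# criterion quoted above); the square root reduces the weight of peak flows.
import numpy as np

def nse_sqrt(obs, sim):
    o, s = np.sqrt(np.asarray(obs, float)), np.sqrt(np.asarray(sim, float))
    return 1.0 - np.sum((s - o) ** 2) / np.sum((o - o.mean()) ** 2)

obs = [12.0, 30.0, 55.0, 20.0, 8.0]   # observed daily flows (m3/s), made-up example values
sim = [10.0, 33.0, 48.0, 22.0, 9.0]   # simulated flows
print(f"NSE_sqrt = {nse_sqrt(obs, sim):.2f}")
```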

  3. CRISPRpred: A flexible and efficient tool for sgRNAs on-target activity prediction in CRISPR/Cas9 systems.

    Directory of Open Access Journals (Sweden)

    Md Khaledur Rahman

    Full Text Available The CRISPR/Cas9-sgRNA system has recently become a popular tool for genome editing and a very hot topic in the field of medical research. In this system, Cas9 protein is directed to a desired location for gene engineering and cleaves the target DNA sequence, which is complementary to a 20-nucleotide guide sequence found within the sgRNA. A lot of experimental effort, ranging from in vivo selection to in silico modeling, has been made for efficient design of sgRNAs in the CRISPR/Cas9 system. In this article, we present a novel tool, called CRISPRpred, for efficient in silico prediction of sgRNA on-target activity, which is based on a Support Vector Machine (SVM) model. To conduct experiments, we have used a benchmark dataset of 17 genes and 5310 guide sequences where there are only 20% true values. CRISPRpred achieves an Area Under the Receiver Operating Characteristics Curve (AUROC), Area Under the Precision Recall Curve (AUPR) and maximum Matthews Correlation Coefficient (MCC) of 0.85, 0.56 and 0.48, respectively. Our tool shows approximately 5% improvement in AUPR, and after analyzing all evaluation metrics, we find that CRISPRpred is better than the current state-of-the-art. CRISPRpred is flexible enough to extract relevant features and use them in a learning algorithm. The source code of our entire software with the relevant dataset can be found in the following link: https://github.com/khaled-buet/CRISPRpred.
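
    A minimal sketch of SVM-based on-target activity classification with the metrics quoted above (AUROC, AUPR, MCC) is given below. The synthetic guide sequences, one-hot featurization and SVM settings are illustrative placeholders, not CRISPRpred's actual feature engineering or data.

```python
# Sketch of SVM classification of guide on-target activity with the metrics quoted above.
# The synthetic guides/labels, one-hot features and SVM settings are placeholders,
# not CRISPRpred's actual feature set, data or tuning.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score, matthews_corrcoef

def one_hot(guide):                                    # 20-nt guide -> 80-dimensional vector
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    v = np.zeros((len(guide), 4))
    v[np.arange(len(guide)), [table[b] for b in guide]] = 1
    return v.ravel()

rng = np.random.default_rng(0)                          # placeholder data, random by construction
guides = ["".join(rng.choice(list("ACGT"), 20)) for _ in range(500)]
labels = rng.integers(0, 2, size=500)

X = np.array([one_hot(g) for g in guides])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", probability=True, class_weight="balanced").fit(X_tr, y_tr)
score = clf.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, score))
print("AUPR :", average_precision_score(y_te, score))
print("MCC  :", matthews_corrcoef(y_te, score > 0.5))
```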

  4. Seismic prediction on the favorable efficient development areas of the Longwangmiao Fm gas reservoir in the Gaoshiti–Moxi area, Sichuan Basin

    Directory of Open Access Journals (Sweden)

    Guangrong Zhang

    2017-05-01

    Full Text Available The Lower Cambrian Longwangmiao Fm gas reservoir in the Gaoshiti–Moxi area, the Sichuan Basin, is a super giant monoblock marine carbonate gas reservoir with its single size being the largest in China. The key to the realization of high and stable production gas wells in this gas reservoir is to accurately identify high-permeability zones where there are dissolved pores or where dissolved pores are superimposed with fractures. However, high quality dolomite reservoirs are characterized by large burial depth and strong heterogeneity, so reservoir prediction is difficult. In this paper, related seismic research was carried out and supporting technologies were developed as follows. First, a geologic model was built after an analysis of the existing data and forward modeling was carried out to establish a reservoir seismic response model. Second, by virtue of well-oriented amplitude processing technology, the spherical diffusion compensation factor was obtained based on VSP well logging data and the true amplitude of seismic data was recovered. Third, the resolution of deep seismic data was improved by using the well-oriented high-resolution frequency-expanding technology and prestack time migration data of high quality was acquired. And fourth, multiple shoal facies reservoirs were traced by using the global automatic seismic interpretation technology which is based on a stratigraphic model, multiple reservoirs which are laterally continuous and vertically superimposed could be predicted, and the areal distribution of high quality reservoirs could be described accurately and efficiently. By virtue of the supporting technologies, drilling trajectories are positioned accurately, and the deployed development wells all have high yield. These technologies also promote the construction of a modern supergiant gas field of tens of billions of cubic meters.

  5. Numerical model for predicting thermodynamic cycle and thermal efficiency of a beta-type Stirling engine with rhombic-drive mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chin-Hsiang; Yu, Ying-Ju [Department of Aeronautics and Astronautics, National Cheng Kung University, No. 1, Ta-Shieh Road, Tainan 70101, Taiwan (China)

    2010-11-15

    This study is aimed at development of a numerical model for a beta-type Stirling engine with rhombic-drive mechanism. By taking into account the non-isothermal effects, the effectiveness of the regenerative channel, and the thermal resistance of the heating head, the energy equations for the control volumes in the expansion chamber, the compression chamber, and the regenerative channel can be derived and solved. Meanwhile, a fully developed flow velocity profile in the regenerative channel, in terms of the reciprocating velocity of the displacer and the instantaneous pressure difference between the expansion and the compression chambers, is derived for calculation of the mass flow rate through the regenerative channel. In this manner, the internal irreversibility caused by pressure difference in the two chambers and the viscous shear effects due to the motion of the reciprocating displacer on the fluid flow in the regenerative channel gap are included. Periodic variation of pressures, volumes, temperatures, masses, and heat transfers in the expansion and the compression chambers are predicted. A parametric study of the dependence of the power output and thermal efficiency on the geometrical and physical parameters, involving regenerative gap, distance between two gears, offset distance from the crank to the center of gear, and the heat source temperature, has been performed. (author)

  6. Isothermal approach to predict the removal efficiency of β-carotene adsorption from CPO using activated carbon produced from tea waste

    Science.gov (United States)

    Harahap, S. A. A.; Nazar, A.; Yunita, M.; Pasaribu, RA; Panjaitan, F.; Yanuar, F.; Misran, E.

    2018-02-01

    Adsorption of β-carotene in crude palm oil (CPO) was studied using activated carbon produced from tea waste (ACTW) as an adsorbent. Isothermal studies were carried out at 60 °C with activated carbon to CPO ratios of 1:3, 1:4, 1:5, and 1:6, respectively. The ACTW showed excellent performance, as the percentage of adsorption of β-carotene from CPO was > 99%. The best percentage removal (R) was achieved at an ACTW to CPO ratio of 1:3, which was 99.61%. The appropriate isotherm model for this study was the Freundlich isotherm model. The combination of the Freundlich isotherm equation and the mass balance equation showed good agreement when validated against the experimental data. The combined equation was subsequently used to predict the removal efficiency under given sets of operating conditions. At a targeted R, the CPO volume can be estimated for a certain initial β-carotene concentration in CPO (C0) and mass of ACTW adsorbent (M) used.
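
    The prediction step described above combines the Freundlich isotherm q = K_F·Ce^(1/n) with the batch mass balance q = (C0 - Ce)·V/M; the sketch below solves the pair for the equilibrium concentration and removal efficiency. The constants and operating values shown are placeholders, not the fitted values of the study.

```python
# Solve the Freundlich isotherm q = K_F * Ce**(1/n) together with the batch mass balance
# q = (C0 - Ce) * V / M for the equilibrium concentration Ce, then report removal R (%).
# K_F, n and the operating values below are placeholders, not the study's fitted constants.
from scipy.optimize import brentq

def removal_efficiency(C0, V, M, K_F=15.0, n=2.0):
    """C0: initial beta-carotene concentration (mg/L), V: CPO volume (L), M: adsorbent mass (g)."""
    f = lambda Ce: K_F * Ce ** (1.0 / n) - (C0 - Ce) * V / M   # isotherm minus mass balance
    Ce = brentq(f, 1e-12, C0)                                   # the root lies between 0 and C0
    return 100.0 * (C0 - Ce) / C0, Ce

R, Ce = removal_efficiency(C0=500.0, V=0.3, M=100.0)
print(f"predicted removal R = {R:.2f}% (Ce = {Ce:.2f} mg/L)")
```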

  7. Transpulmonary thermodilution (TPTD) before, during and after Sustained Low Efficiency Dialysis (SLED). A Prospective Study on Feasibility of TPTD and Prediction of Successful Fluid Removal.

    Directory of Open Access Journals (Sweden)

    Wolfgang Huber

    Full Text Available Acute kidney injury (AKI) is common in critically ill patients. AKI requires renal replacement therapy (RRT) in up to 10% of patients. Particularly during connection and fluid removal, RRT frequently impairs haemodynamics, which impedes recovery from AKI. Therefore, "acute" connection with prefilled tubing and prolonged periods of RRT, including sustained low efficiency dialysis (SLED), have been suggested. Furthermore, advanced haemodynamic monitoring using trans-pulmonary thermodilution (TPTD) and pulse contour analysis (PCA) might help to define appropriate fluid removal goals. Since data on TPTD to guide RRT are scarce, we investigated the capabilities of TPTD- and PCA-derived parameters to predict feasibility of fluid removal in 51 SLED-sessions (Genius; Fresenius, Germany; blood-flow 150 mL/min) in 32 patients with PiCCO-monitoring (Pulsion Medical Systems, Germany). Furthermore, we sought to validate the reliability of TPTD during RRT and investigated the impact of "acute" connection and of disconnection with re-transfusion on haemodynamics. TPTDs were performed immediately before and after connection as well as disconnection. Comparison of cardiac index derived from TPTD (CItd) and PCA (CIpc) before, during and after RRT did not give hints for confounding of TPTD by ongoing RRT. Connection to RRT did not result in relevant changes in haemodynamic parameters including CItd. However, disconnection with re-transfusion of the tubing volume resulted in significant increases in CItd, CIpc, CVP, global end-diastolic volume index (GEDVI) and cardiac power index (CPI). Feasibility of the pre-defined ultrafiltration goal without increasing catecholamines by >10% (primary endpoint) was significantly predicted by baseline CPI (ROC-AUC 0.712; p = 0.010) and CItd (ROC-AUC 0.662; p = 0.049). TPTD is feasible during SLED. "Acute" connection does not substantially impair haemodynamics. Disconnection with re-transfusion increases preload, CI and CPI. The extent of these changes

  8. Prediction of FAD binding sites in electron transport proteins according to efficient radial basis function networks and significant amino acid pairs.

    Science.gov (United States)

    Le, Nguyen-Quoc-Khanh; Ou, Yu-Yen

    2016-07-30

    Cellular respiration is a catabolic pathway for producing adenosine triphosphate (ATP) and is the most efficient process through which cells harvest energy from consumed food. When cells undergo cellular respiration, they require a pathway to hold and transfer electrons (i.e., the electron transport chain). Through oxidation-reduction reactions, the electron transport chain produces a transmembrane proton electrochemical gradient. When protons flow back through this membrane, the mechanical energy is converted into chemical energy by ATP synthase. This conversion process produces ATP, which provides energy for many cellular processes. In the electron transport chain, flavin adenine dinucleotide (FAD) is one of the most vital molecules for carrying and transferring electrons. Therefore, predicting FAD binding sites in the electron transport chain is vital for helping biologists understand the electron transport chain process and energy production in cells. We used an independent data set to evaluate the performance of the proposed method, which had an accuracy of 69.84%. We compared the performance of the proposed method in analyzing two newly discovered electron transport protein sequences with that of the general FAD binding predictor presented by Mishra and Raghava and determined that the accuracy of the proposed method improved by 9-45% and its Matthews correlation coefficient was 0.14-0.5. Furthermore, the proposed method significantly reduced the number of false positives and can provide useful information for biologists. We developed a method that is based on PSSM profiles and SAAPs for identifying FAD binding sites in newly discovered electron transport protein sequences. This approach achieved a significant improvement after we added SAAPs to PSSM features to analyze FAD binding proteins in the electron transport chain. The proposed method can serve as an effective tool for predicting FAD binding sites in electron

  9. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    Czech Academy of Sciences Publication Activity Database

    Zhou, Y.; Wu, X.; Weiming, J.; Chen, J.; Wang, S.; Wang, H.; Wenping, Y.; Black, T. A.; Jassal, R.; Ibrom, A.; Han, S.; Yan, J.; Margolis, H.; Roupsard, O.; Li, Y.; Zhao, F.; Kiely, G.; Starr, G.; Pavelka, Marian; Montagnani, L.; Wohlfahrt, G.; D'Odorico, P.; Cook, D.; Altaf Arain, M.; Bonal, D.; Beringer, J.; Blanken, P. D.; Loubet, B.; Leclerc, M. Y.; Matteucci, G.; Nagy, Z.; Olejnik, Janusz; U., K. T. P.; Varlagin, A.

    2016-01-01

    Roč. 36, č. 7 (2016), s. 2743-2760 ISSN 2169-8953 Institutional support: RVO:67179843 Keywords: global parametrization * predicting model * FLUXNET Subject RIV: EH - Ecology, Behaviour Impact factor: 3.395, year: 2016

  10. Only 7% of the variation in feed efficiency in veal calves can be predicted from variation in feeding motivation, digestion, metabolism, immunology, and behavioral traits in early life

    NARCIS (Netherlands)

    Gilbert, M.S.; Borne, van den J.J.G.C.; Reenen, van C.G.; Gerrits, W.J.J.

    2017-01-01

    High interindividual variation in growth performance is commonly observed in veal calf production and appears to depend on milk replacer (MR) composition. Our first objective was to examine whether variation in growth performance in healthy veal calves can be predicted from early life

  11. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites

    DEFF Research Database (Denmark)

    Zhou, Yanlian; Wu, Xiaocui; Ju, Weimin

    2015-01-01

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at six FLUX sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites which are distributed across the globe. The results showed that the TL-LUE model performed in general better than the MOD17 model in simulating 8 day GPP. Optimized maximum light use efficiency of shaded leaves (epsilon(msh)) was 2.63 to 4.59 times that of sunlit leaves...

  12. The efficiency of therapeutic erythrocytapheresis compared to phlebotomy: a mathematical tool for predicting response in hereditary hemochromatosis, polycythemia vera, and secondary erythrocytosis.

    Science.gov (United States)

    Evers, Dorothea; Kerkhoffs, Jean-Louis; Van Egmond, Liane; Schipperus, Martin R; Wijermans, Pierre W

    2014-06-01

    Recently, therapeutic erythrocytapheresis (TE) was suggested to be more efficient in depletion of red blood cells (RBC) compared to manual phlebotomy in the treatment of hereditary hemochromatosis (HH), polycythemia vera (PV), and secondary erythrocytosis (SE). The efficiency rate (ER) of TE, that is, the increase in RBC depletion achieved with one TE cycle compared to one phlebotomy procedure, can be calculated based on estimated blood volume (BV), preprocedural hematocrit (Hct(B)), and delta-hematocrit (ΔHct). In a retrospective evaluation of 843 TE procedures (in 45 HH, 33 PV, and 40 SE patients) the mean ER was 1.86 ± 0.62, with the highest rates achieved in HH patients. An ER of 1.5 was not reached in 37.9% of all procedures, mainly in patients with a BV below 4,500 ml. In 12 newly diagnosed homozygous HH patients, the induction phase lasted a median of 38.4 weeks (a median of 10.5 procedures). During the maintenance treatment of HH, PV, and SE, the interval between TE procedures was a median of 13.4 weeks. This mathematical model can help select the proper treatment modality for the individual patient. Especially for patients with a large BV and a high achievable ΔHct, TE appears to be more efficient than manual phlebotomy in RBC depletion, thereby potentially reducing the number of procedures and expanding the interprocedural time period for HH, PV, and SE. © 2013 Wiley Periodicals, Inc.

  13. Global parameterization and validation of a two-leaf light use efficiency model for predicting gross primary production across FLUXNET sites: TL-LUE Parameterization and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yanlian [Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wu, Xiaocui [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Ju, Weimin [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Jiangsu Center for Collaborative Innovation in Geographic Information Resource Development and Application, Nanjing China; Chen, Jing M. [International Institute for Earth System Sciences, Nanjing University, Nanjing China; Joint Center for Global Change Studies, Beijing China; Wang, Shaoqiang [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Wang, Huimin [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Yuan, Wenping [State Key Laboratory of Earth Surface Processes and Resource Ecology, Future Earth Research Institute, Beijing Normal University, Beijing China; Andrew Black, T. [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Jassal, Rachhpal [Faculty of Land and Food Systems, University of British Columbia, Vancouver British Columbia Canada; Ibrom, Andreas [Department of Environmental Engineering, Technical University of Denmark (DTU), Kgs. Lyngby Denmark; Han, Shijie [Institute of Applied Ecology, Chinese Academy of Sciences, Shenyang China; Yan, Junhua [South China Botanical Garden, Chinese Academy of Sciences, Guangzhou China; Margolis, Hank [Centre for Forest Studies, Faculty of Forestry, Geography and Geomatics, Laval University, Quebec City Quebec Canada; Roupsard, Olivier [CIRAD-Persyst, UMR Ecologie Fonctionnelle and Biogéochimie des Sols et Agroécosystèmes, SupAgro-CIRAD-INRA-IRD, Montpellier France; CATIE (Tropical Agricultural Centre for Research and Higher Education), Turrialba Costa Rica; Li, Yingnian [Northwest Institute of Plateau Biology, Chinese Academy of Sciences, Xining China; Zhao, Fenghua [Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Science, Beijing China; Kiely, Gerard [Environmental Research Institute, Civil and Environmental Engineering Department, University College Cork, Cork Ireland; Starr, Gregory [Department of Biological Sciences, University of Alabama, Tuscaloosa Alabama USA; Pavelka, Marian [Laboratory of Plants Ecological Physiology, Institute of Systems Biology and Ecology AS CR, Prague Czech Republic; Montagnani, Leonardo [Forest Services, Autonomous Province of Bolzano, Bolzano Italy; Faculty of Sciences and Technology, Free University of Bolzano, Bolzano Italy; Wohlfahrt, Georg [Institute for Ecology, University of Innsbruck, Innsbruck Austria; European Academy of Bolzano, Bolzano Italy; D' Odorico, Petra [Grassland Sciences Group, Institute of Agricultural Sciences, ETH Zurich Switzerland; Cook, David [Atmospheric and Climate Research Program, Environmental Science Division, Argonne National Laboratory, Argonne Illinois USA; Arain, M. 
Altaf [McMaster Centre for Climate Change and School of Geography and Earth Sciences, McMaster University, Hamilton Ontario Canada; Bonal, Damien [INRA Nancy, UMR EEF, Champenoux France; Beringer, Jason [School of Earth and Environment, The University of Western Australia, Crawley Australia; Blanken, Peter D. [Department of Geography, University of Colorado Boulder, Boulder Colorado USA; Loubet, Benjamin [UMR ECOSYS, INRA, AgroParisTech, Université Paris-Saclay, Thiverval-Grignon France; Leclerc, Monique Y. [Department of Crop and Soil Sciences, College of Agricultural and Environmental Sciences, University of Georgia, Athens Georgia USA; Matteucci, Giorgio [Viea San Camillo Ed LellisViterbo, University of Tuscia, Viterbo Italy; Nagy, Zoltan [MTA-SZIE Plant Ecology Research Group, Szent Istvan University, Godollo Hungary; Olejnik, Janusz [Meteorology Department, Poznan University of Life Sciences, Poznan Poland; Department of Matter and Energy Fluxes, Global Change Research Center, Brno Czech Republic; Paw U, Kyaw Tha [Department of Land, Air and Water Resources, University of California, Davis California USA; Joint Program on the Science and Policy of Global Change, Massachusetts Institute of Technology, Cambridge USA; Varlagin, Andrej [A.N. Severtsov Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow Russia

    2016-04-06

    Light use efficiency (LUE) models are widely used to simulate gross primary production (GPP). However, the treatment of the plant canopy as a big leaf by these models can introduce large uncertainties in simulated GPP. Recently, a two-leaf light use efficiency (TL-LUE) model was developed to simulate GPP separately for sunlit and shaded leaves and has been shown to outperform the big-leaf MOD17 model at 6 flux sites in China. In this study we investigated the performance of the TL-LUE model for a wider range of biomes. For this we optimized the parameters and tested the TL-LUE model using data from 98 FLUXNET sites distributed across the globe. The results showed that the TL-LUE model generally performed better than the MOD17 model in simulating 8-day GPP. The optimized maximum light use efficiency of shaded leaves (εmsh) was 2.63 to 4.59 times that of sunlit leaves (εmsu). Generally, the relationships of εmsh and εmsu with εmax were well described by linear equations, indicating the existence of general patterns across biomes. GPP simulated by the TL-LUE model was much less sensitive to biases in the photosynthetically active radiation (PAR) input than that of the MOD17 model. The results of this study suggest that the proposed TL-LUE model has the potential for simulating regional and global GPP of terrestrial ecosystems, and that it is more robust to typical biases in input data than existing approaches which neglect the bi-modal within-canopy distribution of PAR.
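
    At its core, the two-leaf approach replaces the single big-leaf product (maximum LUE × absorbed PAR × stress scalars) with separate sunlit and shaded contributions. The snippet below is only a minimal sketch of that idea under assumed names and simplified stress scalars; it is not the calibrated TL-LUE implementation used in the study.

    ```python
    # Minimal two-leaf LUE sketch (illustrative; not the study's calibrated model).
    def two_leaf_gpp(apar_sunlit, apar_shaded, eps_msu, eps_msh, f_temp, f_vpd):
        """GPP (gC m-2) from absorbed PAR of sunlit/shaded leaves (MJ m-2),
        their maximum LUEs (gC MJ-1), and dimensionless stress scalars in [0, 1]."""
        return (eps_msu * apar_sunlit + eps_msh * apar_shaded) * f_temp * f_vpd

    # Example: shaded leaves assumed to have roughly 3x the maximum LUE of sunlit leaves.
    print(two_leaf_gpp(apar_sunlit=2.0, apar_shaded=1.2,
                       eps_msu=0.6, eps_msh=1.8, f_temp=0.9, f_vpd=0.8))
    ```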

  14. Disruption of Pseudomonas putida by high pressure homogenization: a comparison of the predictive capacity of three process models for the efficient release of arginine deiminase.

    Science.gov (United States)

    Patil, Mahesh D; Patel, Gopal; Surywanshi, Balaji; Shaikh, Naeem; Garg, Prabha; Chisti, Yusuf; Banerjee, Uttam Chand

    2016-12-01

    Disruption of Pseudomonas putida KT2440 by high-pressure homogenization in a French press is discussed for the release of arginine deiminase (ADI). The enzyme release response of the disruption process was modelled for the experimental factors of biomass concentration in the broth being disrupted, the homogenization pressure and the number of passes of the cell slurry through the homogenizer. For the same data, the response surface method (RSM), artificial neural network (ANN) and support vector machine (SVM) models were compared for their ability to predict the performance parameters of the cell disruption. The ANN model proved to be the best for predicting ADI release. The fractional disruption of the cells was best modelled by the RSM. The fraction of the cells disrupted depended mainly on the operating pressure of the homogenizer. The concentration of the biomass in the slurry was the most influential factor in determining the total protein release. Nearly 27 U/mL of ADI was released in a single pass from a slurry with a biomass concentration of 260 g/L at an operating pressure of 510 bar. Using a biomass concentration of 100 g/L, the ADI release by the French press was 2.7-fold greater than in a conventional high-speed bead mill, and the total protein release was 5.8-fold greater. Statistical analysis of completely unseen data showed ANN and SVM modelling to be proficient alternatives to RSM for the prediction and generalization of the cell disruption process in the French press.
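
    As a hedged illustration of the model comparison described above (RSM approximated by a second-order polynomial regression, an ANN, and an SVM), the sketch below fits all three to synthetic data mimicking the three factors; the data, hyperparameters and scores are invented and not from the study.

    ```python
    # Compare RSM-style polynomial regression, an MLP, and an SVR on synthetic data
    # (biomass g/L, pressure bar, passes -> hypothetical enzyme release).
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform([50, 200, 1], [260, 900, 5], size=(80, 3))
    y = 0.05 * X[:, 0] + 0.02 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 2, 80)

    models = {
        "RSM (quadratic)": make_pipeline(PolynomialFeatures(2), LinearRegression()),
        "ANN (MLP)": make_pipeline(StandardScaler(),
                                   MLPRegressor((16, 16), max_iter=5000, random_state=0)),
        "SVM (RBF SVR)": make_pipeline(StandardScaler(), SVR(C=10.0)),
    }
    for name, model in models.items():
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: mean CV R^2 = {r2:.2f}")
    ```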

  15. Experimental validation of plant peroxisomal targeting prediction algorithms by systematic comparison of in vivo import efficiency and in vitro PTS1 binding affinity.

    Science.gov (United States)

    Skoulding, Nicola S; Chowdhary, Gopal; Deus, Mara J; Baker, Alison; Reumann, Sigrun; Warriner, Stuart L

    2015-03-13

    Most peroxisomal matrix proteins possess a C-terminal targeting signal type 1 (PTS1). Accurate prediction of functional PTS1 sequences and their relative strength by computational methods is essential for determining peroxisomal proteomes in silico, but has proved challenging due to the high sequence variability of non-canonical targeting signals, particularly in higher plants, and the scarcity of experimentally validated non-canonical examples. In this study, in silico predictions were compared with in vivo targeting analyses and in vitro thermodynamic binding of mutated variants within the context of one model targeting sequence. There was broad agreement between the methods for entire PTS1 domains and position-specific single amino acid residues, including residues upstream of the PTS1 tripeptide. The hierarchy Leu>Met>Ile>Val at the C-terminal position was obtained with all methods, but both experimental approaches suggest that Tyr is underweighted in the prediction algorithm due to the absence of this residue in the positive training dataset. A combination of methods better defines the score range that discriminates a functional PTS1. In vitro binding to the PEX5 receptor could discriminate among strong targeting signals, while in vivo targeting assays were more sensitive, allowing detection of weak functional import signals that were below the limit of detection in the binding assay. Together, the data provide a comprehensive assessment of the factors driving PTS1 efficacy and provide a framework for the more quantitative assessment of the protein import pathway in higher plants.

  16. A study on an efficient prediction of welding deformation for T-joint laser welding of sandwich panel PART I : Proposal of a heat source model

    Directory of Open Access Journals (Sweden)

    Jae Woong Kim

    2013-09-01

    The use of the I-Core sandwich panel has increased in cruise ship deck structures since it can provide bending strength similar to that of a conventional stiffened plate while keeping a lighter weight and lower web height. However, due to its thin plate thickness, i.e. about 4~6 mm at most, it is assembled by high-power CO2 laser welding to minimize the welding deformation. This research proposes a volumetric heat source model for the T-joint of the I-Core sandwich panel and a method to use a shell element model in a thermal elasto-plastic analysis to predict welding deformation. This paper, Part I, focuses on the heat source model. A circular cone type heat source model is newly suggested for the heat transfer analysis to reproduce a melting zone similar to that observed in experiments. An additional suggestion is made to consider negative defocus, which is commonly applied in T-joint laser welding since it provides deeper penetration than zero defocus. The proposed heat source is also verified through a 3D thermal elasto-plastic analysis comparing the predicted welding deformation with experimental results. A parametric study for different welding speeds, defocus values and welding powers is performed to investigate their effect on the melting zone and welding deformation. Part II focuses on the proposed method of employing a shell element model, instead of a solid element model, to predict welding deformation in the thermal elasto-plastic analysis.
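
    For readers who want to experiment with this class of heat source, the sketch below implements a generic cone-shaped volumetric heat source with a Gaussian radial profile, normalised numerically so that its volume integral equals an assumed absorbed laser power. It is not the specific model proposed in the paper; the radii, penetration depth and power are illustrative assumptions.

    ```python
    # Generic conical Gaussian volumetric heat source (illustrative assumptions only).
    import numpy as np

    P_abs = 2000.0                   # absorbed laser power, W (assumed)
    r_top, r_bot = 1.0e-3, 0.3e-3    # cone radii at top/bottom of the keyhole, m (assumed)
    depth = 4.0e-3                   # penetration depth, m (assumed)

    def radius(z):
        """Cone radius shrinking linearly from r_top at z=0 to r_bot at z=depth."""
        return r_top + (r_bot - r_top) * z / depth

    def q_unscaled(r, z):
        """Gaussian radial distribution whose width follows the cone radius."""
        return np.exp(-3.0 * r**2 / radius(z)**2)

    # Normalise numerically so the volume integral equals the absorbed power.
    z = np.linspace(0.0, depth, 200)
    r = np.linspace(0.0, 3 * r_top, 400)
    Z, R = np.meshgrid(z, r, indexing="ij")
    dz, dr = z[1] - z[0], r[1] - r[0]
    integral = np.sum(q_unscaled(R, Z) * 2 * np.pi * R) * dz * dr
    C = P_abs / integral             # W/m^3 scaling constant

    def heat_source(r, z):
        return C * q_unscaled(r, z)  # volumetric heat input, W/m^3

    print(f"peak heat input: {heat_source(0.0, 0.0):.3e} W/m^3")
    ```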

  17. Multiparametric magnetic resonance imaging and frozen-section analysis efficiently predict upgrading, upstaging, and extraprostatic extension in patients undergoing nerve-sparing robotic-assisted radical prostatectomy.

    Science.gov (United States)

    Bianchi, Roberto; Cozzi, Gabriele; Petralia, Giuseppe; Alessi, Sarah; Renne, Giuseppe; Bottero, Danilo; Brescia, Antonio; Cioffi, Antonio; Cordima, Giovanni; Ferro, Matteo; Matei, Deliu Victor; Mazzoleni, Federica; Musi, Gennaro; Mistretta, Francesco Alessandro; Serino, Alessandro; Tringali, Valeria Maria Lucia; Coman, Ioan; De Cobelli, Ottavio

    2016-10-01

    To evaluate the role of multiparametric magnetic resonance imaging (mpMRI) in predicting upgrading, upstaging, and extraprostatic extension in patients with low-risk prostate cancer (PCa). MpMRI may reduce positive surgical margins (PSM) and improve nerve-sparing during robotic-assisted radical prostatectomy (RARP) for localized PCa. This was a retrospective, monocentric, observational study. We retrieved the records of patients undergoing RARP from January 2012 to December 2013 at our Institution. Inclusion criteria were: PSA <10 ng/mL; clinical stage predict upgrading and/or upstaging at final pathology.

  18. [Absolute numbers of peripheral blood CD34+ hematopoietic stem cells prior to a leukapheresis procedure as a parameter predicting the efficiency of stem cell collection].

    Science.gov (United States)

    Galtseva, I V; Davydova, Yu O; Gaponova, T V; Kapranov, N M; Kuzmina, L A; Troitskaya, V V; Gribanova, E O; Kravchenko, S K; Mangasarova, Ya K; Zvonkov, E E; Parovichnikova, E N; Mendeleeva, L P; Savchenko, V G

    To identify a parameter predicting a collection of at least 2·10⁶ CD34+ hematopoietic stem cells (HSC)/kg body weight per leukapheresis (LA) procedure. The investigation included 189 patients with hematological malignancies and 3 HSC donors, who underwent mobilization of stem cells with their subsequent collection by LA. The absolute numbers of peripheral blood leukocytes and CD34+ cells before a LA procedure, as well as the number of CD34+ cells/kg body weight (BW) in the LA product stored on the same day, were determined in each patient (donor). There was no correlation between the number of leukocytes and that of stored CD34+ cells/kg BW. There was a close correlation between the count of peripheral blood CD34+ cells prior to LA and that of collected CD34+ cells calculated with reference to kg BW. The optimal absolute blood CD34+ cell count was estimated to be 20 per µL, at which a LA procedure makes it possible to collect 2·10⁶ or more CD34+ cells/kg BW.

  19. Energy efficiency

    International Nuclear Information System (INIS)

    2010-01-01

    Following a speech by the CEA's (Commissariat à l'Energie Atomique) general administrator presenting energy efficiency as a first-rank challenge for the planet and for France, this publication proposes several contributions: a discussion of the efficiency of nuclear energy, an economic analysis of the value of R&D on fourth-generation fast reactors, discussions of biofuels and of the relationship between energy efficiency and economic competitiveness, and a discussion of solar photovoltaic efficiency.

  20. Computationally efficient prediction of area per lipid

    DEFF Research Database (Denmark)

    Chaban, Vitaly V.

    2014-01-01

    dynamics increases exponentially with respect to temperature. APL dependence on temperature is linear over an entire temperature range. I provide numerical evidence that thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest...

  1. Juggling Efficiency

    DEFF Research Database (Denmark)

    Andersen, Rikke Sand; Vedsted, Peter

    2015-01-01

    on institutional logics, we illustrate how a logic of efficiency organises and gives shape to healthcare-seeking practices as they manifest in local clinical settings. Overall, patient concerns are reconfigured to fit the local clinical setting, and healthcare professionals and patients are required to juggle...... efficiency in order to deal with uncertainties and meet more complex or unpredictable needs. Lastly, building on the empirical case of cancer diagnostics, we discuss the implications of the pervasiveness of the logic of efficiency in the clinical setting and argue that the provision of medical care in today......'s primary care settings requires careful balancing of the increasing demands of efficiency, the greater complexity of biomedical knowledge and consideration for individual patient needs....

  2. Batch efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Schwickerath, Ulrich; Silva, Ricardo; Uria, Christian, E-mail: Ulrich.Schwickerath@cern.c, E-mail: Ricardo.Silva@cern.c [CERN IT, 1211 Geneve 23 (Switzerland)

    2010-04-01

    A frequent source of concern for resource providers is the efficient use of computing resources in their centers. This has a direct impact on requests for new resources. There are two different but strongly correlated aspects to be considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage of their available resources. Both aspects, the box usage and the efficiency of individual user jobs, need to be closely monitored so that the sources of inefficiencies can be identified. At CERN, the Lemon monitoring system is used for both purposes. Examples of such sources are poorly written user code, inefficient access to mass storage systems, and dedication of resources to specific user groups. As a first step towards improvement, CERN has launched a project to develop a scheduler add-on that allows careful overloading of worker nodes that run idle jobs.

  3. Span efficiency in hawkmoths.

    Science.gov (United States)

    Henningsson, Per; Bomphrey, Richard J

    2013-07-06

    Flight in animals is the result of aerodynamic forces generated as flight muscles drive the wings through air. Aerial performance is therefore limited by the efficiency with which momentum is imparted to the air, a property that can be measured using modern techniques. We measured the induced flow fields around six hawkmoth species flying tethered in a wind tunnel to assess span efficiency, ei, and from these measurements, determined the morphological and kinematic characters that predict efficient flight. The species were selected to represent a range in wingspan from 40 to 110 mm (2.75 times) and in mass from 0.2 to 1.5 g (7.5 times) but they were similar in their overall shape and their ecology. From high spatio-temporal resolution quantitative wake images, we extracted time-resolved downwash distributions behind the hawkmoths, calculating instantaneous values of ei throughout the wingbeat cycle as well as multi-wingbeat averages. Span efficiency correlated positively with normalized lift and negatively with advance ratio. Average span efficiencies for the moths ranged from 0.31 to 0.60 showing that the standard generic value of 0.83 used in previous studies of animal flight is not a suitable approximation of aerodynamic performance in insects.

  4. Energy efficiency

    International Nuclear Information System (INIS)

    Marvillet, Ch.; Tochon, P.; Mercier, P.

    2004-01-01

    World energy demand is constantly rising. This is a legitimate trend, insofar as access to energy enables enhanced quality of life and sanitation levels for populations. On the other hand, such increased consumption generates effects that may be catastrophic for the future of the planet (climate change, environmental imbalance), should this growth conform to the patterns followed, up to recent times, by most industrialized countries. Reduction of greenhouse gas emissions, development of new energy sources and energy efficiency are seen as the major challenges to be taken up for the world of tomorrow. In France, the National Energy Debate indeed emphasized, in 2003, the requirement to control both demand for, and supply of, energy, through a strategic orientation law for energy. The French position corresponds to a slightly singular situation - and a privileged one, compared to other countries - owing to massive use of nuclear power for electricity generation. This option allows France to be responsible for a mere 2% of worldwide greenhouse gas emissions. Real advances can nonetheless still be achieved as regards improved energy efficiency, particularly in the transportation and residential-tertiary sectors, following the lead, in this respect, shown by industry. These two sectors indeed account for over half of the country's CO2 emissions (26% and 25% respectively). With respect to transportation, the work carried out by CEA on the hydrogen pathway, energy converters, and electricity storage has been covered by the preceding chapters. As regards housing, a topic addressed by one of the papers in this chapter, investigations at CEA concern integration of the various devices enabling value-added use of renewable energies. At the same time, the organization is carrying through its activity in the extensive area of heat exchangers, allowing industry to benefit from improved understanding in the modeling of flows. An activity evidenced by advances in energy efficiency for

  5. Offsetting efficiency

    International Nuclear Information System (INIS)

    Katz, M.

    1995-01-01

    Whichever way the local distribution company (LDC) tries to convert residential customers to gas or expand their use of it, the process itself has become essential for the natural gas industry. The amount of gas used by each residential customer has been decreasing for 25 years -- since the energy crisis of the early 1970s. It's a direct result of better-insulated homes and more-efficient gas appliances, and that trend is continuing. So, LDCs have a choice of either finding new users and uses for gas, or recognizing that their throughput per customer is going to continue declining. The paper discusses strategies that several gas utilities are using to increase the number of gas appliances in customers' homes. These and other strategies keep the gas industry optimistic about the future of the residential market: A.G.A. has projected that by 2010 demand will expand, from 1994's 5.1 quadrillion Btu (quads) to 5.7 quads, even with continued improvements in appliance efficiency. That estimate, however, will depend on the industry's utilities and whether they keep converting, proselytizing, persuading and influencing customers to use more natural gas.

  6. WALS Prediction

    NARCIS (Netherlands)

    Magnus, J.R.; Wang, W.; Zhang, Xinyu

    2012-01-01

    Abstract: Prediction under model uncertainty is an important and difficult issue. Traditional prediction methods (such as pretesting) are based on model selection followed by prediction in the selected model, but the reported prediction and the reported prediction variance ignore the uncertainty

  7. Modeling of venturi scrubber efficiency

    Science.gov (United States)

    Crowder, Jerry W.; Noll, Kenneth E.; Davis, Wayne T.

    The parameters affecting venturi scrubber performance have been rationally examined and modifications to the current modeling theory have been developed. The modified model has been validated with available experimental data for a range of throat gas velocities, liquid-to-gas ratios and particle diameters and is used to study the effect of some design parameters on collection efficiency. Most striking among the observations is the prediction of a new design parameter termed the minimum contactor length. Also noted is the prediction of little effect on collection efficiency with increasing liquid-to-gas ratio above about 2 ℓ m⁻³. Indeed, for some cases a decrease in collection efficiency is predicted for liquid rates above this value.
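
    To make the saturating dependence on liquid-to-gas ratio concrete, the snippet below evaluates a classical Johnstone-type correlation, η = 1 − exp(−k·(L/G)·√ψ). This is a textbook expression, not the modified model developed in this work, and the constant k and impaction parameter ψ are unit-dependent, illustrative assumptions.

    ```python
    # Johnstone-type venturi scrubber efficiency (illustrative constants, not the paper's model).
    import math

    def johnstone_efficiency(liquid_to_gas, psi, k=0.2):
        """liquid_to_gas: liquid-to-gas ratio (units consistent with k);
        psi: inertial impaction parameter (dimensionless)."""
        return 1.0 - math.exp(-k * liquid_to_gas * math.sqrt(psi))

    # Efficiency rises quickly with L/G, then saturates.
    for lg in (0.5, 1.0, 2.0, 3.0):
        print(lg, round(johnstone_efficiency(lg, psi=50.0), 4))
    ```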

  8. Efficient STFT

    International Nuclear Information System (INIS)

    Aamir, K.M.; Maud, M.A.

    2004-01-01

    Small perturbations in signals (or any time series), at some particular instant, affect the whole frequency spectrum due to the global basis function e^(jωt) in the Fourier transform formulation. However, the Fourier spectrum does not convey the time instant at which the perturbation occurred. Consequently the information on the particular time instant of occurrence of that perturbation is lost when the spectrum is observed. Fourier analysis therefore appears inadequate in such situations. This inadequacy is overcome by the use of the Short Time Fourier Transform (STFT), which keeps track of time as well as frequency information. In STFT analysis, a fixed-length window, say of length N, is moved sample by sample as the data arrive. The Discrete Fourier Transform (DFT) of this fixed window of length N is calculated using the Fast Fourier Transform (FFT) algorithm. If the total number of points is M > N, the computational complexity of this scheme works out to be at least (M−N)·N·log₂N. On the other hand, the STFT is shown to be of computational complexity 6NM and 8NM in the literature. In this paper, two algorithms are presented which compute the same STFT more efficiently. The computational complexity works out to be MN for one of the proposed algorithms and even less for the other. This reduction in complexity becomes significant for large data sets. The algorithm also remains valid if a stationary part of the signal is skipped. (author)
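
    The paper's two algorithms are not reproduced here, but the textbook sliding DFT conveys the same idea of an O(N)-per-shift STFT update: each bin of a one-sample-hopped window is obtained recursively from the previous window instead of recomputing an N-point FFT.

    ```python
    # Sliding DFT: update all N bins in O(N) per new sample (textbook recurrence).
    import numpy as np

    def sliding_stft(x, N):
        """Return the length-N DFT of every window x[n:n+N] (one-sample hop)."""
        M = len(x)
        twiddle = np.exp(2j * np.pi * np.arange(N) / N)
        X = np.fft.fft(x[:N])                 # initialise with a single FFT
        out = [X.copy()]
        for n in range(M - N):                # O(N) work per shift
            X = (X - x[n] + x[n + N]) * twiddle
            out.append(X.copy())
        return np.array(out)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(64)
    S = sliding_stft(x, N=16)
    # agrees with a direct FFT of an arbitrary window
    print(np.allclose(S[10], np.fft.fft(x[10:26])))
    ```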

  9. Prediction of intermetallic compounds

    International Nuclear Information System (INIS)

    Burkhanov, Gennady S; Kiselyova, N N

    2009-01-01

    The problems of predicting not-yet-synthesized intermetallic compounds are discussed. It is noted that the use of classical physicochemical analysis in the study of multicomponent metallic systems is hampered by the complexity of presenting multidimensional phase diagrams. One way of predicting new intermetallics with specified properties is the use of modern information-processing technology employing computer-based pattern recognition (machine learning). The algorithms used most often in these methods are briefly considered and the efficiency of their use for predicting new compounds is demonstrated.

  10. Climate prediction and predictability

    Science.gov (United States)

    Allen, Myles

    2010-05-01

    Climate prediction is generally accepted to be one of the grand challenges of the Geophysical Sciences. What is less widely acknowledged is that fundamental issues have yet to be resolved concerning the nature of the challenge, even after decades of research in this area. How do we verify or falsify a probabilistic forecast of a singular event such as anthropogenic warming over the 21st century? How do we determine the information content of a climate forecast? What does it mean for a modelling system to be "good enough" to forecast a particular variable? How will we know when models and forecasting systems are "good enough" to provide detailed forecasts of weather at specific locations or, for example, the risks associated with global geo-engineering schemes. This talk will provide an overview of these questions in the light of recent developments in multi-decade climate forecasting, drawing on concepts from information theory, machine learning and statistics. I will draw extensively but not exclusively from the experience of the climateprediction.net project, running multiple versions of climate models on personal computers.

  11. Contrasting water-use efficiency (WUE) responses of a potato mapping population and capability of modified ball-berry model to predict stomatal conductance and WUE measured at different environmental conditions

    DEFF Research Database (Denmark)

    Kaminski, Kacper Piotr; Kørup, Kirsten; Kristensen, K.

    2015-01-01

    Potatoes (Solanum tuberosum L.) are drought-sensitive and more efficient water use, while maintaining high yields is required. Here, water-use efficiency (WUE) of a mapping population comprising 144 clones from a cross between 90-HAF-01 (Solanum tuberosum1) and 90-HAG-15 (S. tuberosum2 × S....... sparsipilum) was measured on well-watered plants under controlled-environment conditions combining three levels of each of the factors: [CO2], temperature, light, and relative humidity in growth chambers. The clones were grouped according to their photosynthetic WUE (pWUE) and whole-plant WUE (wpWUE) during...... (34 %) and dry matter accumulation (55 %, P water use (16 %). The pWUE correlated negatively to the ratio between leaf-internal and leaf-external [CO2] (R2 = -0.86 in 2010 and R2 = -0.83 in 2011, P

  12. Measurement Of Technical Efficiency In Irrigated Vegetable ...

    African Journals Online (AJOL)

    This study measured technical efficiency and identified its determinants in irrigated vegetable production in Nasarawa State of Nigeria using a stochastic frontier model. A complete enumeration of 193 NADP-registered vegetable farmers was done. The predicted farm technical efficiency ranges from 25.94 to 96.24 per cent ...
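
    As a hedged illustration of the stochastic frontier approach named above, the sketch below fits a Cobb-Douglas frontier with a normal noise term and a half-normal inefficiency term (the Aigner-Lovell-Schmidt specification) by maximum likelihood on synthetic data; the data and parameter values are invented, not the study's.

    ```python
    # Normal/half-normal stochastic frontier by maximum likelihood (synthetic data).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 193
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # ln(inputs)
    beta_true = np.array([1.0, 0.6, 0.3])
    v = rng.normal(0, 0.2, n)                                    # noise
    u = np.abs(rng.normal(0, 0.4, n))                            # inefficiency >= 0
    y = X @ beta_true + v - u                                    # ln(output)

    def neg_loglik(theta):
        beta, ls_v, ls_u = theta[:3], theta[3], theta[4]
        s_v, s_u = np.exp(ls_v), np.exp(ls_u)
        sigma, lam = np.hypot(s_v, s_u), s_u / s_v
        eps = y - X @ beta
        ll = (np.log(2) - np.log(sigma)
              + norm.logpdf(eps / sigma)
              + norm.logcdf(-eps * lam / sigma))
        return -ll.sum()

    res = minimize(neg_loglik, x0=np.zeros(5), method="BFGS")
    print("estimated frontier coefficients:", np.round(res.x[:3], 2))
    ```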

  13. Pump efficiency in solar-energy systems

    Science.gov (United States)

    1978-01-01

    Study investigates characteristics of typical off-the-shelf pumping systems that might be used in solar systems. Report includes discussion of difficulties in predicting pump efficiency from manufacturers' data. Sample calculations are given. Peak efficiencies, flow-rate control, and noise levels are investigated. Review or theory of pumps types and operating characteristics is presented.

  14. Numerical calculation of particle collection efficiency in an ...

    Indian Academy of Sciences (India)

    Theoretical and numerical research has been previously done on ESPs to predict the efficiency ... Lagrangian simulations of particle transport in wire–plate ESP were .... The collection efficiency can be defined as the ratio of the number of ...

  15. Can a Satellite-Derived Estimate of the Fraction of PAR Absorbed by Chlorophyll (FAPAR_chl) Improve Predictions of Light-Use Efficiency and Ecosystem Photosynthesis for a Boreal Aspen Forest?

    Science.gov (United States)

    Zhang, Qingyuan; Middleton, Elizabeth M.; Margolis, Hank A.; Drolet, Guillaume G.; Barr, Alan A.; Black, T. Andrew

    2009-01-01

    Gross primary production (GPP) is a key terrestrial ecophysiological process that links atmospheric composition and vegetation processes. Study of GPP is important to global carbon cycles and global warming. One of the most important of these processes, plant photosynthesis, requires solar radiation in the 0.4-0.7 micron range (also known as photosynthetically active radiation or PAR), water, carbon dioxide (CO2), and nutrients. A vegetation canopy is composed primarily of photosynthetically active vegetation (PAV) and non-photosynthetic vegetation (NPV; e.g., senescent foliage, branches and stems). A green leaf is composed of chlorophyll and various proportions of nonphotosynthetic components (e.g., other pigments in the leaf, primary/secondary/tertiary veins, and cell walls). The fraction of PAR absorbed by the whole vegetation canopy (FAPAR_canopy) has been widely used in satellite-based Production Efficiency Models to estimate GPP (as a product of FAPAR_canopy × PAR × LUE_canopy, where LUE_canopy is the light use efficiency at the canopy level). However, only the PAR absorbed by chlorophyll (a product of FAPAR_chl × PAR) is used for photosynthesis. Therefore, remote sensing driven biogeochemical models that use FAPAR_chl in estimating GPP (as a product of FAPAR_chl × PAR × LUE_chl) are more likely to be consistent with plant photosynthesis processes.

  16. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  17. Quantum-chemical prediction of the effects of Ni-loading on the hydrogenation and water-splitting efficiency of TiO2 nanoparticles with an experimental test

    Science.gov (United States)

    Lin, Cheng-Kuo; Chuang, Chung-Ching; Raghunath, Putikam; Srinivasadesikan, V.; Wang, T. T.; Lin, M. C.

    2017-01-01

    Ni-loading on TiO2 nanoparticles (NPs) can markedly reduce the barriers for dissociation of H2, from 48 kcal/mol on pure TiO2 to as low as 1-3 kcal/mol on the loaded samples, facilitating the hydrogenation of the NPs. Preliminary data from our test indicate that the hydrogenation of Ni-loaded TiO2 NPs results in a significant UV-visible absorption extending well beyond 750 nm, with an increase in water-splitting efficiency by as much as 67 times over those of pure and hydrogenated TiO2 NPs without Ni-loading under our mild hydrogenation condition using 800 Torr of H2 at 300 °C for 3 h.

  18. Energy Conversion Alternatives Study (ECAS), Westinghouse phase 1. Volume 4: Open recuperated and bottomed gas turbine cycles. [performance prediction and energy conversion efficiency of gas turbines in electric power plants (thermodynamic cycles)

    Science.gov (United States)

    Amos, D. J.; Grube, J. E.

    1976-01-01

    Open-cycle recuperated gas turbine plants with inlet temperatures of 1255 to 1644 K (1800 to 2500 F) and recuperators with effectiveness values of 0, 70, 80 and 90% are considered. A 1644 K (2500 F) gas turbine would have a 33.5% plant efficiency in a simple cycle, 37.6% in a recuperated cycle and 47.6% when combined with a sulfur dioxide bottoming cycle. The distillate-burning recuperated plant was calculated to produce electricity at a cost of 8.19 mills/MJ (29.5 mills/kWh). Due to their low capital cost of $170 to $200/kW, open-cycle gas turbine plants should see service in peaking and intermediate-load duty.

  19. Predictive medicine

    NARCIS (Netherlands)

    Boenink, Marianne; ten Have, Henk

    2015-01-01

    In the last part of the twentieth century, predictive medicine has gained currency as an important ideal in biomedical research and health care. Research in the genetic and molecular basis of disease suggested that the insights gained might be used to develop tests that predict the future health

  20. Deep Visual Attention Prediction

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixations in free-viewing scenes with an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvements to human attention prediction, CNN-based attention models still need to leverage multi-scale features more efficiently. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. Our model is learned in a deeply supervised manner, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
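
    A minimal PyTorch sketch of the skip-layer, deeply supervised idea is given below: side outputs are predicted from several convolutional stages with different receptive fields, every side output is supervised, and the final map fuses them. The layer sizes, loss weighting and backbone are illustrative assumptions, not the paper's architecture.

    ```python
    # Skip-layer saliency prediction with deep supervision (illustrative sketch).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SkipLayerSaliency(nn.Module):
        def __init__(self):
            super().__init__()
            self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            # one 1x1 "side output" head per stage
            self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
            self.fuse = nn.Conv2d(3, 1, 1)   # fuse the three side outputs

        def forward(self, x):
            h, w = x.shape[-2:]
            feats, side = [], []
            for stage in (self.stage1, self.stage2, self.stage3):
                x = stage(x)
                feats.append(x)
            for f, head in zip(feats, self.heads):
                side.append(F.interpolate(head(f), size=(h, w), mode="bilinear",
                                          align_corners=False))
            fused = self.fuse(torch.cat(side, dim=1))
            return fused, side               # final map + side outputs

    def deeply_supervised_loss(fused, side, target):
        """Supervision is applied to every side output, not only the fused map."""
        loss = F.binary_cross_entropy_with_logits(fused, target)
        for s in side:
            loss = loss + F.binary_cross_entropy_with_logits(s, target)
        return loss

    model = SkipLayerSaliency()
    img = torch.rand(2, 3, 96, 96)
    fix_map = torch.rand(2, 1, 96, 96)       # dummy fixation density maps
    fused, side = model(img)
    print(deeply_supervised_loss(fused, side, fix_map).item())
    ```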

  1. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and of source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  2. A study on an efficient prediction of welding deformation for T-joint laser welding of sandwich panel Part II: Proposal of a method to use shell element model

    Directory of Open Access Journals (Sweden)

    Jae Woong Kim

    2014-06-01

    The I-core sandwich panel, which is being used more widely, is assembled using high-power CO2 laser welding. Kim et al. (2013) proposed a circular cone type heat source model for the T-joint laser welding between the face plate and the core. It can cover the negative defocus which is commonly adopted in T-joint laser welding to provide deeper penetration. In Part I, a volumetric heat source model is proposed and verified through a comparison of the melting zone on the cross section with experimental results. The proposed model can be used in heat transfer analysis and thermal elasto-plastic analysis to predict the welding deformation that occurs during laser welding. In terms of computational time, since a thermal elasto-plastic analysis using 3D solid elements is quite time consuming, shell element models with multiple layers have been employed instead. However, the conventional layered approach is not appropriate for the application of the heat load at the T-joint. This paper, Part II, suggests a new method of arranging different numbers of layers for the face plate and the core in order to impose the heat load only on the face plate.

  4. Reproductive efficiency and shade avoidance plasticity under simulated competition

    OpenAIRE

    Fazlioglu, Fatih; Al-Namazi, Ali; Bonser, Stephen P.

    2016-01-01

    Plant strategy and life-history theories make different predictions about reproductive efficiency under competition. While strategy theory suggests that under intense competition iteroparous perennial plants delay reproduction and semelparous annuals reproduce quickly, life-history theory predicts that both annual and perennial plants increase resource allocation to reproduction under intense competition. We tested (1) how simulated competition influences reproductive efficiency and competitiv...

  5. Efficient polarimetric BRDF model.

    Science.gov (United States)

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is also presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength, which considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently represent extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
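
    To illustrate the "generalized Gaussian/Fresnel plus generalized Lambertian" structure in the simplest possible terms, the sketch below evaluates a generic two-component BRDF (a Lambertian diffuse term plus a Fresnel-weighted Gaussian lobe around the half vector). It is not the four-parameter pBRDF of the paper and it ignores polarization; all constants are assumptions.

    ```python
    # Generic diffuse + glossy BRDF sketch (illustrative; not the paper's pBRDF).
    import numpy as np

    def schlick_fresnel(cos_theta, f0=0.04):
        """Schlick approximation to the Fresnel reflectance."""
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    def brdf(wi, wo, n, kd=0.3, ks=0.7, roughness=0.1):
        """wi, wo: unit vectors toward light and viewer; n: unit surface normal."""
        h = wi + wo
        h = h / np.linalg.norm(h)
        cos_nh = np.clip(np.dot(n, h), 0.0, 1.0)
        cos_ih = np.clip(np.dot(wi, h), 0.0, 1.0)
        alpha = np.arccos(cos_nh)                    # angle between normal and half vector
        specular = ks * schlick_fresnel(cos_ih) * np.exp(-(alpha / roughness) ** 2)
        return kd / np.pi + specular

    n = np.array([0.0, 0.0, 1.0])
    wi = np.array([0.0, np.sin(np.radians(30)), np.cos(np.radians(30))])
    wo = np.array([0.0, -np.sin(np.radians(30)), np.cos(np.radians(30))])  # specular geometry
    print(brdf(wi, wo, n))
    ```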

  6. Prediction Markets

    DEFF Research Database (Denmark)

    Horn, Christian Franz; Ivens, Bjørn Sven; Ohneberg, Michael

    2014-01-01

    In recent years, Prediction Markets gained growing interest as a forecasting tool among researchers as well as practitioners, which resulted in an increasing number of publications. In order to track the latest development of research, comprising the extent and focus of research, this article...... provides a comprehensive review and classification of the literature related to the topic of Prediction Markets. Overall, 316 relevant articles, published in the timeframe from 2007 through 2013, were identified and assigned to a herein presented classification scheme, differentiating between descriptive...... works, articles of theoretical nature, application-oriented studies and articles dealing with the topic of law and policy. The analysis of the research results reveals that more than half of the literature pool deals with the application and actual function tests of Prediction Markets. The results...

  7. Predicting unpredictability

    Science.gov (United States)

    Davis, Steven J.

    2018-04-01

    Analysts and markets have struggled to predict a number of phenomena, such as the rise of natural gas, in US energy markets over the past decade or so. Research shows the challenge may grow because the industry — and consequently the market — is becoming increasingly volatile.

  8. An efficient training scheme for supermodels

    Science.gov (United States)

    Schevenhoven, Francine J.; Selten, Frank M.

    2017-06-01

    Weather and climate models have improved steadily over time as witnessed by objective skill scores, although significant model errors remain. Given these imperfect models, predictions might be improved by combining them dynamically into a so-called supermodel. In this paper a new training scheme to construct such a supermodel is explored using a technique called cross pollination in time (CPT). In the CPT approach the models exchange states during the prediction. The number of possible predictions grows quickly with time, and a strategy to retain only a small number of predictions, called pruning, needs to be developed. The method is explored using low-order dynamical systems and applied to a global atmospheric model. The results indicate that the CPT training is efficient and leads to a supermodel with improved forecast quality as compared to the individual models. Due to its computational efficiency, the technique is suited for application to state-of-the art high-dimensional weather and climate models.
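
    A toy version of cross pollination in time with pruning is sketched below: two imperfect Lorenz-63 "models" advance every retained candidate state, and only the K candidates closest to a noisy observation of a truth run are kept at each step. The parameter values, noise level and pruning rule are illustrative assumptions, not those of the paper.

    ```python
    # Cross pollination in time (CPT) with pruning on a Lorenz-63 toy system.
    import numpy as np

    def lorenz_step(state, sigma, rho, beta, dt=0.01):
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    truth_pars = (10.0, 28.0, 8.0 / 3.0)
    model_pars = [(9.0, 27.0, 8.0 / 3.0), (11.0, 29.0, 8.0 / 3.0)]  # imperfect models
    K = 4                                                           # trajectories kept

    rng = np.random.default_rng(0)
    truth = np.array([1.0, 1.0, 20.0])
    candidates = [truth.copy()]

    for step in range(500):
        truth = lorenz_step(truth, *truth_pars)
        obs = truth + rng.normal(0, 0.1, 3)          # noisy observation
        # cross pollination: every candidate is advanced by every model
        new = [lorenz_step(c, *pars) for c in candidates for pars in model_pars]
        # pruning: keep the K states closest to the observation
        new.sort(key=lambda s: np.linalg.norm(s - obs))
        candidates = new[:K]

    print("best candidate error:", np.linalg.norm(candidates[0] - truth))
    ```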

  9. Prediction of regulatory elements

    DEFF Research Database (Denmark)

    Sandelin, Albin

    2008-01-01

    Finding the regulatory mechanisms responsible for gene expression remains one of the most important challenges for biomedical research. A major focus in cellular biology is to find functional transcription factor binding sites (TFBS) responsible for the regulation of a downstream gene. As wet......-lab methods are time consuming and expensive, it is not realistic to identify TFBS for all uncharacterized genes in the genome by purely experimental means. Computational methods aimed at predicting potential regulatory regions can increase the efficiency of wet-lab experiments significantly. Here, methods...
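
    One of the standard computational building blocks referred to above is position weight matrix (PWM) scanning for putative TFBS; the minimal example below builds a log-odds PWM from a few hypothetical aligned sites and scans a sequence for high-scoring windows. The motif, pseudocounts and threshold are invented for illustration.

    ```python
    # PWM-based TFBS scan (hypothetical motif and threshold).
    import numpy as np

    sites = ["TGACTCA", "TGAGTCA", "TGACTCA", "TTACTCA"]   # hypothetical aligned sites
    alphabet = "ACGT"
    counts = np.ones((len(sites[0]), 4))                   # +1 pseudocount
    for s in sites:
        for i, base in enumerate(s):
            counts[i, alphabet.index(base)] += 1
    # log-odds score versus a uniform background
    pwm = np.log2(counts / counts.sum(axis=1, keepdims=True) / 0.25)

    def scan(seq, pwm, threshold=4.0):
        """Return (position, score) for windows scoring above the threshold."""
        w, hits = pwm.shape[0], []
        for i in range(len(seq) - w + 1):
            score = sum(pwm[j, alphabet.index(seq[i + j])] for j in range(w))
            if score >= threshold:
                hits.append((i, round(score, 2)))
        return hits

    print(scan("CCTGACTCATTTTGAGTCAGG", pwm))
    ```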

  10. Unification predictions

    International Nuclear Information System (INIS)

    Ghilencea, D.; Ross, G.G.; Lanzagorta, M.

    1997-07-01

    The unification of gauge couplings suggests that there is an underlying (supersymmetric) unification of the strong, electromagnetic and weak interactions. The prediction of the unification scale may be the first quantitative indication that this unification may extend to unification with gravity. We make a precise determination of these predictions for a class of models which extend the multiplet structure of the Minimal Supersymmetric Standard Model to include the heavy states expected in many Grand Unified and/or superstring theories. We show that there is a strong cancellation between the 2-loop and threshold effects. As a result the net effect is smaller than previously thought, giving a small increase in both the unification scale and the value of the strong coupling at low energies. (author). 15 refs, 5 figs

  11. Reconsidering energy efficiency

    International Nuclear Information System (INIS)

    Goldoni, Giovanni

    2007-01-01

    Energy and environmental policies are reconsidering energy efficiency. In a perfect market, rational and well-informed consumers reach economic efficiency which, at the given prices of energy and capital, corresponds to physical efficiency. In the real world, market failures and cognitive frictions prevent consumers from making perfectly rational and informed choices. Green incentive schemes aim at offsetting market failures and directing consumers toward more efficient goods and services. The problem is to fine-tune the incentive schemes.

  12. Energy efficiency; Efficacite energetique

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-06-15

    This road-map proposed by the Total Group aims to inform the public about energy efficiency. It presents energy efficiency and intensity around the world, with a particular focus on Europe, energy efficiency in industry, and Total's commitment. (A.L.B.)

  13. Anytime Prediction: Efficient Ensemble Methods for Any Computational Budget

    Science.gov (United States)

    2014-01-21


  14. The effect of rounding on payment efficiency

    NARCIS (Netherlands)

    Bijwaard, G.E.; Franses, P.H.

    2009-01-01

    Theory predicts that dismissing the 1 and 2 euro cent coins from the denominational range of the euro facilitates payment efficiency. To examine whether this theory holds true in practice, data were collected for the Netherlands before and after September 2004, which marks the day that retail stores

  15. Cost Efficiency in Public Higher Education.

    Science.gov (United States)

    Robst, John

    This study used the frontier cost function framework to examine cost efficiency in public higher education. The frontier cost function estimates the minimum predicted cost for producing a given amount of output. Data from the annual Almanac issues of the "Chronicle of Higher Education" were used to calculate state level enrollments at two-year and…

  16. Efficient boiler operations sourcebook

    Energy Technology Data Exchange (ETDEWEB)

    Payne, F.W. (comp.)

    1985-01-01

    This book emphasizes the practical aspects of industrial and commercial boiler operations. It starts with a comprehensive review of general combustion and boiler fundamentals and then deals with specific efficiency improvement methods and the cost savings which result. The book has the following chapter headings: boiler combustion fundamentals; boiler efficiency goals; major factors controlling boiler efficiency; boiler efficiency calculations; heat loss; graphical solutions; preparation for boiler testing; boiler test procedures; efficiency-related boiler maintenance procedures; boiler tune-up; boiler operational modifications; effect of water-side and gas-side scale deposits; load management; auxiliary equipment to increase boiler efficiency; air preheaters and economizers; other types of auxiliary equipment; combustion control systems and instrumentation; boiler O2 trim controls; should you purchase a new boiler?; financial evaluation procedures; case studies. The last chapter includes a case study of a boiler burning pulverized coal and a case study of stoker-fired coal.

  17. Educated for Efficiency

    DEFF Research Database (Denmark)

    Amore, Mario Daniele; Bennedsen, Morten; Larsen, Birthe

    We study the effect of CEO education on a firm’s energy efficiency. Using a unique dataset of Danish firms, we document that firms led by more educated CEOs exhibit greater energy efficiency. We establish causality by employing exogenous CEO hospitalization episodes: the hospitalization of highly......-educated CEOs induces a drop in a firm’s energy efficiency, whereas the hospitalization of low-education CEOs does not have any significant effect. Disentangling the effect of educational length from that of the field of study, we find that the greater energy efficiency is mostly driven by the cumulated years...

  18. Institutions, Equilibria and Efficiency

    DEFF Research Database (Denmark)

    Competition and efficiency is at the core of economic theory. This volume collects papers of leading scholars, which extend the conventional general equilibrium model in important ways. Efficiency and price regulation are studied when markets are incomplete and existence of equilibria in such set...... in OLG, learning in OLG and in games, optimal pricing of derivative securities, the impact of heterogeneity...

  19. Energy Efficiency Collaboratives

    Energy Technology Data Exchange (ETDEWEB)

    Li, Michael [US Department of Energy, Washington, DC (United States); Bryson, Joe [US Environmental Protection Agency, Washington, DC (United States)

    2015-09-01

    Collaboratives for energy efficiency have a long and successful history and are currently used, in some form, in more than half of the states. Historically, many state utility commissions have used some form of collaborative group process to resolve complex issues that emerge during a rate proceeding. Rather than debate the issues through the formality of a commission proceeding, disagreeing parties are sent to discuss issues in a less-formal setting and bring back resolutions to the commission. Energy efficiency collaboratives take this concept and apply it specifically to energy efficiency programs—often in anticipation of future issues as opposed to reacting to a present disagreement. Energy efficiency collaboratives can operate long term and can address the full suite of issues associated with designing, implementing, and improving energy efficiency programs. Collaboratives can be useful to gather stakeholder input on changing program budgets and program changes in response to performance or market shifts, as well as to provide continuity while regulators come and go, identify additional energy efficiency opportunities and innovations, assess the role of energy efficiency in new regulatory contexts, and draw on lessons learned and best practices from a diverse group. Details about specific collaboratives in the United States are in the appendix to this guide. Collectively, they demonstrate the value of collaborative stakeholder processes in producing successful energy efficiency programs.

  20. Technico-economic efficiency (efficience technico-economique)

    African Journals Online (AJOL)

    TECHNICO-ECONOMIC EFFICIENCY: THE CASE OF ONION AND POTATO PRODUCERS IN MOROCCO. In the new context ... Key words: technico-economic efficiency, stochastic frontier, potato production, onion production, Morocco ..... are ploughing (labour), cover-cropping (cover-cropage) and row marking (traçage). 82% of the ...

  1. Nitrogen use efficiency (NUE)

    NARCIS (Netherlands)

    Oenema, O.

    2015-01-01

    There is a need for communications about resource use efficiency and for measures to increase the use efficiency of nutrients in relation to food production. This holds especially for nitrogen. Nitrogen (N) is essential for life and a main nutrient element. It is needed in relatively large

  2. Logistics, Management and Efficiency

    OpenAIRE

    Mircea UDRESCU; Sandu CUTURELA

    2014-01-01

    The problem of the efficiency of organizational management is a general one, being the concern of all managers. In the present essay we consider that the effectiveness of the organization begins with the structural systematization of organizational management into general management, logistics management and production management, which demands a new, more competitive managerial process based on economic efficiency.

  3. Energy efficiency and behaviour

    DEFF Research Database (Denmark)

    Carstensen, Trine Agervig; Kunnasvirta, Annika; Kiviluoto, Katariina

    separate key aspects hinders strategic energy efficiency planning. For this reason, the PLEEC project – "Planning for Energy Efficient Cities" – funded by the EU Seventh Framework Programme uses an integrative approach to achieve the sustainable, energy-efficient, smart city. By coordinating strategies...... to conduct behavioural interventions, to be presented in Deliverable 5.5, the final report. This report will also provide valuable information for the WP6 general model for an Energy-Smart City. Altogether 38 behavioural interventions are analysed in this report. Each collected and analysed case study...... of the European Union's 20-20-20 plan is to improve energy efficiency by 20% in 2020. However, holistic knowledge about energy efficiency potentials in cities is far from complete. Currently, a variety of individual strategies and approaches by different stakeholders tackling...

  4. Quantification of tannins in tree foliage. A laboratory manual for the FAO/IAEA co-ordinated research project on 'Use of nuclear and related techniques to develop simple tannin assays for predicting and improving the safety and efficiency of feeding ruminants on tanniniferous tree foliage'

    International Nuclear Information System (INIS)

    2000-01-01

    Tanniniferous trees and shrubs are of importance in animal production because they can provide significant protein supplements, but unfortunately the amounts of tannins that they contain vary widely and largely unpredictably, and their effects on animals range from beneficial to toxicity and death. The toxic or antinutritional effects tend to occur in times of stress when a very large proportion of the diet is tanniniferous. With a better understanding of tannin properties and proper management, they could become an invaluable source of protein for strategic supplementation. As the demand for food rises, tanniniferous plants must play an increasingly important part in the diet of animals, in particular for ruminants in smallholder subsistence farming in developing countries. It is therefore critical that techniques be developed to measure and manage the anti-nutritional components that they contain. Keeping the above in mind, a Joint FAO/IAEA Co-ordinated Research Project (CRP) on 'Use of Nuclear and Related Techniques to Develop Simple Tannin Assays for Predicting and Improving the Safety and Efficiency of Feeding Ruminants on Tanniniferous Tree Foliage' has been initiated. In order to provide sound basis for this CRP, an FAO/IAEA Consultants Meeting was held in August 1997 in Vienna, at which the tanniniferous plants to be studied, the analytical methods, the test animals and the animal response evaluation techniques were defined. This publication contains methodologies for the analysis of tannins using chemical-, protein precipitation/binding- and bio-assays recommended by the consultants

  5. Predictable Medea

    Directory of Open Access Journals (Sweden)

    Elisabetta Bertolino

    2010-01-01

    By focusing on the tragedy of the 'unpredictable' infanticide perpetrated by Medea, the paper speculates on the possibility of a non-violent ontological subjectivity for women victims of gendered violence and whether it is possible to respond to violent actions in non-violent ways; it argues that Medea did not act in an unpredictable way, but rather through the very predictable subject of resentment and violence. 'Medea' represents the story of all of us who require justice as retribution against any wrong. The presupposition is that the empowered female subjectivity of women's rights contains the same desire of mastering others as the current masculine legal and philosophical subject. The subject of women's rights is grounded on the emotions of resentment and retribution and refuses the categories of the private by appropriating those of the righteous, masculine and public subject. The essay opposes the essentialised stereotypes of the feminine and the maternal with an ontological approach to people as singular, corporeal, vulnerable and dependent. There is therefore an emphasis on the excluded categories of the private. Forgiveness is taken into account as a category of the private and a possibility of responding to violence with newness. A violent act is seen in relation to the community of human beings rather than through an isolated setting as in the case of the individual of human rights. In this context, forgiveness allows one to risk again and to be with others. The result is also a rethinking of feminist actions, feminine subjectivity and the maternal. Overall the paper opens up the Arendtian category of action and forgiveness and the Cavarerian unique and corporeal ontology of the selfhood beyond gendered stereotypes.

  7. Blazed Grating Resonance Conditions and Diffraction Efficiency Optical Transfer Function

    KAUST Repository

    Stegenburgs, Edgars; Alias, Mohd Sharizal B.; Ng, Tien Khee; Ooi, Boon S.

    2017-01-01

    We introduce a general approach to studying diffraction harmonics, or resonances, and resonance conditions for blazed reflecting gratings, providing knowledge of the fundamental diffraction pattern and a qualitative understanding of the parameters that predict the most efficient diffraction.

  8. Collective motion of predictive swarms.

    Directory of Open Access Journals (Sweden)

    Nathaniel Rupprecht

    Full Text Available Theoretical models of populations and swarms typically start with the assumption that the motion of agents is governed by local stimuli. However, an intelligent agent, with some understanding of the laws that govern its habitat, can anticipate the future and make predictions to gather resources more efficiently. Here we study a specific model of this kind, where agents aim to maximize their consumption of a diffusing resource by attempting to predict the future of the resource field and the actions of other agents. Once the agents make a prediction, they are attracted to move towards regions that have, and will have, denser resources. We find that the further the agents attempt to see into the future, the more their attempts at prediction fail, and the fewer resources they consume. We also study the case where predictive agents compete against non-predictive agents and find that the predictors perform better than the non-predictors only when their relative numbers are very small. We conclude that predictivity pays off either when the predictors do not see too far into the future or when the number of predictors is small.
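
    The abstract above describes the model only at a high level; as a rough illustration of the idea (not the authors' implementation), the sketch below lets agents on a 1-D ring forecast a diffusing resource field a few steps ahead and move toward the richest predicted cell. The grid size, diffusion constant, look-ahead horizon tau and consumption rate are all invented for the example.

      import numpy as np

      rng = np.random.default_rng(0)

      def diffuse(field, d=0.2, steps=1):
          """Explicit diffusion of the resource field on a 1-D ring."""
          for _ in range(steps):
              field = field + d * (np.roll(field, 1) + np.roll(field, -1) - 2 * field)
          return field

      def step(field, agents, tau=3, eat=0.5):
          """Each agent forecasts the field tau steps ahead by running the
          (assumed known) diffusion law forward, moves to the neighbouring
          cell with the highest predicted resource, and consumes there."""
          predicted = diffuse(field.copy(), steps=tau)        # the agents' forecast
          new_pos = []
          for x in agents:
              neighbours = [(x - 1) % field.size, x, (x + 1) % field.size]
              new_pos.append(max(neighbours, key=lambda i: predicted[i]))
          consumed = 0.0
          for x in new_pos:
              take = min(eat, field[x])
              field[x] -= take
              consumed += take
          return diffuse(field), new_pos, consumed            # real field diffuses one step

      field = rng.random(100)                                 # initial resource field
      agents = list(rng.integers(0, 100, size=20))
      total = 0.0
      for _ in range(200):
          field, agents, c = step(field, agents, tau=3)
          total += c
      print(f"total resource consumed: {total:.2f}")

    Varying tau in this toy setup gives a qualitative, not quantitative, way to probe the paper's observation that looking too far into the future can reduce consumption.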

  9. Efficiency of emergency exercises

    International Nuclear Information System (INIS)

    Zander, N.; Sogalla, M.

    2011-01-01

    In order to cope with accidents beyond the design basis at German nuclear power plants which could lead to relevant radiological consequences, both the utilities and the competent authorities maintain emergency organisations. The efficiency, capacity for teamwork and preparedness of such organisations should be tested by regular, efficient exercise activities. Such activities can suitably be based on scenarios which provide challenging tasks for all units of the respective emergency organisation. Thus, the demonstration and further development of the efficiency of the respective organisational structures, including their ability to collaborate, is promoted. (orig.)

  10. Learning efficient correlated equilibria

    KAUST Repository

    Borowski, Holly P.; Marden, Jason R.; Shamma, Jeff S.

    2014-01-01

    The majority of distributed learning literature focuses on convergence to Nash equilibria. Correlated equilibria, on the other hand, can often characterize more efficient collective behavior than even the best Nash equilibrium. However, there are no existing distributed learning algorithms that converge to specific correlated equilibria. In this paper, we provide one such algorithm which guarantees that the agents' collective joint strategy will constitute an efficient correlated equilibrium with high probability. The key to attaining efficient correlated behavior through distributed learning involves incorporating a common random signal into the learning environment.

  11. Learning efficient correlated equilibria

    KAUST Repository

    Borowski, Holly P.

    2014-12-15

    The majority of distributed learning literature focuses on convergence to Nash equilibria. Correlated equilibria, on the other hand, can often characterize more efficient collective behavior than even the best Nash equilibrium. However, there are no existing distributed learning algorithms that converge to specific correlated equilibria. In this paper, we provide one such algorithm which guarantees that the agents' collective joint strategy will constitute an efficient correlated equilibrium with high probability. The key to attaining efficient correlated behavior through distributed learning involves incorporating a common random signal into the learning environment.
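
    Neither record spells out the learning algorithm, so the following is only a worked example of the underlying object: a correlated equilibrium implemented by a common random signal in a two-player 'chicken'-style game with illustrative payoffs. It is not the distributed learning procedure of the paper.

      import numpy as np

      # Illustrative payoffs for a 2x2 "chicken"-style game; actions are D(are) and Y(ield).
      payoff = {
          ("D", "D"): (0, 0),
          ("D", "Y"): (7, 2),
          ("Y", "D"): (2, 7),
          ("Y", "Y"): (6, 6),
      }

      rng = np.random.default_rng(1)

      def play(rounds=100_000, follow_signal=True):
          """Average payoffs when both players condition on a common random signal
          recommending (D,Y), (Y,D) or (Y,Y) with equal probability.  With these
          illustrative payoffs this is a correlated equilibrium whose value (5, 5)
          exceeds the mixed Nash payoff of roughly 4.67 per player."""
          recommendations = [("D", "Y"), ("Y", "D"), ("Y", "Y")]
          totals = np.zeros(2)
          for _ in range(rounds):
              rec = recommendations[rng.integers(3)]
              if follow_signal:
                  a = rec
              else:                                  # ignore the signal, play uniformly
                  a = (rng.choice(["D", "Y"]), rng.choice(["D", "Y"]))
              totals += payoff[a]
          return totals / rounds

      print("following the common signal:", play(follow_signal=True))
      print("ignoring it (uniform play):  ", play(follow_signal=False))

    The point of the toy is only that conditioning on the shared signal coordinates the players onto a more efficient joint distribution than uncoordinated play.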

  12. Shrew trap efficiency

    DEFF Research Database (Denmark)

    Gambalemoke, Mbalitini; Mukinzi, Itoka; Amundala, Drazo

    2008-01-01

    We investigated the efficiency of four trap types (pitfall, Sherman LFA, Victor snap and Museum Special snap traps) to capture shrews. This experiment was conducted in five inter-riverine forest blocks in the region of Kisangani. The total trapping effort was 6,300, 9,240, 5,280 and 5,460 trap......, our results indicate that pitfall traps are the most efficient for capturing shrews: not only do they have a higher efficiency (yield), but the taxonomic diversity of shrews is also higher when pitfall traps are used....

  13. The Efficient Windows Collaborative

    Energy Technology Data Exchange (ETDEWEB)

    Petermann, Nils

    2006-03-31

    The Efficient Windows Collaborative (EWC) is a coalition of manufacturers, component suppliers, government agencies, research institutions, and others who partner to expand the market for energy efficient window products. Funded through a cooperative agreement with the U.S. Department of Energy, the EWC provides education, communication and outreach in order to transform the residential window market to 70% energy efficient products by 2005. Implementation of the EWC is managed by the Alliance to Save Energy, with support from the University of Minnesota and Lawrence Berkeley National Laboratory.

  14. Transport Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-07-01

    Transport is the sector with the highest final energy consumption and, without any significant policy changes, is forecast to remain so. In 2008, the IEA published 25 energy efficiency recommendations, among which four are for the transport sector. The recommendations focus on road transport and include policies on improving tyre energy efficiency, fuel economy standards for both light-duty vehicles and heavy-duty vehicles, and eco-driving. Implementation of the recommendations has been weaker in the transport sector than in others. This paper updates the progress that has been made in implementing the transport energy efficiency recommendations in IEA countries since March 2009. Many countries have in the last year moved from 'planning to implement' to 'implementation underway', but none have fully implemented all transport energy efficiency recommendations. The IEA therefore calls for full and immediate implementation of the recommendations.

  15. Efficient incremental relaying

    KAUST Repository

    Fareed, Muhammad Mehboob; Alouini, Mohamed-Slim

    2013-01-01

    We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading.

  16. Improving efficiency in stereology

    DEFF Research Database (Denmark)

    Keller, Kresten Krarup; Andersen, Ina Trolle; Andersen, Johnnie Bremholm

    2013-01-01

    The aim of the study was to investigate the time efficiency of the proportionator and the autodisector on virtual slides compared with traditional methods in a practical application, namely the estimation of osteoclast numbers in paws from mice with experimental arthritis and control mice. Tissue slides were scanned..., a proportionator sampling and a systematic, uniform random sampling were simulated. We found that the proportionator was 50% to 90% more time efficient than systematic, uniform random sampling. The time efficiency of the autodisector on virtual slides was 60% to 100% better than the disector on tissue slides. We conclude that both the proportionator and the autodisector on virtual slides may improve the efficiency of cell counting in stereology.

  17. Energy Efficient Cryogenics

    Science.gov (United States)

    Meneghelli, Barry J.; Notardonato, William; Fesmire, James E.

    2016-01-01

    The Cryogenics Test Laboratory, NASA Kennedy Space Center, works to provide practical solutions to low-temperature problems while focusing on long-term technology targets for the energy-efficient use of cryogenics on Earth and in space.

  18. Efficient incremental relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2013-07-01

    We propose a novel relaying scheme which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying scheme with both amplify and forward and decode and forward relaying. Numerical results are also presented to verify their analytical counterparts. © 2013 IEEE.

  19. More efficient together

    DEFF Research Database (Denmark)

    Zhang, Tian

    2015-01-01

    The solar-to-biomass conversion efficiency of natural photosynthesis is between 2.9 and 4.3% for most crops (1, 2). Improving the efficiency of photosynthesis could help increase the appeal of biologically derived fuels and chemicals in comparison with traditional petrochemical processes. One app...... competition between biotechnology and the food industry and avoid the environmental perturbation caused by intensive agriculture (3)....

  20. Thermodynamically efficient solar concentrators

    Science.gov (United States)

    Winston, Roland

    2012-10-01

    Non-imaging Optics is the theory of thermodynamically efficient optics and as such depends more on thermodynamics than on optics. Hence in this paper a condition for the "best" design is proposed based on purely thermodynamic arguments, which we believe has profound consequences for design of thermal and even photovoltaic systems. This new way of looking at the problem of efficient concentration depends on probabilities, the ingredients of entropy and information theory while "optics" in the conventional sense recedes into the background.

  1. Efficient Windows Collaborative

    Energy Technology Data Exchange (ETDEWEB)

    Nils Petermann

    2010-02-28

    The project goals covered both the residential and commercial windows markets and involved a range of audiences such as window manufacturers, builders, homeowners, design professionals, utilities, and public agencies. Essential goals included: (1) Creation of 'Master Toolkits' of information that integrate diverse tools, rating systems, and incentive programs, customized for key audiences such as window manufacturers, design professionals, and utility programs. (2) Delivery of education and outreach programs to multiple audiences through conference presentations, publication of articles for builders and other industry professionals, and targeted dissemination of efficient window curricula to professionals and students. (3) Design and implementation of mechanisms to encourage and track sales of more efficient products through the existing Window Products Database as an incentive for manufacturers to improve products and participate in programs such as NFRC and ENERGY STAR. (4) Development of utility incentive programs to promote more efficient residential and commercial windows. Partnership with regional and local entities on the development of programs and customized information to move the market toward the highest performing products. An overarching project goal was to ensure that different audiences adopt and use the developed information, design and promotion tools and thus increase the market penetration of energy efficient fenestration products. In particular, a crucial success criterion was to move gas and electric utilities to increase the promotion of energy efficient windows through demand side management programs as an important step toward increasing the market share of energy efficient windows.

  2. Feedback and efficient behavior.

    Directory of Open Access Journals (Sweden)

    Sandro Casal

    Full Text Available Feedback is an effective tool for promoting efficient behavior: it enhances individuals' awareness of choice consequences in complex settings. Our study aims to isolate the mechanisms underlying the effects of feedback on achieving efficient behavior in a controlled environment. We design a laboratory experiment in which individuals are not aware of the consequences of different alternatives and, thus, cannot easily identify the efficient ones. We introduce feedback as a mechanism to enhance the awareness of consequences and to stimulate exploration and search for efficient alternatives. We assess the efficacy of three different types of intervention: provision of social information, manipulation of the frequency, and framing of feedback. We find that feedback is most effective when it is framed in terms of losses, that it reduces efficiency when it includes information about inefficient peers' behavior, and that a lower frequency of feedback does not disrupt efficiency. By quantifying the effect of different types of feedback, our study suggests useful insights for policymakers.

  3. Ratchetting strain prediction

    International Nuclear Information System (INIS)

    Noban, Mohammad; Jahed, Hamid

    2007-01-01

    A time-efficient method for predicting ratchetting strain is proposed. The ratchetting strain at any cycle is determined by finding the ratchetting rate at only a few cycles. This determination is done by first defining the trajectory of the origin of stress in the deviatoric stress space and then incorporating this moving origin into a cyclic plasticity model. It is shown that at the beginning of the loading, the starting point of this trajectory coincides with the initial stress origin and approaches the mean stress, displaying a power-law relationship with the number of loading cycles. The method of obtaining this trajectory from a standard uniaxial asymmetric cyclic loading is presented. Ratchetting rates are calculated with the help of this trajectory and through the use of a constitutive cyclic plasticity model which incorporates deviatoric stresses and back stresses that are measured with respect to this moving frame. The proposed model is used to predict the ratchetting strain of two types of steels under single- and multi-step loadings. Results obtained agree well with the available experimental measurements
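
    The constitutive model itself is not reproduced here; the sketch below only illustrates the extrapolation step suggested by the abstract: ratchetting rates assumed to have been computed at a few cycles are fitted with a power law (mirroring the power-law approach of the moving stress origin) and integrated to estimate the strain at an arbitrary cycle. The sampled cycles, the rate values and the closed-form integration are illustrative assumptions, not results from the paper.

      import numpy as np

      # Ratchetting rates assumed to have been computed with the cyclic
      # plasticity model at only a few cycles (values invented for the example).
      cycles_sampled = np.array([10, 50, 200, 1000])
      rates_sampled = np.array([4e-4, 1.5e-4, 6e-5, 2.2e-5])   # d(eps_r)/dN

      # Fit rate(N) = A * N**(-m) in log-log space (assumed power-law trend).
      slope, logA = np.polyfit(np.log(cycles_sampled), np.log(rates_sampled), 1)
      A, m = np.exp(logA), -slope

      def ratchetting_strain(N, eps0=0.0, N0=1.0):
          """Closed-form integral of the fitted rate from N0 to N (valid for m != 1)."""
          return eps0 + A / (1.0 - m) * (N ** (1.0 - m) - N0 ** (1.0 - m))

      for N in (100, 1_000, 10_000):
          print(f"predicted ratchetting strain after {N:>6d} cycles: "
                f"{ratchetting_strain(N):.4e}")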

  4. Ionization efficiency calculations for cavity thermoionization ion source

    International Nuclear Information System (INIS)

    Turek, M.; Pyszniak, K.; Drozdziel, A.; Sielanko, J.; Maczka, D.; Yuskevich, Yu.V.; Vaganov, Yu.A.

    2009-01-01

    The numerical model of ionization in a thermoionization ion source is presented. A review of ion source ionization efficiency calculation results for various kinds of extraction field is given. The dependence of ionization efficiency on working parameters such as ionizer length and extraction voltage is discussed. Numerical simulation results are compared with theoretical predictions obtained from a simplified ionization model.

  5. Supernovae Discovery Efficiency

    Science.gov (United States)

    John, Colin

    2018-01-01

    Abstract: We present supernova (SN) search efficiency measurements for recent Hubble Space Telescope (HST) surveys. Efficiency is a key component of any search and is an important correction factor for SN rates. To achieve an accurate value for efficiency, many supernovae need to be discoverable in surveys. This cannot be achieved with real SN alone, due to their scarcity, so fake SN are planted. These fake supernovae, built with realism in mind, yield an understanding of efficiency as a function of position relative to other celestial objects and of brightness. To improve realism, we built a more accurate model of supernovae using a point-spread function. The next improvement to realism is planting these objects close to galaxies and with various values of brightness, magnitude, local galactic brightness and redshift. Once these are planted, a very accurate SN is visible and discoverable by the searcher. It is very important to find the factors that affect this discovery efficiency. Exploring the factors that affect detection yields a more accurate correction factor. Further inquiries into efficiency give us a better understanding of image processing, searching techniques and survey strategies, and result in an overall higher likelihood of finding these events in future surveys with the Hubble, James Webb, and WFIRST telescopes. After efficiency is discovered and refined with many unique surveys, it factors into measurements of SN rates versus redshift. By comparing SN rates versus redshift against the star formation rate we can test models to determine how long star systems take from the point of inception to explosion (the delay time distribution). This delay time distribution is compared to SN progenitor models to get an accurate idea of what these stars were like before their deaths.

  6. Productivity and energy efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Lovins, H. [Rocky Mountain Inst., Snowmass, CO (United States)

    1995-12-31

    Energy efficient building and office design offers the possibility of significantly increased worker productivity. By improving lighting, heating and cooling, workers can be made more comfortable and productive. An increase of 1 percent in productivity can provide savings to a company that exceed its entire energy bill. Efficient design practices are cost effective just from their energy savings. The resulting productivity gains make them indispensable. This paper documents eight cases in which efficient lighting, heating, and cooling have measurably increased worker productivity, decreased absenteeism, and/or improved the quality of work performed. They also show that efficient lighting can measurably increase work quality by removing errors and manufacturing defects. The case studies presented include retrofit of existing buildings and the design of new facilities, and cover a variety of commercial and industrial settings. Each case study identifies the design changes that were most responsible for increased productivity. As the eight case studies illustrate, energy efficient design may be one of the least expensive ways for a business to improve the productivity of its workers and the quality of its product. (author). 15 refs.

  7. Energy efficiency in pumps

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Durmus; Yagmur, E. Alptekin [TUBITAK-MRC, P.O. Box 21, 41470 Gebze, Kocaeli (Turkey); Yigit, K. Suleyman; Eren, A. Salih; Celik, Cenk [Engineering Faculty, Kocaeli University, Kocaeli (Turkey); Kilic, Fatma Canka [Department of Air Conditioning and Refrigeration, Kocaeli University, Kullar, Kocaeli (Turkey)

    2008-06-15

    In this paper, 'energy efficiency' studies carried out on the pumps of a large industrial facility are reported. For this purpose, the flow rate, pressure and temperature were measured for each pump under different operating conditions and at maximum load. In addition, the electrical power drawn by the electric motor was measured. The efficiencies of the existing pumps and electric motors were calculated using the measured data. Potential energy saving opportunities were studied by taking into account the results of the calculations for each pump and electric motor. As a conclusion, improvements should be made to each system. The required investment costs for these improvements have been determined, and simple payback periods have been calculated. The main energy saving opportunities result from: replacement of the existing low-efficiency pumps, maintenance of the pumps whose efficiencies start to decline at a certain range, replacement of high-power electric motors with electric motors of suitable power, usage of high-efficiency electric motors and elimination of cavitation problems. (author)

  8. Energy efficiency in pumps

    International Nuclear Information System (INIS)

    Kaya, Durmus; Yagmur, E. Alptekin; Yigit, K. Suleyman; Kilic, Fatma Canka; Eren, A. Salih; Celik, Cenk

    2008-01-01

    In this paper, 'energy efficiency' studies carried out on the pumps of a large industrial facility are reported. For this purpose, the flow rate, pressure and temperature were measured for each pump under different operating conditions and at maximum load. In addition, the electrical power drawn by the electric motor was measured. The efficiencies of the existing pumps and electric motors were calculated using the measured data. Potential energy saving opportunities were studied by taking into account the results of the calculations for each pump and electric motor. As a conclusion, improvements should be made to each system. The required investment costs for these improvements have been determined, and simple payback periods have been calculated. The main energy saving opportunities result from: replacement of the existing low-efficiency pumps, maintenance of the pumps whose efficiencies start to decline at a certain range, replacement of high-power electric motors with electric motors of suitable power, usage of high-efficiency electric motors and elimination of cavitation problems

  9. National energy efficiency programme

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This paper focusses on energy conservation and specifically on energy efficiency, which includes efficiency in the production, delivery and utilisation of energy as part of the total energy system of the economy. A National Energy Efficiency Programme is being launched in the Eighth Plan that will take into account both macro-level policy and planning considerations and micro-level responses for different categories of users in the industry, agriculture, transport and domestic sectors. The need for such a National Energy Efficiency Programme is discussed after an assessment of existing energy conservation activities in the country. The broad framework and contents of the National Energy Efficiency Programme are outlined and the Eighth Plan targets for energy conservation and their break-up are given. These targets, as per the Eighth Plan document, are 5000 MW in electricity installed capacity and 6 million tonnes of petroleum products by the terminal year of the Eighth Plan. The issues that need to be examined for each sector in order to achieve the above targets for energy conservation in the Eighth Plan are discussed briefly. They are: (a) policy and planning, (b) implementation arrangements, which include the institutional setup and selective legislation, (c) technological requirements, and (d) resource requirements, which include human resources and financial resources. (author)

  10. Energy efficient design

    International Nuclear Information System (INIS)

    1991-01-01

    Solar Applications and Energy Efficiency in Building Design and Town Planning (RER/87/006) is a United Nations Development Programme (UNDP) project of the Governments of Albania, Bulgaria, Cyprus, The Czech and Slovak Federal Republic, France, Hungary, Malta, Poland, Turkey, United Kingdom and Yugoslavia. The project began in 1988 and comes to a conclusion at the end of 1991. It is to enhance the professional skills of practicing architects, engineers and town planners in European countries to design energy efficient buildings which reduce energy consumption and make greater use of passive solar heating and natural cooling techniques. The United Nations Economic Commission for Europe (ECE) is the Executing Agency of the project which is implemented under the auspices of the Committee on Energy, General Energy Programme of Work for 1990-1994, sub-programme 5 Energy Conservation and Efficiency (ECE/ENERGY/15). The project has five main outputs or results: an international network of institutions for low energy building design; a state-of-the-art survey of energy use in the built environment of European IPF countries; a simple computer program for energy efficient building design; a design guide and computer program operators' manual; and a series of international training courses in participating European IPF countries. Energy Efficient Design is the fourth output of the project. It comprises the design guide for practicing architects and engineers, for use mainly in mid-career training courses, and the operators' manual for the project's computer program

  11. Energy Efficiency Center - Overview

    International Nuclear Information System (INIS)

    Obryk, E.

    2000-01-01

    Full text: The Energy Efficiency Center (EEC) activities have been concentrated on the Energy Efficiency Network (SEGE) and on the education and training of energy auditors. The EEC has started studies related to renewable fuels (biofuel, wastes) and other topics related to environmental protection. The EEC has continued close collaboration with the Institute for Energy Technology, Kjeller, Norway. A Seminar and Workshop on 'How to Reduce Energy and Water Cost in Higher Education Buildings' was organized and conducted for general and technical managers of higher education institutions. This Seminar was preceded by a working meeting on energy efficiency strategy in higher education at the Ministry of National Education. The EEC has worked out a proposal for the activities of the Cracow Regional Agency for Energy Efficiency and Environment and has offered to provide services for this Agency in the field of training, education and consulting. The vast knowledge and experience in the field of energy audits have been used by members of the EEC in lecturing at energy auditor courses authorized by the National Energy Efficiency Agency (KAPE). Altogether 20 lectures have been delivered. (author)

  12. Intelligent Prediction of Ship Maneuvering

    Directory of Open Access Journals (Sweden)

    Miroslaw Lacki

    2016-09-01

    Full Text Available In this paper the author presents an idea for an intelligent ship maneuvering prediction system based on neuroevolution. This may also be seen as a ship handling system that simulates the learning process of an autonomous control unit, created with an artificial neural network. The control unit observes input signals and calculates the values of the required parameters for vessel maneuvering in confined waters. In neuroevolution, such units are treated as individuals in a population of artificial neural networks, which through environmental sensing and evolutionary algorithms learn to perform a given task efficiently. The main task of the system is to learn continuously and to predict the values of the navigational parameters of the vessel after a certain amount of time, taking into account the influence of its environment. The result of a prediction may serve as a warning to the navigator, making him aware of an incoming threat.
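
    As a generic illustration of the neuroevolution idea described above (and not the author's system), the sketch below evolves the weights of a tiny feedforward network with a simple (1 + lambda) loop so that it predicts a navigational parameter from current observations. The input and target variables and the data are synthetic stand-ins invented for the example.

      import numpy as np

      rng = np.random.default_rng(2)

      def net(params, x):
          """Tiny two-layer network: 4 inputs -> 8 hidden (tanh) -> 1 output."""
          W1, b1, W2, b2 = params
          return np.tanh(x @ W1 + b1) @ W2 + b2

      def random_params():
          return [rng.normal(0, 0.5, (4, 8)), np.zeros(8),
                  rng.normal(0, 0.5, (8, 1)), np.zeros(1)]

      def mutate(params, sigma=0.05):
          return [p + rng.normal(0, sigma, p.shape) for p in params]

      # Synthetic stand-in for logged manoeuvring records: inputs could be
      # (heading, rudder angle, speed, rate of turn), the target the heading a
      # fixed time ahead.  Entirely illustrative.
      X = rng.normal(size=(500, 4))
      y = 0.8 * X[:, [0]] + 0.3 * X[:, [1]] * X[:, [2]] + 0.1 * X[:, [3]]

      def fitness(params):
          err = net(params, X) - y
          return -float(np.mean(err ** 2))            # higher is better

      # Simple (1 + lambda) evolutionary loop: keep the best mutant each generation.
      best = random_params()
      for gen in range(300):
          candidates = [best] + [mutate(best) for _ in range(10)]
          best = max(candidates, key=fitness)
      print("final prediction MSE:", -fitness(best))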

  13. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M.sup.3N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
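
    A minimal single-signal sketch of the frequency-domain ADMM scheme the abstract describes is shown below. It follows the general structure of such solvers (an FFT-domain x-update via the Sherman-Morrison formula for the per-frequency rank-one system, a soft-thresholding y-update), while dictionary learning, boundary handling and stopping criteria are omitted; all names and parameter values are illustrative, not the patented algorithm itself.

      import numpy as np

      def csc_admm(s, D, lam=0.1, rho=1.0, iters=100):
          """Sketch of frequency-domain ADMM for convolutional sparse coding:
              min_x 0.5*||sum_m d_m * x_m - s||^2 + lam * sum_m ||x_m||_1
          s: 1-D signal of length N, D: (M, N) zero-padded dictionary filters."""
          M, N = D.shape
          Df, Sf = np.fft.fft(D, axis=1), np.fft.fft(s)
          Y = np.zeros((M, N))                       # split variable (sparse maps)
          U = np.zeros((M, N))                       # scaled dual variable
          c = np.sum(np.abs(Df) ** 2, axis=0) + rho  # per-frequency denominator
          for _ in range(iters):
              # x-update: solve (D^H D + rho I) x = D^H s + rho (y - u) per
              # frequency bin via the Sherman-Morrison formula (rank-one system).
              B = np.conj(Df) * Sf + rho * np.fft.fft(Y - U, axis=1)
              DB = np.sum(Df * B, axis=0)
              X = np.real(np.fft.ifft((B - np.conj(Df) * (DB / c)) / rho, axis=1))
              # y-update: soft-thresholding, the proximal operator of the l1 term.
              Y = np.sign(X + U) * np.maximum(np.abs(X + U) - lam / rho, 0.0)
              U += X - Y                             # dual update
          return Y

      rng = np.random.default_rng(0)
      s = rng.standard_normal(256)
      D = np.zeros((8, 256))
      D[:, :16] = rng.standard_normal((8, 16))       # 8 zero-padded length-16 filters
      coeffs = csc_admm(s, D)
      print("nonzero coefficients:", int(np.count_nonzero(coeffs)))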

  14. Efficiency of scanning automatons

    International Nuclear Information System (INIS)

    Shkundenkov, V.N.

    1977-01-01

    Methods for improving the efficiency of a picture processing system based on an automatic scanner are investigated. Two types of such a system are discussed. In the first case the system contains both automatic and semi-automatic scanners. In the second case the system includes only automatic scanners with man-to-computer dialog facilities. To analyze the role of the automatic scanner and the role of the operator in the processing system, use is made of the processing system balance equation. It is shown that the picture processing system should be designed in two steps. The first step should, above all, ensure high efficiency in processing, though high capacity is not obligatory. The second step is aimed at higher capacity along with high efficiency. Such a two-step design makes it possible to solve the problem of higher capacity and lower cost of picture processing.

  15. Efficiency in Microfinance Cooperatives

    Directory of Open Access Journals (Sweden)

    HARTARSKA, Valentina

    2012-12-01

    Full Text Available In recognition of cooperatives’ contribution to the socio-economic well-being of their participants, the United Nations has declared 2012 as the International Year of Cooperatives. Microfinance cooperatives make a large part of the microfinance industry. We study efficiency of microfinance cooperatives and provide estimates of the optimal size of such organizations. We employ the classical efficiency analysis consisting of estimating a system of equations and identify the optimal size of microfinance cooperatives in terms of their number of clients (outreach efficiency, as well as dollar value of lending and deposits (sustainability. We find that microfinance cooperatives have increasing returns to scale which means that the vast majority can lower cost if they become larger. We calculate that the optimal size is around $100 million in lending and half of that in deposits. We find less robust estimates in terms of reaching many clients with a range from 40,000 to 180,000 borrowers.

  16. Efficient Learning Design

    DEFF Research Database (Denmark)

    Godsk, Mikkel

    This paper presents the current approach to implementing educational technology with learning design at the Faculty of Science and Technology, Aarhus University, by introducing the concept of ‘efficient learning design’. The underlying hypothesis is that implementing learning design is more than...... engaging educators in the design process and developing teaching and learning, it is a shift in educational practice that potentially requires a stakeholder analysis and ultimately a business model for the deployment. What is most important is to balance the institutional, educator, and student...... perspectives and to consider all these in conjunction in order to obtain a sustainable, efficient learning design. The approach to deploying learning design in terms of the concept of efficient learning design, the catalyst for educational development, i.e. the learning design model and how it is being used...

  17. Measuring efficiency in logistics

    Directory of Open Access Journals (Sweden)

    Milan Milovan Andrejić

    2013-06-01

    Full Text Available Dynamic market and environmental changes greatly affect the operation of logistics systems. Logistics systems have to carry out their activities and processes in an efficient way. The main objective of this paper is to analyze different aspects of efficiency measurement in logistics and to propose appropriate measurement models. Measuring efficiency in logistics is a complex process that requires consideration of all subsystems, processes and activities as well as the impact of various financial, operational, environmental, quality and other factors. The proposed models are based on the Data Envelopment Analysis (DEA) method. They could help managers in decision making and corrective action processes. The tests and results of the models show the importance of input and output variable selection.
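
    The abstract names Data Envelopment Analysis but does not give a formulation; as one concrete instance (not necessarily the authors' model), the sketch below solves the standard input-oriented CCR envelopment linear program with scipy, using invented toy data for a few logistics units.

      import numpy as np
      from scipy.optimize import linprog

      def ccr_efficiency(X, Y, k):
          """Input-oriented CCR efficiency of unit k.
          X: (n_inputs, n_units), Y: (n_outputs, n_units)."""
          m, n = X.shape
          s = Y.shape[0]
          c = np.zeros(1 + n)
          c[0] = 1.0                                     # minimise theta
          # Inputs:  sum_j lambda_j x_ij - theta * x_ik <= 0
          A_in = np.hstack([-X[:, [k]], X])
          # Outputs: -sum_j lambda_j y_rj <= -y_rk
          A_out = np.hstack([np.zeros((s, 1)), -Y])
          res = linprog(c,
                        A_ub=np.vstack([A_in, A_out]),
                        b_ub=np.concatenate([np.zeros(m), -Y[:, k]]),
                        bounds=[(0, None)] * (1 + n),
                        method="highs")
          return res.x[0]

      # Toy data: 5 logistics units, 2 inputs (e.g. fleet size, staff) and
      # 1 output (e.g. deliveries) -- purely illustrative numbers.
      X = np.array([[20., 30., 40., 20., 50.],
                    [15., 20., 35., 30., 25.]])
      Y = np.array([[100., 140., 160., 110., 150.]])
      for k in range(X.shape[1]):
          print(f"unit {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")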

  18. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  19. Modeling critical episodes of air pollution by PM10 in Santiago, Chile: Comparison of the predictive efficiency of parametric and non-parametric statistical models

    Directory of Open Access Journals (Sweden)

    Sergio A. Alvarado

    2010-12-01

    Full Text Available Objective: To evaluate the predictive efficiency of two statistical models (one parametric and the other non-parametric) for predicting critical episodes of air pollution exceeding daily air quality standards in Santiago, Chile, by using the next-day PM10 maximum 24h value. Accurate prediction of such episodes would allow restrictive measures to be applied by health authorities to reduce their seriousness and protect the community's health. Methods: We used the PM10 concentrations registered by a station of the MACAM-2 Air Quality Monitoring Network, considering 152 daily observations of 14 variables, together with meteorological information gathered from 2001 to 2004. To construct predictive models, we fitted a parametric Gamma model using STATA v11 software and a non-parametric MARS model using a demo version of the MARS v2.0 statistical software distributed by Salford-Systems. Results: Both modeling methods show a high correlation between observed and predicted values. The Gamma models yield better hits than MARS for PM10 concentrations with values ...
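
    As a schematic of the comparison described above (not the original STATA/MARS analysis), the sketch below fits a parametric Gamma regression and a flexible non-parametric stand-in (gradient boosting, since MARS is not part of scikit-learn) to synthetic data, then compares simple hit rates on a critical-episode flag. The data, the exceedance threshold and the model settings are all illustrative.

      import numpy as np
      from sklearn.linear_model import GammaRegressor
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)

      # Synthetic stand-in for the monitoring data: a few meteorological
      # predictors and a strictly positive next-day PM10 maximum.
      X = rng.normal(size=(400, 5))
      mu = np.exp(1.5 + 0.6 * X[:, 0] - 0.4 * X[:, 1] + 0.2 * X[:, 2] * X[:, 3])
      y = rng.gamma(shape=4.0, scale=mu / 4.0)          # Gamma-distributed response

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      gamma_glm = GammaRegressor(alpha=1e-3).fit(X_tr, y_tr)       # parametric
      nonparam = GradientBoostingRegressor().fit(X_tr, y_tr)       # flexible stand-in

      threshold = np.quantile(y, 0.9)                   # illustrative "critical" level
      for name, model in [("Gamma GLM", gamma_glm), ("non-parametric", nonparam)]:
          hits = np.mean((model.predict(X_te) > threshold) == (y_te > threshold))
          print(f"{name:>15}: hit rate on the critical-episode flag = {hits:.2f}")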

  20. Energy efficiency; Energieffektivisering

    Energy Technology Data Exchange (ETDEWEB)

    2009-06-15

    The Low Energy Panel proposes to halve consumption in buildings. The Panel has proposed a halving of consumption in buildings by 2040 and a 20 percent reduction in consumption in industry by 2020. The Panel considers it possible to gradually reduce consumption in buildings from the current level of 80 TWh by 10 TWh in 2020, 25 TWh in 2030 and 40 TWh in 2040. According to the committee, such a halving can be reached through significant energy efficiency efforts: major rehabilitations, energy efficiency improvements in the existing building stock and stricter requirements for new construction. For industry, the Panel recommends that a political goal be set of at least a 20 percent reduction in specific energy consumption in industry and primary industry, beyond general technological development, by the end of 2020. This is equivalent to approximately 17 TWh based on the current level of activity. The Panel believes that a 5 percent reduction should be achieved by the end of 2012 by carrying out simple measures. Since March 2009 the Low Energy Panel has considered possibilities for strengthening the authorities' work with energy efficiency in Norway. The broadly composed panel puts forward proposals for a comprehensive approach to increased energy efficiency, in particular in the building and industry sectors. The Panel has looked into the potential for energy efficiency, barriers to energy efficiency, an assessment of strengths and weaknesses in the existing policy instruments, and the Panel members' recommendations. In addition the report contains a review of theoretical principles for the effects of instruments together with extensive background material. One of the committee members has chosen to enter special remarks on the main recommendations in the report. (AG)

  1. Financing Energy Efficient Homes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

    Existing buildings require over 40% of the world's total final energy consumption, and account for 24% of world CO2 emissions (IEA, 2006). Much of this consumption could be avoided through improved efficiency of building energy systems (IEA, 2006) using current, commercially-viable technology. In most cases, these technologies make economic sense on a life-cycle cost analysis (IEA, 2006b). Moreover, to the extent that they reduce dependence on risk-prone fossil energy sources, energy efficient technologies also address concerns of energy security.

  2. Financing Energy Efficient Homes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

    Existing buildings require over 40% of the world's total final energy consumption, and account for 24% of world CO2 emissions (IEA, 2006). Much of this consumption could be avoided through improved efficiency of building energy systems (IEA, 2006) using current, commercially-viable technology. In most cases, these technologies make economic sense on a life-cycle cost analysis (IEA, 2006b). Moreover, to the extent that they reduce dependence on risk-prone fossil energy sources, energy efficient technologies also address concerns of energy security.

  3. The Energy Efficient Enterprise

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, Bashir

    2010-09-15

    Since rising energy costs have become a crucial factor for the economy of production processes, the optimization of energy efficiency is of essential importance for industrial enterprises. Enterprises establish energy saving programs, specific to their needs. The most important elements of these energy efficiency programs are energy savings, energy controlling, energy optimization, and energy management. This article highlights the industrial enterprise approach to establish sustainable energy management programs based on the above elements. Globally, if organizations follow this approach, they can significantly reduce the overall energy consumption and cost.

  4. Dimensions of energy efficiency

    International Nuclear Information System (INIS)

    Ramani, K.V.

    1992-01-01

    In this address the author describes three dimensions of energy efficiency in order of increasing costs: conservation, resource and technology substitution, and changes in economic structure. He emphasizes the importance of economic rather than environmental rationales for energy efficiency improvements in developing countries. These countries do not place high priority on the problems of global climate change. Opportunities for new technologies may exist in resource transfer, new fuels and, possibly, small reactors. More research on economic and social impacts of technologies with greater sensitivity to user preferences is needed

  5. Efficient use of energy

    CERN Document Server

    Dryden, IGC

    2013-01-01

    The Efficient Use of Energy, Second Edition is a compendium of papers discussing the efficiency with which energy is used in industry. The collection covers relevant topics in energy handling and describes the more important features of plant and equipment. The book is organized into six parts. Part I presents the various methods of heat production. The second part discusses the use of heat in industry and includes topics in furnace design, industrial heating, boiler plants, and water treatment. Part III deals with the production of mechanical and electrical energy. It tackles the principles o

  6. Efficient computation of hashes

    International Nuclear Information System (INIS)

    Lopes, Raul H C; Franqueira, Virginia N L; Hobson, Peter R

    2014-01-01

    The sequential computation of hashes at the core of many distributed storage systems and found, for example, in grid services can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgard engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
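
    The paper's parallel Keccak prototype is not reproduced here; the sketch below only illustrates why hash-tree modes parallelise where Merkle-Damgard chaining cannot: leaf blocks are hashed independently and then combined pairwise. The block size and the domain-separation prefixes are illustrative choices, not the paper's parameters.

      import hashlib

      def leaf_hash(block: bytes) -> bytes:
          return hashlib.sha3_256(b"\x00" + block).digest()

      def node_hash(left: bytes, right: bytes) -> bytes:
          return hashlib.sha3_256(b"\x01" + left + right).digest()

      def merkle_root(data: bytes, block_size: int = 1024) -> bytes:
          """Hash-tree (Merkle) mode over fixed-size blocks.  The leaf hashes are
          independent of each other, so they can be computed in parallel, unlike
          the strictly sequential chaining of a plain Merkle-Damgard style call."""
          blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)] or [b""]
          level = [leaf_hash(b) for b in blocks]          # parallelisable step
          while len(level) > 1:
              if len(level) % 2:                          # duplicate the last node if odd
                  level.append(level[-1])
              level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
          return level[0]

      print(merkle_root(b"example payload " * 500).hex())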

  7. Probabilistic Forecasting of Photovoltaic Generation: An Efficient Statistical Approach

    DEFF Research Database (Denmark)

    Wan, Can; Lin, Jin; Song, Yonghua

    2017-01-01

    This letter proposes a novel efficient probabilistic forecasting approach to accurately quantify the variability and uncertainty of the power production from photovoltaic (PV) systems. Distinguished from most existing models, a linear programming based prediction interval construction model for PV power generation is proposed based on extreme learning machine and quantile regression, featuring high reliability and computational efficiency. The proposed approach is validated through numerical studies on PV data from Denmark.
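
    The letter's extreme-learning-machine and linear-programming construction is not reproduced here; as a stand-in that illustrates the same reliability/sharpness evaluation, the sketch below builds central prediction intervals from two gradient-boosted quantile regressors on synthetic PV-like data. All data and settings are invented for the example.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(4)

      # Synthetic stand-in for PV data: irradiance-like features and noisy power.
      X = rng.uniform(0, 1, size=(2000, 3))
      power = 5.0 * X[:, 0] * (1 - 0.3 * X[:, 1]) + rng.normal(0, 0.4 + 0.8 * X[:, 0])
      X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], power[:1500], power[1500:]

      # A (1 - beta) central prediction interval from two quantile regressors.
      beta = 0.10
      lower = GradientBoostingRegressor(loss="quantile", alpha=beta / 2).fit(X_tr, y_tr)
      upper = GradientBoostingRegressor(loss="quantile", alpha=1 - beta / 2).fit(X_tr, y_tr)

      lo, hi = lower.predict(X_te), upper.predict(X_te)
      coverage = np.mean((y_te >= lo) & (y_te <= hi))
      print(f"empirical coverage: {coverage:.2%} (target {1 - beta:.0%}), "
            f"mean interval width: {np.mean(hi - lo):.2f}")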

  8. Renewable and efficient electric power systems

    CERN Document Server

    Masters, Gilbert M

    2013-01-01

    A solid, quantitative, practical introduction to a wide range of renewable energy systems, in a completely updated new edition. The second edition of Renewable and Efficient Electric Power Systems provides a solid, quantitative, practical introduction to a wide range of renewable energy systems. For each topic, essential theoretical background is introduced, practical engineering considerations associated with designing systems and predicting their performance are provided, and methods for evaluating the economics of these systems are presented. While the book focuses on

  9. Corporate efficiency in Europe

    Czech Academy of Sciences Publication Activity Database

    Hanousek, Jan; Kočenda, E.; Shamshur, Anastasiya

    2015-01-01

    Vol. 32, June (2015), pp. 24-40 ISSN 0929-1199 R&D Projects: GA ČR(CZ) GA15-15927S Institutional support: PRVOUK-P23 Keywords: efficiency * ownership structure * firms Subject RIV: AH - Economics Impact factor: 1.286, year: 2015

  10. Robust efficient video fingerprinting

    Science.gov (United States)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.
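
    The system's maximally stable volumes, hash functions and SIFT-based verification are not reproduced here; the toy sketch below only illustrates the two-stage idea: a fast bin-based polling stage that collects likely candidates, followed by a slower verification stage that picks the overall best match. The signature sizes, band layout and thresholds are all invented for the example.

      import numpy as np
      from collections import defaultdict

      rng = np.random.default_rng(5)

      # Toy "fingerprints": one 256-bit signature per reference clip.
      n_refs, n_bits, n_bands = 5000, 256, 16
      band_len = n_bits // n_bands
      refs = rng.integers(0, 2, size=(n_refs, n_bits), dtype=np.uint8)

      # Stage-1 index: each signature is split into bands and each band value
      # becomes a bucket key; a query polls the buckets and counts votes.
      index = [defaultdict(list) for _ in range(n_bands)]
      for ref_id, sig in enumerate(refs):
          for b in range(n_bands):
              index[b][sig[b * band_len:(b + 1) * band_len].tobytes()].append(ref_id)

      def match(query, min_votes=2, max_hamming=40):
          votes = defaultdict(int)
          for b in range(n_bands):                        # fast candidate polling
              key = query[b * band_len:(b + 1) * band_len].tobytes()
              for ref_id in index[b].get(key, []):
                  votes[ref_id] += 1
          candidates = [r for r, v in votes.items() if v >= min_votes]
          best, best_d = None, max_hamming + 1            # slower verification stage
          for r in candidates:
              d = int(np.count_nonzero(refs[r] != query))
              if d < best_d:
                  best, best_d = r, d
          return best

      query = refs[1234].copy()
      flips = rng.choice(n_bits, size=12, replace=False)  # distort ~5% of the bits
      query[flips] ^= 1
      print("matched reference:", match(query))           # 1234, with high probability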

  11. Efficient Immutable Collections

    NARCIS (Netherlands)

    Steindorfer, M.J.

    2017-01-01

    This thesis proposes novel and efficient data structures, suitable for immutable collection libraries, that carefully balance memory footprint and runtime performance of operations, and are aware of constraints and platform co-design challenges on the Java Virtual Machine (JVM). Collection data

  12. Efficient XPath Evaluation

    NARCIS (Netherlands)

    Wang, B.; Feng, L.; Shen, Y.

    Inspired by the best querying performance of ViST among the rest of the approaches in the literature, and meanwhile to overcome its shortcomings, in this paper, we present another efficient and novel geometric sequence mechanism, which transforms XML documents and XPath queries into the

  13. ERP=Efficiency

    Science.gov (United States)

    Violino, Bob

    2008-01-01

    This article discusses the enterprise resource planning (ERP) system. Deploying an ERP system is one of the most extensive--and expensive--IT projects a college or university can undertake. The potential benefits of ERP are significant: a more smoothly running operation with efficiencies in virtually every area of administration, from automated…

  14. Microeconomics : Equilibrium and Efficiency

    NARCIS (Netherlands)

    Ten Raa, T.

    2013-01-01

    Microeconomics: Equilibrium and Efficiency teaches how to apply microeconomic theory in an innovative, intuitive and concise way. Using real-world, empirical examples, this book not only covers the building blocks of the subject, but helps gain a broad understanding of microeconomic theory and

  15. Fuzzy efficiency without convexity

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Balezentis, Tomas

    2014-01-01

    The approach builds directly upon the definition of Farrell's indexes of technical efficiency used in crisp FDH. Therefore we do not require the use of fuzzy programming techniques but only utilize ranking probabilities of intervals as well as a related definition of dominance between pairs of intervals. We...

  16. Institutions, Equilibria and Efficiency

    DEFF Research Database (Denmark)

    Competition and efficiency is at the core of economic theory. This volume collects papers of leading scholars, which extend the conventional general equilibrium model in important ways. Efficiency and price regulation are studied when markets are incomplete, and existence of equilibria in such settings is proven under very general preference assumptions. The model is extended to include geographical location choice, a commodity space incorporating manufacturing imprecision and preferences for club-membership, schools and firms. Inefficiencies arising from household externalities or group membership are evaluated. Core equivalence is shown for bargaining economies. The theory of risk aversion is extended and the relation between risk taking and wealth is experimentally investigated. Other topics include: determinacy in OLG with cash-in-advance constraints, income distribution and democracy...

  17. Web anonymization efficiency study

    Science.gov (United States)

    Sochor, Tomas

    2017-11-01

    The analysis of the efficiency of TOR, JonDo and CyberGhost (measured as the latency increase and transmission speed decrease) is presented in the paper. Results showed that all tools have a relatively favorable latency increase (no more than a 60% RTT increase). The transmission speed decrease was much more significant (more than 60%), and even more so for JonDo (above 90%).

  18. Energy efficiency in Finland

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    In Finland a significant portion of energy originates from renewable sources and cogeneration, that is, combined production of electricity and heat. Combined heat and electricity production is typical in the Finnish industry and in the district heating sector. One third of all electricity and 15 % of district heating is produced by cogeneration. District heating schemes provide about 45 % of heat in buildings. Overall efficiency in industry exceeds 80 % and is even higher in the district heating sector. In 1996, 25 % of Finland's primary energy was produced from renewable energy sources, which is a far higher proportion than the European Union average of 6 %. Finland is one of the leading users of bioenergy. Biomass, including peat, provides approximately 50 % of fuel consumed by industry and is utilised in significant amounts in combined heat and electricity plants. For example, in the pulp and paper industry, by burning black liquor and bark during the production of chemical pulp, significant amounts of energy are generated and used in paper mills. Conservation and efficient use of energy are central to the Finnish Government's Energy Strategy. The energy conservation programme aims to increase energy efficiency by 10-20 % by the year 2010. Energy saving technology plays a key role in making the production and use of energy more efficient. In 1996, of the FIM 335 million (ECU 57 million) spent on funding research, FIM 120 million (ECU 20 million) was spent on research into energy conservation

  19. Cataloging Efficiency and Effectiveness.

    Science.gov (United States)

    McCain, Cheryl; Shorten, Jay

    2002-01-01

    Reports on a survey of academic libraries that was conducted to supplement findings of cost studies by providing measures of efficiency and effectiveness for cataloging departments based on reported productivity, number of staff, task distribution, and quality measures including backlogs, authority control, and database maintenance. Identifies…

  20. Jet Inlet Efficiency

    Science.gov (United States)

    2013-08-08

    AFRL-RW-EG-TR-2014-044, Jet Inlet Efficiency. Authors: Nigel Plumb, Taylor Sykes-Green, Keith Williams, John Wohleber. Munitions Aerodynamics Sciences...

  1. An efficiency correction model

    NARCIS (Netherlands)

    Francke, M.K.; de Vos, A.F.

    2009-01-01

    We analyze a dataset containing costs and outputs of 67 American local exchange carriers in a period of 11 years. This data has been used to judge the efficiency of BT and KPN using static stochastic frontier models. We show that these models are dynamically misspecified. As an alternative we

  2. Higher Efficiency HVAC Motors

    Energy Technology Data Exchange (ETDEWEB)

    Flynn, Charles Joseph [QM Power, Inc., Kansas City, MO (United States)

    2018-02-13

    The objective of this project was to design and build a cost competitive, more efficient heating, ventilation, and air conditioning (HVAC) motor than what is currently available on the market. Though different potential motor architectures among QMP’s primary technology platforms were investigated and evaluated, including through the building of numerous prototypes, the project ultimately focused on scaling up QM Power, Inc.’s (QMP) Q-Sync permanent magnet synchronous motors from available sub-fractional horsepower (HP) sizes for commercial refrigeration fan applications to larger fractional horsepower sizes appropriate for HVAC applications, and to add multi-speed functionality. The more specific goal became the research, design, development, and testing of a prototype 1/2 HP Q-Sync motor that has at least two operating speeds and 87% peak efficiency compared to incumbent electronically commutated motors (EC or ECM, also known as brushless direct current (DC) motors), the heretofore highest efficiency HVACR fan motor solution, at approximately 82% peak efficiency. The resulting motor prototype built achieved these goals, hitting 90% efficiency and .95 power factor at full load and speed, and 80% efficiency and .7 power factor at half speed. Q-Sync, developed in part through a DOE SBIR grant (Award # DE-SC0006311), is a novel, patented motor technology that improves on electronically commutated permanent magnet motors through an advanced electronic circuit technology. It allows a motor to “sync” with the alternating current (AC) power flow. It does so by eliminating the constant, wasteful power conversions from AC to DC and back to AC through the synthetic creation of a new AC wave on the primary circuit board (PCB) by a process called pulse width modulation (PWM; aka electronic commutation) that is incessantly required to sustain motor operation in an EC permanent magnet motor. The Q-Sync circuit improves the power factor of the motor by removing all

  3. Appliance Efficiency Standards and Price Discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Spurlock, Cecily Anna [Univ. of California, Berkeley, CA (United States)

    2013-05-08

    I explore the effects of two simultaneous changes in minimum energy efficiency and ENERGY STAR standards for clothes washers. Adapting the Mussa and Rosen (1978) and Ronnen (1991) second-degree price discrimination model, I demonstrate that clothes washer prices and menus adjusted to the new standards in patterns consistent with a market in which firms had been price discriminating. In particular, I show evidence of discontinuous price drops at the time the standards were imposed, driven largely by the mid-low efficiency segments of the market. The price discrimination model predicts this result. In a perfectly competitive market, on the other hand, prices should increase for these market segments. Additionally, new models proliferated in the highest efficiency market segment following the standard changes. Finally, I show that firms appeared to use different adaptation strategies at the two instances of the standards changing.

  4. Making detailed predictions makes (some) predictions worse

    Science.gov (United States)

    Kelly, Theresa F.

    In this paper, we investigate whether making detailed predictions about an event makes other predictions worse. Across 19 experiments, 10,895 participants, and 415,960 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes information that is relatively useless for predicting the winning team more readily accessible in memory and therefore incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of games will and will not be susceptible to the negative effect of making detailed predictions.

  5. Energy Efficiency Project Development

    Energy Technology Data Exchange (ETDEWEB)

    IUEP

    2004-03-01

    The International Utility Efficiency Partnerships, Inc. (IUEP) has been a leader among the industry groups that have supported voluntary initiatives to promote international energy efficiency projects and address global climate change. The IUEP maintains its leadership by both supporting international greenhouse gas (GHG) reduction projects under the auspices of the U.S. Department of Energy (DOE) and by partnering with U.S. and international organizations to develop and implement strategies and specific energy efficiency projects. The goals of the IUEP program are to (1) provide a way for U.S. industry to maintain a leadership role in international energy efficiency infrastructure projects; (2) identify international energy project development opportunities to continue its leadership in supporting voluntary market-based mechanisms to reduce GHG emissions; and (3) demonstrate private sector commitment to voluntary approaches to global climate issues. The IUEP is dedicated to identifying, promoting, managing, and assisting in the registration of international energy efficiency projects that result in demonstrated voluntary reductions of GHG emissions. This Final Technical Report summarizes the IUEP's work in identifying, promoting, managing, and assisting in development of these projects and IUEP's effort in creating international cooperative partnerships to support project development activities that develop and deploy technologies that (1) increase efficiency in the production, delivery and use of energy; (2) increase the use of cleaner, low-carbon fuels in processing products; and (3) capture/sequester carbon gases from energy systems. Through international cooperative efforts, the IUEP intends to strengthen partnerships for energy technology innovation and demonstration projects capable of providing cleaner energy in a cost-effective manner. As detailed in this report, the IUEP met program objectives and goals during the reporting period January 1

  6. Efficiency and Logistics

    CERN Document Server

    Hompel, Michael; Klumpp, Matthias

    2013-01-01

    The „EffizienzCluster LogistikRuhr“ was a winner in the Leading Edge Science Cluster competition run by the German federal Ministry of Education and Research. The mission and aim of the „EffizienzCluster LogistikRuhr“ is to facilitate tomorrow’s individuality – in the sense of individual goods supply, mobility, and production – using 75 percent of today’s resources. Efficiency – both in economical and ecological terms – is enabled by state-of-the-art and innovative logistical solutions including transportation, production and intralogistics. These proceedings “Efficiency and Logistics” give first answers from 27 research projects as an insight into the current state of research of Europe’s leading research and development cluster in logistics and as a contribution to the discussion on how logistics as a science can help to cope with foreseeable resource shortage and sustainability as global challenges.

  7. Danish Energy Efficiency Policy

    DEFF Research Database (Denmark)

    Togeby, Mikael; Larsen, Anders; Dyhr-Mikkelsen, Kirsten

    2009-01-01

    Ten groups of policy instruments for promoting energy efficiency are actively used in Denmark. Among these are EU instruments such as the CO2 emissions trading scheme and labelling of appliances and of all buildings, combined with national instruments such as high taxes, especially on households and the public sector, obligations for energy companies (electricity, natural gas, district heating, and oil) to deliver documented savings, strict building codes, special instructions for the public sector, and an Electricity Saving Trust. A political agreement from 2005 states that an evaluation of the entire Danish energy efficiency policy portfolio must be carried out before end 2008 and put forward for discussion among governing parties no later than February 2009. A consortium comprising Ea Energy Analyses, Niras, the Department of Society and Globalisation (Roskilde University) and 4-Fact ...

  8. Energy efficiency system development

    Science.gov (United States)

    Leman, A. M.; Rahman, K. A.; Chong, Haw Jie; Salleh, Mohd Najib Mohd; Yusof, M. Z. M.

    2017-09-01

    Given the massive usage of electrical energy in Malaysia, energy efficiency is now one of the key areas of focus in climate change mitigation. This paper focuses on the development of an energy efficiency system for household electrical appliances in residential areas. During the project, questionnaires were distributed and visits were paid to a few selected residential areas, and advice on how to save energy was shared with the participants. Based on the collected data, the system developed by the UTHM Energy Team was then evaluated with respect to the consumers' behaviour in using electrical appliances and the potential reduction targeted by the team. By the end of the project, 60% of the participants had successfully achieved the reduction in electrical power consumption set by the UTHM Energy Team. The reasons for success or failure are analysed further in this project.

  9. Negotiating Efficient PPP Contracts

    DEFF Research Database (Denmark)

    Tvarnø, Christina D.

    This paper concerns Public Private Partnership (PPP) contracts in relation to the coming new 2014/24/EU public procurement directive. The new EU public procurement directive gives the public authority the opportunity to negotiate PPPs much more when it is implemented in national law, an opportunity the member states should consider using when procuring a PPP. This paper looks at the negotiation and contracting of a PPP in an economic theoretical and EU public procurement perspective and discusses how to establish an efficient PPP contract under a strong public law doctrine. Governments ... procurement law. Furthermore, the paper seeks to establish a connection between public law, private law and the efficient PPP contract by drawing upon economic theory and empirical contract data from UK, US and Danish partnering contracts from the construction industry and the aim of contracting joint utility ...

  10. Carbon Efficient Building Solutions

    Directory of Open Access Journals (Sweden)

    Pellervo Matilainen

    2010-03-01

    Full Text Available Traditionally, Finnish legislation has focused on energy use and especially on the energy used for heating space in buildings. However, in many cases this does not lead to the optimal concept with respect to minimizing greenhouse gases. This paper studies how CO2 emission levels are affected by different measures to reduce energy use in buildings. It presents two real apartment buildings with different options for energy efficiency and power sources. The calculations clearly show that in the future electricity and domestic hot water use will have high importance with respect to energy efficiency, and therefore also to CO2 equivalent (eq) emissions. The importance increases as the energy efficiency of the building increases. There are big differences between average Finnish production and individual power plants; CO2 eq emissions might nearly double depending on the energy source and the power plant type. Both a building with efficient district heating as its power source and a building with ground heat plus nuclear power electricity as a complementary electricity source performed very similarly with respect to CO2 eq emissions. However, it is dangerous to conclude that it does not matter which energy source is chosen. If, hypothetically, the use of district heating were to drop dramatically, the primary energy factor and CO2 eq emissions from electricity would rise, which in turn would increase the emissions of ground heat systems. A problem with yearly calculations is that the timing of energy demand, which is very important and sometimes even crucial, is always excluded.

  11. Stirling Engine Cycle Efficiency

    OpenAIRE

    Naddaf, Nasrollah

    2012-01-01

    ABSTRACT This study strives to provide a clear explanation of the Stirling engine and its efficiency using new automation technology and the LabVIEW software. This heat engine was invented by Robert Stirling, a Scottish minister, in 1816. The engine's working principles are based on the laws of thermodynamics and the volume expansion of ideal gases at different temperatures. Basically there are three types of Stirling engines: the gamma, beta and alpha models. The commissioner of the thesis ...

  12. Comminution efficiency attracts attention

    International Nuclear Information System (INIS)

    Daniel, M.J.; Lewis-Gray, E.

    2011-01-01

    The mining sector, both at a technical and board level, is pursuing opportunities to achieve cost savings and reduce energy usage in its operations. Research and debate on step-change efficiency benefits is particularly evident in the field of comminution (crushing and grinding) circuit design and operation. Published literature that quantifies mining-related energy consumption in South Africa and Australia has been reviewed by the authors.

  13. HIGH EFFICIENCY TURBINE

    OpenAIRE

    VARMA, VIJAYA KRUSHNA

    2012-01-01

    Varma designed ultra-modern, high-efficiency turbines which can use gas, steam or fuels as feed to produce electricity or mechanical work for a wide range of usages and applications in industries or at work sites. Varma turbine engines can be used in all types of vehicles. These turbines can also be used in aircraft, ships, battle tanks, dredgers, mining equipment, earth moving machines, etc. Salient features of Varma turbines: 1. Varma turbines are simple in design, easy to manufac...

  14. Efficient Fingercode Classification

    Science.gov (United States)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems e. g. systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by firstly classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation which is an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research firstly investigates the various fast search algorithms in vector quantization (VQ) and the potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
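
    For readers unfamiliar with the setting, the sketch below shows the kind of nearest-neighbour search the paper accelerates: a plain k-NN vote over fingercode-style feature vectors with a generic partial-distance early exit. It is a simplified illustration on synthetic data, not the authors' pyramid-based search, and the dimensions and names are assumptions.

```python
import numpy as np

def knn_classify(query, gallery, labels, k=5):
    """k-nearest-neighbour vote over fingercode-style feature vectors.

    The squared distance is accumulated coordinate by coordinate and abandoned as
    soon as it exceeds the current k-th best distance (a generic partial-distance
    early exit, in the spirit of fast VQ search, not the paper's pyramid scheme).
    """
    best = []                        # up to k (distance, label) pairs, sorted
    bound = np.inf
    for vec, lab in zip(gallery, labels):
        dist = 0.0
        for a, b in zip(query, vec):
            dist += (a - b) ** 2
            if dist > bound:         # cannot enter the current top-k: stop early
                break
        else:
            best.append((dist, lab))
            best.sort(key=lambda pair: pair[0])
            best = best[:k]
            if len(best) == k:
                bound = best[-1][0]
    votes = {}
    for _, lab in best:
        votes[lab] = votes.get(lab, 0) + 1
    return max(votes, key=votes.get)

# Toy usage: 200 gallery "fingercodes" drawn around 5 class prototypes.
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 192))
labels = rng.integers(0, 5, size=200)
gallery = centers[labels] + 0.3 * rng.normal(size=(200, 192))
query = centers[2] + 0.3 * rng.normal(size=192)      # unseen print from class 2
print("predicted class:", knn_classify(query, gallery, labels, k=5))
```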

  15. Efficient Non Linear Loudspeakers

    DEFF Research Database (Denmark)

    Petersen, Bo R.; Agerkvist, Finn T.

    2006-01-01

    Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating non-linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption.

  16. Efficient HVAC. New products

    International Nuclear Information System (INIS)

    2016-01-01

    Jung is responding to the challenge of energy efficiency, ease of operation and economic profitability in all of its solutions for the tertiary sector, whether for newly constructed buildings or refurbishments, for full management of the electrical system or the partial control of lighting, HVAC, mood settings, access control, etc., for the bedrooms or specific areas of the building. In the specific case of hotels, Jung offers each a custom-made solution in line with its possibilities and objectives. (Author)

  17. Economics of appliance efficiency

    International Nuclear Information System (INIS)

    Tiedemann, K.H.

    2009-01-01

    Several significant developments occurred in 2001 that affect the impact of market transformation programs. This paper presented and applied an econometric approach to the identification and estimation of market models for refrigerators, clothes washers, dishwashers and room air conditioners. The purpose of the paper was to understand the impact of energy conservation policy developments on sales of energy efficient appliances. The paper discussed the approach with particular reference to building a database of sales and drivers of sales using publicly available information; estimation of the determinants of sales using econometric models; and estimation of the individual impacts of prices, gross domestic product (GDP) and energy conservation policies on sales using regression results. Market and policy developments were also presented, such as the Change a Light, Save the World promotion; the California energy crisis; and the Pacific Northwest drought-induced hydro power shortage. It was concluded that an increase in GDP increased the sales of both more efficient and less efficient refrigerators, clothes washers, dishwashers, and room air conditioners. An increase in electricity price increased sales of Energy Star refrigerators, clothes washers, dishwashers, and room air conditioners. 4 refs., 8 tabs.

  18. A Kernel for Protein Secondary Structure Prediction

    OpenAIRE

    Guermeur , Yann; Lifchitz , Alain; Vert , Régis

    2004-01-01

    http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10338&mode=toc; Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...

  19. Energy efficiency fallacies revisited

    International Nuclear Information System (INIS)

    Brookes, Leonard

    2000-01-01

    A number of governments including that of the UK subscribe to the belief that a national program devoted to raising energy efficiency throughout the economy provides a costless - indeed profitable - route to meeting international environmental obligations. This is a seductive policy. It constitutes the proverbial free lunch - not only avoiding politically unpopular measures like outlawing, taxing or rationing offending fuels or expanding non-carboniferous sources of energy like nuclear power but doing so with economic benefit. The author of this contribution came to doubt the validity of this solution when it was offered as a way of mitigating the effect of the OPEC price hikes of the 1970s, maintaining that economically justified improvement in energy efficiency led to higher levels of energy consumption at the economy-wide level than in the absence of any efficiency response. More fundamentally, he argues that there is no case for preferentially singling out energy, from among all the resources available to us, for efficiency maximisation. The least damaging policy is to determine targets, enact the restrictive measures needed to curb consumption, and then leave it to consumers - intermediate and final - to reallocate all the resources available to them to best effect subject to the new enacted constraints and any others they might be experiencing. There is no reason to suppose that it is right for all the economic adjustment following a new resource constraint to take the form of improvements in the productivity of that resource alone. As many others have argued, any action to impose resource constraint entails an inevitable economic cost in the shape of a reduction in production and consumption possibilities: there would be no free lunch. In the last few years debate about the validity of these contentions has blossomed, especially under the influence of writers on the western side of the Atlantic. In this contribution the author outlines the original arguments

  20. Are large farms more efficient? Tenure security, farm size and farm efficiency: evidence from northeast China

    Science.gov (United States)

    Zhou, Yuepeng; Ma, Xianlei; Shi, Xiaoping

    2017-04-01

    How to increase production efficiency, guarantee grain security, and increase farmers' income using the limited farmland is a great challenge that China is facing. Although theory predicts that secure property rights and moderate-scale management of farmland can increase land productivity, reduce farm-related costs, and raise farmers' income, empirical studies on the size and magnitude of these effects are scarce. A number of studies have examined the impacts of land tenure or farm size on productivity or efficiency, respectively. There are also a few studies linking farm size, land tenure and efficiency together. However, to the best of our knowledge, there are no studies considering tenure security and farm efficiency together for different farm scales in China. In addition, there is little research analyzing the profit frontier. In this study, we particularly focus on the impacts of land tenure security and farm size on farm profit efficiency, using farm-level data collected from 811 households in 23 villages in Liaoning in 2015. Seven different farm scales have been identified to represent small, medium, moderate-scale, and large farms. Technical efficiency is analyzed with a stochastic frontier production function. The profit efficiency is regressed on a set of explanatory variables which includes farm size dummies, land tenure security indexes, and household characteristics. We found that: 1) The technical efficiency scores for production efficiency (average score = 0.998) indicate that production is already very close to the frontier, and thus there is little room to improve production efficiency. However, there is larger scope to raise profit efficiency (average score = 0.768) by investing more in farm size expansion, seed, hired labor, pesticide, and irrigation. 2) Farms between 50-80 mu are most efficient from the viewpoint of profit efficiency. The so-called moderate-scale farms (100-150 mu) according to the governmental guideline show no

  1. Can We Predict Patient Wait Time?

    Science.gov (United States)

    Pianykh, Oleg S; Rosenthal, Daniel I

    2015-10-01

    The importance of patient wait-time management and predictability can hardly be overestimated: For most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen as derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work. Moreover, these models proved to be equally accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on the monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions for patient wait time. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
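
    The "line-size" idea can be illustrated in a few lines of code: regress observed waits on the number of patients already in the queue and use the fitted line for the front-desk display. The sketch below uses simulated data and invented coefficients; it only mirrors the general approach, not the models or parameters of the study.

```python
import numpy as np

# Simulated illustration of a "line-size" model: predicted wait is a linear
# function of how many patients are already in the queue. Data and coefficients
# here are invented, not taken from the study.
rng = np.random.default_rng(1)
line_size = rng.integers(0, 15, size=500)                  # patients ahead in the queue
wait_min = 5 + 7.5 * line_size + rng.normal(0, 6, 500)     # simulated observed waits

slope, intercept = np.polyfit(line_size, wait_min, deg=1)  # least-squares fit
print(f"estimated wait (min) = {intercept:.1f} + {slope:.1f} * patients_in_line")

def predict_wait(n_in_line):
    """Display-ready wait estimate for a patient joining behind n_in_line others."""
    return intercept + slope * n_in_line

print(f"predicted wait with 6 patients ahead: {predict_wait(6):.0f} min")
```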

  2. Energy efficient home in Lebanon

    International Nuclear Information System (INIS)

    1997-01-01

    The purpose of the study is to present new methods and new products that could save money while improving the environment in Lebanon. The cost of energy is increasing and is predicted to increase even more in the future. Environmental issues and awareness are gaining momentum in Lebanon, where electricity production is directly linked to power plants that account for about 30% of the air pollution, which in turn is linked to health issues. There is an immediate need to introduce into the construction industry more energy efficient products which require less energy to operate or which are indirectly linked to energy. In this context, cost-benefit analyses of heating, lighting, painting, energy consumption and lamp burning hours, in addition to fuel burners and gas and electric heaters in buildings, are presented in tables. Finally, there is a lack of awareness of the positive impact on the environment reflected in the saving of natural resources, the reduction of pollution and the creation of a better living environment.

  3. Efficiency model of Russian banks

    OpenAIRE

    Pavlyuk, Dmitry

    2006-01-01

    The article deals with problems related to the stochastic frontier model of bank efficiency measurement. The model is used to study the efficiency of the banking sector of The Russian Federation. It is based on the stochastic approach both to the efficiency frontier location and to individual bank efficiency values. The model allows estimating bank efficiency values, finding relations with different macro- and microeconomic factors and testing some economic hypotheses.
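
    As a rough illustration of the stochastic frontier approach (not the author's bank model), the sketch below fits a textbook normal/half-normal frontier y = X*beta + v - u by maximum likelihood on simulated data; unit-level efficiency scores would then follow from the conditional distribution of u given the residual (the JLMS estimator). All names, sizes and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Textbook normal/half-normal stochastic frontier, y = X @ beta + v - u, fitted by
# maximum likelihood on simulated data. Variable names, sizes and starting values
# are illustrative assumptions, not the specification used for the Russian banks.
rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + two (log) inputs
beta_true = np.array([1.0, 0.6, 0.3])
v = rng.normal(0.0, 0.15, n)                                 # symmetric noise
u = np.abs(rng.normal(0.0, 0.25, n))                         # one-sided inefficiency
y = X @ beta_true + v - u

def neg_loglik(theta):
    beta, sv, su = theta[:3], np.exp(theta[3]), np.exp(theta[4])
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - X @ beta                                       # composed error v - u
    ll = np.log(2.0 / sigma) + norm.logpdf(eps / sigma) + norm.logcdf(-eps * lam / sigma)
    return -np.sum(ll)

start = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], np.log([0.1, 0.1])])
res = minimize(neg_loglik, start, method="BFGS")
print("frontier coefficients:", np.round(res.x[:3], 3))
# Bank-level efficiency scores would follow from E[u | eps] (the JLMS formula).
```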

  4. High-efficiency CARM

    Energy Technology Data Exchange (ETDEWEB)

    Bratman, V.L.; Kol'chugin, B.D.; Samsonov, S.V.; Volkov, A.B. [Institute of Applied Physics, Nizhny Novgorod (Russian Federation)

    1995-12-31

    The Cyclotron Autoresonance Maser (CARM) is a well-known variety of FEMs. Unlike the ubitron, in which electrons move in a periodical undulator field, in the CARM the particles move along helical trajectories in a uniform magnetic field. Since it is much simpler to generate strong homogeneous magnetic fields than periodical ones, for a relatively low electron energy (≤ 1-3 MeV) the period of the particles' trajectories in the CARM can be sufficiently smaller than in the undulator, in which, moreover, the field decreases rapidly in the transverse direction. In spite of this evident advantage, the number of papers on the CARM is an order of magnitude smaller than on the ubitron, which is apparently caused by the low (not more than 10%) CARM efficiency in experiments. At the same time, ubitrons operating in two rather complicated regimes (trapping and adiabatic deceleration of particles, and combined undulator and reversed guiding fields) yielded efficiencies of 34% and 27%, respectively. The aim of this work is to demonstrate that high efficiency can be reached even for the simplest version of the CARM. In order to reduce sensitivity to the axial velocity spread of the particles, a short interaction length in which the electrons underwent only 4-5 cyclotron oscillations was used in this work. As in earlier experiments, a narrow anode outlet of a field-emission electron gun cut out the "most rectilinear" near-axis part of the electron beam. Additionally, the magnetic field of a small correcting coil compensated spurious electron oscillations pumped by the anode aperture. A kicker in the form of a current-carrying frame sloped to the axis provided a controlled value of rotary velocity at a small additional velocity spread. A simple cavity consisting of a cylindrical waveguide section bounded by a cut-off waveguide on the cathode side and by a Bragg reflector on the collector side was used as the CARM oscillator microwave system.

  5. Late Washing efficiency

    International Nuclear Information System (INIS)

    Morrissey, M.F.

    1992-01-01

    Interim Waste Technology has demonstrated the Late Washing concept on the Experimental Laboratory Filter (ELF) at TNX. In two tests, washing reduced the [NO2-] from 0.08 M to approximately 0.01 M on slurries with 2-year equivalent radiation exposures and 9.5 wt % solids. For both washes, the [NO2-] decreased at rates near theoretical for a constant-volume stirred vessel, indicating approximately 100% washing efficiency. Permeate flux was greater than 0.05 gpm/ft2 for both washes at a transmembrane pressure of 50 psi and a flow velocity of 9 ft/sec
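
    The phrase "rates near theoretical for a constant-volume stirred vessel" presumably refers to ideal dilution, where the nitrite concentration decays exponentially with the number of wash volumes passed through the slurry; under that assumption, the reported drop from 0.08 M to about 0.01 M corresponds to roughly two slurry volumes of wash, as the short check below shows.

```python
import math

# Ideal constant-volume stirred-vessel washing: with wash water added and permeate
# removed at equal rates, concentration follows C = C0 * exp(-V / V0), where V is
# the cumulative wash volume and V0 the slurry volume (an assumption about what
# "theoretical" means here). Wash volumes needed for 0.08 M -> 0.01 M nitrite:
c0, c = 0.08, 0.01
wash_volumes = math.log(c0 / c)
print(f"required wash volumes: {wash_volumes:.2f} x slurry volume")   # about 2.08
```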

  6. Merging {DBMs} Efficiently

    DEFF Research Database (Denmark)

    David, Alexandre

    2005-01-01

    In this paper we present different algorithms to reduce the number of DBMs in federations by merging them. Federations are unions of DBMs and are used to represent non-convex zones. Inclusion checking between DBMs is a limited technique to reduce the size of federations and how to choose some DBMs...... to merge them into a larger one is a combi-natorial problem. We present a number of simple but efficient techniques to avoid searching the combinations while still being able to merge any number of DBMs...

  7. High efficiency positron moderation

    International Nuclear Information System (INIS)

    Taqqu, D.

    1990-01-01

    A new positron moderation scheme is proposed. It makes use of electric and magnetic fields to confine the β+ emitted by a radioactive source, forcing them to slow down within a thin foil. A specific arrangement is described where an intermediary slowed-down beam of energy below 10 keV is produced. By directing it towards a standard moderator, optimal conversion into slow positrons is achieved. This scheme is best applied to short-lived β+ emitters, for which a 25% moderation efficiency can be reached. Within state-of-the-art technology a slow positron source intensity exceeding 2 x 10^10 e+/sec is achievable. (orig.)

  8. Energy efficient data centers

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, William; Xu, Tengfang; Sartor, Dale; Koomey, Jon; Nordman, Bruce; Sezgen, Osman

    2004-03-30

    Data Center facilities, prevalent in many industries and institutions, are essential to California's economy. Energy intensive data centers are crucial to California's industries and many other institutions (such as universities) in the state, and they play an important role in the constantly evolving communications industry. To better understand the impact of the energy requirements and the energy efficiency improvement potential in these facilities, the California Energy Commission's PIER Industrial Program initiated this project with two primary focus areas: first, to characterize current data center electricity use; and second, to develop a research "roadmap" defining and prioritizing possible future public interest research and deployment efforts that would improve energy efficiency. Although there are many opinions concerning the energy intensity of data centers and the aggregate effect on California's electrical power systems, there is very little publicly available information. Through this project, actual energy consumption at its end use was measured in a number of data centers. This benchmark data was documented in case study reports, along with site-specific energy efficiency recommendations. Additionally, other data center energy benchmarks were obtained through synergistic projects, prior PG&E studies, and industry contacts. In total, energy benchmarks for sixteen data centers were obtained. For this project, a broad definition of "data center" was adopted which included internet hosting, corporate, institutional, governmental, educational and other miscellaneous data centers. Typically these facilities require specialized infrastructure to provide high quality power and cooling for IT equipment. All of these data center types were considered in the development of an estimate of the total power consumption in California. Finally, a research "roadmap" was developed

  9. Efficiency improvements in transport

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, J. [Technical Univ. of Denmark. DTU Mechanical Engineering, Kgs. Lyngby (Denmark); Christensen, Linda; Jensen, Thomas C. [Technical Univ. of Denmark. DTU Transport, Kgs. Lyngby (Denmark)

    2012-11-15

    Transport of people, personal belongings and goods in private cars is fundamental to our modern welfare society and economic growth, and has grown steadily over many decades. Motor fuels have been based almost entirely on crude oil for the last century. During the last couple of decades engines built for traditional fuels have become more advanced and efficient; this has reduced fuel consumption by around 40% and emissions by more than 90%. Only in the same time span have we begun to look at alternatives to fossil fuels. Biofuels such as biodiesel, bioethanol, biomethanol and biogas can replace petrol and diesel, and in recent years algae have shown a new potential for diesel fuel. Natural gas is also becoming an interesting fuel due to its large resources worldwide. GTL, CTL and BTL are liquid fuels produced from solid or gaseous sources. GTL and CTL are expensive to produce and not very CO2-friendly, but they are easily introduced and need little investment in infrastructure and vehicles. DME is an excellent fuel for diesel engines. Methanol and DME produced from biomass are among the most CO2-reducing fuels and at the same time the most energy-efficient renewable fuels. Fuel cell vehicles (FCVs) are currently fuelled by hydrogen, but other fuels are also possible. There are, however, several barriers to the implementation of fuel cell vehicles. In particular, a hydrogen infrastructure needs to be developed. Electric vehicles (EVs) have the advantage that energy conversion is centralised at the power plant where it can be done at optimum efficiency and emissions. EVs have to be charged at home, and also away from home when travelling longer distances. With an acceptable fast charging infrastructure at least 85% of the one-car families in Denmark could be potential EV customers. Range improvements resulting from better batteries are expected to create a large increase in the number of EVs in Denmark between 2020 and 2030. The hybrid electric vehicle

  10. Energy Efficient Digital Networks

    Energy Technology Data Exchange (ETDEWEB)

    Lanzisera, Steven [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-01-01

    Digital networks are the foundation of the information services, and play an expanding and indispensable role in our lives, via the Internet, email, mobile phones, etc. However, these networks consume energy, both through the direct energy use of the network interfaces and equipment that comprise the network, and in the effect they have on the operating patterns of devices connected to the network. The purpose of this research was to investigate a variety of technology and policy issues related to the energy use caused by digital networks, and to further develop several energy-efficiency technologies targeted at networks.

  11. The PredictAD project

    DEFF Research Database (Denmark)

    Antila, Kari; Lötjönen, Jyrki; Thurfjell, Lennart

    2013-01-01

    Alzheimer's disease (AD) is the most common cause of dementia, affecting 36 million people worldwide. As the demographic transition in the developed countries progresses towards an older population, the ratio of workers per retirees worsens and the number of patients with age-related illnesses grows ... The objective of the PredictAD project was to find and integrate efficient biomarkers from heterogeneous patient data to make early diagnosis and to monitor the progress of AD in a more efficient, reliable and objective manner. The project focused on discovering biomarkers from biomolecular data ... candidates and implement the framework in software. The results are currently used in several research projects, licensed for commercial use and being tested for clinical use in several trials.

  12. Predicting Dyspnea Inducers by Molecular Topology

    Directory of Open Access Journals (Sweden)

    María Gálvez-Llompart

    2013-01-01

    Full Text Available QSAR based on molecular topology (MT) is an excellent methodology for predicting physicochemical and biological properties of compounds. This approach is applied here to the development of a mathematical model capable of recognizing drugs showing dyspnea as a side effect. Using linear discriminant analysis, a four-variable regression equation was found, enabling predictive rates of about 81% and 73% in the training and test sets of compounds, respectively. These results demonstrate that QSAR-MT is an efficient tool to predict the appearance of dyspnea associated with drug consumption.
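
    The modelling step described above can be mimicked with standard tooling: fit a linear discriminant to a table of topological descriptors and check classification rates on held-out compounds. The sketch below uses random placeholder descriptors and labels; the actual four indices and the 81%/73% rates of the paper are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Generic QSAR-style discriminant sketch: classify compounds as dyspnea inducers or
# not from a few topological descriptors. Descriptors and labels are random
# placeholders; the paper's four indices and coefficients are not reproduced.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))                            # four descriptors per compound
w = np.array([1.2, -0.8, 0.5, 0.9])
y = (X @ w + rng.normal(0, 1.0, 300) > 0).astype(int)    # 1 = dyspnea as side effect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print(f"training accuracy: {lda.score(X_tr, y_tr):.2f}")
print(f"test accuracy:     {lda.score(X_te, y_te):.2f}")
```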

  13. Time-Predictable Virtual Memory

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2016-01-01

    Virtual memory is an important feature of modern computer architectures. For hard real-time systems, memory protection is a particularly interesting feature of virtual memory. However, current memory management units are not designed for time-predictability and therefore cannot be used in such systems. This paper investigates the requirements on virtual memory from the perspective of hard real-time systems and presents the design of a time-predictable memory management unit. Our evaluation shows that the proposed design can be implemented efficiently. The design allows address translation and address range checking in constant time of two clock cycles on a cache miss. This constant time is in strong contrast to the possible cost of a miss in a translation look-aside buffer in traditional virtual memory organizations. Compared to a platform without a memory management unit, these two additional...

  14. Innovation and efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Haustein, H D; Maier, H

    1979-01-01

    Innovation, the process of creation, development, use, and diffusion of a new product or process for new or already-identified needs, has become a topic of concern for both developed and developing countries. Although the causes and motivations for the concern differ widely from country to country, the development of effective innovation policies is a universal problem. The International Institute for Applied Systems Analysis (IIASA) has been concerned with this problem for several years. The main purpose of an innovation is to improve the efficiency of the production unit that adopts the innovation, in comparison with the efficiency of the entire production system. To grasp the nature of the innovation process, its impact on the economic performance of the country, and to identify the appropriate managerial actions to shape and stimulate the innovation process, five different stages through which the innovation process usually runs are outlined. The IIASA has been concerned with supplanting the former approach of spontaneous innovation with a systems analysis approach to help implement new forms of social, innovative learning to be beneficial to mankind. 7 references, 2 figures, 1 table. (SAC)

  15. Using energy efficiently

    International Nuclear Information System (INIS)

    Nipkow, J.; Brunner, C. U.

    2005-01-01

    This comprehensive article discusses the perspectives for reducing electricity consumption in Switzerland. The increase in consumption is discussed that has occurred in spite of the efforts of the Swiss national energy programmes 'Energy 2000' and 'SwissEnergy'. The fact that energy consumption is still on the increase although efficient and economically-viable technology is available is commented on. The authors are of the opinion that the market alone cannot provide a complete solution and that national and international efforts are needed to remedy things. In particular, the external costs that are often not included when estimating costs are stressed. Several technical options available, such as the use of fluorescent lighting, LCD monitors and efficient electric motors, are looked at as are other technologies quoted as being a means of reducing power consumption. Ways of reducing stand-by losses and system optimisation are looked at as are various scenarios for further development and measures that can be implemented in order to reduce power consumption

  16. Multi-directional program efficiency

    DEFF Research Database (Denmark)

    Asmild, Mette; Balezentis, Tomas; Hougaard, Jens Leth

    2016-01-01

    The present paper analyses both managerial and program efficiencies of Lithuanian family farms, in the tradition of Charnes et al. (Manag Sci 27(6):668–697, 1981), but with the important difference that multi-directional efficiency analysis rather than the traditional data envelopment analysis approach is used to estimate efficiency. This enables a consideration of input-specific efficiencies. The study shows clear differences between the efficiency scores on the different inputs as well as between the farm types of crop, livestock and mixed farms respectively. We furthermore find that crop farms have the highest program efficiency, but the lowest managerial efficiency, and that the mixed farms have the lowest program efficiency (yet not the highest managerial efficiency).
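
    For context, the conventional benchmark that multi-directional efficiency analysis is contrasted with is the input-oriented CCR model of Charnes et al., which solves one small linear program per farm. The sketch below implements that baseline DEA score on simulated inputs and outputs; it is not the multi-directional variant used in the paper, and all data are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(inputs, outputs, o):
    """Input-oriented CCR efficiency of unit o.

    min theta  s.t.  sum_j lambda_j * x_j <= theta * x_o,
                     sum_j lambda_j * y_j >= y_o,  lambda >= 0.
    Decision vector: [theta, lambda_1, ..., lambda_n].
    """
    n, m = inputs.shape              # n units, m inputs
    s = outputs.shape[1]             # s outputs
    c = np.zeros(n + 1)
    c[0] = 1.0                       # minimise theta
    # input constraints: lambda^T X[:, i] - theta * x_{o,i} <= 0
    A_in = np.hstack([-inputs[o].reshape(-1, 1), inputs.T])
    b_in = np.zeros(m)
    # output constraints: -lambda^T Y[:, r] <= -y_{o,r}
    A_out = np.hstack([np.zeros((s, 1)), -outputs.T])
    b_out = -outputs[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

rng = np.random.default_rng(4)
X = rng.uniform(1, 10, size=(20, 3))     # 20 farms, 3 inputs (e.g. land, labour, capital)
Y = rng.uniform(1, 10, size=(20, 2))     # 2 outputs (e.g. crop and livestock revenue)
scores = [ccr_input_efficiency(X, Y, o) for o in range(20)]
print(np.round(scores, 3))
```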

  17. Customer Churn Prediction for Broadband Internet Services

    Science.gov (United States)

    Huang, B. Q.; Kechadi, M.-T.; Buckley, B.

    Although churn prediction has been an area of research in the voice branch of telecommunications services, more focused studies on the huge growth area of broadband Internet services are limited. Therefore, this paper presents a new set of features for broadband Internet customer churn prediction, based on Henley segments, broadband usage, dial types, dial-up spend, line information, bill and payment information, and account information. Four prediction techniques (Logistic Regressions, Decision Trees, Multilayer Perceptron Neural Networks and Support Vector Machines) are then applied to customer churn, based on the new features. Finally, an evaluation of the new features and a comparative analysis of the predictors are made for broadband customer churn prediction. The experimental results show that the new features with these four modelling techniques are efficient for customer churn prediction in the broadband service field.
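
    The comparison described above is straightforward to reproduce in outline with standard machine-learning tooling: fit the four classifier families to a customer-feature table and compare them on held-out data. The sketch below does this on simulated features; the Henley-segment, usage and billing variables of the paper are only mimicked by random columns.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Fit the four classifier families mentioned above to a broadband-churn-style table
# and compare them by AUC. The feature matrix is simulated placeholder data.
rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 12))                                              # 12 customer features
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 1, 2000) > 0.5).astype(int)     # 1 = churned
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
    "SVM": SVC(probability=True),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:20s} AUC = {auc:.3f}")
```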

  18. Hybrid Predictive Control for Dynamic Transport Problems

    CERN Document Server

    Núñez, Alfredo A; Cortés, Cristián E

    2013-01-01

    Hybrid Predictive Control for Dynamic Transport Problems develops methods for the design of predictive control strategies for nonlinear-dynamic hybrid discrete-/continuous-variable systems. The methodology is designed for real-time applications, particularly the study of dynamic transport systems. Operational and service policies are considered, as well as cost reduction. The control structure is based on a sound definition of the key variables and their evolution. A flexible objective function able to capture the predictive behaviour of the system variables is described. Coupled with efficient algorithms, mainly drawn from the area of computational intelligence, this is shown to optimize performance indices for real-time applications. The framework of the proposed predictive control methodology is generic and, being able to solve nonlinear mixed-integer optimization problems dynamically, is readily extendable to other industrial processes. The main topics of this book are: ●hybrid predictive control (HPC) ...

  19. ENERGY EFFICIENT DESALINATOR

    Directory of Open Access Journals (Sweden)

    T. A. Ismailov

    2017-01-01

    Full Text Available Objectives. The aim of the research is to develop a thin-film semiconductor thermoelectric heat pump of cylindrical shape for the desalination of sea water. Methods. To improve the efficiency of the desalination device, a special thin-film semiconductor thermoelectric heat pump of cylindrical shape is developed. The construction of the thin-film semiconductor thermoelectric heat pump allows the flow rates of incoming sea water and outflowing fresh water and brine to be equalised by changing the geometric dimensions of the desalinator. The cross-sectional area of the pipeline for incoming sea water is equal to the total area of outflowing fresh water and brine. Results. The use of thin-film semiconductor p- and n-type branches in a thermo-module reduces their electrical resistance virtually to zero and completely eliminates parasitic Joule heat release. The Peltier thermoelectric effect on heating and cooling is completely preserved, bringing the efficiency of the heat pump to almost 100% and improving the energy-saving characteristics of the desalinator as a whole. To further increase the efficiency of the proposed desalinator, thermoelectric modules with radiation can be used as thermoelectric devices. Conclusion. By creating conditions of high rarefaction under which water is converted to steam at 20 °C, which remains cold (as does the condensed distilled water), energy costs can be reduced. In this case, the energy for heating and cooling is not wasted; moreover, sterilisation is also achieved using the ultraviolet radiation of the thermoelectric devices, which, on the one hand, generate electromagnetic ultraviolet radiation and, on the other, cooling. Such devices operate in optimal mode without heat release. The desalination device can be used to produce fresh water and concentrated solutions from any aqueous solutions, including wastewater from industrial

  20. CONTROLLING AND BUSINESS EFFICIENCY

    Directory of Open Access Journals (Sweden)

    Tina Vuko

    2013-02-01

    Full Text Available Managing business successfully in a dynamic environment requires an effective controlling system. Controlling is the process of defining objectives, planning and management control so that every decision maker can act in accordance with agreed objectives. The controlling function as a separate department contributes to business efficiency through ensuring transparency of business results and business processes. Controlling takes place when managers and controllers cooperate. The aim of this paper is to investigate the effectiveness of the controlling function (i.e. the controlling department) in Croatian companies and to address the specific features of the function that contribute significantly to overall business performance. The research is conducted on a sample of companies listed on the Regulated Market of the Zagreb Stock Exchange. A survey is used to collect the data regarding the controlling function, while the financial data necessary for the research are extracted from published financial statements. The results of the research indicate that a controlling department has positive effects on business performance.

  1. Measuring Tax Efficiency

    DEFF Research Database (Denmark)

    Raimondos-Møller, Pascalis; Woodland, Alan D.

    2004-01-01

    This paper introduces an index of tax optimality that measures the distance of some current tax structure from the optimal tax structure in the presence of public goods. In doing so, we derive a [0, 1] number that reveals immediately how far the current tax configuration is from the optimal one and, thereby, the degree of efficiency of a tax system. We call this number the Tax Optimality Index. We show how the basic method can be altered in order to derive a revenue equivalent uniform tax, which measures the size of the public sector. A numerical example is used to illustrate the method developed. JEL Code: H21, H41. Keywords: Tax optimality index, excess burden, distance function. Authors' Affiliations: Raimondos-Møller: Copenhagen Business School, CEPR, CESifo, and EPRU. Woodland: University of Sydney.

  2. Energy efficiency labelling

    Energy Technology Data Exchange (ETDEWEB)

    1978-04-01

    This research assesses the likely effects on UK consumers of the proposed EEC energy-efficiency labeling scheme. Unless (or until) an energy-labeling scheme is introduced, it is impossible to do more than postulate its likely effects on consumer behavior. This report shows that there are indeed significant differences in energy consumption between different brands and models of the same appliance of which consumers are unaware. Further, the report suggests that, if a readily intelligible energy-labeling scheme were introduced, it would provide useful information that consumers currently lack; and that, if this information were successfully presented, it would be used and could have substantial effects in reducing domestic fuel consumption. Therefore, it is recommended that an energy labeling scheme be introduced.

  3. The Efficiency of Freedom

    DEFF Research Database (Denmark)

    Østergaard Madsen, Christian; Kræmmergaard, Pernille

    2015-01-01

    The Danish e-government strategy aims to increase the efficiency of public sector administration by making e-government channels mandatory for citizens by 2015. Although Danish citizens have adopted e-government channels to interact with public authorities, many also keep using traditional channels. Previous studies have analyzed citizens' channel choice in non-mandatory settings, and mostly surrounding a single isolated channel. To cover these gaps we present a mixed method study of citizens' actual use of e-government channels using domestication theory as our framework. Our findings indicate that e-government and traditional channels are often used simultaneously, and citizens' perceptions and previous histories with public authorities influence channel choice. Further, citizens' existing routines related to third-party non-official channels also influence their interaction with public authorities. Moreover, we find ...

  4. Systems Genetics and Transcriptomics of Feed Efficiency in Dairy Cattle

    DEFF Research Database (Denmark)

    Salleh, Suraya Binti Mohamad; Hoglund, J.; Løvendahl, P.

    Feed is the largest variable cost in milk production industries, thus improving feed efficiency will give better use of resources. This project works closely on definitions of feed efficiency in dairy cattle and uses advanced integrated genomics, bioinformatics and systems biology methods linking ... -hydroxybutyrates, triacylglyceride and urea. Feed efficiency measures, namely Residual Feed Intake and Kleiber Ratio based on daily feed or dry matter intake, body weight and milk production records, will also be calculated. The bovine RNAseq gene expression data will be analyzed using statistical-bioinformatics and systems biology ... partitioning and deliver predictive biomarkers for feed efficiency in cattle. This study will also contribute to systems genomic prediction or selection models including the information on potential causal genes / SNPs or their functional modules.

  5. Automotive fuel efficiency

    International Nuclear Information System (INIS)

    Abelson, P.H.

    1992-01-01

    For at least the remainder of this century, the United States faces a growing dependence on imported oil. Costs are substantial, and they will mount. In June 1992, net imports provided nearly 50% of supplies, and their cost was $4.3 billion. The cost of net imports of motor vehicles and parts amounted to $3.0 billion. The two items combined totaled more than the negative trade balance of $6.6 billion. The light-duty highway fleet alone accounted for 38.2% of U.S. oil consumption in 1988. Correspondingly, the fleet was a substantial emitter of air pollutants (NOx, CO, and nonmethane hydrocarbons) and a major source of CO2. The twin problems of oil imports and pollution would be ameliorated if the fuel economy of cars and trucks could be improved and their emissions reduced. In principle, the mileage of US automobiles could be substantially improved. But when purchasing a car, U.S. buyers rank fuel efficiency eighth when making their choice. They are attracted to options that lower mileage. Consumers also tend to prefer large cars over small ones for reasons of safety. Increasingly, buyers are purchasing light trucks and vans that have inferior fuel efficiency. As a result of the above trends, the average mileage of the US automotive fleet has been diminishing. As long as fuel is available at comparatively low prices and there is no federal requirement for better mileage, improvement is unlikely. Moreover, even if improvements were mandated, change would be slow

  6. Efficiency of contactors

    International Nuclear Information System (INIS)

    Orth, D.A.; Graham, F.R.; Holt, D.L.

    1986-01-01

    The Savannah River Plant has two separations plants that began Purex operations in 1954 and 1955 with pump-mix mixer-settlers as contactors to process nuclear fuels. The only changes to the extraction equipment were replacement of most of the mixer-settlers in one plant with larger units in 1959, and the further replacement of the large 1A bank with a bank of rapid-contact centrifugal contactors in 1966. Improved performance of the old units has become highly desirable, and an experimental program is underway. Good contact between the phases, and adequate settling without entrainment of the opposite phase are required for high efficiency operation of the mixer-settlers. Factors that determine efficiency are mixer design, drop size generated, and phase coalescence properties. The original development work and accumulated plant data confirm that the tip speed of a given impeller design determines the throughput capacity and extraction performance. An experimental unit with three full-scale stages has been constructed and is being utilized to test different impeller designs; reduced pumping and better mixing with lower speeds appear to be the key factors for improvement. Decontamination performance of the rapid-contact centrifugal contactors is limited by the number of scrub contacts and the time of contact because of slowly equilibrating fission product species. Where solvent degradation is not a factor, the longer scrub contact of mixer-settlers gives better decontamination than the centrifugals. This kinetic effect can be overcome with long scrub contacts that follow the initial short extraction and short scrub contacts in the centrifugal contactors. A hybrid experimental unit with both rapid contact sections and longer contact scrub sections is under development to establish the degree of improvement that might be attained

  7. Deterministic prediction of surface wind speed variations

    Directory of Open Access Journals (Sweden)

    G. V. Drisya

    2014-11-01

    Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h, with a normalised RMSE (root mean square error) of less than 0.02, and reasonably accurate up to 3 h, with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
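
    A minimal deterministic forecaster in this spirit is the analog (nearest-neighbour) method: embed the recent past in a delay vector, find its closest match in the historical record, and predict the value that followed that match. The sketch below applies this to a synthetic quasi-periodic series and reports the normalised RMSE; the embedding length, horizon and data are assumptions, not the authors' settings.

```python
import numpy as np

def analog_forecast(series, m=6, horizon=1):
    """Deterministic forecast by nearest-neighbour (analog) search in a delay embedding.

    The last m samples form the query vector; its closest match among all earlier
    m-sample windows supplies the value observed `horizon` steps after that window.
    """
    windows = np.lib.stride_tricks.sliding_window_view(series[:-horizon], m)
    targets = series[m - 1 + horizon:]            # value `horizon` steps after each window
    query = series[-m:]
    i = np.argmin(np.linalg.norm(windows - query, axis=1))
    return targets[i]

# Walk-forward test on a synthetic quasi-periodic "wind speed" series; the error
# measure is the RMSE normalised by the range of the data.
rng = np.random.default_rng(6)
t = np.arange(4000)
wind = (8 + 3 * np.sin(2 * np.pi * t / 144)
          + 1.5 * np.sin(2 * np.pi * t / 37)
          + rng.normal(0, 0.3, t.size))

preds, actual = [], []
for end in range(3000, 3500):
    preds.append(analog_forecast(wind[:end], m=6, horizon=1))
    actual.append(wind[end])
preds, actual = np.array(preds), np.array(actual)

nrmse = np.sqrt(np.mean((preds - actual) ** 2)) / (wind.max() - wind.min())
print(f"one-step NRMSE: {nrmse:.3f}")
```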

  8. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  9. Models for efficient integration of solar energy

    DEFF Research Database (Denmark)

    Bacher, Peder

    Efficient operation of energy systems with a substantial amount of renewable energy production is becoming increasingly important. Renewables are dependent on the weather conditions and are therefore by nature volatile and uncontrollable, as opposed to traditional energy production based on combustion. The "smart grid" is a broad term for the technology for addressing the challenge of operating the grid with a large share of renewables. The "smart" part is formed by technologies which model the properties of the systems and efficiently adapt the load to the volatile energy production, by using the available flexibility in the system. In the present thesis, methods related to the operation of solar energy systems and to optimal energy use in buildings are presented. Two approaches for forecasting of solar power based on numerical weather predictions (NWPs) are presented; they are applied to forecast ...

  10. Efficient quantum walk on a quantum processor

    Science.gov (United States)

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.

    2016-01-01

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor. PMID:27146471
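
    The key structural fact exploited here is that circulant matrices are diagonalised by the discrete Fourier transform, so on a classical computer the walk amplitudes exp(-iAt) psi0 on a circulant graph can be computed with two FFTs. The sketch below is such a classical reference simulation for a small cycle graph; it is not the photonic circuit construction of the paper.

```python
import numpy as np

def ctqw_circulant(first_row, psi0, t):
    """Amplitudes exp(-i*A*t) @ psi0 for a symmetric circulant adjacency matrix A.

    For an undirected circulant graph the first row equals the first column, the
    eigenvalues of A are the (real) DFT of that row, and the propagator can be
    applied with a forward and an inverse FFT.
    """
    eigvals = np.fft.fft(first_row).real        # real up to rounding for symmetric circulants
    phases = np.exp(-1j * eigvals * t)
    return np.fft.ifft(phases * np.fft.fft(psi0))

# Example: cycle graph C_8 (vertex i joined to i+1 and i-1), walker starting at vertex 0.
n = 8
first_row = np.zeros(n)
first_row[1] = first_row[-1] = 1.0
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0

psi_t = ctqw_circulant(first_row, psi0, t=1.0)
probs = np.abs(psi_t) ** 2
print("output distribution:", np.round(probs, 3))
print("total probability:  ", round(float(probs.sum()), 6))   # stays 1: the walk is unitary
```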

  11. Efficient Multiparticle Entanglement via Asymmetric Rydberg Blockade

    DEFF Research Database (Denmark)

    Saffman, Mark; Mølmer, Klaus

    2009-01-01

    We present an efficient method for producing N particle entangled states using Rydberg blockade interactions. Optical excitation of Rydberg states that interact weakly, yet have a strong coupling to a second control state, is used to achieve state-dependent qubit rotations in small ensembles. On the basis of quantitative calculations, we predict that an entangled quantum superposition state of eight atoms can be produced with a fidelity of 84% in cold Rb atoms.

  12. Efficient micromagnetics for magnetic storage devices

    Science.gov (United States)

    Escobar Acevedo, Marco Antonio

    Micromagnetics is an important component for advancing the understanding and design of magnetic nanostructures. Numerous existing and prospective magnetic devices rely on micromagnetic analysis, including hard disk drives, magnetic sensors, memories, microwave generators, and magnetic logic. The ability to examine, describe, and predict the magnetic behavior and macroscopic properties of nanoscale magnetic systems is essential for improving existing devices, for progressing in their understanding, and for enabling new technologies. This dissertation describes efficient micromagnetic methods as required for magnetic storage analysis. Their performance and accuracy are demonstrated by studying realistic, complex, and relevant micromagnetic case studies. An efficient methodology for dynamic micromagnetics in large-scale simulations is used to study the writing process in a full-scale model of a magnetic write head. An efficient scheme, tailored for micromagnetics, to find the minimum energy state of a magnetic system is presented; this scheme can be used to calculate hysteresis loops. An efficient scheme, tailored for micromagnetics, to find the minimum energy path between two stable states of a magnetic system is also presented. This minimum energy path is intimately related to thermal stability.

  13. Status of experimental verification of ECCS efficiency

    International Nuclear Information System (INIS)

    Hein, D.; Watzinger, H.

    1978-01-01

    For the emergency cooling system of KWU pressurized water reactors with combined hot and cold leg injection, an outline is given of the status of experiments designed to prove the efficiency of the emergency cooling system. This proof has been established by basic investigations which clarify the physical processes, by 'separate effects tests' to derive and check correlations, and finally by investigations on the PKL test facility, in which a 1300 MWe pressurized water reactor including the primary circuits is simulated. These 'system effects tests' are used to verify computer codes which are ultimately used to make predictions for the reactor. (author)

  14. Where is the Efficient Frontier

    OpenAIRE

    Jing Chen

    2010-01-01

    Tremendous effort has been spent on the construction of reliable efficient frontiers. However, mean-variance efficient portfolios constructed using sample means and covariances often perform poorly out of sample. We prove that the capital market line is the efficient frontier for the risky assets in a financial market with liquid fixed income trading. This unified understanding of the riskless asset as the boundary of risky assets relieves the burden of constructing efficient frontiers in asset a...

  15. An Analysis of Natural T Cell Responses to Predicted Tumor Neoepitopes

    DEFF Research Database (Denmark)

    Bjerregaard, Anne-Mette; Nielsen, Morten; Jurtz, Vanessa Isabell

    2017-01-01

    Personalization of cancer immunotherapies such as therapeutic vaccines and adoptive T-cell therapy may benefit from efficient identification and targeting of patient-specific neoepitopes. However, current neoepitope prediction methods based on sequencing and predictions of epitope processing...

  16. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  17. Audiovisual biofeedback improves motion prediction accuracy.

    Science.gov (United States)

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

    The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR imaging was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26%. AV biofeedback improves prediction accuracy; this would result in increased efficiency of motion management techniques affected by system latencies used in radiotherapy.

  18. Consistent Structural Integrity and Efficient Certification with Analysis. Volume 2: Detailed Report on Innovative Research Developed, Applied, and Commercially Available

    National Research Council Canada - National Science Library

    Collier, Craig

    2005-01-01

    This SBIR report maintains that reliable pretest predictions and efficient certification are suffering from inconsistent structural integrity that is prevalent throughout a project's design maturity...

  19. Carbon and nutrient use efficiencies optimally balance stoichiometric imbalances

    Science.gov (United States)

    Manzoni, Stefano; Čapek, Petr; Lindahl, Björn; Mooshammer, Maria; Richter, Andreas; Šantrůčková, Hana

    2016-04-01

    Decomposer organisms face large stoichiometric imbalances because their food is generally poor in nutrients compared to the decomposer cellular composition. The presence of excess carbon (C) requires adaptations to utilize nutrients effectively while disposing of or investing excess C. As food composition changes, these adaptations lead to variable C- and nutrient-use efficiencies (defined as the ratios of C and nutrients used for growth over the amounts consumed). For organisms to be ecologically competitive, these changes in efficiencies with resource stoichiometry have to balance advantages and disadvantages in an optimal way. We hypothesize that efficiencies are varied so that community growth rate is optimized along stoichiometric gradients of their resources. Building from previous theories, we predict that maximum growth is achieved when C and nutrients are co-limiting, so that the maximum C-use efficiency is reached, and nutrient release is minimized. This optimality principle is expected to be applicable across terrestrial-aquatic borders, to various elements, and at different trophic levels. While the growth rate maximization hypothesis has been evaluated for consumers and predators, in this contribution we test it for terrestrial and aquatic decomposers degrading resources across wide stoichiometry gradients. The optimality hypothesis predicts constant efficiencies at low substrate C:N and C:P, whereas above a stoichiometric threshold, C-use efficiency declines and nitrogen- and phosphorus-use efficiencies increase up to one. Thus, high resource C:N and C:P lead to low C-use efficiency, but effective retention of nitrogen and phosphorus. Predictions are broadly consistent with efficiency trends in decomposer communities across terrestrial and aquatic ecosystems.

  20. Efficiency of competitions

    Science.gov (United States)

    Ben-Naim, E.; Hengartner, N. W.

    2007-08-01

    League competition is investigated using random processes and scaling techniques. In our model, a weak team can upset a strong team with a fixed probability. Teams play an equal number of head-to-head matches and the team with the largest number of wins is declared to be the champion. The total number of games needed for the best team to win the championship with high certainty, T, grows as the cube of the number of teams N, i.e., T ~ N^3. This number can be substantially reduced using preliminary rounds where teams play a small number of games and subsequently, only the top teams advance to the next round. When there are k rounds, the total number of games needed for the best team to emerge as champion, T_k, scales as T_k ~ N^(γ_k), with γ_k = [1 - (2/3)^(k+1)]^(-1). For example, γ_k = 9/5, 27/19, 81/65 for k = 1, 2, 3. These results suggest an algorithm for how to infer the best team using a schedule that is linear in N. We conclude that league format is an ineffective method of determining the best team, and that sequential elimination from the bottom up is fair and efficient.
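
    A brief Monte Carlo sketch of the league model just described (every pair of teams plays the same number of games, the weaker team wins each game with a fixed upset probability, the team with most wins is champion) is given below; the parameter values are arbitrary and only illustrate that a number of games per pair growing with N, i.e. T ~ N^3 in total, keeps the champion probability of the best team high.

        import numpy as np

        rng = np.random.default_rng(0)

        def best_team_wins(n_teams, games_per_pair, upset_prob, n_trials=2000):
            # Fraction of simulated seasons in which team 0 (the strongest) is champion.
            hits = 0
            for _ in range(n_trials):
                wins = np.zeros(n_teams)
                for i in range(n_teams):
                    for j in range(i + 1, n_teams):
                        # team i is stronger than team j; each game is an upset with prob upset_prob
                        upsets = (rng.random(games_per_pair) < upset_prob).sum()
                        wins[j] += upsets
                        wins[i] += games_per_pair - upsets
                hits += wins.argmax() == 0   # ties are resolved in favour of the lowest index
            return hits / n_trials

        for n in (4, 8, 16):
            print(n, best_team_wins(n, games_per_pair=n, upset_prob=0.25))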

  1. Capital market efficiency III

    Directory of Open Access Journals (Sweden)

    Pantelić Svetlana

    2015-01-01

    Full Text Available In 2013 the Nobel Prize in Economic Sciences was awarded to the American economists Eugene Fama, Lars Peter Hansen and Robert Shiller. The monetarists, Fama and Hansen, from the University of Chicago, and the Neo-Keynesian, Shiller, from Yale University, according to the Swedish Royal Academy, won this prestigious prize for their research providing mathematical and economic models to determine (ir)regularities in stock value trends at the stock exchanges. With his colleagues, in the 1960s Fama established that, in the short term, it is extremely difficult to forecast stock prices, given that new information gets embedded in the prices rather quickly. Shiller, however, determined that, although it is almost impossible to predict stock prices for a period of a few days, this is not true for a period of several years. He discovered that stock prices fluctuate much more substantially than corporate dividends, and that the ratio of prices to dividends tends to decline when high, and to grow when low. This pattern applies not only to stocks, but also to bonds and other forms of capital.

  2. Efficient technique for computational design of thermoelectric materials

    Science.gov (United States)

    Núñez-Valdez, Maribel; Allahyari, Zahed; Fan, Tao; Oganov, Artem R.

    2018-01-01

    Efficient thermoelectric materials are highly desirable, and the quest for finding them has intensified as they could be promising alternatives to fossil energy sources. Here we present a general first-principles approach to predict, in multicomponent systems, efficient thermoelectric compounds. The method combines a robust evolutionary algorithm, a Pareto multiobjective optimization, density functional theory and a Boltzmann semi-classical calculation of thermoelectric efficiency. To test the performance and reliability of our overall framework, we use the well-known system Bi2Te3-Sb2Te3.
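
    The approach described combines an evolutionary search with a Pareto multiobjective optimization; the fragment below sketches only the Pareto-filtering step, i.e. extracting the non-dominated candidates from a set of objective scores. The objective pairs are hypothetical placeholders (both objectives are taken as "lower is better"), not output of the actual DFT/Boltzmann workflow.

        import numpy as np

        def pareto_front(points):
            # Indices of non-dominated points, assuming every objective is to be minimised.
            pts = np.asarray(points, dtype=float)
            keep = []
            for i, p in enumerate(pts):
                dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
                if not dominated:
                    keep.append(i)
            return keep

        # Hypothetical candidate scores: (negative power factor, lattice thermal conductivity).
        scores = [(-3.0, 1.2), (-2.5, 0.8), (-1.0, 0.5), (-3.0, 1.5), (-2.0, 0.7)]
        print(pareto_front(scores))   # -> [0, 1, 2, 4]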

  3. Efficiency Evaluation of Energy Systems

    CERN Document Server

    Kanoğlu, Mehmet; Dinçer, İbrahim

    2012-01-01

    Efficiency is one of the most frequently used terms in thermodynamics, and it indicates how well an energy conversion or process is accomplished. Efficiency is also one of the most frequently misused terms in thermodynamics and is often a source of misunderstanding. This is because efficiency is often used without being properly defined first. This book intends to provide a comprehensive evaluation of various efficiencies used for energy transfer and conversion systems including steady-flow energy devices (turbines, compressors, pumps, nozzles, heat exchangers, etc.), various power plants, cogeneration plants, and refrigeration systems. The book will cover first-law (energy based) and second-law (exergy based) efficiencies and provide a comprehensive understanding of their implications. It will help minimize the widespread misuse of efficiencies among students and researchers in energy field by using an intuitive and unified approach for defining efficiencies. The book will be particularly useful for a clear ...

  4. Encapsulation Efficiency, Oscillatory Rheometry

    Directory of Open Access Journals (Sweden)

    Z. Mohammad Hassani

    2014-01-01

    Full Text Available Nanoliposomes are one of the most important polar lipid-based nanocarriers, which can be used for encapsulation of both hydrophilic and hydrophobic active compounds. In this research, nanoliposomes based on lecithin-polyethylene glycol-gamma oryzanol were prepared using a modified thermal method. Only one melting peak was observed in the DSC curve of gamma oryzanol bearing liposomes, which could be attributed to co-crystallization of both compounds. The addition of gamma oryzanol reduced the melting point of the 5% (w/v) lecithin-based liposome from 207°C to 163.2°C. At high levels of lecithin, the increase in liposome particle size (storage at 4°C for two months) was more pronounced, and particle size increased from 61 and 113 to 283 and 384 nanometers, respectively. The encapsulation efficiency of gamma oryzanol increased from 60% to 84.3% with increasing lecithin content. The encapsulation stability of oryzanol in liposomes was determined at different concentrations of lecithin (3, 5, 10, 20% w/v) and different storage times (1, 7, 30 and 60 days). At all concentrations, the encapsulation stability slightly decreased during 30 days of storage. The scanning electron microscopy (SEM) images showed relatively spherical to elliptic particles, which indicated a low extent of particle coalescence. The oscillatory rheometry showed that the loss modulus of the liposomes was higher than the storage modulus, i.e. more liquid-like than solid-like behavior. The samples stored at 25°C for one month showed higher viscoelastic parameters than those stored at 4°C, which was attributed to higher membrane fluidity at 25°C and their final coalescence.

  5. Efficiency of manufacturing processes energy and ecological perspectives

    CERN Document Server

    Li, Wen

    2015-01-01

     This monograph presents a reliable methodology for characterising the energy and eco-efficiency of unit manufacturing processes. The Specific Energy Consumption, SEC, will be identified as the key indicator for the energy efficiency of unit processes.  An empirical approach will be validated on different machine tools and manufacturing processes to depict the relationship between process parameters and energy consumptions. Statistical results and additional validation runs will corroborate the high level of accuracy in predicting the energy consumption. In relation to the eco-efficiency, the value and the associated environmental impacts of  manufacturing processes will also be discussed. The interrelationship between process parameters, process value and the associated environmental impact will be integrated in the evaluation of eco-efficiency. The book concludes with a further investigation of the results in order to develop strategies for further efficiency improvement. The target audience primarily co...

  6. Applied predictive control

    CERN Document Server

    Sunan, Huang; Heng, Lee Tong

    2002-01-01

    The presence of considerable time delays in the dynamics of many industrial processes, leading to difficult problems in the associated closed-loop control systems, is a well-recognized phenomenon. The performance achievable in conventional feedback control systems can be significantly degraded if an industrial process has a relatively large time delay compared with the dominant time constant. Under these circumstances, advanced predictive control is necessary to improve the performance of the control system significantly. The book is a focused treatment of the subject matter, including the fundamentals and some state-of-the-art developments in the field of predictive control. Three main schemes for advanced predictive control are addressed in this book: • Smith Predictive Control; • Generalised Predictive Control; • a form of predictive control based on Finite Spectrum Assignment. A substantial part of the book addresses application issues in predictive control, providing several interesting case studie...

  7. The wind power prediction research based on mind evolutionary algorithm

    Science.gov (United States)

    Zhuang, Ling; Zhao, Xinjian; Ji, Tianming; Miao, Jingwen; Cui, Haina

    2018-04-01

    When wind power is connected to the power grid, its fluctuating, intermittent and random characteristics affect the stability of the power system. Wind power prediction can guarantee power quality and reduce the operating cost of the power system. Several traditional wind power prediction methods have limitations. On this basis, a wind power prediction method based on the Mind Evolutionary Algorithm (MEA) is put forward and a prediction model is provided. The experimental results demonstrate that MEA performs efficiently in terms of wind power prediction. The MEA method has broad prospects for engineering application.

  8. Delphi4LED - From measurements to standardized multi-domain compact models of LED : A new European R&D project for predictive and efficient multi-domain modeling and simulation of LEDs at all integration levels along the SSL supply chain

    NARCIS (Netherlands)

    Bornoff, R.; Hildenbrand, V.; Lungten, S.; Martin, G.; Marty, C.; Poppe, A.; Rencz, M.; Schilders, W.H.A.; Yu, Joan

    2016-01-01

    There are a few bottlenecks hampering efficient design of products at different integration levels of the SSL supply chain. One major issue is that the data sheet information provided about packaged LEDs is usually insufficient and inconsistent among different LED vendors. Many data such as temperature

  9. How health leaders can benefit from predictive analytics.

    Science.gov (United States)

    Giga, Aliyah

    2017-11-01

    Predictive analytics can support a better integrated health system providing continuous, coordinated, and comprehensive person-centred care to those who could benefit most. In addition to dollars saved, using a predictive model in healthcare can generate opportunities for meaningful improvements in efficiency, productivity, costs, and better population health with targeted interventions toward patients at risk.

  10. USE Efficiency -- Universities and Students for Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Melandri, Daniela

    2010-09-15

    Universities and Students for Energy Efficiency is a European project within the Intelligent Energy Programme. It intends to create a common stream for energy efficiency systems in university buildings. Universities and students are proposed as shining examples for energy efficiency solutions and behaviour. The project involves 10 countries and aims to improve energy efficiency in university buildings. Students are the main actors of the project, together with professors and technicians. Acting on students means acting directly on future market players and on the diffusion of public opinion. A strong communication campaign supports the success of the action.

  11. Energy efficiency in Swedish industry

    International Nuclear Information System (INIS)

    Zhang, Shanshan; Lundgren, Tommy; Zhou, Wenchao

    2016-01-01

    This paper assesses energy efficiency in Swedish industry. Using unique firm-level panel data covering the years 2001–2008, the efficiency estimates are obtained for firms in 14 industrial sectors by using data envelopment analysis (DEA). The analysis accounts for multi-output technologies where undesirable outputs are produced alongside with the desirable output. The results show that there was potential to improve energy efficiency in all the sectors and relatively large energy inefficiencies existed in small energy-use industries in the sample period. Also, we assess how the EU ETS, the carbon dioxide (CO_2) tax and the energy tax affect energy efficiency by conducting a second-stage regression analysis. To obtain consistent estimates for the regression model, we apply a modified, input-oriented version of the double bootstrap procedure of Simar and Wilson (2007). The results of the regression analysis reveal that the EU ETS and the CO_2 tax did not have significant influences on energy efficiency in the sample period. However, the energy tax had a positive relation with the energy efficiency. - Highlights: • We use DEA to estimate firm-level energy efficiency in Swedish industry. • We examine impacts of climate and energy policies on energy efficiency. • The analyzed policies are Swedish carbon and energy taxes and the EU ETS. • Carbon tax and EU ETS did not have significant influences on energy efficiency. • The energy tax had a positive relation with energy efficiency.
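
    The efficiency scores in the study come from data envelopment analysis with undesirable outputs and a double-bootstrap correction; those refinements are beyond a short example, but the basic building block, an input-oriented constant-returns-to-scale DEA score obtained from a linear program, can be sketched as follows with entirely hypothetical firm data.

        import numpy as np
        from scipy.optimize import linprog

        def dea_input_efficiency(X, Y, firm):
            # Input-oriented CRS (CCR) efficiency of one firm.
            # X: inputs, shape (n_firms, n_inputs); Y: outputs, shape (n_firms, n_outputs).
            n, m = X.shape
            s = Y.shape[1]
            c = np.r_[1.0, np.zeros(n)]                    # minimise theta; variables are [theta, lambda]
            A_in = np.c_[-X[firm].reshape(m, 1), X.T]      # sum_j lambda_j x_ij <= theta * x_i,firm
            A_out = np.c_[np.zeros((s, 1)), -Y.T]          # sum_j lambda_j y_rj >= y_r,firm
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[firm]],
                          bounds=[(None, None)] + [(0, None)] * n, method="highs")
            return res.x[0]

        # Hypothetical firms: inputs = (energy use, labour), output = value added.
        X = np.array([[10.0, 5.0], [8.0, 7.0], [12.0, 4.0], [15.0, 9.0]])
        Y = np.array([[20.0], [18.0], [21.0], [22.0]])
        print([round(dea_input_efficiency(X, Y, k), 3) for k in range(len(X))])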

  12. GAPIT: genome association and prediction integrated tool.

    Science.gov (United States)

    Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu

    2012-09-15

    Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. http://www.maizegenetics.net/GAPIT. zhiwu.zhang@cornell.edu Supplementary data are available at Bioinformatics online.

  13. Efficient Metropolitan Resource Allocation

    Directory of Open Access Journals (Sweden)

    Richard Arnott

    2016-05-01

    Full Text Available Over the past 30 years Calgary has doubled in size, from a population of 640,645 in 1985 to 1,230,915 in 2015. During that time the City has had five different mayors, hosted the Winter Olympics, and expanded the C-Train from 25 platforms to 45. Calgary’s Metropolitan Area has grown too, with Airdrie, Chestermere, Okotoks and Cochrane growing into full-fledged cities, ripe with inter-urban commuters.* And with changes to provincial legislation in the mid-’90s, rural Rocky View County and the Municipal District of Foothills are now real competitors for residential, commercial and industrial development that in the past would have been considered urban. In this metropolitan system, where people live, their household structure, and their place of work informs the services they need to conduct their daily lives, and directly impacts the spatial character of the City and the broader region. In sum, Metropolitan Calgary is increasingly complex. Calgary and the broader metropolitan area will continue to grow, even with the current economic slowdown. Frictions within Calgary, between the various municipalities in the metropolitan area, and the priorities of other local authorities (such as the School Boards and Alberta Health Services will continue to impact the agendas of local politicians and their ability to answer to the needs of their residents. How resources – whether it is hard infrastructure, affordable housing, classrooms, or hospital beds – are allocated over space and how these resources are funded, directly impacts these relationships. This technical paper provides my perspective as an urban economist on the efficient allocation of resources within a metropolitan system in general, with reference to Calgary where appropriate, and serves as a companion to the previously released “Reflections on Calgary’s Spatial Structure: An Urban Economists Critique of Municipal Planning in Calgary.” It is hoped that the concepts reviewed

  14. Improving Gas Flooding Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Reid Grigg; Robert Svec; Zheng Zeng; Alexander Mikhalin; Yi Lin; Guoqiang Yin; Solomon Ampir; Rashid Kassim

    2008-03-31

    This study focuses on laboratory studies with related analytical and numerical models, as well as work with operators for field tests to enhance our understanding of and capabilities for more efficient enhanced oil recovery (EOR). Much of the work has been performed at reservoir conditions. This includes a bubble chamber and several core flood apparatus developed or modified to measure interfacial tension (IFT), critical micelle concentration (CMC), foam durability, surfactant sorption at reservoir conditions, and pressure and temperature effects on foam systems. Carbon dioxide and N2 systems have been considered, under both miscible and immiscible conditions. The injection of CO2 into brine-saturated sandstone and carbonate core results in brine saturation reduction in the range of 62 to 82% brine in the tests presented in this paper. In each test, over 90% of the reduction occurred with less than 0.5 PV of CO2 injected, with very little additional brine production after 0.5 PV of CO2 injected. Adsorption of all considered surfactants is a significant problem. Most of the effect is reversible, but the amount required for foaming is large in terms of volume and cost for all considered surfactants. Some foams increase resistance to a value beyond what is practical in the reservoir. Sandstone, limestone, and dolomite core samples were tested. Dissolution of reservoir rock and/or cement, especially carbonates, under the acid conditions of CO2 injection is a potential problem in CO2 injection into geological formations. Another potential change in reservoir injectivity and productivity will be the precipitation of dissolved carbonates as the brine flows and pressure decreases. The results of this report provide methods for determining surfactant sorption and can be used to aid in the determination of surfactant requirements for reservoir use in a CO2-foam flood for mobility control. It also provides data to be used to determine rock permeability

  15. Predictable or not predictable? The MOV question

    International Nuclear Information System (INIS)

    Thibault, C.L.; Matzkiw, J.N.; Anderson, J.W.; Kessler, D.W.

    1994-01-01

    Over the past 8 years, the nuclear industry has struggled to understand the dynamic phenomena experienced during motor-operated valve (MOV) operation under differing flow conditions. For some valves and designs, operational functionality has been found to be predictable; for others, unpredictable. Although much has been accomplished over this period of time, especially on modeling valve dynamics, the unpredictability of many valves and designs still exists. A few valve manufacturers are focusing on improving design and fabrication techniques to enhance product reliability and predictability. However, this approach does not address these issues for installed and unpredictable valves. This paper presents some of the more promising techniques that Wyle Laboratories has explored with potential for transforming unpredictable valves into predictable valves and for retrofitting installed MOVs. These techniques include optimized valve tolerancing, surrogated material evaluation, and enhanced surface treatments

  16. Limits of predictability for large-scale urban vehicular mobility

    OpenAIRE

    Li, Yong; Jin, Depeng; Hui, Pan; Wang, Zhaocheng; Chen, Sheng

    2014-01-01

    Key challenges in vehicular transportation and communication systems are understanding vehicular mobility and utilizing mobility prediction, which are vital for both solving the congestion problem and helping to build efficient vehicular communication networking. Most of the existing works mainly focus on designing algorithms for mobility prediction and exploring utilization of these algorithms. However, the crucial questions of how much the mobility is predictable and how the mobility predic...

  17. Urban eco-efficiency and system dynamics modelling

    Energy Technology Data Exchange (ETDEWEB)

    Hradil, P., Email: petr.hradil@vtt.fi

    2012-06-15

    Assessment of urban development is generally based on static models of economic, social or environmental impacts. More advanced dynamic models have been used mostly for prediction of population and employment changes as well as for other macro-economic issues. This feasibility study was arranged to test the potential of system dynamic modelling in assessing eco-efficiency changes during urban development. (orig.)

  18. Market efficiency and the favorite-longshot bias

    DEFF Research Database (Denmark)

    Feddersen, Arne

    2017-01-01

    Considerable attention has been devoted to the presence of the favourite-longshot bias in sports betting markets, where favourites are 'under-bet' with odds that are superior to those predicted under fully efficient markets, while underdogs are 'over-bet' with odds that are even more unfair than those...

  19. Efficient Computation of Casimir Interactions between Arbitrary 3D Objects

    International Nuclear Information System (INIS)

    Reid, M. T. Homer; Rodriguez, Alejandro W.; White, Jacob; Johnson, Steven G.

    2009-01-01

    We introduce an efficient technique for computing Casimir energies and forces between objects of arbitrarily complex 3D geometries. In contrast to other recently developed methods, our technique easily handles nonspheroidal, nonaxisymmetric objects, and objects with sharp corners. Using our new technique, we obtain the first predictions of Casimir interactions in a number of experimentally relevant geometries, including crossed cylinders and tetrahedral nanoparticles.

  20. Scalability and efficiency of genetic algorithms for geometrical applications

    NARCIS (Netherlands)

    Dijk, van S.F.; Thierens, D.; Berg, de M.; Schoenauer, M.

    2000-01-01

    We study the scalability and efficiency of a GA that we developed earlier to solve the practical cartographic problem of labeling a map with point features. We argue that the special characteristics of our GA make it fit in well with theoretical models predicting the optimal population size

  1. Scaling of the burning efficiency for multicomponent fuel pool fires

    DEFF Research Database (Denmark)

    van Gelderen, Laurens; Farahani, Hamed Farmahini; Rangwala, Ali S.

    In order to improve the validity of small-scale crude oil burning experiments, which seem to underestimate the burning efficiency obtained at larger scales, the gasification mechanism of crude oil was studied. Gasification models obtained from the literature were used to make a set of predictions for ... an external heat source to simulate the larger fire size are currently in process ...

  2. Energy Efficient Mobile Operating Systems

    OpenAIRE

    Muhammad Waseem

    2013-01-01

    Energy is an important resource in mobile computers nowadays. It is important to manage energy efficiently so that energy consumption is reduced. Operating system developers have decided to increase the battery lifetime of mobile phones at the operating system level, so designing an energy-efficient mobile operating system is the best way to reduce energy consumption in mobile devices. In this paper, currently used energy-efficient mobile operating systems are discussed and compared. ...

  3. Energy efficiency policies and measures

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-07-01

    This document makes a review of the energy efficiency and demand side management (DSM) policies and measures in European Union countries and Norway in 1999: institutional changes, measures and programmes, budget, taxation, existence of a national DSM programme, national budgets for DSM programmes, electricity pricing: energy/environment tax, national efficiency standards and regulation for new electrical appliances, implementation of Commission directives, efficiency requirements, labelling, fiscal and economic incentives. (J.S.)

  4. Stochastic efficiency: five case studies

    International Nuclear Information System (INIS)

    Proesmans, Karel; Broeck, Christian Van den

    2015-01-01

    Stochastic efficiency is evaluated in five case studies: driven Brownian motion, effusion with a thermo-chemical and thermo-velocity gradient, a quantum dot and a model for information to work conversion. The salient features of stochastic efficiency, including the maximum of the large deviation function at the reversible efficiency, are reproduced. The approach to and extrapolation into the asymptotic time regime are documented. (paper)

  5. Efficient and robust gradient enhanced Kriging emulators.

    Energy Technology Data Exchange (ETDEWEB)

    Dalbey, Keith R.

    2013-08-01

    "Naive" or straight-forward Kriging implementations can often perform poorly in practice. The relevant features of the robustly accurate and efficient Kriging and Gradient Enhanced Kriging (GEK) implementations in the DAKOTA software package are detailed herein. The principal contribution is a novel, effective, and efficient approach to handle ill-conditioning of GEK's "correlation" matrix, RÑ, based on a pivoted Cholesky factorization of Kriging's (not GEK's) correlation matrix, R, which is a small sub-matrix within GEK's RÑ matrix. The approach discards sample points/equations that contribute the least "new" information to RÑ. Since these points contain the least new information, they are the ones which when discarded are both the easiest to predict and provide maximum improvement of RÑ's conditioning. Prior to this work, handling ill-conditioned correlation matrices was a major, perhaps the principal, unsolved challenge necessary for robust and efficient GEK emulators. Numerical results demonstrate that GEK predictions can be significantly more accurate when GEK is allowed to discard points by the presented method. Numerical results also indicate that GEK can be used to break the curse of dimensionality by exploiting inexpensive derivatives (such as those provided by automatic differentiation or adjoint techniques), smoothness in the response being modeled, and adaptive sampling. Development of a suitable adaptive sampling algorithm was beyond the scope of this work; instead adaptive sampling was approximated by omitting the cost of samples discarded by the presented pivoted Cholesky approach.
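
    The DAKOTA implementation itself is not reproduced in the record; the following minimal sketch only illustrates the underlying idea of a pivoted Cholesky factorization that ranks sample points by how much new information they add to a correlation matrix, so that nearly redundant points fall to the end of the pivot order and can be discarded. The tolerance and the toy correlation model are arbitrary assumptions.

        import numpy as np

        def pivoted_cholesky_order(R, drop_tol=1e-10):
            # Greedy pivoted Cholesky of a correlation matrix R.
            # Returns the pivot order (most informative points first) and the position
            # at which the remaining conditional variances drop below drop_tol.
            R = np.array(R, dtype=float)
            n = R.shape[0]
            order = np.arange(n)
            L = np.zeros_like(R)
            d = np.diag(R).copy()                      # remaining (conditional) variances
            cutoff = n
            for k in range(n):
                p = k + np.argmax(d[order[k:]])        # pivot: largest remaining variance
                order[[k, p]] = order[[p, k]]
                piv = order[k]
                if d[piv] < drop_tol:
                    cutoff = k
                    break
                L[piv, k] = np.sqrt(d[piv])
                for j in order[k + 1:]:
                    L[j, k] = (R[j, piv] - L[j, :k] @ L[piv, :k]) / L[piv, k]
                    d[j] -= L[j, k] ** 2
            return order, cutoff

        # Two nearly identical sample points make R ill-conditioned; the duplicate pivots last.
        x = np.array([0.0, 0.5, 1.0, 1.0 + 1e-9])
        R = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.3)
        order, cutoff = pivoted_cholesky_order(R)
        print(order, "keep first", cutoff, "points")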

  6. Oil pipeline energy consumption and efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hooker, J.N.

    1981-01-01

    This report describes an investigation of energy consumption and efficiency of oil pipelines in the US in 1978. It is based on a simulation of the actual movement of oil on a very detailed representation of the pipeline network, and it uses engineering equations to calculate the energy that pipeline pumps must have exerted on the oil to move it in this manner. The efficiencies of pumps and drivers are estimated so as to arrive at the amount of energy consumed at pumping stations. The throughput in each pipeline segment is estimated by distributing each pipeline company's reported oil movements over its segments in proportions predicted by regression equations that show typical throughput and throughput capacity as functions of pipe diameter. The form of the equations is justified by a generalized cost-engineering study of pipelining, and their parameters are estimated using new techniques developed for the purpose. A simplified model of flow scheduling is chosen on the basis of actual energy use data obtained from a few companies. The study yields energy consumption and intensiveness estimates for crude oil trunk lines, crude oil gathering lines and oil products lines, for the nation as well as by state and by pipe diameter. It characterizes the efficiency of typical pipelines of various diameters operating at capacity. Ancillary results include estimates of oil movements by state and by diameter and approximate pipeline capacity utilization nationwide.

  7. Efficiency, sustainability and global warming

    International Nuclear Information System (INIS)

    Woodward, Richard T.; Bishop, Richard C.

    1995-01-01

    Economic analyses of global warming have typically been grounded in the theory of economic efficiency. Such analyses may be inappropriate because many of the underlying concerns about climate change are rooted not in efficiency, but in the intergenerational allocation of economic endowments. A simple economic model is developed which demonstrates that an efficient economy is not necessarily a sustainable economy. This result leads directly to questions about the policy relevance of several economic studies of the issue. We then consider policy alternatives to address global warming in the context of economies with the dual objectives of efficiency and sustainability, with particular attention to carbon-based taxes

  8. An Evolutionary Efficiency Alternative to the Notion of Pareto Efficiency

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2012-01-01

    textabstractThe paper argues that the notion of Pareto efficiency builds on two normative assumptions: the more general consequentialist norm of any efficiency criterion, and the strong no-harm principle of the prohibition of any redistribution during the economic process that hurts at least one

  9. Predictive systems ecology.

    Science.gov (United States)

    Evans, Matthew R; Bithell, Mike; Cornell, Stephen J; Dall, Sasha R X; Díaz, Sandra; Emmott, Stephen; Ernande, Bruno; Grimm, Volker; Hodgson, David J; Lewis, Simon L; Mace, Georgina M; Morecroft, Michael; Moustakas, Aristides; Murphy, Eugene; Newbold, Tim; Norris, K J; Petchey, Owen; Smith, Matthew; Travis, Justin M J; Benton, Tim G

    2013-11-22

    Human societies, and their well-being, depend to a significant extent on the state of the ecosystems that surround them. These ecosystems are changing rapidly usually in response to anthropogenic changes in the environment. To determine the likely impact of environmental change on ecosystems and the best ways to manage them, it would be desirable to be able to predict their future states. We present a proposal to develop the paradigm of predictive systems ecology, explicitly to understand and predict the properties and behaviour of ecological systems. We discuss the necessary and desirable features of predictive systems ecology models. There are places where predictive systems ecology is already being practised and we summarize a range of terrestrial and marine examples. Significant challenges remain but we suggest that ecology would benefit both as a scientific discipline and increase its impact in society if it were to embrace the need to become more predictive.

  10. Wavelet-based prediction of oil prices

    International Nuclear Information System (INIS)

    Yousefi, Shahriar; Weinreich, Ilona; Reinarz, Dominik

    2005-01-01

    This paper illustrates an application of wavelets as a possible vehicle for investigating the issue of market efficiency in futures markets for oil. The paper provides a short introduction to the wavelets and a few interesting wavelet-based contributions in economics and finance are briefly reviewed. A wavelet-based prediction procedure is introduced and market data on crude oil is used to provide forecasts over different forecasting horizons. The results are compared with data from futures markets for oil and the relative performance of this procedure is used to investigate whether futures markets are efficiently priced
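
    The record does not spell out the authors' wavelet-based prediction procedure, so the sketch below is only a generic illustration of the idea: decompose a price series with a discrete wavelet transform, discard the detail coefficients as crude denoising, and extrapolate the smoothed series. The wavelet family, decomposition level, trend window and synthetic data are all assumptions.

        import numpy as np
        import pywt

        def wavelet_smooth_forecast(prices, horizon=5, wavelet="db4", level=3):
            # Denoise with a discrete wavelet transform, then extrapolate a linear trend.
            coeffs = pywt.wavedec(prices, wavelet, level=level)
            coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]   # keep approximation only
            smooth = pywt.waverec(coeffs, wavelet)[: len(prices)]
            window = min(30, len(smooth))
            t = np.arange(window)
            slope, intercept = np.polyfit(t, smooth[-window:], 1)
            return intercept + slope * (t[-1] + np.arange(1, horizon + 1))

        rng = np.random.default_rng(1)
        prices = 40 + 0.05 * np.arange(300) + rng.normal(0, 1.0, 300)   # synthetic "oil price"
        print(wavelet_smooth_forecast(prices))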

  11. From energy efficiency towards resource efficiency within the Ecodesign Directive

    DEFF Research Database (Denmark)

    Bundgaard, Anja Marie; Mosgaard, Mette; Remmen, Arne

    2017-01-01

    The article examines the integration of resource efficiency into the European Ecodesign Directive. The purpose is to analyse the processes and stakeholder interactions which formed the basis for integrating resource efficiency requirements into the implementing measure for vacuum cleaners ... on the most significant environmental impact has often resulted in a focus on energy efficiency in the use phase. Therefore, the Ecodesign Directive should continue to target resource efficiency aspects but also consider environmental aspects with a large improvement potential in addition to the most significant environmental impact. For the introduction of resource efficiency requirements into the Ecodesign Directive, these requirements have to be included in the preparatory study. It is therefore recommended to broaden the scope of the Methodology for the Ecodesign of Energy-related products and the Eco...

  12. Seismology for rockburst prediction.

    CSIR Research Space (South Africa)

    De Beer, W

    2000-02-01

    Full Text Available Project GAP409 presents a method (SOOTHSAY) for predicting larger mining-induced seismic events in gold mines, as well as a pattern recognition algorithm (INDICATOR) for characterising the seismic response of rock to mining and inferring future ... State. Defining the time series of a specific function on a catalogue as a prediction strategy, the algorithm currently has a success rate of 53% and 65%, respectively, of large events claimed as being predicted in these two cases, with uncertainties ...

  13. Predictability of Conversation Partners

    Science.gov (United States)

    Takaguchi, Taro; Nakamura, Mitsuhiro; Sato, Nobuo; Yano, Kazuo; Masuda, Naoki

    2011-08-01

    Recent developments in sensing technologies have enabled us to examine the nature of human social behavior in greater detail. By applying an information-theoretic method to the spatiotemporal data of cell-phone locations, [C. Song et al., Science 327, 1018 (2010)] found that human mobility patterns are remarkably predictable. Inspired by their work, we address a similar predictability question in a different kind of human social activity: conversation events. The predictability in the sequence of one’s conversation partners is defined as the degree to which one’s next conversation partner can be predicted given the current partner. We quantify this predictability by using the mutual information. We examine the predictability of conversation events for each individual using the longitudinal data of face-to-face interactions collected from two company offices in Japan. Each subject wears a name tag equipped with an infrared sensor node, and conversation events are marked when signals are exchanged between sensor nodes in close proximity. We find that the conversation events are predictable to a certain extent; knowing the current partner decreases the uncertainty about the next partner by 28.4% on average. Much of the predictability is explained by long-tailed distributions of interevent intervals. However, a predictability also exists in the data, apart from the contribution of their long-tailed nature. In addition, an individual’s predictability is correlated with the position of the individual in the static social network derived from the data. Individuals confined in a community—in the sense of an abundance of surrounding triangles—tend to have low predictability, and those bridging different communities tend to have high predictability.
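
    The predictability measure described above is based on the mutual information between the current and the next conversation partner; a minimal sketch of that computation on a toy partner sequence is shown below. The reported 28.4% reduction in uncertainty roughly corresponds to this mutual information divided by the entropy of the next-partner distribution; the sequence used here is invented, not taken from the sensor data.

        import numpy as np
        from collections import Counter

        def partner_mutual_information(partners):
            # Mutual information (in bits) between current and next conversation partner.
            pairs = list(zip(partners[:-1], partners[1:]))
            joint = Counter(pairs)
            cur = Counter(p for p, _ in pairs)
            nxt = Counter(q for _, q in pairs)
            n = len(pairs)
            mi = 0.0
            for (p, q), c in joint.items():
                p_pq = c / n
                mi += p_pq * np.log2(p_pq / ((cur[p] / n) * (nxt[q] / n)))
            return mi

        sequence = ["A", "B", "A", "B", "A", "C", "A", "B", "A", "B", "C", "A", "B", "A"]
        print(round(partner_mutual_information(sequence), 3))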

  14. Predictability of Conversation Partners

    Directory of Open Access Journals (Sweden)

    Taro Takaguchi

    2011-09-01

    Full Text Available Recent developments in sensing technologies have enabled us to examine the nature of human social behavior in greater detail. By applying an information-theoretic method to the spatiotemporal data of cell-phone locations, [C. Song et al., Science 327, 1018 (2010)] found that human mobility patterns are remarkably predictable. Inspired by their work, we address a similar predictability question in a different kind of human social activity: conversation events. The predictability in the sequence of one’s conversation partners is defined as the degree to which one’s next conversation partner can be predicted given the current partner. We quantify this predictability by using the mutual information. We examine the predictability of conversation events for each individual using the longitudinal data of face-to-face interactions collected from two company offices in Japan. Each subject wears a name tag equipped with an infrared sensor node, and conversation events are marked when signals are exchanged between sensor nodes in close proximity. We find that the conversation events are predictable to a certain extent; knowing the current partner decreases the uncertainty about the next partner by 28.4% on average. Much of the predictability is explained by long-tailed distributions of interevent intervals. However, a predictability also exists in the data, apart from the contribution of their long-tailed nature. In addition, an individual’s predictability is correlated with the position of the individual in the static social network derived from the data. Individuals confined in a community—in the sense of an abundance of surrounding triangles—tend to have low predictability, and those bridging different communities tend to have high predictability.

  15. Adaptive vehicle motion estimation and prediction

    Science.gov (United States)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.

  16. Genomic Prediction of Barley Hybrid Performance

    Directory of Open Access Journals (Sweden)

    Norman Philipp

    2016-07-01

    Full Text Available Hybrid breeding in barley (Hordeum vulgare L.) offers great opportunities to accelerate the rate of genetic improvement and to boost yield stability. A crucial requirement consists of the efficient selection of superior hybrid combinations. We used comprehensive phenotypic and genomic data from a commercial breeding program with the goal of examining the potential to predict the hybrid performances. The phenotypic data comprised replicated grain yield trials for 385 two-way and 408 three-way hybrids evaluated in up to 47 environments. The parental lines were genotyped using a 3k single nucleotide polymorphism (SNP) array based on an Illumina Infinium assay. We implemented ridge regression best linear unbiased prediction modeling for additive and dominance effects and evaluated the prediction ability using five-fold cross-validation. The prediction ability of hybrid performances based on general combining ability (GCA) effects was moderate, amounting to 0.56 and 0.48 for two- and three-way hybrids, respectively. The potential of GCA-based hybrid prediction requires that both parental components have been evaluated in a hybrid background. This is not necessary for genomic prediction, for which we also observed moderate cross-validated prediction abilities of 0.51 and 0.58 for two- and three-way hybrids, respectively. This exemplifies the potential of genomic prediction in hybrid barley. Interestingly, prediction ability using the two-way hybrids as training population and the three-way hybrids as test population, or vice versa, was low, presumably because of the different genetic makeup of the parental source populations. Consequently, further research is needed to optimize genomic prediction approaches combining different source populations in barley.
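
    Ridge regression BLUP as used in the study estimates marker effects with shrinkage determined from variance components and includes dominance terms; the short sketch below is a simplified stand-in that uses ordinary ridge regression with a fixed penalty, simulated genotypes and phenotypes, and five-fold cross-validation to compute a prediction ability (correlation of observed and predicted values). None of the numbers correspond to the actual barley data.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(42)

        # Hypothetical data: 400 hybrids x 300 SNP markers (coded 0/1/2) and simulated yields.
        n_hybrids, n_markers = 400, 300
        M = rng.integers(0, 3, size=(n_hybrids, n_markers)).astype(float)
        true_effects = rng.normal(0, 0.1, n_markers)
        yields = M @ true_effects + rng.normal(0, 1.0, n_hybrids)

        # Five-fold cross-validated prediction ability = cor(observed, predicted).
        predicted = np.zeros(n_hybrids)
        for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(M):
            model = Ridge(alpha=100.0).fit(M[train], yields[train])   # fixed shrinkage stands in for RR-BLUP
            predicted[test] = model.predict(M[test])

        print("prediction ability:", round(np.corrcoef(yields, predicted)[0, 1], 2))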

  17. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability; that means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.

  18. Predicting scholars' scientific impact.

    Directory of Open Access Journals (Sweden)

    Amin Mazloumian

    Full Text Available We tested the underlying assumption that citation counts are reliable predictors of future success, analyzing complete citation data on the careers of ~150,000 scientists. Our results show that (i) among all citation indicators, the annual citations at the time of prediction is the best predictor of future citations, (ii) future citations of a scientist's published papers can be predicted accurately (r^2 = 0.80 for a 1-year prediction, P < 0.001), but (iii) future citations of future work are hardly predictable.
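
    The r^2 quoted above is a standard coefficient of determination; the sketch below shows how such a figure is computed for a one-year citation forecast, using invented data and a plain linear fit of next-year citations on current annual citations (the indicator the study found most predictive). The value printed has no relation to the study's 0.80.

        import numpy as np

        def r_squared(y_true, y_pred):
            # Coefficient of determination of a prediction.
            y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
            ss_res = np.sum((y_true - y_pred) ** 2)
            ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
            return 1.0 - ss_res / ss_tot

        rng = np.random.default_rng(3)
        annual_now = rng.poisson(30, 500).astype(float)             # toy "annual citations" today
        annual_next = 0.9 * annual_now + rng.normal(0, 3, 500)      # toy citations one year later
        slope, intercept = np.polyfit(annual_now, annual_next, 1)
        print(round(r_squared(annual_next, slope * annual_now + intercept), 2))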

  19. Prediction and probability in sciences

    International Nuclear Information System (INIS)

    Klein, E.; Sacquin, Y.

    1998-01-01

    This book reports the 7 presentations made at the third meeting 'physics and fundamental questions' whose theme was probability and prediction. The concept of probability that was invented to apprehend random phenomena has become an important branch of mathematics and its application range spreads from radioactivity to species evolution via cosmology or the management of very weak risks. The notion of probability is the basis of quantum mechanics and then is bound to the very nature of matter. The 7 topics are: - radioactivity and probability, - statistical and quantum fluctuations, - quantum mechanics as a generalized probability theory, - probability and the irrational efficiency of mathematics, - can we foresee the future of the universe?, - chance, eventuality and necessity in biology, - how to manage weak risks? (A.C.)

  20. 3-dimensional Charge Collection Efficiency

    CERN Document Server

    Kodak, Umut

    2013-01-01

    In this project, we designed a simulation program to create the efficiency map of a 3-dimensional rectangular detector. Efficiency is calculated by observing the collected charge at the output. Using this simulation program, one can observe the inefficient regions not only on the surface of the detector but also in its depth.

  1. World's Most Efficient Solar Cell

    Science.gov (United States)

    World's Most Efficient Solar Cell - National Renewable Energy Laboratory, Spectrolab Set Record. 1999 - A solar cell that can convert sunlight to electricity at a record-setting 32 percent efficiency on Earth. Spectrolab of Sylmar, Calif., "grew" the record-setting solar cell. After

  2. Energy efficiency: 2004 world overview

    International Nuclear Information System (INIS)

    2004-01-01

    Since 1992 the World Energy Council (WEC) has been collaborating with ADEME (Agency for Environment and Energy Efficiency, France) on a joint project 'Energy Efficiency Policies and Indicators'. APERC (Asia Pacific Energy Research Centre) and OLADE (Latin American Energy Organisation) have also participated in the study, which has been monitoring and evaluating energy efficiency policies and their impacts around the world. WEC Member Committees have been providing data and information and ENERDATA (France) has provided technical assistance. This report, published in August 2004, presents and evaluates energy efficiency policies in 63 countries, with a specific focus on five policy measures, for which in-depth case studies were prepared by selected experts: - Minimum energy efficiency standards for household electrical appliances; - Innovative energy efficiency funds; - Voluntary/negotiated agreements on energy efficiency/ CO 2 ; - Local energy information centres; - Packages of measures. In particular, the report identifies the policy measures, which have proven to be the most effective, and can be recommended to countries which have recently embarked on the development and implementation of energy demand management policies. During the past ten years, the Kyoto Protocol and, more recently, emerging concerns about security of supply have raised, both the public and the political profile of energy efficiency. Almost all OECD countries and an increasing number of other countries are implementing energy efficiency policies adapted to their national circumstances. In addition to the market instruments (voluntary agreements, labels, information, etc.), regulatory measures are widely introduced where the market fails to give the right signals (buildings, appliances). In developing countries, energy efficiency is equally important, even if the drivers are different compared to industrialized countries. Reduction of greenhouse gas emissions and local pollution often have a

  3. Motor-operated gearbox efficiency

    Energy Technology Data Exchange (ETDEWEB)

    DeWall, K.G.; Watkins, J.C.; Bramwell, D. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Weidenhamer, G.H.

    1996-12-01

    Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, the authors compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators they tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer.

  4. Motor-operated gearbox efficiency

    International Nuclear Information System (INIS)

    DeWall, K.G.; Watkins, J.C.; Bramwell, D.; Weidenhamer, G.H.

    1996-01-01

    Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, the authors compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators they tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer

  5. Gold, currencies and market efficiency

    Science.gov (United States)

    Kristoufek, Ladislav; Vosvrda, Miloslav

    2016-05-01

    Gold and currency markets form a unique pair with specific interactions and dynamics. We focus on the efficiency ranking of gold markets with respect to the currency of purchase. By utilizing the Efficiency Index (EI) based on fractal dimension, approximate entropy and long-term memory on a wide portfolio of 142 gold price series for different currencies, we construct the efficiency ranking based on the extended EI methodology we provide. Rather unexpected results are uncovered, as the gold prices in major currencies lie among the least efficient ones whereas very minor currencies are among the most efficient ones. We argue that such counterintuitive results can be partly attributed to a unique period of examination (2011-2014) characterized by quantitative easing and rather unorthodox monetary policies, together with the investigated illegal collusion of major foreign exchange market participants, as well as some other factors discussed in some detail.

  6. Motor-operator gearbox efficiency

    International Nuclear Information System (INIS)

    DeWall, K.G.; Watkins, J.C.; Bramwell, D.

    1996-01-01

    Researchers at the Idaho National Engineering Laboratory recently conducted tests investigating the operating efficiency of the power train (gearbox) in motor-operators typically used in nuclear power plants to power motor-operated valves. Actual efficiency ratios were determined from in-line measurements of electric motor torque (input to the operator gearbox) and valve stem torque (output from the gearbox) while the operators were subjected to gradually increasing loads until the electric motor stalled. The testing included parametric studies under reduced voltage and elevated temperature conditions. As part of the analysis of the results, we compared efficiency values determined from testing to the values published by the operator manufacturer and typically used by the industry in calculations for estimating motor-operator capabilities. The operators we tested under load ran at efficiencies lower than the running efficiency (typically 50%) published by the operator manufacturer

  7. The Prediction Value

    NARCIS (Netherlands)

    Koster, M.; Kurz, S.; Lindner, I.; Napel, S.

    2013-01-01

    We introduce the prediction value (PV) as a measure of players’ informational importance in probabilistic TU games. The latter combine a standard TU game and a probability distribution over the set of coalitions. Player i’s prediction value equals the difference between the conditional expectations

  8. Predictability of Stock Returns

    Directory of Open Access Journals (Sweden)

    Ahmet Sekreter

    2017-06-01

    Full Text Available Predictability of stock returns has been shown by empirical studies over time. This article collects the most important theories on forecasting stock returns and investigates the factors affecting the behavior of stock prices and the market as a whole. Estimation of the factors, and the way they are estimated, are the key issues in the predictability of stock returns.

  9. Predicting AD conversion

    DEFF Research Database (Denmark)

    Liu, Yawu; Mattila, Jussi; Ruiz, Miguel �ngel Mu�oz

    2013-01-01

    To compare the accuracies of predicting AD conversion by using a decision support system (PredictAD tool) and current research criteria of prodromal AD as identified by combinations of episodic memory impairment of hippocampal type and visual assessment of medial temporal lobe atrophy (MTA) on MRI...

  10. Predicting Free Recalls

    Science.gov (United States)

    Laming, Donald

    2006-01-01

    This article reports some calculations on free-recall data from B. Murdock and J. Metcalfe (1978), with vocal rehearsal during the presentation of a list. Given the sequence of vocalizations, with the stimuli inserted in their proper places, it is possible to predict the subsequent sequence of recalls--the predictions taking the form of a…

  11. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  12. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
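
    The variance-ratio indicator described above can be illustrated with a short sketch. This is not the report's code; the toy model, Latin hypercube sampler settings and quantile binning are assumptions made only to show how Var(E[Y|X_i])/Var(Y) might be estimated from sampled predictions.

        # Hedged sketch: first-order variance-ratio importance from Latin hypercube samples.
        import numpy as np
        from scipy.stats import qmc

        def toy_model(x):
            # stand-in for the model prediction of interest (assumption)
            return 3.0 * x[:, 0] + np.sin(5.0 * x[:, 1]) + 0.1 * x[:, 2]

        x = qmc.LatinHypercube(d=3, seed=0).random(n=2048)   # inputs scaled to [0, 1]
        y = toy_model(x)

        def variance_ratio(xi, y, bins=16):
            # Var(E[Y | X_i]) / Var(Y), estimated by slicing X_i into quantile bins
            edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
            idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
            cond_means = np.array([y[idx == b].mean() for b in range(bins)])
            return cond_means.var() / y.var()

        for i in range(x.shape[1]):
            print(f"input {i}: variance ratio ~ {variance_ratio(x[:, i], y):.2f}")

    Inputs whose conditional-mean variance accounts for a large share of the total prediction variance are flagged as dominant causes of uncertainty; the report's sequential methodology then validates such subsets independently.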

  13. Ground motion predictions

    Energy Technology Data Exchange (ETDEWEB)

    Loux, P C [Environmental Research Corporation, Alexandria, VA (United States)

    1969-07-01

    Nuclear generated ground motion is defined and then related to the physical parameters that cause it. Techniques employed for prediction of ground motion peak amplitude, frequency spectra and response spectra are explored, with initial emphasis on the analysis of data collected at the Nevada Test Site (NTS). NTS postshot measurements are compared with pre-shot predictions. Applicability of these techniques to new areas, for example, Plowshare sites, must be questioned. Fortunately, the Atomic Energy Commission is sponsoring complementary studies to improve prediction capabilities primarily in new locations outside the NTS region. Some of these are discussed in the light of anomalous seismic behavior, and comparisons are given showing theoretical versus experimental results. In conclusion, current ground motion prediction techniques are applied to events off the NTS. Predictions are compared with measurements for the event Faultless and for the Plowshare events, Gasbuggy, Cabriolet, and Buggy I. (author)

  14. Ground motion predictions

    International Nuclear Information System (INIS)

    Loux, P.C.

    1969-01-01

    Nuclear generated ground motion is defined and then related to the physical parameters that cause it. Techniques employed for prediction of ground motion peak amplitude, frequency spectra and response spectra are explored, with initial emphasis on the analysis of data collected at the Nevada Test Site (NTS). NTS postshot measurements are compared with pre-shot predictions. Applicability of these techniques to new areas, for example, Plowshare sites, must be questioned. Fortunately, the Atomic Energy Commission is sponsoring complementary studies to improve prediction capabilities primarily in new locations outside the NTS region. Some of these are discussed in the light of anomalous seismic behavior, and comparisons are given showing theoretical versus experimental results. In conclusion, current ground motion prediction techniques are applied to events off the NTS. Predictions are compared with measurements for the event Faultless and for the Plowshare events, Gasbuggy, Cabriolet, and Buggy I. (author)

  15. Is Bumblebee Foraging Efficiency Mediated by Morphological Correspondence to Flowers?

    Directory of Open Access Journals (Sweden)

    Ikumi Dohzono

    2011-01-01

    Full Text Available Preference for certain types of flowers in bee species may be an adaptation for efficient foraging, and they often prefer flowers whose shape fits their mouthparts. However, it is unclear whether such flowers are truly beneficial for them. We address this issue by experimentally measuring foraging efficiency of bumblebees, the volume of sucrose solution consumed over handling time (μL/second), using long-tongued Bombus diversus Smith and short-tongued B. honshuensis Tkalcu that visit Clematis stans Siebold et Zuccarini. The corolla tube length of C. stans decreases during a flowering period, and male flowers are longer than female flowers. Long- and short-tongued bumblebees frequently visited longer and shorter flowers, respectively. Based on these preferences, we hypothesized that bumblebee foraging efficiency is higher when visiting flowers that show a good morphological fit between the proboscis and the corolla tube. Foraging efficiency of bumblebees was estimated using flowers for which nectar quality and quantity were controlled, in an experimental enclosure. We show that (1) the foraging efficiency of B. diversus was enhanced when visiting younger, longer flowers, and that (2) the foraging efficiency of B. honshuensis was higher when visiting shorter female flowers. This suggests that morphological correspondence between insects and flowers is important for insect foraging efficiency. However, in contradiction to our prediction, (3) short-tongued bumblebees B. honshuensis sucked nectar more efficiently when visiting younger, longer flowers, and (4) there was no significant difference in the foraging efficiency of B. diversus between flower sexes. These results suggest that morphological fit between the proboscis and the corolla tube is not a sole determinant of foraging efficiency. Bumblebees may adjust their sucking behavior in response to available rewards, and competition over rewards between bumblebee species might change visitation patterns

  16. Mobility Modelling through Trajectory Decomposition and Prediction

    OpenAIRE

    Faghihi, Farbod

    2017-01-01

    The ubiquity of mobile devices with positioning sensors makes it possible to derive a user's location at any time. However, constantly sensing the position in order to track the user's movement is not feasible, either due to the unavailability of sensors, or to computational and storage burdens. In this thesis, we present and evaluate a novel approach for efficiently tracking a user's movement trajectories using decomposition and prediction of trajectories. We facilitate tracking by taking advantage ...

  17. Measuring energy efficiency in economics: Shadow value approach

    Science.gov (United States)

    Khademvatani, Asgar

    For decades, academic scholars and policy makers have commonly applied a simple average measure, energy intensity, for studying energy efficiency. In contrast, we introduce a distinctive marginal measure called the energy shadow value (SV) for modeling energy efficiency, drawing on economic theory. This thesis demonstrates the advantages of the energy SV, conceptually and empirically, over the average measure, recognizing marginal technical energy efficiency and unveiling allocative energy efficiency (energy SV relative to energy price). Using a dual profit function, the study illustrates how treating energy as a quasi-fixed factor (the quasi-fixed approach) offers modeling advantages and is appropriate in developing an explicit model for energy efficiency. We address fallacies and misleading results arising from the average measure and demonstrate the energy SV's advantage in inter- and intra-country energy efficiency comparisons. Energy efficiency dynamics and the determination of efficient allocation of energy use are shown through factors impacting the energy SV: capital, technology, and environmental obligations. To validate the energy SV, we applied a dual restricted cost model using a KLEM dataset for 35 US sectors stretching from 1958 to 2000 and selected a sample of four sectors. Following the empirical results, predicted wedges between the energy price and SV growth indicate a misallocation of energy use in the stone, clay and glass (SCG) and communications (Com) sectors, with more evidence in the SCG than the Com sector, showing overshoot in energy use relative to optimal paths and cost increases from sub-optimal energy use. The results show that energy productivity is a measure of technical efficiency and is void of information on the economic efficiency of energy use. Decomposing the energy SV reveals that energy, capital and technology played key roles in energy SV increases, helping to consider and analyze policy implications of energy efficiency improvement. Applying the marginal measure, we also

  18. Prediction Reweighting for Domain Adaptation.

    Science.gov (United States)

    Shuang Li; Shiji Song; Gao Huang

    2017-07-01

    There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.
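
    A compressed sketch of the reweighting step is given below. It is not the authors' implementation: the choice of logistic regression for both the domain separator and the base classifier, and the logistic squashing of the signed distance into a weight, are illustrative assumptions.

        # Hedged sketch of prediction reweighting via a domain separator.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def reweighted_predictions(X_src, y_src, X_tgt):
            # 1) domain separator: distinguishes source (0) from target (1) samples
            X_dom = np.vstack([X_src, X_tgt])
            y_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
            separator = LogisticRegression(max_iter=1000).fit(X_dom, y_dom)

            # 2) base classifier trained on the source domain only
            base = LogisticRegression(max_iter=1000).fit(X_src, y_src)

            # 3) weight each target prediction by the signed distance to the separator:
            #    target points that look "source-like" (negative distance) get higher trust
            signed_dist = separator.decision_function(X_tgt)
            weights = 1.0 / (1.0 + np.exp(signed_dist))   # assumed squashing, values in (0, 1)
            return base.predict(X_tgt), weights

    Target points lying on the source side of the separator receive weights near one and act as trusted anchors for the subsequent label propagation step described above.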

  19. Rain use efficiency across a precipitation gradient on the Tibetan Plateau

    Science.gov (United States)

    Rain use efficiency (RUE), commonly described as the ratio of aboveground net primary production (ANPP) to mean annual precipitation (MAP), is a critical indicator for predicting potential responses of grassland ecosystems to changing precipitation regimes. However, current understanding on patterns...

  20. Structural prediction in aphasia

    Directory of Open Access Journals (Sweden)

    Tessa Warren

    2015-05-01

    Full Text Available There is considerable evidence that young healthy comprehenders predict the structure of upcoming material, and that their processing is facilitated when they encounter material matching those predictions (e.g., Staub & Clifton, 2006; Yoshida, Dickey & Sturt, 2013). However, less is known about structural prediction in aphasia. There is evidence that lexical prediction may be spared in aphasia (Dickey et al., 2014; Love & Webb, 1977; cf. Mack et al., 2013). However, predictive mechanisms supporting facilitated lexical access may not necessarily support structural facilitation. Given that many people with aphasia (PWA) exhibit syntactic deficits (e.g. Goodglass, 1993), PWA with such impairments may not engage in structural prediction. However, recent evidence suggests that some PWA may indeed predict upcoming structure (Hanne, Burchert, De Bleser, & Vashishth, 2015). Hanne et al. tracked the eyes of PWA (n=8) with sentence-comprehension deficits while they listened to reversible subject-verb-object (SVO) and object-verb-subject (OVS) sentences in German, in a sentence-picture matching task. Hanne et al. manipulated case and number marking to disambiguate the sentences' structure. Gazes to an OVS or SVO picture during the unfolding of a sentence were assumed to indicate prediction of the structure congruent with that picture. According to this measure, the PWA's structural prediction was impaired compared to controls, but they did successfully predict upcoming structure when morphosyntactic cues were strong and unambiguous. Hanne et al.'s visual-world evidence is suggestive, but their forced-choice sentence-picture matching task places tight constraints on possible structural predictions. Clearer evidence of structural prediction would come from paradigms where the content of upcoming material is not as constrained. The current study used a self-paced reading study to examine structural prediction among PWA in less constrained contexts. PWA (n=17 who

  1. Behavioural finance perspectives on Malaysian stock market efficiency

    Directory of Open Access Journals (Sweden)

    Jasman Tuyon

    2016-03-01

    Full Text Available This paper provides historical, theoretical, and empirical syntheses for understanding the rationality of investors, stock prices, and stock market efficiency behaviour through the theoretical lenses of the behavioural finance paradigm. The inquiry is guided by multidisciplinary behavioural-related theories. The analyses employed a long span of Bursa Malaysia stock market data from 1977 to 2014 across the different phases of economic development and market states. The tests confirmed the presence of asymmetric dynamic behaviour in price predictability as well as in risk-return relationships across different market states, risk states and quantile data segments. The efficiency tests show trends of an adaptive pattern of weak market efficiency across various economic phases and market states. Collectively, this evidence lends support to the bounded-adaptive rationality of investors' behaviour, dynamic stock price behaviour, and, accordingly, bounded-adaptive market efficiency.

  2. Energy efficiency standards and innovation

    Science.gov (United States)

    Morrison, Geoff

    2015-01-01

    Van Buskirk et al (2014 Environ. Res. Lett. 9 114010) demonstrate that the purchase price, lifecycle cost and price of improving efficiency (i.e. the incremental price of efficiency gain) decline at an accelerated rate following the adoption of the first energy efficiency standards for five consumer products. The authors show these trends using an experience curve framework (i.e. price/cost versus cumulative production). While the paper does not draw a causal link between standards and declining prices, they provide suggestive evidence using markets in the US and Europe. Below, I discuss the potential implications of the work.

  3. Absorption Efficiency of Receiving Antennas

    DEFF Research Database (Denmark)

    Andersen, Jørgen Bach; Frandsen, Aksel

    2005-01-01

    A receiving antenna with a matched load will always scatter some power. This paper sets an upper and a lower bound on the absorption efficiency (absorbed power over sum of absorbed and scattered powers), which lies between 0 and 100% depending on the directivities of the antenna and scatter...... patterns. It can approach 100% as closely as desired, although in practice this may not be an attractive solution. An example with a small endfire array of dipoles shows an efficiency of 93%. Several examples of small conical horn antennas are also given, and they all have absorption efficiencies less than...

  4. Polish Foundation for Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-31

    The Polish Foundation for Energy Efficiency (FEWE) was established in Poland at the end of 1990. FEWE, as an independent and non-profit organization, has the following objectives: to strive towards an energy efficient national economy, and to show the way and methods by use of which energy efficiency can be increased. The activity of the Foundation covers the entire territory of Poland through three regional centers: in Warsaw, Katowice and Cracow. FEWE employs well-known and experienced specialists within thermal and power engineering, civil engineering, economy and applied sciences. The organizer of the Foundation has been Battelle Memorial Institute - Pacific Northwest Laboratories from the USA.

  5. Prediction of bull fertility.

    Science.gov (United States)

    Utt, Matthew D

    2016-06-01

    Prediction of male fertility is an often sought-after endeavor for many species of domestic animals. This review will primarily focus on providing some examples of dependent and independent variables to stimulate thought about the approach and methodology of identifying the most appropriate of those variables to predict bull (bovine) fertility. Although the list of variables will continue to grow with advancements in science, the principles behind making predictions will likely not change significantly. The basic principle of prediction requires identifying a dependent variable that is an estimate of fertility and an independent variable or variables that may be useful in predicting the fertility estimate. Fertility estimates vary in which parts of the process leading to conception that they infer about and the amount of variation that influences the estimate and the uncertainty thereof. The list of potential independent variables can be divided into competence of sperm based on their performance in bioassays or direct measurement of sperm attributes. A good prediction will use a sample population of bulls that is representative of the population to which an inference will be made. Both dependent and independent variables should have a dynamic range in their values. Careful selection of independent variables includes reasonable measurement repeatability and minimal correlation among variables. Proper estimation and having an appreciation of the degree of uncertainty of dependent and independent variables are crucial for using predictions to make decisions regarding bull fertility. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Prediction of critical heat flux using ANFIS

    Energy Technology Data Exchange (ETDEWEB)

    Zaferanlouei, Salman, E-mail: zaferanlouei@gmail.co [Nuclear Engineering and Physics Department, Faculty of Nuclear Engineering, Center of Excellence in Nuclear Engineering, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Avenue, Tehran (Iran, Islamic Republic of); Rostamifard, Dariush; Setayeshi, Saeed [Nuclear Engineering and Physics Department, Faculty of Nuclear Engineering, Center of Excellence in Nuclear Engineering, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Avenue, Tehran (Iran, Islamic Republic of)

    2010-06-15

    The prediction of Critical Heat Flux (CHF) is essential for water-cooled nuclear reactors since it is an important parameter for the economic efficiency and safety of nuclear power plants. Therefore, in this study a new flexible tool based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) is developed to predict CHF. The model is trained and tested using a set of available published field data. The CHF values predicted by the ANFIS model are acceptable compared with the other prediction methods. We also improve the previously proposed ANN model to avoid overfitting, and the new ANN test errors are then compared with the ANFIS model test errors. It is found that the ANFIS model, with root mean square (RMS) test errors of 4.79%, 5.04% and 11.39% in fixed inlet conditions, local conditions and fixed outlet conditions, respectively, outperforms the MLP Neural Network in fixed inlet and outlet conditions, while ANFIS also gives acceptable results for predicting CHF in fixed local conditions.

  7. Prediction of critical heat flux using ANFIS

    International Nuclear Information System (INIS)

    Zaferanlouei, Salman; Rostamifard, Dariush; Setayeshi, Saeed

    2010-01-01

    The prediction of Critical Heat Flux (CHF) is essential for water-cooled nuclear reactors since it is an important parameter for the economic efficiency and safety of nuclear power plants. Therefore, in this study a new flexible tool based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) is developed to predict CHF. The model is trained and tested using a set of available published field data. The CHF values predicted by the ANFIS model are acceptable compared with the other prediction methods. We also improve the previously proposed ANN model to avoid overfitting, and the new ANN test errors are then compared with the ANFIS model test errors. It is found that the ANFIS model, with root mean square (RMS) test errors of 4.79%, 5.04% and 11.39% in fixed inlet conditions, local conditions and fixed outlet conditions, respectively, outperforms the MLP Neural Network in fixed inlet and outlet conditions, while ANFIS also gives acceptable results for predicting CHF in fixed local conditions.

  8. Predicting Hydrologic Function With Aquatic Gene Fragments

    Science.gov (United States)

    Good, S. P.; URycki, D. R.; Crump, B. C.

    2018-03-01

    Recent advances in microbiology techniques, such as genetic sequencing, allow for rapid and cost-effective collection of large quantities of genetic information carried within water samples. Here we posit that the unique composition of aquatic DNA material within a water sample contains relevant information about hydrologic function at multiple temporal scales. In this study, machine learning was used to develop discharge prediction models trained on the relative abundance of bacterial taxa classified into operational taxonomic units (OTUs) based on 16S rRNA gene sequences from six large arctic rivers. We term this approach "genohydrology," and show that OTU relative abundances can be used to predict river discharge at monthly and longer timescales. Based on a single DNA sample from each river, the average Nash-Sutcliffe efficiency (NSE) for predicted mean monthly discharge values throughout the year was 0.84, while the NSE for predicted discharge values across different return intervals was 0.67. These are considerable improvements over predictions based only on the area-scaled mean specific discharge of five similar rivers, which had average NSE values of 0.64 and -0.32 for seasonal and recurrence interval discharge values, respectively. The genohydrology approach demonstrates that genetic diversity within the aquatic microbiome is a large and underutilized data resource with benefits for prediction of hydrologic function.
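
    The Nash-Sutcliffe efficiency (NSE) used as the skill score above has a standard definition that is easy to state explicitly; the snippet below is a generic illustration and is unrelated to the authors' models or data.

        # Nash-Sutcliffe efficiency: 1 - SSE(predictions) / SSE(long-term-mean benchmark)
        import numpy as np

        def nash_sutcliffe(observed, predicted):
            observed = np.asarray(observed, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

        # NSE = 1 is a perfect fit; NSE = 0 means no better than predicting the mean,
        # and negative values (such as the -0.32 reported above) mean worse than the mean.
        print(nash_sutcliffe([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))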

  9. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  10. Prediction ranges. Annual review

    Energy Technology Data Exchange (ETDEWEB)

    Parker, J.C.; Tharp, W.H.; Spiro, P.S.; Keng, K.; Angastiniotis, M.; Hachey, L.T.

    1988-01-01

    Prediction ranges equip the planner with one more tool for improved assessment of the outcome of a course of action. One of their major uses is in financial evaluations, where corporate policy requires the performance of uncertainty analysis for large projects. This report gives an overview of the uses of prediction ranges, with examples; and risks and uncertainties in growth, inflation, and interest and exchange rates. Prediction ranges and standard deviations of 80% and 50% probability are given for various economic indicators in Ontario, Canada, and the USA, as well as for foreign exchange rates and Ontario Hydro interest rates. An explanatory note on probability is also included. 23 tabs.

  11. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
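
    To give a feel for how sampled hourly wind speeds translate into available power, a minimal sketch follows. The Rayleigh speed distribution, rotor area and power coefficient are assumptions for illustration; they are not parameters of the Goldstone models described above.

        # Hedged sketch: hourly wind-speed samples converted to available power.
        import numpy as np

        rng = np.random.default_rng(42)
        mean_speed = 6.0                                     # m/s, assumed site mean
        # one day of uncorrelated hourly samples from an assumed Rayleigh distribution
        speeds = rng.rayleigh(scale=mean_speed * np.sqrt(2.0 / np.pi), size=24)

        rho, area, cp = 1.225, 100.0, 0.35                   # air density (kg/m^3), rotor area (m^2), power coefficient
        power_w = 0.5 * rho * area * cp * speeds ** 3        # P = 1/2 * rho * A * Cp * v^3
        print(f"mean hourly power ~ {power_w.mean() / 1e3:.1f} kW")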

  12. Protein Sorting Prediction

    DEFF Research Database (Denmark)

    Nielsen, Henrik

    2017-01-01

    Many computational methods are available for predicting protein sorting in bacteria. When comparing them, it is important to know that they can be grouped into three fundamentally different approaches: signal-based, global-property-based and homology-based prediction. In this chapter, the strengths and drawbacks of each of these approaches are described through many examples of methods that predict secretion, integration into membranes, or subcellular locations in general. The aim of this chapter is to provide a user-level introduction to the field with a minimum of computational theory.

  13. 'Red Flag' Predictions

    DEFF Research Database (Denmark)

    Hallin, Carina Antonia; Andersen, Torben Juul; Tveterås, Sigbjørn

    This conceptual article introduces a new way to predict firm performance based on aggregation of sensing among frontline employees about changes in operational capabilities to update strategic action plans and generate innovations. We frame the approach in the context of first- and second-generation prediction markets and outline its unique features as a third-generation prediction market. It is argued that frontline employees gain deep insights when they execute operational activities on an ongoing basis in the organization. The experiential learning from close interaction with internal and external ...

  14. Towards Predictive Association Theories

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Tsivintzelis, Ioannis; Michelsen, Michael Locht

    2011-01-01

    Association equations of state like SAFT, CPA and NRHB have been previously applied to many complex mixtures. In this work we focus on two of these models, the CPA and the NRHB equations of state and the emphasis is on the analysis of their predictive capabilities for a wide range of applications....... We use the term predictive in two situations: (i) with no use of binary interaction parameters, and (ii) multicomponent calculations using binary interaction parameters based solely on binary data. It is shown that the CPA equation of state can satisfactorily predict CO2–water–glycols–alkanes VLE...

  15. The CRRES high efficiency solar panel

    International Nuclear Information System (INIS)

    Trumble, T.M.

    1991-01-01

    This paper reports on the High Efficiency Solar Panel (HESP) experiment, which is to provide both engineering and scientific information concerning the effects of space radiation on advanced gallium arsenide (GaAs) solar cells. The HESP experiment consists of an ambient panel, an annealing panel and a programmable load. This experiment, in conjunction with the radiation measurement experiments aboard the CRRES, provides the first opportunity to simultaneously measure the trapped radiation belts and the results of radiation damage to solar cells. The engineering information will result in a design guide for selecting the optimum solar array characteristics for different orbits and different lifetimes. The scientific information will provide both correlation of laboratory damage effects to space damage effects and a better model for predicting effective solar cell panel lifetimes

  16. Learning receptive fields using predictive feedback.

    Science.gov (United States)

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
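
    A toy version of the predictive-feedback loop (feedback carries the prediction, feedforward carries the residual error) can be written in a few lines. The linear generative model, least-squares inference step and learning rate below are assumptions for illustration only; the paper itself uses a matching-pursuit-based algorithm.

        # Hedged sketch of predictive coding: feedback carries the prediction,
        # feedforward carries the residual, and the basis is updated to shrink the residual.
        import numpy as np

        rng = np.random.default_rng(1)
        patches = rng.standard_normal((500, 64))        # stand-in for natural image patches
        W = rng.standard_normal((64, 16)) * 0.1         # basis (columns ~ receptive fields)
        lr = 0.01

        for x in patches:
            r = np.linalg.lstsq(W, x, rcond=None)[0]    # higher-level response
            prediction = W @ r                          # feedback: predicted lower-level activity
            residual = x - prediction                   # feedforward: prediction error
            W += lr * np.outer(residual, r)             # adapt the basis to reduce future error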

  17. Efficiency principles of consulting entrepreneurship

    OpenAIRE

    Moroz Yustina S.; Drozdov Igor N.

    2015-01-01

    The article reviews the primary goals and problems of consulting entrepreneurship. The principles defining efficiency of entrepreneurship in the field of consulting are generalized. The special attention is given to the importance of ethical principles of conducting consulting entrepreneurship activity.

  18. High-efficiency wind turbine

    Science.gov (United States)

    Hein, L. A.; Myers, W. N.

    1980-01-01

    Vertical axis wind turbine incorporates several unique features to extract more energy from wind increasing efficiency 20% over conventional propeller driven units. System also features devices that utilize solar energy or chimney effluents during periods of no wind.

  19. Planning for resource efficient cities

    DEFF Research Database (Denmark)

    Fertner, Christian; Groth, Niels Boje

    2016-01-01

    development from energy consumption are crucial for a city’s future vulnerability and resilience against changes in general resource availability. The challenge gets further complex, as resource and energy efficiency in a city is deeply interwoven with other aspects of urban development such as social...... structures and the geographical context. As cities are the main consumer of energy and resources, they are both problem and solution to tackle issues of energy efficiency and saving. Cities have been committed to this agenda, especially to meet the national and international energy targets. Increasingly......, cities act as entrepreneurs of new energy solutions acknowledging that efficient monitoring of energy and climate policies has become important to urban branding and competitiveness. This special issue presents findings from the European FP7 project ‘Planning for Energy Efficient Cities’ (PLEEC...

  20. Is energy efficiency environmentally friendly?

    Energy Technology Data Exchange (ETDEWEB)

    Herring, H. [Open University, Milton Keynes (United Kingdom). Energy and Environment Research Unit

    2000-07-01

    The paper challenges the view that improving the efficiency of energy use will lead to a reduction in national energy consumption, and hence is an effective policy for reducing CO2 emissions. It argues that improving energy efficiency lowers the implicit price of energy and hence makes its use more affordable, thus leading to greater use. The paper presents the views of economists, as well as green critics of 'efficiency' and the 'dematerialization' thesis. It argues that a more effective CO2 policy is to concentrate on shifting to non-fossil fuels, like renewables, subsidized through a carbon tax. Ultimately, what is needed to limit energy consumption is energy conservation, not energy efficiency. 44 refs.

  1. Energy Efficient Hydraulic Hybrid Drives

    OpenAIRE

    Rydberg, Karl-Erik

    2009-01-01

    Energy efficiency of propulsion systems for cars, trucks and construction machinery has become one of the most important topics in today's mobile system design, mainly because of increased fuel costs and new regulations on engine emissions, which are needed to protect the environment. To meet the increased requirements for higher efficiency and better functionality, components and systems have been developed over the years. For the last ten years the development of hybrid systems can be divid...

  2. Energy Efficient Drivepower: An Overview.

    Energy Technology Data Exchange (ETDEWEB)

    Ula, Sadrul; Birnbaum, Larry E.; Jordan, Don

    1993-05-01

    This report examines energy efficiency in drivepower systems. Only systems where the prime movers are electrical motors are discussed. A systems approach is used to examine all major aspects of drivepower, including motors, controls, electrical tune-ups, mechanical efficiency, maintenance, and management. Potential annual savings to the US society of $25 to $50 billion are indicated. The report was written for readers with a semi-technical background.

  3. Energy efficiency: utopia or reality?

    International Nuclear Information System (INIS)

    Anon.

    2006-01-01

    In its 2006 address, the World Energy Council (WEC) analyzes the role of energy efficiency in the energy life cycle. In spite of the different objectives pursued by developing and developed countries, implementing a world energy-efficient economy is a challenge that can be met through cooperation. The WEC is an ideal forum for the exchange of information and experience. (A.L.B.)

  4. Is the stock market efficient?

    Science.gov (United States)

    Malkiel, B G

    1989-03-10

    A stock market is said to be efficient if it accurately reflects all relevant information in determining security prices. Critics have asserted that share prices are far too volatile to be explained by changes in objective economic events, the October 1987 crash being a case in point. Although the evidence is not unambiguous, reports of the death of the efficient market hypothesis appear premature.

  5. Efficient computation of argumentation semantics

    CERN Document Server

    Liao, Beishui

    2013-01-01

    Efficient Computation of Argumentation Semantics addresses argumentation semantics and systems, introducing readers to cutting-edge decomposition methods that drive increasingly efficient logic computation in AI and intelligent systems. Such complex and distributed systems are increasingly used in the automation and transportation systems field, and particularly autonomous systems, as well as more generic intelligent computation research. The Series in Intelligent Systems publishes titles that cover state-of-the-art knowledge and the latest advances in research and development in intelligen

  6. Energy efficiency: potentials and profits

    International Nuclear Information System (INIS)

    Sigaud, J.B.

    2011-01-01

    In this work, Jean-Marie Bouchereau (ADEME) presented a review of the energy efficiency profits in France during the last 20 years and the prospects from now to 2020. Then, Geoffrey Woodward (TOTAL) and Sebastien Huchette (AXENS) recalled the stakes involved in the energy efficiency of the upstream and downstream sectors respectively and presented examples of advanced approaches illustrated by concrete cases of application. (O.M.)

  7. Energy Efficiency in Swimming Facilities

    OpenAIRE

    Kampel, Wolfgang

    2015-01-01

    High and increasing energy use is a worldwide issue that has been reported and documented in the literature. Various studies have been performed on renewable energy and energy efficiency to counteract this trend. Although using renewable energy sources reduces pollution, improvements in energy efficiency reduce total energy use and protect the environment from further damage. In Europe, 40 % of the total energy use is linked to buildings, making them a main objective concerning...

  8. Transformer Efficiency Assessment - Okinawa, Japan

    Energy Technology Data Exchange (ETDEWEB)

    Thomas L. Baldwin; Robert J. Turk; Kurt S. Myers; Jake P. Gentle; Jason W. Bush

    2012-08-01

    The US Army Engineering & Support Center, Huntsville (USAESCH), and the US Marine Corps Base (MCB), Okinawa, Japan retained Idaho National Laboratory (INL) to conduct a Transformer Efficiency Assessment of “key” transformers located at multiple military bases in Okinawa, Japan. The purpose of this assessment is to support the Marine Corps Base, Okinawa in evaluating medium voltage distribution transformers for potential efficiency upgrades. The original scope of work included the MCB providing actual transformer nameplate data, manufacturer’s factory test sheets, electrical system data (kWh), demand data (kWd), power factor data, and electricity cost data. Unfortunately, the MCB’s actual data was not available, making it necessary to de-scope the original assessment. Note: Any similar nameplate data, photos of similar transformer nameplates, and basic electrical details from one-line drawings (provided by MCB) are not a replacement for actual load loss test data. It is recommended that load measurements be performed on the high and low sides of transformers to better quantify actual load losses, demand data, and power factor data. We also recommend that actual data, when available, be inserted by MCB Okinawa where assumptions have been made and the LCC analysis then updated. This report covers a generalized assessment of modern U.S. transformers in three efficiency categories: Low-Level, Medium-Level, and High-Level efficiency.

  9. Transformer Efficiency Assessment - Okinawa, Japan

    Energy Technology Data Exchange (ETDEWEB)

    Thomas L. Baldwin; Robert J. Turk; Kurt S. Myers; Jake P. Gentle; Jason W. Bush

    2012-05-01

    The US Army Engineering & Support Center, Huntsville (USAESCH), and the US Marine Corps Base (MCB), Okinawa, Japan retained Idaho National Laboratory (INL) to conduct a Transformer Efficiency Assessment of “key” transformers located at multiple military bases in Okinawa, Japan. The purpose of this assessment is to support the Marine Corps Base, Okinawa in evaluating medium voltage distribution transformers for potential efficiency upgrades. The original scope of work included the MCB providing actual transformer nameplate data, manufacturer’s factory test sheets, electrical system data (kWh), demand data (kWd), power factor data, and electricity cost data. Unfortunately, the MCB’s actual data was not available, making it necessary to de-scope the original assessment. Note: Any similar nameplate data, photos of similar transformer nameplates, and basic electrical details from one-line drawings (provided by MCB) are not a replacement for actual load loss test data. It is recommended that load measurements be performed on the high and low sides of transformers to better quantify actual load losses, demand data, and power factor data. We also recommend that actual data, when available, be inserted by MCB Okinawa where assumptions have been made and the LCC analysis then updated. This report covers a generalized assessment of modern U.S. transformers in three efficiency categories: Low-Level, Medium-Level, and High-Level efficiency.

  10. Filtering and prediction

    CERN Document Server

    Fristedt, B; Krylov, N

    2007-01-01

    Filtering and prediction is about observing moving objects when the observations are corrupted by random errors. The main focus is then on filtering out the errors and extracting from the observations the most precise information about the object, which itself may or may not be moving in a somewhat random fashion. Next comes the prediction step where, using information about the past behavior of the object, one tries to predict its future path. The first three chapters of the book deal with discrete probability spaces, random variables, conditioning, Markov chains, and filtering of discrete Markov chains. The next three chapters deal with the more sophisticated notions of conditioning in nondiscrete situations, filtering of continuous-space Markov chains, and of Wiener process. Filtering and prediction of stationary sequences is discussed in the last two chapters. The authors believe that they have succeeded in presenting necessary ideas in an elementary manner without sacrificing the rigor too much. Such rig...

  11. CMAQ predicted concentration files

    Data.gov (United States)

    U.S. Environmental Protection Agency — CMAQ predicted ozone. This dataset is associated with the following publication: Gantt, B., G. Sarwar, J. Xing, H. Simon, D. Schwede, B. Hutzell, R. Mathur, and A....

  12. Methane prediction in collieries

    CSIR Research Space (South Africa)

    Creedy, DP

    1999-06-01

    Full Text Available The primary aim of the project was to assess the current status of research on methane emission prediction for collieries in South Africa in comparison with methods used and advances achieved elsewhere in the world....

  13. Climate Prediction Center - Outlooks

    Science.gov (United States)


  14. CMAQ predicted concentration files

    Data.gov (United States)

    U.S. Environmental Protection Agency — model predicted concentrations. This dataset is associated with the following publication: Muñiz-Unamunzaga, M., R. Borge, G. Sarwar, B. Gantt, D. de la Paz, C....

  15. Comparing Spatial Predictions

    KAUST Repository

    Hering, Amanda S.; Genton, Marc G.

    2011-01-01

    Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial predictions produced by two competing models exists on average across the entire spatial domain of interest. The null hypothesis

  16. Genomic prediction using subsampling

    OpenAIRE

    Xavier, Alencar; Xu, Shizhong; Muir, William; Rainey, Katy Martin

    2017-01-01

    Background Genome-wide assisted selection is a critical tool for the genetic improvement of plants and animals. Whole-genome regression models in Bayesian framework represent the main family of prediction methods. Fitting such models with a large number of observations involves a prohibitive computational burden. We propose the use of subsampling bootstrap Markov chain in genomic prediction. Such method consists of fitting whole-genome regression models by subsampling observations in each rou...

  17. Predicting Online Purchasing Behavior

    OpenAIRE

    W.R BUCKINX; D. VAN DEN POEL

    2003-01-01

    This empirical study investigates the contribution of different types of predictors to the purchasing behaviour at an online store. We use logit modelling to predict whether or not a purchase is made during the next visit to the website using both forward and backward variable-selection techniques, as well as Furnival and Wilson’s global score search algorithm to find the best subset of predictors. We contribute to the literature by using variables from four different categories in predicting...
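
    A condensed sketch of the modelling strategy described above (a logit model combined with automatic subset selection) is shown below. The synthetic data and the use of scikit-learn's sequential selector in place of Furnival and Wilson's score search are assumptions, not the study's actual setup.

        # Hedged sketch: forward variable selection feeding a logit purchase model.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=2000, n_features=20, n_informative=6, random_state=0)

        logit = LogisticRegression(max_iter=1000)
        forward = SequentialFeatureSelector(logit, n_features_to_select=6, direction="forward")
        forward.fit(X, y)

        logit.fit(X[:, forward.get_support()], y)       # refit the logit model on the selected subset
        print("selected columns:", forward.get_support(indices=True))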

  18. Empirical Flutter Prediction Method.

    Science.gov (United States)

    1988-03-05

    been used in this way to discover species or subspecies of animals, and to discover different types of voter or consumer requiring different persuasions...respect to behavior or performance or response variables. Once this was done, corresponding clusters might be sought among descriptive or predictive or...jump in a response. The first sort of usage does not apply to the flutter prediction problem. Here the types of behavior are the different kinds of

  19. Stuck pipe prediction

    KAUST Repository

    Alzahrani, Majed

    2016-03-10

    Disclosed are various embodiments for a prediction application to predict a stuck pipe. A linear regression model is generated from hook load readings at corresponding bit depths. A current hook load reading at a current bit depth is compared with a normal hook load reading from the linear regression model. A current hook load greater than a normal hook load for a given bit depth indicates the likelihood of a stuck pipe.
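
    The disclosed logic reduces to a one-variable regression and a threshold check. The sketch below uses synthetic hook load readings and an assumed tolerance margin; it illustrates the idea rather than the patented implementation.

        # Hedged sketch: flag a possible stuck pipe when the current hook load
        # sits well above the trend fitted to earlier (bit depth, hook load) readings.
        import numpy as np

        bit_depth = np.array([1000.0, 1200.0, 1400.0, 1600.0, 1800.0])   # m
        hook_load = np.array([80.0, 86.0, 91.0, 97.0, 103.0])            # tonnes

        slope, intercept = np.polyfit(bit_depth, hook_load, 1)            # linear regression model

        def stuck_pipe_likely(current_depth, current_load, tolerance=5.0):
            normal_load = slope * current_depth + intercept                # expected ("normal") hook load
            return current_load > normal_load + tolerance                  # assumed margin

        print(stuck_pipe_likely(2000.0, 120.0))   # True: reading well above the fitted trend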

  20. Stuck pipe prediction

    KAUST Repository

    Alzahrani, Majed; Alsolami, Fawaz; Chikalov, Igor; Algharbi, Salem; Aboudi, Faisal; Khudiri, Musab

    2016-01-01

    Disclosed are various embodiments for a prediction application to predict a stuck pipe. A linear regression model is generated from hook load readings at corresponding bit depths. A current hook load reading at a current bit depth is compared with a normal hook load reading from the linear regression model. A current hook load greater than a normal hook load for a given bit depth indicates the likelihood of a stuck pipe.

  1. Genomic prediction using subsampling.

    Science.gov (United States)

    Xavier, Alencar; Xu, Shizhong; Muir, William; Rainey, Katy Martin

    2017-03-24

    Genome-wide assisted selection is a critical tool for the genetic improvement of plants and animals. Whole-genome regression models in Bayesian framework represent the main family of prediction methods. Fitting such models with a large number of observations involves a prohibitive computational burden. We propose the use of subsampling bootstrap Markov chain in genomic prediction. Such method consists of fitting whole-genome regression models by subsampling observations in each round of a Markov Chain Monte Carlo. We evaluated the effect of subsampling bootstrap on prediction and computational parameters. Across datasets, we observed an optimal subsampling proportion of observations around 50% with replacement, and around 33% without replacement. Subsampling provided a substantial decrease in computation time, reducing the time to fit the model by half. On average, losses on predictive properties imposed by subsampling were negligible, usually below 1%. For each dataset, an optimal subsampling point that improves prediction properties was observed, but the improvements were also negligible. Combining subsampling with Gibbs sampling is an interesting ensemble algorithm. The investigation indicates that the subsampling bootstrap Markov chain algorithm substantially reduces computational burden associated with model fitting, and it may slightly enhance prediction properties.
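
    To make the subsampling idea concrete, here is a deliberately simplified sketch in which each round of an iterative whole-genome regression fit uses a roughly 50% bootstrap subsample of observations. The ridge-style update stands in for the actual Gibbs sampler, and the marker data, shrinkage parameter and number of rounds are assumptions.

        # Hedged sketch: subsampling observations inside an iterative whole-genome fit.
        import numpy as np

        rng = np.random.default_rng(7)
        n, p = 300, 500
        X = rng.integers(0, 3, size=(n, p)).astype(float)         # toy marker matrix (0/1/2 genotypes)
        y = X @ (rng.standard_normal(p) * 0.05) + rng.standard_normal(n)

        lam, n_rounds, frac = 50.0, 100, 0.5                       # shrinkage, rounds, subsample fraction
        draws = []
        for _ in range(n_rounds):
            idx = rng.choice(n, size=int(frac * n), replace=True)  # ~50% of records, with replacement
            Xs, ys = X[idx], y[idx]
            beta = np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ ys)
            draws.append(beta)

        beta_hat = np.mean(draws, axis=0)                          # averaged marker effects

    Because each round only touches half of the records, the per-round cost drops substantially, which mirrors the roughly halved fitting time reported above.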

  2. Efficient ionizer for polarized H- formation

    International Nuclear Information System (INIS)

    Alessi, J.G.

    1985-01-01

    An ionizer is under development for a polarized H- source based on the resonant charge exchange reaction polarized H0 + D- → polarized H- + D0. The polarized H0 beam passes through the center of a magnetron surface-plasma source having an annular geometry, where it crosses a high current (approx. 0.5 A), 200 eV D- beam. Calculations predict an H0 → H- ionization efficiency of approx. 7%, more than an order of magnitude higher than that obtained on present ground state atomic beam sources. In initial experiments using an unpolarized H0 beam, H- currents in excess of 100 μA have been measured. While the ionization efficiency is now only about the same as other methods (Cs beam, for example), the results are encouraging since it appears that by injecting positive ions to improve the space-charge neutralization, and by improving the extraction optics, considerable gains in intensity will be made. We will then use this ionizer with a polarized H0 beam, and measure the polarization of the resulting H- beam. If no depolarization is observed this ionizer will be combined with an atomic beam, cooled to 5 to 6 K, to give a polarized H- beam expected to be in the milliampere range for use in the AGS

  3. Efficient alignment-free DNA barcode analytics.

    Science.gov (United States)

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
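
    The spectrum representation mentioned above is essentially a fixed-length vector of k-mer counts. The short sketch below is generic: the value of k, the toy sequences and the cosine similarity are illustrative assumptions, not the benchmark setup of the paper.

        # Hedged sketch: k-mer spectrum of a barcode and a simple similarity between spectra.
        from collections import Counter
        import math

        def spectrum(seq, k=4):
            # count every overlapping k-mer (the "features" of the fixed-length representation)
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        def cosine_similarity(a, b):
            dot = sum(a[kmer] * b[kmer] for kmer in set(a) & set(b))
            norm_a = math.sqrt(sum(v * v for v in a.values()))
            norm_b = math.sqrt(sum(v * v for v in b.values()))
            return dot / (norm_a * norm_b)

        s1 = spectrum("ACGTACGTTGCAACGT")
        s2 = spectrum("ACGTACGATGCAACGA")
        print(f"spectrum similarity: {cosine_similarity(s1, s2):.2f}")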

  4. Heating efficiency in magnetic nanoparticle hyperthermia

    International Nuclear Information System (INIS)

    Deatsch, Alison E.; Evans, Benjamin A.

    2014-01-01

    Magnetic nanoparticles for hyperthermic treatment of cancers have gained significant attention in recent years. In magnetic hyperthermia, three independent mechanisms result in thermal energy upon stimulation: Néel relaxation, Brownian relaxation, and hysteresis loss. The relative contribution of each is strongly dependent on size, shape, crystalline anisotropy, and degree of aggregation or agglomeration of the nanoparticles. We review the effects of each of these physical mechanisms in light of recent experimental studies and suggest routes for progress in the field. Particular attention is given to the influence of the collective behaviors of nanoparticles in suspension. A number of recent studies have probed the effect of nanoparticle concentration on heating efficiency and have reported superficially contradictory results. We contextualize these studies and show that they consistently indicate a decrease in magnetic relaxation time with increasing nanoparticle concentration, in both Brownian- and Néel-dominated regimes. This leads to a predictable effect on heating efficiency and alleviates a significant source of confusion within the field. - Highlights: • Magnetic nanoparticle hyperthermia. • Heating depends on individual properties and collective properties. • We review recent studies with respect to loss mechanisms. • Collective behavior is a key source of confusion in the field. • We contextualize recent studies to elucidate consistencies and alleviate confusion

  5. Efficient Unsteady Flow Visualization with High-Order Access Dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    2016-04-19

    We present a novel high-order access dependencies based model for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability in data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.
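    The high-order access dependencies can be thought of as order-k transition statistics over data-block access sequences. The sketch below is an assumption about the general idea rather than the authors' code: it builds such a table from traced pathlines in a preprocessing pass and then prefetches the most likely next block given the last k accesses.

```python
# Sketch: order-k access-dependency table for prefetching (illustrative only).
from collections import defaultdict, Counter

def build_dependencies(access_sequences, order=2):
    """Count which block follows each length-`order` history of block accesses."""
    table = defaultdict(Counter)
    for seq in access_sequences:
        for i in range(len(seq) - order):
            history = tuple(seq[i:i + order])
            table[history][seq[i + order]] += 1
    return table

def predict_next(table, history):
    """Return the most frequent successor of `history`, or None if unseen."""
    counts = table.get(tuple(history))
    return counts.most_common(1)[0][0] if counts else None

# Toy traces of data-block IDs visited by particles during preprocessing
traces = [[1, 2, 3, 4], [1, 2, 3, 5], [2, 3, 4, 6]]
deps = build_dependencies(traces, order=2)
print(predict_next(deps, [2, 3]))   # -> 4 (seen twice, versus 5 once)
```

    Using a longer history (larger order) captures more sophisticated access patterns at the cost of a larger table, which is the accuracy/locality trade-off the abstract refers to.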

  6. High-efficiency single-photon source: The photonic wire geometry

    DEFF Research Database (Denmark)

    Claudon, J.; Bazin, Maela; Malik, Nitin S.

    2009-01-01

    We present a single-photon-source design based on the emission of a quantum dot embedded in a semiconductor (GaAs) nanowire. The nanowire ends are engineered (efficient metallic mirror and tip taper) to reach a predicted record-high collection efficiency of 90% with a realistic design. Preliminar...

  7. Efficient hash tables for network applications.

    Science.gov (United States)

    Zink, Thomas; Waldvogel, Marcel

    2015-01-01

    Hashing has yet to be widely accepted as a component of hard real-time systems and hardware implementations, due to still existing prejudices concerning the unpredictability of space and time requirements resulting from collisions. While in theory perfect hashing can provide optimal mapping, in practice, finding a perfect hash function is too expensive, especially in the context of high-speed applications. The introduction of hashing with multiple choices, d-left hashing and probabilistic table summaries, has caused a shift towards deterministic DRAM access. However, high amounts of rare and expensive high-speed SRAM need to be traded off for predictability, which is infeasible for many applications. In this paper we show that previous suggestions suffer from the false precondition of full generality. Our approach exploits four individual degrees of freedom available in many practical applications, especially hardware and high-speed lookups. This reduces the requirement of on-chip memory up to an order of magnitude and guarantees constant lookup and update time at the cost of only minute amounts of additional hardware. Our design makes efficient hash table implementations cheaper, more predictable, and more practical.
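    To make the "hashing with multiple choices" idea concrete, here is a minimal d-left-style sketch (illustrative only; the paper's actual scheme additionally uses table summaries and exploits application-specific degrees of freedom). Keys are hashed into d sub-tables and inserted into the least-loaded candidate bucket, which keeps bucket occupancy, and hence worst-case lookup time, tightly bounded.

```python
# Minimal sketch of hashing with d choices (d-left style); illustrative only.
import hashlib

class MultiChoiceHashTable:
    def __init__(self, d=2, buckets_per_table=64, bucket_capacity=4):
        self.d = d
        self.b = buckets_per_table
        self.cap = bucket_capacity
        self.tables = [[[] for _ in range(self.b)] for _ in range(d)]

    def _index(self, key, i):
        # Independent hash per sub-table via a per-table personalization byte.
        h = hashlib.blake2b(key.encode(), person=bytes([i])).digest()
        return int.from_bytes(h[:8], "big") % self.b

    def insert(self, key, value):
        # Choose the least-loaded candidate bucket; ties go to the leftmost table.
        candidates = [(len(self.tables[i][self._index(key, i)]), i) for i in range(self.d)]
        _, i = min(candidates)
        bucket = self.tables[i][self._index(key, i)]
        if len(bucket) >= self.cap:
            raise OverflowError("bucket full; a real design would rehash or summarize")
        bucket.append((key, value))

    def lookup(self, key):
        # At most d bucket probes, each of bounded size -> predictable lookup cost.
        for i in range(self.d):
            for k, v in self.tables[i][self._index(key, i)]:
                if k == key:
                    return v
        return None

t = MultiChoiceHashTable()
t.insert("10.0.0.1", "port-3")
print(t.lookup("10.0.0.1"))
```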

  8. Energy Efficiency Requirements in Building Codes, Energy Efficiency Policies for New Buildings. IEA Information Paper

    Energy Technology Data Exchange (ETDEWEB)

    Laustsen, Jens

    2008-03-15

publications, including the World Energy Outlook 2006 (WEO) and Energy Technology Perspectives (ETP). Here, we based the estimates of potentials on the scenarios presented, in particular on the predictions of consumption in the residential and commercial sectors in the WEO 2006. Finally, this paper recommends policies which could be used to realise these large and feasible energy-saving potentials in new buildings, and through the use of building codes in renovation or refurbishment. The paper addresses experts as well as policy makers and interest groups with a particular interest in energy efficiency in new buildings. Some parts may therefore seem simplified or familiar to some experts, such as the discussions of barriers or of the climatic impact on efficiency. Other parts may, on the other hand, seem somewhat technical for the policy-oriented reader or for some interest groups. There are large and compelling opportunities, recognised by many experts, and many policymakers and governments are willing to act. Yet too little happens, because of barriers, limited understanding within institutions, and poor communication between the different layers of the implementation process

  9. Energy efficiency initiatives: Indian experience

    Energy Technology Data Exchange (ETDEWEB)

    Dey, Dipankar [ICFAI Business School, Kolkata, (IBS-K) (India)

    2007-07-01

India, with a population of over 1.10 billion, is one of the fastest growing economies of the world. As domestic sources of conventional commercial energy are drying up, dependence on foreign energy sources is increasing. There exists a huge potential for saving energy in India. After the first 'oil shock' (1973), the government of India realized the need for conservation of energy, and a 'Petroleum Conservation Action Group' was formed in 1976. Since then many initiatives aiming at energy conservation and improved energy efficiency have been undertaken (the establishment of the Petroleum Conservation Research Association in 1978; the notification of the Eco-labelling scheme in 1991; the formation of the Bureau of Energy Efficiency in 2002). But none of these initiatives was successful. In this paper an attempt has been made to analyze the changing importance of the energy conservation/efficiency measures initiated in India between 1970 and 2005. The present study tries to analyze the limitations of those initiatives and the reasons for their failure. The probable reasons are: the fuel pricing mechanism (including subsidies), political factors, corruption and unethical practices, the influence of oil and related industry lobbies - both internal and external, the economic situation, and the prolonged protection of domestic industries. Further, as India is opening its economy, the study explores the opportunities that a globally competitive market would offer to improve the overall energy efficiency of the economy. The study suggests that the Bureau of Energy Efficiency (BEE) - the newly formed nodal agency for improving the energy efficiency of the economy - could be made an autonomous institution in which political intervention is minimal. For proper implementation of the different initiatives to improve energy efficiency, BEE should involve civil society organizations (NGOs) more closely, from the inception to the implementation stage of the programs. The paper also

  10. Benefits for whom? Energy efficiency within the efficient market

    International Nuclear Information System (INIS)

    Chello, Dario

    2015-01-01

How should the lack of an efficient energy market affect the design of energy efficiency policies and their implementation? What are the consequences of an inefficient energy market for end users' behaviour? This article tries to answer such questions by considering the decision making of domestic users in light of a few fundamental concepts of behavioural economics. The mechanism of price formation in the market, with particular reference to the internal energy market in Europe, is examined, and we show that price remains the inflexible attribute in making an energy choice. Finally, some conclusions are addressed to policy makers on how to overcome the barriers illustrated.

  11. Paying for Joint or Single Audits? The Importance of Auditor Pairings and Differences in Technology Efficiency

    DEFF Research Database (Denmark)

    Holm, Claus; Thinggaard, Frank

    2016-01-01

    In the first theoretical paper on joint audits, Deng et al. predict that the audit fees for joint audits will be lower than those from single audits. However, the prediction depends on the combination of audit firms involved in the joint audit and on their technology efficiency as well as on the ...

  12. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project...

  13. Mobilising Investment in Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-01

    Taxes, loans and grants, trading schemes and white certificates, public procurement and investment in R&D or infrastructure: known collectively as 'economic instruments', these tools can be powerful means of mobilising the finances needed to achieve policy goals by implementing energy efficiency measures. The role of economic instruments is to kick-start the private financial markets and to motivate private investors to fund EE measures. They should reinforce and promote energy performance regulations. This IEA analysis addresses the fact that, to date, relatively little effort has been directed toward evaluating how well economic instruments work. Using the buildings sector to illustrate how such measures can support energy efficiency, this paper can help policy makers better select and design economic instruments appropriate to their policy objectives and national contexts. This report’s three main aims are to: 1) Examine how economic instruments are currently used in energy efficiency policy; 2) Consider how economic instruments can be more effective and efficient in supporting low-energy buildings; and 3) Assess how economic instruments should be funded, where public outlay is needed. Detailed case studies in this report assess examples of economic instruments for energy efficiency in the buildings sector in Canada (grants), France (tax relief and loans), Germany (loans and grants), Ireland (grants) and Italy (white certificates and tax relief).

  14. Metasurface holograms reaching 80% efficiency.

    Science.gov (United States)

    Zheng, Guoxing; Mühlenbernd, Holger; Kenney, Mitchell; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang

    2015-04-01

    Surfaces covered by ultrathin plasmonic structures--so-called metasurfaces--have recently been shown to be capable of completely controlling the phase of light, representing a new paradigm for the design of innovative optical elements such as ultrathin flat lenses, directional couplers for surface plasmon polaritons and wave plate vortex beam generation. Among the various types of metasurfaces, geometric metasurfaces, which consist of an array of plasmonic nanorods with spatially varying orientations, have shown superior phase control due to the geometric nature of their phase profile. Metasurfaces have recently been used to make computer-generated holograms, but the hologram efficiency remained too low at visible wavelengths for practical purposes. Here, we report the design and realization of a geometric metasurface hologram reaching diffraction efficiencies of 80% at 825 nm and a broad bandwidth between 630 nm and 1,050 nm. The 16-level-phase computer-generated hologram demonstrated here combines the advantages of a geometric metasurface for the superior control of the phase profile and of reflectarrays for achieving high polarization conversion efficiency. Specifically, the design of the hologram integrates a ground metal plane with a geometric metasurface that enhances the conversion efficiency between the two circular polarization states, leading to high diffraction efficiency without complicating the fabrication process. Because of these advantages, our strategy could be viable for various practical holographic applications.

  15. Transionospheric propagation predictions

    Science.gov (United States)

    Klobucher, J. A.; Basu, S.; Basu, S.; Bernhardt, P. A.; Davies, K.; Donatelli, D. E.; Fremouw, E. J.; Goodman, J. M.; Hartmann, G. K.; Leitinger, R.

    1979-01-01

    The current status and future prospects of the capability to make transionospheric propagation predictions are addressed, highlighting the effects of the ionized media, which dominate for frequencies below 1 to 3 GHz, depending upon the state of the ionosphere and the elevation angle through the Earth-space path. The primary concerns are the predictions of time delay of signal modulation (group path delay) and of radio wave scintillation. Progress in these areas is strongly tied to knowledge of variable structures in the ionosphere ranging from the large scale (thousands of kilometers in horizontal extent) to the fine scale (kilometer size). Ionospheric variability and the relative importance of various mechanisms responsible for the time histories observed in total electron content (TEC), proportional to signal group delay, and in irregularity formation are discussed in terms of capability to make both short and long term predictions. The data base upon which predictions are made is examined for its adequacy, and the prospects for prediction improvements by more theoretical studies as well as by increasing the available statistical data base are examined.

  16. Predictable grammatical constructions

    DEFF Research Database (Denmark)

    Lucas, Sandra

    2015-01-01

My aim in this paper is to provide evidence from diachronic linguistics for the view that some predictable units are entrenched in grammar and consequently in human cognition, in a way that makes them functionally and structurally equal to nonpredictable grammatical units, suggesting that these predictable units should be considered grammatical constructions on a par with the nonpredictable constructions. Frequency has usually been seen as the only possible argument speaking in favor of viewing some formally and semantically fully predictable units as grammatical constructions. However, this paper ... semantically and formally predictable. Despite this difference, [méllo INF], like the other future periphrases, seems to be highly entrenched in the cognition (and grammar) of Early Medieval Greek language users, and consequently a grammatical construction. The syntactic evidence speaking in favor of [méllo ...

  17. Energy efficiency and renewables policies: Promoting efficiency or facilitating monopsony?

    International Nuclear Information System (INIS)

    Brennan, Timothy J.

    2011-01-01

The cliché in the electricity sector, 'the cheapest power plant is the one we don't build,' neglects the benefits of the energy that plant would generate. That economy-wide perspective need not apply when considering benefits to consumers only, if not building that plant was an exercise of monopsony power. A regulator maximizing consumer welfare may need to avoid rationing demand at monopsony prices. Subsidizing energy efficiency to reduce electricity demand at the margin can solve that problem, if energy efficiency and electricity use are substitutes. Renewable energy subsidies, percentage use standards, or feed-in tariffs may also serve monopsony, given sufficient inelasticity in fossil fuel electricity supply. We may not observe these effects if the regulator can set price as well as quantity, lacks buyer-side market power, or is legally precluded from denying generators a reasonable return on capital. Nevertheless, the possibility of monopsony remains significant in light of the debate as to whether antitrust enforcement should maximize consumer welfare or total welfare. - Research Highlights: → Subsidizing energy efficiency can promote monopsony, if efficiency and use are substitutes. → Renewable energy subsidies, portfolio standards, or feed-in tariffs may also promote monopsony. → Effects require buyer-side market power and ability to deny generators a reasonable return. → Monopsony is significant in light of whether antitrust should maximize consumer or total welfare.

  18. Green corridor : energy efficiency initiatives

    Energy Technology Data Exchange (ETDEWEB)

    Bartlett, M.; Strickland, R.; Harding, N. [Windsor Univ., ON (Canada)

    2005-07-01

This presentation discussed environmental sustainability using alternative energy technologies. It discussed the Ecohouse, a house designed using conventional and inventive products and techniques to represent an eco-efficient model for living: a more sustainable house that demonstrates sustainable technologies in action and sets a new standard for resource efficiency in Windsor. The presentation provided a building analysis and discussed the following: geothermal heating; distributive power; green roof; net metering; grey-water plumbing; solar water heating; passive lighting; energy-efficient lighting; and geothermal heating and cooling. It also discussed opportunities for innovation, namely: a greenhouse; composting toilets; alternative insulation; net metering; solar arrays; hydroponics; and expansion of the house. Also discussed were a nature bridge, an underwater electric kite, and a vertically aerodynamic turbine. The benefits of renewable energy, small hydro power potential, and instream energy generation technology were presented. 9 refs., figs.

  19. The Efficiency of Educational Production

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Heinesen, Eskil; Tranæs, Torben

Focusing in particular on upper secondary education, this paper examines whether the relatively high level of expenditure on education in the Nordic countries is matched by high output from the educational sector, both in terms of student enrolment and indicators of output quality in the form of graduation/completion rates and expected earnings after completed education. We use Data Envelopment Analysis (DEA) to compare (benchmark) the Nordic countries with a relevant group of rich OECD countries and calculate input efficiency scores for each country. We estimate a wide range of specifications ... is the most efficient Nordic country (often fully efficient), whereas Sweden and especially Norway and Denmark are clearly inefficient. However, using PISA test scores as indicators of student input quality in upper secondary education reduces the inefficiencies of these three countries. Also, when expected ...

  20. The Efficiency of Educational Production

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Heinesen, Eskil; Tranæs, Torben

    2015-01-01

Focusing in particular on upper secondary education, this paper examines whether the relatively high level of expenditure on education in the Nordic countries is matched by high output from the educational sector, both in terms of student enrolment and indicators of output quality in the form of graduation/completion rates and expected earnings after completed education. We use data envelopment analysis (DEA) to compare (benchmark) the Nordic countries with a relevant group of rich OECD countries and calculate input efficiency scores for each country. We estimate a wide range of specifications ... is the most efficient Nordic country (often fully efficient), whereas Sweden and especially Norway and Denmark are clearly inefficient. However, using PISA test scores as indicators of student input quality in upper secondary education reduces the inefficiencies of these three countries. Also, when expected ...

  1. Frontier technologies to improve efficiency

    International Nuclear Information System (INIS)

    Kalhammer, F.R.

    1992-01-01

The author discusses conservation technology to improve the efficiency of energy production. Although coal is seen as the largest source of fuel for producing electricity until the year 2040, the heating value of coal is expected to be better exploited by using Integrated Gasification Combined Cycle (IGCC) technology. Use of fuel cells to produce electricity will be a viable option only if costs can be reduced to make the technology competitive. By coupling IGCC with fuel cells it may be possible to increase the total conversion efficiency of coal to electricity to 50%. Photovoltaics technology is more likely to be used in developing countries. Electric utilities target power electronics, lighting fixtures, heat pumps, plasma processing, freeze concentration and applications of superconductivity as the electricity end-use technologies with the most potential for efficiency improvement. The impact of these technologies on coping with the greenhouse effect was not addressed

  2. Thermodynamic efficiency of solar concentrators.

    Science.gov (United States)

    Shatz, Narkis; Bortz, John; Winston, Roland

    2010-04-26

    The optical thermodynamic efficiency is a comprehensive metric that takes into account all loss mechanisms associated with transferring flux from the source to the target phase space, which may include losses due to inadequate design, non-ideal materials, fabrication errors, and less than maximal concentration. We discuss consequences of Fermat's principle of geometrical optics and review étendue dilution and optical loss mechanisms associated with nonimaging concentrators. We develop an expression for the optical thermodynamic efficiency which combines the first and second laws of thermodynamics. As such, this metric is a gold standard for evaluating the performance of nonimaging concentrators. We provide examples illustrating the use of this new metric for concentrating photovoltaic systems for solar power applications, and in particular show how skewness mismatch limits the attainable optical thermodynamic efficiency.
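    As background for the "less than maximal concentration" loss term mentioned above, the second-law (étendue-conservation) bound on the geometric concentration of a 3D concentrator with acceptance half-angle theta and an output medium of refractive index n is the standard nonimaging-optics result quoted below for context; it is not taken from the abstract itself.

```latex
C_{\max} = \frac{n^{2}}{\sin^{2}\theta}
```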

  3. Cleanroom Energy Efficiency Workshop Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, Bill

    1999-03-15

    On March 15, 1999, Lawrence Berkeley National Laboratory hosted a workshop focused on energy efficiency in Cleanroom facilities. The workshop was held as part of a multiyear effort sponsored by the California Institute for Energy Efficiency, and the California Energy Commission. It is part of a project that concentrates on improving energy efficiency in Laboratory type facilities including cleanrooms. The project targets the broad market of laboratory and cleanroom facilities, and thus cross-cuts many different industries and institutions. This workshop was intended to raise awareness by sharing case study success stories, providing a forum for industry networking on energy issues, contributing LBNL expertise in research to date, determining barriers to implementation and possible solutions, and soliciting input for further research.

  4. Efficiency Of Transuranium Nuclides Transmutation

    International Nuclear Information System (INIS)

    Kazansky, Yu.A.; Klinov, D.A.; Semenov, E.V.

    2002-01-01

One of the ways to create wasteless nuclear power is based on the transmutation of spent fuel nuclides. In particular, it is considered that the radioactivity of nuclear power wastes should be the same as (or smaller than) the radioactivity of the uranium and thorium extracted from the entrails of the Earth. The problem of the efficiency of fission fragment transmutation was considered in an article where, in particular, the concepts of a transmutation factor and a 'generalised' index of the biological hazard of radioactive nuclides were introduced. The transmutation efficiency turns out to be a function of time and, naturally, depends on the nuclear power activity scenario, the neutron flux, the absorption cross-sections of the nuclides under transmutation, and the rate of their formation in reactors. In the present paper the efficiency of the transmutation of transuranium nuclides is considered

  5. GATE: Improving the computational efficiency

    International Nuclear Information System (INIS)

    Staelens, S.; De Beenhouwer, J.; Kruecker, D.; Maigne, L.; Rannou, F.; Ferrer, L.; D'Asseler, Y.; Buvat, I.; Lemahieu, I.

    2006-01-01

    GATE is a software dedicated to Monte Carlo simulations in Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). An important disadvantage of those simulations is the fundamental burden of computation time. This manuscript describes three different techniques in order to improve the efficiency of those simulations. Firstly, the implementation of variance reduction techniques (VRTs), more specifically the incorporation of geometrical importance sampling, is discussed. After this, the newly designed cluster version of the GATE software is described. The experiments have shown that GATE simulations scale very well on a cluster of homogeneous computers. Finally, an elaboration on the deployment of GATE on the Enabling Grids for E-Science in Europe (EGEE) grid will conclude the description of efficiency enhancement efforts. The three aforementioned methods improve the efficiency of GATE to a large extent and make realistic patient-specific overnight Monte Carlo simulations achievable
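    As a rough illustration of the kind of variance-reduction technique mentioned above (importance-driven particle splitting and Russian roulette), consider the sketch below. It is a generic toy example, not GATE code; the importance values, the particle representation and the geometry are assumptions.

```python
# Toy illustration of geometrical importance sampling via particle splitting
# and Russian roulette (generic variance-reduction idea; not GATE's implementation).
import random

def cross_boundary(particle, importance_from, importance_to):
    """Adjust the particle population when moving between importance regions."""
    ratio = importance_to / importance_from
    if ratio >= 1.0:
        # Splitting: create copies, each carrying a reduced statistical weight.
        n = int(ratio)
        weight = particle["weight"] / n
        return [{**particle, "weight": weight} for _ in range(n)]
    # Russian roulette: survive with probability `ratio`, with a boosted weight,
    # so the expected weight is conserved.
    if random.random() < ratio:
        return [{**particle, "weight": particle["weight"] / ratio}]
    return []  # particle terminated

random.seed(0)
p = {"position": (0.0, 0.0, 0.0), "weight": 1.0}
print(cross_boundary(p, importance_from=1.0, importance_to=4.0))  # 4 copies, weight 0.25
```

    Both branches preserve the expected statistical weight, which is why such techniques reduce variance in the regions of interest without biasing the simulated estimates.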

  6. Efficiency of Capacitively Loaded Converters

    DEFF Research Database (Denmark)

    Andersen, Thomas; Huang, Lina; Andersen, Michael A. E.

    2012-01-01

This paper first explores the characteristic of capacitance versus voltage for a dielectric electro active polymer (DEAP) actuator, a 2 kV polypropylene film capacitor and a 3 kV X7R multi layer ceramic capacitor (MLCC). An energy efficiency for capacitively loaded converters ... is introduced as a definition of efficiency. The calculated and measured efficiency curves for charging the DEAP actuator, the polypropylene film capacitor and the X7R MLCC are provided and compared. Attention has to be paid to voltage-dependent capacitive loads, like the X7R MLCC, when evaluating the charging ... polypropylene film capacitor can be the equivalent capacitive load. Because of its voltage-dependent characteristic, the X7R MLCC cannot be used to replace the DEAP actuator. However, this type of capacitor can be used to substitute a capacitive actuator with voltage-dependent properties in the development phase.
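    For orientation, one generic way to define such an energy efficiency for charging a capacitive load (a textbook-style form, not necessarily the paper's exact definition) compares the energy stored in the load, with C(v) its differential capacitance, to the energy drawn from the supply; for a linear capacitor the stored energy reduces to the familiar half C V squared.

```latex
\eta \;=\; \frac{E_{\mathrm{stored}}}{E_{\mathrm{in}}},
\qquad
E_{\mathrm{stored}} \;=\; \int_{0}^{V} C(v)\, v \,\mathrm{d}v
\;\;\stackrel{C(v)=C}{=}\;\; \tfrac{1}{2}\, C V^{2}.
```

    As a point of reference, charging a linear capacitor from a constant-voltage source through a purely resistive path dissipates another half C V squared in the resistance regardless of its value, capping that charging scheme at 50% efficiency; this is one reason converter topology matters for capacitive loads such as DEAP actuators.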

  7. Correlation between social responsibility and efficient performance in Croatian enterprises

    Directory of Open Access Journals (Sweden)

    Neda Vitezić

    2011-12-01

Full Text Available The objective of the research is to establish whether there is a correlation between efficiency and socially responsible business performance in Croatian enterprises. The research is based on the hypothesis that higher corporate efficiency fosters the development of social responsibility in enterprises and, vice versa, that socially more responsible corporate performance has a positive effect on efficiency. In their research, many authors have demonstrated a correlation between social responsibility and financial performance, the reputation of the enterprise, and added value. Cases from transition countries, which moved to a market economy and focused on socially responsible management and sustainability, have not been the subject of research. The social responsibility concept implies a balance between economic, ecological and social goals, which means distributing assets among several actors, so it may be predicted that more efficient enterprises will adopt the sustainability concept sooner and act more responsibly. Besides the theoretical social responsibility hypothesis, the starting point of the empirical section is a dynamic analysis of the business activities of Croatian enterprises in the period between 1993 and 2010, on the basis of which a sample of enterprises that submit transparent reports on social responsibility was chosen. The main result, obtained by univariate analysis, confirms that socially more responsible enterprises have better financial results, i.e. they are more efficient, and also have a better reputation. The research also had limitations with regard to the qualitative determination of the impact of social responsibility on efficiency. The conclusion is that there is a causal relationship between efficiency and social responsibility, i.e. a higher efficiency level enables a higher allocation of resources for socially more responsible corporate performance and vice versa; socially responsible corporate performance has an impact on

  8. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

Early heart disease control can be achieved through highly efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. The analysis and application of Poisson mixture regression models is addressed for two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks component-wise using a Poisson mixture regression model.
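    To make the mixture idea concrete, the following toy sketch fits a two-component Poisson mixture by expectation-maximization. Covariates and the concomitant-variable extension are omitted for brevity, the data are simulated, and none of this is the authors' code; it only illustrates how responsibilities separate a low-rate and a high-rate subpopulation.

```python
# Toy EM for a two-component Poisson mixture (no covariates; illustrative only).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
# Simulated counts: a low-rate and a high-rate subpopulation.
y = np.concatenate([rng.poisson(1.5, 300), rng.poisson(6.0, 200)])

pi, lam = 0.5, np.array([1.0, 5.0])          # initial mixing weight and rates
for _ in range(100):
    # E-step: responsibility of the high-rate component for each observation.
    p0 = (1 - pi) * poisson.pmf(y, lam[0])
    p1 = pi * poisson.pmf(y, lam[1])
    r = p1 / (p0 + p1)
    # M-step: update the mixing weight and the component rates.
    pi = r.mean()
    lam = np.array([np.average(y, weights=1 - r), np.average(y, weights=r)])

print(pi, lam)   # roughly 0.4 and rates near 1.5 and 6.0
```

    A regression version would replace each constant rate with exp(X @ beta_k) and fit weighted Poisson regressions in the M-step; model choice between such variants is then typically done with BIC, as in the abstract above.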

  9. Fourier transform wavefront control with adaptive prediction of the atmosphere.

    Science.gov (United States)

    Poyneer, Lisa A; Macintosh, Bruce A; Véran, Jean-Pierre

    2007-09-01

Predictive Fourier control is a temporal power spectral density-based adaptive method for adaptive optics that predicts the atmosphere under the assumption of frozen flow. The predictive controller is based on Kalman filtering and a Fourier decomposition of atmospheric turbulence using the Fourier transform reconstructor. It provides a stable way to compensate for arbitrary numbers of atmospheric layers. For each Fourier mode, efficient and accurate algorithms estimate the necessary atmospheric parameters from closed-loop telemetry and determine the predictive filter, adjusting as conditions change. This prediction improves atmospheric rejection, leading to significant improvements in system performance. For a 48×48 actuator system operating at 2 kHz, five-layer prediction for all modes is achievable in under 2×10⁹ floating-point operations/s.
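    A heavily simplified sketch of the per-Fourier-mode prediction idea is shown below: a single complex modal coefficient is modelled as a translating (frozen-flow) phasor plus noise and predicted one frame ahead with a scalar Kalman filter. The state model, noise levels and numbers are assumptions for illustration; the controller described above handles multiple layers and estimates its parameters from closed-loop telemetry.

```python
# Scalar Kalman prediction of one Fourier mode under a frozen-flow (phasor) model.
# Purely illustrative; not the controller described in the abstract.
import numpy as np

dt, f_temporal = 1 / 2000, 40.0            # frame period and assumed mode frequency
a = np.exp(2j * np.pi * f_temporal * dt)   # one-step phase advance (frozen flow)
q, r = 1e-4, 1e-2                          # process and measurement noise variances

def kalman_step(measurement, x_hat, p):
    # Predict the next modal coefficient, then correct with the new measurement.
    x_pred, p_pred = a * x_hat, abs(a) ** 2 * p + q
    k = p_pred / (p_pred + r)              # Kalman gain
    x_new = x_pred + k * (measurement - x_pred)
    return x_pred, x_new, (1 - k) * p_pred

rng = np.random.default_rng(1)
x_hat, p = 0.0 + 0.0j, 1.0                 # state estimate and its variance
true_mode = 1.0 + 0.0j
for t in range(5):
    true_mode *= a                                      # frozen-flow evolution
    z = true_mode + rng.normal(0, 0.1) + 1j * rng.normal(0, 0.1)
    prediction, x_hat, p = kalman_step(z, x_hat, p)
    print(t, abs(prediction - true_mode))               # one-step prediction error
```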

  10. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
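    The "simple averaging model" baseline mentioned above can be as simple as averaging the same 15-minute slot over the previous few similar days. The sketch below is a hypothetical illustration of such a baseline, not the study's code; the history layout, number of look-back days and units are assumptions.

```python
# Baseline 15-minute-interval predictor: average the same slot over recent days.
# Hypothetical illustration of a simple averaging model.
from datetime import datetime, timedelta

def average_baseline(history, target_time, days_back=3):
    """history: dict mapping datetime (15-min resolution) -> consumption in kWh."""
    values = []
    for d in range(1, days_back + 1):
        t = target_time - timedelta(days=d)
        if t in history:
            values.append(history[t])
    return sum(values) / len(values) if values else None

# Toy history: 1.2 kWh at 14:00 on each of the three previous days.
hist = {datetime(2015, 6, d, 14, 0): 1.2 for d in (1, 2, 3)}
print(average_baseline(hist, datetime(2015, 6, 4, 14, 0)))   # -> 1.2
```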

  11. Energy Efficiency in Future PONs

    DEFF Research Database (Denmark)

    Reschat, Halfdan; Laustsen, Johannes Russell; Wessing, Henrik

    2012-01-01

There is a still-increasing tendency to give energy efficiency a high priority, even in already low energy demanding systems. This is also the case for Passive Optical Networks (PONs), for which many different methods for saving energy have been proposed. This paper uses simulations to evaluate three proposed power saving solutions for PONs which use sleep mechanisms for saving power. The discovered advantages and disadvantages of these methods are then used as a basis for proposing a new solution combining different techniques in order to increase the energy efficiency further. This novel solution...

  12. Information efficiency in visual communication

    Science.gov (United States)

    Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1993-08-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
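    For background, the conventional variance-based ("energy") bit-allocation rule that information-based allocation is contrasted with is the classical transform-coding result below, quoted here for context rather than taken from the paper; sigma_k^2 is the variance of frequency band k, N the number of bands and b-bar the average bit budget per band.

```latex
b_k \;=\; \bar{b} \;+\; \frac{1}{2}\log_2\!\frac{\sigma_k^{2}}{\left(\prod_{j=1}^{N}\sigma_j^{2}\right)^{1/N}}
```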

  13. Information efficiency in visual communication

    Science.gov (United States)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  14. Essays on Earnings Predictability

    DEFF Research Database (Denmark)

    Bruun, Mark

This dissertation addresses the prediction of corporate earnings. The thesis aims to examine whether the degree of precision in earnings forecasts can be increased by basing them on historical financial ratios. Furthermore, the intent of the dissertation is to analyze whether accounting standards ... forecasts are not more accurate than the simpler forecasts based on a historical time series of earnings. Secondly, the dissertation shows how accounting standards affect analysts' earnings predictions. Accounting conservatism contributes to a more volatile earnings process, which lowers the accuracy ... of analysts' earnings forecasts. Furthermore, the dissertation shows how the stock market's reaction to the disclosure of information about corporate earnings depends on how well corporate earnings can be predicted. The dissertation indicates that the stock market's reaction to the disclosure of earnings...

  15. Pulverized coal devolatilization prediction

    International Nuclear Information System (INIS)

    Rojas, Andres F; Barraza, Juan M

    2008-01-01

The aim of this study was to predict the devolatilization of two bituminous coals at a low heating rate (50 °C/min) with the FG-DVC program (Functional Group - Depolymerization, Vaporization and Crosslinking), and to compare the devolatilization profiles predicted by FG-DVC with those obtained in a thermogravimetric analyzer. The volatile release at a high heating rate (10⁴ K/s) in a drop-tube furnace was also studied. The formation-rate profiles of tar, methane, carbon monoxide and carbon dioxide, and the elemental distribution of hydrogen, oxygen, nitrogen and sulphur in the devolatilization products, were obtained with the FG-DVC program at the low heating rate; the volatile release and the R factor at the high heating rate were calculated. It was found that the program predicts the devolatilization of bituminous coals at the low heating rate; at the high heating rate, a volatile release of around 30% was obtained

  16. Predicting Ideological Prejudice.

    Science.gov (United States)

    Brandt, Mark J

    2017-06-01

    A major shortcoming of current models of ideological prejudice is that although they can anticipate the direction of the association between participants' ideology and their prejudice against a range of target groups, they cannot predict the size of this association. I developed and tested models that can make specific size predictions for this association. A quantitative model that used the perceived ideology of the target group as the primary predictor of the ideology-prejudice relationship was developed with a representative sample of Americans ( N = 4,940) and tested against models using the perceived status of and choice to belong to the target group as predictors. In four studies (total N = 2,093), ideology-prejudice associations were estimated, and these observed estimates were compared with the models' predictions. The model that was based only on perceived ideology was the most parsimonious with the smallest errors.

  17. Tide Predictions, California, 2014, NOAA

    Data.gov (United States)

    U.S. Environmental Protection Agency — The predictions from the web based NOAA Tide Predictions are based upon the latest information available as of the date of the user's request. Tide predictions...

  18. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    Science.gov (United States)

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.
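    A minimal sketch of the predict step of such a particle filter is given below, together with a naive wake-up rule: nodes whose sensing range covers enough predicted particles would be scheduled to wake. The constant-velocity motion model, noise levels, sensing range and thresholds are assumptions; the paper's filter, optimization and scheduling are more involved.

```python
# Sketch: particle-filter prediction of a target's next position, used to decide
# which sensor nodes to wake. Illustrative only; all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)
N = 500
# Particle state: [x, y, vx, vy]; initialized around a known starting point.
particles = rng.normal([0.0, 0.0, 1.0, 0.5], [0.5, 0.5, 0.1, 0.1], size=(N, 4))

def predict(particles, dt=1.0, accel_noise=0.05):
    """Propagate particles with a constant-velocity model plus process noise."""
    out = particles.copy()
    out[:, 0] += out[:, 2] * dt
    out[:, 1] += out[:, 3] * dt
    out[:, 2:] += rng.normal(0, accel_noise, size=(len(out), 2))
    return out

def nodes_to_wake(particles, node_positions, sensing_range=2.0, min_fraction=0.2):
    """Wake nodes whose sensing disk covers at least `min_fraction` of particles."""
    wake = []
    for i, pos in enumerate(node_positions):
        d = np.linalg.norm(particles[:, :2] - pos, axis=1)
        if (d < sensing_range).mean() >= min_fraction:
            wake.append(i)
    return wake

particles = predict(particles)
print(nodes_to_wake(particles, node_positions=np.array([[1.0, 0.5], [10.0, 10.0]])))
```

    Nodes far from the predicted target positions stay asleep, which is where the energy saving comes from.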

  19. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dao-Wei Bi

    2007-03-01

    Full Text Available Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.

  20. Rule Induction-Based Knowledge Discovery for Energy Efficiency

    OpenAIRE

    Chen, Qipeng; Fan, Zhong; Kaleshi, Dritan; Armour, Simon M D

    2015-01-01

Rule induction is a practical approach to knowledge discovery. Provided that a problem is properly formulated, rule induction is able to return the knowledge that addresses the goal of the problem in the form of if-then rules. The primary goals of knowledge discovery are prediction and description. The rule-based knowledge representation is easily understandable, which helps users make decisions. This paper presents the potential of rule induction for energy efficiency. In particular, three rule induct...
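    As an illustration of turning data into if-then rules for an energy-efficiency question, the sketch below induces rules from a toy dataset via a shallow decision tree and prints them in readable form. The data, features and thresholds are hypothetical, and this is not one of the rule-induction algorithms evaluated in the paper.

```python
# Toy rule induction: fit a shallow decision tree on made-up building data and
# read it back as if-then rules. Illustrative only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [outdoor_temp_C, occupancy, hour_of_day]; label: 1 = high consumption.
X = [[30, 50, 14], [28, 40, 15], [5, 45, 11], [2, 5, 23], [25, 0, 2], [8, 60, 10]]
y = [1, 1, 1, 0, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=["outdoor_temp_C", "occupancy", "hour_of_day"])
print(rules)   # e.g. "|--- occupancy <= ... class: 0", readable as if-then rules
```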