Michael J. Falkowski; Andrew T. Hudak; Nicholas L. Crookston; Paul E. Gessler; Edward H. Uebler; Alistair M. S. Smith
2010-01-01
Sustainable forest management requires timely, detailed forest inventory data across large areas, which is difficult to obtain via traditional forest inventory techniques. This study evaluated k-nearest neighbor imputation models incorporating LiDAR data to predict tree-level inventory data (individual tree height, diameter at breast height, and...
Atomic process modeling based on nearest neighbor approximation
International Nuclear Information System (INIS)
Nishikawa, Takeshi
2016-01-01
An atomic model based on the nearest neighbor approximation (NNA) for solving atomic processes in plasmas was considered. In this model, the plasma effect on the electronic state densities of an atom or ion is included as the potential due to the nearest-neighbor atom or ion. Using the model, I was able to compute the ionization degrees of hydrogen plasmas without any of the ad hoc assumptions adopted in atomic models based on the plasma microfield. In order to apply the NNA to plasmas near and above solid density, three treatments were required to obtain physically acceptable results: first, the Coulomb interaction between pairs of ions; second, the modification of the Saha equation; and third, an adequate treatment of the neutral atom's contribution to the potential distribution as the nearest-neighbor particle. (author)
Energy-landscape analysis of the two-dimensional nearest-neighbor φ⁴ model.
Mehta, Dhagash; Hauenstein, Jonathan D; Kastner, Michael
2012-06-01
The stationary points of the potential energy function of the φ⁴ model on a two-dimensional square lattice with nearest-neighbor interactions are studied by means of two numerical methods: a numerical homotopy continuation method and a globally convergent Newton-Raphson method. We analyze the properties of the stationary points, in particular with respect to a number of quantities that have been conjectured to display signatures of the thermodynamic phase transition of the model. Although no such signatures are found for the nearest-neighbor φ⁴ model, our study illustrates the strengths and weaknesses of the numerical methods employed.
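The Newton-Raphson branch of the search described above can be sketched compactly. The sketch below assumes a common form of the lattice φ⁴ potential, an on-site double well (1/4)(φ_i² − 1)² plus a nearest-neighbor coupling (J/2)Σ⟨ij⟩(φ_i − φ_j)²; the 3×3 lattice, the coupling J = 0.3, and the starting point are illustrative choices, not the paper's.

```python
import numpy as np

L = 3          # 3x3 periodic lattice (illustrative size)
J = 0.3        # nearest-neighbor coupling (assumed value)
N = L * L

def neighbors(i):
    """Four nearest neighbors of site i on the periodic square lattice."""
    x, y = divmod(i, L)
    return [((x + 1) % L) * L + y, ((x - 1) % L) * L + y,
            x * L + (y + 1) % L, x * L + (y - 1) % L]

def grad(phi):
    """Gradient of V = sum (1/4)(phi_i^2-1)^2 + (J/2) sum_<ij> (phi_i-phi_j)^2."""
    g = phi * (phi**2 - 1.0)
    for i in range(N):
        for j in neighbors(i):
            g[i] += J * (phi[i] - phi[j])
    return g

def hess(phi):
    """Analytic Hessian of the same potential."""
    H = np.zeros((N, N))
    for i in range(N):
        H[i, i] = 3 * phi[i]**2 - 1.0 + 4 * J
        for j in neighbors(i):
            H[i, j] -= J
    return H

def newton(phi0, tol=1e-12, itmax=100):
    """Newton-Raphson iteration on grad V = 0 (finds stationary points)."""
    phi = phi0.copy()
    for _ in range(itmax):
        g = grad(phi)
        if np.linalg.norm(g) < tol:
            break
        phi -= np.linalg.solve(hess(phi), g)
    return phi

rng = np.random.default_rng(0)
sol = newton(rng.normal(scale=0.1, size=N) + 1.0)
print(np.allclose(sol, 1.0))   # converges to the uniform minimum phi = +1
```

Started near the uniform configuration φ = 1, the iteration converges to that stationary point; other starting points can land on other stationary points, such as the uniform saddle φ = 0.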
Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2009-01-01
Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
Ferroquadrupolar Order in the Spin-1 Bilinear-Biquadratic Model up to the Second Nearest Neighbor
Pires, A. S. T.
2017-10-01
We have studied ferroquadrupolar phases of the S = 1 Heisenberg model with bilinear and biquadratic exchange interactions on the square lattice up to the second nearest neighbor, using the SU(3) Schwinger boson formalism in a mean-field approximation. This technique is very convenient for treating nematic order: it uses the fundamental representation of the SU(N) group instead of SU(2) and is thus designed to capture spin-quadrupolar order in addition to dipolar magnetic order. We also present quadrupole structure factors that can be measured in future experiments. Our calculations may have implications for the study of iron-based superconductors.
Guerra, João Carlos de Oliveira
2013-08-01
Additive physical properties of double-stranded DNA polymers have been expanded in terms of 8 irreducible parameters. This provides consistency relations among the corresponding 10 duplex dimer contributions. To allow for oligomer analysis, end parameters were often added, which introduces extra degrees of freedom beyond the aforementioned parameters. Statistical mechanics approaches were then connected to the nearest-neighbor (NN) approach in the framework of the two-state model. Ad hoc end effects were thus (wrongly) correlated with nucleation phenomena, and this led to criticism of their role in NN modeling. With this motivation, a new NN model is proposed that accommodates the nucleation free energies. The model relates the nucleation free energy to the mean composition of the chain and makes it possible to obtain a good estimate for the free energy associated only with the Watson-Crick base pairings. Copyright © 2013 Wiley Periodicals, Inc.
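The additive NN bookkeeping described above, an initiation (nucleation) term plus a sum of duplex dimer contributions, can be sketched in a few lines. The dimer and initiation values below are placeholders chosen for illustration only; they are not the model's fitted parameters.

```python
# Illustrative nearest-neighbor (NN) free-energy sum for a DNA duplex.
# The dimer values below are PLACEHOLDERS, not a published unified NN set.
NN_DG = {  # kcal/mol at 37 C, illustrative only
    "AA": -1.0, "AT": -0.9, "AG": -1.3, "AC": -1.4,
    "TA": -0.6, "TT": -1.0, "TG": -1.4, "TC": -1.3,
    "GA": -1.3, "GT": -1.4, "GG": -1.8, "GC": -2.2,
    "CA": -1.4, "CT": -1.3, "CG": -2.1, "CC": -1.8,
}
INIT_DG = 2.0  # duplex-initiation (nucleation) penalty, illustrative

def duplex_dG(seq):
    """Total free energy: initiation term + sum over NN dimer steps."""
    return INIT_DG + sum(NN_DG[seq[i:i + 2]] for i in range(len(seq) - 1))

dg = duplex_dG("ATGC")
```

For the sequence ATGC this gives ΔG = 2.0 − 0.9 − 1.4 − 2.2 = −2.5 in the placeholder units; the paper's point is precisely how the initiation term should be tied to chain composition rather than treated as an ad hoc end effect.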
Ordering in the quenched two-dimensional axial next-nearest-neighbor Ising model
International Nuclear Information System (INIS)
Hassold, G.N.; Srolovitz, D.J.
1988-01-01
Monte Carlo simulations of ordering in the two-dimensional axial next-nearest-neighbor Ising model following a quench were performed using nonconserved dynamics for a wide range of frustration parameters, κ, and temperatures. It was found that in quenches from T >> T_c to T < T_c the ferromagnetic ordered domains grow with t^(1/2) kinetics. Similar results are found for quenches at κ≥1, where the ordered structure is striped. However, for intermediate frustration the quenched system fails to reach the equilibrium phase (i.e., the striped phase). Quenches to higher temperatures show the presence of a finite glass-transition temperature. Discontinuous changes in the value of the frustration parameter from the ferromagnetic to the striped-phase region of the phase diagram at low temperature yield a phase change which occurs via classical nucleation and growth. A simple energetic growth model is proposed which accounts for all of the temperatures at which the ordering kinetics undergoes transitions
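A minimal nonconserved (single-spin-flip Metropolis) quench of an ANNNI-type Hamiltonian, H = −J Σ_nn s_i s_j + κJ Σ_axial s_i s_{i+2}, can be sketched as below. The lattice size, κ, and temperature are illustrative, and 50 sweeps only illustrate the initial relaxation, not the long-time kinetics studied in the paper.

```python
import numpy as np

L, J, kappa, T = 16, 1.0, 0.6, 0.5   # illustrative parameters
rng = np.random.default_rng(1)
s = rng.choice([-1, 1], size=(L, L))  # random start = quench from T -> infinity

def dE(s, i, j):
    """Energy change for flipping spin (i, j): ferromagnetic nearest-neighbor
    coupling J, competing antiferromagnetic next-nearest-neighbor coupling
    kappa*J along the axial (here: row) direction."""
    nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    ax = s[i, (j + 2) % L] + s[i, (j - 2) % L]
    return 2 * s[i, j] * (J * nn - kappa * J * ax)

def sweep(s):
    """One Monte Carlo sweep of single-spin-flip Metropolis (nonconserved)."""
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        d = dE(s, i, j)
        if d <= 0 or rng.random() < np.exp(-d / T):
            s[i, j] = -s[i, j]

def energy(s):
    """Total energy, each bond counted once."""
    e = -J * np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))
    e += kappa * J * np.sum(s * np.roll(s, 2, 1))
    return e

e0 = energy(s)
for _ in range(50):
    sweep(s)
# After the quench the energy drops sharply as ordered domains form.
```

In a production study one would track domain sizes versus Monte Carlo time to extract the growth exponent; here the energy drop simply confirms that the quench relaxes toward an ordered structure.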
Chin, Wen Cheong; Lee, Min Cherng; Yap, Grace Lee Ching
2016-01-01
High-frequency financial data modelling has become one of the important research areas in financial econometrics. However, possible structural breaks in volatile financial time series often trigger inconsistency issues in volatility estimation. In this study, we propose a structural-break, heavy-tailed heterogeneous autoregressive (HAR) volatility econometric model enhanced with jump-robust estimators. The breakpoints in the volatility are captured by dummy variables after detection by the Bai-Perron sequential multiple-breakpoint procedure. To further deal with possible abrupt jumps in the volatility, jump-robust volatility estimators are constructed using the nearest neighbor truncation approach, namely the minimum and median realized volatility. With the structural-break improvements in both the models and the volatility estimators, the empirical findings show that the modified HAR model provides the best-performing in-sample and out-of-sample forecast evaluations compared with the standard HAR models. Accurate volatility forecasts are directly relevant to risk management and investment portfolio analysis.
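The nearest neighbor truncation estimators named above (MinRV and MedRV, in the sense of Andersen, Dobrev and Schaumburg) are simple to state: take the minimum of adjacent absolute returns, or the median of adjacent triples, square, sum, and rescale. A single large jump then contributes to the plain realized variance but is discarded by both truncation estimators, as the sketch below shows on a toy return grid.

```python
import numpy as np

def min_rv(r):
    """MinRV: jump-robust realized variance from nearest-neighbor truncation,
    using the minimum of adjacent absolute returns."""
    r = np.abs(np.asarray(r, dtype=float))
    n = len(r)
    pairs = np.minimum(r[:-1], r[1:]) ** 2
    return (np.pi / (np.pi - 2)) * (n / (n - 1)) * pairs.sum()

def med_rv(r):
    """MedRV: jump-robust realized variance from the median of adjacent
    absolute-return triples."""
    r = np.abs(np.asarray(r, dtype=float))
    n = len(r)
    trips = np.median(np.column_stack([r[:-2], r[1:-1], r[2:]]), axis=1) ** 2
    return (np.pi / (6 - 4 * np.sqrt(3) + np.pi)) * (n / (n - 2)) * trips.sum()

r = np.full(390, 0.01)            # calm intraday return grid (toy data)
r_jump = r.copy()
r_jump[200] += 0.10               # one large price jump
rv, rv_jump = (r ** 2).sum(), (r_jump ** 2).sum()
# rv_jump inflates with the jump; min_rv and med_rv are unchanged, because a
# single outlier never survives a min over pairs or a median over triples.
```

The scaling constants π/(π − 2) and π/(6 − 4√3 + π) are the standard asymptotic factors that make the estimators consistent for integrated variance under Gaussian diffusive returns.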
Third nearest neighbor parameterized tight binding model for graphene nano-ribbons
Directory of Open Access Journals (Sweden)
Van-Truong Tran
2017-07-01
Full Text Available The existing tight binding models can very well reproduce the ab initio band structure of a 2D graphene sheet. For graphene nano-ribbons (GNRs), the current sets of tight binding parameters can successfully describe the semi-conducting behavior of all armchair GNRs. However, they still fail to reproduce accurately the slope of the bands, which is directly associated with the group velocity and the effective mass of electrons. In this work, both density functional theory and tight binding calculations were performed, and a new set of tight binding parameters up to the third nearest neighbors, including overlap terms, is introduced. The results obtained with this model are in excellent agreement with the predictions of density functional theory in most ribbon structures, even in the high-energy region. Moreover, this set can reproduce the electron-hole asymmetry manifested in density functional theory results. Relevant outcomes are also achieved for armchair ribbons of various widths as well as for zigzag structures, thus opening a route for multi-scale atomistic simulation of large systems that cannot be treated with density functional theory.
Third nearest neighbor parameterized tight binding model for graphene nano-ribbons
Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian
2017-07-01
The existing tight binding models can very well reproduce the ab initio band structure of a 2D graphene sheet. For graphene nano-ribbons (GNRs), the current sets of tight binding parameters can successfully describe the semi-conducting behavior of all armchair GNRs. However, they still fail to reproduce accurately the slope of the bands, which is directly associated with the group velocity and the effective mass of electrons. In this work, both density functional theory and tight binding calculations were performed, and a new set of tight binding parameters up to the third nearest neighbors, including overlap terms, is introduced. The results obtained with this model are in excellent agreement with the predictions of density functional theory in most ribbon structures, even in the high-energy region. Moreover, this set can reproduce the electron-hole asymmetry manifested in density functional theory results. Relevant outcomes are also achieved for armchair ribbons of various widths as well as for zigzag structures, thus opening a route for multi-scale atomistic simulation of large systems that cannot be treated with density functional theory.
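For orientation, the simplest first-nearest-neighbor tight binding model of the 2D graphene sheet, which the third-nearest-neighbor parameter set above refines, has the closed-form dispersion E(k) = ±t|f(k)| with f(k) = 1 + e^{ik·a1} + e^{ik·a2}. The sketch below uses the commonly quoted t ≈ 2.7 eV and a unit lattice constant; it is not the paper's 3NN parameterization.

```python
import numpy as np

t = 2.7   # first-nearest-neighbor hopping in eV (commonly quoted value)
a = 1.0   # lattice constant, set to 1 for illustration
a1 = np.array([1.5, np.sqrt(3) / 2]) * a    # honeycomb lattice vectors
a2 = np.array([1.5, -np.sqrt(3) / 2]) * a

def bands(k):
    """Two pi-band energies of the first-NN tight-binding model: E = +/- t|f(k)|."""
    f = 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
    return np.array([-t * abs(f), t * abs(f)])

gamma = np.zeros(2)
K = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])
print(bands(gamma))        # +/- 3t = +/- 8.1 eV at the Gamma point
print(abs(bands(K)[0]))    # ~0: the bands touch at the Dirac point K
```

This first-NN model is exactly electron-hole symmetric (E → −E); reproducing the asymmetry seen in DFT is precisely what requires the third-nearest-neighbor hoppings and overlap terms introduced in the paper.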
Jay M. Ver Hoef; Hailemariam Temesgen; Sergio Gómez
2013-01-01
Forest surveys provide critical information for many diverse interests. Data are often collected from samples, and from these samples, maps of resources and estimates of areal totals or averages are required. In this paper, two approaches for mapping and estimating totals, the spatial linear model (SLM) and k-NN (k-Nearest Neighbor), are compared theoretically,...
Jurčišinová, E.; Jurčišin, M.
2018-02-01
The influence of the next-nearest-neighbor interaction on the properties of geometrically frustrated antiferromagnetic systems is investigated in the framework of the exactly solvable antiferromagnetic spin-1/2 Ising model in an external magnetic field on the square-kagome recursive lattice, where the next-nearest-neighbor interaction is supposed between sites within each elementary square of the lattice. The thermodynamic properties of the model are investigated in detail, and it is shown that the competition between the nearest-neighbor antiferromagnetic interaction and the next-nearest-neighbor ferromagnetic interaction changes the properties of the single-point ground states but does not change the frustrated character of the basic model. On the other hand, the presence of the antiferromagnetic next-nearest-neighbor interaction leads to the enhancement of frustration effects, with the formation of additional plateau and single-point ground states at low temperatures. Exact expressions for the magnetizations and residual entropies of all ground states of the model are found. It is shown that the model exhibits various ground states with the same value of magnetization but different macroscopic degeneracies, as well as ground states with different values of magnetization but the same value of the residual entropy. The specific heat capacity is investigated, and it is shown that the model exhibits a Schottky-type anomaly in the vicinity of each single-point ground-state value of the magnetic field. The formation of a field-induced double-peak structure of the specific heat capacity at low temperatures is demonstrated, and it is shown that its very existence is directly related to the presence of highly macroscopically degenerate single-point ground states in the model.
Energy Technology Data Exchange (ETDEWEB)
Gong, Longyan, E-mail: lygong@njupt.edu.cn [Information Physics Research Center and Department of Applied Physics, Nanjing University of Posts and Telecommunications, Nanjing, 210003 (China); Institute of Signal Processing and Transmission, Nanjing University of Posts and Telecommunications, Nanjing, 210003 (China); National Laboratory of Solid State Microstructures, Nanjing University, Nanjing 210093 (China); Feng, Yan; Ding, Yougen [Information Physics Research Center and Department of Applied Physics, Nanjing University of Posts and Telecommunications, Nanjing, 210003 (China); Institute of Signal Processing and Transmission, Nanjing University of Posts and Telecommunications, Nanjing, 210003 (China)
2017-02-12
Highlights: • Quasiperiodic lattice models with next-nearest-neighbor hopping are studied. • Shannon information entropies are used to reflect state localization properties. • Phase diagrams are obtained for the inverse bronze and golden means, respectively. • Our studies present a more complete picture than existing works. - Abstract: We explore the reduced relative Shannon information entropies SR for a quasiperiodic lattice model with nearest- and next-nearest-neighbor hopping, where an irrational number enters the mathematical expression of the incommensurate on-site potentials. Based on SR, we unveil the phase diagrams for two irrationalities, the inverse bronze mean and the inverse golden mean. The corresponding phase diagrams include regions of a purely localized phase, a purely delocalized phase, a purely critical phase, and regions with mobility edges. The boundaries of the different regions depend on the value of the irrational number. These studies present a more complete picture than existing works.
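The entropy diagnostic can be illustrated on a minimal version of such a model: an Aubry-André-like chain with nearest- and next-nearest-neighbor hopping, using the plain Shannon entropy of eigenstate probabilities (the paper's SR is a reduced relative entropy, which this sketch does not reproduce). The parameters, open boundaries, and the entropy contrast between weak and strong potential are all illustrative.

```python
import numpy as np

N, t1, t2 = 200, 1.0, 0.2          # sites, NN hopping, NNN hopping
beta = 2 / (1 + np.sqrt(5))        # inverse golden mean

def shannon_entropies(V):
    """Shannon entropy S = -sum |psi_n|^2 ln |psi_n|^2 of every eigenstate of
    the chain with incommensurate potential V*cos(2*pi*beta*n), open ends."""
    n = np.arange(N)
    H = (np.diag(V * np.cos(2 * np.pi * beta * n))
         + np.diag(np.full(N - 1, t1), 1) + np.diag(np.full(N - 1, t1), -1)
         + np.diag(np.full(N - 2, t2), 2) + np.diag(np.full(N - 2, t2), -2))
    _, vecs = np.linalg.eigh(H)
    p = vecs ** 2                              # probability profiles, columns
    return -(p * np.log(p + 1e-300)).sum(axis=0)

S_ext = shannon_entropies(0.5).mean()   # weak potential: extended states
S_loc = shannon_entropies(10.0).mean()  # strong potential: localized states
```

Extended states spread over the whole chain, so their entropy approaches ln N, while localized states concentrate on a few sites and give a much smaller value; scanning this contrast over (V, t2) is the mechanics behind a phase diagram of the kind reported above.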
Wang, Yujie; Wang, Zhen; Wang, Yanli; Liu, Taigang; Zhang, Wenbing
2018-01-01
The thermodynamic and kinetic parameters of an RNA base pair with different nearest and next-nearest neighbors were obtained through long-time molecular dynamics simulation of the opening-closing switching process of the base pair near its melting temperature. The results indicate that the thermodynamic parameters of the GC base pair depend on the nearest-neighbor base pair, while the next-nearest-neighbor base pair has little effect, which validates the nearest-neighbor model. The closing and opening rates of the GC base pair also showed nearest-neighbor dependences. At a given temperature, the closing and opening rates of the GC pair with nearest neighbor AU are larger than those with nearest neighbor GC, and the next-nearest neighbor plays little role. The free energy landscape of the GC base pair with nearest neighbor GC is rougher than that with nearest neighbor AU.
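Opening and closing rates of the kind extracted from such trajectories can be read off a two-state record of the base pair (1 = closed, 0 = open) via mean dwell times. The sketch below is a minimal version of that bookkeeping on a toy trajectory and ignores the sampling subtleties of a real MD analysis.

```python
import numpy as np

def dwell_rates(traj, dt):
    """Estimate (opening rate, closing rate) from a two-state trajectory
    (0 = open, 1 = closed) sampled every dt: rate = 1 / mean dwell time."""
    traj = np.asarray(traj)
    change = np.flatnonzero(np.diff(traj)) + 1      # state-change indices
    bounds = np.concatenate(([0], change, [len(traj)]))
    dwells = np.diff(bounds)                        # lengths of constant runs
    states = traj[bounds[:-1]]                      # state of each run
    t_closed = dwells[states == 1].mean() * dt
    t_open = dwells[states == 0].mean() * dt
    return 1.0 / t_closed, 1.0 / t_open

# Toy trajectory: closed for 4 steps, open 2, closed 2, open 4.
traj = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0]
k_open, k_close = dwell_rates(traj, dt=1.0)   # both 1/3 for this toy record
```

Comparing such rate estimates across sequences with different nearest and next-nearest neighbors is what yields the neighbor dependences reported above.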
Directory of Open Access Journals (Sweden)
J. Faradmal
2016-01-01
Full Text Available Introduction & Objective: The Cox model is a common method for estimating survival, and the validity of its results depends on the proportional hazards assumption. K-nearest neighbor is a nonparametric method for estimating survival probability in heterogeneous communities. The purpose of this study was to compare the performance of the k-nearest neighbor method (K-NN) with the Cox model. Materials & Methods: This retrospective cohort study was conducted in Hamadan Province on 475 patients who had undergone kidney transplantation from 1994 to 2011. Data were extracted from patients' medical records using a checklist. The time between kidney transplantation and rejection was considered the survival time. The Cox model and the k-nearest neighbor method were used for data modeling. The Brier score prediction error was used to compare the performance of the models. Results: Out of 475 transplantations, 55 episodes of rejection occurred. The 5-, 10-, and 15-year survival rates of transplantation were 91.70%, 84.90%, and 74.50%, respectively. The optimal number of neighbors, chosen by cross-validation, was 45. Cumulative Brier scores of the k-NN algorithm for t = 5, 10 and 15 years were 0.003, 0.006 and 0.007, respectively. Cumulative Brier scores of the Cox model for t = 5, 10 and 15 years were 0.036, 0.058 and 0.058, respectively. The prediction error of the k-NN algorithm for t = 5, 10 and 15 years was less than that of the Cox model, showing that the k-NN method outperforms it. Conclusions: The results of this study show that the predictions of k-NN have higher accuracy than the Cox model when the sample size and the number of predictor variables are large. Sci J Hamadan Univ Med Sci. 2016; 22(4): 300-308
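The two ingredients being compared, a k-NN survival probability and the Brier score, can be sketched as below. This toy version ignores censoring (the study uses the censoring-adjusted Brier score) and runs on synthetic data; it only illustrates the mechanics.

```python
import numpy as np

def knn_surv_prob(x_new, X, time, k, t):
    """k-NN survival estimate: among the k nearest covariate neighbors,
    the fraction whose observed time exceeds t (censoring ignored here)."""
    d = np.linalg.norm(X - x_new, axis=1)
    nn = np.argsort(d)[:k]
    return np.mean(time[nn] > t)

def brier(surv_pred, time, t):
    """Brier score at horizon t, ignoring censoring: mean squared difference
    between the 0/1 survival status at t and the predicted probability."""
    status = (time > t).astype(float)
    return np.mean((status - surv_pred) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 1))                 # one synthetic covariate
time = 10 * X[:, 0] + rng.uniform(0, 1, 200)    # longer survival for larger x

preds = np.array([knn_surv_prob(x, X, time, k=15, t=5.0) for x in X])
bs = brier(preds, time, t=5.0)                  # small: predictions informative
```

A useless constant prediction of 0.5 scores exactly 0.25, so the much smaller score of the k-NN predictions on this toy data shows the metric doing its job of rewarding informative survival probabilities.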
International Nuclear Information System (INIS)
Wang, L.F.; Bai, L.Y.
2013-01-01
To improve the precision of quantitative structure-activity relationship (QSAR) modeling for aromatic carboxylic acid derivative insect repellents, a novel nonlinear combination forecast model was proposed integrating support vector regression (SVR) and K-nearest neighbor (KNN): first, search for the optimal kernel function and nonlinearly select molecular descriptors by the minimum-MSE rule using SVR; second, illuminate the effects of all descriptors on biological activity by multi-round enforced resistance-selection; third, construct sub-models from the predicted values of different KNNs; then obtain the optimal kernel and the corresponding retained sub-models through careful selection; finally, make predictions with the leave-one-out (LOO) method on the basis of the retained sub-models. Compared with previous widely used models, our work shows significant improvement in modeling performance, which demonstrates the superiority of the present combination forecast model. (author)
Directory of Open Access Journals (Sweden)
Nader Salari
Full Text Available Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained from combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strengths of each algorithm and to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results, which included arrays of the top-ranked features, were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on the optimum arrays of features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that
Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy
2014-01-01
Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained from combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strengths of each algorithm and to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results, which included arrays of the top-ranked features, were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on the optimum arrays of features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the
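The pipeline above (evolve feature subsets with a genetic algorithm, score each subset with a k-NN classifier) can be sketched on toy data. Everything below, the data, the GA settings, and the plain leave-one-out k-NN fitness, is an illustrative simplification of the hybrid model, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 informative features among 10, two classes.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 10))
X[:, 0] += 2.5 * y
X[:, 1] -= 2.5 * y

def knn_accuracy(X, y, k=5):
    """Leave-one-out accuracy of a plain k-NN majority vote."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)              # leave-one-out: exclude self
    nn = np.argsort(d, axis=1)[:, :k]
    votes = (y[nn].mean(axis=1) > 0.5).astype(int)
    return (votes == y).mean()

def ga_select(X, y, pop=20, gens=15):
    """Tiny genetic algorithm over feature bitmasks, k-NN accuracy as fitness."""
    nfeat = X.shape[1]
    P = rng.integers(0, 2, (pop, nfeat))
    for _ in range(gens):
        fit = np.array([knn_accuracy(X[:, m.astype(bool)], y)
                        if m.any() else 0.0 for m in P])
        P = P[np.argsort(fit)[::-1]]          # elitism: keep best half
        for i in range(pop // 2, pop):        # refill worst half
            a, b = P[rng.integers(pop // 2)], P[rng.integers(pop // 2)]
            cut = rng.integers(1, nfeat)      # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.integers(nfeat)] ^= 1   # one-bit mutation
            P[i] = child
    fit = np.array([knn_accuracy(X[:, m.astype(bool)], y)
                    if m.any() else 0.0 for m in P])
    return P[np.argmax(fit)], fit.max()

mask, acc = ga_select(X, y)   # best feature bitmask and its LOO accuracy
```

On this toy problem the evolved mask concentrates on the two informative features and the noise dimensions are largely pruned away, which is the intended synergy of the GA-plus-k-NN combination.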
Directory of Open Access Journals (Sweden)
Chao-Rong Chen
2017-02-01
Full Text Available This paper proposes a novel methodology for very short term forecasting of hourly global solar irradiance (GSI). The proposed methodology is based on meteorology data and is especially aimed at optimizing the operation of power generation from photovoltaic (PV) energy. The methodology is a combination of k-nearest neighbor (k-NN) algorithm modelling and an artificial neural network (ANN) model. The k-NN-ANN method is designed to forecast GSI 60 min ahead based on meteorology data for the target PV station, whose position is surrounded by eight other adjacent PV stations. The novelty of this method is taking into account the meteorology data. A set of GSI measurement samples was available from a PV station in Taiwan, which is used as test data. The method implements k-NN as a preprocessing technique prior to the ANN. For the k-NN-ANN model, the error statistical indicators are a mean absolute bias error (MABE) of 42 W/m2 and a root-mean-square error (RMSE) of 242 W/m2. The model's forecasts are then compared to measured data, and simulation results indicate that the k-NN-ANN-based model presented in this research can calculate hourly GSI with satisfactory accuracy.
Hole motion in the t-J and Hubbard models: Effect of a next-nearest-neighbor hopping
International Nuclear Information System (INIS)
Gagliano, E.; Bacci, S.; Dagotto, E.
1990-01-01
Using exact diagonalization techniques, we study one dynamical hole in the two-dimensional t-J and Hubbard models on a square lattice including a next-nearest-neighbor hopping t'. We present the phase diagram in the parameter space (J/t,t'/t), discussing the ground-state properties of the hole. At J=0, a crossing of levels exists at some value of t' separating a ferromagnetic from an antiferromagnetic ground state. For nonzero J, at least four different regions appear where the system behaves like an antiferromagnet or a (not fully saturated) ferromagnet. We study the quasiparticle behavior of the hole, showing that for small values of |t'| the previously presented string picture is still valid. We also find that, for a realistic set of parameters derived from the Cu-O Hamiltonian, the hole has momentum (π/2,π/2), suggesting an enhancement of the p-wave superconducting mode due to the second-neighbor interactions in the spin-bag picture. Results for the t-t'-U model are also discussed with conclusions similar to those of the t-t'-J model. In general we found that t'=0 is not a singular point of these models
Ver Hoef, Jay M; Temesgen, Hailemariam
2013-01-01
Forest surveys provide critical information for many diverse interests. Data are often collected from samples, and from these samples, maps of resources and estimates of areal totals or averages are required. In this paper, two approaches for mapping and estimating totals, the spatial linear model (SLM) and k-nearest neighbor (k-NN), are compared theoretically, through simulations, and as applied to real forestry data. While both methods have desirable properties, a review shows that the SLM has prediction-optimality properties and can be quite robust. Simulations of artificial populations and resamplings of real forestry data show that the SLM has smaller empirical root-mean-squared prediction errors (RMSPE) for a wide variety of data types, with generally less bias and better interval coverage than k-NN. These patterns held for both point predictions and for population totals or averages, with the SLM reducing RMSPE by 9% to 67% relative to some popular k-NN methods, and with the SLM also more robust to spatially imbalanced sampling. Estimating prediction standard errors remains a problem for k-NN predictors, despite recent attempts using model-based methods. Our conclusion is that the SLM should generally be preferred over k-NN when the goal is accurate mapping or estimation of population totals or averages.
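The RMSPE comparison can be illustrated on an artificial population. In this sketch an ordinary least-squares line stands in for the SLM (no spatial covariance term), and a plain k-NN regressor is the competitor; with a correctly specified trend the model-based predictor tends to win, mirroring the paper's qualitative finding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Artificial population: strong trend plus noise; sample 100 of 500 sites.
x = rng.uniform(0, 10, 500)
y = 2.0 + 2.0 * x + rng.normal(0, 0.5, 500)
train = rng.choice(500, 100, replace=False)
test = np.setdiff1d(np.arange(500), train)

def knn_predict(x0, k=10):
    """Plain k-NN regression: average of the k nearest sampled responses."""
    nn = np.argsort(np.abs(x[train] - x0))[:k]
    return y[train][nn].mean()

# Correctly specified linear model as a stand-in for the SLM.
beta = np.polyfit(x[train], y[train], 1)

rmspe_knn = np.sqrt(np.mean([(knn_predict(xi) - yi) ** 2
                             for xi, yi in zip(x[test], y[test])]))
rmspe_lm = np.sqrt(np.mean((np.polyval(beta, x[test]) - y[test]) ** 2))
```

The k-NN predictor pays a bias penalty near the boundary of the covariate range, where its neighborhoods are one-sided, while the linear model extrapolates the trend; this is one mechanism behind the RMSPE gap reported above.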
Directory of Open Access Journals (Sweden)
Weide Li
2017-05-01
Full Text Available Electric load forecasting plays an important role in electricity markets and power systems. Because electric load time series are complicated and nonlinear, it is very difficult to achieve satisfactory forecasting accuracy. In this paper, a hybrid model, Wavelet Denoising-Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EWKM), which combines k-Nearest Neighbor (KNN) and Extreme Learning Machine (ELM) based on a wavelet denoising technique, is proposed for short-term load forecasting. The proposed hybrid model first decomposes the time series into a low-frequency main signal and some detailed signals associated with high frequencies, then uses KNN to determine the independent and dependent variables from the low-frequency signal. Finally, ELM is used to capture the nonlinear relationship between these variables and obtain the final prediction of the electric load. Compared with three other models, Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EKM), Wavelet Denoising-Extreme Learning Machine (WKM), and Wavelet Denoising-Back Propagation Neural Network optimized by k-Nearest Neighbor Regression (WNNM), the model proposed in this paper improves accuracy efficiently. New South Wales is the economic powerhouse of Australia, so we use the proposed model to predict electric demand for that region, where accurate prediction is of significant practical value.
Como, F; Carnesecchi, E; Volani, S; Dorne, J L; Richardson, J; Bassan, A; Pavan, M; Benfenati, E
2017-01-01
Ecological risk assessment of plant protection products (PPPs) requires an understanding of both the toxicity and the extent of exposure to assess risks for a range of taxa of ecological importance, including target and non-target species. Non-target species such as honey bees (Apis mellifera), solitary bees and bumble bees are of utmost importance because of their vital ecological services as pollinators of wild plants and crops. To improve risk assessment of PPPs in bee species, computational models predicting the acute and chronic toxicity of a range of PPPs and contaminants can play a major role in providing structural and physico-chemical properties for the prioritisation of compounds of concern and future risk assessments. Over the last three decades, scientific advisory bodies and the research community have developed toxicological databases and quantitative structure-activity relationship (QSAR) models that are proving invaluable for predicting toxicity using historical data and reducing animal testing. This paper describes the development and validation of a k-Nearest Neighbor (k-NN) model using in-house software for the prediction of acute contact toxicity of pesticides on honey bees. Acute contact toxicity data were collected from different sources for 256 pesticides, which were divided into training and test sets. The k-NN models were validated and showed good predictive performance, with an accuracy of 70% for all compounds and of 65% for highly toxic compounds, suggesting that they might reliably predict the toxicity of structurally diverse pesticides and could be used to screen and prioritise new pesticides. Copyright © 2016 Elsevier Ltd. All rights reserved.
Frog sound identification using extended k-nearest neighbor classifier
Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati
2017-09-01
Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual-sharing-of-neighborhood concepts, with the aim of improving classification performance. It makes a prediction based on which training samples are the nearest neighbors of the testing sample and which consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted using Mel Frequency Cepstral Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, in all cases.
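The mutual-neighborhood idea behind EKNN, classify from the k nearest neighbors of the test sample plus the training points that would count the test sample among their own k nearest, can be sketched as below. This is an illustration of the concept on toy data, not the exact EKNN of the paper.

```python
import numpy as np

def eknn_neighbors(x0, X, k):
    """Extended neighborhood: the k nearest training points of x0, plus the
    training points that would count x0 among their own k nearest neighbors."""
    d0 = np.linalg.norm(X - x0, axis=1)
    nn = set(np.argsort(d0)[:k])
    for i in range(len(X)):
        di = np.linalg.norm(X - X[i], axis=1)
        di[i] = np.inf
        kth = np.partition(di, k - 1)[k - 1]   # X[i]'s kth-neighbor distance
        if d0[i] <= kth:                        # x0 falls in X[i]'s k-ball
            nn.add(i)
    return sorted(nn)

def eknn_classify(x0, X, y, k):
    """Majority vote over the extended neighborhood."""
    idx = eknn_neighbors(x0, X, k)
    vals, counts = np.unique(y[idx], return_counts=True)
    return vals[np.argmax(counts)]

# Two well-separated 1-D clusters as stand-ins for two species' features.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
y = np.array([0, 0, 0, 1, 1, 1])
label = eknn_classify(np.array([0.15]), X, y, k=2)   # falls in cluster 0
```

For this query the extended neighborhood picks up all three points of the left cluster (the third one because the query lies inside its own 2-neighborhood) and none of the right cluster, so the vote is unanimous.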
Dimensionality reduction with unsupervised nearest neighbors
Kramer, Oliver
2013-01-01
This book is devoted to a novel approach for dimensionality reduction based on the famous nearest neighbor method, a powerful classification and regression approach. It starts with an introduction to machine learning concepts and a real-world application from the energy domain. Then, unsupervised nearest neighbors (UNN) is introduced as an efficient iterative method for dimensionality reduction. Various UNN models are developed step by step, reaching from a simple iterative strategy for discrete latent spaces to a stochastic kernel-based algorithm for learning submanifolds with independent parameterizations. Extensions that allow the embedding of incomplete and noisy patterns are introduced. Various optimization approaches are compared, from evolutionary to swarm-based heuristics. Experimental comparisons to related methodologies, taking into account artificial test data sets and also real-world data, demonstrate the behavior of UNN in practical scenarios. The book contains numerous color figures to illustr...
Approximate Nearest Neighbor Queries among Parallel Segments
DEFF Research Database (Denmark)
Emiris, Ioannis Z.; Malamatos, Theocharis; Tsigaridas, Elias
2010-01-01
We develop a data structure for answering efficiently approximate nearest neighbor queries over a set of parallel segments in three dimensions. We connect this problem to approximate nearest neighbor searching under weight constraints and approximate nearest neighbor searching on historical data...
DEFF Research Database (Denmark)
Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune H.
2014-01-01
In combined PET/MR, attenuation correction (AC) is performed indirectly based on the available MR image information. Metal implant-induced susceptibility artifacts and subsequent signal voids challenge MR-based AC. Several papers acknowledge the problem in PET attenuation correction when dental...... artifacts are ignored, but none of them attempts to solve the problem. We propose a clinically feasible correction method which combines Active Shape Models (ASM) and k-Nearest-Neighbors (kNN) into a simple approach which finds and corrects the dental artifacts within the surface boundaries of the patient...... vector, and fills the artifact voxels with a value representing soft tissue. We tested the method using fourteen patients without artifacts and eighteen patients with dental artifacts of varying sizes within the anatomical surface of the head/neck region. Though the method wrongly filled a small volume...
One-Dimensional Fluids with Second Nearest-Neighbor Interactions
Fantoni, Riccardo; Santos, Andrés
2017-12-01
As is well known, one-dimensional systems with interactions restricted to first nearest neighbors admit a full analytically exact statistical-mechanical solution. This is essentially due to the fact that knowledge of the first nearest-neighbor probability distribution function, p_1(r), is enough to determine the structural and thermodynamic properties of the system. On the other hand, if the interaction between second nearest-neighbor particles is turned on, the analytically exact solution is lost. Not only is the knowledge of p_1(r) no longer sufficient, but even its determination becomes a complex many-body problem. In this work we systematically explore different approximate solutions for one-dimensional second nearest-neighbor fluid models. We apply those approximations to the square-well and the attractive two-step pair potentials and compare them with Monte Carlo simulations, finding excellent agreement.
Nearest Neighbor Queries in Road Networks
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Kolar, Jan; Pedersen, Torben Bach
2003-01-01
With wireless communications and geo-positioning being widely available, it becomes possible to offer new e-services that provide mobile users with information about other mobile objects. This paper concerns active, ordered k-nearest neighbor queries for query and data objects that are moving in ...... for the nearest neighbor search in the prototype is presented in detail. In addition, the paper reports on results from experiments with the prototype system....
Sznajd, J.
2016-12-01
The linear perturbation renormalization group (LPRG) is used to study the phase transition of the weakly coupled Ising chains with intrachain (J ) and interchain nearest-neighbor (J1) and next-nearest-neighbor (J2) interactions forming the triangular and rectangular lattices in a field. The phase diagrams with the frustration point at J2=-J1/2 for a rectangular lattice and J2=-J1 for a triangular lattice have been found. The LPRG calculations support the idea that the phase transition is always continuous except for the frustration point and is accompanied by a divergence of the specific heat. For the antiferromagnetic chains, the external field does not change substantially the shape of the phase diagram. The critical temperature is suppressed to zero according to the power law when approaching the frustration point with an exponent dependent on the value of the field.
Lectures on the nearest neighbor method
Biau, Gérard
2015-01-01
This text presents a wide-ranging and rigorous overview of nearest neighbor methods, one of the most important paradigms in machine learning. Now in one self-contained volume, this book systematically covers key statistical, probabilistic, combinatorial and geometric ideas for understanding, analyzing and developing nearest neighbor methods. Gérard Biau is a professor at Université Pierre et Marie Curie (Paris). Luc Devroye is a professor at the School of Computer Science at McGill University (Montreal).
Nearest Neighbor Algorithms for Pattern Classification
Barrios, J. O.
1972-01-01
A solution of the discrimination problem is considered by means of the minimum distance classifier, commonly referred to as the nearest neighbor (NN) rule. The NN rule is nonparametric, or distribution free, in the sense that it does not depend on any assumptions about the underlying statistics for its application. The k-NN rule is a procedure that assigns an observation vector z to a category F if most of the k nearby observations x_i are elements of F. The condensed nearest neighbor (CNN) rule may be used to reduce the size of the training set required to categorize new observations. The Bayes risk serves merely as a reference: the limit of excellence beyond which it is not possible to go. The risk of the NN rule is bounded below by the Bayes risk and above by twice the Bayes risk.
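The condensed nearest neighbor (CNN) rule mentioned above is usually formulated as Hart's iterative absorption procedure; a minimal sketch under that standard formulation follows (the 1972 report may differ in details, and the seeding choice here is arbitrary):

```python
import numpy as np

def condensed_nn(X, y):
    """Hart-style condensed nearest neighbor rule: keep a subset of the
    training set that still classifies every training point correctly
    with the 1-NN rule, absorbing misclassified points until stable."""
    keep = [0]                                   # seed with the first sample
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            # 1-NN prediction using only the kept (condensed) subset
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[int(np.argmin(d))]] != y[i]:   # misclassified -> absorb
                keep.append(i)
                changed = True
    return sorted(keep)
```

The condensed subset is consistent with the full training set by construction, which is exactly the property that lets it stand in for the full set at classification time.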
Interval valued fuzzy sets k-nearest neighbors classifier for finger vein recognition
Mukahar, Nordiana; Affendi Rosdi, Bakhtiar
2017-09-01
In nearest neighbor classification, fuzzy sets can be used to model the degree of membership of each instance to the classes of the problem. Although the fuzzy memberships can be set by analyzing local data around each instance, there may still be a lack of knowledge associated with assigning a single value to the membership. This is caused by the requirement of determining in advance two fixed parameters: k, in the definition of the initial membership values, and m, in the computation of the votes of neighbors. Thus, the two fixed parameters only allow membership to be expressed as a single value. To overcome this drawback, a new approach, interval valued fuzzy sets k-nearest neighbors (IVFKNN), is presented that incorporates interval valued fuzzy sets for computing the membership of each instance, allowing membership values to be defined by a lower bound and an upper bound with an interval length. The interval concept is introduced to assign membership to each instance in the training set, representing membership as an array of intervals, and the intervals are also considered in the computation of the votes. In order to assess the classification performance of the IVFKNN classifier, it is compared with competing classifiers, such as k-nearest neighbors (KNN) and fuzzy k-nearest neighbors (FKNN), in terms of classification accuracy on the publicly available Finger Vein USM (FV-USM) image database, which was collected from 123 volunteers. The experimental results demonstrate the strong performance of IVFKNN compared with the competing classifiers and show the best improvement in classification accuracy in all cases.
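For context, the single-valued fuzzy vote that IVFKNN generalizes can be sketched briefly, in the spirit of the classical FKNN rule: each of the k nearest neighbors contributes its class memberships, weighted by inverse distance with fuzzifier m. The function name and interface are illustrative, not from the paper.

```python
import numpy as np

def fknn_predict(X_train, memberships, x, k=3, m=2):
    """Classical-style fuzzy k-NN vote: memberships is an (n, n_classes)
    array of per-instance class memberships; the query's membership is
    the inverse-distance-weighted average over its k nearest neighbors,
    with weights d^(-2/(m-1)) as set by the fuzzifier m."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nearest], 1e-12) ** (2.0 / (m - 1))
    # Class membership of x = weighted average of neighbor memberships
    u = (w[:, None] * memberships[nearest]).sum(axis=0) / w.sum()
    return u  # one fuzzy membership value per class
```

IVFKNN's contribution, per the abstract, is to replace each single membership value here with an interval (lower and upper bound), so that the fixed k and m no longer pin the membership to one number.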
Approximation result toward nearest neighbor heuristic
Directory of Open Access Journals (Sweden)
Monnot Jérôme
2002-01-01
Full Text Available In this paper, we revisit the famous heuristic called nearest neighbor (NN) for the traveling salesman problem under maximization and minimization goals. We deal with variants where the edge costs belong to the interval [a; ta] for a > 0 and t > 1, which certainly corresponds to practical cases of these problems. We prove that NN is a (t+1)/(2t)-approximation for maxTSP[a;ta] and a 2/(t+1)-approximation for minTSP[a;ta] under the standard performance ratio. Moreover, we show that these ratios are tight for some instances.
Directory of Open Access Journals (Sweden)
Zhen Liu
2017-11-01
Full Text Available The insulated gate bipolar transistor (IGBT) is a high-performance switching device used widely in power electronic systems. How to estimate the remaining useful life (RUL) of an IGBT to ensure the safety and reliability of the power electronics system is currently a challenging issue in the field of IGBT reliability. The aim of this paper is to develop a prognostic technique for estimating IGBTs' RUL. There is a need for an efficient prognostic algorithm that is able to support in-situ decision-making. In this paper, a novel prediction model with a complete structure based on the optimally pruned extreme learning machine (OPELM) and Volterra series is proposed to track the IGBT's degradation trace and estimate its RUL; we refer to this model as the Volterra k-nearest neighbor OPELM prediction (VKOPP) model. This model uses the minimum entropy rate method and Volterra series to reconstruct the phase space for IGBTs' ageing samples, and a new weight update algorithm, which can effectively reduce the influence of outliers and noise, is utilized to establish the VKOPP network; then a combination of the k-nearest neighbor method (KNN) and the least squares estimation (LSE) method is used to calculate the output weights of OPELM and predict the RUL of the IGBT. The prognostic results show that the proposed approach can predict the RUL of IGBT modules with small error and achieve higher prediction precision and lower time cost than some classic prediction approaches.
Evolving edited k-nearest neighbor classifiers.
Gil-Pita, Roberto; Yao, Xin
2008-12-01
The k-nearest neighbor method is a classifier based on the evaluation of the distances to each pattern in the training set. The edited version of this method applies the classifier with a subset of the complete training set, in which some of the training patterns are excluded in order to reduce the classification error rate. In recent works, genetic algorithms have been successfully applied to determine which patterns must be included in the edited subset. In this paper we propose a novel implementation of a genetic algorithm for designing edited k-nearest neighbor classifiers. It includes the definition of a novel mean-square-error-based fitness function, a novel clustered crossover technique, and a fast smart mutation scheme. In order to evaluate the performance of the proposed method, results using the breast cancer database, the diabetes database and the letter recognition database from the UCI machine learning benchmark repository are included. Both error rate and computational cost have been considered in the analysis. The obtained results show the improvement achieved by the proposed editing method.
Text Categorization Using Weight Adjusted k-Nearest Neighbor Classification
National Research Council Canada - National Science Library
Han, Euihong; Karypis, George; Kumar, Vipin
1999-01-01
.... The authors present a nearest neighbor classification scheme for text categorization in which the importance of discriminating words is learned using mutual information and weight adjustment techniques...
Introduction to machine learning: k-nearest neighbors.
Zhang, Zhongheng
2016-06-01
Machine learning techniques have been widely used in many scientific fields, but their use in the medical literature is limited, partly because of technical difficulties. k-nearest neighbors (kNN) is a simple method of machine learning. This article introduces some basic ideas underlying the kNN algorithm and then focuses on how to perform kNN modeling with R. The dataset should be prepared before running the knn() function in R. After predicting the outcome with the kNN algorithm, the diagnostic performance of the model should be checked. Average accuracy is the most widely used statistic to reflect the performance of the kNN algorithm. Factors such as the k value, the distance calculation and the choice of appropriate predictors all have a significant impact on model performance.
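The article demonstrates the workflow with R's knn(); the same steps (prepare the data, predict with a majority vote, measure average accuracy) can be sketched language-agnostically. Here is a hand-rolled Python version, kept minimal for illustration:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-NN: for each test point, take a majority vote among
    the labels of the k closest training samples (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(d)[:k]
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

def average_accuracy(y_true, y_pred):
    """The accuracy statistic the article recommends checking."""
    return float(np.mean(y_true == y_pred))
```

As the abstract notes, the choice of k, the distance function, and the predictors all change the outcome; rerunning knn_predict with different k values on a held-out set is the simplest way to see this.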
Implementation of Nearest Neighbor using HSV to Identify Skin Disease
Gerhana, Y. A.; Zulfikar, W. B.; Ramdani, A. H.; Ramdhani, M. A.
2018-01-01
Today, Android is one of the most widely used operating systems in the world. Most Android devices have a camera that can capture an image, and this feature can be exploited to identify skin disease. Skin disease is a health problem caused by bacteria, fungi, and viruses, and its symptoms are usually visible. In this work, the symptoms captured as an image contain HSV values in every pixel. The HSV values can be extracted and used to calculate Euclidean distances. These values are compared using the nearest neighbor algorithm to find the closest match between the test image and the training images, which decides the class label, i.e., the type of skin disease. The testing results show that 166 of 200 cases (about 83%) are identified accurately. Factors that influence the result of the classification model include the number of training images and the quality of the Android device's camera.
Multiple k Nearest Neighbor Query Processing in Spatial Network Databases
DEFF Research Database (Denmark)
Xuegang, Huang; Jensen, Christian Søndergaard; Saltenis, Simonas
2006-01-01
This paper concerns the efficient processing of multiple k nearest neighbor queries in a road-network setting. The assumed setting covers a range of scenarios such as the one where a large population of mobile service users that are constrained to a road network issue nearest-neighbor queries...... for points of interest that are accessible via the road network. Given multiple k nearest neighbor queries, the paper proposes progressive techniques that selectively cache query results in main memory and subsequently reuse these for query processing. The paper initially proposes techniques for the case...... neighbor query processing....
Nearest-neighbor interactions, habitat fragmentation, and the persistence of host-pathogen systems.
Wodarz, Dominik; Sun, Zhiying; Lau, John W; Komarova, Natalia L
2013-09-01
Spatial interactions are known to promote stability and persistence in enemy-victim interactions if instability and extinction occur in well-mixed settings. We investigate the effect of spatial interactions in the opposite case, where populations can persist in well-mixed systems. A stochastic agent-based model of host-pathogen dynamics is considered that describes nearest-neighbor interactions in an undivided habitat. Contrary to previous notions, we find that in this setting, spatial interactions in fact promote extinction. The reason is that, in contrast to the mass-action system, the outcome of the nearest-neighbor model is governed by dynamics in small "local neighborhoods." This is an abstraction that describes interactions in a minimal grid consisting of an individual plus its nearest neighbors. The small size of this characteristic scale accounts for the higher extinction probabilities. Hence, nearest-neighbor interactions in a continuous habitat lead to outcomes reminiscent of a fragmented habitat, which is underlined further with a metapopulation model that explicitly assumes habitat fragmentation. Beyond host-pathogen dynamics, axiomatic modeling shows that our results hold for generic enemy-victim interactions under specified assumptions. These results are used to interpret a set of published experiments that provide a first step toward model testing and are discussed in the context of the literature.
K-Nearest Neighbor Algorithm Optimization in Text Categorization
Chen, Shufeng
2018-01-01
The K-Nearest Neighbor (KNN) classification algorithm is one of the simplest methods of data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has some shortcomings, such as a large amount of sample computation and a strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on the CURE algorithm is proposed. On this basis, a quick algorithm, QKNN (quick k-nearest neighbor), is presented to find the k nearest neighbor samples, which greatly reduces the similarity calculation. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples, improving the performance of the algorithm.
River Flow Prediction Using the Nearest Neighbor Probabilistic Ensemble Method
Directory of Open Access Journals (Sweden)
H. Sanikhani
2016-02-01
Full Text Available Introduction: In recent years, researchers have become interested in probabilistic forecasting of hydrologic variables such as river flow. A probabilistic approach aims at quantifying the prediction reliability through a probability distribution function or a prediction interval for the unknown future value. The evaluation of the uncertainty associated with the forecast is seen as fundamental information, not only to correctly assess the prediction, but also to compare forecasts from different methods and to evaluate actions and decisions conditionally on the expected values. Several probabilistic approaches have been proposed in the literature, including (1) methods that use resampling techniques to assess parameter and model uncertainty, such as the Metropolis algorithm or the Generalized Likelihood Uncertainty Estimation (GLUE) methodology for an application to runoff prediction, (2) methods based on processing the forecast errors of past data to produce the probability distributions of future values, and (3) methods that evaluate how the uncertainty propagates from the rainfall forecast to the river discharge prediction, such as the Bayesian forecasting system. Materials and Methods: In this study, two different probabilistic methods are used for river flow prediction, and the uncertainty related to the forecast is quantified. One approach is based on linear predictors; in the other, nearest neighbors are used. The nonlinear probabilistic ensemble can be used for nonlinear time series analysis using locally linear predictors, while the NNPE utilizes a method adapted for one-step-ahead nearest neighbor prediction. In this regard, twelve years of daily river discharge from the Dizaj and Mashin stations on the Baranduz-Chay basin in West Azerbaijan province and the Zard-River basin in Khuzestan province were used, respectively. The first six years of data were applied for fitting the model, the next three years were used for calibration, and the remaining three years were utilized for testing the models
k-Nearest Neighbors Algorithm in Profiling Power Analysis Attacks
Directory of Open Access Journals (Sweden)
Z. Martinasek
2016-06-01
Full Text Available Power analysis presents a typical example of successful attacks against trusted cryptographic devices such as RFID (Radio-Frequency IDentification) tags and contact smart cards. In recent years, the cryptographic community has explored new approaches in power analysis based on machine learning models such as the Support Vector Machine (SVM), Random Forest (RF) and Multi-Layer Perceptron (MLP). In this paper, we made an extensive comparison of machine learning algorithms in power analysis. For this purpose, we implemented a verification program that always chooses the optimal settings of the individual machine learning models in order to obtain the best classification accuracy. In our research, we used three datasets, the first containing the power traces of an unprotected AES (Advanced Encryption Standard) implementation. The second and third datasets were created independently from publicly available power traces corresponding to a masked AES implementation (DPA Contest v4). The obtained results revealed some interesting facts, namely that an elementary k-NN (k-Nearest Neighbors) algorithm, which has not been commonly used in power analysis yet, shows great application potential in practice.
Secure Nearest Neighbor Query on Crowd-Sensing Data
Directory of Open Access Journals (Sweden)
Ke Cheng
2016-09-01
Full Text Available Nearest neighbor queries are fundamental in location-based services, and secure nearest neighbor queries mainly focus on how to securely and quickly retrieve the nearest neighbor from the outsourced cloud server. However, the previous big data system structure has changed because of crowd-sensing data. On the one hand, the sensing data terminals acting as data owners are numerous and mutually distrustful, while, on the other hand, in most cases the terminals find it difficult to perform many security operations due to computation and storage capability constraints. In light of the Multi Owners and Multi Users (MOMU) situation in the crowd-sensing data cloud environment, this paper presents a secure nearest neighbor query scheme based on a proxy server architecture, which is constructed from protocols for secure two-party computation and a secure Voronoi diagram algorithm. It not only preserves data confidentiality and query privacy but also effectively resists collusion between the cloud server and the data owners or users. Finally, extensive theoretical and experimental evaluations are presented to show that our proposed scheme achieves a superior balance between security and query performance compared to other schemes.
Local-Nearest-Neighbors-Based Feature Weighting for Gene Selection.
An, Shuai; Wang, Jun; Wei, Jinmao
2017-06-07
Selecting functional genes is essential for analyzing microarray data. Among many available feature (gene) selection approaches, the ones on the basis of the large margin nearest neighbor receive more attention due to their low computational costs and high accuracies in analyzing the high-dimensional data. Yet there still exist some problems that hamper the existing approaches in sifting real target genes, including selecting erroneous nearest neighbors, high sensitivity to irrelevant genes, and inappropriate evaluation criteria. Previous pioneer works have partly addressed some of the problems, but none of them are capable of solving these problems simultaneously. In this paper, we propose a new local-nearest-neighbors-based feature weighting approach to alleviate the above problems. The proposed approach is based on the trick of locally minimizing the within-class distances and maximizing the between-class distances with the k nearest neighbors rule. We further define a feature weight vector, and construct it by minimizing the cost function with a regularization term. The proposed approach can be applied naturally to the multi-class problems and does not require extra modification. Experimental results on the UCI and the open microarray data sets validate the effectiveness and efficiency of the new approach.
Nearest Neighbor Networks: clustering expression data based on gene neighborhoods
Directory of Open Access Journals (Sweden)
Olszewski Kellen L
2007-07-01
Full Text Available Abstract Background The availability of microarrays measuring thousands of genes simultaneously across hundreds of biological conditions represents an opportunity to understand both individual biological pathways and the integrated workings of the cell. However, translating this amount of data into biological insight remains a daunting task. An important initial step in the analysis of microarray data is clustering of genes with similar behavior. A number of classical techniques are commonly used to perform this task, particularly hierarchical and K-means clustering, and many novel approaches have been suggested recently. While these approaches are useful, they are not without drawbacks; these methods can find clusters in purely random data, and even clusters enriched for biological functions can be skewed towards a small number of processes (e.g., ribosomes). Results We developed Nearest Neighbor Networks (NNN), a graph-based algorithm to generate clusters of genes with similar expression profiles. This method produces clusters based on overlapping cliques within an interaction network generated from mutual nearest neighborhoods. This focus on nearest neighbors rather than on absolute distance measures allows us to capture clusters with high connectivity even when they are spatially separated, and requiring mutual nearest neighbors allows genes with no sufficiently similar partners to remain unclustered. We compared the clusters generated by NNN with those generated by eight other clustering methods. NNN was particularly successful at generating functionally coherent clusters with high precision, and these clusters generally represented a much broader selection of biological processes than those recovered by other methods. Conclusion The Nearest Neighbor Networks algorithm is a valuable clustering method that effectively groups genes that are likely to be functionally related. It is particularly attractive due to its simplicity, its success in the
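The mutual-nearest-neighbor construction at the heart of NNN can be illustrated in miniature. The sketch below links two items only when each lies among the other's k nearest neighbors and then returns connected components; the actual NNN algorithm clusters by overlapping cliques in this graph, so this is a simplification for intuition only.

```python
import numpy as np

def mutual_knn_clusters(X, k):
    """Build the mutual k-NN graph (edge i-j only if i is in j's k
    nearest neighbors AND j is in i's) and return its connected
    components. NNN proper uses overlapping cliques of this graph;
    components are a coarser, simpler stand-in."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(d, np.inf)                  # no self-neighbors
    nbrs = [set(np.argsort(d[i])[:k]) for i in range(n)]

    # Union-find over mutual-neighbor edges
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in nbrs[i]:
            if i in nbrs[j]:                     # mutual neighbors only
                parent[find(i)] = find(j)

    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())
```

The mutual requirement is what lets isolated points stay unclustered: a point with no reciprocated neighbor ends up as a singleton component rather than being forced into the nearest cluster.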
[Galaxy/quasar classification based on nearest neighbor method].
Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun
2011-09-01
With the wide application of high-quality CCDs in celestial spectrum imagery and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are pouring in like torrential rain. Therefore, to utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various kinds of noise. Therefore, recognizing these two types of spectra is a typical problem in automatic spectra classification. Furthermore, the utilized method, nearest neighbor, is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark in developing novel algorithms. For applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the superiority of NN is that this method does not need to be trained, which is useful in incremental learning and parallel computation in mass spectral data processing. In conclusion, the results of this work are helpful for studying galaxy and quasar spectra classification.
A new approach to very short term wind speed prediction using k-nearest neighbor classification
International Nuclear Information System (INIS)
Yesilbudak, Mehmet; Sagiroglu, Seref; Colak, Ilhami
2013-01-01
Highlights: ► Wind speed was predicted from n-tupled inputs using k-NN classification. ► The effects of input parameters, nearest neighbors and distance metrics were analyzed. ► Many useful and reasonable inferences were uncovered using the developed model. - Abstract: Wind energy is an inexhaustible energy source and wind power production has been growing rapidly in recent years. However, wind power has a non-schedulable nature due to wind speed variations. Hence, wind speed prediction is an indispensable requirement for power system operators. This paper predicts the wind speed parameter from n-tupled inputs using k-nearest neighbor (k-NN) classification and analyzes the effects of input parameters, nearest neighbors and distance metrics on wind speed prediction. The k-NN classification model was developed using object-oriented programming techniques and includes the Manhattan and Minkowski distance metrics in addition to the Euclidean distance metric, in contrast to much of the literature. The k-NN classification model which uses the wind direction, air temperature, atmospheric pressure and relative humidity parameters in a 4-tupled space achieved the best wind speed prediction for k = 5 with the Manhattan distance metric. By contrast, the k-NN classification model which uses the wind direction, air temperature and atmospheric pressure parameters in a 3-tupled space gave the worst wind speed prediction for k = 1 with the Minkowski distance metric.
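The distance metrics compared in the abstract differ only in the Minkowski exponent p (p = 1 gives Manhattan, p = 2 gives Euclidean). A minimal sketch follows; note the prediction step here is simplified to a mean over the k nearest tuples, whereas the paper treats prediction as classification over wind-speed classes.

```python
def minkowski(u, v, p=2):
    """Minkowski distance between two tuples: p=1 is Manhattan,
    p=2 is Euclidean; other p values interpolate/extrapolate."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def knn_regress(train, targets, query, k=5, p=1):
    """Predict a value (e.g. wind speed) as the mean target of the k
    training tuples closest to the query under the chosen metric.
    A simplified stand-in for the paper's class-based prediction."""
    order = sorted(range(len(train)),
                   key=lambda i: minkowski(train[i], query, p))
    nearest = order[:k]
    return sum(targets[i] for i in nearest) / k
```

Sweeping k and p on a validation split reproduces the kind of comparison the paper reports (best at k = 5 with Manhattan, worst at k = 1 with Minkowski, on their data).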
Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm
Directory of Open Access Journals (Sweden)
E. Parvinnia
2014-01-01
Full Text Available Electroencephalogram (EEG) signals are often used to diagnose diseases such as seizures, Alzheimer's disease, and schizophrenia. One main problem with the recorded EEG samples is that they are not equally reliable due to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue. It seems that using adaptive classifiers can be useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance nearest neighbor (WDNN) is applied to EEG signal classification to tackle this problem. This classification algorithm assigns a weight to each training sample to control its influence in classifying test samples. The weights of the training samples are used to find the nearest neighbor of an input query pattern. To assess the performance of this scheme, EEG signals of thirteen schizophrenic patients and eighteen normal subjects are analyzed for the classification of these two groups. Several features, including fractal dimension, band power and autoregressive (AR) model parameters, are extracted from the EEG signals. The classification results are evaluated using leave-one-subject-out cross validation for reliable estimation. The results indicate that the combination of WDNN and the selected features can significantly outperform the basic nearest neighbor method and the other methods proposed in the past for the classification of these two groups. Therefore, this method can be a complementary tool for specialists to distinguish schizophrenia disorder.
Information Retrieval Document Classified with K-Nearest Neighbor
Directory of Open Access Journals (Sweden)
Badruz Zaman
2016-01-01
Full Text Available Along with the rapid advancement of technology, the amount of information available is increasingly abundant. The aim of this study was to determine how to implement an information retrieval system for the classification of journals using cosine similarity and K-Nearest Neighbor (KNN). The data used were 160 documents with categories including Physical Sciences and Engineering, Life Sciences, Health Sciences, and Social Sciences and Humanities. The construction stage begins with text mining processing, weighting of each token using term frequency-inverse document frequency (TF-IDF), calculating the degree of similarity of each document using cosine similarity, and classification using K-Nearest Neighbor. Evaluation was done using 20 testing documents, with values of k = {37, 41, 43}. The evaluation shows that the system classifies documents most successfully at k = 43, with a precision of 0.501. The system test results showed that the 20 testing documents could be classified according to their actual categories.
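The pipeline described (TF-IDF weighting, cosine similarity, kNN vote) can be sketched compactly. The idf variant (log N/df) and the whitespace tokenizer below are common choices, not necessarily the ones used in the study.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weights per document as sparse dicts:
    tf = raw term count, idf = log(N / document frequency)."""
    N = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    return [{t: c * math.log(N / df[t]) for t, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_classify(train_vecs, labels, query_vec, k=3):
    """Label the query by majority vote among the k most similar docs."""
    sims = sorted(range(len(train_vecs)),
                  key=lambda i: cosine(train_vecs[i], query_vec),
                  reverse=True)[:k]
    return Counter(labels[i] for i in sims).most_common(1)[0][0]
```

In practice the query document should be weighted with the idf statistics learned from the training corpus; vectorizing train and query together, as the test below does for brevity, is a shortcut.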
Using K-Nearest Neighbor in Optical Character Recognition
Directory of Open Access Journals (Sweden)
Veronica Ong
2016-03-01
The growth of computer vision technology has aided society with various kinds of tasks. One of these is the ability to recognize text contained in an image, usually referred to as Optical Character Recognition (OCR). Many kinds of algorithms can be implemented in an OCR system; the K-Nearest Neighbor algorithm is one of them. This research aims to examine the process behind an OCR mechanism that uses the K-Nearest Neighbor algorithm, one of the most influential machine learning algorithms, and to find out how precise the algorithm is in an OCR program. To that end, a simple OCR program that classifies capital letters of the alphabet was built to produce and compare real results. This research yielded a maximum of 76.9% accuracy with 200 training samples per letter, and a set of reasons is given as to why the program reaches that level of accuracy.
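A minimal version of KNN-based character recognition treats each glyph as a flattened pixel vector and assigns a test glyph the label of its nearest training glyph(s). The 3x3 bitmaps below are hypothetical toy data; a real OCR training set uses many samples per letter, as in the 200-samples-per-letter setup described above.

```python
import numpy as np
from collections import Counter

# Toy 3x3 bitmaps for the letters 'I' and 'O' (hypothetical training data).
glyphs = {
    "I": [[0, 1, 0],
          [0, 1, 0],
          [0, 1, 0]],
    "O": [[1, 1, 1],
          [1, 0, 1],
          [1, 1, 1]],
}
X_train = np.array([np.ravel(g) for g in glyphs.values()], dtype=float)
y_train = list(glyphs.keys())

def ocr_knn(bitmap, k=1):
    """Classify a glyph bitmap by Euclidean k-NN over flattened pixels."""
    d = np.linalg.norm(X_train - np.ravel(bitmap).astype(float), axis=1)
    top = np.argsort(d)[:k]
    return Counter(y_train[i] for i in top).most_common(1)[0][0]

# A noisy 'I' with one flipped pixel is still closest to the 'I' template.
noisy_i = [[0, 1, 0],
           [1, 1, 0],
           [0, 1, 0]]
print(ocr_knn(noisy_i))  # I
```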
Central Carbon-Carbon Collisions at 4.2 A GeV/c and Nearest-Neighbor Spacing Distributions
Wazir, Z.; Fakhar-E-Alam, M.; Khan, S. A.; Amer, M. A. Rafih
2012-05-01
The experimental nearest-neighbor spacing distributions were compared with data simulated using random matrix theory (RMT) with the aid of the ultra-relativistic quantum molecular dynamics (UrQMD) model. The assessment reveals a primary level of multiplicity of secondary charged particles that might be linked with the onset of the region of central collisions. The results demonstrate the usefulness of nearest-neighbor spacing distributions at various multiplicities for detecting the region of central collisions.
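For context, the RMT benchmark here is the contrast between uncorrelated level statistics, P(s) = exp(-s), and the Wigner surmise, P(s) = (π/2) s exp(-πs²/4), which suppresses small spacings. A short sketch of computing normalized nearest-neighbor spacings (illustrative, not the paper's analysis):

```python
import numpy as np

def nn_spacings(levels):
    """Nearest-neighbor spacings of a level sequence, normalized to unit mean."""
    s = np.diff(np.sort(np.asarray(levels, dtype=float)))
    return s / s.mean()

rng = np.random.default_rng(0)
s = nn_spacings(rng.uniform(0.0, 1.0, 5000))

# Uncorrelated (Poisson) levels follow P(s) = exp(-s), so roughly
# 1 - exp(-0.25) ~ 22% of spacings fall below s = 0.25. Under the
# Wigner surmise P(s) = (pi/2) s exp(-pi s^2/4), level repulsion would
# push that fraction down to about 5%.
print(round(float((s < 0.25).mean()), 2))
```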
Designing lattice structures with maximal nearest-neighbor entanglement
International Nuclear Information System (INIS)
Navarro-Munoz, J C; Lopez-Sandoval, R; Garcia, M E
2009-01-01
In this paper, we study the numerical optimization of nearest-neighbor concurrence of bipartite one- and two-dimensional lattices, as well as non-bipartite two-dimensional lattices. These systems are described in the framework of a tight-binding Hamiltonian while the optimization of concurrence was performed using genetic algorithms. Our results show that the concurrence of the optimized lattice structures is considerably higher than that of non-optimized systems. In the case of one-dimensional chains, the concurrence increases dramatically when the system begins to dimerize, i.e., it undergoes a structural phase transition (Peierls distortion). This result is consistent with the idea that entanglement is maximal or shows a singularity near quantum phase transitions. Moreover, the optimization of concurrence in two-dimensional bipartite and non-bipartite lattices is achieved when the structures break into smaller subsystems, which are arranged in geometrically distinguishable configurations.
Enhanced Approximate Nearest Neighbor via Local Area Focused Search.
Energy Technology Data Exchange (ETDEWEB)
Gonzales, Antonio [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Blazier, Nicholas Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
Approximate Nearest Neighbor (ANN) algorithms are increasingly important in machine learning, data mining, and image processing applications. There is a large family of space-partitioning ANN algorithms, such as randomized KD-trees, that work well in practice but are limited by an exponential increase in similarity comparisons required to optimize recall. Additionally, they support only a small set of similarity metrics. We present Local Area Focused Search (LAFS), a method that enhances the way queries are performed using an existing ANN index. Instead of a single query, LAFS performs a number of smaller queries (each requiring fewer similarity comparisons) and focuses on a local neighborhood which is refined as candidates are identified. We show that our technique improves performance on several well-known datasets and is easily extended to general similarity metrics using kernel projection techniques.
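The general flavor of query-time neighborhood refinement can be illustrated with a greedy search over a precomputed k-NN graph: start from a seed candidate and repeatedly move to whichever of its graph neighbors is closer to the query. This is a generic sketch of local-neighborhood search, not the LAFS algorithm itself.

```python
import numpy as np

def build_knn_graph(X, k=2):
    """Brute-force k-NN graph: graph[i] lists the k nearest points to X[i]."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    return np.argsort(D, axis=1)[:, :k]

def local_search(X, graph, q, start=0):
    """Greedily refine a candidate by examining its graph neighborhood."""
    best = start
    best_d = np.linalg.norm(X[best] - q)
    improved = True
    while improved:
        improved = False
        for nb in graph[best]:
            d = np.linalg.norm(X[nb] - q)
            if d < best_d:
                best, best_d, improved = int(nb), d, True
    return best

X = np.arange(10, dtype=float)[:, None]   # points 0..9 on a line
graph = build_knn_graph(X, k=2)
print(local_search(X, graph, np.array([7.2])))  # walks 0 -> ... -> 7
```

Greedy graph search can stall in a local minimum on hard data; practical systems restart from several seeds or widen the explored neighborhood, which is the kind of refinement LAFS-style methods control.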
Directory of Open Access Journals (Sweden)
Cobaugh Christian W
2004-08-01
Background: A detailed understanding of an RNA's correct secondary and tertiary structure is crucial to understanding its function and mechanism in the cell. Free energy minimization with energy parameters based on the nearest-neighbor model, and comparative analysis, are the primary methods for predicting an RNA's secondary structure from its sequence. Version 3.1 of Mfold has been available since 1999. This version contains an expanded sequence dependence of energy parameters and the ability to incorporate coaxial stacking into free energy calculations. We test Mfold 3.1 by performing the largest and most phylogenetically diverse comparison of rRNA and tRNA structures predicted by comparative analysis and Mfold, and we use the results of our tests on 16S and 23S rRNA sequences to assess the improvement between Mfold 2.3 and Mfold 3.1. Results: The average prediction accuracy for a 16S or 23S rRNA sequence with Mfold 3.1 is 41%, while the prediction accuracies for the majority of 16S and 23S rRNA structures tested are between 20% and 60%, with some below 20%. The average prediction accuracy was 71% for 5S rRNA and 69% for tRNA, and the majority of the 5S rRNA and tRNA sequences have prediction accuracies greater than 60%. The prediction accuracy of 16S rRNA base pairs decreases exponentially as the number of nucleotides intervening between the 5' and 3' halves of the base pair increases. Conclusion: Our analysis indicates that the current set of nearest-neighbor energy parameters in conjunction with the Mfold folding algorithm is unable to consistently and reliably predict an RNA's correct secondary structure. For 16S or 23S rRNA structure prediction, Mfold 3.1 offers little improvement over Mfold 2.3. However, the nearest-neighbor energy parameters do work well for shorter RNA sequences such as tRNA or 5S rRNA, or for larger rRNAs when the contact distance between the base pairs is less than 100 nucleotides.
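The nearest-neighbor model referred to here scores a helix as a sum over adjacent base-pair stacks. A minimal sketch of that bookkeeping, using made-up stack energies (the real Turner-rule tables that Mfold uses are far larger and distinguish both strands and many loop contexts):

```python
# Hypothetical stack free energies (kcal/mol) keyed by the 5'->3'
# dinucleotide on one strand; these values are invented for illustration,
# not real nearest-neighbor parameters.
TOY_STACKS = {"GC": -3.4, "CG": -2.4, "CA": -2.1, "AU": -1.1, "UA": -1.3}

def nn_duplex_energy(seq, stacks=TOY_STACKS):
    """Sum nearest-neighbor stack terms over adjacent base pairs."""
    return sum(stacks[seq[i:i + 2]] for i in range(len(seq) - 1))

print(round(nn_duplex_energy("GCAU"), 2))  # GC + CA + AU = -6.6
```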
Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.
Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong
2016-02-01
Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built to cover more of the desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables from any type of hashing algorithm. Meanwhile, multiple-table search also lacks a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered by standard hashing techniques. To solve these problems, we first regard table construction as a selection problem over a set of candidate hash functions. With a graph representation of the function set, we propose an efficient solution that sequentially applies the normalized dominant set to find the most informative and independent hash functions for each table. To further reduce redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with higher weights on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme that enables fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate this scheme into multiple-table search using a fast yet reciprocal table lookup algorithm within the adaptive weighted Hamming radius. Both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both
Kenneth B. Pierce; Janet L. Ohmann; Michael C. Wimberly; Matthew J. Gregory; Jeremy S. Fried
2009-01-01
Land managers need consistent information about the geographic distribution of wildland fuels and forest structure over large areas to evaluate fire risk and plan fuel treatments. We compared spatial predictions for 12 fuel and forest structure variables across three regions in the western United States using gradient nearest neighbor (GNN) imputation, linear models (...
Zandvliet, Henricus J.W.
2015-01-01
We have derived, within the framework of a solid-on-solid model with anisotropic nearest-neighbor interactions, an exact expression for the free energy of an arbitrarily oriented step edge or boundary on a rectangular two-dimensional lattice. The full angular dependence of the step free energy allows
International Nuclear Information System (INIS)
Zvyagin, A.A.; Cheranovskii, V.O.
2009-01-01
A one-dimensional spin-1/2 model in which the alternation of the exchange interactions between neighboring spins is accompanied by the next-nearest-neighbor (NNN) spin exchange (zig-zag spin ladder with alternation) is studied. The thermodynamic characteristics of the model quantum spin chain are obtained in the mean-field-like approximation. Depending on the strength of the NNN interactions, the model manifests either the spin-gapped behavior of low-lying excitations at low magnetic fields, or ferrimagnetic ordering in the ground state with gapless low-lying excitations. The system undergoes second-order or first-order quantum phase transitions, governed by the external magnetic field, NNN coupling strength, and the degree of the alternation. Hence, NNN spin-spin interactions in a dimerized quantum spin chain can produce a spontaneous magnetization. On the other hand, for quantum spin chains with a spontaneous magnetization, caused by NNN spin-spin couplings, the alternation of nearest-neighbor (NN) exchange interactions can cause destruction of that magnetization and the onset of a spin gap for low-lying excitations. Alternating NN interactions produce a spin gap between two branches of low-energy excitations, and the NNN interactions yield asymmetry of the dispersion laws of those excitations, with possible minima corresponding to incommensurate structures in the spin chain
Quality and efficiency in high dimensional Nearest neighbor search
Tao, Yufei
2009-01-01
Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or ad hoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
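The Euclidean LSH baseline that the LSB-tree competes with can be sketched directly: each table hashes a point by several random projections quantized into buckets of width w, and a query inspects only the buckets it lands in. A minimal sketch (parameter choices here are arbitrary, not tuned as in rigorous-LSH):

```python
import numpy as np

class L2LSH:
    """Toy Euclidean LSH: n_tables hash tables, each keyed by n_bits
    quantized random projections with bucket width w."""
    def __init__(self, dim, n_tables=4, n_bits=3, w=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=(n_tables, n_bits, dim))
        self.b = rng.uniform(0.0, w, size=(n_tables, n_bits))
        self.w = w
        self.tables = [dict() for _ in range(n_tables)]

    def _key(self, t, x):
        return tuple(np.floor((self.a[t] @ x + self.b[t]) / self.w).astype(int))

    def index(self, X):
        self.X = X
        for i, x in enumerate(X):
            for t, table in enumerate(self.tables):
                table.setdefault(self._key(t, x), []).append(i)

    def query(self, q):
        cand = set()
        for t, table in enumerate(self.tables):
            cand.update(table.get(self._key(t, q), []))
        # rank the (hopefully few) candidate points exactly
        return sorted(cand, key=lambda i: np.linalg.norm(self.X[i] - q))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
lsh = L2LSH(dim=8)
lsh.index(X)
print(lsh.query(X[3])[0])  # 3
```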
Wang, Dong
2016-03-01
features, some other popular statistical models including linear discriminant analysis, quadratic discriminant analysis, classification and regression tree and naive Bayes classifier, are compared with the developed method. The results show that the developed method has the highest prediction accuracies among these statistical models. Additionally, selection of the number of new significant features and parameter selection of K-nearest neighbors are thoroughly investigated.
Using nearest neighbors for accurate estimation of ultrasonic attenuation in the spectral domain.
Hasan, Md Kamrul; Hussain, Mohammad Arafat; Ara, Sharmin R; Lee, Soo Yeol; Alam, S Kaisar
2013-06-01
Attenuation is a key diagnostic parameter of tissue pathology change and thus may play a vital role in the quantitative discrimination of malignant and benign tumors in soft tissue. In this paper, two novel techniques are proposed for estimating the average ultrasonic attenuation in soft tissue using the spectral domain weighted nearest neighbor method. Because the attenuation coefficient of soft tissues can be considered to be a continuous function in a small neighborhood, we directly estimate an average value of it from the slope of the regression line fitted to 1) the modified average midband fit value and 2) the average center frequency shift along the depth. To calculate the average midband fit value, an average regression line computed from the exponentially weighted short-time Fourier transform (STFT) of the neighboring 1-D signal blocks, in the axial and lateral directions, is fitted over the usable bandwidth of the normalized power spectrum. The average center frequency downshift is computed from the maximization of a cost function defined from the normalized spectral cross-correlation (NSCC) of exponentially weighted nearest neighbors in both directions. Different from the large spatial-signal-block-based spectral stability approach, a cost-function-based approach incorporating NSCC functions of neighboring 1-D signal blocks is introduced. This paves the way for using a comparatively smaller spatial area along the lateral direction, a necessity for producing more realistic attenuation estimates for heterogeneous tissue. For accurate estimation of the attenuation coefficient, we also adopt a reference-phantom-based diffraction-correction technique for both methods. The proposed attenuation estimation algorithm demonstrates better performance than other reported techniques in the tissue-mimicking phantom and the in vivo breast data analysis.
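Both estimators ultimately reduce to fitting a regression line to a spectral statistic as a function of depth and reading the attenuation from its slope. A sketch of that final step on synthetic midband-fit values (the constants and noise below are invented; converting the slope to dB/cm/MHz requires system bandwidth and geometry, which the paper handles via reference-phantom diffraction correction):

```python
import numpy as np

rng = np.random.default_rng(0)
depth_cm = np.linspace(1.0, 4.0, 60)
true_slope = -2.5                       # dB per cm, hypothetical
midband_db = (10.0 + true_slope * depth_cm
              + rng.normal(0.0, 0.2, depth_cm.size))  # noisy midband fit

# Least-squares regression of the midband-fit values against depth;
# the attenuation estimate is proportional to the (negative) slope.
slope, intercept = np.polyfit(depth_cm, midband_db, 1)
print(round(slope, 1))  # recovers roughly -2.5
```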
Nearest Neighbor Search in the Metric Space of a Complex Network for Community Detection
Directory of Open Access Journals (Sweden)
Suman Saha
2016-03-01
The objective of this article is to bridge the gap between two important research directions: (1) nearest neighbor search, which is a fundamental computational tool for large data analysis; and (2) complex network analysis, which deals with large real graphs but is generally studied via graph-theoretic or spectral analysis. In this article, we have studied the nearest neighbor search problem in a complex network by developing a suitable notion of nearness. The computation of efficient nearest neighbor search among the nodes of a complex network using metric trees and locality-sensitive hashing (LSH) is also studied and evaluated experimentally. To evaluate the proposed nearest neighbor search in a complex network, we applied it to a network community detection problem. Experiments are performed to verify the usefulness of nearness measures for complex networks, the role of metric trees and LSH in computing fast, approximate node nearness, and the efficiency of community detection using nearest neighbor search. We observed that nearest neighbor search between network nodes is a very efficient tool for exploring the community structure of real networks. Several efficient approximation schemes are very useful for large networks; they cause hardly any degradation of results while saving a great deal of computation time, and the nearest-neighbor-based community detection approach is very competitive in terms of efficiency and time.
Rusdiana, Lili; Marfuah
2017-12-01
K-Nearest Neighbors is a classification method that calculates distances to find the closest training samples. It is used here to group data on students' graduation status, obtained from the number of course credits taken, the grade point average (AVG), and the mini-thesis grade. The study was conducted to examine the results of using the K-Nearest Neighbors method in an application for determining students' graduation status, so that the method, the data, and the application constructed could be analyzed, using data on STMIK Palangkaraya students. The software was developed with Extreme Programming, which was appropriate and precise for quickly finishing this project. The application was created using Microsoft Office Excel 2007 for the training data and Matlab 7 for the implementation. The K-Nearest Neighbors method achieved 92.5% accuracy in the application, determining the graduation predicate for 94 of the 136 initial records, with a maximum training set of 50 records.
Spaces of phylogenetic networks from generalized nearest-neighbor interchange operations.
Huber, Katharina T; Linz, Simone; Moulton, Vincent; Wu, Taoyang
2016-02-01
Phylogenetic networks are a generalization of evolutionary or phylogenetic trees that are used to represent the evolution of species which have undergone reticulate evolution. In this paper we consider spaces of such networks defined by some novel local operations that we introduce for converting one phylogenetic network into another. These operations are modeled on the well-studied nearest-neighbor interchange operations on phylogenetic trees, and lead to natural generalizations of the tree spaces that have been previously associated to such operations. We present several results on spaces of some relatively simple networks, called level-1 networks, including the size of the neighborhood of a fixed network, and bounds on the diameter of the metric defined by taking the smallest number of operations required to convert one network into another. We expect that our results will be useful in the development of methods for systematically searching for optimal phylogenetic networks using, for example, likelihood and Bayesian approaches.
Research on Parallelization of GPU-based K-Nearest Neighbor Algorithm
Jiang, Hao; Wu, Yulin
2017-10-01
Based on an analysis of the K-Nearest Neighbor algorithm, the feasibility of parallelization is studied in terms of the algorithm's steps, the efficiency of each operation, and the data structures involved, and the parts to execute in parallel are determined. A parallelization scheme for the K-Nearest Neighbor algorithm is designed, and the parallel G-KNN algorithm is implemented in the CUDA environment. The experimental results show that the K-Nearest Neighbor algorithm achieves a significant improvement in efficiency after parallelization, especially on large-scale data.
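The data-parallel core that maps well to a GPU is the all-pairs distance computation; in NumPy the same structure can be expressed with one matrix product. This is a CPU sketch of the parallelizable step, not the paper's CUDA G-KNN implementation.

```python
import numpy as np

def knn_batch(X_train, y_train, Q, k=3):
    """Classify every query row of Q at once. The squared-distance matrix
    ||q - x||^2 = ||q||^2 - 2 q.x + ||x||^2 is one big matrix product,
    which is exactly the part a GPU parallelizes well."""
    d2 = ((Q**2).sum(1)[:, None] - 2.0 * Q @ X_train.T
          + (X_train**2).sum(1)[None, :])
    nn = np.argsort(d2, axis=1)[:, :k]            # k nearest per query
    return np.array([np.bincount(y_train[row]).argmax() for row in nn])

X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
y = np.array([0, 0, 0, 1, 1, 1])
Q = np.array([[0.2, 0.2], [10.5, 10.5]])
print(knn_batch(X, y, Q))  # [0 1]
```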
On Competitiveness of Nearest-Neighbor-Based Music Classification: A Methodological Critique
DEFF Research Database (Denmark)
Pálmason, Haukur; Jónsson, Björn Thór; Amsaleg, Laurent
2017-01-01
The traditional role of nearest-neighbor classification in music classification research is that of a straw man opponent for the learning approach of the hour. Recent work in high-dimensional indexing has shown that approximate nearest-neighbor algorithms are extremely scalable, yielding results of reasonable quality from billions of high-dimensional features. With such efficient large-scale classifiers, the traditional music classification methodology of aggregating and compressing the audio features is incorrect; instead the approximate nearest-neighbor classifier should be given an extensive data collection to work with. We present a case study, using a well-known MIR classification benchmark with well-known music features, which shows that a simple nearest-neighbor classifier performs very competitively when given ample data. In this position paper, we therefore argue that nearest
Zhang, Zhongzhi; Dong, Yuze; Sheng, Yibin
2015-10-01
Random walks including non-nearest-neighbor jumps appear in many real situations such as the diffusion of adatoms and have found numerous applications including PageRank search algorithm; however, related theoretical results are much less for this dynamical process. In this paper, we present a study of mixed random walks in a family of fractal scale-free networks, where both nearest-neighbor and next-nearest-neighbor jumps are included. We focus on trapping problem in the network family, which is a particular case of random walks with a perfect trap fixed at the central high-degree node. We derive analytical expressions for the average trapping time (ATT), a quantitative indicator measuring the efficiency of the trapping process, by using two different methods, the results of which are consistent with each other. Furthermore, we analytically determine all the eigenvalues and their multiplicities for the fundamental matrix characterizing the dynamical process. Our results show that although next-nearest-neighbor jumps have no effect on the leading scaling of the trapping efficiency, they can strongly affect the prefactor of ATT, providing insight into better understanding of random-walk process in complex systems.
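Mean trapping times of the kind studied above can be computed exactly on small graphs by solving the linear system t = 1 + P't, where P' is the transition matrix restricted to the non-trap nodes. A sketch on a 6-node cycle with a tunable fraction β of next-nearest-neighbor jumps (the fractal scale-free networks in the paper are far larger; this only illustrates the ATT computation):

```python
import numpy as np

def trapping_times_cycle(n=6, beta=0.3, trap=0):
    """Mean first-passage times to a trap on an n-cycle with NN jumps
    (total prob 1-beta, split between the two neighbors) and NNN jumps
    (total prob beta, split between the two next-nearest neighbors)."""
    P = np.zeros((n, n))
    for i in range(n):
        for steps, prob in (((1, -1), (1 - beta) / 2), ((2, -2), beta / 2)):
            for s in steps:
                P[i, (i + s) % n] += prob
    keep = [i for i in range(n) if i != trap]
    Psub = P[np.ix_(keep, keep)]
    # Solve t = 1 + Psub t for the mean first-passage times.
    return np.linalg.solve(np.eye(n - 1) - Psub, np.ones(n - 1))

t = trapping_times_cycle()
print(round(float(t.mean()), 3))  # the average trapping time (ATT)
```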
Anomaly Detection Based on Local Nearest Neighbor Distance Descriptor in Crowded Scenes
Directory of Open Access Journals (Sweden)
Xing Hu
2014-01-01
We propose a novel local nearest neighbor distance (LNND) descriptor for anomaly detection in crowded scenes. Compared with the low-level feature descriptors commonly used in previous works, the LNND descriptor has two major advantages. First, it efficiently incorporates spatial and temporal contextual information around a video event, which is important for detecting anomalous interactions among multiple events, while most existing feature descriptors only contain the information of a single event. Second, the LNND descriptor is a compact representation whose dimensionality is typically much lower than that of low-level feature descriptors. Therefore, using the LNND descriptor in an anomaly detection method with an offline training fashion not only saves computation time and storage, but also avoids the negative effects of high-dimensional feature descriptors. We validate the effectiveness of the LNND descriptor through extensive experiments on different benchmark datasets. Experimental results show the promising performance of the LNND-based method against state-of-the-art methods. It is worth noticing that the LNND-based approach requires fewer intermediate processing steps, without any subsequent processing such as smoothing, yet achieves comparable or even better performance.
Nearest-neighbor guided evaluation of data reliability and its applications.
Boongoen, Tossapon; Shen, Qiang
2010-12-01
The intuition of data reliability has recently been incorporated into the mainstream of research on ordered weighted averaging (OWA) operators. Instead of relying on human-specified variables, the aggregation behavior is determined in accordance with the underlying characteristics of the data being aggregated. Data-oriented operators such as the dependent OWA (DOWA), however, utilize centralized data structures to generate reliable weights. Despite their simplicity, the approach taken by these operators entirely neglects any local data structure that represents a strong agreement or consensus. To address this issue, the cluster-based OWA (Clus-DOWA) operator has been proposed. It employs a cluster-based reliability measure that effectively differentiates the accountability of different input arguments, yet its actual application is constrained by its high computational requirements. This paper presents a more efficient nearest-neighbor-based reliability assessment that does not require an expensive clustering process. The proposed measure can be perceived as a stress function from which the OWA weights and associated decision-support explanations can be generated. To illustrate the potential of this measure, it is applied to both information aggregation for alias detection and unsupervised feature selection (in which unreliable features are excluded from the actual learning process). Experimental results demonstrate that these techniques usually outperform their conventional state-of-the-art counterparts.
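The gist — rate each argument's reliability by how close it sits to its nearest neighbor among the other arguments, then aggregate with weights derived from those distances — can be sketched as below. The exponential stress function here is an illustrative choice, not necessarily the paper's exact formulation.

```python
import numpy as np

def nn_reliability_aggregate(values):
    """Weight each argument by exp(-distance to its nearest neighbor),
    so isolated (less reliable) arguments contribute less to the result."""
    v = np.asarray(values, dtype=float)
    d = np.abs(v[:, None] - v[None, :])
    np.fill_diagonal(d, np.inf)           # ignore self-distance
    w = np.exp(-d.min(axis=1))            # nearest-neighbor stress function
    w /= w.sum()
    return float(w @ v), w

agg, w = nn_reliability_aggregate([1.0, 1.1, 0.9, 5.0])
print(round(agg, 2))  # pulled toward the consensus near 1.0 (plain mean is 2.0)
```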
Fracton topological order from nearest-neighbor two-spin interactions and dualities
Slagle, Kevin; Kim, Yong Baek
2017-10-01
Fracton topological order describes a remarkable phase of matter, which can be characterized by fracton excitations with constrained dynamics and a ground-state degeneracy that increases exponentially with the length of the system on a three-dimensional torus. However, previous models exhibiting this order require many-spin interactions, which may be very difficult to realize in a real material or cold atom system. In this work, we present a more physically realistic model which has the so-called X-cube fracton topological order [Vijay, Haah, and Fu, Phys. Rev. B 94, 235157 (2016), 10.1103/PhysRevB.94.235157] but only requires nearest-neighbor two-spin interactions. The model lives on a three-dimensional honeycomb-based lattice with one to two spin-1/2 degrees of freedom on each site and a unit cell of six sites. The model is constructed from two orthogonal stacks of Z2 topologically ordered Kitaev honeycomb layers [Kitaev, Ann. Phys. 321, 2 (2006), 10.1016/j.aop.2005.10.005], which are coupled together by a two-spin interaction. It is also shown that a four-spin interaction can be included to instead stabilize 3+1D Z2 topological order. We also find dual descriptions of four quantum phase transitions in our model, all of which appear to be discontinuous first-order transitions.
A Novel Preferential Diffusion Recommendation Algorithm Based on User’s Nearest Neighbors
Directory of Open Access Journals (Sweden)
Fuguo Zhang
2017-01-01
Recommender systems are a very efficient way to deal with information overload for online users. In recent years, network-based recommendation algorithms have demonstrated much better performance than standard collaborative filtering methods. However, most network-based algorithms do not give a high enough weight to the influence of the target user's nearest neighbors in the resource diffusion process, while a user or an object with high degree obtains larger influence in the standard mass diffusion algorithm. In this paper, we propose a novel preferential diffusion recommendation algorithm that considers the significance of the target user's nearest neighbors and evaluate it on three real-world data sets: MovieLens 100k, MovieLens 1M, and Epinions. Experimental results demonstrate that the novel preferential diffusion recommendation algorithm based on the user's nearest neighbors can significantly improve recommendation accuracy and diversity.
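The standard mass-diffusion baseline that the paper modifies works on the user-item bipartite graph: unit resource on the target user's collected items spreads to users, then back to items, and uncollected items are ranked by the resource they receive. A minimal sketch (the paper's contribution, extra weight for the target user's nearest neighbors, is omitted here):

```python
import numpy as np

def mass_diffusion_scores(A, user):
    """A is the user-item adjacency matrix (rows: users, cols: items).
    Returns diffusion scores for items the target user has not collected."""
    f = A[user].astype(float)                              # resource on items
    users = A @ (f / np.maximum(A.sum(axis=0), 1))         # items -> users
    items = A.T @ (users / np.maximum(A.sum(axis=1), 1))   # users -> items
    items[A[user] > 0] = -np.inf                           # rank only new items
    return items

#            item: 1  2  3  4
A = np.array([[1, 1, 0, 0],   # user A
              [1, 1, 1, 0],   # user B
              [0, 0, 1, 1]])  # user C
print(int(np.argmax(mass_diffusion_scores(A, 0))))  # 2, i.e. item 3
```

Item 3 wins for user A because it is collected by user B, who shares two items with A, whereas item 4 is reachable only through user C, who shares none.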
Nearest-neighbor interaction systems in the tensor-train format
Gelß, Patrick; Klus, Stefan; Matera, Sebastian; Schütte, Christof
2017-07-01
Low-rank tensor approximation approaches have become an important tool in the scientific computing community. The aim is to enable the simulation and analysis of high-dimensional problems which cannot be solved using conventional methods anymore due to the so-called curse of dimensionality. This requires techniques to handle linear operators defined on extremely large state spaces and to solve the resulting systems of linear equations or eigenvalue problems. In this paper, we present a systematic tensor-train decomposition for nearest-neighbor interaction systems which is applicable to a host of different problems. With the aid of this decomposition, it is possible to reduce the memory consumption as well as the computational costs significantly. Furthermore, it can be shown that in some cases the rank of the tensor decomposition does not depend on the network size. The format is thus feasible even for high-dimensional systems. We will illustrate the results with several guiding examples such as the Ising model, a system of coupled oscillators, and a CO oxidation model.
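The kind of decomposition described can be made concrete with the best-known special case: the matrix product operator (the operator-valued tensor-train format) for a transverse-field Ising chain, whose nearest-neighbor Hamiltonian needs only TT rank 3 regardless of chain length. The sketch below builds the cores and verifies them against the dense Hamiltonian for a short chain; conventions vary, and this is one standard construction rather than the paper's notation.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def ising_mpo_core(J, h):
    """Rank-3 TT/MPO core W[a, b] for H = J sum Z_i Z_{i+1} + h sum X_i."""
    W = np.zeros((3, 3, 2, 2))
    W[0, 0], W[0, 1], W[0, 2] = I2, Z, h * X
    W[1, 2] = J * Z
    W[2, 2] = I2
    return W

def mpo_to_dense(W, L):
    """Contract L identical cores (left boundary row 0, right column 2)."""
    M = W[0]                      # shape (3, 2, 2)
    for _ in range(L - 1):
        # sum over the shared bond and take operator Kronecker products
        M = np.einsum('aij,abkl->bikjl', M, W).reshape(
            3, M.shape[1] * 2, M.shape[2] * 2)
    return M[2]

J, h, L = 1.0, 0.7, 3
H_tt = mpo_to_dense(ising_mpo_core(J, h), L)

# Dense reference built term by term with Kronecker products.
def kron_chain(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

H_ref = sum(J * kron_chain([Z if j in (i, i + 1) else I2 for j in range(L)])
            for i in range(L - 1))
H_ref = H_ref + sum(h * kron_chain([X if j == i else I2 for j in range(L)])
                    for i in range(L))
print(np.allclose(H_tt, H_ref))  # True
```

Note that the core is the same at every site, so the memory cost grows only linearly in L even though the dense Hamiltonian is 2^L x 2^L, which is the rank-independence-of-size point made above.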
Vasylkivska, Veronika S.; Huerta, Nicolas J.
2017-07-01
Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog's inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.
Geometric k-nearest neighbor estimation of entropy and mutual information
Lord, Warren M.; Sun, Jie; Bollt, Erik M.
2018-03-01
Nonparametric estimation of mutual information is used in a wide range of scientific problems to quantify dependence between variables. The k-nearest neighbor (knn) methods are consistent, and therefore expected to work well for a large sample size. These methods use geometrically regular local volume elements. This practice allows maximum localization of the volume elements, but can also induce a bias due to a poor description of the local geometry of the underlying probability measure. We introduce a new class of knn estimators that we call geometric knn estimators (g-knn), which use more complex local volume elements to better model the local geometry of the probability measures. As an example of this class of estimators, we develop a g-knn estimator of entropy and mutual information based on elliptical volume elements, capturing the local stretching and compression common to a wide range of dynamical system attractors. A series of numerical examples in which the thickness of the underlying distribution and the sample sizes are varied suggest that local geometry is a source of problems for knn methods such as the Kraskov-Stögbauer-Grassberger estimator when local geometric effects cannot be removed by global preprocessing of the data. The g-knn method performs well despite the manipulation of the local geometry. In addition, the examples suggest that the g-knn estimators can be of particular relevance to applications in which the system is large, but the data size is limited.
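The baseline that the g-knn estimator improves on can be made concrete. Below is a minimal sketch (function name and parameters are illustrative, not from the paper) of the classical Kraskov-Stögbauer-Grassberger kNN estimator of mutual information, which uses the geometrically regular (cubic, Chebyshev-metric) volume elements the abstract refers to; scipy is assumed to be available.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=3):
    """Kraskov-Stogbauer-Grassberger kNN estimator of I(X;Y) (algorithm 1).

    Uses cubic (max-norm) volume elements around each point; the g-knn
    estimator of the paper would replace these with locally fitted
    ellipsoids (not shown here). Assumes continuous, duplicate-free data.
    """
    x = x.reshape(len(x), -1)
    y = y.reshape(len(y), -1)
    n = len(x)
    joint = np.hstack([x, y])
    # distance to the k-th neighbor in the joint space (Chebyshev metric)
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]
    tx, ty = cKDTree(x), cKDTree(y)
    # count strictly closer neighbors in each marginal space (self excluded)
    nx = np.array([len(tx.query_ball_point(x[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])
    ny = np.array([len(ty.query_ball_point(y[i], eps[i] - 1e-12, p=np.inf)) - 1
                   for i in range(n)])
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))
```

For correlated Gaussians with correlation ρ the true value is −½·log(1−ρ²), which the estimate approaches as the sample grows; for independent variables it fluctuates around zero.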
Directory of Open Access Journals (Sweden)
Caro, Norma Patricia
2017-12-01
Full Text Available En la presente década, en economías emergentes como las latinoamericanas, se han comenzado a aplicar modelos logísticos mixtos para predecir el fracaso financiero de las empresas. No obstante, existen limitaciones subyacentes a la metodología, vinculadas a la factibilidad de predicción del estado de nuevas empresas que no han formado parte de la muestra de entrenamiento con la que se estimó el modelo. En la literatura se han propuesto diversos métodos de predicción para los efectos aleatorios que forman parte de los modelos mixtos, entre ellos, el del vecino más cercano. Este método es aplicado en una segunda etapa, luego de la estimación de un modelo que explica la situación financiera (en crisis o sana de las empresas mediante la consideración del comportamiento de sus ratios contables. En el presente trabajo, se consideraron empresas de Argentina, Chile y Perú, estimando los efectos aleatorios que resultaron significativos en la estimación del modelo mixto. De este modo, se concluye que la aplicación de este método permite identificar empresas con problemas financieros con una tasa de clasificación correcta superior a 80%, lo cual cobra relevancia en la modelación y predicción de este tipo de riesgo. || In the present decade, in emerging economies such as those in Latin-America, mixed logistic models have been started applying to predict the financial failure of companies. However, there are limitations for the methodology linked to the feasibility of predicting the state of new companies that have not been part of the training sample which was used to estimate the model. In the literature, several methods have been proposed for predicting random effects in the mixed models such as, for example, the nearest neighbor. This method is applied in a second step, after estimating a model that explains the financial situation (in crisis or healthy of companies by considering the behavior of its financial ratios. In this study
Kenneth B. Jr. Pierce; C. Kenneth Brewer; Janet L. Ohmann
2010-01-01
This study was designed to test the feasibility of combining a method designed to populate pixels with inventory plot data at the 30-m scale with a new national predictor data set. The new national predictor data set was developed by the USDA Forest Service Remote Sensing Applications Center (hereafter RSAC) at the 250-m scale. Gradient Nearest Neighbor (GNN)...
Estimating forest attribute parameters for small areas using nearest neighbors techniques
Ronald E. McRoberts
2012-01-01
Nearest neighbors techniques have become extremely popular, particularly for use with forest inventory data. With these techniques, a population unit prediction is calculated as a linear combination of observations for a selected number of population units in a sample that are most similar, or nearest, in a space of ancillary variables to the population unit requiring...
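The "linear combination of observations for the nearest sample units" idea can be sketched directly. The fragment below uses inverse-distance weights, one common choice; all names are illustrative rather than taken from McRoberts' paper.

```python
import numpy as np

def knn_impute(anc_target, anc_sample, resp_sample, k=5):
    """Predict a response for each target population unit as a weighted
    linear combination of the responses of its k nearest sample units in
    the space of ancillary variables (inverse-distance weighting shown)."""
    preds = np.empty(len(anc_target))
    for i, z in enumerate(anc_target):
        d = np.linalg.norm(anc_sample - z, axis=1)   # distances in ancillary space
        nn = np.argsort(d)[:k]                       # k nearest sample units
        w = 1.0 / (d[nn] + 1e-9)                     # inverse-distance weights
        preds[i] = w @ resp_sample[nn] / w.sum()
    return preds
```

With a response that varies smoothly over the ancillary space, the imputed values track the local sample values closely.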
van Dam, Herman T.; Seifert, Stefan; Vinke, Ruud; Dendooven, Peter; Lohner, Herbert; Beekman, Freek J.; Schaart, Dennis R.
2011-01-01
Monolithic scintillator detectors have been shown to provide good performance and to have various practical advantages for use in PET systems. Excellent results for the gamma photon interaction position determination in these detectors have been obtained by means of the k-nearest neighbor (k-NN)
Improved Fuzzy K-Nearest Neighbor Using Modified Particle Swarm Optimization
Jamaluddin; Siringoringo, Rimbun
2017-12-01
Fuzzy k-Nearest Neighbor (FkNN) is one of the most powerful classification methods. The presence of fuzzy concepts in this method successfully improves its performance on almost all classification issues. The main drawback of FkNN is that its parameters are difficult to determine. These parameters are the number of neighbors (k) and the fuzzy strength (m). Both are very sensitive, and no theory or guideline can deduce what proper values of 'm' and 'k' should be, which makes FkNN difficult to control. This study uses Modified Particle Swarm Optimization (MPSO) to determine the best values of 'k' and 'm'. MPSO is based on the constriction factor method, an improvement of PSO designed to avoid getting trapped in local optima. The model proposed in this study was tested on the German Credit Dataset, a benchmark from the UCI Machine Learning Repository that is widely applied to classification problems. Applying MPSO to the determination of the FkNN parameters is expected to increase classification performance. The experiments indicate that the proposed model yields better classification performance than FkNN alone: it achieves an accuracy of 81%, compared with 70% for plain FkNN. Finally, the proposed model is compared with two other classifiers, Naive Bayes and Decision Tree, and again performs better: Naive Bayes achieves 75% accuracy and the decision tree model 70%.
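The decision rule that both sensitive parameters feed into can be sketched compactly. The fragment below (all names illustrative, and the MPSO search itself omitted) follows the classic Keller-style FkNN formulation, in which each of the k neighbors votes with weight d^(−2/(m−1)), so k and the fuzzy strength m are exactly the two parameters the study tunes:

```python
import numpy as np

def fuzzy_knn(X_train, y_train, X_test, k=5, m=2.0):
    """Fuzzy kNN sketch: class memberships are distance-weighted neighbor
    votes with fuzzifier m; the predicted class maximizes the membership."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]                              # k nearest neighbors
        w = np.maximum(d[nn], 1e-12) ** (-2.0 / (m - 1.0))  # fuzzy vote weights
        scores = [w[y_train[nn] == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

A search procedure such as MPSO would then evaluate candidate (k, m) pairs by cross-validated accuracy of this rule.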
A Regression-based K nearest neighbor algorithm for gene function prediction from heterogeneous data
Directory of Open Access Journals (Sweden)
Ruzzo Walter L
2006-03-01
Full Text Available Abstract Background As a variety of functional genomic and proteomic techniques become available, there is an increasing need for functional analysis methodologies that integrate heterogeneous data sources. Methods In this paper, we address this issue by proposing a general framework for gene function prediction based on the k-nearest-neighbor (KNN) algorithm. The choice of KNN is motivated by its simplicity, flexibility to incorporate different data types and adaptability to irregular feature spaces. A weakness of traditional KNN methods, especially when handling heterogeneous data, is that performance is subject to the often ad hoc choice of similarity metric. To address this weakness, we apply regression methods to infer a similarity metric as a weighted combination of a set of base similarity measures, which helps to locate the neighbors that are most likely to be in the same class as the target gene. We also suggest a novel voting scheme to generate confidence scores that estimate the accuracy of predictions. The method gracefully extends to multi-way classification problems. Results We apply this technique to gene function prediction according to three well-known Escherichia coli classification schemes suggested by biologists, using information derived from microarray and genome sequencing data. We demonstrate that our algorithm dramatically outperforms the naive KNN methods and is competitive with support vector machine (SVM) algorithms for integrating heterogeneous data. We also show that by combining different data sources, prediction accuracy can improve significantly. Conclusion Our extension of KNN with automatic feature weighting, multi-class prediction, and probabilistic inference enhances prediction accuracy significantly while remaining efficient, intuitive, and flexible. This general framework can also be applied to similar classification problems involving heterogeneous datasets.
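As a concrete illustration of the regression step described above, the sketch below fits weights over base similarity measures by plain least squares; the paper's exact regression method and data are not reproduced here, and all names are hypothetical:

```python
import numpy as np

def learn_similarity_weights(base_sims, same_class):
    """Fit weights so that a weighted combination of base similarity
    measures predicts whether a pair of genes shares a class (1) or not
    (0); the fitted combination then serves as the kNN similarity metric.
    Plain least squares stands in for the authors' regression method."""
    A = np.column_stack([np.ones(len(same_class)), base_sims])
    w, *_ = np.linalg.lstsq(A, same_class.astype(float), rcond=None)
    return w  # w[0] is an intercept; w[1:] weight each base similarity

def combined_similarity(base_sims, w):
    """Weighted similarity for new pairs under the fitted weights."""
    return base_sims @ w[1:] + w[0]
```

A base measure that actually tracks co-membership receives a large weight, while an uninformative one is driven toward zero, which is the intended effect of learning the metric rather than choosing it ad hoc.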
Ronald E. McRoberts
2009-01-01
Nearest neighbors techniques have been shown to be useful for predicting multiple forest attributes from forest inventory and Landsat satellite image data. However, in regions lacking good digital land cover information, nearest neighbors selected to predict continuous variables such as tree volume must be selected without regard to relevant categorical variables such...
Multi-strategy based quantum cost reduction of linear nearest-neighbor quantum circuit
Tan, Ying-ying; Cheng, Xue-yun; Guan, Zhi-jin; Liu, Yang; Ma, Haiying
2018-03-01
With the development of reversible and quantum computing, the study of reversible and quantum circuits has also developed rapidly. Due to physical constraints, most quantum circuits require quantum gates to interact on adjacent quantum bits. However, many existing nearest-neighbor quantum circuits have a large quantum cost. Therefore, how to effectively reduce quantum cost is becoming a popular research topic. In this paper, we propose multiple optimization strategies to reduce the quantum cost of the circuit: cost is reduced through MCT gate decomposition, nearest-neighbor transformation, and circuit simplification, respectively. The experimental results show that the proposed strategies can effectively reduce the quantum cost, with a maximum optimization rate of 30.61% compared to the corresponding previous results.
FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Lu Si; Jie Yu; Shasha Li; Jun Ma; Lei Luo; Qingbo Wu; Yongqi Ma; Zhengji Liu
2017-01-01
Instance selection (IS) techniques are used to reduce the data size and improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule...
Improving the accuracy of k-nearest neighbor using local mean based and distance weight
Syaliman, K. U.; Nababan, E. B.; Sitompul, O. S.
2018-03-01
In k-nearest neighbor (kNN), the determination of classes for new data is normally performed by a simple majority vote, which may ignore the similarities among data and allow the occurrence of a double majority class, which can lead to misclassification. In this research, we propose an approach to resolve the majority vote issues by calculating the distance weight using a combination of local mean based k-nearest neighbor (LMKNN) and distance weight k-nearest neighbor (DWKNN). The resulting accuracy is compared to that of the original kNN method using several datasets from the UCI Machine Learning Repository, Kaggle, and Keel, such as ionosphere, iris, voice genre, lower back pain, and thyroid. In addition, the proposed method is also tested using real data from a public senior high school in the city of Tualang, Indonesia. Results show that the combination of LMKNN and DWKNN was able to increase the classification accuracy of kNN, with an average accuracy gain on the test data of 2.45% and a highest gain of 3.71%, occurring on the lower back pain symptoms dataset. For the real data, the increase in accuracy is as high as 5.16%.
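The local-mean half of the proposal can be sketched as follows. The abstract does not give the exact LMKNN+DWKNN combination formula, so only the standard local-mean-based kNN step is shown, with illustrative names:

```python
import numpy as np

def lm_knn_predict(X_train, y_train, x, k=3):
    """Local-mean-based kNN sketch: for each class, average its k nearest
    training points to x and assign x to the class whose local mean is
    closest. This replaces the simple majority vote that can produce the
    double-majority ties discussed above; the paper additionally applies
    DWKNN distance weights (not shown)."""
    best_c, best_d = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        nn = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
        dc = np.linalg.norm(Xc[nn].mean(axis=0) - x)  # distance to local mean
        if dc < best_d:
            best_c, best_d = c, dc
    return best_c
```

Because every class contributes a local mean, a tie between two equally sized neighbor groups is broken by geometry rather than by an arbitrary vote.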
Collective Behaviors of Mobile Robots Beyond the Nearest Neighbor Rules With Switching Topology.
Ning, Boda; Han, Qing-Long; Zuo, Zongyu; Jin, Jiong; Zheng, Jinchuan
2018-05-01
This paper is concerned with the collective behaviors of robots beyond the nearest neighbor rules, i.e., dispersion and flocking, when robots interact with others by applying an acute angle test (AAT)-based interaction rule. Different from a conventional nearest neighbor rule or its variations, the AAT-based interaction rule allows interactions with some far-neighbors and excludes unnecessary nearest neighbors. The resulting dispersion and flocking hold the advantages of scalability, connectivity, robustness, and effective area coverage. For the dispersion, a spring-like controller is proposed to achieve collision-free coordination. With switching topology, a new fixed-time consensus-based energy function is developed to guarantee the system stability. An upper bound of settling time for energy consensus is obtained, and a uniform time interval is accordingly set so that energy distribution is conducted in a fair manner. For the flocking, based on a class of generalized potential functions taking nonsmooth switching into account, a new controller is proposed to ensure that the same velocity for all robots is eventually reached. A co-optimizing problem is further investigated to accomplish additional tasks, such as enhancing communication performance, while maintaining the collective behaviors of mobile robots. Simulation results are presented to show the effectiveness of the theoretical results.
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei
2010-07-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to achieve. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
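A toy version of the underlying LSH machinery (not the LSB-tree itself, which additionally linearizes buckets into a B-tree) can be sketched as follows; parameter values and class/method names are illustrative:

```python
import numpy as np

class RandomProjectionLSH:
    """Toy E2LSH-style index: each hash is floor((a.x + b) / w) over several
    random projections; points sharing a bucket in at least one table become
    candidates, which are then re-ranked by exact distance. Falls back to a
    linear scan when no bucket matches."""

    def __init__(self, dim, n_tables=8, n_proj=4, w=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=(n_tables, n_proj, dim))
        self.b = rng.uniform(0, w, size=(n_tables, n_proj))
        self.w = w
        self.tables = [dict() for _ in range(n_tables)]
        self.data = None

    def _keys(self, x):
        h = np.floor((self.a @ x + self.b) / self.w).astype(int)
        return [tuple(row) for row in h]  # one composite key per table

    def fit(self, X):
        self.data = X
        for i, x in enumerate(X):
            for table, key in zip(self.tables, self._keys(x)):
                table.setdefault(key, []).append(i)

    def query(self, q):
        cand = set()
        for table, key in zip(self.tables, self._keys(q)):
            cand.update(table.get(key, []))
        if not cand:  # no colliding bucket: fall back to exact scan
            return int(np.argmin(np.linalg.norm(self.data - q, axis=1)))
        cand = np.fromiter(cand, dtype=int)
        return int(cand[np.argmin(np.linalg.norm(self.data[cand] - q, axis=1))])
```

Increasing the number of tables raises recall at the price of more candidates per query, which is exactly the space/query-cost trade-off the paper's LSB-forest is designed to tame.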
Allen, Victoria W; Shirasu-Hiza, Mimi
2018-01-01
Despite being pervasive, the control of programmed grooming is poorly understood. We addressed this gap by developing a high-throughput platform that allows long-term detection of grooming in Drosophila melanogaster. In our method, a k-nearest neighbors algorithm automatically classifies fly behavior and finds grooming events with over 90% accuracy in diverse genotypes. Our data show that flies spend ~13% of their waking time grooming, driven largely by two major internal programs. One of these programs regulates the timing of grooming and involves the core circadian clock components cycle, clock, and period. The second program regulates the duration of grooming and, while dependent on cycle and clock, appears to be independent of period. This emerging dual control model in which one program controls timing and another controls duration, resembles the two-process regulatory model of sleep. Together, our quantitative approach presents the opportunity for further dissection of mechanisms controlling long-term grooming in Drosophila. PMID:29485401
Ralko, Arnaud; Mila, Frédéric; Rousochatzakis, Ioannis
2018-03-01
The spin-1/2 Heisenberg model on the kagome lattice, which is closely realized in layered Mott insulators such as ZnCu3(OH)6Cl2, is one of the oldest and most enigmatic spin-1/2 lattice models. While the numerical evidence has accumulated in favor of a quantum spin liquid, the debate is still open as to whether it is a Z2 spin liquid with very short-range correlations (some kind of resonating valence bond spin liquid), or an algebraic spin liquid with power-law correlations. To address this issue, we have pushed the program started by Rokhsar and Kivelson in their derivation of the effective quantum dimer model description of Heisenberg models to unprecedented accuracy for the spin-1/2 kagome, by including all the most important virtual singlet contributions on top of the orthogonalization of the nearest-neighbor valence bond singlet basis. Quite remarkably, the resulting picture is a competition between a Z2 spin liquid and a diamond valence bond crystal with a 12-site unit cell, as in the density-matrix renormalization group simulations of Yan et al. Furthermore, we found that, on cylinders of finite diameter d, there is a transition between the Z2 spin liquid at small d and the diamond valence bond crystal at large d, the prediction of the present microscopic description for the two-dimensional lattice. These results show that, if the ground state of the spin-1/2 kagome antiferromagnet can be described by nearest-neighbor singlet dimers, it is a diamond valence bond crystal, and, a contrario, that, if the system is a quantum spin liquid, it has to involve long-range singlets, consistent with the algebraic spin liquid scenario.
Seismic clusters analysis in Northeastern Italy by the nearest-neighbor approach
Peresan, Antonella; Gentili, Stefania
2018-01-01
The main features of earthquake clusters in Northeastern Italy are explored, with the aim to get new insights on local scale patterns of seismicity in the area. The study is based on a systematic analysis of robustly and uniformly detected seismic clusters, which are identified by a statistical method, based on nearest-neighbor distances of events in the space-time-energy domain. The method permits us to highlight and investigate the internal structure of earthquake sequences, and to differentiate the spatial properties of seismicity according to the different topological features of the clusters structure. To analyze seismicity of Northeastern Italy, we use information from local OGS bulletins, compiled at the National Institute of Oceanography and Experimental Geophysics since 1977. A preliminary reappraisal of the earthquake bulletins is carried out and the area of sufficient completeness is outlined. Various techniques are considered to estimate the scaling parameters that characterize earthquakes occurrence in the region, namely the b-value and the fractal dimension of epicenters distribution, required for the application of the nearest-neighbor technique. Specifically, average robust estimates of the parameters of the Unified Scaling Law for Earthquakes, USLE, are assessed for the whole outlined region and are used to compute the nearest-neighbor distances. Cluster identification by the nearest-neighbor method turns out to be quite reliable and robust with respect to the minimum magnitude cutoff of the input catalog; the identified clusters are well consistent with those obtained from manual aftershock identification of selected sequences. We demonstrate that the earthquake clusters have distinct preferred geographic locations, and we identify two areas that differ substantially in the examined clustering properties. Specifically, burst-like sequences are associated with the north-western part and swarm-like sequences with the south-eastern part of the study
A Novel Quantum Solution to Privacy-Preserving Nearest Neighbor Query in Location-Based Services
Luo, Zhen-yu; Shi, Run-hua; Xu, Min; Zhang, Shun
2018-04-01
We present a cheating-sensitive quantum protocol for Privacy-Preserving Nearest Neighbor Query based on Oblivious Quantum Key Distribution and Quantum Encryption. Compared with related classical protocols, our proposed protocol has higher security, because its security rests on basic physical principles of quantum mechanics instead of assumptions about computational difficulty. In particular, our protocol takes single photons as quantum resources and only needs to perform single-photon projective measurements. Therefore, it is feasible to implement this protocol with present technologies.
Fast and Accuracy Control Chart Pattern Recognition using a New cluster-k-Nearest Neighbor
Samir Brahim Belhaouari
2009-01-01
By taking advantage of both k-NN, which is highly accurate, and K-means clustering, which is able to reduce the time of classification, we introduce Cluster-k-Nearest Neighbor as a "variable k"-NN dealing with the centroid or mean point of all subclasses generated by the clustering algorithm. In general, the K-means clustering algorithm is not stable in terms of accuracy; for that reason we develop another algorithm for clustering our space which gives a higher accuracy than K-means clustering, less ...
Application of the K-Nearest Neighbor Method to Determining Motorcycle Dealer Grades
Leidiyana, Henny
2017-01-01
Mutually beneficial cooperation is very important for a leasing company and its dealers. Marketing incentives are given in order to attract as many consumers as possible. However, surveyor objectivity is sometimes lost due to collusion between marketing staff and surveyors in the field. To overcome this, leasing companies try various approaches, one of which is ranking the dealers. In this study, the k-Nearest Neighbor method with Euclidean distance measurement is applied to determine the dealer grade...
International Nuclear Information System (INIS)
Fang Xiaoling; Yu Hongjie; Jiang Zonglai
2009-01-01
The chaotic synchronization of Hindmarsh-Rose (HR) neural networks linked by a nonlinear coupling function is discussed. HR neural networks with nearest-neighbor diffusive coupling are treated as numerical examples. By the construction of a special nonlinear coupling term, the chaotic system is coupled symmetrically. For three- and four-neuron networks, a certain region of coupling strength corresponding to full synchronization is given, and the effects of network structure and noise position are analyzed. For networks of five or more neurons, full synchronization is very difficult to realize. All the results have been confirmed by the calculation of the maximum conditional Lyapunov exponent.
BATIK MOTIF RECOGNITION USING CANNY EDGE DETECTION AND K-NEAREST NEIGHBOR
Directory of Open Access Journals (Sweden)
Johanes Widagdho Yodha
2014-11-01
Full Text Available One of the distinctive elements of Indonesian culture that has become known worldwide is batik. This study aims to recognize 6 types of batik motif from H. Santosa Doellah's book "Batik: Pengaruh Zaman dan Lingkungan". The classification process has 3 stages: preprocessing, feature extraction, and classification. Preprocessing converts the color batik image to grayscale. In the feature extraction stage, the contrast of the grayscale image is enhanced with histogram equalization, and Canny edge detection is then applied to separate the batik motif from its background and to obtain the pattern of the motif. The extraction results are grouped and labeled by motif and then classified with k-Nearest Neighbor using the Manhattan distance. The highest accuracy obtained was 100% when the testing data were identical to the training data (a dataset of 300 images). When the training data differed from the testing data, the highest accuracy was 66.67%. Both accuracies were obtained with lower threshold = 0.010, upper threshold = 0.115, and k = 1. Keywords: Batik, Edge Detection, Canny, k-Nearest Neighbor, Manhattan distance
Directory of Open Access Journals (Sweden)
D.A. Adeniyi
2016-01-01
Full Text Available The major problem of many on-line web sites is the presentation of too many choices to the client at a time; this usually results in a strenuous and time-consuming task of finding the right product or information on the site. In this work, we present a study of automatic web usage data mining and a recommendation system based on the current user's behavior through his/her click-stream data on a newly developed Really Simple Syndication (RSS) reader website, in order to provide relevant information to the individual without explicitly asking for it. The K-Nearest-Neighbor (KNN) classification method has been trained to be used on-line and in real time to identify clients/visitors' click-stream data, match it to a particular user group, and recommend a tailored browsing option that meets the need of the specific user at a particular time. To achieve this, web users' RSS address files were extracted, cleansed, formatted, and grouped into meaningful sessions, and a data mart was developed. Our results show that the K-Nearest Neighbor classifier is transparent, consistent, straightforward, simple to understand, and easier to implement than most other machine learning techniques, specifically when there is little or no prior knowledge about the data distribution.
Li, Yuenan
2013-01-10
Copy-move is one of the most commonly used image tampering operations, where a part of the image content is copied and then pasted to another part of the same image. In order to make the forgery visually convincing and conceal its trace, the copied part may be subjected to post-processing operations such as rotation and blur. In this paper, we propose a copy-move forgery detection algorithm based on the polar cosine transform and approximate nearest neighbor searching. The algorithm starts by dividing the image into overlapping patches. Robust and compact features are extracted from patches by taking advantage of the rotationally-invariant and orthogonal properties of the polar cosine transform. Potential copy-move pairs are then detected by identifying the patches with similar features, which is formulated as approximate nearest neighbor searching and accomplished by means of locality-sensitive hashing (LSH). Finally, post-verifications are performed on potential pairs to filter out false matches and improve the accuracy of forgery detection. To sum up, the LSH based similar patch identification and the post-verification methods are the two major novelties of the proposed work. Experimental results reveal that the proposed work can produce accurate detection results, and it exhibits high robustness to various post-processing operations. In addition, the LSH based similar patch detection scheme is much more effective than the widely used lexicographical sorting. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Sequential nearest-neighbor effects on computed ¹³Cα chemical shifts
Energy Technology Data Exchange (ETDEWEB)
Vila, Jorge A. [Cornell University, Baker Laboratory of Chemistry and Chemical Biology (United States); Serrano, Pedro; Wuethrich, Kurt [The Scripps Research Institute, Department of Molecular Biology (United States); Scheraga, Harold A., E-mail: has5@cornell.ed [Cornell University, Baker Laboratory of Chemistry and Chemical Biology (United States)
2010-09-15
To evaluate sequential nearest-neighbor effects on quantum-chemical calculations of ¹³Cα chemical shifts, we selected the structure of the nucleic acid binding (NAB) protein from the SARS coronavirus determined by NMR in solution (PDB id 2K87). NAB is a 116-residue α/β protein, which contains 9 prolines and has 50% of its residues located in loops and turns. Overall, the results presented here show that sizeable nearest-neighbor effects are seen only for residues preceding proline, where Pro introduces an overestimation, on average, of 1.73 ppm in the computed ¹³Cα chemical shifts. A new ensemble of 20 conformers representing the NMR structure of the NAB, which was calculated with an input containing backbone torsion angle constraints derived from the theoretical ¹³Cα chemical shifts as supplementary data to the NOE distance constraints, exhibits very similar topology and comparable agreement with the NOE constraints as the published NMR structure. However, the two structures differ in the patterns of differences between observed and computed ¹³Cα chemical shifts, Δca,i, for the individual residues along the sequence. This indicates that the Δca,i values for the NAB protein are primarily a consequence of the limited sampling by the bundles of 20 conformers used, as in common practice, to represent the two NMR structures, rather than of local flaws in the structures.
Nearest neighbor-density-based clustering methods for large hyperspectral images
Cariou, Claude; Chehdi, Kacem
2017-10-01
We address the problem of hyperspectral image (HSI) pixel partitioning using nearest neighbor - density-based (NN-DB) clustering methods. NN-DB methods are able to cluster objects without specifying the number of clusters to be found. Within the NN-DB approach, we focus on deterministic methods, e.g. ModeSeek, knnClust, and GWENN (standing for Graph WatershEd using Nearest Neighbors). These methods only require the availability of a k-nearest neighbor (kNN) graph based on a given distance metric. Recently, a new DB clustering method, called Density Peak Clustering (DPC), has received much attention, and kNN versions of it have quickly followed and shown their efficiency. However, NN-DB methods still suffer from the difficulty of obtaining the kNN graph due to the quadratic complexity with respect to the number of pixels. This is why GWENN was embedded into a multiresolution (MR) scheme to bypass the computation of the full kNN graph over the image pixels. In this communication, we propose to extend the MR-GWENN scheme in three aspects. Firstly, similarly to knnClust, the original labeling rule of GWENN is modified to account for local density values, in addition to the labels of previously processed objects. Secondly, we set up a modified NN search procedure within the MR scheme, in order to stabilize the number of clusters found from the coarsest to the finest spatial resolution. Finally, we show that these extensions can be easily adapted to the three other NN-DB methods (ModeSeek, knnClust, knnDPC) for pixel clustering in large HSIs. Experiments are conducted to compare the four NN-DB methods for pixel clustering in HSIs. We show that NN-DB methods can outperform a classical clustering method such as fuzzy c-means (FCM), in terms of classification accuracy, relevance of found clusters, and clustering speed. Finally, we demonstrate the feasibility and evaluate the performance of NN-DB methods on a very large image acquired by our AISA Eagle hyperspectral
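As a minimal illustration of the NN-DB idea, the sketch below implements a ModeSeek-style mode-seeking rule on a kNN graph: density is estimated from the k-th neighbor distance, each point links to the densest point in its neighborhood, and chains of links converge to cluster modes. The function name and the particular density estimate are illustrative simplifications, not the exact algorithm from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def modeseek(X, k=10):
    """Mode-seeking clustering on a kNN graph: no number of clusters is
    specified, only the kNN graph is needed (the NN-DB property)."""
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=k)          # idx[:, 0] is the point itself
    density = 1.0 / (dist[:, -1] + 1e-12)   # inverse k-th neighbor distance
    # each point links to the densest point in its neighborhood (maybe itself)
    parent = idx[np.arange(len(X)), np.argmax(density[idx], axis=1)]
    # follow pointer chains until they reach a fixed point (a mode)
    root = parent.copy()
    for _ in range(len(X)):
        nxt = parent[root]
        if np.array_equal(nxt, root):
            break
        root = nxt
    # points sharing a mode form a cluster
    return np.unique(root, return_inverse=True)[1]
```

Well-separated groups end up under different modes and thus receive disjoint labels, without k-means-style initialization or a preset cluster count.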
Categorizing document by fuzzy C-Means and K-nearest neighbors approach
Priandini, Novita; Zaman, Badrus; Purwanti, Endah
2017-08-01
Advances in technology have made document categorization increasingly important, driven by the growing number of documents themselves. Managing documents by categorizing them is a classic Information Retrieval application, since it involves text mining in its process. Categorization can be performed with both the Fuzzy C-Means (FCM) and K-Nearest Neighbors (KNN) methods, and this experiment combines the two. The aim of the experiment is to improve document categorization performance. First, FCM is used to cluster the training documents. Second, KNN categorizes each testing document and produces the categorization output. In the experiment, 14 of 20 testing documents were retrieved relevantly to their category, while the remaining 6 were not. The system evaluation shows that both precision and recall are 0.7.
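The reported pipeline (fuzzy clustering of training documents, then KNN categorization of test documents) can be sketched generically; the NumPy implementation and the toy 2-D "document vectors" below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fcm(X, c, init, m=2.0, iters=100):
    """Fuzzy c-means: returns cluster centers and the membership matrix U."""
    centers = X[list(init)].astype(float)      # seed centers from data points
    for _ in range(iters):
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))       # standard FCM membership update
        U /= U.sum(axis=0)                     # memberships sum to 1 per point
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
    return centers, U

def knn_predict(X_train, y_train, x, k=3):
    """Categorize x by majority vote among its k nearest training vectors."""
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return np.bincount(y_train[idx]).argmax()

# Toy "document vectors": two well-separated blobs of 20 points each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
centers, U = fcm(X, c=2, init=(0, len(X) - 1))  # one seed per toy blob
labels = U.argmax(axis=0)                       # hard cluster id per document
print(knn_predict(X, labels, np.array([3.1, 2.9])))
```

A test document near the second blob is assigned that blob's cluster label, mirroring the FCM-then-KNN flow described in the abstract.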
False-nearest-neighbors algorithm and noise-corrupted time series
Rhodes, Carl; Morari, Manfred
1997-05-01
The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job in predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the cause of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented.
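A minimal version of the FNN test (a Kennel-style threshold on how much a neighbor pair's distance stretches when one delay coordinate is added; the noise corrections discussed above are not included) can be sketched as:

```python
import numpy as np

def fnn_fraction(x, d, tau=1, rtol=15.0):
    """Fraction of false nearest neighbors at embedding dimension d.
    A neighbor is 'false' when adding the (d+1)-th delay coordinate
    stretches the pair's distance by more than rtol."""
    n = len(x) - d * tau                     # points that have a (d+1)-th coord
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(d)])
    false = 0
    for i in range(n):
        dist = np.linalg.norm(emb - emb[i], axis=1)
        dist[i] = np.inf                     # exclude the self-match
        j = dist.argmin()
        extra = abs(x[i + d * tau] - x[j + d * tau])
        if extra / max(dist[j], 1e-12) > rtol:
            false += 1
    return false / n

# Henon map series (true embedding dimension 2): the FNN fraction
# should drop sharply once d reaches 2
a, b, xs = 0.1, 0.1, []
for _ in range(600):
    a, b = 1 - 1.4 * a * a + b, 0.3 * a
    xs.append(a)
xs = np.array(xs[100:])                      # discard the transient
print([round(fnn_fraction(xs, d), 3) for d in (1, 2, 3)])
```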
Developing a second nearest-neighbor modified embedded atom method interatomic potential for lithium
International Nuclear Information System (INIS)
Cui, Zhiwei; Gao, Feng; Qu, Jianmin; Cui, Zhihua
2012-01-01
This paper reports the development of a second nearest-neighbor modified embedded atom method (2NN MEAM) interatomic potential for lithium (Li). The 2NN MEAM potential contains 14 adjustable parameters. For a given set of these parameters, a number of physical properties of Li were predicted by molecular dynamics (MD) simulations. By fitting these MD predictions to their corresponding values from either experimental measurements or ab initio simulations, these adjustable parameters in the potential were optimized to yield an accurate and robust interatomic potential. The parameter optimization was carried out using the particle swarm optimization technique. Finally, the newly developed potential was validated by calculating a wide range of material properties of Li, such as thermal expansion, melting temperature, radial distribution function of liquid Li and the structural stability at finite temperature by simulating the disordered–ordered transition
Kogan, Oleg; Refael, Gil; Cross, Michael; Rogers, Jeffrey
2008-03-01
We develop a renormalization group (RG) method to predict frequency clusters and their statistical properties in a 1-dimensional chain of nearest-neighbor coupled Kuramoto oscillators. The intrinsic frequencies and couplings are random numbers chosen from a distribution. The method is designed to work in the regime of strong randomness, where the distribution of intrinsic frequencies and couplings has long tails. Two types of decimation steps are possible: elimination of oscillators with exceptionally large frequency and renormalization of two oscillators bonded by a very large coupling into a single one. Based on these steps, we perform a numerical RG calculation. The oscillators in the renormalized chain correspond to frequency clusters. We compare the RG results with those obtained directly from the numerical solution of the chain's equations of motion.
Jiang, Yuning; Kang, Jinfeng; Wang, Xinan
2017-03-24
Resistive switching memory (RRAM) is considered one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today's electronic systems. However, existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition that implements k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behavior is chosen as both the storage and computing component. The proposed architecture is tested on the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.
International Nuclear Information System (INIS)
Juang, M.T.; Wager, J.F.; Van Vechten, J.A.
1988-01-01
Drain current drift in InP metal-insulator-semiconductor devices displays distinct activation energies and pre-exponential factors. The authors have given evidence that these result from two physical mechanisms: thermionic tunneling of electrons into native oxide traps and phosphorus vacancy nearest neighbor hopping (PVNNH). They here present a computer simulation of the effect of the PVNNH mechanism on flatband voltage shift vs. bias stress time measurements. The simulation is based on an analysis of the kinetics of the PVNNH defect reaction sequence, in which the electron concentration in the channel is related to the applied bias by a solution of the Poisson equation. The simulation demonstrates quantitatively that the temperature dependence of the flatband shift is associated with PVNNH for temperatures above room temperature
Directory of Open Access Journals (Sweden)
Firdaus Firdaus
2017-12-01
Non-invasive blood pressure measurement devices are widely available in the marketplace. Most of these devices use the oscillometric principle: they store and analyze oscillometric waveforms during cuff deflation to obtain the mean arterial pressure, systolic blood pressure, and diastolic blood pressure, which are determined from the oscillometric waveform envelope. Several methods for detecting the envelope of the oscillometric pulses rely on complex algorithms that require large memory and are therefore difficult to run on a low-memory embedded system. Here, a simple nearest-neighbor interpolation method is applied for oscillometric pulse envelope detection in non-invasive blood pressure measurement using a microcontroller such as the ATmega328. The experiment yields an average computation time of 59 seconds, with a 3.6% average error in blood pressure measurement.
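A sketch of the approach on synthetic cuff data: peaks of the oscillometric pulses are detected, the envelope is filled in by nearest-neighbor interpolation, and pressures are read off the envelope. The 0.55/0.85 characteristic ratios are common textbook values assumed for illustration, not taken from the paper.

```python
import numpy as np

def nn_interpolate(peak_idx, peak_amp, n):
    """Assign every sample the amplitude of its nearest detected peak."""
    env = np.empty(n)
    for i in range(n):
        env[i] = peak_amp[np.abs(peak_idx - i).argmin()]
    return env

n = 1000
t = np.arange(n)
cuff = np.linspace(180, 40, n)                          # deflating cuff, mmHg
amp = 3 * np.exp(-((t - 550) ** 2) / (2 * 120.0 ** 2))  # true pulse envelope
pulses = amp * np.sin(2 * np.pi * t / 25)               # oscillometric pulses

peaks = np.array([i for i in range(1, n - 1)
                  if pulses[i] > pulses[i - 1] and pulses[i] > pulses[i + 1]])
env = nn_interpolate(peaks, pulses[peaks], n)

map_i = env.argmax()                                    # MAP at envelope peak
sbp_i = np.argmax(env[:map_i] >= 0.55 * env[map_i])     # high-pressure crossing
dbp_i = map_i + np.argmax(env[map_i:] <= 0.85 * env[map_i])
print("MAP %.0f  SBP %.0f  DBP %.0f mmHg" % (cuff[map_i], cuff[sbp_i], cuff[dbp_i]))
```

The per-sample nearest-peak lookup needs no curve fitting, which is what makes the method attractive on a small microcontroller.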
Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance
Ruan, Yue; Xue, Xiling; Liu, Heng; Tan, Jianing; Li, Xi
2017-11-01
K-nearest neighbors (KNN) is a common algorithm used for classification, and also a subroutine in various complicated machine learning tasks. In this paper, we present a quantum algorithm (QKNN) for implementing this algorithm based on the metric of Hamming distance. We put forward a quantum circuit for computing the Hamming distance between a testing sample and each feature vector in the training set. Taking advantage of this method, we realize a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k nearest neighbors. As a result, QKNN achieves O(n³) performance, which depends only on the dimension of the feature vectors, together with high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
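The classical subroutine that QKNN accelerates, Hamming-distance KNN with majority vote, looks like this (a generic sketch on toy binary vectors, not the quantum circuit):

```python
import numpy as np

def hamming_knn(train, labels, test, k=3):
    """Classify a binary test vector by majority vote among the k
    training vectors with the smallest Hamming distance."""
    dists = (train != test).sum(axis=1)          # Hamming distances
    nearest = np.argsort(dists, kind="stable")[:k]
    votes = np.bincount(labels[nearest])
    return votes.argmax()

train = np.array([[0, 0, 0, 1],
                  [0, 0, 1, 1],
                  [1, 1, 1, 0],
                  [1, 1, 0, 0]])
labels = np.array([0, 0, 1, 1])
print(hamming_knn(train, labels, np.array([0, 0, 0, 0])))  # class 0 wins
```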
Nearest neighbor spacing distributions of low-lying levels of vibrational nuclei
International Nuclear Information System (INIS)
Abul-Magd, A.Y.; Simbel, M.H.
1996-01-01
Energy-level statistics are considered for nuclei whose Hamiltonian is divided into intrinsic and collective-vibrational terms. The levels are described as a random superposition of independent sequences, each corresponding to a given number of phonons. The intrinsic motion is assumed chaotic. The level spacing distribution is found to be intermediate between the Wigner and Poisson distributions and similar in form to the spacing distribution of a system with a classical phase space divided into separate regular and chaotic domains. We have obtained approximate expressions for the nearest neighbor spacing and cumulative spacing distributions, valid when the level density is described by a constant-temperature formula and involving no additional free parameters. These expressions achieve good agreement with the experimental spacing distributions. copyright 1996 The American Physical Society
Schmalz, M.; Ritter, G.; Key, R.
Accurate and computationally efficient spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications using linear mixing models, signature classification accuracy depends on accurate spectral endmember discrimination [1]. If the endmembers cannot be classified correctly, then the signatures cannot be classified correctly, and object recognition from hyperspectral data will be inaccurate. In practice, the number of endmembers accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present a comparison of emerging technologies for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3,4] and a neural network technology called Morphological Neural Networks (MNNs) [5]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. The open architecture and programmability of TNE's agreement map processing allow a TNE programmer or user to determine classification accuracy, as well as characterize in detail the signatures for which TNE did not obtain classification matches, and why such mismatches occurred. In this study, we compare TNE- and MNN-based endmember classification, using performance metrics such as probability of correct classification (Pd) and rate of false
Directory of Open Access Journals (Sweden)
Roni Akbar
2016-03-01
Handwriting refers to the result of writing by hand (not typing). Writing styles differ from person to person. One of the United Nations official languages, Arabic, has a numerical system known as Arabic (Indian) numerals. Feature identification helps humans distinguish patterns, and such pattern grouping can be applied to machines for recognizing objects in images. Connected component labeling is used to separate characters so that they are easily recognizable. K-nearest neighbors is used to find the similarity between a query image and template images based on the classes of the nearest neighbors. This analytical study was tested using 100 test images. The top three classification results for Arabic (Indian) handwritten recognition using k-nearest neighbors (KNN) are 86% with k = 1, 84% with k = 3, and 83% with k = 5.
Kumar, Mukesh; Rath, Nitish Kumar; Rath, Santanu Kumar
2016-04-01
Microarray-based gene expression profiling has emerged as an efficient technique for classification, prognosis, diagnosis, and treatment of cancer. Frequent changes in the behavior of this disease generate an enormous volume of data. Microarray data satisfies both the veracity and velocity properties of big data, as it keeps changing with time. Therefore, the analysis of microarray datasets in a small amount of time is essential. Such datasets often contain a large number of expression values, but only a fraction of them correspond to significantly expressed genes. The precise identification of genes of interest that are responsible for causing cancer is imperative in microarray data analysis. Most existing schemes employ a two-phase process: feature selection/extraction followed by classification. In this paper, various statistical methods (tests) based on MapReduce are proposed for selecting relevant features. After feature selection, a MapReduce-based K-nearest neighbor (mrKNN) classifier is also employed to classify microarray data. These algorithms are successfully implemented in a Hadoop framework. A comparative analysis is done on these MapReduce-based models using microarray datasets of various dimensions. From the obtained results, it is observed that these models consume much less execution time than conventional models in processing big data. Copyright © 2016 Elsevier Inc. All rights reserved.
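The map/reduce decomposition behind such a classifier can be sketched in plain Python: mappers return per-shard top-k candidates and a reducer merges them and votes. Function names and the toy shards are illustrative, not the paper's mrKNN implementation.

```python
import heapq

def mapper(partition, query, k):
    """Emit the k nearest (distance, label) pairs within one data shard."""
    dists = [(sum((a - b) ** 2 for a, b in zip(x, query)), y)
             for x, y in partition]
    return heapq.nsmallest(k, dists)

def reducer(candidate_lists, k):
    """Merge shard candidates and vote among the global k nearest."""
    merged = heapq.nsmallest(k, (c for lst in candidate_lists for c in lst))
    votes = {}
    for _, label in merged:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Two toy shards of (feature vector, class) pairs
shard1 = [((0.0, 0.1), "healthy"), ((0.2, 0.0), "healthy")]
shard2 = [((5.0, 5.1), "tumor"), ((4.9, 5.0), "tumor"), ((0.1, 0.1), "healthy")]
query = (0.0, 0.0)
pred = reducer([mapper(s, query, 3) for s in (shard1, shard2)], 3)
print(pred)  # "healthy"
```

Because each mapper only ships k candidates, the reducer's work stays small no matter how many shards the dataset is split across, which is the point of the MapReduce formulation.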
Directory of Open Access Journals (Sweden)
E. E. Miandoab
2016-06-01
The inherent uncertainty of factors such as technology and creativity in evolving software development is a major challenge for the management of software projects. To address these challenges, the project manager must not only track project progress but also cope with problems such as increased operating costs, lack of resources, and failure to implement key activities, in order to plan the project better. Software Cost Estimation (SCE) models do not fully cover new approaches, and this lack of coverage causes problems for both consumers and producers. To avoid these problems, many methods have already been proposed. Model-based methods are the most familiar technique, but they rely on a single formula with constant values and are therefore not responsive to the rapid developments in software engineering. Accordingly, researchers have tried to solve the SCE problem using machine learning algorithms, data mining algorithms, and artificial neural networks. In this paper, a hybrid algorithm that combines Cuckoo Optimization (COA) and K-Nearest Neighbors (KNN) is used. The combined algorithm is run on six different data sets and evaluated against eight criteria. The results show improved accuracy of the estimated cost.
Testing spatial symmetry using contingency tables based on nearest neighbor relations.
Ceyhan, Elvan
2014-01-01
We consider two types of spatial symmetry, namely, symmetry in the mixed or shared nearest neighbor (NN) structures. We use Pielou's and Dixon's symmetry tests which are defined using contingency tables based on the NN relationships between the data points. We generalize these tests to multiple classes and demonstrate that both the asymptotic and exact versions of Pielou's first type of symmetry test are extremely conservative in rejecting symmetry in the mixed NN structure and hence should be avoided or only the Monte Carlo randomized version should be used. Under RL, we derive the asymptotic distribution for Dixon's symmetry test and also observe that the usual independence test seems to be appropriate for Pielou's second type of test. Moreover, we apply variants of Fisher's exact test on the shared NN contingency table for Pielou's second test and determine the most appropriate version for our setting. We also consider pairwise and one-versus-rest type tests in post hoc analysis after a significant overall symmetry test. We investigate the asymptotic properties of the tests, prove their consistency under appropriate null hypotheses, and investigate finite sample performance of them by extensive Monte Carlo simulations. The methods are illustrated on a real-life ecological data set.
Rane, Shantanu; Boufounos, Petros; Vetro, Anthony
2013-09-01
We propose a rate-efficient, feature-agnostic approach for encoding image features for cloud-based nearest neighbor search. We extract quantized random projections of the image features under consideration, transmit these to the cloud server, and perform matching in the space of the quantized projections. The advantage of this approach is that, once the underlying feature extraction algorithm is chosen for maximum discriminability and retrieval performance (e.g., SIFT, or eigen-features), the random projections guarantee a rate-efficient representation and fast server-based matching with negligible loss in accuracy. Using the Johnson-Lindenstrauss Lemma, we show that pair-wise distances between the underlying feature vectors are preserved in the corresponding quantized embeddings. We report experimental results of image retrieval on two image databases with different feature spaces; one using SIFT features and one using face features extracted using a variant of the Viola-Jones face recognition algorithm. For both feature spaces, quantized embeddings enable accurate image retrieval combined with improved bit-rate efficiency and speed of matching, when compared with the underlying feature spaces.
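The core encoding step, a Gaussian Johnson-Lindenstrauss projection followed by scalar quantization, can be sketched as follows; the dimensions and quantizer step are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 1024, 256                        # feature dim, embedding dim
A = rng.normal(0, 1 / np.sqrt(m), (m, d))   # JL projection, unit expected norm
step = 0.1                              # scalar quantizer step size

def embed(x):
    """Project a feature vector and quantize each coordinate."""
    return step * np.round(A @ x / step)

x = rng.normal(size=d)                  # a feature vector
y = x + 0.1 * rng.normal(size=d)        # a nearby feature vector
true_d = np.linalg.norm(x - y)
emb_d = np.linalg.norm(embed(x) - embed(y))
print(round(true_d, 3), round(emb_d, 3))
```

With this row normalization the expected squared embedded distance equals the original squared distance, so matching can run directly in the compact quantized domain.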
Gan, Yanglan; Guan, Jihong; Zhou, Shuigeng
2009-08-15
Identification of core promoters is a key clue in understanding gene regulation. However, due to the diverse nature of promoter sequences, the accuracy of existing prediction approaches for promoters not related to CpG islands (CGIs) is not as high as that for CGI-related promoters. This consequently leads to low genome-wide promoter prediction accuracy. In this article, we first systematically analyze the similarities and differences between the two types of promoters (CGI- and non-CGI-related) from a novel structural perspective, and then devise a unified framework, called PNNP (Pattern-based Nearest Neighbor search for Promoter), to predict both CGI- and non-CGI-related promoters based on their structural features. Our comparative analysis of the structural characteristics of promoters reveals two interesting facts: (i) the structural values of CGI- and non-CGI-related promoters are quite different, but they exhibit nearly similar structural patterns; (ii) the structural patterns of promoters are clearly different from those of non-promoter sequences, even though the sequences have almost similar structural values. Extensive experiments demonstrate that the proposed PNNP approach is effective in capturing the structural patterns of promoters, and can significantly improve the genome-wide performance of promoter prediction, especially non-CGI-related promoter prediction. The implementation of the program PNNP is available at http://admis.tongji.edu.cn/Projects/pnnp.aspx.
An RFID Indoor Positioning Algorithm Based on Bayesian Probability and K-Nearest Neighbor.
Xu, He; Ding, Ye; Li, Peng; Wang, Ruchuan; Li, Yizhu
2017-08-05
The Global Positioning System (GPS) is widely used in outdoor environmental positioning. However, GPS cannot support indoor positioning because there is no signal for positioning in an indoor environment. Nowadays, there are many situations which require indoor positioning, such as searching for a book in a library, looking for luggage in an airport, emergency navigation for fire alarms, robot location, etc. Many technologies, such as ultrasonic, sensors, Bluetooth, WiFi, magnetic field, Radio Frequency Identification (RFID), etc., are used to perform indoor positioning. Compared with other technologies, RFID used in indoor positioning is more cost- and energy-efficient. The traditional RFID indoor positioning algorithm LANDMARC utilizes a Received Signal Strength (RSS) indicator to track objects. However, the RSS value is easily affected by environmental noise and other interference. In this paper, our purpose is to reduce the location fluctuation and error caused by multipath and environmental interference in LANDMARC. We propose a novel indoor positioning algorithm based on Bayesian probability and K-Nearest Neighbor (BKNN). The experimental results show that the Gaussian filter can filter out some abnormal RSS values. The proposed BKNN algorithm has the smallest location error compared with the Gaussian-based algorithm, LANDMARC, and an improved KNN algorithm. The average error in location estimation is about 15 cm using our method.
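A sketch of the two ingredients, Gaussian outlier filtering of RSS samples and LANDMARC-style weighted-KNN position estimation, is below. The paper's Bayesian weighting is not reproduced, and the toy propagation model (RSS proxied by reader distance) is an assumption for illustration.

```python
import numpy as np

def gaussian_filter(samples, n_sigma=2.0):
    """Average RSS after dropping samples beyond n_sigma std-devs."""
    mu, sd = samples.mean(), samples.std()
    return samples[np.abs(samples - mu) <= n_sigma * sd].mean()

def knn_locate(ref_rss, ref_pos, tag_rss, k=3):
    """Average positions of the k reference tags nearest in RSS space."""
    e = np.linalg.norm(ref_rss - tag_rss, axis=1)   # RSS-space distances
    idx = np.argsort(e)[:k]
    w = 1.0 / (e[idx] ** 2 + 1e-9)                  # closer tags weigh more
    return (w[:, None] * ref_pos[idx]).sum(axis=0) / w.sum()

# Outlier filtering: one corrupted reading among ten
samples = np.array([-48.0, -47.5, -48.2, -47.9, -48.1,
                    -47.8, -48.3, -47.6, -48.0, -70.0])
print(round(gaussian_filter(samples), 2))           # the -70.0 sample is rejected

# 3x3 grid of reference tags read by two readers
readers = np.array([[0.0, 0.0], [3.0, 0.0]])
ref_pos = np.array([[i, j] for i in range(3) for j in range(3)], float)
ref_rss = np.linalg.norm(ref_pos[:, None] - readers[None], axis=2)
tag_rss = np.linalg.norm(np.array([1.2, 1.1]) - readers, axis=1)
print(knn_locate(ref_rss, ref_pos, tag_rss))        # within grid spacing of (1.2, 1.1)
```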
CATEGORIZATION OF GELAM, ACACIA AND TUALANG HONEY ODOR PROFILE USING K-NEAREST NEIGHBORS
Directory of Open Access Journals (Sweden)
Nurdiyana Zahed
2018-02-01
Honey authenticity with respect to honey type is an issue of great importance and interest in agriculture. Several specific types of honey have their own uses in the medical field. However, it is quite challenging to classify different types of honey with the naked eye. This work demonstrates a successful electronic nose (E-nose) application as an instrument for identifying the odor profile patterns of three common honeys in Malaysia (Gelam, Acacia, and Tualang honey). The E-nose produces odor-measurement signals in the form of numeric resistance (Ω). The readings were pre-processed using a normalization technique to standardize the scale of the features. Mean features were extracted, and boxplots were used as the statistical tool to present the data patterns for the three types of honey. The extracted mean features were fed into a K-Nearest Neighbors classifier as input features and evaluated using several splitting ratios. Excellent results were obtained, showing 100% classification accuracy, sensitivity, and specificity for KNN with k = 1, a 90:10 splitting ratio, and Euclidean distance. The findings confirm the ability of the KNN classifier to classify different honey types from E-nose calibration. Outperforming other classifiers, KNN required less parameter optimization and achieved promising results.
Ronald E. McRoberts; Grant M. Domke; Qi Chen; Erik Næsset; Terje Gobakken
2016-01-01
The relatively small sampling intensities used by national forest inventories are often insufficient to produce the desired precision for estimates of population parameters unless the estimation process is augmented with auxiliary information, usually in the form of remotely sensed data. The k-Nearest Neighbors (k-NN) technique is a non-parametric, multivariate approach...
Wang, Xueyi
2012-02-08
The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds the nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high-dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the cluster nearest to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction in distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree-based k-NN algorithm for all datasets and better than a ball-tree-based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high-dimensional spaces.
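The kMkNN idea for the 1-NN case, k-means preprocessing plus triangle-inequality pruning, can be sketched as follows; this is a simplified illustration of the two stages, not the authors' implementation.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd k-means, used only to preprocess the training set."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(1)
    return centers, assign

def kmknn_1nn(X, centers, assign, d_to_center, q):
    """Exact 1-NN: visit clusters nearest-first, and skip any point whose
    triangle-inequality lower bound |d(q,c) - d(x,c)| beats the best so far."""
    d_qc = np.linalg.norm(centers - q, axis=1)
    best_i, best_d = -1, np.inf
    for c in np.argsort(d_qc):
        for i in np.where(assign == c)[0]:
            if abs(d_qc[c] - d_to_center[i]) >= best_d:
                continue                    # pruned: cannot be closer
            d = np.linalg.norm(X[i] - q)
            if d < best_d:
                best_i, best_d = i, d
    return best_i

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
centers, assign = kmeans(X, 10)
d_to_center = np.linalg.norm(X - centers[assign], axis=1)  # precomputed once
q = rng.normal(size=8)
exact = np.linalg.norm(X - q, axis=1).argmin()  # brute-force answer
print(kmknn_1nn(X, centers, assign, d_to_center, q) == exact)
```

The pruning bound is a valid lower bound on d(q, x) by the triangle inequality, so the search remains exact while skipping most distance evaluations.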
Directory of Open Access Journals (Sweden)
Jyuo-Min Shyu
2010-11-01
A great deal of work has been done to develop techniques for odor analysis by electronic nose systems. These analyses mostly focus on identifying a particular odor by comparison with a known odor dataset. However, in many situations it would be more practical if each individual odorant could be determined directly. This paper proposes two methods for such odor component analysis for electronic nose systems. First, a K-nearest neighbor (KNN)-based local weighted nearest neighbor (LWNN) algorithm is proposed to determine the components of an odor. According to the component analysis, the odor training data is first categorized into several groups, each of which is represented by its centroid. The examined odor is then classified as the class of the nearest centroid. The distance between the examined odor and the centroid is calculated based on a weighting scheme, which captures the local structure of each predefined group. To further determine the concentration of each component, odor models are built by regression. Then, a weighted and constrained least-squares (WCLS) method is proposed to estimate the component concentrations. Experiments were carried out to assess the effectiveness of the proposed methods. The LWNN algorithm is able to classify mixed odors with different mixing ratios, while the WCLS method can provide good estimates of component concentrations.
Xia, Wenjun; Mita, Yoshio; Shibata, Tadashi
2016-05-01
Aiming at efficient data condensation and improved accuracy, this paper presents a hardware-friendly template reduction (TR) method for nearest neighbor (NN) classifiers by introducing the concept of critical boundary vectors. A hardware system is also implemented to demonstrate the feasibility of using a field-programmable gate array (FPGA) to accelerate the proposed method. Initially, k-means centers are used as substitutes for the entire template set. Then, to enhance the classification performance, critical boundary vectors are selected by a novel learning algorithm that completes within a single iteration. Moreover, to remove noisy boundary vectors that can mislead the classification in a generalized manner, a global categorization scheme has been explored and applied to the algorithm. The global categorization automatically categorizes each classification problem and rapidly selects the boundary vectors according to the nature of the problem. Finally, only the critical boundary vectors and k-means centers are used as the new template set for classification. Experimental results for 24 data sets show that the proposed algorithm can effectively reduce the number of template vectors for classification with a high learning speed. At the same time, it improves the accuracy by an average of 2.17% compared with traditional NN classifiers and also shows greater accuracy than seven other TR methods. We have shown the feasibility of using a proof-of-concept FPGA system of 256 64-D vectors to accelerate the proposed method in hardware. At a 50-MHz clock frequency, the proposed system achieves a 3.86 times higher learning speed than a 3.4-GHz PC, while consuming only 1% of the power used by the PC.
Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds
Directory of Open Access Journals (Sweden)
Chin-Hsing Chen
2015-06-01
A reported 30% of people worldwide have abnormal lung sounds, including crackles, rhonchi, and wheezes. To date, the traditional stethoscope remains the most popular tool used by physicians to diagnose such abnormal lung sounds; however, many problems arise with the use of a stethoscope, including the effects of environmental noise, the inability to record and store lung sounds for follow-up or tracking, and the physician's subjective diagnostic experience. This study developed a digital stethoscope to help physicians overcome these problems when diagnosing abnormal lung sounds. In this digital system, mel-frequency cepstral coefficients (MFCCs) were used to extract the features of lung sounds, the K-means algorithm was then used for feature clustering to reduce the amount of data for computation, and finally the K-nearest neighbor method was used to classify the lung sounds. The proposed system can also be used for home care: if the percentage of abnormal lung sound frames exceeds 30% of the whole test signal, the system automatically warns the user to visit a physician for diagnosis. We also used bend sensors together with an amplification circuit, Bluetooth, and a microcontroller to implement a respiration detector. The respiratory signal extracted by the bend sensors can be transmitted to the computer via Bluetooth to calculate the respiratory cycle for real-time assessment. If an abnormal status is detected, the device warns the user automatically. Experimental results indicated that the error in respiratory cycles between measured and actual values was only 6.8%, illustrating the potential of our detector for home care applications.
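The decision stage described above (frame-wise nearest-centroid classification followed by the 30% warning rule) can be sketched with synthetic stand-ins for the MFCC features:

```python
import numpy as np

def classify_frames(frames, centroids, labels):
    """Label each frame with the class of its nearest feature centroid
    (KNN with k = 1 against K-means-style cluster centers)."""
    d = np.linalg.norm(frames[:, None] - centroids[None], axis=2)
    return labels[d.argmin(axis=1)]

centroids = np.array([[0.0, 0.0], [5.0, 5.0]])  # normal / abnormal clusters
labels = np.array([0, 1])                       # 0 = normal, 1 = abnormal

# Synthetic 2-D "MFCC" frames: 60 normal-like, 40 abnormal-like
rng = np.random.default_rng(0)
frames = np.vstack([rng.normal(0, 1, (60, 2)),
                    rng.normal(5, 1, (40, 2))])
pred = classify_frames(frames, centroids, labels)
ratio = (pred == 1).mean()
print("abnormal ratio %.2f ->" % ratio, "warn" if ratio > 0.30 else "ok")
```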
Djoufack, Z I; Tala-Tebue, E; Nguenang, J P; Kenfack-Jiotsa, A
2016-10-01
We report in this work an analytical study of quantum solitons in 1D Heisenberg spin chains with Dzyaloshinsky-Moriya Interaction (DMI) and Next-Nearest-Neighbor Interactions (NNNI). By means of the time-dependent Hartree approximation and the semi-discrete multiple-scale method, the equation of motion for the single-boson wave function is reduced to the nonlinear Schrödinger equation. The present study shows that, in the presence of NNNI, the frequency spectrum increases and its periodicity changes. The antisymmetric feature of the DMI was probed from the dispersion curve by changing the sign of the parameter controlling it. Five regions were identified in the dispersion spectrum when the NNNI are taken into account, instead of three when they are not. In each of these regions, the quantum model can exhibit quantum stationary localized and stable bright or dark soliton solutions, and in each region we could set up quantum localized n-boson Hartree states as well as the analytical expression of their energy levels. The accuracy of the analytical studies is confirmed by excellent agreement with the numerical calculations, which certifies the stability of the stationary quantum localized soliton solutions exhibited in each region. In addition, we found that the intensity of the localization of quantum localized n-boson Hartree states increases when the NNNI are considered. We also found that the intensity of Hartree n-boson states corresponding to quantum discrete soliton states depends on the wave vector.
Directory of Open Access Journals (Sweden)
Y. Erfanifard
2014-03-01
Full Text Available The ecological relationship between trees is important in the sustainable management of forests. To study this relationship in spatial ecology, different indices based on the distance to the nearest neighbor are applied. The aim of this research was to introduce important indices based on nearest neighbor analysis and to apply them to the investigation of the ecological relationship between Persian oak coppice trees in Zagros forests. A completely homogeneous 9 ha plot of these forests in Kohgilouye-BoyerAhmad province was selected. This plot was covered with Persian oak coppice trees, whose point map was obtained by registering their spatial locations. Five nearest neighbor indices, G(r), F(r), J(r), GF(r), and CE, were then applied to study the spatial pattern and relationship of these trees. The results showed that the Persian oak coppice trees were located regularly in the homogeneous plot and were not ecologically dependent. These trees were independent and did not affect the establishment of each other.
A Hierarchical Multi-Output Nearest Neighbor Model for Multi-Output Dependence Learning
Morris, Richard G.; Martinez, Tony; Smith, Michael R.
2014-01-01
Multi-Output Dependence (MOD) learning is a generalization of standard classification problems that allows for multiple outputs that are dependent on each other. A primary issue that arises in the context of MOD learning is that for any given input pattern there can be multiple correct output patterns. This changes the learning task from function approximation to relation approximation. Previous algorithms do not consider this problem, and thus cannot be readily applied to MOD problems. To pe...
NC Machine Tools Fault Diagnosis Based on Kernel PCA and k-Nearest Neighbor Using Vibration Signals
Directory of Open Access Journals (Sweden)
Zhou Yuqing
2015-01-01
Full Text Available This paper focuses on fault diagnosis for NC machine tools and puts forward a fault diagnosis method based on kernel principal component analysis (KPCA) and k-nearest neighbor (kNN). A data-dependent KPCA based on the covariance matrix of the sample data is designed to overcome the subjectivity in parameter selection for the kernel function and is used to transform original high-dimensional data into a low-dimensional manifold feature space with the intrinsic dimensionality. The kNN method is modified for tool fault diagnosis so that it can determine thresholds for multiple fault classes, and it is applied to detect potential faults. An experimental analysis on NC milling machine tools was conducted; the testing results show that the proposed method outperforms the other two methods in tool fault diagnosis.
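A minimal sketch of a kNN variant with per-class distance thresholds for fault detection, in the spirit of the modified kNN above; the threshold rule (largest within-class k-NN distance) and the toy vibration features are assumptions for illustration, not the paper's exact formulation:

```python
import math

def class_thresholds(train, k=2):
    """For each class, the largest k-NN distance among its own samples
    serves as that class's acceptance threshold."""
    thr = {}
    for label in set(lbl for _, lbl in train):
        pts = [p for p, lbl in train if lbl == label]
        worst = 0.0
        for p in pts:
            dists = sorted(math.dist(p, q) for q in pts if q != p)
            worst = max(worst, dists[min(k, len(dists)) - 1])
        thr[label] = worst
    return thr

def diagnose(train, thr, x, k=2):
    """Assign x to the nearest class if within its threshold, else flag a fault."""
    best_label, best_d = None, float("inf")
    for label in thr:
        pts = [p for p, lbl in train if lbl == label]
        d = sorted(math.dist(x, p) for p in pts)[min(k, len(pts)) - 1]
        if d < best_d:
            best_label, best_d = label, d
    return best_label if best_d <= thr[best_label] else "potential fault"

# Toy 2-D feature vectors for two known machine conditions.
train = [((0.0, 0.0), "normal"), ((0.1, 0.1), "normal"), ((0.2, 0.0), "normal"),
         ((2.0, 2.0), "wear"), ((2.1, 1.9), "wear"), ((1.9, 2.1), "wear")]
thr = class_thresholds(train)
print(diagnose(train, thr, (0.05, 0.05)))  # inside the "normal" region
print(diagnose(train, thr, (5.0, 5.0)))    # far from all known classes
```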
International Nuclear Information System (INIS)
Hernandez-Quiroz, Saul; Benet, Luis
2010-01-01
We study the nearest-neighbor distributions of the k-body embedded ensembles of random matrices for n bosons distributed over two degenerate single-particle states. This ensemble, as a function of k, displays a transition from harmonic-oscillator behavior (k=1) to random-matrix-type behavior (k=n). We show that a large and robust quasidegeneracy is present for a wide interval of values of k when the ensemble is time-reversal invariant. These quasidegenerate levels are Shnirelman doublets, which appear due to the integrability and time-reversal invariance of the underlying classical systems. We present results on the frequency of these degenerate levels in the spectrum in terms of k and discuss the statistical properties of the splittings of these doublets.
Owczarzy, R; Vallone, P M; Goldstein, R F; Benight, A S
1999-01-01
Melting experiments were conducted on 22 DNA dumbbells as a function of solvent ionic strength from 25-115 mM Na(+). The dumbbell molecules have short duplex regions comprised of 16-20 base pairs linked on both ends by T(4) single-strand loops. Only the 4-8 central base pairs of the dumbbell stems differ for different molecules, and the six base pairs on both sides of the central sequence and adjoining loops on both ends are the same in every molecule. Results of melting analysis on the 22 new DNA dumbbells are combined with our previous results on 17 other DNA dumbbells, with stem lengths containing from 14-18 base pairs, reported in the first article of this series (Doktycz, Goldstein, Paner, Gallo, and Benight, Biopoly 32, 1992, 849-864). The combination of results comprises a database of optical melting parameters for 39 DNA dumbbells in ionic strengths from 25-115 mM Na(+). This database is employed to evaluate the thermodynamics of singlet, doublet, and triplet sequence-dependent interactions in duplex DNA. Analysis of the 25 mM Na(+) data reveals the existence of significant sequence-dependent triplet or next-nearest-neighbor interactions. The enthalpy of these interactions is evaluated for all possible triplets. Some of the triplet enthalpy values are less than the uncertainty in their evaluation, indicating no measurable interaction for that particular sequence. This finding suggests that the thermodynamic stability of duplex DNA depends on solvent ionic strength in a sequence-dependent manner. As a part of the analysis, the nearest-neighbor (base pair doublet) interactions in 55, 85, and 115 mM Na(+) are also reevaluated from the larger database. Copyright 2000 John Wiley & Sons, Inc.
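The nearest-neighbor (doublet) decomposition discussed above predicts a duplex property as a sum of sequence-dependent doublet terms along the stem. A minimal sketch, with made-up illustrative enthalpy values rather than the fitted parameters from this study:

```python
# Illustrative (not fitted) nearest-neighbor doublet enthalpies, kcal/mol.
# The ten keys cover all unique doublets; others map via reverse complement.
NN_DH = {"AA": -7.9, "AT": -7.2, "TA": -7.2, "CA": -8.5,
         "GT": -8.4, "CT": -7.8, "GA": -8.2, "CG": -10.6,
         "GC": -9.8, "GG": -8.0}

COMP = str.maketrans("ACGT", "TGCA")

def doublet_key(d):
    """Look up a doublet directly or via its reverse complement."""
    return d if d in NN_DH else d.translate(COMP)[::-1]

def duplex_dh(seq):
    """Sum doublet terms over a duplex stem sequence (5'->3' top strand)."""
    return sum(NN_DH[doublet_key(seq[i:i+2])] for i in range(len(seq) - 1))

print(round(duplex_dh("ATGC"), 1))  # AT + TG(->CA) + GC = -7.2 - 8.5 - 9.8
```

A next-nearest-neighbor (triplet) model, as evaluated in the article, would add a second sum over `seq[i:i+3]` terms on top of this doublet sum.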
Manganaro, Alberto; Pizzo, Fabiola; Lombardo, Anna; Pogliaghi, Alberto; Benfenati, Emilio
2016-02-01
The ability of a substance to resist degradation and persist in the environment needs to be readily identified in order to protect the environment and human health. Many regulations require the assessment of persistence for substances commonly manufactured and marketed. Besides laboratory-based testing methods, in silico tools may be used to obtain a computational prediction of persistence. We present a new program to develop k-Nearest Neighbor (k-NN) models. The k-NN algorithm is a similarity-based approach that predicts the property of a substance from the experimental data for its most similar compounds. We employed this software to identify persistence in the sediment compartment. Data on half-life (HL) in sediment were obtained from different sources and, after careful data pruning, the final dataset, containing 297 organic compounds, was divided into four experimental classes. We developed several models giving satisfactory performances, considering that both the training and test set accuracy ranged between 0.90 and 0.96. We finally selected one model, which will be made available in the near future in the freely available software platform VEGA. This model offers a valuable in silico tool that may be useful for fast and inexpensive screening. Copyright © 2015 Elsevier Ltd. All rights reserved.
Renormalization-group studies of antiferromagnetic chains. I. Nearest-neighbor interactions
International Nuclear Information System (INIS)
Rabin, J.M.
1980-01-01
The real-space renormalization-group method introduced by workers at the Stanford Linear Accelerator Center (SLAC) is used to study one-dimensional antiferromagnetic chains at zero temperature. Calculations using three-site blocks (for the Heisenberg-Ising model) and two-site blocks (for the isotropic Heisenberg model) are compared with exact results. In connection with the two-site calculation a duality transformation is introduced under which the isotropic Heisenberg model is self-dual. Such duality transformations can be defined for models other than those considered here, and may be useful in various block-spin calculations.
Self-consistent-field calculations of proteinlike incorporations in polyelectrolyte complex micelles
Lindhoud, Saskia; Cohen Stuart, Martinus Abraham; Norde, Willem; Leermakers, Frans A.M.
2009-01-01
Self-consistent field theory is applied to model the structure and stability of polyelectrolyte complex micelles with incorporated protein (molten globule) molecules in the core. The electrostatic interactions that drive the micelle formation are mimicked by nearest-neighbor interactions using
Liu, Da-You; Chen, Hui-Ling; Yang, Bo; Lv, Xin-En; Li, Li-Na; Liu, Jie
2012-10-01
In this paper, we present an enhanced fuzzy k-nearest neighbor (FKNN) classifier based computer-aided diagnostic (CAD) system for thyroid disease. The neighborhood size k and the fuzzy strength parameter m in the FKNN classifier are adaptively specified by the particle swarm optimization (PSO) approach. The adaptive control parameters, including time-varying acceleration coefficients (TVAC) and a time-varying inertia weight (TVIW), are employed to efficiently control the local and global search ability of the PSO algorithm. In addition, we have validated the effectiveness of principal component analysis (PCA) in constructing a more discriminative subspace for classification. The effectiveness of the resultant CAD system, termed PCA-PSO-FKNN, has been rigorously evaluated against the thyroid disease dataset, which is commonly used among researchers who use machine learning methods for thyroid disease diagnosis. Compared to the existing methods in previous studies, the proposed system has achieved the highest classification accuracy reported so far via 10-fold cross-validation (CV) analysis, with a mean accuracy of 98.82% and a maximum accuracy of 99.09%. Promisingly, the proposed CAD system might serve as a new candidate among powerful tools for diagnosing thyroid disease with excellent performance.
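The FKNN classifier at the core of this CAD system weights each neighbor's class vote by an inverse power of its distance, controlled by the fuzzy strength m. A minimal sketch (fixed k and m rather than PSO-tuned values, and toy 2-D features standing in for PCA components):

```python
import math

def fuzzy_knn(train, x, k=3, m=2.0):
    """Fuzzy k-NN: class memberships weighted by inverse distance^(2/(m-1))."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    weights = {}
    for p, label in nearest:
        w = 1.0 / (math.dist(p, x) ** (2.0 / (m - 1.0)) + 1e-9)  # avoid div by 0
        weights[label] = weights.get(label, 0.0) + w
    total = sum(weights.values())
    return {label: w / total for label, w in weights.items()}  # memberships sum to 1

train = [((0.0, 0.0), "healthy"), ((0.1, 0.1), "healthy"), ((1.0, 1.0), "sick")]
memberships = fuzzy_knn(train, (0.05, 0.05))
print(max(memberships, key=memberships.get))  # "healthy" dominates
```

Unlike crisp kNN, the output is a membership degree per class, which is what lets PSO tune k and m against a smooth cross-validation objective.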
Directory of Open Access Journals (Sweden)
Muh Subhan
2018-01-01
Full Text Available Radical content, in procedural terms, is content that provokes violence, spreads hatred, or promotes anti-nationalism. The definition of radical differs for each country; in Indonesia in particular, radical content is most closely identified with provocation and ethnic and religious hatred, known in Indonesian as SARA. SARA content is very difficult to detect because of its large volume and unstructured form, and because noise can lead to multiple interpretations. This problem can threaten the unity and harmony of religious communities. Given this situation, a system is required that can distinguish radical content from non-radical content. In this system, we propose a text mining approach using a DF threshold and Human Brain as the feature extraction. The system is divided into several steps: collecting data (including preprocessing), text mining, feature selection, classification to group the data by class label, similarity calculation against the training data, and visualization of radical or non-radical content. The experimental results show that combining 10-fold cross-validation with k-Nearest Neighbor (kNN) classification achieves 66.37% accuracy with k = 7 [1].
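The DF-threshold feature selection step mentioned above keeps only terms that occur in at least a minimum number of documents, discarding rare noise terms before classification. A sketch on toy documents (the documents and threshold value are illustrative assumptions):

```python
def df_select(docs, min_df=2):
    """Keep terms whose document frequency meets the DF threshold."""
    df = {}
    for doc in docs:
        for term in set(doc.split()):   # count each term once per document
            df[term] = df.get(term, 0) + 1
    return {t for t, n in df.items() if n >= min_df}

docs = ["provokes violence hatred", "spreads hatred online", "neutral news report"]
print(sorted(df_select(docs)))  # only "hatred" appears in >= 2 documents
```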
Directory of Open Access Journals (Sweden)
Asaad Mahdi
2011-05-01
Full Text Available The main objective of this paper is to investigate the performance of fuzzy disease diagnosis by comparing its results with two statistical classification methods used in the diagnosis of diseases namely the K-Nearest Neighbor and the Naïve Bayes classifiers. The comparisons were made using
the latest XLMiner® and Medcalc® statistical software packages. The first step was using fuzzy relations, such as the occurrence relation and the confirmability relation, on a sample of 149 patients suffering from chicken pox, dengue, and flu, taken from different general and private hospitals and clinics in Kuala Lumpur, to diagnose the three diseases. Fourteen symptoms were used in the diagnoses: high fever, headache, nausea, vomiting, rash, joint pain, muscle pain, bleeding, loss of appetite, diarrhea, cough, sore throat, abdominal pain, and runny nose. The second step was using the K-Nearest Neighbor classification method and the Naïve Bayes classification method on the same sample to diagnose the three diseases. The final step was the comparison between the three methods using performance tests and the McNemar and Kappa tests. The result of the comparison showed that fuzzy diagnosis outperforms the other two methods in disease diagnosis.
Directory of Open Access Journals (Sweden)
Dewi Nurdiyah
2016-06-01
Full Text Available Fertility testing of eggs is a step that must be performed when attempting to hatch eggs. Fertility is usually tested by egg candling, the purpose of which is to separate fertile eggs (eggs containing embryos) from infertile eggs (eggs without embryos). Fertile eggs are then placed into an incubator for hatching, while infertile eggs can be used for consumption. However, there are obstacles in the egg-sorting process: it is not time-efficient, and human vision is not accurate enough to distinguish fertile from infertile eggs. To overcome this problem, computer vision technology, which works on the same principle as human vision, can be used to identify an object based on certain characteristics so that the object can be classified. The aim of this study was to compare the classification of images of fertile and infertile eggs by the Support Vector Machine (SVM) algorithm and the K-Nearest Neighbor algorithm, based on input from blood-spot and blood-vessel texture analysis with the Gray Level Co-occurrence Matrix (GLCM). The eggs studied were 6 days old. The proposed method is expected to be an appropriate method for classifying images of fertile and infertile eggs.
Yang, Dongzheng; Hu, Xixi; Zhang, Dong H.; Xie, Daiqian
2018-02-01
Solving the time-independent close-coupling equations of a diatom-diatom inelastic collision system using the rigorous close-coupling approach is numerically difficult because of its expensive matrix manipulation. The coupled-states approximation decouples the centrifugal matrix by neglecting the important Coriolis couplings completely. In this work, a new approximation method based on the coupled-states approximation is presented and applied to time-independent quantum dynamics calculations. This approach considers only the most important Coriolis coupling with the nearest neighbors and ignores weaker Coriolis couplings with farther K channels. As a result, it reduces the computational costs without a significant loss of accuracy. Numerical tests for para-H2+ortho-H2 and para-H2+HD inelastic collisions were carried out, and the results showed that the improved method dramatically reduces the errors due to the neglect of the Coriolis couplings in the coupled-states approximation. This strategy should be useful in the quantum dynamics of other systems.
Ronald E. McRoberts; Steen Magnussen; Erkki O. Tomppo; Gherardo. Chirici
2011-01-01
Nearest neighbors techniques have been shown to be useful for estimating forest attributes, particularly when used with forest inventory and satellite image data. Published reports of positive results have been truly international in scope. However, for these techniques to be more useful, they must be able to contribute to scientific inference which, for sample-based...
International Nuclear Information System (INIS)
Hu, Chao; Jain, Gaurav; Zhang, Puqiang; Schmidt, Craig; Gomadam, Parthasarathy; Gorka, Tom
2014-01-01
Highlights: • We develop a data-driven method for battery capacity estimation. • Five charge-related features that are indicative of the capacity are defined. • The kNN regression model captures the dependency of the capacity on the features. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance by a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure Li-ion batteries in these devices operate reliably, it is important to be able to assess the battery health condition by estimating the battery capacity over the life-time. This paper presents a data-driven method for estimating the capacity of a Li-ion battery based on the charge voltage and current curves. The contributions of this paper are three-fold: (i) the definition of five characteristic features of the charge curves that are indicative of the capacity, (ii) the development of a non-linear kernel regression model, based on k-nearest neighbor (kNN) regression, that captures the complex dependency of the capacity on the five features, and (iii) the adaptation of particle swarm optimization (PSO) to finding the optimal combination of feature weights for creating a kNN regression model that minimizes the cross-validation (CV) error in the capacity estimation. Verification with 10 years’ continuous cycling data suggests that the proposed method is able to accurately estimate the capacity of a Li-ion battery throughout its whole life-time.
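Contribution (ii) above, a kNN regression whose distance metric applies per-feature weights, can be sketched as below. The features, weights, and capacity values are hypothetical (in the paper the weights are found by PSO to minimize CV error):

```python
import math

def weighted_knn_regress(train, x, w, k=3):
    """kNN regression with per-feature weights in the distance metric."""
    def dist(p):
        return math.sqrt(sum(wi * (pi - xi) ** 2 for wi, pi, xi in zip(w, p, x)))
    nearest = sorted(train, key=lambda t: dist(t[0]))[:k]
    return sum(y for _, y in nearest) / k   # unweighted mean of k neighbors

# Hypothetical (feature_vector, capacity) pairs; feature 1 is informative,
# feature 2 is noise, so it is given a near-zero weight.
train = [((0.1, 9.0), 1.00), ((0.2, 1.0), 0.95), ((0.3, 5.0), 0.90),
         ((0.8, 2.0), 0.60), ((0.9, 8.0), 0.55), ((1.0, 4.0), 0.50)]
weights = (1.0, 0.001)
print(weighted_knn_regress(train, (0.15, 3.0), weights))  # ~0.95
```

With equal weights the noisy second feature would pull in wrong neighbors; down-weighting it recovers a sensible capacity estimate, which is exactly what the PSO weight search automates.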
Directory of Open Access Journals (Sweden)
Mingju E
Full Text Available Extra-pair copulation is considered to be a means by which females can modify their initial mate choice, and females might obtain indirect benefits to offspring fitness by engaging in this behavior. Here, we examined the patterns of extra-pair paternity and female preferences in the yellow-rumped flycatcher (Ficedula zanthopygia). We found that female yellow-rumped flycatchers are more likely to choose larger and relatively highly heterozygous males than their social mates as extra-pair mates, that the genetic similarity of pairs that produced mixed-paternity offspring did not differ from the similarity of pairs producing only within-pair offspring, and that extra-pair offspring were more heterozygous than their half-siblings. These findings support the good genes hypothesis but do not exclude the compatibility hypothesis. Most female yellow-rumped flycatchers attained extra-pair paternity with distant males rather than their nearest accessible neighboring males, and no differences in genetic and phenotypic characteristics were detected between cuckolded males and their nearest neighbors. There was no evidence that extra-pair mating by female flycatchers reduced inbreeding. Moreover, breeding density, breeding synchrony and their interaction did not affect the occurrence of extra-pair paternity in this species. Our results suggest that the variation in extra-pair paternity distribution between nearest neighbors in some passerine species might result from female preference for highly heterozygous males.
Surmach, M. A.; Chen, B. J.; Deng, Z.; Jin, C. Q.; Glasbrenner, J. K.; Mazin, I. I.; Ivanov, A.; Inosov, D. S.
2018-03-01
Dilute magnetic semiconductors (DMS) are nonmagnetic semiconductors doped with magnetic transition metals. The recently discovered DMS material (Ba1-xKx)(Zn1-yMny)2As2 offers a unique and versatile control of the Curie temperature TC by decoupling the spin (Mn2+, S = 5/2) and charge (K+) doping in different crystallographic layers. In an attempt to describe from first-principles calculations the role of hole doping in stabilizing ferromagnetic order, it was recently suggested that the antiferromagnetic exchange coupling J between the nearest-neighbor Mn ions would experience a nearly twofold suppression upon doping 20% of holes by potassium substitution. At the same time, further-neighbor interactions become increasingly ferromagnetic upon doping, leading to a rapid increase of TC. Using inelastic neutron scattering, we have observed a localized magnetic excitation at about 13 meV associated with the destruction of the nearest-neighbor Mn-Mn singlet ground state. Hole doping results in a notable broadening of this peak, evidencing significant particle-hole damping, but with only a minor change in the peak position. We argue that this unexpected result can be explained by a combined effect of superexchange and double-exchange interactions.
Shariq, Ahmed
2012-01-01
A next-nearest-neighbor evaluation procedure applied to atom probe tomography data provides distributions of the distances between atoms. The width of these distributions for the metallic glasses studied so far is a few Angstrom, reflecting the spatial resolution of the analytical technique. However, fitting Gaussian distributions to the distribution of atomic distances yields average distances with statistical uncertainties of 2 to 3 hundredths of an Angstrom. Fe40Ni40B20 metallic glass ribbons are characterized this way in the as-quenched state and for a state heat treated at 350 °C for 1 h, revealing a change in the structure on the sub-nanometer scale. By applying the statistical tool of the χ2 test, a slight deviation from a random distribution of B atoms in the as-quenched sample is perceived, whereas a pronounced elemental inhomogeneity of boron is detected for the annealed state. In addition, the distance distribution of the first fifteen atomic neighbors is determined by using this algorithm for both the annealed and as-quenched states. The next-neighbor evaluation algorithm evinces a steric periodicity of the atoms when the next-neighbor distances are normalized by the first next-neighbor distance. A comparison of the nearest neighbor atomic distributions for the as-quenched and annealed states shows accumulation of Ni and B. Moreover, it also reveals the tendency of Fe and B to move slightly away from each other, an incipient step toward Ni-rich boride formation. © 2011 Elsevier B.V.
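The core of such a next-neighbor evaluation is computing, for every atom, the distance to its nearest neighbor and accumulating the distribution. A brute-force sketch on a toy point cloud (a real atom-probe dataset with millions of atoms would need a spatial index such as a k-d tree):

```python
import math

def nn_distances(points):
    """First-nearest-neighbor distance for every point in a 3-D point cloud."""
    out = []
    for i, p in enumerate(points):
        out.append(min(math.dist(p, q) for j, q in enumerate(points) if j != i))
    return out

# Toy atom positions (Angstrom); three clustered atoms plus one far outlier.
atoms = [(0.0, 0.0, 0.0), (2.5, 0.0, 0.0), (0.0, 2.5, 0.0), (10.0, 10.0, 10.0)]
d = nn_distances(atoms)
print([round(x, 2) for x in d])  # the outlier's NN distance is much larger
```

Fitting a Gaussian to the histogram of these distances, as in the study, then yields mean neighbor spacings with uncertainties far below the raw distribution width.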
Datta, A.; Banerjee, S.; Finley, A.O.; Hamm, N.A.S.; Schaap, M.
2016-01-01
Particulate matter (PM) is a class of malicious environmental pollutants known to be detrimental to human health. Regulatory efforts aimed at curbing PM levels in different countries often require high resolution space–time maps that can identify red-flag regions exceeding statutory concentration
Iskin, M.
2016-01-01
We consider a two-component Fermi gas with attractive interactions on a square optical lattice, and study the interplay of Zeeman field, spin-orbit coupling, and next-nearest-neighbor hopping on the ground-state phase diagrams in the entire BCS-BEC evolution. In particular, we first classify and distinguish all possible superfluid phases by the momentum-space topology of their zero-energy quasiparticle-quasihole excitations, and then numerically establish a plethora of quantum phase transitions in between. These transitions are further signaled and evidenced by the changes in the corresponding topological invariant of the system, i.e., its Chern number. Lastly, we find that the superfluid phase exhibits a reentrant structure, separated by a fingering normal phase, the origin of which is traced back to the changes in the single-particle density of states.
Energy Technology Data Exchange (ETDEWEB)
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique is used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps obtained by scanning radiological survey instruments.
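Nearest-neighbor averaging replaces each measurement in the scan map with the mean over its neighborhood, reducing variance and hence the detection threshold. A minimal sketch on a toy count grid (the 8-neighbor window is an assumption for illustration; the 102F algorithm's exact window is not specified here):

```python
def nn_average(grid):
    """Replace each cell by the mean of itself and its in-bounds 8-neighbors."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = sum(vals) / len(vals)
    return out

# A lone count spike on a flat background is spread over its neighborhood,
# lowering the per-cell variance of the averaged map.
counts = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(nn_average(counts)[1][1])  # center becomes the 9-cell mean, 1.0
```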
Whitmore, Lee; Mavridis, Lazaros; Wallace, B A; Janes, Robert W
2018-01-01
Circular dichroism spectroscopy is a well-used, but simple method in structural biology for providing information on the secondary structure and folds of proteins. DichroMatch (DM@PCDDB) is an online tool that is newly available in the Protein Circular Dichroism Data Bank (PCDDB), which takes advantage of the wealth of spectral and metadata deposited therein, to enable identification of spectral nearest neighbors of a query protein based on four different methods of spectral matching. DM@PCDDB can potentially provide novel information about structural relationships between proteins and can be used in comparison studies of protein homologs and orthologs. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
Ayu Cyntya Dewi, Dyah; Shaufiah; Asror, Ibnu
2018-03-01
SMS (Short Message Service) is one of the communication services that remains a main choice, even though phones now come with various applications. Along with the development of various other communication media, some countries have lowered SMS rates to keep the interest of mobile users. This has resulted in an increase in spam SMS, used by several parties, among others for advertisement. Given the multilingual nature of documents in SMS messages, on the Web, and elsewhere, effective multilingual or cross-lingual processing techniques are becoming increasingly important. The steps performed in this research are as follows: the data/messages are first preprocessed and then represented as a graph model, which is then classified using the GKNN method. From this research we obtain a maximum accuracy of 98.86% with training data in Indonesian and testing data in Indonesian, with K = 10 and a threshold of 0.001.
Directory of Open Access Journals (Sweden)
Phan Thanh Noi
2017-12-01
Full Text Available In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km2 within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, both balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similarly high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets.
Directory of Open Access Journals (Sweden)
A. Moosavian
2013-01-01
Full Text Available Vibration analysis is an accepted method in condition monitoring of machines, since it can provide useful and reliable information about machine working condition. This paper surveys a new scheme for fault diagnosis of the main journal bearings of an internal combustion (IC) engine based on the power spectral density (PSD) technique and two classifiers, namely, K-nearest neighbor (KNN) and artificial neural network (ANN). Vibration signals for three different conditions of the journal bearing (normal, with oil starvation, and with extreme wear fault) were acquired from an IC engine. PSD was applied to process the vibration signals. Thirty features were extracted from the PSD values of the signals as a feature source for fault diagnosis. KNN and ANN were trained on a training data set and then used as diagnostic classifiers. Variable K values and hidden neuron counts (N) were used in the range of 1 to 20, with a step size of 1, for KNN and ANN to obtain the best classification results. The roles of the PSD, KNN, and ANN techniques were studied. From the results, it is shown that the performance of ANN is better than that of KNN. The experimental results demonstrate that the proposed diagnostic method can reliably separate different fault conditions in the main journal bearings of an IC engine.
Bergmann, Tommy; Heinke, Florian; Labudde, Dirk
2017-09-01
The age determination of blood traces provides important hints for the chronological assessment of criminal events and their reconstruction. Current methods are often expensive, involve significant experimental complexity, and often fail when applied to aged blood samples taken from different substrates. In this work an absorption spectroscopy-based blood stain age estimation method is presented, which utilizes 400-640 nm absorption spectra in computation. Spectral data from 72 differently aged pig blood stains (2 h to three weeks) dried on three different substrate surfaces (cotton, polyester, and glass) were acquired, and the turnover-time correlations were utilized to develop a straightforward age estimation scheme. More precisely, data processing includes data dimensionality reduction, upon which classic k-nearest neighbor classifiers are employed. This strategy shows good agreement between observed and predicted blood stain age (r>0.9) in cross-validation. The presented estimation strategy utilizes spectral data from dissolved blood samples to bypass spectral artifacts which are well known to interfere with other spectral methods such as reflection spectroscopy. Results indicate that age estimates can be drawn from such absorbance spectroscopic data independent of the substrate the blood dried on. Since the data in this study were acquired under laboratory conditions, future work has to consider perturbing environmental conditions in order to assess real-life applicability. Copyright © 2017 Elsevier B.V. All rights reserved.
He, Runnan; Wang, Kuanquan; Li, Qince; Yuan, Yongfeng; Zhao, Na; Liu, Yang; Zhang, Henggui
2017-12-01
Cardiovascular diseases are associated with high morbidity and mortality. However, it is still a challenge to diagnose them accurately and efficiently. The electrocardiogram (ECG), a bioelectrical signal of the heart, provides crucial information about the dynamical functions of the heart, playing an important role in cardiac diagnosis. As the QRS complex in the ECG is associated with ventricular depolarization, accurate QRS detection is vital for interpreting ECG features. In this paper, we propose a real-time, accurate, and effective algorithm for QRS detection. In the algorithm, a proposed preprocessor with a band-pass filter is first applied to remove baseline wander and power-line interference from the signal. After denoising, a method combining K-Nearest Neighbor (KNN) and Particle Swarm Optimization (PSO) is used for accurate QRS detection in ECGs with different morphologies. The proposed algorithm was tested and validated using 48 ECG records from the MIT-BIH arrhythmia database (MITDB) and achieved high average detection accuracy, sensitivity, and positive predictivity of 99.43%, 99.69%, and 99.72%, respectively, indicating a notable improvement over extant algorithms reported in the literature.
Directory of Open Access Journals (Sweden)
Leonhard Suchenwirth
2014-07-01
Among the machine learning tools applied in recent years to environmental problems such as forestry, self-organizing maps (SOM) and the k-nearest neighbor (kNN) algorithm have been used successfully. We applied both methods to the mapping of organic carbon (Corg) in riparian forests, which have a considerably high carbon storage capacity. Despite the importance of floodplains for carbon sequestration, a sufficient scientific foundation for creating large-scale maps of the spatial Corg distribution is still missing. We estimated organic carbon at a test site in the Danube floodplain based on RapidEye remote sensing data and additional geodata, and derived carbon distribution maps of vegetation, soil, and total Corg stocks. Results were compared and statistically evaluated against terrestrial survey data, using bias and the Root Mean Square Error (RMSE), both for pure remote sensing data and for its combination with additional geodata. Results show that the SOM and kNN approaches can reproduce the spatial patterns of riparian forest Corg stocks. While vegetation Corg has very high RMSEs, the outcomes for soil and total Corg stocks are less biased, with a lower RMSE, especially when remote sensing and additional geodata are applied conjointly. SOMs show RMSE percentages similar to the kNN estimations.
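The kNN estimation and its validation against survey data (bias and RMSE) can be sketched on invented pixel data; the three predictor features and the linear ground truth below are illustrative assumptions, not RapidEye bands or real Corg stocks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pixels: 3 predictor features (stand-ins for spectral bands
# and auxiliary geodata) and a Corg stock driven by two of them plus noise.
n = 300
features = rng.uniform(0.0, 1.0, size=(n, 3))
corg = 40.0 * features[:, 0] + 25.0 * features[:, 2] + rng.normal(0.0, 3.0, n)

def knn_regress(train_X, train_y, query, k=5):
    # kNN estimate: average Corg of the k most similar reference pixels.
    d = np.linalg.norm(train_X - query, axis=1)
    return train_y[np.argsort(d)[:k]].mean()

# Hold out every third pixel as the "terrestrial survey" validation set.
test = np.arange(0, n, 3)
train = np.setdiff1d(np.arange(n), test)
pred = np.array([knn_regress(features[train], corg[train], features[i])
                 for i in test])

bias = (pred - corg[test]).mean()
rmse = np.sqrt(((pred - corg[test]) ** 2).mean())
```

The same bias/RMSE comparison can then be repeated with and without the auxiliary features to mimic the paper's "remote sensing only" versus "combined geodata" evaluation.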
Directory of Open Access Journals (Sweden)
Fuqian Shi
2012-01-01
Emotional cellular (EC), proposed in our previous works, is a kind of semantic cell that contains a kernel and a shell; the kernel is formalized by a triple L = ⟨P, d, δ⟩, where P denotes a typical set of positive examples relative to word L, d is a pseudo-distance measure on the emotional two-dimensional valence-arousal space, and δ is a probability density function on the positive real number field. The basic idea of the EC model is to assume that the neighborhood radius of each semantic concept is uncertain, and this uncertainty is measured by the one-dimensional density function δ. In this paper, product form features were evaluated using ECs to establish a product style database, and a fuzzy case-based reasoning (FCBR) model, under a similarity measure based on fuzzy nearest neighbors (FNN) incorporating ECs, was applied to extract product styles. A mathematically formalized inference system for product style is also proposed, which includes the emotional cellular as an uncertainty measurement tool. A case study on style acquisition of mobile phones illustrates the effectiveness of the proposed methodology.
Digital terrain model generalization incorporating scale, semantic and cognitive constraints
Partsinevelos, Panagiotis; Papadogiorgaki, Maria
2014-05-01
The research scheme comprises the combination of SOM with variations of other widely used generalization algorithms. For instance, an adaptation of the Douglas-Peucker line simplification method to 3D data is used to reduce the initial nodes while maintaining their actual coordinates. Furthermore, additional methods are deployed to corroborate and verify the significance of each node, such as mathematical algorithms exploiting each pixel's nearest neighbors. Finally, besides the quantitative evaluation of error versus information preservation in a DTM, cognitive input from geoscience experts is incorporated in order to test, fine-tune and advance our algorithm. Under the described strategy, which incorporates mechanical, topological, semantic and cognitive constraints, the results demonstrate the necessity of integrating these characteristics when describing raster DTM surfaces. Acknowledgements: This work is partially supported under the framework of the "Cooperation 2011" project ATLANTAS (11_SYN_6_1937), funded from the Operational Program "Competitiveness and Entrepreneurship" (co-funded by the European Regional Development Fund (ERDF)) and managed by the Greek General Secretariat for Research and Technology.
Directory of Open Access Journals (Sweden)
Fen Wei
2016-01-01
In order to sufficiently capture the useful fault-related information available from the multiple vibration sensors used in rotating machinery, while avoiding the limitations of high dimensionality, a new fault diagnosis method for rotating machinery based on supervised second-order tensor locality preserving projection (SSTLPP) and a weighted k-nearest neighbor classifier (WKNNC) with an assembled matrix distance metric (AMDM) is presented. A second-order tensor representation of multisensor fused condition features is employed to replace the prevailing vector description of features from a single sensor. Then, an SSTLPP algorithm under AMDM (SSTLPP-AMDM) is presented to realize dimensionality reduction of the original high-dimensional feature tensor. Compared with classical second-order tensor locality preserving projection (STLPP), the SSTLPP-AMDM algorithm not only considers both local neighbor information and class label information but also replaces the existing Frobenius distance measure with AMDM in the construction of the similarity weighting matrix. Finally, the obtained low-dimensional feature tensor is input into WKNNC with AMDM to implement the fault diagnosis of the rotating machinery. A fault diagnosis experiment performed on a gearbox demonstrates that the second-order tensor formed from multisensor fused fault data gives good results for multisensor fusion fault diagnosis and that the formulated method can effectively improve diagnostic accuracy.
Ma, Yitao; Miura, Sadahiko; Honjo, Hiroaki; Ikeda, Shoji; Hanyu, Takahiro; Ohno, Hideo; Endoh, Tetsuo
2017-04-01
A high-density nonvolatile associative memory (NV-AM) based on spin transfer torque magnetoresistive random access memory (STT-MRAM), which achieves highly concurrent and ultralow-power nearest neighbor search with full adaptivity of the template data format, has been proposed and fabricated using the 90 nm CMOS/70 nm perpendicular-magnetic-tunnel-junction hybrid process. A truly compact current-mode circuitry is developed to realize flexibly controllable and highly parallel similarity evaluation, which makes the NV-AM adaptable to any dimensionality and component bit-width of template data. A compact dual-stage time-domain minimum-searching circuit is also developed, which can freely extend the system to more template data by connecting multiple NV-AM cores without additional circuits for integrated processing. Both the embedded STT-MRAM module and the computing circuit modules in this NV-AM chip are synchronously power-gated to completely eliminate standby power and maximally reduce operation power by activating only the currently accessed circuit blocks. The operation of a prototype chip at 40 MHz is demonstrated by measurement. The average operation power is only 130 µW, and the circuit density is less than 11 µm²/bit. Compared with the latest conventional volatile and nonvolatile approaches, circuit area reductions of more than 31.3% and power improvements of 99.2% are achieved, respectively. Further power performance analyses verify the particular superiority of the proposed NV-AM in low-power and large-memory-based VLSIs.
Social aggregation in pea aphids: experiment and random walk modeling.
Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J; Topaz, Chad M
2013-01-01
From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.
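The individual-level rules described above can be sketched as a two-state, nearest-neighbor-dependent correlated random walk. The functional forms, constants, and the omission of the arena boundary are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative forms: stopping probability and turning both increase as the
# nearest neighbor gets closer (an aggregation mechanism).
def p_stop(nn_dist):
    return 0.4 * np.exp(-nn_dist / 2.0) + 0.05

def turn_sd(nn_dist):
    return 1.5 * np.exp(-nn_dist / 2.0) + 0.2

n, steps, speed, p_start = 20, 200, 0.1, 0.2
pos = rng.uniform(0.0, 5.0, size=(n, 2))       # arena boundary omitted
heading = rng.uniform(0.0, 2.0 * np.pi, n)
moving = np.ones(n, dtype=bool)

for _ in range(steps):
    # Distance from each aphid to its nearest neighbor.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)

    # Stochastic transitions between moving and stationary states.
    stop = rng.random(n) < p_stop(nn)
    start = rng.random(n) < p_start
    moving = (moving & ~stop) | (~moving & start)

    # Moving aphids follow a correlated random walk: small heading changes
    # far from neighbors (ballistic motion), larger ones nearby.
    heading[moving] += rng.normal(0.0, turn_sd(nn[moving]))
    pos[moving] += speed * np.column_stack(
        [np.cos(heading[moving]), np.sin(heading[moving])])

frac_moving = moving.mean()
```

Comparing the simulated nearest-neighbor-distance distribution against a run with constant `p_stop` and `turn_sd` mirrors the paper's social-versus-control comparison.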
Incorporating Resilience into Dynamic Social Models
2016-07-20
AFRL-AFOSR-VA-TR-2016-0258. Incorporating Resilience into Dynamic Social Models. Eunice Santos, University of Texas at El Paso. Final report, covering 3/1/13-12/31/14. Abstract: We propose an overarching framework designed to incorporate various aspects of social resilience.
Prototype-Incorporated Emotional Neural Network.
Oyedotun, Oyebade K; Khashman, Adnan
2017-08-15
Artificial neural networks (ANNs) aim to simulate biological neural activity. Interestingly, many "engineering" prospects in ANN research have drawn motivation from cognition and psychology studies. Two important learning theories that have been the subject of active research are prototype learning and adaptive learning. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, prototype learning theory uses prototypes (representative examples), usually one per class of the task, which are matched systematically against new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm, based on modifying the emotional neural network (EmNN) model, that unifies the prototype- and adaptive-learning theories; we refer to the new model as the "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two challenging real-life tasks, namely static hand-gesture recognition and face recognition, and compare the results to those obtained with the popular back-propagation neural network (BPNN), the emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.
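The contrast between the two learning theories can be made concrete with a toy classifier: prototype learning keeps one representative per class, while an instance-based scheme such as k-nearest neighbor consults many stored examples. All data here are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic classes in 2-D (stand-ins for gesture/face feature vectors).
X = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(50, 2)),
               rng.normal([3.0, 3.0], 0.5, size=(50, 2))])
y = np.repeat([0, 1], 50)

# Prototype learning: one representative (here, the class mean) per class.
prototypes = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def nearest_prototype(x):
    return int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))

# Adaptive / instance-based alternative: k-nearest neighbor over all examples.
def knn_vote(x, k=5):
    d = np.linalg.norm(X - x, axis=1)
    return int(round(y[np.argsort(d)[:k]].mean()))

queries = np.array([[0.2, -0.1], [2.8, 3.2]])
proto_preds = [nearest_prototype(q) for q in queries]
knn_preds = [knn_vote(q) for q in queries]
```

The prototype classifier stores 2 vectors, the kNN classifier 100; the paper's contribution is unifying both ideas inside a trained network rather than choosing one.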
Incorporating groundwater flow into the WEPP model
William Elliot; Erin Brooks; Tim Link; Sue Miller
2010-01-01
The water erosion prediction project (WEPP) model is a physically-based hydrology and erosion model. In recent years, the hydrology prediction within the model has been improved for forest watershed modeling by incorporating shallow lateral flow into watershed runoff prediction. This has greatly improved WEPP's hydrologic performance on small watersheds with...
Simple model of stacking-fault energies
DEFF Research Database (Denmark)
Stokbro, Kurt; Jacobsen, Lærke Wedel
1993-01-01
A simple model for the energetics of stacking faults in fcc metals is constructed. The model contains third-nearest-neighbor pairwise interactions and a term involving the fourth moment of the electronic density of states. The model is in excellent agreement with recently published local-density-functional calculations.
Information Retrieval Document Classified with K-Nearest Neighbor
Directory of Open Access Journals (Sweden)
Alifian Sukma
2018-01-01
Evaluation was done using 20 test documents, with k = {37, 41, 43}. The evaluation shows the best classification performance at k = 43, with a precision of 0.501. The test results showed that the 20 test documents could be classified according to their actual categories.
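A bag-of-words sketch of k-nearest-neighbor document classification with cosine similarity; the tiny corpus, the labels, and k = 3 are invented for illustration (the study itself used 20 test documents and k in {37, 41, 43}):

```python
import numpy as np
from collections import Counter

# Hypothetical labelled training corpus.
train_docs = [
    ("stock market price trading", "economy"),
    ("bank interest investment market", "economy"),
    ("football match goal score", "sport"),
    ("player team goal league", "sport"),
]
vocab = sorted({w for text, _ in train_docs for w in text.split()})

def vectorize(text):
    # Raw term-count vector over the training vocabulary.
    counts = Counter(text.split())
    return np.array([counts[w] for w in vocab], dtype=float)

X = np.stack([vectorize(text) for text, _ in train_docs])
labels = [label for _, label in train_docs]

def classify(text, k=3):
    q = vectorize(text)
    # Cosine similarity between the query and every training document.
    sims = X @ q / (np.linalg.norm(X, axis=1) * np.linalg.norm(q) + 1e-12)
    top = np.argsort(sims)[::-1][:k]
    # Majority vote among the k most similar documents.
    return Counter(labels[i] for i in top).most_common(1)[0][0]
```

In a real retrieval setting the counts would normally be TF-IDF weighted, but the neighbor-vote mechanism is the same.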
Utilization of Singularity Exponent in Nearest Neighbor Based Classifier
Czech Academy of Sciences Publication Activity Database
Jiřina, Marcel; Jiřina jr., M.
2013-01-01
Roč. 30, č. 1 (2013), s. 3-29 ISSN 0176-4268 Grant - others:Czech Technical University(CZ) CZ68407700 Institutional support: RVO:67985807 Keywords : multivariate data * probability density estimation * classification * probability distribution mapping function * probability density mapping function * power approximation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.571, year: 2013
Assembly Neural Network with Nearest-Neighbor Recognition Algorithm
Czech Academy of Sciences Publication Activity Database
Goltsev, A.; Húsek, Dušan; Frolov, A.
2005-01-01
Roč. 15, - (2005), s. 9-22 ISSN 1210-0552 R&D Projects: GA MŠk 1M0567 Grant - others:RFBR(RU) 02-01-00457 Keywords : assembly neural network * unsupervised learning * binary Hebbian rule * pattern recognition * texture segmentation * classification Subject RIV: BA - General Mathematics
Clustered K nearest neighbor algorithm for daily inflow forecasting
Akbari, M.; Van Overloop, P.J.A.T.M.; Afshar, A.
2010-01-01
Instance based learning (IBL) algorithms are a common choice among data driven algorithms for inflow forecasting. They are based on the similarity principle and prediction is made by the finite number of similar neighbors. In this sense, the similarity of a query instance is estimated according to
Czech Academy of Sciences Publication Activity Database
Tarasenko, Alexander
2018-01-01
Roč. 95, Jan (2018), s. 37-40 ISSN 1386-9477 R&D Projects: GA MŠk LO1409; GA MŠk LM2015088 Institutional support: RVO:68378271 Keywords : lattice gas systems * kinetic Monte Carlo simulations * diffusion and migration Subject RIV: BE - Theoretical Physics OBOR OECD: Atomic, molecular and chemical physics (physics of atoms and molecules including collision, interaction with radiation, magnetic resonances, Mössbauer effect) Impact factor: 2.221, year: 2016
Incorporating neurophysiological concepts in mathematical thermoregulation models
Kingma, Boris R. M.; Vosselman, M. J.; Frijns, A. J. H.; van Steenhoven, A. A.; van Marken Lichtenbelt, W. D.
2014-01-01
Skin blood flow (SBF) is a key player in human thermoregulation during mild thermal challenges. Various numerical models of SBF regulation exist, but none explicitly incorporates the neurophysiology of thermal reception. This study tested a new SBF model that is in line with experimental data on thermal reception and the neurophysiological pathways involved in thermoregulatory SBF control. Additionally, a numerical thermoregulation model was used as a platform to test the function of the neurophysiological SBF model for skin temperature simulation. The prediction error of the SBF model was quantified by the root-mean-squared residual (RMSR) between simulations and experimental measurement data. The measurement data consisted of SBF (abdomen, forearm, hand) and core and skin temperature recordings of young males during three transient thermal challenges (one for development, two for validation). Additionally, ThermoSEM, a thermoregulation model, was used to simulate body temperatures using the new neurophysiological SBF model, with the RMSR between simulated and measured mean skin temperature used to validate the model. The neurophysiological model predicted SBF with low RMSR, showing that thermoregulation models can be equipped with SBF control functions that are based on neurophysiology without loss of performance. The neurophysiological approach to modelling thermoregulation is preferable to engineering approaches because it is more in line with the underlying physiology.
Incorporating economic models into seasonal pool conservation planning
Freeman, Robert C.; Bell, Kathleen P.; Calhoun, Aram J.K.; Loftin, Cyndy
2012-01-01
Massachusetts, New Jersey, Connecticut, and Maine have adopted regulatory zones around seasonal (vernal) pools to conserve terrestrial habitat for pool-breeding amphibians. Most amphibians require access to distinct seasonal habitats in both terrestrial and aquatic ecosystems because of their complex life histories. These habitat requirements make them particularly vulnerable to land uses that destroy habitat or limit connectivity (or permeability) among habitats. Regulatory efforts focusing on breeding pools without consideration of terrestrial habitat needs will not ensure the persistence of pool-breeding amphibians. We used GIS to combine a discrete-choice, parcel-scale economic model of land conversion with a landscape permeability model based on known habitat requirements of wood frogs (Lithobates sylvaticus) in Maine (USA) to examine permeability among habitat elements for alternative future scenarios. The economic model predicts future landscapes under different subdivision open space and vernal pool regulatory requirements. Our model showed that even “no build” permit zones extending 76 m (250 ft) outward from the pool edge were insufficient to assure permeability among required habitat elements. Furthermore, effectiveness of permit zones may be inconsistent due to interactions with other growth management policies, highlighting the need for local and state planning for the long-term persistence of pool-breeding amphibians in developing landscapes.
Pairing correlations in a generalized Hubbard model for the cuprates
Arrachea, Liliana; Aligia, A. A.
2000-04-01
Using numerical diagonalization of a 4×4 cluster, we calculate on-site s, extended-s, and dx2-y2 pairing correlation functions (PCF's) in an effective generalized Hubbard model for the cuprates, with nearest-neighbor correlated hopping and next-nearest-neighbor hopping t'. The vertex contributions to the PCF's are significantly enhanced, relative to the t-t'-U model. The behavior of the PCF's and their vertex contributions, and signatures of anomalous flux quantization, indicate superconductivity in the d-wave channel for moderate doping and in the s-wave channel for high doping and small U.
dx2-y2 superconductivity in a generalized Hubbard model
Arrachea, Liliana; Aligia, A. A.
1999-01-01
We consider an extended Hubbard model with nearest-neighbor correlated hopping and next-nearest-neighbor hopping t' obtained as an effective model for cuprate superconductors. Using a generalized Hartree-Fock BCS approximation, we find that for high enough t' and doping, antiferromagnetism is destroyed and the system exhibits d-wave superconductivity. Near optimal doping we consider the effect of antiferromagnetic spin fluctuations on the normal self-energy using a phenomenological susceptibility. The resulting superconducting critical temperature as a function of doping is in good agreement with experiment.
Pairing Correlations in a Generalized Hubbard Model for the Cuprates
Arrachea, L.; Aligia, A.
1999-01-01
Using numerical diagonalization of a 4x4 cluster, we calculate on-site s, extended s and d pairing correlation functions (PCF) in an effective generalized Hubbard model for the cuprates, with nearest-neighbor correlated hopping and next nearest-neighbor hopping t'. The vertex contributions (VC) to the PCF are significantly enhanced, relative to the t-t'-U model. The behavior of the PCF and their VC, and signatures of anomalous flux quantization, indicate superconductivity in the d-wave channe...
Off-lattice model for the phase behavior of lipid-cholesterol bilayers
DEFF Research Database (Denmark)
Nielsen, Morten; Miao, Ling; Ipsen, John Hjorth
1999-01-01
and previous approximate theories have suggested that cholesterol incorporated into lipid bilayers has different microscopic effects on lipid-chain packing and conformations, and that cholesterol thereby leads to a decoupling of the two ordering processes, manifested by a special equilibrium phase, the "liquid-ordered phase," in which bilayers are liquid (with translational disorder) but lipid chains are conformationally ordered. We present in this paper a microscopic model that describes this decoupling phenomenon and yields a phase diagram consistent with experimental observations. The model is an off-lattice model based on a two-dimensional random triangulation algorithm; it represents lipid and cholesterol molecules by hard-core particles with internal (spin-type) degrees of freedom that have nearest-neighbor interactions. The phase equilibria described by the model, specifically in terms of phase diagrams...
Directory of Open Access Journals (Sweden)
Drzewiecki Wojciech
2016-12-01
In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage estimates at individual points in time and for the accuracy of imperviousness change assessment. The performance of the individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained with the particular techniques.
J1x-J1y-J2 square-lattice anisotropic Heisenberg model
Pires, A. S. T.
2017-08-01
The spin-1 Heisenberg model with an easy-plane single-ion anisotropy and spatially anisotropic nearest-neighbor coupling, frustrated by a next-nearest-neighbor interaction, is studied at zero temperature using a SU(3) Schwinger boson formalism (sometimes also referred to as flavor-wave theory) in a mean-field approximation. The local constraint is enforced by introducing a Lagrange multiplier. The enlarged Hilbert space of S = 1 spins leads to a nematic phase that is ubiquitous for S = 1 spins with single-ion anisotropy. The phase diagram shows two magnetically ordered phases separated by a quantum paramagnetic (nematic) phase.
Abelian tensor models on the lattice
Chaudhuri, Soumyadeep; Giraldo-Rivera, Victor I.; Joseph, Anosh; Loganayagam, R.; Yoon, Junggi
2018-04-01
We consider a chain of Abelian Klebanov-Tarnopolsky fermionic tensor models coupled through quartic nearest-neighbor interactions. We characterize the gauge-singlet spectrum for small chains (L =2 ,3 ,4 ,5 ) and observe that the spectral statistics exhibits strong evidence in favor of quasi-many-body localization.
Energy Technology Data Exchange (ETDEWEB)
Deviren, Bayram [Institute of Science, Erciyes University, Kayseri 38039 (Turkey); Canko, Osman [Department of Physics, Erciyes University, Kayseri 38039 (Turkey); Keskin, Mustafa [Department of Physics, Erciyes University, Kayseri 38039 (Turkey)], E-mail: keskin@erciyes.edu.tr
2008-09-15
The Ising model with three alternative layers on the honeycomb and square lattices is studied by using the effective-field theory with correlations. We consider that the nearest-neighbor spins of each layer are coupled ferromagnetically and the adjacent spins of the nearest-neighbor layers are coupled either ferromagnetically or anti-ferromagnetically depending on the sign of the bilinear exchange interactions. We investigate the thermal variations of the magnetizations and present the phase diagrams. The phase diagrams contain the paramagnetic, ferromagnetic and anti-ferromagnetic phases, and the system also exhibits a tricritical behavior.
Incorporating direct marketing activity into latent attrition models
Schweidel, David A.; Knox, George
2013-01-01
When defection is unobserved, latent attrition models provide useful insights about customer behavior and accurate forecasts of customer value. Yet extant models ignore direct marketing efforts. Response models incorporate the effects of direct marketing, but because they ignore latent attrition,
A Financial Market Model Incorporating Herd Behaviour.
Wray, Christopher M; Bishop, Steven R
2016-01-01
Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market
Incorporating territory compression into population models
Ridley, J; Komdeur, J; Sutherland, WJ; Sutherland, William J.
The ideal despotic distribution, whereby the lifetime reproductive success a territory's owner achieves is unaffected by population density, is a mainstay of behaviour-based population models. We show that the population dynamics of an island population of Seychelles warblers (Acrocephalus
A MODEL FOR INCORPORATING SPECIALIST NURSE ...
African Journals Online (AJOL)
2009-09-15
In this study, the development of a model is regarded as being consistent with middle-range theory generation (George 2002:6), which, in turn, guides the education practice of specialist nurses. According to Smith and Liehr, [m]iddle range theory can be defined as a set of related ideas that are focused on ...
DEFF Research Database (Denmark)
Høst-Madsen, Anders; Shah, Peter Jivan; Hansen, Torben
1987-01-01
Computer-simulation techniques are used to study the domain-growth kinetics of (2×1) ordering in a two-dimensional Ising model with nonconserved order parameter and with variable ratio α of next-nearest- and nearest-neighbor interactions. At zero temperature, persistent growth characterized...
Martins, George; Xavier, Jose; Arrachea, Liliana; Dagotto, Elbio
2002-03-01
Numerical calculations illustrate the effect of the sign of the next-nearest-neighbor hopping term t' on the 2-hole properties of the t-t'-J model. Working mainly on 2-leg ladders in the range -1.0 <= t' <= 1.0, the interference is found to be constructive for t' > 0. This interference is destructive for t' < 0.
Incorporating published univariable associations in diagnostic and prognostic modeling
T.P.A. Debray (Thomas); H. Koffijberg (Hendrik); D. Lu (Difei); Y. Vergouwe (Yvonne); E.W. Steyerberg (Ewout); K.G.M. Moons (Karel)
2012-01-01
textabstractBackground: Diagnostic and prognostic literature is overwhelmed with studies reporting univariable predictor-outcome associations. Currently, methods to incorporate such information in the construction of a prediction model are underdeveloped and unfamiliar to many researchers. Methods.
Incorporating model uncertainty into optimal insurance contract design
Pflug, G.; Timonina-Farkas, A.; Hochrainer-Stigler, S.
2017-01-01
In stochastic optimization models, the optimal solution heavily depends on the selected probability model for the scenarios. However, the scenario models are typically chosen on the basis of statistical estimates and are therefore subject to model error. We demonstrate here how the model uncertainty can be incorporated into the decision making process. We use a nonparametric approach for quantifying the model uncertainty and a minimax setup to find model-robust solutions. The method is illust...
True dose from incorporated activities. Models for internal dosimetry
International Nuclear Information System (INIS)
Breustedt, B.; Eschner, W.; Nosske, D.
2012-01-01
The assessment of doses after incorporation of radionuclides cannot rely on direct measurements of the doses, as is possible, for example, with dosimetry in external radiation fields. The only observables are activities in the body or in excretions. Models are used to calculate the doses based on the measured activities. The incorporated activities and the resulting doses can vary by more than seven orders of magnitude between occupational and medical exposures. Nevertheless, the models and calculations applied in both cases are similar. Since the models for the different applications have been developed independently by the ICRP and MIRD, different terminologies have been used. A unified terminology is being developed. (orig.)
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
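The two-loop structure described above (parametric uncertainty sampled in the replication loop, temporal variance sampled in the time-step loop) can be sketched as follows. The scalar growth-rate model and all parameter values here are hypothetical illustrations, not the piping plover model used in the study:

```python
import random

def extinction_risk(n_reps=2000, n_years=25, n0=100.0,
                    mean_growth=0.98, param_sd=0.05, temporal_sd=0.10,
                    quasi_extinction=10.0, seed=42):
    """Two-loop PVA simulation: parametric uncertainty is drawn once per
    replicate (outer loop); temporal variance is drawn every year (inner
    loop). All parameter values are hypothetical."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(n_reps):
        # Replication loop: sample the uncertain mean growth rate once.
        lam_rep = rng.gauss(mean_growth, param_sd)
        n = n0
        for _ in range(n_years):
            # Time-step loop: environmental noise around the replicate's mean.
            lam_t = max(0.0, rng.gauss(lam_rep, temporal_sd))
            n *= lam_t
        if n < quasi_extinction:
            extinct += 1
    return extinct / n_reps
```

Setting `param_sd=0.0` recovers the conventional simulation that ignores parametric uncertainty, so the two variants can be compared directly.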
Incorporating agricultural land cover in conceptual rainfall runoff models
Euser, Tanja; Hrachowitz, Markus; Winsemius, Hessel; Savenije, Hubert
2015-04-01
Incorporating spatially variable information is a frequently discussed option for increasing the performance of (semi-)distributed conceptual rainfall-runoff models. One of the methods to do this is to use this spatially variable information to delineate Hydrological Response Units (HRUs) within a catchment. This study tests whether the incorporation of an additional agricultural HRU in a conceptual hydrological model can better reflect the spatial differences in runoff generation and therefore improve the simulation of the wetting phase in autumn. The study area is the meso-scale Ourthe catchment in Belgium. A previous study in this area showed that spatial patterns in runoff generation were already better represented by incorporating a wetland and a hillslope HRU than by a lumped model structure. The influences considered by including an agricultural HRU are increased drainage speed due to roads and plough pans, increased infiltration-excess overland flow (drainage pipes are only present to a limited extent), and variable vegetation patterns due to sowing and harvesting. In addition, vegetation is not modelled as a static resistance to evaporation; instead, the Jarvis stress functions, which are already widely used for modelling transpiration in land-surface models, are used to increase the realism of the modelled transpiration. The results show that an agricultural conceptualisation in addition to wetland and hillslope conceptualisations leads to small improvements in the modelled discharge. However, the influence is larger on the representation of spatial patterns and on the modelled contributions of different HRUs to the total discharge.
Incorporating functional inter-relationships into protein function prediction algorithms
Directory of Open Access Journals (Sweden)
Kumar Vipin
2009-05-01
Abstract Background Functional classification schemes (e.g. the Gene Ontology) that serve as the basis for annotation efforts in several organisms are often the source of gold standard information for computational efforts at supervised protein function prediction. While successful function prediction algorithms have been developed, few previous efforts have utilized more than the protein-to-functional class label information provided by such knowledge bases. For instance, the Gene Ontology not only captures protein annotations to a set of functional classes, but it also arranges these classes in a DAG-based hierarchy that captures rich inter-relationships between different classes. These inter-relationships present both opportunities, such as the potential for additional training examples for small classes from larger related classes, and challenges, such as a harder-to-learn distinction between similar GO terms, for standard classification-based approaches. Results We propose a method to enhance the performance of classification-based protein function prediction algorithms by exploiting the inter-relationships between the functional classes that constitute such classification schemes. Using a standard measure for evaluating the semantic similarity between nodes in an ontology, we quantify and incorporate these inter-relationships into the k-nearest neighbor classifier. We present experiments on several large genomic data sets, each of which is used for the modeling and prediction of over a hundred classes from the GO Biological Process ontology. The results show that this incorporation produces more accurate predictions for a large number of the functional classes considered, and also that the classes that benefit most from this approach are those containing the fewest members. In addition, we show how our proposed framework can be used for integrating information from the entire GO hierarchy for improving the accuracy of
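A minimal sketch of how class similarities might be folded into a k-nearest-neighbor vote, in the spirit of the approach above; the `class_sim` matrix here is a made-up stand-in for the ontology-based semantic similarity measure used in the paper:

```python
import math
from collections import defaultdict

def knn_predict(query, train, k=3, class_sim=None):
    """k-NN classification in which each neighbor's vote is also shared
    with semantically related classes. `class_sim[(a, b)]` is a similarity
    in [0, 1] between class labels (hypothetical stand-in for an
    ontology-derived measure)."""
    # Sort training points (feature_vector, label) by Euclidean distance.
    neighbors = sorted(train, key=lambda xy: math.dist(query, xy[0]))[:k]
    votes = defaultdict(float)
    for _, label in neighbors:
        votes[label] += 1.0                  # full vote for the observed class
        for (a, b), s in (class_sim or {}).items():
            if a == label:
                votes[b] += s                # partial vote for related classes
    return max(votes, key=votes.get)
```

With an empty similarity matrix this reduces to the standard majority-vote k-NN classifier.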
Incorporating Responsiveness to Marketing Efforts When Modeling Brand Choice
D. Fok (Dennis); Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)
2001-01-01
In this paper we put forward a brand choice model which incorporates responsiveness to marketing efforts as a form of structural heterogeneity. We introduce two latent segments of households. The households in the first segment are assumed to respond to marketing efforts while households
Violent Intent Modeling: Incorporating Cultural Knowledge into the Analytical Process
Energy Technology Data Exchange (ETDEWEB)
Sanfilippo, Antonio P.; Nibbs, Faith G.
2007-08-24
While culture has a significant effect on the appropriate interpretation of textual data, the incorporation of cultural considerations into data transformations has not been systematic. Recognizing that the successful prevention of terrorist activities could hinge on knowledge of the relevant subcultures, anthropologist and DHS intern Faith Nibbs has been addressing the need to incorporate cultural knowledge into the analytical process. In this Brown Bag she will present how cultural ideology is being used to understand how the rhetoric of group leaders influences the likelihood of their constituents engaging in violent or radicalized behavior, and how violent intent modeling can benefit from understanding that process.
How to incorporate generic refraction models into multistatic tracking algorithms
Crouse, D. F.
The vast majority of literature published on target tracking ignores the effects of atmospheric refraction. When refraction is considered, the solutions are generally tailored to a simple exponential atmospheric refraction model. This paper discusses how arbitrary refraction models can be incorporated into tracking algorithms. Attention is paid to multistatic tracking problems, where uncorrected refractive effects can worsen track accuracy and consistency in centralized tracking algorithms, and can lead to difficulties in track-to-track association in distributed tracking filters. Monostatic and bistatic track initialization using refraction-corrupted measurements is discussed. The results are demonstrated using an exponential refractive model, though an arbitrary refraction profile can be substituted.
Phase transitions for Ising model with four competing interactions
International Nuclear Information System (INIS)
Ganikhodjaev, N.N.; Rozikov, U.A.
2004-11-01
In this paper we consider an Ising model with four competing interactions (external field, nearest neighbors, second neighbors, and triples of neighbors) on the Cayley tree of order two. We show that for some parameter values of the model there is a phase transition. Our second result gives a complete description of periodic Gibbs measures for the model. We also construct uncountably many non-periodic extreme Gibbs measures. (author)
Incorporating nitrogen fixing cyanobacteria in the global biogeochemical model HAMOCC
Paulsen, Hanna; Ilyina, Tatiana; Six, Katharina
2015-04-01
Nitrogen fixation by marine diazotrophs plays a fundamental role in the oceanic nitrogen and carbon cycle as it provides a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Since most global biogeochemical models include nitrogen fixation only diagnostically, they are not able to capture its spatial pattern sufficiently. Here we present the incorporation of an explicit, dynamic representation of diazotrophic cyanobacteria and the corresponding nitrogen fixation in the global ocean biogeochemical model HAMOCC (Hamburg Ocean Carbon Cycle model), which is part of the Max Planck Institute for Meteorology Earth system model (MPI-ESM). The parameterization of the diazotrophic growth is thereby based on available knowledge about the cyanobacterium Trichodesmium spp., which is considered as the most significant pelagic nitrogen fixer. Evaluation against observations shows that the model successfully reproduces the main spatial distribution of cyanobacteria and nitrogen fixation, covering large parts of the tropical and subtropical oceans. Besides the role of cyanobacteria in marine biogeochemical cycles, their capacity to form extensive surface blooms induces a number of bio-physical feedback mechanisms in the Earth system. The processes driving these interactions, which are related to the alteration of heat absorption, surface albedo and momentum input by wind, are incorporated in the biogeochemical and physical model of the MPI-ESM in order to investigate their impacts on a global scale. First preliminary results will be shown.
Importance of incorporating agriculture in conceptual rainfall-runoff models
de Boer-Euser, Tanja; Hrachowitz, Markus; Winsemius, Hessel; Savenije, Hubert
2016-04-01
Incorporating spatially variable information is a frequently discussed option for increasing the performance of (semi-)distributed conceptual rainfall-runoff models. One of the methods to do this is to use this spatially variable information to delineate Hydrological Response Units (HRUs) within a catchment. In large parts of Europe the original forested land cover has been replaced by agricultural land cover. This change in land cover probably affects the dominant runoff processes in the area, for example by increasing the Hortonian overland flow component, especially on the flatter and higher-elevated parts of the catchment. A change in runoff processes implies a change in HRUs as well. A previous version of our model distinguished wetlands (areas close to the stream) from the remainder of the catchment. However, this configuration was not able to reproduce all fast runoff processes, in summer or in winter. Therefore, this study tests whether the reproduction of fast runoff processes can be improved by incorporating a HRU which explicitly accounts for the effect of agriculture. A case study is carried out in the Ourthe catchment in Belgium, in which the relevance of different process conceptualisations is tested stepwise. Among the conceptualisations are Hortonian overland flow in summer and winter, reduced infiltration capacity due to a partly frozen soil, and the relative effect of rainfall and snow melt on this frozen soil. The results show that the named processes can make a large difference on an event basis, especially Hortonian overland flow in summer and the combination of rainfall and snow melt on (partly) frozen soil in winter. However, the differences diminish when a modelled period of several years is evaluated with standard metrics such as the Nash-Sutcliffe Efficiency. These results emphasise, on the one hand, the importance of incorporating the effects of agriculture in conceptual models and, on the other hand, the importance of more event
Incorporating model parameter uncertainty into inverse treatment planning
International Nuclear Information System (INIS)
Lian Jun; Xing Lei
2004-01-01
Radiobiological treatment planning depends not only on the accuracy of the models describing the dose-response relation of different tumors and normal tissues but also on the accuracy of the tissue-specific radiobiological parameters in these models. Whereas the general formalism remains the same, different sets of model parameters lead to different solutions and thus critically determine the final plan. Here we describe an inverse planning formalism with inclusion of model parameter uncertainties. This is made possible by using a statistical-analysis-based framework developed by our group. In this formalism, the uncertainties of model parameters, such as the parameter a that describes the tissue-specific effect in the equivalent uniform dose (EUD) model, are expressed as probability density functions and included in the dose optimization process. We found that the final solution strongly depends on the distribution functions of the model parameters. Considering that currently available models for computing the biological effects of radiation are simplistic, and that the clinical data used to derive the models are sparse and of questionable quality, the proposed technique provides an effective tool to minimize the effect caused by the uncertainties in a statistical sense. With the incorporation of the uncertainties, the technique has the potential to maximally utilize the available radiobiology knowledge for better IMRT treatment
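The EUD model mentioned above is EUD = (mean over voxels of d_i^a)^(1/a). A toy illustration of averaging the EUD over an uncertain tissue parameter a follows; representing the uncertainty by Monte Carlo draws from a normal density is an assumption for illustration, not the authors' exact optimization formalism:

```python
import random

def eud(doses, a):
    """Equivalent uniform dose: EUD = (mean_i d_i^a)^(1/a)."""
    return (sum(d ** a for d in doses) / len(doses)) ** (1.0 / a)

def expected_eud(doses, a_mean, a_sd, n_samples=1000, seed=0):
    """Average the EUD over draws of the uncertain parameter a, here
    taken (hypothetically) as normally distributed and clipped positive."""
    rng = random.Random(seed)
    samples = [max(1e-3, rng.gauss(a_mean, a_sd)) for _ in range(n_samples)]
    return sum(eud(doses, a) for a in samples) / n_samples
```

For a uniform dose distribution the EUD equals that dose for any a; for larger a (serial-type tissue) the EUD moves toward the maximum voxel dose.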
Incorporating published univariable associations in diagnostic and prognostic modeling
Directory of Open Access Journals (Sweden)
A Debray Thomas P
2012-08-01
Abstract Background Diagnostic and prognostic literature is overwhelmed with studies reporting univariable predictor-outcome associations. Currently, methods to incorporate such information in the construction of a prediction model are underdeveloped and unfamiliar to many researchers. Methods This article aims to improve upon an adaptation method originally proposed by Greenland (1987) and Steyerberg (2000) to incorporate previously published univariable associations in the construction of a novel prediction model. The proposed method improves upon the variance estimation component by reconfiguring the adaptation process in established theory and making it more robust. Different variants of the proposed method were tested in a simulation study, where performance was measured by comparing estimated associations with their predefined values according to the Mean Squared Error and coverage of the 90% confidence intervals. Results Results demonstrate that the performance of estimated multivariable associations improves considerably for small datasets where external evidence is included. Although the error of estimated associations decreases with increasing amounts of individual participant data, it does not disappear completely, even in very large datasets. Conclusions The proposed method to aggregate previously published univariable associations with individual participant data in the construction of a novel prediction model outperforms established approaches and is especially worthwhile when relatively limited individual participant data are available.
A mathematical model for incorporating biofeedback into human postural control
Directory of Open Access Journals (Sweden)
Ersal Tulga
2013-02-01
Abstract Background Biofeedback of body motion can serve as a balance aid and rehabilitation tool. To date, mathematical models considering the integration of biofeedback into postural control have represented this integration as a sensory addition and limited their application to a single-degree-of-freedom representation of the body. This study has two objectives: (1) to develop a scalable method for incorporating biofeedback into postural control that is independent of the model's degrees of freedom, how it handles sensory integration, and the modeling of its postural controller; and (2) to validate this new model using multidirectional perturbation experimental results. Methods Biofeedback was modeled as an additional torque to the postural controller torque. For validation, this biofeedback modeling approach was applied to a vibrotactile biofeedback device and incorporated into a two-link multibody model with full-state-feedback control that represents the dynamics of bipedal stance. Average response trajectories of body sway and center of pressure (COP) to multidirectional surface perturbations of subjects with vestibular deficits were used for model parameterization and validation in multiple perturbation directions and for multiple display resolutions. The quality of fit was quantified using average error and cross-correlation values. Results The mean of the average errors across all tactor configurations and perturbations was 0.24° for body sway and 0.39 cm for COP. The mean cross-correlation value was 0.97 for both body sway and COP. Conclusions The biofeedback model developed in this study is capable of capturing experimental response trajectory shapes with low average errors and high cross-correlation values in both the anterior-posterior and medial-lateral directions for all perturbation directions and spatial-resolution display configurations considered. The results validate that biofeedback can be modeled as an additional
Zhang, Jian; Yang, Xiao-hua; Chen, Xiao-juan
2015-01-01
Due to the nonlinear and multiscale characteristics of temperature time series, a new model called the wavelet network model based on multiple criteria decision making (WNMCDM) has been proposed, which combines the advantages of wavelet analysis, multiple criteria decision making, and artificial neural networks. A case study forecasting the extreme monthly maximum temperature of Miyun Reservoir was conducted to examine the performance of the WNMCDM model. Compared with the nearest neighbor bootstrapping regr...
Accurate modeling of defects in graphene transport calculations
Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian
2018-01-01
We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.
Incorporating modelled subglacial hydrology into inversions for basal drag
Koziol, Conrad P.; Arnold, Neil
2017-12-01
A key challenge in modelling coupled ice-flow-subglacial hydrology is initializing the state and parameters of the system. We address this problem by presenting a workflow for initializing these values at the start of a summer melt season. The workflow depends on running a subglacial hydrology model for the winter season, when the system is not forced by meltwater inputs, and ice velocities can be assumed constant. Key parameters of the winter run of the subglacial hydrology model are determined from an initial inversion for basal drag using a linear sliding law. The state of the subglacial hydrology model at the end of winter is incorporated into an inversion of basal drag using a non-linear sliding law which is a function of water pressure. We demonstrate this procedure in the Russell Glacier area and compare the output of the linear sliding law with two non-linear sliding laws. Additionally, we compare the modelled winter hydrological state to radar observations and find that it is in line with summer rather than winter observations.
Safety models incorporating graph theory based transit indicators.
Quintero, Liliana; Sayed, Tarek; Wahba, Mohamed M
2013-01-01
There is a considerable need for tools to enable the evaluation of the safety of transit networks at the planning stage. One interesting approach for the planning of public transportation systems is the study of networks. Network techniques involve the analysis of systems by viewing them as a graph composed of a set of vertices (nodes) and edges (links). Once the transport system is visualized as a graph, various network properties can be evaluated based on the relationships between the network elements. Several indicators can be calculated including connectivity, coverage, directness and complexity, among others. The main objective of this study is to investigate the relationship between network-based transit indicators and safety. The study develops macro-level collision prediction models that explicitly incorporate transit physical and operational elements and transit network indicators as explanatory variables. Several macro-level (zonal) collision prediction models were developed using a generalized linear regression technique, assuming a negative binomial error structure. The models were grouped into four main themes: transit infrastructure, transit network topology, transit route design, and transit performance and operations. The safety models showed that collisions were significantly associated with transit network properties such as: connectivity, coverage, overlapping degree and the Local Index of Transit Availability. As well, the models showed a significant relationship between collisions and some transit physical and operational attributes such as the number of routes, frequency of routes, bus density, length of bus and 3+ priority lanes.
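The model form described above, a generalized linear model with a log link and negative binomial error structure, can be sketched as follows; the coefficient values are hypothetical, not the fitted values from the study:

```python
import math

def predicted_collisions(x, beta):
    """Macro-level collision frequency under a log link:
    E[y] = exp(beta0 + sum_j beta_j * x_j). Coefficients are hypothetical
    placeholders for fitted values on indicators such as connectivity or
    coverage."""
    return math.exp(beta[0] + sum(b * v for b, v in zip(beta[1:], x)))

def nb_variance(mu, alpha):
    """Negative binomial mean-variance relation: Var[y] = mu + alpha * mu^2,
    where alpha > 0 captures overdispersion relative to the Poisson."""
    return mu + alpha * mu * mu
```

The overdispersion term is what distinguishes the negative binomial error structure from a Poisson model, for which Var[y] = mu.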
An electricity generation planning model incorporating demand response
International Nuclear Information System (INIS)
Choi, Dong Gu; Thomas, Valerie M.
2012-01-01
Energy policies that aim to reduce carbon emissions and change the mix of electricity generation sources, such as carbon cap-and-trade systems and renewable electricity standards, can affect not only the source of electricity generation, but also the price of electricity and, consequently, demand. We develop an optimization model to determine the lowest cost investment and operation plan for the generating capacity of an electric power system. The model incorporates demand response to price change. In a case study for a U.S. state, we show the price, demand, and generation mix implications of a renewable electricity standard, and of a carbon cap-and-trade policy with and without initial free allocation of carbon allowances. This study shows that both the demand moderating effects and the generation mix changing effects of the policies can be the sources of carbon emissions reductions, and also shows that the share of the sources could differ with different policy designs. The case study provides different results when demand elasticity is excluded, underscoring the importance of incorporating demand response in the evaluation of electricity generation policies.
Highlights:
- We develop an electric power system optimization model including demand elasticity.
- Both renewable electricity and carbon cap-and-trade policies can moderate demand.
- Both policies affect the generation mix, price, and demand for electricity.
- Moderated demand can be a significant source of carbon emission reduction.
- For cap-and-trade policies, initial free allowances change outcomes significantly.
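The feedback the abstract describes, in which price changes moderate demand, can be illustrated with a toy fixed-point computation; the constant-elasticity demand curve, the linear marginal-cost (merit-order) stack, and all numbers are hypothetical illustrations, not the paper's optimization model:

```python
def equilibrium(base_demand=100.0, base_price=50.0, elasticity=-0.3,
                cost_intercept=20.0, cost_slope=0.4, iters=200):
    """Iterate between a constant-elasticity demand curve and a linear
    supply (marginal-cost) curve until price and demand are mutually
    consistent. All parameter values are hypothetical."""
    price = base_price
    demand = base_demand
    for _ in range(iters):
        # Demand responds to price with constant elasticity (< 0).
        demand = base_demand * (price / base_price) ** elasticity
        # Price is set by the marginal cost of serving that demand.
        price = cost_intercept + cost_slope * demand
    return price, demand
```

Excluding elasticity (setting it to zero) fixes demand at its base level, which is the comparison the case study makes.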
Tantalum strength model incorporating temperature, strain rate and pressure
Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt
Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data.
Incorporating Plant Phenology Dynamics in a Biophysical Canopy Model
Barata, Raquel A.; Drewry, Darren
2012-01-01
The Multi-Layer Canopy Model (MLCan) is a vegetation model created to capture plant responses to environmental change. The model vertically resolves carbon uptake, water vapor and energy exchange at each canopy level by coupling photosynthesis, stomatal conductance and leaf energy balance. The model is forced by incoming shortwave and longwave radiation, as well as near-surface meteorological conditions. The original formulation of MLCan utilized canopy structural traits derived from observations. This project aims to incorporate a plant phenology scheme within MLCan, allowing these structural traits to vary dynamically. In the plant phenology scheme implemented here, plant growth is dependent on environmental conditions such as air temperature and soil moisture. The scheme includes functionality that models plant germination, growth, and senescence. These growth stages dictate the variation in six different vegetative carbon pools: storage, leaves, stem, coarse roots, fine roots, and reproductive. The magnitudes of these carbon pools determine land surface parameters such as leaf area index, canopy height, rooting depth and root water uptake capacity. Coupling this phenology scheme with MLCan allows for a more flexible representation of the structure and function of vegetation as it responds to changing environmental conditions.
Statistical Mechanics Model for the Dynamics of Collective Epigenetic Histone Modification
Zhang, Hang; Tian, Xiao-Jun; Mukhopadhyay, Abhishek; Kim, K. S.; Xing, Jianhua
2014-02-01
Epigenetic histone modifications play an important role in the maintenance of different cell phenotypes. The exact molecular mechanism for inheritance of the modification patterns over cell generations remains elusive. We construct a Potts-type model based on experimentally observed nearest-neighbor enzyme lateral interactions and nucleosome covalent modification state biased enzyme recruitment. The model can lead to effective nonlocal interactions among nucleosomes suggested in previous theoretical studies, and epigenetic memory is robustly inheritable against stochastic cellular processes.
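A minimal Metropolis sketch of a Potts-type chain with nearest-neighbor interactions and a recruitment bias, as a rough illustration of the model class described above; the one-dimensional state space, energy terms, and parameter values are simplified assumptions, not the paper's model:

```python
import math
import random

def metropolis_step(states, coupling=1.0, recruitment=0.5, rng=random):
    """One Metropolis update of a 1-D chain of nucleosome states
    (-1 repressive, 0 unmodified, +1 active). Parameters are hypothetical."""
    i = rng.randrange(len(states))
    proposal = rng.choice((-1, 0, 1))

    def local_energy(s):
        # Potts-like terms: matching nearest neighbors and modified states
        # lower the energy (lateral enzyme interaction + biased recruitment).
        e = 0.0
        for j in (i - 1, i + 1):
            if 0 <= j < len(states) and states[j] == s:
                e -= coupling
        return e - recruitment * abs(s)

    d_e = local_energy(proposal) - local_energy(states[i])
    if d_e <= 0 or rng.random() < math.exp(-d_e):
        states[i] = proposal
    return states
```

Repeated application relaxes the chain toward configurations with long runs of matching modification states, the kind of collective behavior that underlies epigenetic memory in such models.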
Incorporation of chemical kinetic models into process control
International Nuclear Information System (INIS)
Herget, C.J.; Frazer, J.W.
1981-01-01
An important consideration in chemical process control is to determine the precise rationing of reactant streams, particularly when a large time delay exists between the mixing of the reactants and the measurement of the product. In this paper, a method is described for incorporating chemical kinetic models into the control strategy in order to achieve optimum operating conditions. The system is first characterized by determining a reaction rate surface as a function of all input reactant concentrations over a feasible range. A nonlinear constrained optimization program is then used to determine the combination of reactants which produces the specified yield at minimum cost. This operating condition is then used to establish the nominal concentrations of the reactants. The actual operation is determined through a feedback control system employing a Smith predictor. The method is demonstrated on a laboratory bench scale enzyme reactor
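The optimization step described above, finding the cheapest reactant combination that meets a specified yield on a fitted rate surface, can be sketched with a brute-force search; the response surface, cost function, and grid below are fabricated stand-ins for the paper's nonlinear constrained optimization program:

```python
def cheapest_mix(rate, cost, target_yield, grid):
    """Search a concentration grid for the cheapest pair (c1, c2) whose
    predicted yield meets the target. `rate` plays the role of the
    characterized reaction-rate surface (hypothetical here)."""
    best = None
    for c1 in grid:
        for c2 in grid:
            if rate(c1, c2) >= target_yield:     # yield constraint
                price = cost(c1, c2)
                if best is None or price < best[0]:
                    best = (price, c1, c2)        # cheapest feasible point
    return best
```

The returned concentrations would then serve as the nominal setpoints for the feedback controller, as in the paper's workflow.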
Phase transitions in the Haldane-Hubbard model within coherent potential approximation
Le, Duc-Anh; Tran, Minh-Tien; Tran, Thi-Thanh-Mai; Nguyen, Thi-Thao; Nguyen, Thi-Huong; Hoang, Anh-Tuan
2018-03-01
Within the coherent potential approximation we study the two-dimensional Haldane-Hubbard model, in which an interplay between topology and correlation effects is realized. The model essentially describes correlated electrons moving in a honeycomb lattice with zero net magnetic flux. The influence of the next-nearest-neighbor hopping and electron correlations on the metal-insulator transitions is investigated by monitoring the density of states at the Fermi level and the energy gap. The topological properties of the insulators are determined by the Chern number. With a given next-nearest-neighbor hopping, electron correlations drive the system from the topological Chern insulator to a metal, and then to the topologically trivial Mott insulator.
Modelling of spectroscopic batch process data using grey models to incorporate external information
Gurden, S. P.; Westerhuis, J. A.; Bijlsma, S.; Smilde, A. K.
2001-01-01
In both analytical and process chemistry, one common aim is to build models describing measured data. In cases where additional information about the chemical system is available, this can be incorporated into the model with the aim of improving model fit and interpretability. A model which consists
Incorporating a 360 Degree Evaluation Model IOT Transform the USMC Performance Evaluation System
2005-02-08
Directory of Open Access Journals (Sweden)
Fachruddin Fachruddin
2017-07-01
Software effort estimation is the process of estimating software cost, an important step in carrying out a software project. Previous studies have estimated software effort with various methods, both machine learning and non machine learning. This study conducts a set of attribute-selection experiments on project parameters, using the k-nearest neighbours technique for the estimation, performing attribute selection with information gain and mutual information, and examining how to find the most representative project parameters for software effort estimation. The software effort estimation datasets used in the experiments are albrecht, china, kemerer, and miyazaki94, which can be obtained from the dedicated Software Effort Estimation data repository at the url http://openscience.us/repo/effort/. The researchers then built an attribute-selection application to select the project parameters; this system produces ARFF datasets containing the selected attributes. The application was written in Java using the NetBeans IDE. The generated datasets, holding the selected parameters, were then compared when performing software effort estimation with the WEKA tool. Feature selection succeeded in lowering the estimation error (represented by the RAE and RMSE values); that is, the lower the error (RAE and RMSE), the more accurate the resulting estimate. Estimation improved after feature selection with either information gain or mutual information. From the resulting error values it can be concluded that the datasets produced by feature selection with information gain are better than those produced with mutual information, although the difference between the two is not very significant.
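The attribute-selection pipeline described above ultimately feeds a k-NN estimator. As a minimal sketch of the analogy-based estimation step only (with hypothetical project features and effort values, not the albrecht/china/kemerer data, and without the information-gain selection stage), the effort of a new project is the mean effort of its k most similar past projects:

```python
import math

def knn_effort(train_x, train_y, query, k=3):
    """Estimate effort for `query` as the mean effort of its k nearest
    training projects (Euclidean distance over the selected attributes)."""
    ranked = sorted((math.dist(x, query), y) for x, y in zip(train_x, train_y))
    return sum(y for _, y in ranked[:k]) / k

# Hypothetical project records: (size_kloc, team_size) -> effort (person-months)
projects = [(10, 3), (12, 4), (50, 10), (55, 12)]
efforts = [20.0, 25.0, 120.0, 140.0]
print(knn_effort(projects, efforts, (11, 3), k=2))  # → 22.5
```

Feature selection matters here precisely because every attribute included in the distance computation gets equal weight; dropping uninformative parameters changes which neighbors are "nearest".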
ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms
DEFF Research Database (Denmark)
Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander
2017-01-01
…visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different…
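Benchmarks of this kind typically score each approximate algorithm by its recall against brute-force ground truth. A minimal sketch of that quality metric (an illustration of the idea, not ANN-Benchmarks' actual code):

```python
def recall_at_k(approx_ids, true_ids):
    """Fraction of the true k nearest neighbors that the approximate
    algorithm returned -- the headline quality axis in k-NN benchmarking."""
    return len(set(approx_ids) & set(true_ids)) / len(true_ids)

# An approximate index that recovers 2 of the 3 true neighbors scores 2/3;
# benchmarks then plot this recall against queries-per-second.
print(recall_at_k([4, 7, 9], [4, 7, 2]))
```

Plotting recall against throughput for many parameter settings yields the speed/accuracy trade-off curves that such benchmarking tools report.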
A Coupled k-Nearest Neighbor Algorithm for Multi-Label Classification
2015-05-22
[Abstract not recovered; the extracted text contains only fragments of benchmark result tables over multi-label datasets including yeast, image, scene, enron, and genbase.]
Ghinita, Gabriel
2010-12-15
Mobile devices with global positioning capabilities allow users to retrieve points of interest (POI) in their proximity. To protect user privacy, it is important not to disclose exact user coordinates to un-trusted entities that provide location-based services. Currently, there are two main approaches to protect the location privacy of users: (i) hiding locations inside cloaking regions (CRs) and (ii) encrypting location data using private information retrieval (PIR) protocols. Previous work focused on finding good trade-offs between privacy and performance of user protection techniques, but disregarded the important issue of protecting the POI dataset D. For instance, location cloaking requires large-sized CRs, leading to excessive disclosure of POIs (O(|D|) in the worst case). PIR, on the other hand, reduces this bound to O(√|D|), but at the expense of high processing and communication overhead. We propose hybrid, two-step approaches for private location-based queries which provide protection for both the users and the database. In the first step, user locations are generalized to coarse-grained CRs which provide strong privacy. Next, a PIR protocol is applied with respect to the obtained query CR. To protect against excessive disclosure of POI locations, we devise two cryptographic protocols that privately evaluate whether a point is enclosed inside a rectangular region or a convex polygon. We also introduce algorithms to efficiently support PIR on dynamic POI sub-sets. We provide solutions for both approximate and exact NN queries. In the approximate case, our method discloses O(1) POI, orders of magnitude fewer than CR- or PIR-based techniques. For the exact case, we obtain optimal disclosure of a single POI, although with slightly higher computational overhead. Experimental results show that the hybrid approaches are scalable in practice, and outperform the pure-PIR approach in terms of computational and communication overhead.
© 2010 Springer Science+Business Media, LLC.
Nearest neighbor affects G:C to A:T transitions induced by alkylating agents.
Glickman, B W; Horsfall, M J; Gordon, A J; Burns, P A
1987-01-01
The influence of local DNA sequence on the distribution of G:C to A:T transitions induced in the lacI gene of E. coli by a series of alkylating agents has been analyzed. In the case of nitrosoguanidine, two nitrosoureas and a nitrosamine, a strong preference for mutation at sites preceded 5' by a purine base was noted. This preference was observed with both methyl and ethyl donors where the predicted common ultimate alkylating species is the alkyl diazonium ion. In contrast, this preference was not seen following treatment with ethylmethanesulfonate. The observed preference for 5'-PuG-3' sites over 5'-PyG-3' sites corresponds well with alterations observed in the Ha-ras oncogene recovered after treatment with NMU. This indicates that the mutations recovered in the oncogenes are likely the direct consequence of the alkylation treatment and that the local sequence effects seen in E. coli also appear to occur in mammalian cells. PMID:3329097
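The 5'-neighbor preference described here is straightforward to tabulate for any sequence. A small illustrative sketch (toy sequence, not the lacI data) that classifies each G by whether its 5' neighbor is a purine (A/G) or a pyrimidine (C/T):

```python
def g_site_context(seq):
    """Count G positions in a 5'->3' sequence by their 5' neighbor:
    purine neighbors give 5'-PuG-3' sites, pyrimidines give 5'-PyG-3' sites."""
    counts = {"PuG": 0, "PyG": 0}
    for five_prime, base in zip(seq, seq[1:]):
        if base == "G":
            counts["PuG" if five_prime in "AG" else "PyG"] += 1
    return counts

print(g_site_context("AGCGTGAG"))  # → {'PuG': 2, 'PyG': 2}
```

Comparing observed mutation counts at the two site classes against these sequence-context totals is what reveals the enrichment at 5'-PuG-3' sites.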
Nearest neighbor affects G:C to A:T transitions induced by alkylating agents
Energy Technology Data Exchange (ETDEWEB)
Glickman, B.W.; Horsfall, M.J.; Gordon, A.J.E.; Burns, P.A.
1987-12-01
The influence of local DNA sequence on the distribution of G:C to A:T transitions induced in the lacI gene of E. coli by a series of alkylating agents has been analyzed. In the case of nitrosoguanidine, two nitrosoureas and a nitrosamine, a strong preference for mutation at sites preceded 5' by a purine base was noted. This preference was observed with both methyl and ethyl donors where the predicted common ultimate alkylating species is the alkyl diazonium ion. In contrast, this preference was not seen following treatment with ethylmethanesulfonate. The observed preference for 5'-PuG-3' sites over 5'-PyG-3' sites corresponds well with alterations observed in the Ha-ras oncogene recovered after treatment with NMU. This indicates that the mutations recovered in the oncogenes are likely the direct consequence of the alkylation treatment and that the local sequence effects seen in E. coli also appear to occur in mammalian cells.
Cellular Class Encoding Approach to Increasing Efficiency of Nearest Neighbor Searching
2009-03-26
“An algorithm for finding nearest neighbours in (approximately) constant average time”, Pattern Recognition Letters, vol. 4, 1986. [2] Micó, L., Oncina, J., Vidal, E., “An algorithm for finding nearest neighbours in constant average time with a linear space complexity”, Pattern Recognition
PERBANDINGAN K-NEAREST NEIGHBOR DAN NAIVE BAYES UNTUK KLASIFIKASI TANAH LAYAK TANAM POHON JATI
Directory of Open Access Journals (Sweden)
Didik Srianto
2016-10-01
Data mining is the process of analyzing data from different perspectives and summarizing it into important information that can be used to increase profit, reduce costs, or both. Technically, data mining can be described as the process of finding correlations or patterns among hundreds or thousands of fields in a large relational database. Perum Perhutani KPH SEMARANG currently still uses a manual procedure to determine the plant type (teak / non-teak). K-Nearest Neighbour (k-NN) is a data mining algorithm that can be used for classification and regression. The Naive Bayes classifier is a technique that can be used for classification. In this study, k-NN and Naive Bayes are used to classify teak-tree data from Perum Perhutani KPH SEMARANG, and the classification results of the two methods are compared. Testing was performed using the RapidMiner software. After testing, k-NN was judged better than Naive Bayes, with accuracies of 96.66% and 82.63%, respectively. Keywords: k-NN, classification, Naive Bayes, teak-tree planting
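The Naive Bayes side of the comparison reduces to a few lines for categorical attributes. A minimal sketch (hypothetical soil records, not the KPH SEMARANG data; add-one smoothing with a crude two-values-per-feature denominator):

```python
from collections import Counter

def nb_predict(samples, labels, query):
    """Minimal categorical Naive Bayes: choose the class maximizing
    P(class) * prod_i P(feature_i | class), with add-one smoothing."""
    best, best_p = None, -1.0
    for c, nc in Counter(labels).items():
        p = nc / len(labels)                       # class prior
        rows = [s for s, l in zip(samples, labels) if l == c]
        for i, v in enumerate(query):              # conditional frequencies
            p *= (sum(1 for r in rows if r[i] == v) + 1) / (nc + 2)
        if p > best_p:
            best, best_p = c, p
    return best

# Hypothetical soil records: (drainage, texture) -> suitable planting class
X = [("good", "loam"), ("good", "clay"), ("poor", "clay"), ("poor", "sand")]
y = ["teak", "teak", "other", "other"]
print(nb_predict(X, y, ("good", "loam")))  # → teak
```

k-NN, by contrast, defers all work to query time and compares the same records by distance, which is why the two methods can rank attributes, and hence classify, differently.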
Analytical results for entanglement in the five-qubit anisotropic Heisenberg model
International Nuclear Information System (INIS)
Wang Xiaoguang
2004-01-01
We solve the eigenvalue problem of the five-qubit anisotropic Heisenberg model, without use of Bethe's ansatz, and give analytical results for entanglement and mixedness of two nearest-neighbor qubits. The entanglement takes its maximum at Δ=1 (Δ>1) for the case of zero (finite) temperature with Δ being the anisotropic parameter. In contrast, the mixedness takes its minimum at Δ=1 (Δ>1) for the case of zero (finite) temperature
Identification of interactions using model-based multifactor dimensionality reduction.
Gola, Damian; König, Inke R
2016-01-01
Common complex traits may involve multiple genetic and environmental factors and their interactions. Many methods have been proposed to identify these interaction effects, among them several machine learning and data mining methods. These are attractive for identifying interactions because they do not rely on specific genetic model assumptions. To handle the computational burden arising from an exhaustive search, including all possible combinations of factors, filter methods try to select promising factors in advance. Model-based multifactor dimensionality reduction (MB-MDR), a semiparametric machine learning method allowing adjustment for confounding variables and lower level effects, is applied to Genetic Analysis Workshop 19 (GAW19) data to identify interaction effects on different traits. Several filtering methods based on the nearest neighbor algorithm are assessed in terms of compatibility with MB-MDR. Single nucleotide polymorphism (SNP) rs859400 shows a significant interaction effect (corrected p value <0.05) with age on systolic blood pressure (SBP). We identified 23 SNP-SNP interaction effects on hypertension status (HS), 42 interaction effects on SBP, and 26 interaction effects on diastolic blood pressure (DBP). Several of these SNPs are in strong linkage disequilibrium (LD). Three of the interaction effects on HS are identified in filtered subsets. The considered filtering methods seem not to be appropriate to use with MB-MDR. LD pruning is a further quality control step to be incorporated, which can reduce the combinatorial burden by removing redundant SNPs.
Regional Calibration of SCS-CN L-THIA Model: Application for Ungauged Basins
Directory of Open Access Journals (Sweden)
Ji-Hong Jeon
2014-05-01
Estimating surface runoff for ungauged watersheds is an important issue. The Soil Conservation Service Curve Number (SCS-CN) method, developed from long-term experimental data, is widely used to estimate surface runoff from gauged or ungauged watersheds. Many modelers have used the documented SCS-CN parameters without calibration, sometimes resulting in significant errors in estimating surface runoff. Several methods for regionalization of SCS-CN parameters were evaluated. The regionalization methods include: (1) average; (2) land use area weighted average; (3) hydrologic soil group area weighted average; (4) area weighted average combining land use and hydrologic soil group; (5) spatial nearest neighbor; (6) inverse distance weighted average; and (7) global calibration. Model performance for each method was evaluated with application to 14 watersheds located in Indiana. Eight watersheds were used for calibration and six watersheds for validation. For the validation results, the spatial nearest neighbor method provided the highest average Nash-Sutcliffe (NS) value at 0.58 for the six watersheds, but it also produced the lowest single NS value, and the variance of its NS values was the highest. The global calibration method provided the second highest average NS value at 0.56 with low variation of NS values. Although the spatial nearest neighbor method provided the highest average NS value, this method was not statistically different from the other methods. However, the global calibration method was significantly different from the other methods except the spatial nearest neighbor method. Therefore, we conclude that the global calibration method is appropriate for regionalizing SCS-CN parameters for ungauged watersheds.
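The SCS-CN relation being regionalized here is compact. A sketch of the textbook metric-unit form (the runoff equation itself, not the regionalization step such as spatial nearest neighbor assignment of CN values, and using the standard initial abstraction ratio of 0.2):

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff depth (mm) from the standard SCS-CN relation:
    S = 25400/CN - 254, Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S) for P > Ia."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction (mm)
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)

print(round(scs_cn_runoff(50.0, 80), 1))  # → 13.8
```

Because Q is sharply nonlinear in CN, even modest errors in an uncalibrated, tabulated CN propagate into large runoff errors, which is the motivation for regionalized calibration.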
First-order phase transition in a 2D random-field Ising model with conflicting dynamics
Crokidakis, N.
2009-01-01
The effects of locally random magnetic fields are considered in a nonequilibrium Ising model defined on a square lattice with nearest-neighbor interactions. In order to generate the random magnetic fields, we have considered random variables $\{h\}$ that change randomly with time according to a double-gaussian probability distribution, which consists of two single gaussian distributions, centered at $+h_{o}$ and $-h_{o}$, with the same width $\sigma$. This distribution is very general, and c…
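Sampling from the double-Gaussian distribution described above is a two-step draw: pick a center at random, then add Gaussian noise of width σ. A minimal sketch (illustrative parameter values, not tied to the paper's simulations):

```python
import random

def draw_field(h0, sigma, rng):
    """One local field from the double-Gaussian mixture: a Gaussian of
    width sigma centered at +h0 or -h0, each chosen with probability 1/2."""
    center = h0 if rng.random() < 0.5 else -h0
    return rng.gauss(center, sigma)

rng = random.Random(0)
sample = [draw_field(1.0, 0.2, rng) for _ in range(20000)]
mean = sum(sample) / len(sample)
print(abs(mean) < 0.05)  # the symmetric mixture has zero mean
```

In the limit σ → 0 this reduces to the bimodal ±h₀ distribution, while large σ approaches a single broad Gaussian, which is why the authors call it very general.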
J1x-J1y-J2 square-lattice anisotropic Heisenberg model
Energy Technology Data Exchange (ETDEWEB)
Pires, A.S.T., E-mail: antpires@frisica.ufmg.br
2017-08-01
Highlights: • We use the SU(3) Schwinger boson formalism. • We present the phase diagram at zero temperature. • We calculate the quadrupole structure factor. - Abstract: The spin-1 Heisenberg model with an easy-plane single-ion anisotropy and spatially anisotropic nearest-neighbor coupling, frustrated by a next-nearest-neighbor interaction, is studied at zero temperature using a SU(3) Schwinger boson formalism (sometimes also referred to as flavor wave theory) in a mean field approximation. The local constraint is enforced by introducing a Lagrange multiplier. The enlarged Hilbert space of S = 1 spins leads to a nematic phase that is ubiquitous to S = 1 spins with single-ion anisotropy. The phase diagram shows two magnetically ordered phases, separated by a quantum paramagnetic (nematic) phase.
Incorporation of ice sheet models into an Earth system model: Focus ...
Indian Academy of Sciences (India)
Oleg Rybak
2018-03-06
Elaboration of a modern Earth system model (ESM) requires incorporation of ice sheet dynamics. Coupling of an ice sheet model (ICM) to an AOGCM is complicated by essential differences in spatial and temporal scales of cryospheric, atmospheric and oceanic components. To overcome this difficulty, we ...
Bieniek, Maciej; Korkusiński, Marek; Szulakowska, Ludmiła; Potasz, Paweł; Ozfidan, Isil; Hawrylak, Paweł
2018-02-01
We present here the minimal tight-binding model for a single layer of transition metal dichalcogenides (TMDCs) MX₂ (M: metal; X: chalcogen) which illuminates the physics and captures band nesting, massive Dirac fermions, and valley Landé and Zeeman magnetic field effects. TMDCs share the hexagonal lattice with graphene but their electronic bands require much more complex atomic orbitals. Using symmetry arguments, a minimal basis consisting of three metal d orbitals and three chalcogen dimer p orbitals is constructed. The tunneling matrix elements between nearest-neighbor metal and chalcogen orbitals are explicitly derived at the K, −K, and Γ points of the Brillouin zone. The nearest-neighbor tunneling matrix elements connect specific metal and sulfur orbitals, yielding an effective 6×6 Hamiltonian giving the correct composition of metal and chalcogen orbitals but not the direct gap at K points. The direct gap at K, correct masses, and conduction band minima at Q points responsible for band nesting are obtained by inclusion of next-neighbor Mo-Mo tunneling. The parameters of the next-nearest-neighbor model are successfully fitted to MX₂ (M = Mo; X = S) density functional ab initio calculations of the highest valence and lowest conduction band dispersion along the K-Γ line in the Brillouin zone. The effective two-band massive Dirac Hamiltonian for MoS₂, Landé g factors, and valley Zeeman splitting are obtained.
Implementing the Standards: Incorporating Mathematical Modeling into the Curriculum.
Swetz, Frank
1991-01-01
Following a brief historical review of the mechanism of mathematical modeling, examples are included that associate a mathematical model with given data (changes in sea level) and that model a real-life situation (process of parallel parking). Also provided is the rationale for the curricular implementation of mathematical modeling. (JJK)
Incorporating inductances in tissue-scale models of cardiac electrophysiology
Rossi, Simone; Griffith, Boyce E.
2017-09-01
In standard models of cardiac electrophysiology, including the bidomain and monodomain models, local perturbations can propagate at infinite speed. We address this unrealistic property by developing a hyperbolic bidomain model that is based on a generalization of Ohm's law with a Cattaneo-type model for the fluxes. Further, we obtain a hyperbolic monodomain model in the case that the intracellular and extracellular conductivity tensors have the same anisotropy ratio. In one spatial dimension, the hyperbolic monodomain model is equivalent to a cable model that includes axial inductances, and the relaxation times of the Cattaneo fluxes are strictly related to these inductances. A purely linear analysis shows that the inductances are negligible, but models of cardiac electrophysiology are highly nonlinear, and linear predictions may not capture the fully nonlinear dynamics. In fact, contrary to the linear analysis, we show that for simple nonlinear ionic models, an increase in conduction velocity is obtained for small and moderate values of the relaxation time. A similar behavior is also demonstrated with biophysically detailed ionic models. Using the Fenton-Karma model along with a low-order finite element spatial discretization, we numerically analyze differences between the standard monodomain model and the hyperbolic monodomain model. In a simple benchmark test, we show that the propagation of the action potential is strongly influenced by the alignment of the fibers with respect to the mesh in both the parabolic and hyperbolic models when using relatively coarse spatial discretizations. Accurate predictions of the conduction velocity require computational mesh spacings on the order of a single cardiac cell. We also compare the two formulations in the case of spiral break up and atrial fibrillation in an anatomically detailed model of the left atrium, and we examine the effect of intracellular and extracellular inductances on the virtual electrode phenomenon.
A quantum model of exaptation: incorporating potentiality into evolutionary theory.
Gabora, Liane; Scott, Eric O; Kauffman, Stuart
2013-09-01
The phenomenon of preadaptation, or exaptation (wherein a trait that originally evolved to solve one problem is co-opted to solve a new problem) presents a formidable challenge to efforts to describe biological phenomena using a classical (Kolmogorovian) mathematical framework. We develop a quantum framework for exaptation with examples from both biological and cultural evolution. The state of a trait is written as a linear superposition of a set of basis states, or possible forms the trait could evolve into, in a complex Hilbert space. These basis states are represented by mutually orthogonal unit vectors, each weighted by an amplitude term. The choice of possible forms (basis states) depends on the adaptive function of interest (e.g., ability to metabolize lactose or thermoregulate), which plays the role of the observable. Observables are represented by self-adjoint operators on the Hilbert space. The possible forms (basis states) corresponding to this adaptive function (observable) are called eigenstates. The framework incorporates key features of exaptation: potentiality, contextuality, nonseparability, and emergence of new features. However, since it requires that one enumerate all possible contexts, its predictive value is limited, consistent with the assertion that there exists no biological equivalent to "laws of motion" by which we can predict the evolution of the biosphere. Copyright © 2013 Elsevier Ltd. All rights reserved.
Incorporating Contagion in Portfolio Credit Risk Models Using Network Theory
Anagnostou, I.; Sourabh, S.; Kandhai, D.
2018-01-01
Portfolio credit risk models estimate the range of potential losses due to defaults or deteriorations in credit quality. Most of these models perceive default correlation as fully captured by the dependence on a set of common underlying risk factors. In light of empirical evidence, the ability of
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
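The attenuation of autoregressive parameters that this abstract describes is easy to reproduce in simulation: adding white measurement noise to a latent AR(1) process shrinks the observed lag-1 autocorrelation toward zero. A self-contained sketch (illustrative parameters, not the mood data):

```python
import random

def simulate_observed_ar1(phi, n, noise_sd, seed=1):
    """Latent AR(1) process observed with additive white measurement noise."""
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)       # latent dynamics
        ys.append(x + rng.gauss(0.0, noise_sd))  # noisy observation
    return ys

def lag1_autocorr(y):
    """Sample lag-1 autocorrelation."""
    m = sum(y) / len(y)
    num = sum((a - m) * (b - m) for a, b in zip(y, y[1:]))
    return num / sum((v - m) ** 2 for v in y)

clean = lag1_autocorr(simulate_observed_ar1(0.7, 20000, 0.0))
noisy = lag1_autocorr(simulate_observed_ar1(0.7, 20000, 1.0))
print(clean > noisy)  # measurement noise attenuates the naive AR(1) estimate
```

This bias is exactly what the AR+WN and ARMA models discussed in the abstract are designed to correct by modeling the noise term explicitly.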
Improving hydrological simulations by incorporating GRACE data for model calibration
Bai, Peng; Liu, Xiaomang; Liu, Changming
2018-02-01
Hydrological model parameters are typically calibrated by observed streamflow data. This calibration strategy is questioned when the simulated hydrological variables of interest are not limited to streamflow. Well-performed streamflow simulations do not guarantee the reliable reproduction of other hydrological variables. One of the reasons is that hydrological model parameters are not reasonably identified. The Gravity Recovery and Climate Experiment (GRACE)-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. In this study, a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations was compared with the traditional single-objective calibration scheme based on only streamflow simulations. Two hydrological models were employed on 22 catchments in China with different climatic conditions. The model evaluations were performed using observed streamflows, GRACE-derived TWSC, and actual evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that the multi-objective calibration scheme provided more reliable TWSC and ET simulations without significant deterioration in the accuracy of streamflow simulations than the single-objective calibration. The improvement in TWSC and ET simulations was more significant in relatively dry catchments than in relatively wet catchments. In addition, hydrological models calibrated using GRACE-derived TWSC data alone cannot obtain accurate runoff simulations in ungauged catchments. This study highlights the importance of including additional constraints in addition to streamflow observations to improve performances of hydrological models.
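A multi-objective calibration of the kind described above can be expressed as a weighted combination of goodness-of-fit scores for the two data streams. A minimal sketch using the Nash-Sutcliffe efficiency (the equal weighting and the toy data are illustrative assumptions, not the paper's actual scheme):

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 matches the obs mean."""
    m = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return 1.0 - err / sum((o - m) ** 2 for o in obs)

def combined_objective(obs_q, sim_q, obs_twsc, sim_twsc, w=0.5):
    """Weighted score over streamflow (q) and GRACE-derived TWSC simulations."""
    return w * nse(obs_q, sim_q) + (1.0 - w) * nse(obs_twsc, sim_twsc)

# Hypothetical observed vs simulated series
q_obs, q_sim = [3.0, 5.0, 2.0, 4.0], [2.8, 5.2, 2.1, 3.9]
t_obs, t_sim = [10.0, -4.0, 6.0, -2.0], [9.0, -3.0, 5.0, -1.0]
print(combined_objective(q_obs, q_sim, t_obs, t_sim))
```

Optimizing this combined score rather than streamflow NSE alone is what constrains parameters toward reproducing storage dynamics as well as discharge.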
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP), which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates, namely temperature, sea level pressure, and relative humidity, that are thought to affect the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
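An MMPP has two ingredients: a hidden Markov state and a state-dependent arrival rate. Both can be sketched in a few lines; this is a discretized toy with illustrative rates and symmetric switching, not the fitted Beaufort Park model, and the Poisson draw is approximated by thinning a fine grid:

```python
import random

def simulate_mmpp_counts(rates, switch_p, steps, seed=7):
    """Discrete-time sketch of a two-state MMPP: a hidden Markov state selects
    the arrival rate for each unit interval; per-interval counts are drawn by
    thinning a fine grid (a crude stand-in for an exact Poisson draw)."""
    rng = random.Random(seed)
    state, counts = 0, []
    for _ in range(steps):
        if rng.random() < switch_p:              # symmetric state switching
            state = 1 - state
        lam = rates[state]                       # regime-dependent rate
        counts.append(sum(rng.random() < lam / 100.0 for _ in range(100)))
    return counts

counts = simulate_mmpp_counts((0.5, 5.0), 0.1, 2000)
mean = sum(counts) / len(counts)
print(0.5 < mean < 5.0)  # long-run rate lies between the two regime rates
```

Covariates enter the framework described in the abstract by letting the rates or switching probabilities depend on quantities such as temperature or pressure rather than staying constant.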
Incorporating spiritual beliefs into a cognitive model of worry.
Rosmarin, David H; Pirutinsky, Steven; Auerbach, Randy P; Björgvinsson, Thröstur; Bigda-Peyton, Joseph; Andersson, Gerhard; Pargament, Kenneth I; Krumrei, Elizabeth J
2011-07-01
Cognitive theory and research have traditionally highlighted the relevance of the core beliefs about oneself, the world, and the future to human emotions. For some individuals, however, core beliefs may also explicitly involve spiritual themes. In this article, we propose a cognitive model of worry, in which positive/negative beliefs about the Divine affect symptoms through the mechanism of intolerance of uncertainty. Using mediation analyses, we found support for our model across two studies, in particular, with regards to negative spiritual beliefs. These findings highlight the importance of assessing for spiritual alongside secular convictions when creating cognitive-behavioral case formulations in the treatment of religious individuals. © 2011 Wiley Periodicals, Inc.
75 FR 56487 - Airworthiness Directives; Erickson Air-Crane Incorporated Model S-64F Helicopters
2010-09-16
... Federal Aviation Administration 14 CFR Part 39 RIN 2120-AA64 Airworthiness Directives; Erickson Air-Crane... Air-Crane Incorporated (Erickson Air-Crane) Model S- 64F helicopters. The AD would require, at... the service information identified in this proposed AD from Erickson Air-Crane Incorporated, 3100...
Denys Yemshanov; Frank H Koch; Mark Ducey
2015-01-01
Uncertainty is inherent in model-based forecasts of ecological invasions. In this chapter, we explore how the perceptions of that uncertainty can be incorporated into the pest risk assessment process. Uncertainty changes a decision maker's perceptions of risk; therefore, the direct incorporation of uncertainty may provide a more appropriate depiction of risk. Our...
75 FR 20265 - Airworthiness Directives; Liberty Aerospace Incorporated Model XL-2 Airplanes
2010-04-19
...-020-AD; Amendment 39-16264; AD 2009-08-05 R1] RIN 2120-AA64 Airworthiness Directives; Liberty... Liberty Aerospace Incorporated Model XL-2 airplanes. AD 2009-08-05 currently requires repetitively... approved the incorporation by reference of Liberty Aerospace, Inc. Service Document Critical Service...
Day-to-day route choice modeling incorporating inertial behavior
van Essen, Mariska Alice; Rakha, H.; Vreeswijk, Jacob Dirk; Wismans, Luc Johannes Josephus; van Berkum, Eric C.
2015-01-01
Accurate route choice modeling is one of the most important aspects when predicting the effects of transport policy and dynamic traffic management. Moreover, the effectiveness of intervention measures to a large extent depends on travelers’ response to the changes these measures cause. As a
Modelling toluene oxidation : Incorporation of mass transfer phenomena
Hoorn, J.A.A.; van Soolingen, J.; Versteeg, G. F.
The kinetics of the oxidation of toluene have been studied in close interaction with the gas-liquid mass transfer occurring in the reactor. Kinetic parameters for a simple model have been estimated on the basis of experimental observations performed under industrial conditions. The conclusions for the
Do Knowledge-Component Models Need to Incorporate Representational Competencies?
Rau, Martina Angela
2017-01-01
Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…
Incorporation of ice sheet models into an Earth system model: Focus on methodology of coupling
Rybak, Oleg; Volodin, Evgeny; Morozova, Polina; Nevecherja, Artiom
2018-03-01
Elaboration of a modern Earth system model (ESM) requires incorporation of ice sheet dynamics. Coupling of an ice sheet model (ICM) to an AOGCM is complicated by essential differences in spatial and temporal scales of cryospheric, atmospheric and oceanic components. To overcome this difficulty, we apply two different approaches for the incorporation of ice sheets into an ESM. Coupling of the Antarctic ice sheet model (AISM) to the AOGCM is accomplished via using procedures of resampling, interpolation and assigning to the AISM grid points annually averaged meanings of air surface temperature and precipitation fields generated by the AOGCM. Surface melting, which takes place mainly on the margins of the Antarctic peninsula and on ice shelves fringing the continent, is currently ignored. AISM returns anomalies of surface topography back to the AOGCM. To couple the Greenland ice sheet model (GrISM) to the AOGCM, we use a simple buffer energy- and water-balance model (EWBM-G) to account for orographically-driven precipitation and other sub-grid AOGCM-generated quantities. The output of the EWBM-G consists of surface mass balance and air surface temperature to force the GrISM, and freshwater run-off to force thermohaline circulation in the oceanic block of the AOGCM. Because of a rather complex coupling procedure of GrIS compared to AIS, the paper mostly focuses on Greenland.
Constitutive modeling of coronary artery bypass graft with incorporated torsion
Czech Academy of Sciences Publication Activity Database
Horný, L.; Chlup, Hynek; Žitný, R.; Adámek, T.
2009-01-01
Roč. 49, č. 2 (2009), s. 273-277 ISSN 0543-5846 R&D Projects: GA ČR(CZ) GA106/08/0557 Institutional research plan: CEZ:AV0Z20760514 Keywords : coronary artery bypass graft * constitutive model * digital image correlation Subject RIV: BJ - Thermodynamics Impact factor: 0.439, year: 2009 http://web.tuke.sk/sjf-kamam/mmams2009/contents.pdf
Incorporating affective bias in models of human decision making
Nygren, Thomas E.
1991-01-01
Research on human decision making has traditionally focused on how people actually make decisions, how good their decisions are, and how their decisions can be improved. Recent research suggests that this model is inadequate. Affective as well as cognitive components drive the way information about relevant outcomes and events is perceived, integrated, and used in the decision making process. The affective components include how the individual frames outcomes as good or bad, whether the individual anticipates regret in a decision situation, the affective mood state of the individual, and the psychological stress level anticipated or experienced in the decision situation. A focus of the current work has been to propose empirical studies that will attempt to examine in more detail the relationships between the latter two critical affective influences (mood state and stress) on decision making behavior.
Berg, M. D.; Kim, H. S.; Friendlich, M. A.; Perez, C. E.; Seidlick, C. M.; LaBel, K. A.
2011-01-01
We present SEU test and analysis of the Microsemi ProASIC3 FPGA. SEU Probability models are incorporated for device evaluation. Included is a comparison to the RTAXS FPGA illustrating the effectiveness of the overall testing methodology.
INCORPORATION OF MECHANISTIC INFORMATION IN THE ARSENIC PBPK MODEL DEVELOPMENT PROCESS
INCORPORATING MECHANISTIC INSIGHTS IN A PBPK MODEL FOR ARSENIC. Elaina M. Kenyon, Michael F. Hughes, Marina V. Evans, David J. Thomas, U.S. EPA; Miroslav Styblo, University of North Carolina; Michael Easterling, Analytical Sciences, Inc. A physiologically based phar...
Design ensemble machine learning model for breast cancer diagnosis.
Hsieh, Sheau-Ling; Hsieh, Sung-Huai; Cheng, Po-Hsun; Chen, Chi-Huang; Hsu, Kai-Ping; Lee, I-Shun; Wang, Zhenyu; Lai, Feipei
2012-10-01
In this paper, we classify breast cancer from medical diagnostic data. Information gain was adopted for feature selection. Neural fuzzy (NF), k-nearest neighbor (KNN), and quadratic classifier (QC) schemes were developed for classification, both as single models and as ensembles. In addition, a combined ensemble model of all three schemes was constructed for further validation. The experimental results indicate that ensemble learning performs better than the individual single models. Moreover, the combined ensemble model achieves the highest classification accuracy for breast cancer among all models.
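The majority-vote ensembling of single classifiers described in this abstract can be sketched in a few lines. This is a minimal illustration on invented 2-D data: a plain nearest-centroid classifier stands in for the paper's neural-fuzzy and quadratic schemes, and all feature values are hypothetical.

```python
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    """Toy k-nearest-neighbor vote (Euclidean distance)."""
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    return Counter(labels[i] for i in order[:k]).most_common(1)[0][0]

def centroid_predict(train, labels, x):
    """Nearest-class-centroid classifier (a stand-in for the other schemes)."""
    cents = {}
    for c in set(labels):
        pts = [p for p, l in zip(train, labels) if l == c]
        cents[c] = [sum(col) / len(pts) for col in zip(*pts)]
    return min(cents, key=lambda c: math.dist(cents[c], x))

def ensemble_predict(train, labels, x):
    """Majority vote over the individual classifiers."""
    preds = [knn_predict(train, labels, x, k=1),
             knn_predict(train, labels, x, k=3),
             centroid_predict(train, labels, x)]
    return Counter(preds).most_common(1)[0][0]

# Hypothetical 2-D data: class 0 clustered near (0,0), class 1 near (5,5).
train = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = [0, 0, 0, 1, 1, 1]
print(ensemble_predict(train, labels, (0.5, 0.5)))  # -> 0
print(ensemble_predict(train, labels, (5.5, 5.5)))  # -> 1
```

The vote simply takes the most common prediction among the member classifiers; with an odd number of members there is always a strict majority for binary labels.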
Individual discriminative face recognition models based on subsets of features
DEFF Research Database (Denmark)
Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær
2007-01-01
The accuracy of data classification methods depends considerably on the data representation and on the selected features. In this work, the elastic net model selection is used to identify meaningful and important features in face recognition. Modelling the characteristics which distinguish one...... selection techniques such as forward selection or lasso regression become inadequate. In the experimental section, the performance of the elastic net model is compared with geometrical and color based algorithms widely used in face recognition such as Procrustes nearest neighbor, Eigenfaces, or Fisher...
Development and Calibration of an Item Response Model That Incorporates Response Time
Wang, Tianyou; Hanson, Bradley H.
2005-01-01
This article proposes an item response model that incorporates response time. A parameter estimation procedure using the EM algorithm is developed. The procedure is evaluated with both real and simulated test data. The results suggest that the estimation procedure works well in estimating model parameters. By using response time data, estimation…
Localized defects in classical one-dimensional models
International Nuclear Information System (INIS)
Tang, L.H.; Griffiths, R.B.
1988-01-01
Several aspects of localized defects in the Frenkel-Kontorova, classical XY chain and analogous models with a finite range of interactions are discussed from a general point of view. Precise definitions are given for defect phase shifts (charges) and for creation, pinning, and interaction energies. Corresponding definitions are also provided for interfaces (localized regions separating two phases). For the nearest-neighbor Frenkel-Kontorova model, the various defect energies are related to areas enclosed by contours joining heteroclinic points of the area-preserving map generated by the conditions of mechanical equilibrium
Quantum decoration transformation for spin models
International Nuclear Information System (INIS)
Braz, F.F.; Rodrigues, F.C.; Souza, S.M. de; Rojas, Onofre
2016-01-01
The extension of the decoration transformation to quantum spin models is quite relevant, since most real materials are well described by Heisenberg-type models. Here we propose an exact quantum decoration transformation and show interesting properties such as the persistence of symmetry and symmetry breaking during the transformation. Although the proposed transformation cannot, in principle, be used to map a quantum spin lattice model exactly onto another quantum spin lattice model, since the operators are non-commutative, the mapping is possible in the "classical" limit, establishing an equivalence between the two quantum spin lattice models. To study the validity of this approach for quantum spin lattice models, we use the Zassenhaus formula and verify how the correction could influence the decoration transformation. This correction is of little use for improving the quantum decoration transformation, however, because it involves second-nearest-neighbor and further-neighbor couplings, which makes establishing the equivalence between the two lattice models a cumbersome task. The correction also gives valuable information about its own contribution: for most Heisenberg-type models it is irrelevant, at least up to the third-order term of the Zassenhaus formula. The transformation is applied to a finite-size Heisenberg chain; compared with exact numerical results, our result is consistent for weak xy-anisotropy coupling. We also apply it to the bond-alternating Ising–Heisenberg chain model, obtaining an accurate result in the limit of the quasi-Ising chain.
A Bayesian model for pooling gene expression studies that incorporates co-regulation information.
Directory of Open Access Journals (Sweden)
Erin M Conlon
Current Bayesian microarray models that pool multiple studies assume that gene expression is independent of other genes. However, in prokaryotic organisms, genes are arranged in co-regulated units called operons. Here, we introduce a new Bayesian model for pooling gene expression studies that incorporates operon information. Our Bayesian model borrows information from other genes within the same operon to improve estimation of gene expression. The model produces the gene-specific posterior probability of differential expression, which is the basis for inference. We found in simulations and in biological studies that incorporating co-regulation information improves upon the independence model. We assume that each study contains two experimental conditions: a treatment and a control. We note that there exist environmental conditions under which genes that are supposed to be transcribed together lose their operon structure, and that our model is best applied to known operon structures.
Incorporation of the capillary hysteresis model HYSTR into the numerical code TOUGH
International Nuclear Information System (INIS)
Niemi, A.; Bodvarsson, G.S.; Pruess, K.
1991-11-01
As part of the work performed to model flow in the unsaturated zone at Yucca Mountain Nevada, a capillary hysteresis model has been developed. The computer program HYSTR has been developed to compute the hysteretic capillary pressure -- liquid saturation relationship through interpolation of tabulated data. The code can be easily incorporated into any numerical unsaturated flow simulator. A complete description of HYSTR, including a brief summary of the previous hysteresis literature, detailed description of the program, and instructions for its incorporation into a numerical simulator are given in the HYSTR user's manual (Niemi and Bodvarsson, 1991a). This report describes the incorporation of HYSTR into the numerical code TOUGH (Transport of Unsaturated Groundwater and Heat; Pruess, 1986). The changes made and procedures for the use of TOUGH for hysteresis modeling are documented
On phase transitions of the Potts model with three competing interactions on Cayley tree
Directory of Open Access Journals (Sweden)
S. Temir
2011-06-01
In the present paper we study the phase transition problem for the Potts model with three competing interactions (nearest neighbors, second neighbors, and triples of neighbors) and a non-zero external field on a Cayley tree of order two. We prove that for some parameter values of the model there is a phase transition. We reduce the problem of describing the limiting Gibbs measures to the problem of solving a system of nonlinear functional equations. We extend the results obtained by Ganikhodjaev and Rozikov [Math. Phys. Anal. Geom., 2009, vol. 12, No. 2, 141-156] on the phase transition for the Ising model to the Potts model setting.
Simulation of Forest Carbon Fluxes Using Model Incorporation and Data Assimilation
Min Yan; Xin Tian; Zengyuan Li; Erxue Chen; Xufeng Wang; Zongtao Han; Hong Sun
2016-01-01
This study improved simulation of forest carbon fluxes in the Changbai Mountains with a process-based model (Biome-BGC) using model incorporation and data assimilation. Firstly, the original remote sensing-based MODIS MOD_17 GPP (MOD_17) model was optimized using refined input data and biome-specific parameters. The key ecophysiological parameters of the Biome-BGC model were determined through Extended Fourier Amplitude Sensitivity Test (EFAST) sensitivity analysis. Then the optimized MOD_17 mo...
Specific heat of a non-local attractive Hubbard model
Energy Technology Data Exchange (ETDEWEB)
Calegari, E.J., E-mail: eleonir@ufsm.br [Laboratório de Teoria da Matéria Condensada, Departamento de Física, UFSM, 97105-900, Santa Maria, RS (Brazil); Lobo, C.O. [Laboratório de Teoria da Matéria Condensada, Departamento de Física, UFSM, 97105-900, Santa Maria, RS (Brazil); Magalhaes, S.G. [Instituto de Física, Universidade Federal Fluminense, Av. Litorânea s/n, 24210, 346, Niterói, Rio de Janeiro (Brazil); Chaves, C.M.; Troper, A. [Centro Brasileiro de Pesquisas Físicas, Rua Xavier Sigaud 150, 22290-180, Rio de Janeiro, RJ (Brazil)
2013-10-01
The specific heat C(T) of an attractive (interaction G<0) non-local Hubbard model is investigated within a two-pole approximation that leads to a set of correlation functions, which play an important role as a source of anomalies such as the pseudogap. For a given range of G and n_T (where n_T = n_↑ + n_↓), the specific heat as a function of temperature presents a two-peak structure. Nevertheless, the presence of a pseudogap eliminates the two-peak structure. The effects of second-nearest-neighbor hopping on C(T) are also investigated.
Kinetic Ising model for desorption from a chain
Geldart, D. J. W.; Kreuzer, H. J.; Rys, Franz S.
1986-10-01
Adsorption along a linear chain of adsorption sites is considered in an Ising model with nearest neighbor interactions. The kinetics are studied in a master equation approach with transition probabilities describing single spin flips to mimic adsorption-desorption processes. Exchange of two spins to account for diffusion can be included as well. Numerical results show that desorption is frequently of fractional (including zero) order. Only at low coverage and high temperature is desorption a first order process. Finite size effects and readsorption are also studied.
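The single-spin-flip kinetics described in this abstract can be sketched with a Metropolis-style simulation of a lattice gas on a chain. This is not the authors' master-equation calculation: the mapping (adsorbed = 1, empty = 0), the coupling J, the chemical potential mu, and all parameter values below are illustrative assumptions.

```python
import math
import random

def metropolis_step(s, J, mu, beta, rng):
    """Attempt one single-site flip on an open chain with energy
    E = -J * sum(s_i * s_{i+1}) - mu * sum(s_i), s_i in {0, 1}.
    A flip mimics an adsorption (0 -> 1) or desorption (1 -> 0) event."""
    N = len(s)
    i = rng.randrange(N)
    # Occupied nearest neighbors; sites beyond the chain ends count as empty.
    nn = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < N - 1 else 0)
    dE = (1 - 2 * s[i]) * (-J * nn - mu)   # energy change of flipping site i
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        s[i] = 1 - s[i]

rng = random.Random(0)
s = [1] * 50                       # start from a fully covered chain
beta, J, mu = 0.5, 1.0, -2.0       # hypothetical values; mu < 0 favors desorption
coverage = []
for _ in range(2000):
    metropolis_step(s, J, mu, beta, rng)
    coverage.append(sum(s) / len(s))
print(round(coverage[0], 2), round(coverage[-1], 2))
```

Tracking the coverage over time in this way is how an effective desorption order would be extracted; spin-exchange moves for diffusion could be added as a second move type.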
A new experimental procedure for incorporation of model contaminants in polymer hosts
Papaspyrides, C.D.; Voultzatis, Y.; Pavlidou, S.; Tsenoglou, C.; Dole, P.; Feigenbaum, A.; Paseiro, P.; Pastorelli, S.; Cruz Garcia, C. de la; Hankemeier, T.; Aucejo, S.
2005-01-01
A new experimental procedure for incorporation of model contaminants in polymers was developed as part of a general scheme for testing the efficiency of functional barriers in food packaging. The aim was to progressively pollute polymers in a controlled fashion up to a high level in the range of
A lattice model for influenza spreading.
Directory of Open Access Journals (Sweden)
Antonella Liccardo
We construct a stochastic SIR model for influenza spreading on a D-dimensional lattice, which represents the dynamic contact network of individuals. An age-distributed population is placed on the lattice and moves on it. Displacement from a site to a nearest-neighbor empty site allows individuals to change the number and identities of their contacts. The dynamics on the lattice is governed by an attractive interaction between individuals belonging to the same age class. The parameters that regulate the pattern dynamics are fixed by fitting the data on age-dependent daily contact numbers furnished by the Polymod survey. A simple SIR transmission model with nearest-neighbor interaction and some very basic adaptive mobility restrictions completes the model. The model is validated against the age-distributed Italian epidemiological data for influenza A(H1N1) during the [Formula: see text] season, with sensible predictions for the epidemiological parameters. For an appropriate topology of the lattice, we find that, whenever the accordance between the contact patterns of the model and the Polymod data is satisfactory, there is good agreement between the numerical and the experimental epidemiological data. This result shows how rich the information encoded in the average contact patterns of individuals is with respect to the analysis of the epidemic spreading of an infectious disease.
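The core mechanism here, stochastic SIR transmission between lattice nearest neighbors, can be sketched as below. This minimal version omits the paper's age classes, mobility, and attractive interactions, and the infection and recovery probabilities are hypothetical.

```python
import random

def sir_lattice(L=30, p_inf=0.5, p_rec=0.2, steps=60, seed=1):
    """Stochastic SIR on an L x L lattice with nearest-neighbor transmission.
    Cell states: 0 = susceptible, 1 = infected, 2 = recovered.
    Returns the fraction of the population ever infected."""
    rng = random.Random(seed)
    grid = [[0] * L for _ in range(L)]
    grid[L // 2][L // 2] = 1                     # single seed infection
    for _ in range(steps):
        new = [row[:] for row in grid]           # synchronous update
        for i in range(L):
            for j in range(L):
                if grid[i][j] == 1:
                    # try to infect each susceptible nearest neighbor
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < L and 0 <= nj < L and grid[ni][nj] == 0:
                            if rng.random() < p_inf:
                                new[ni][nj] = 1
                    if rng.random() < p_rec:     # recovery
                        new[i][j] = 2
        grid = new
    return sum(c > 0 for row in grid for c in row) / (L * L)

frac = sir_lattice()
print(frac)  # fraction ever infected under these assumed parameters
```

The full model would additionally move individuals to neighboring empty sites between transmission sweeps, which is what makes the contact network dynamic.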
INCORPORATING MULTIPLE OBJECTIVES IN PLANNING MODELS OF LOW-RESOURCE FARMERS
Flinn, John C.; Jayasuriya, Sisira; Knight, C. Gregory
1980-01-01
Linear goal programming provides a means of formally incorporating the multiple goals of a household into the analysis of farming systems. Using this approach, the set of plans which come as close as possible to achieving a set of desired goals under conditions of land and cash scarcity are derived for a Filipino tenant farmer. A challenge in making LGP models empirically operational is the accurate definition of the goals of the farm household being modelled.
Gramatica, Paola; Papa, Ester; Marrocchi, Assunta; Minuti, Lucio; Taticchi, Aldo
2007-03-01
Various polycyclic aromatic hydrocarbons (PAHs), ubiquitous environmental pollutants, are recognized mutagens and carcinogens. A homogeneous set of mutagenicity data (TA98 and TA100,+S9) for 32 benzocyclopentaphenanthrenes/chrysenes was modeled by the quantitative structure-activity relationship classification methods k-nearest neighbor and classification and regression tree, using theoretical holistic molecular descriptors. Genetic algorithm provided the selection of the best subset of variables for modeling mutagenicity. The models were validated by leave-one-out and leave-50%-out approaches and have good performance, with sensitivity and specificity ranges of 90-100%. Mutagenicity assessment for these PAHs requires only a few theoretical descriptors of their molecular structure.
Band structure and orbital character of monolayer MoS2 with eleven-band tight-binding model
Shahriari, Majid; Ghalambor Dezfuli, Abdolmohammad; Sabaeian, Mohammad
2018-02-01
In this paper, based on a tight-binding (TB) model, we first present calculations of the eigenvalues as the band structure and then present the eigenvectors as probability amplitudes for finding an electron in atomic orbitals for monolayer MoS2 in the first Brillouin zone. In these calculations we consider hopping processes between the nearest-neighbor Mo-S, the next-nearest-neighbor in-plane Mo-Mo, and the next-nearest-neighbor in-plane and out-of-plane S-S atoms in a three-atom unit cell of two-dimensional rhombic MoS2. The hopping integrals are expressed in terms of Slater-Koster and crystal-field parameters. These parameters are calculated by comparing the TB model with density functional theory (DFT) at the high-symmetry k-points (i.e. the K- and Γ-points). In our TB model all the 4d Mo orbitals and the 3p S orbitals are considered, and a detailed analysis of the orbital character of each energy level at the main high-symmetry points of the Brillouin zone is described. In comparison with DFT calculations, the results of our TB model show very good agreement for bands near the Fermi level. However, for other bands far from the Fermi level, some discrepancies between our TB model and DFT calculations are observed. Given accurate Slater-Koster and crystal-field parameters, our model, in contrast to DFT, provides enough accuracy to calculate all allowed transitions between energy bands, which are very crucial for investigating the linear and nonlinear optical properties of monolayer MoS2.
Incorporation of all hazard categories into U.S. NRC PRA models
International Nuclear Information System (INIS)
Sancaktar, Selim; Ferrante, Fernando; Siu, Nathan; Coyne, Kevin
2014-01-01
Over the last two decades, the U.S. Nuclear Regulatory Commission (NRC) has maintained independent probabilistic risk assessment (PRA) models to calculate nuclear power plant (NPP) core damage frequency (CDF) from internal events at power. These models are known as Standardized Plant Analysis Risk (SPAR) models. There are 79 such models representing 104 domestic nuclear plants, with some SPAR models representing more than one unit on a site. These models allow NRC risk analysts to perform independent quantitative risk estimates of operational events and degraded plant conditions. It is well recognized that using only the internal-events contribution to overall plant risk estimates provides a useful, but limited, assessment of the complete plant risk profile. Inclusion of all hazard categories applicable to a plant in the plant PRA model would provide a more comprehensive assessment of plant risk. However, implementation of a more comprehensive treatment of additional hazard categories (e.g., fire, flooding, high winds, seismic) presents a number of challenges, including technical considerations. The U.S. NRC has been incorporating additional hazard categories into its set of nuclear power plant PRA models since 2004. Currently, 18 SPAR models include additional hazard categories such as internal flooding, internal fire, seismic, and wind events. In most cases, these external hazard models were derived from Generic Letter 88-20 Individual Plant Examination of External Events (IPEEE) reports. Recently, NRC started incorporating detailed Fire PRA (FPRA) information based on the current licensing effort that allows licensees to transition into a risk-informed fire protection framework, as well as additional external hazards developed by some licensees, into enhanced SPAR models. These updated external-hazards SPAR models are referred to as SPAR All-Hazard (SPAR-AHZ) models (i.e., they incorporate additional risk contributors beyond internal events). This paper
Directory of Open Access Journals (Sweden)
Ismail eAdeniran
2013-07-01
Introduction: Genetic forms of the Short QT Syndrome (SQTS) arise due to cardiac ion channel mutations leading to accelerated ventricular repolarisation, arrhythmias and sudden cardiac death. Results from experimental and simulation studies suggest that changes to refractoriness and tissue vulnerability produce a substrate favourable to re-entry. Potential electromechanical consequences of the SQTS are less well understood. The aim of this study was to utilize electromechanically coupled human ventricle models to explore electromechanical consequences of the SQTS. Methods and results: The Rice et al. mechanical model was coupled to the ten Tusscher et al. ventricular cell model. Previously validated K+ channel formulations for SQT variants 1 and 3 were incorporated. Functional effects of the SQTS mutations on Ca2+ transients, sarcomere length shortening and contractile force at the single-cell level were evaluated with and without consideration of the stretch-activated channel current (Isac). Without Isac, the SQTS mutations produced dramatic reductions in the amplitude of Ca2+ transients, sarcomere length shortening and contractile force. When Isac was incorporated, there was considerable attenuation of the effects of SQTS-associated action potential shortening on Ca2+ transients, sarcomere shortening and contractile force. Single-cell models were then incorporated into 3D human ventricular tissue models. The timing of maximum deformation was delayed in the SQTS setting compared to control. Conclusion: The incorporation of Isac appears to be an important consideration in modelling the functional effects of SQT 1 and 3 mutations on cardiac electromechanical coupling. Whilst there is little evidence of profoundly impaired cardiac contractile function in SQTS patients, our 3D simulations correlate qualitatively with reported evidence for dissociation between ventricular repolarization and the end of mechanical systole.
Geneugelijk, K; Niemann, M; de Hoop, T; Spierings, E
2016-01-01
The IMGT/HLA database contains every publicly available HLA sequence. However, most of these HLA protein sequences are restricted to the alpha-1/alpha-2 domain for HLA class-I and alpha-1/beta-1 domain for HLA class-II. Nevertheless, also polymorphism outside these domains may play a role in
Directory of Open Access Journals (Sweden)
V. Alijani
2013-07-01
To implement sound management of forest ecosystems, sufficient information on the structure of tree species is necessary. In this study, the structures of tree species in Fagus, Fagus-Carpinus, Carpinus-Fagus and Carpinus-Quercus types were investigated and compared in the Hyrcanian forest. The data used in this study were collected from 239 plots with an area of 1000 m2 in the Gorazbon district of the Kheyrud forest, and the Crancod (ver. 1.3) software was employed to calculate the uniform angle (Wi), mingling (DMi), DBH dominance (TDi) and height dominance (THi) indices. The uniform angle index showed random positioning of the trees in the studied types. The mingling index showed a low mixture for the four studied types, indicating intra-specific competition for Fagus orientalis and Carpinus betulus and inter-specific competition for the other species. The average values of the DBH and height dominance indices showed a relative similarity among the studied types and indicated that some species such as Acer velutinum, Tilia begonifolia and Alnus subcordata are dominant, while species including Ulmus glabra and Diospyros lotus are dominated. The comparison of similar species structures showed no significant difference in positioning, DBH and height dominance features across the types, but a significant difference in the mingling feature of Carpinus betulus, Fagus orientalis, Acer velutinum, Tilia begonifolia, and also deadwood, in the studied types. The indices utilized in this study had a high ability to describe the structure of the forest types as well as the ecological features of the tree species.
Purwanti, Endah; Calista, Evelyn
2017-05-01
Leukemia is a type of cancer caused by malignant neoplasms of leukocyte cells. The type of leukemia that can cause death quickly for the sufferer is acute lymphocytic leukemia (ALL). In this study, we propose automatic detection of lymphocytic leukemia through classification of lymphocyte cell images obtained from single-cell peripheral blood smears. There are two main objectives in this study. The first is to extract cell features. The second is to classify the lymphocyte cells into two classes, namely normal and abnormal lymphocytes. We use a combination of shape features and histogram features, and the classification algorithm is k-nearest neighbor with k = 1, 3, 5, 7, 9, 11, 13, and 15. The best accuracy, sensitivity, and specificity in this study are 90%, 90%, and 90%, obtained from the combined features area-perimeter-mean-standard deviation with k = 7.
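The k-nearest-neighbor classification with a scan over odd values of k can be sketched as follows. The (area, perimeter) feature vectors and class labels are invented for illustration, and leave-one-out accuracy stands in here for the study's evaluation protocol.

```python
from collections import Counter
import math

def knn_classify(train, labels, x, k):
    """Classify x by majority vote among its k nearest training points."""
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    return Counter(labels[i] for i in order[:k]).most_common(1)[0][0]

def loo_accuracy(data, labels, k):
    """Leave-one-out accuracy for a given k: hold out each point in turn."""
    hits = 0
    for i in range(len(data)):
        rest = data[:i] + data[i + 1:]
        rest_labels = labels[:i] + labels[i + 1:]
        hits += knn_classify(rest, rest_labels, data[i], k) == labels[i]
    return hits / len(data)

# Hypothetical (area, perimeter) feature vectors: normal vs. abnormal cells.
data = [(30, 20), (32, 21), (28, 19), (31, 22),
        (60, 40), (62, 41), (58, 39), (61, 42)]
labels = ["normal"] * 4 + ["abnormal"] * 4
for k in (1, 3, 5, 7):
    print(k, loo_accuracy(data, labels, k))
```

In practice the feature vectors would be standardized first, since kNN distances are sensitive to the relative scales of area, perimeter, mean, and standard deviation.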
A Deep NuSTAR Survey of M31: Compact object types in our Nearest Neighbor Galaxy
Hornschemeier, Ann E.; Wik, Daniel R.; Yukita, Mihoko; Ptak, Andrew; Venters, Tonia M.; Lehmer, Bret; Maccarone, Thomas J.; Zezas, Andreas; Harrison, Fiona; Stern, Daniel; Williams, Benjamin F.; Vulic, Neven
2017-08-01
X-ray binaries (XRBs) trace young and old stellar populations in galaxies, and thus star formation rate and star formation history/stellar mass. X-ray emission from XRBs may be responsible for significant amounts of heating of the early Intergalactic Medium at Cosmic Dawn and may also play a significant role in reionization. Until recently, the E>10 keV (hard X-ray) emission from these populations could only be studied for XRBs in our own galaxy, where it is often difficult to measure accurate distances and thus luminosities. We have observed M31 in 4 NuSTAR fields for a total exposure of 1.4 Ms, covering the young stellar population in a swath of the disk (within the footprint of the Panchromatic Hubble Andromeda Treasury (PHAT) Survey) and older populations in the bulge. We detected more than 100 sources in the 4-25 keV band, where hard-band (12-25 keV) emission has allowed us to discriminate between black holes and neutron stars in different accretion states. The luminosity function of the hard-band-detected sources is compared to Swift/BAT and INTEGRAL-derived luminosity functions of the Milky Way population, which reveals an excess of luminous sources in M31 when correcting for star formation rate and stellar mass.
DEFF Research Database (Denmark)
Marinakis, Yannis; Dounias, Georgios; Jantzen, Jan
2009-01-01
The term pap-smear refers to samples of human cells stained by the so-called Papanicolaou method. The purpose of the Papanicolaou method is to diagnose pre-cancerous cell changes before they progress to invasive carcinoma. In this paper a metaheuristic algorithm is proposed in order to classify t...... other previously applied intelligent approaches....
Improving Watershed-Scale Hydrodynamic Models by Incorporating Synthetic 3D River Bathymetry Network
Dey, S.; Saksena, S.; Merwade, V.
2017-12-01
Digital Elevation Models (DEMs) have an incomplete representation of river bathymetry, which is critical for simulating river hydrodynamics in flood modeling. Generally, DEMs are augmented with field collected bathymetry data, but such data are available only at individual reaches. Creating a hydrodynamic model covering an entire stream network in the basin requires bathymetry for all streams. This study extends a conceptual bathymetry model, River Channel Morphology Model (RCMM), to estimate the bathymetry for an entire stream network for application in hydrodynamic modeling using a DEM. It is implemented at two large watersheds with different relief and land use characterizations: coastal Guadalupe River basin in Texas with flat terrain and a relatively urban White River basin in Indiana with more relief. After bathymetry incorporation, both watersheds are modeled using HEC-RAS (1D hydraulic model) and Interconnected Pond and Channel Routing (ICPR), a 2-D integrated hydrologic and hydraulic model. A comparison of the streamflow estimated by ICPR at the outlet of the basins indicates that incorporating bathymetry influences streamflow estimates. The inundation maps show that bathymetry has a higher impact on flat terrains of Guadalupe River basin when compared to the White River basin.
Hobbs, Brian P.; Sargent, Daniel J.; Carlin, Bradley P.
2014-01-01
Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate alternative bias-variance trade-offs than those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model. PMID:24795786
Incorporating Pass-Phrase Dependent Background Models for Text-Dependent Speaker verification
DEFF Research Database (Denmark)
Sarkar, Achintya Kumar; Tan, Zheng-Hua
2018-01-01
In this paper, we propose pass-phrase dependent background models (PBMs) for text-dependent (TD) speaker verification (SV) to integrate the pass-phrase identification process into the conventional TD-SV system, where a PBM is derived from a text-independent background model through adaptation using...... and the selected PBM is then used for the log likelihood ratio (LLR) calculation with respect to the claimant model. The proposed method incorporates the pass-phrase identification step in the LLR calculation, which is not considered in conventional standalone TD-SV systems. The performance of the proposed method is compared to conventional text-independent background model based TD-SV systems using either Gaussian mixture model (GMM)-universal background model (UBM) or Hidden Markov model (HMM)-UBM or i-vector paradigms. In addition, we consider two approaches to build PBMs: speaker-independent and speaker......
Incorporating Mobility in Growth Modeling for Multilevel and Longitudinal Item Response Data.
Choi, In-Hee; Wilson, Mark
2016-01-01
Multilevel data often cannot be represented by the strict form of hierarchy typically assumed in multilevel modeling. A common example is the case in which subjects change their group membership in longitudinal studies (e.g., students transfer schools; employees transition between different departments). In this study, cross-classified and multiple membership models for multilevel and longitudinal item response data (CCMM-MLIRD) are developed to incorporate such mobility, focusing on students' school change in large-scale longitudinal studies. Furthermore, we investigate the effect of incorrectly modeling school membership in the analysis of multilevel and longitudinal item response data. Two types of school mobility are described, and corresponding models are specified. Results of the simulation studies suggested that appropriate modeling of the two types of school mobility using the CCMM-MLIRD yielded good recovery of the parameters and improvement over models that did not incorporate mobility properly. In addition, the consequences of incorrectly modeling the school effects on the variance estimates of the random effects and the standard errors of the fixed effects depended upon mobility patterns and model specifications. Two sets of large-scale longitudinal data are analyzed to illustrate applications of the CCMM-MLIRD for each type of school mobility.
Exact ground-state phase diagrams for the spin-3/2 Blume-Emery-Griffiths model
International Nuclear Information System (INIS)
Canko, Osman; Keskin, Mustafa; Deviren, Bayram
2008-01-01
We have calculated the exact ground-state phase diagrams of the spin-3/2 Ising model using the method that was proposed and applied to the spin-1 Ising model by Dublenych (2005 Phys. Rev. B 71 012411). The calculated exact ground-state phase diagrams on the diatomic and triangular lattices with the nearest-neighbor (NN) interaction are presented in this paper. We have obtained seven and 15 topologically different ground-state phase diagrams for J>0 and J<0, respectively; the conditions for the existence of uniform and intermediate phases have also been found.
Yanagi, Yuki; Yamashita, Yasufumi; Ueda, Kazuo
2012-12-01
The ferromagnetism of the checkerboard-lattice Hubbard model at quarter filling is one of the few exact ferromagnetic ground states known in the family of Hubbard models. When the nearest neighbor hopping, t1, is negligible compared with the second neighbor one, t2, the system reduces to a collection of Hubbard chains. We find that the 1D character is surprisingly robust as long as t1
Directory of Open Access Journals (Sweden)
Anandakumari Chandrasekharan Sunil Sekhar
2016-05-01
Full Text Available Ultra-small gold nanoparticles incorporated in mesoporous silica thin films with accessible pore channels perpendicular to the substrate are prepared by a modified sol-gel method. The simple and easy spin coating technique is applied here to make homogeneous thin films. Surface characterization using FESEM shows crack-free films with a perpendicular pore arrangement. The applicability of these thin films as catalysts, as well as robust SERS-active substrates for model catalysis studies, is tested. Compared to the bare silica film, our gold-incorporated silica film, GSM-23F, gave an enhancement factor of 10^3 for RhB with a 633 nm laser source. The reduction reaction of p-nitrophenol with sodium borohydride on our thin films shows a decrease in the peak intensity corresponding to the –NO2 group as time proceeds, confirming the catalytic activity. Such model surfaces can potentially bridge the material gap between a real catalytic system and surface science studies.
Microfluidic vascularized bone tissue model with hydroxyapatite-incorporated extracellular matrix.
Jusoh, Norhana; Oh, Soojung; Kim, Sudong; Kim, Jangho; Jeon, Noo Li
2015-10-21
Current in vitro systems mimicking bone tissues fail to fully integrate the three-dimensional (3D) microvasculature and bone tissue microenvironments, decreasing their similarity to in vivo conditions. Here, we propose 3D microvascular networks in a hydroxyapatite (HA)-incorporated extracellular matrix (ECM) for designing and manipulating a vascularized bone tissue model in a microfluidic device. Incorporation of HA of various concentrations resulted in ECM with varying mechanical properties. Sprouting angiogenesis was affected by mechanically modulated HA-extracellular matrix interactions, generating a model of vascularized bone microenvironment. Using this platform, we observed that hydroxyapatite enhanced angiogenic properties such as sprout length, sprouting speed, sprout number, and lumen diameter. This new platform integrates fibrin ECM with the synthetic bone mineral HA to provide in vivo-like microenvironments for bone vessel sprouting.
Modeling fraud detection and the incorporation of forensic specialists in the audit process
DEFF Research Database (Denmark)
Sakalauskaite, Dominyka
Financial statement audits are still comparatively poor in fraud detection. Forensic specialists can play a significant role in increasing audit quality. In this paper, based on prior academic research, I develop a model of fraud detection and the incorporation of forensic specialists in the audit process. The intention of the model is to identify the reasons why the audit is weak in fraud detection and to provide the analytical framework to assess whether the incorporation of forensic specialists can help to improve it. The results show that such specialists can potentially improve fraud detection in the audit, but might also cause some negative implications. Overall, even though fraud detection is one of the main topics in research, there are very few studies on how auditors co-operate with forensic specialists. Thus, the paper concludes with suggestions for further research.
Two-dimensional Heisenberg model with nonlinear interactions: 1/N corrections
International Nuclear Information System (INIS)
Caracciolo, Sergio; Mognetti, Bortolo Matteo; Pelissetto, Andrea
2005-01-01
We investigate a two-dimensional classical N-vector model with a generic nearest-neighbor interaction W(σ_i · σ_j) in the large-N limit, focusing on the finite-temperature transition point at which energy-energy correlations become critical. We show that this transition belongs to the Ising universality class. However, the width of the region in which Ising behavior is observed scales as 1/N^{3/2} along the magnetic direction and as 1/N in the thermal direction; outside this region a crossover to mean-field behavior occurs. This explains why only mean-field behavior is observed for N ≳ ...
The ground-state phase diagrams of the spin-3/2 Ising model
International Nuclear Information System (INIS)
Canko, Osman; Keskin, Mustafa
2003-01-01
The ground-state spin configurations are obtained for the spin-3/2 Ising model Hamiltonian with bilinear and biquadratic exchange interactions and a single-ion crystal field. The interactions are assumed to be only between nearest neighbors. The calculated ground-state phase diagrams are presented on diatomic lattices, such as the square, honeycomb and sc lattices, and the triangular lattice in the (Δ/z|J|, K/|J|) and (H/z|J|, K/|J|) planes.
Phase diagram of an extended Hubbard model with correlated hopping at half filling
Aligia, A. A.; Arrachea, Liliana; Gagliano, E. R.
1995-05-01
We study a generalized Hubbard model with on-site interaction U, nearest-neighbor repulsion V, and general correlated hopping under the condition that the number of doubly occupied sites is conserved. We find the exact ground state (GS) in arbitrary dimension in two wide regions of parameters. In one of them the GS is a Mott insulator (MI) and in the other it is a charge-density wave (CDW). The boundary of the MI and a large part of that of the CDW are determined exactly for relevant lattices. We study numerically the effect of relaxing the above-mentioned condition.
Arrachea, Liliana; Aligia, A. A.; Gagliano, E.
1996-02-01
We study the metal-insulator transition of a generalized Hubbard model in which the magnitude of the nearest-neighbor hopping depends on the occupations of the sites involved. Numerical results for finite chains at half-filling show that when 0 0 for which the system is metallic. This is consistent with a Hartree-Fock calculation. The metallic phase collapses to one point, U = 0, in the Hubbard limit. In the metallic phase we obtain that the superconducting correlations are the dominant ones, at least for doped systems.
The Aerosol Models in MODTRAN: Incorporating Selected Measurements From Northern Australia
2005-12-01
...of atmospheric aerosol obtained around Jabiru, N.T., in June and September 2003. These measurements are used to obtain theoretical multimode size... coefficients. The attenuation coefficients are then incorporated into MODTRAN and compared with the default aerosol models. Finally, the Jabiru aerosol is... centred on Jabiru in the Northern Territory, Australia. This environment is typical of a Northern Australian dry-season climate. Atmospheric transmission...
Multi-model inference for incorporating trophic and climate uncertainty into stock assessments
Ianelli, James; Holsman, Kirstin K.; Punt, André E.; Aydin, Kerim
2016-12-01
Ecosystem-based fisheries management (EBFM) approaches allow a broader and more extensive consideration of objectives than is typically possible with conventional single-species approaches. Ecosystem linkages may include trophic interactions and climate change effects on productivity for the relevant species within the system. Presently, models are evolving to include a comprehensive set of fishery and ecosystem information to address these broader management considerations. The increased scope of EBFM approaches is accompanied by a greater number of plausible models to describe the systems. This can lead to harvest recommendations and biological reference points that differ considerably among models. Model selection for projections (and specific catch recommendations) often occurs through a process that tends to adopt familiar, often simpler, models without considering those that incorporate more complex ecosystem information. Multi-model inference provides a framework that resolves this dilemma by providing a means of including information from alternative, often divergent models to inform biological reference points and possible catch consequences. We apply an example of this approach to data for three species of groundfish in the Bering Sea: walleye pollock, Pacific cod, and arrowtooth flounder using three models: 1) an age-structured "conventional" single-species model, 2) an age-structured single-species model with temperature-specific weight at age, and 3) a temperature-specific multi-species stock assessment model. The latter two approaches also include consideration of alternative future climate scenarios, adding another dimension to evaluate model projection uncertainty. We show how Bayesian model-averaging methods can be used to incorporate such trophic and climate information to broaden single-species stock assessments by using an EBFM approach that may better characterize uncertainty.
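The model-averaging step described above can be sketched as follows. This is a minimal illustration using AIC-based weights as a stand-in for full Bayesian model averaging; the log-likelihoods, parameter counts, and reference-point values are invented for the example, not the paper's actual fits:

```python
import math

def bma_weights(log_likelihoods, n_params):
    """Approximate model-averaging weights from AIC differences."""
    aic = [2 * k - 2 * ll for ll, k in zip(log_likelihoods, n_params)]
    best = min(aic)
    rel = [math.exp(-0.5 * (a - best)) for a in aic]  # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

def averaged_reference_point(estimates, weights):
    """Weighted average of a biological reference point across models."""
    return sum(e * w for e, w in zip(estimates, weights))

# Hypothetical fits for the three model types described above
ll = [-120.4, -118.9, -117.5]   # log-likelihoods (illustrative)
k = [8, 9, 14]                  # parameter counts (illustrative)
w = bma_weights(ll, k)
b40 = averaged_reference_point([0.52, 0.48, 0.41], w)  # reference-point proxies
```

The averaged reference point necessarily lies between the most extreme single-model estimates, which is the sense in which multi-model inference tempers divergent model outputs.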
Stochastic modelling of landfill processes incorporating waste heterogeneity and data uncertainty
International Nuclear Information System (INIS)
Zacharof, A.I.; Butler, A.P.
2004-01-01
A landfill is a very complex heterogeneous environment and as such it presents many modelling challenges. Attempts to develop models that reproduce these complexities generally involve the use of large numbers of spatially dependent parameters that cannot be properly characterised in the face of data uncertainty. An alternative method is presented, which couples a simplified microbial degradation model with a stochastic hydrological and contaminant transport model. This provides a framework for incorporating the complex effects of spatial heterogeneity within the landfill in a simplified manner, along with other key variables. A methodology for handling data uncertainty is also integrated into the model structure. Illustrative examples of the model's output are presented to demonstrate the effects of data uncertainty on leachate composition and gas volume prediction.
A Constitutive Model for Soft Clays Incorporating Elastic and Plastic Cross-Anisotropy.
Castro, Jorge; Sivasithamparam, Nallathamby
2017-05-25
Natural clays exhibit a significant degree of anisotropy in their fabric, which initially is derived from the shape of the clay platelets, deposition process and one-dimensional consolidation. Various authors have proposed anisotropic elastoplastic models involving an inclined yield surface to reproduce anisotropic behavior of plastic nature. This paper presents a novel constitutive model for soft structured clays that includes anisotropic behavior both of elastic and plastic nature. The new model incorporates stress-dependent cross-anisotropic elastic behavior within the yield surface using three independent elastic parameters because natural clays exhibit cross-anisotropic (or transversely isotropic) behavior after deposition and consolidation. Thus, the model only incorporates an additional variable with a clear physical meaning, namely the ratio between horizontal and vertical stiffnesses, which can be analytically obtained from conventional laboratory tests. The model does not consider evolution of elastic anisotropy, but laboratory results show that large strains are necessary to cause noticeable changes in elastic anisotropic behavior. The model is able to capture initial non-vertical effective stress paths for undrained triaxial tests and to predict deviatoric strains during isotropic loading or unloading.
International Nuclear Information System (INIS)
Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.
2006-01-01
Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
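A minimal sketch of the kind of Markov reliability model described above, reduced to a single repairable component with two states (operational, failed). The per-step failure and repair probabilities are illustrative assumptions, not values from the feedwater control system study:

```python
# Discrete-time Markov model: state 0 = operational, state 1 = failed.
def step(dist, p_fail, p_repair):
    """Advance the state-probability distribution by one time step."""
    ok, failed = dist
    return (ok * (1 - p_fail) + failed * p_repair,
            ok * p_fail + failed * (1 - p_repair))

dist = (1.0, 0.0)  # start fully operational
for _ in range(1000):
    dist = step(dist, p_fail=0.01, p_repair=0.1)
# the distribution converges to the steady state
# (p_repair/(p_fail+p_repair), p_fail/(p_fail+p_repair))
```

Chaining such per-component chains with coupling between components is what lets Markov models capture the statistical dependence between failure events that static fault trees cannot.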
Directory of Open Access Journals (Sweden)
Wang Yanqing
2016-03-01
Full Text Available A good assignment of code reviewers can effectively utilize intellectual resources, assure code quality, and improve programmers' skills in software development. However, little research on reviewer assignment for code review has been done. In this study, a code reviewer assignment model is created based on participants' preferences regarding review assignments. With a constraint on the smallest size of a review group, the model is optimized to maximize review outcomes and avoid the negative impact of a "mutual admiration society". This study shows that reviewer assignment strategies incorporating either the reviewers' preferences or the authors' preferences yield much greater improvement than a random assignment. The strategy incorporating authors' preferences yields higher improvement than the one incorporating reviewers' preferences. However, when the reviewers' and authors' preference matrices are merged, the improvement becomes moderate. The study indicates that the majority of the participants have a strong wish to work with the reviewers and authors of highest competence. If we want to satisfy the preferences of both reviewers and authors at the same time, the overall improvement of learning outcomes may not be the best.
Incorporation of UK Met Office's radiation scheme into CPTEC's global model
Chagas, Júlio C. S.; Barbosa, Henrique M. J.
2009-03-01
Current parameterization of radiation in the CPTEC's (Center for Weather Forecast and Climate Studies, Cachoeira Paulista, SP, Brazil) operational AGCM has its origins in the work of Harshvardhan et al. (1987) and uses the formulation of Ramaswamy and Freidenreich (1992) for the short-wave absorption by water vapor. The UK Met Office's radiation code (Edwards and Slingo, 1996) was incorporated into CPTEC's global model, initially for short-wave only, and some impacts of that were shown by Chagas and Barbosa (2006). Current paper presents some impacts of the complete incorporation (both short-wave and long-wave) of UK Met Office's scheme. Selected results from off-line comparisons with line-by-line benchmark calculations are shown. Impacts on the AGCM's climate are assessed by comparing output of climate runs of current and modified AGCM with products from GEWEX/SRB (Surface Radiation Budget) project.
Global dynamics of a PDE model for Aedes aegypti mosquitoes incorporating female sexual preference
Parshad, Rana
2011-01-01
In this paper we study the long-time dynamics of a reaction-diffusion system describing the spread of Aedes aegypti mosquitoes, which are the primary cause of dengue infection. The system incorporates a control attempt via the sterile insect technique. The model incorporates female mosquitoes' sexual preference for wild males over sterile males. We show global existence of a strong solution for the system. We then derive uniform estimates to prove the existence of a global attractor in L^2(Ω) for the system. The attractor is shown to be L^∞(Ω) regular and to possess a state of extinction if the injection of sterile males is large enough. We also provide upper bounds on the Hausdorff and fractal dimensions of the attractor.
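The extinction mechanism can be caricatured with a crude ODE (not the paper's PDE system): wild-male reproduction is diluted by the sterile population through female mating choice, so a large enough sterile injection drives the wild population to zero. Every parameter value below is an assumption chosen for illustration:

```python
def simulate_wild_males(W0, S, birth=1.0, death=0.5, gamma=1.0,
                        dt=0.1, steps=2000):
    """Euler integration of dW/dt = W * (birth * W / (W + gamma*S) - death).

    W / (W + gamma*S) mimics the probability that a female mates a wild
    rather than a sterile male; gamma < 1 would model a stronger female
    preference for wild males (weakening the control)."""
    W = W0
    for _ in range(steps):
        pool = W + gamma * S
        mating_success = birth * W / pool if pool > 0 else 0.0
        W += dt * W * (mating_success - death)
        W = max(W, 0.0)  # population cannot go negative
    return W

few_sterile = simulate_wild_males(W0=10.0, S=1.0)     # population persists
many_sterile = simulate_wild_males(W0=10.0, S=100.0)  # population collapses
```

The threshold behavior (persistence versus extinction depending on S) is the ODE analogue of the attractor's state of extinction for sufficiently large sterile injection.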
Darradi, R.; Richter, J.; Farnell, D. J. J.
2004-01-01
We investigate the phase diagram of the Heisenberg antiferromagnet on the square lattice with two different nearest-neighbor bonds $J$ and $J'$ ($J$-$J'$ model) at zero temperature. The model exhibits a quantum phase transition at a critical value $J'_c > J$ between a semi-classically ordered N\\'eel and a magnetically disordered quantum paramagnetic phase of valence-bond type, which is driven by local singlet formation on $J'$ bonds. We study the influence of spin quantum number $s$ on this p...
Aryanpour, K.; Pickett, W. E.; Scalettar, R. T.
2006-01-01
We employ dynamical mean field theory (DMFT) with a Quantum Monte Carlo (QMC) atomic solver to investigate the finite temperature Mott transition in the Hubbard model with the nearest neighbor hopping on a triangular lattice at half-filling. We estimate the value of the critical interaction to be $U_c=12.0 \\pm 0.5$ in units of the hopping amplitude $t$ through the evolution of the magnetic moment, spectral function, internal energy and specific heat as the interaction $U$ and temperature $T$ ...
Incorporation of the Driver’s Personality Profile in an Agent Model
Directory of Open Access Journals (Sweden)
Mian Muhammad Mubasher
2015-12-01
Full Text Available Urban traffic flow is a complex system. The behavior of an individual driver can have a butterfly effect that becomes the root cause of an emergent phenomenon such as congestion or an accident. The interaction of drivers with each other and with the surrounding environment forms the dynamics of traffic flow. Hence the global effects of traffic flow depend upon the behavior of each individual driver. Because driver models have several applications in serious games, urban traffic planning, and simulations, the study of a realistic driver model is important; hence cognitive models of a driver agent are required. In order to address this challenge, concepts from cognitive science and psychology are employed to design a computational model of driver cognition that is capable of incorporating law abidance and social norms using the big five personality profile.
A local non-parametric model for trade sign inference
Blazejewski, Adam; Coggins, Richard
2005-03-01
We investigate a regularity in market order submission strategies for 12 stocks with large market capitalization on the Australian Stock Exchange. The regularity is evidenced by a predictable relationship between the trade sign (trade initiator), size of the trade, and the contents of the limit order book before the trade. We demonstrate this predictability by developing an empirical inference model to classify trades into buyer-initiated and seller-initiated. The model employs a local non-parametric method, k-nearest neighbor, which in the past was used successfully for chaotic time series prediction. The k-nearest neighbor with three predictor variables achieves an average out-of-sample classification accuracy of 71.40%, compared to 63.32% for the linear logistic regression with seven predictor variables. The result suggests that a non-linear approach may produce a more parsimonious trade sign inference model with a higher out-of-sample classification accuracy. Furthermore, for most of our stocks the observed regularity in market order submissions seems to have a memory of at least 30 trading days.
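The classification step can be sketched with a small pure-Python k-nearest-neighbor classifier. The three predictor variables and the toy order-book records below are illustrative assumptions, not the paper's dataset or its actual feature definitions:

```python
import math
from collections import Counter

def knn_classify(train_X, train_y, x, k=3):
    """Classify a trade as buyer-initiated (+1) or seller-initiated (-1)
    by majority vote among the k nearest training points (Euclidean)."""
    dists = sorted(
        (math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y)
    )
    votes = Counter(yi for _, yi in dists[:k])
    return votes.most_common(1)[0][0]

# Toy predictors: (trade size, bid depth, ask depth) -- illustrative only
X = [(100, 5, 1), (120, 6, 2), (90, 4, 1),
     (80, 1, 5), (70, 2, 6), (60, 1, 4)]
y = [+1, +1, +1, -1, -1, -1]
sign = knn_classify(X, y, (95, 5, 2))  # majority of neighbors are buys
```

Because k-NN fits no global functional form, it can pick up the local, non-linear structure in order-book state that a linear logistic regression misses, which is the paper's explanation for its higher out-of-sample accuracy.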
Self-Organized Criticality in an Anisotropic Earthquake Model
Li, Bin-Quan; Wang, Sheng-Jun
2018-03-01
We have made an extensive numerical study of a modified model proposed by Olami, Feder, and Christensen to describe earthquake behavior. Two situations were considered in this paper. In one, the energy of the unstable site is redistributed to its nearest neighbors randomly, rather than equally, and the site itself is reset to zero. In the other, the energy of the unstable site is redistributed to its nearest neighbors randomly and the site keeps some energy for itself instead of being reset to zero. Different boundary conditions were considered as well. By analyzing the distribution of earthquake sizes, we found that self-organized criticality can be excited only in the conservative case or the approximately conservative case in the above situations. Some evidence indicated that the critical exponents of both of the above situations and of the original OFC model tend to the same result in the conservative case. The only difference is that the avalanche size in the original model is bigger. This result may be closer to the real world; after all, every crustal plate size is different. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5
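The first redistribution rule (random, unequal shares to nearest neighbors, with the toppled site reset to zero) can be sketched as below. The grid size and the conservation parameter alpha are illustrative; alpha = 1 is the conservative case in which total energy is preserved for interior sites:

```python
import random

def topple(grid, i, j, alpha=1.0, rng=random):
    """Redistribute the unstable site's energy to its nearest neighbors
    in random (not equal) shares, then reset the site to zero."""
    n = len(grid)
    e = grid[i][j]
    grid[i][j] = 0.0
    nbrs = [(i + di, j + dj)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= i + di < n and 0 <= j + dj < n]
    shares = [rng.random() for _ in nbrs]   # random, non-equal portions
    total = sum(shares)
    for (x, y), s in zip(nbrs, shares):
        grid[x][y] += alpha * e * s / total

rng = random.Random(0)
grid = [[1.0] * 5 for _ in range(5)]
topple(grid, 2, 2, alpha=1.0, rng=rng)  # conservative: total energy unchanged
```

In a full simulation this step would be applied repeatedly until no site exceeds the threshold, and the avalanche size (number of topplings) recorded to build the size distribution analyzed in the paper.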
Reyes, J. J.; Liu, M.; Tague, C.; Choate, J. S.; Evans, R. D.; Johnson, K. A.; Adam, J. C.
2013-12-01
Rangelands provide an opportunity to investigate the coupled feedbacks between human activities and natural ecosystems. These areas comprise at least one-third of the Earth's surface and provide ecological support for birds, insects, wildlife and agricultural animals including grazing lands for livestock. Capturing the interactions among water, carbon, and nitrogen cycles within the context of regional scale patterns of climate and management is important to understand interactions, responses, and feedbacks between rangeland systems and humans, as well as provide relevant information to stakeholders and policymakers. The overarching objective of this research is to understand the full consequences, intended and unintended, of human activities and climate over time in rangelands by incorporating dynamics related to rangeland management into an eco-hydrologic model that also incorporates biogeochemical and soil processes. Here we evaluate our model over ungrazed and grazed sites for different rangeland ecosystems. The Regional Hydro-ecologic Simulation System (RHESSys) is a process-based, watershed-scale model that couples water with carbon and nitrogen cycles. Climate, soil, vegetation, and management effects within the watershed are represented in a nested landscape hierarchy to account for heterogeneity and the lateral movement of water and nutrients. We incorporated a daily time-series of plant biomass loss from rangeland to represent grazing. The TRY Plant Trait Database was used to parameterize genera of shrubs and grasses in different rangeland types, such as tallgrass prairie, Intermountain West cold desert, and shortgrass steppe. In addition, other model parameters captured the reallocation of carbon and nutrients after grass defoliation. Initial simulations were conducted at the Curlew Valley site in northern Utah, a former International Geosphere-Biosphere Programme Desert Biome site. We found that grasses were most sensitive to model parameters affecting
Incorporating microbiota data into epidemiologic models: examples from vaginal microbiota research.
van de Wijgert, Janneke H; Jespers, Vicky
2016-05-01
Next generation sequencing and quantitative polymerase chain reaction technologies are now widely available, and research incorporating these methods is growing exponentially. In the vaginal microbiota (VMB) field, most research to date has been descriptive. The purpose of this article is to provide an overview of different ways in which next generation sequencing and quantitative polymerase chain reaction data can be used to answer clinical epidemiologic research questions using examples from VMB research. We reviewed relevant methodological literature and VMB articles (published between 2008 and 2015) that incorporated these methodologies. VMB data have been analyzed using ecologic methods, methods that compare the presence or relative abundance of individual taxa or community compositions between different groups of women or sampling time points, and methods that first reduce the complexity of the data into a few variables followed by the incorporation of these variables into traditional biostatistical models. To make future VMB research more clinically relevant (such as studying associations between VMB compositions and clinical outcomes and the effects of interventions on the VMB), it is important that these methods are integrated with rigorous epidemiologic methods (such as appropriate study designs, sampling strategies, and adjustment for confounding). Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Modified permeability modeling of coal incorporating sorption-induced matrix shrinkage
Soni, Aman
The variation in the cleat permeability of coalbed methane (CBM) reservoirs is attributed primarily to two cardinal processes, with opposing effects. Increase in effective stresses with reduction in pore pressure tends to decrease the cleat permeability, whereas the sorption-induced coal matrix shrinkage actuates reduction in the effective stresses which increases the reservoir permeability. The net effect of the two processes determines the pressure-dependent-permeability and, hence, the overall trend of CBM production with depletion. Several analytical models have been developed and used to predict the dynamic behavior of CBM reservoir permeability during production through pressure depletion, all based on combining the two effects. The purpose of this study was to introduce modifications to two most commonly used permeability models, namely the Palmer and Mansoori, and Shi and Durucan, for permeability variation and evaluate their performance when projecting gas production. The basis for the modification is the linear relationship between the volume of sorbed gas and the associated matrix shrinkage. Hence, the impact of matrix shrinkage is incorporated as a function of the amount of gas produced, or that remaining in coal, at any time during production. Since the exact production from a reservoir is known throughout its life, this significantly simplifies the process of permeability modeling. Furthermore, the modification is also expected to streamline the process of modeling by classifying the shrinkage parameters for coals of different regions, but with similar characteristics. A good analogy is the San Juan basin, where sorption characteristics of coal are so well understood and defined that operators no longer carry out laboratory sorption work. The goal is to achieve the same for incorporation of the matrix shrinkage behavior. Another modification is to incorporate the matrix, or grain, compressibility effect of coal as a correction factor in the Shi and
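The modification described can be sketched as a cubic (Palmer-and-Mansoori-style) porosity-permeability relation in which the shrinkage strain is written as a linear function of cumulative gas produced. The functional form and every parameter value below are illustrative assumptions, not the thesis's calibrated model:

```python
def permeability_ratio(p, p0, phi0, c_m, eps_max, G_p, G_total):
    """Sketch of a modified cubic permeability model in which the
    matrix-shrinkage strain is proportional to cumulative gas produced
    (G_p) rather than expressed through a pressure (Langmuir) term."""
    compaction = c_m * (p - p0)             # stress effect: negative on drawdown
    shrinkage = eps_max * (G_p / G_total)   # shrinkage grows with gas produced
    phi = phi0 + compaction + shrinkage     # net porosity change
    return (phi / phi0) ** 3                # cubic porosity-permeability law

# Early vs. late in depletion (illustrative numbers)
early = permeability_ratio(p=900, p0=1000, phi0=0.01, c_m=1e-5,
                           eps_max=0.003, G_p=0.1, G_total=1.0)
late = permeability_ratio(p=200, p0=1000, phi0=0.01, c_m=1e-5,
                          eps_max=0.003, G_p=0.9, G_total=1.0)
```

Writing shrinkage in terms of gas produced is what makes the approach convenient in practice: cumulative production is known throughout a well's life, whereas sorption-strain curves must otherwise be measured in the laboratory.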
Shugar, Andrea
2017-04-01
Genetic counselors are trained health care professionals who effectively integrate both psychosocial counseling and information-giving into their practice. Preparing genetic counseling students for clinical practice is a challenging task, particularly when helping them develop effective and active counseling skills. Resistance to incorporating these skills may stem from decreased confidence, fear of causing harm, or a lack of clarity of psychosocial goals. The author reflects on the personal challenges experienced in teaching genetic counseling students to work with psychological and social complexity, and proposes a Genetic Counseling Adaptation Continuum model and methodology to guide students in the use of advanced counseling skills.
Drzewiecki, Wojciech
2016-12-01
In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on the obtained results the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time-point assessments. In the case of imperviousness change assessment, the ensembles always outperformed single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
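At its core, a heterogeneous model ensemble of this kind averages the member models' predictions. The stand-in "models" below are fixed constants chosen only to illustrate how weakly correlated individual errors can cancel; they are not the paper's trained regressors:

```python
def ensemble_predict(models, x):
    """Average the predictions of several heterogeneous regressors."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Stand-in "models" with different biases around a true value of 0.40
m1 = lambda x: 0.37   # underestimates the imperviousness fraction
m2 = lambda x: 0.44   # overestimates it
m3 = lambda x: 0.39   # slight underestimate
pred = ensemble_predict([m1, m2, m3], x=None)
err = abs(pred - 0.40)  # smaller than any single model's error here
```

When member errors point in different directions, the averaged prediction can beat every member, which is consistent with the paper's finding that ensembles helped most for change assessment, where error correlation matters.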
Incorporation of detailed eye model into polygon-mesh versions of ICRP-110 reference phantoms.
Nguyen, Thang Tat; Yeom, Yeon Soo; Kim, Han Sung; Wang, Zhao Jun; Han, Min Cheol; Kim, Chan Hyeong; Lee, Jai Ki; Zankl, Maria; Petoussi-Henss, Nina; Bolch, Wesley E; Lee, Choonsik; Chung, Beom Sun
2015-11-21
The dose coefficients for the eye lens reported in ICRP 2010 Publication 116 were calculated using both a stylized model and the ICRP-110 reference phantoms, according to the type of radiation, energy, and irradiation geometry. To maintain consistency of lens dose assessment, in the present study we incorporated the ICRP-116 detailed eye model into the converted polygon-mesh (PM) version of the ICRP-110 reference phantoms. After the incorporation, the dose coefficients for the eye lens were calculated and compared with the ICRP-116 data. The results showed generally good agreement between the newly calculated lens dose coefficients and the values of ICRP 2010 Publication 116. Significant differences were found for some irradiation cases, due mainly to the use of different types of phantoms. Considering that the PM version of the ICRP-110 reference phantoms preserves the original topology of the ICRP-110 reference phantoms, it is believed that the PM version phantoms, along with the detailed eye model, provide more reliable and consistent dose coefficients for the eye lens.
McKenna, M. H.; Alter, R. E.; Swearingen, M. E.; Wilson, D. K.
2017-12-01
Many larger sources, such as volcanic eruptions and nuclear detonations, produce infrasound (acoustic waves with a frequency lower than humans can hear, namely 0.1-20 Hz) that can propagate over global scales. But many smaller infrastructure sources, such as bridges, dams, and buildings, also produce infrasound, though with a lower amplitude that tends to propagate only over regional scales (up to 150 km). In order to accurately calculate regional-scale infrasound propagation, we have incorporated high-resolution, three-dimensional forecasts from the Weather Research and Forecasting (WRF) meteorological model into a signal propagation modeling system called Environmental Awareness for Sensor and Emitter Employment (EASEE), developed at the US Army Engineer Research and Development Center. To quantify the improvement of infrasound propagation predictions with more realistic weather data, we conducted sensitivity studies with different propagation ranges and horizontal resolutions and compared them to default predictions with no weather model data. We describe the process of incorporating WRF output into EASEE for conducting these acoustic propagation simulations and present the results of the aforementioned sensitivity studies.
DEFF Research Database (Denmark)
Köster, Fritz; Hinrichsen, H.H.; St. John, Michael
2001-01-01
We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing survival of early life stages and varying systematically among spawning sites were incorporated into stock-recruitment models, first for major cod spawning sites and then combined for the entire Central Baltic. Variables identified included potential egg production by the spawning stock, abiotic conditions affecting survival of eggs, predation by clupeids on eggs, larval transport, and cannibalism. Results showed that recruitment in the most important spawning area, the Bornholm Basin, during 1976-1995 was related to egg production; however, other factors affecting survival of the eggs (oxygen conditions, predation) were also significant and, when incorporated, explained 69% of the variation in 0-group recruitment. In other spawning areas, variable hydrographic conditions did not allow for regular successful egg development. Hence, relatively simple models proved sufficient to predict recruitment of 0-group...
Murphy, Kelly E.
2012-01-13
Fibroblasts and their activated phenotype, myofibroblasts, are the primary cell types involved in the contraction associated with dermal wound healing. Recent experimental evidence indicates that the transformation from fibroblasts to myofibroblasts involves two distinct processes: the cells are stimulated to change phenotype by the combined actions of transforming growth factor β (TGFβ) and mechanical tension. This observation indicates a need for a detailed exploration of the effect of the strong interactions between mechanical changes and growth factors in dermal wound healing. We review the experimental findings in detail and develop a model of dermal wound healing that incorporates these phenomena. Our model includes the interactions between TGFβ and collagenase, providing a more biologically realistic form for the growth factor kinetics than those included in previous mechanochemical descriptions. A comparison is made between the model predictions and experimental data on human dermal wound healing, and all the essential features are well matched. © 2012 Society for Mathematical Biology.
Barnett, Tony; Fournié, Guillaume; Gupta, Sunetra; Seeley, Janet
2015-01-01
Incorporation of 'social' variables into epidemiological models remains a challenge. Too much detail and models cease to be useful; too little and the very notion of infection - a highly social process in human populations - may be considered with little reference to the social. The French sociologist Émile Durkheim proposed that the scientific study of society required identification and study of 'social currents'. Such 'currents' are what we might today describe as 'emergent properties', specifiable variables appertaining to individuals and groups, which represent the perspectives of social actors as they experience the environment in which they live their lives. Here we review the ways in which one particular emergent property, hope, relevant to a range of epidemiological situations, might be used in epidemiological modelling of infectious diseases in human populations. We also indicate how such an approach might be extended to include a range of other potential emergent properties to represent complex social and economic processes bearing on infectious disease transmission.
Incorporating remote sensing-based ET estimates into the Community Land Model version 4.5
Wang, Dagang; Wang, Guiling; Parr, Dana T.; Liao, Weilin; Xia, Youlong; Fu, Congsheng
2017-07-01
Land surface models bear substantial biases in simulating surface water and energy budgets despite the continuous development and improvement of model parameterizations. To reduce model biases, Parr et al. (2015) proposed a method incorporating satellite-based evapotranspiration (ET) products into land surface models. Here we apply this bias correction method to the Community Land Model version 4.5 (CLM4.5) and test its performance over the conterminous US (CONUS). We first calibrate a relationship between the observational ET from the Global Land Evaporation Amsterdam Model (GLEAM) product and the model ET from CLM4.5, and assume that this relationship holds beyond the calibration period. During the validation or application period, a simulation using the default CLM4.5 (CLM) is conducted first, and its output is combined with the calibrated observational-vs.-model ET relationship to derive a corrected ET; an experiment (CLMET) is then conducted in which the model-generated ET is overwritten with the corrected ET. Using the observations of ET, runoff, and soil moisture content as benchmarks, we demonstrate that CLMET greatly improves the hydrological simulations over most of the CONUS, and the improvement is stronger in the eastern CONUS than the western CONUS and is strongest over the Southeast CONUS. For any specific region, the degree of the improvement depends on whether the relationship between observational and model ET remains time-invariant (a fundamental hypothesis of the Parr et al. (2015) method) and whether water is the limiting factor in places where ET is underestimated. While the bias correction method improves hydrological estimates without improving the physical parameterization of land surface models, results from this study do provide guidance for physically based model development effort.
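The calibrate-then-apply idea above can be illustrated with a toy linear fit between model ET and observed ET; the linear form, the least-squares fit, and all numbers here are assumptions for illustration, not the actual GLEAM/CLM relationship used by Parr et al. (2015).

```python
# Sketch of the bias-correction idea: fit a linear relationship between
# model ET and observed ET over a calibration period, assume it holds
# later, and use it to correct model ET in the application period.
# All values (mm/day) are invented for illustration.

def fit_linear(x, y):
    """Ordinary least squares for y = a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Calibration period: model ET vs observed (e.g. GLEAM-like) ET
model_et = [1.0, 2.0, 3.0, 4.0]
obs_et   = [1.2, 2.1, 3.0, 3.9]

a, b = fit_linear(model_et, obs_et)

# Application period: correct new model ET values with the fitted relation
corrected = [a * et + b for et in [2.5, 3.5]]
print([round(c, 2) for c in corrected])
```

The corrected ET would then overwrite the model-generated ET (the CLMET experiment), which is why the method hinges on the fitted relationship staying time-invariant.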
Incorporating herd immunity effects into cohort models of vaccine cost-effectiveness.
Bauch, Chris T; Anonychuk, Andrea M; Van Effelterre, Thierry; Pham, Bá Z; Merid, Maraki Fikre
2009-01-01
Cohort models are often used in cost-effectiveness analysis (CEA) of vaccination. However, because they cannot capture herd immunity effects, cohort models underestimate the reduction in incidence caused by vaccination. Dynamic models capture herd immunity effects but are often not adopted in vaccine CEA. The objective was to develop a pseudo-dynamic approximation that can be incorporated into an existing cohort model to capture herd immunity effects. The authors approximated the changing force of infection due to universal vaccination for a pediatric infectious disease. The projected lifetime cases in a cohort were compared under 1) a cohort model, 2) a cohort model with pseudo-dynamic approximation, and 3) an age-structured susceptible-exposed-infectious-recovered compartmental (dynamic) model. The authors extended the methodology to sexually transmitted infections. For average to high values of vaccine coverage (P > 60%) and small to average values of the basic reproduction number R0 (as in vaccination programs for many common infections), the pseudo-dynamic approximation significantly improved projected lifetime cases and was close to projections of the full dynamic model. For large values of R0 (R0 > 15), projected lifetime cases were similar under the dynamic model and the cohort model, both with and without the pseudo-dynamic approximation. The approximation captures changes in the mean age at infection in the first vaccinated cohort. This methodology allows for preliminary assessment of herd immunity effects on CEA of universal vaccination for pediatric infectious diseases. The method requires simple adjustments to an existing cohort model and less data than a full dynamic model.
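For intuition only, the textbook endemic-SIR relations behind herd immunity (not the authors' pseudo-dynamic approximation itself) can be sketched as follows; the functional forms are standard approximations and all parameter values are hypothetical.

```python
# Textbook endemic-SIR approximations, used here only to illustrate why
# a static cohort model misses herd immunity: vaccination at coverage p
# lowers the force of infection, which raises the mean age at infection
# among the remaining susceptibles.

def force_of_infection(r0, coverage, mu):
    """Approximate endemic force of infection under vaccination.
    mu is the birth/death rate (1/life expectancy in years);
    returns 0.0 if the effective reproduction number drops below 1."""
    r_eff = r0 * (1.0 - coverage)
    return max(mu * (r_eff - 1.0), 0.0)

def mean_age_at_infection(foi):
    """Mean age at infection ~ 1/lambda (infinite if eliminated)."""
    return float("inf") if foi == 0.0 else 1.0 / foi

mu = 1.0 / 75.0  # hypothetical life expectancy of 75 years
before = force_of_infection(r0=10.0, coverage=0.0, mu=mu)
after = force_of_infection(r0=10.0, coverage=0.7, mu=mu)
print(mean_age_at_infection(before), mean_age_at_infection(after))
```

A cohort model with a fixed force of infection would miss exactly this shift in the mean age at infection, which is what the pseudo-dynamic approximation is designed to capture.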
Incorporation of mantle effects in lithospheric stress modeling: the Eurasian plate
Ruckstuhl, K.; Wortel, M. J. R.; Govers, R.; Meijer, P.
2009-04-01
The intraplate stress field is the result of forces acting on the lithosphere and as such contains valuable information on the dynamics of plate tectonics. Studies modeling the intraplate stress field have followed two different approaches, with the emphasis either on the lithosphere itself or on the underlying convecting mantle. For most tectonic plates on Earth, one or both methods have been quite successful in reproducing the large-scale stress field. The Eurasian plate, however, has remained a challenge. A probable cause is that, due to the complexity of the plate, successful models require both an active mantle and well-defined boundary forces. We therefore construct a model for the Eurasian plate in which we combine both modeling approaches by incorporating the effects of an active mantle in a model based on a lithospheric approach, where boundary forces are modeled explicitly. The assumption that the whole plate is in dynamical equilibrium allows for imposing a torque balance on the plate, which provides extra constraints on the forces that cannot be calculated a priori. Mantle interaction is modeled as a shear at the base of the plate obtained from global mantle flow models from the literature. A first-order approximation of the increased excess pressure of the anomalous ridge near the Iceland hotspot is incorporated. Results are evaluated by comparison with World Stress Map data. Direct incorporation of the sublithospheric stresses from mantle flow modeling in our force model is not possible, due to a discrepancy of around one order of magnitude between the magnitudes of the integrated mantle shear and the lithospheric forces, prohibiting balance of the torques. This magnitude discrepancy is a well-known fundamental problem in geodynamics, and we choose to close the gap between the two approaches by scaling down the absolute magnitude of the sublithospheric stresses. Becker and O'Connell (G3, 2, 2001) showed that various mantle flow models show a considerable spread in...
Alizadeh, Siamak; Sriramula, Srinivas
2017-11-01
Redundant safety systems are commonly used in the process industry to respond to hazardous events. In redundant systems composed of identical units, Common Cause Failures (CCFs) can significantly influence system performance with regard to reliability and safety; however, their impact has often been overlooked due to the inherent complexity of modelling common cause induced failures. This article develops a reliability model for a redundant safety system using a Markov analysis approach. The proposed model incorporates process demands in conjunction with CCF for the first time and evaluates their impact on the reliability quantification of safety systems without automatic diagnostics. The reliability of the Markov model is quantified by considering the Probability of Failure on Demand (PFD) as a measure for low-demand systems. The safety performance of the model is analysed using the Hazardous Event Frequency (HEF) to evaluate the frequency of entering a hazardous state that will lead to an accident if the situation is not controlled. The utilisation of the Markov model is demonstrated for a simple case study of a pressure protection system, and it is shown that the proposed approach gives sufficiently accurate results for all demand rates, durations, component failure rates, and corresponding repair rates for the low-demand mode of operation. The Markov model proposed in this paper assumes the absence of automatic diagnostics, along with a multiple-stage repair strategy for CCFs and restoration of the system from the hazardous state to the "as good as new" state. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
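A minimal Markov sketch of the PFD calculation can be given for a 1oo2 redundant pair with a beta-factor CCF split. The three-state structure, single-stage repair, and all rates below are simplifications and invented values; the paper's model (with process demand states and multi-stage CCF repair) is considerably richer.

```python
# Sketch: 3-state Markov model for a 1oo2 redundant pair with common
# cause failures (beta-factor split), integrated with forward Euler.
# State 0: both units working; 1: one failed; 2: both failed (PFD state).
# All rates (per hour) are illustrative, not from the paper.

def pfd_1oo2(lam, beta, mu, t_end, dt):
    """Average probability of failure on demand over [0, t_end]."""
    p = [1.0, 0.0, 0.0]
    acc = 0.0
    for _ in range(int(t_end / dt)):
        to1 = 2.0 * (1.0 - beta) * lam * p[0]   # one unit fails independently
        ccf = beta * lam * p[0]                  # common cause: both fail
        to2 = lam * p[1]                         # remaining unit fails
        rep1 = mu * p[1]                         # repair from one-failed
        rep2 = mu * p[2]                         # repair from both-failed
        p[0] += dt * (rep1 - to1 - ccf)
        p[1] += dt * (to1 + rep2 - to2 - rep1)
        p[2] += dt * (ccf + to2 - rep2)
        acc += p[2] * dt
    return acc / t_end

print(pfd_1oo2(lam=1e-4, beta=0.1, mu=0.125, t_end=8760.0, dt=0.5))
```

With these rates the CCF path dominates the unavailability, which is why the beta fraction matters so much for redundant architectures.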
Fuzzy Logic-Based Model That Incorporates Personality Traits for Heterogeneous Pedestrians
Directory of Open Access Journals (Sweden)
Zhuxin Xue
2017-10-01
Most models designed to simulate pedestrian dynamical behavior are based on the assumption that human decision-making can be described using precise values. This study proposes a new pedestrian model that incorporates fuzzy logic theory into a multi-agent system to address cognitive behavior that introduces uncertainty and imprecision during decision-making. We present a concept of decision preferences to represent the intrinsic control factors of decision-making. To realize the different decision preferences of heterogeneous pedestrians, the Five-Factor (OCEAN) personality model is introduced to model the psychological characteristics of individuals. Then, a fuzzy logic-based approach is adopted for mapping the relationships between the personality traits and the decision preferences. Finally, we have developed an application using our model to simulate pedestrian dynamical behavior in several normal or non-panic scenarios, including a single-exit room, a hallway with obstacles, and a narrowing passage. The effectiveness of the proposed model is validated with a user study. The results show that the proposed model can generate more reasonable and heterogeneous behavior in the simulation and indicate that individual personality has a noticeable effect on pedestrian dynamical behavior.
Incorporating vehicle mix in stimulus-response car-following models
Directory of Open Access Journals (Sweden)
Saidi Siuhi
2016-06-01
The objective of this paper is to incorporate vehicle mix in stimulus-response car-following models. Separate models were estimated for acceleration and deceleration responses to account for vehicle mix via both movement state and vehicle type. For each model, three sub-models were developed for different pairs of following vehicles: "automobile following automobile," "automobile following truck," and "truck following automobile." The estimated model parameters were then validated against other data from a similar region and roadway. The results indicated that drivers' behaviors were significantly different among the different pairs of following vehicles, and that the magnitudes of the estimated parameters depend on the type of vehicle being driven and/or followed. These results demonstrate the need to use separate models depending on movement state and vehicle type. The differences in parameter estimates confirmed in this paper highlight traffic safety and operational issues of mixed traffic operation on a single lane. The findings of this paper can assist transportation professionals in improving the traffic simulation models used to evaluate the impact of different strategies on the safety and performance of highways. In addition, driver response time lag estimates can be used in roadway design to calculate important design parameters such as stopping sight distance on horizontal and vertical curves for both automobiles and trucks.
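A common stimulus-response form is the GHR model, a = α · v^m · Δv / Δx^l, with sensitivities that can differ between acceleration and deceleration and between vehicle pairs. The functional form is standard, but the parameter values below are invented for illustration and are not the paper's calibrated estimates.

```python
def ghr_response(dv, dx, v, alpha_acc, alpha_dec, m=0.0, l=1.0):
    """Simplified GHR stimulus-response car-following model:
    acceleration = alpha * v**m * dv / dx**l, where dv is leader speed
    minus follower speed and dx is the spacing. Separate sensitivities
    are applied for acceleration (dv > 0) and deceleration (dv <= 0)."""
    alpha = alpha_acc if dv > 0 else alpha_dec
    return alpha * (v ** m) * dv / (dx ** l)

# Hypothetical "automobile following truck" parameters: braking
# sensitivity larger than acceleration sensitivity.
a_brake = ghr_response(dv=-2.0, dx=30.0, v=15.0, alpha_acc=9.0, alpha_dec=15.0)
a_speedup = ghr_response(dv=2.0, dx=30.0, v=15.0, alpha_acc=9.0, alpha_dec=15.0)
print(a_brake, a_speedup)  # braking response is stronger in magnitude
```

Fitting separate (alpha, m, l) triples per vehicle pair and movement state is the kind of stratification the paper argues for.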
A constitutive mechanical model for gas hydrate bearing sediments incorporating inelastic mechanisms
Sánchez, Marcelo
2016-11-30
Gas hydrate bearing sediments (HBS) are natural soils formed in permafrost and sub-marine settings where the temperature and pressure conditions are such that gas hydrates are stable. If these conditions shift out of the hydrate stability zone, hydrates dissociate and move from the solid to the gas phase. Hydrate dissociation is accompanied by significant changes in sediment structure and strongly affects its mechanical behavior (e.g., sediment stiffness, strength, and dilatancy). The mechanical behavior of HBS is very complex and its modeling poses great challenges. This paper presents a new geomechanical model for hydrate bearing sediments. The model incorporates the concept of partition stress, plus a number of inelastic mechanisms proposed to capture the complex behavior of this type of soil. This constitutive model is especially well suited to simulating the behavior of HBS upon dissociation. The model was applied and validated against experimental data from triaxial and oedometric tests conducted on manufactured and natural specimens involving different hydrate saturations, hydrate morphologies, and confinement conditions. Particular attention was paid to modeling the HBS behavior during hydrate dissociation under loading. The model performance was highly satisfactory in all the cases studied: it captured the main features of HBS mechanical behavior and also helped to interpret the behavior of this type of sediment under different loading and hydrate conditions.
Energy Technology Data Exchange (ETDEWEB)
Sullivan, P.; Eurek, K.; Margolis, R.
2014-07-01
Because solar power is a rapidly growing component of the electricity system, robust representations of solar technologies should be included in capacity-expansion models. This is a challenge because modeling the electricity system, and in particular solar integration within that system, is a complex endeavor. This report highlights the major challenges of incorporating solar technologies into capacity-expansion models and shows examples of how specific models address those challenges. These challenges include modeling non-dispatchable technologies, determining which solar technologies to model, choosing a spatial resolution, incorporating a solar resource assessment, and accounting for solar generation variability and uncertainty.
Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.
Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A
2013-02-01
The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, based on economic theory, that accounts for this uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables, representing the situation when parameters are not precisely known; hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions based on perturbation analyses of matrix population models. Incorporating derivations from that study into our framework, we extended the model to address potential uncertainties. We then applied these results to two case studies: management of a koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on...
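The "probability of exceeding an acceptability threshold" criterion can be sketched for a normally distributed outcome using the standard normal tail; the two allocations and every number below are hypothetical, chosen only to reproduce the low- vs high-aspiration pattern described above.

```python
import math

def prob_above_threshold(mu, sigma, threshold):
    """P(outcome > threshold) for a normally distributed outcome N(mu, sigma)."""
    z = (threshold - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

# Hypothetical allocations: A is safer (diversified), B has a higher
# expected outcome but much larger uncertainty.
mu_a, sd_a = 10.0, 1.0
mu_b, sd_b = 12.0, 4.0

for threshold in (9.0, 14.0):
    pa = prob_above_threshold(mu_a, sd_a, threshold)
    pb = prob_above_threshold(mu_b, sd_b, threshold)
    print(f"threshold {threshold}: A={pa:.3f}, B={pb:.3f}")
```

For the low threshold the safe, diversified allocation wins; for the high threshold the risky, high-mean allocation wins, mirroring the low- vs high-aspiration result.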
Li, Ping
2018-04-13
It is well known that graphene demonstrates spatial dispersion properties, i.e., its conductivity is nonlocal and a function of the spectral wave number (momentum operator) q. In this paper, to account for the effects of spatial dispersion on the transmission of high-speed signals along graphene nano-ribbon (GNR) interconnects, a discontinuous Galerkin time-domain (DGTD) algorithm is proposed. The atomically thick GNR is modeled using a nonlocal transparent surface impedance boundary condition (SIBC) incorporated into the DGTD scheme. Since the conductivity is a complicated function of q (and one cannot find an analytical Fourier transform pair between q and spatial differential operators), an exact time-domain SIBC model cannot be derived. To overcome this problem, the conductivity is approximated by its Taylor series in the spectral domain under a low-q assumption. This approach permits expressing the time-domain SIBC in the form of a second-order partial differential equation (PDE) in current density and electric field intensity. To permit easy incorporation of this PDE into the DGTD algorithm, three auxiliary variables, which reduce the second-order (temporal and spatial) differential operators to first-order ones, are introduced. To account for temporal dispersion effects, the auxiliary differential equation (ADE) method is utilized to eliminate the expensive temporal convolutions. To demonstrate the applicability of the proposed scheme, numerical results, which involve characterization of spatial dispersion effects on the transfer impedance matrix of GNR interconnects, are presented.
Directory of Open Access Journals (Sweden)
Matti Stenroos
MEG/EEG source imaging is usually done using a three-shell (3-S) or a simpler head model. Such models omit cerebrospinal fluid (CSF), which strongly affects the volume currents. We present a four-compartment (4-C) boundary-element (BEM) model that incorporates the CSF and is computationally efficient and straightforward to build using freely available software. We propose a way of compensating for the omission of CSF by decreasing the skull conductivity of the 3-S model, and study the robustness of the 4-C and 3-S models to errors in skull conductivity. We generated dense boundary meshes using MRI datasets and the automated SimNIBS pipeline. Then, we built a dense 4-C reference model using Galerkin BEM, and 4-C and 3-S test models using coarser meshes and both Galerkin and collocation BEMs. We compared field topographies of cortical sources, applying various skull conductivities and fitting conductivities that minimized the relative error in the 4-C and 3-S models. When the CSF was left out of the EEG model, our compensated, unbiased approach improved the accuracy of the 3-S model considerably compared to the conventional approach, where CSF is neglected without any compensation (mean relative error 40%). The error due to the omission of CSF was of the same order in MEG and compensated EEG. EEG has, however, a large overall error due to uncertain skull conductivity. Our results show that a realistic 4-C MEG/EEG model can be implemented using standard tools and basic BEM, without excessive workload or computational burden. If the CSF is omitted, compensated skull conductivity should be used in EEG.
Analytic Hierarchy Process (AHP) in Ranking Non-Parametric Stochastic Rainfall and Streamflow Models
Directory of Open Access Journals (Sweden)
Masengo Ilunga
2015-08-01
Analytic Hierarchy Process (AHP) is used in the selection of categories of non-parametric stochastic models for hydrological data generation; its formulation is based on pairwise comparisons of models. These models or techniques were obtained from a recent study initiated by the Water Research Commission of South Africa (WRC) and were compared predominantly based on their capability to extrapolate data beyond the range of historic hydrological data. The categories of models involved in the selection process were: wavelet (A), reordering (B), K-nearest neighbor (C), kernel density (D), and bootstrap (E). In the AHP formulation, the criteria for the selection of techniques were: "ability for data to preserve historic characteristics", "ability to generate new hydrological data", "scope of applicability", "presence of negative data generated", and "user friendliness". The pairwise comparisons performed through AHP showed that the overall order of selection (ranking) of the models was D, C, A, B, and E, with weights of 27.21%, 24.30%, 22.15%, 13.89%, and 11.80%, respectively. Hence, the kernel density category received the highest preference while the bootstrap category received the lowest preference when all selection criteria are taken into consideration.
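The AHP weight computation can be sketched with the row geometric mean (logarithmic least squares) approximation to the principal-eigenvector method; the 3x3 pairwise comparison matrix below is an invented, perfectly consistent example, not the study's actual judgments.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric mean method.
    pairwise[i][j] expresses how much alternative i is preferred over j
    (reciprocal matrix: pairwise[j][i] == 1 / pairwise[i][j])."""
    n = len(pairwise)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Illustrative pairwise comparisons of three hypothetical model categories:
P = [
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 2.0],
    [0.25, 0.5, 1.0],
]
print([round(w, 3) for w in ahp_weights(P)])  # → [0.571, 0.286, 0.143]
```

The resulting weights sum to 1 and play the role of the percentage preferences reported in the abstract; for a consistent matrix like this one, the geometric mean method and the eigenvector method agree exactly.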
A MULTI-RESOLUTION FUSION MODEL INCORPORATING COLOR AND ELEVATION FOR SEMANTIC SEGMENTATION
Directory of Open Access Journals (Sweden)
W. Zhang
2017-05-01
In recent years, developments in Fully Convolutional Networks (FCNs) have led to great improvements in semantic segmentation in various applications, including fused remote sensing data. There is, however, a lack of an in-depth study inside FCN models that would lead to an understanding of the contribution of individual layers to specific classes and their sensitivity to different types of input data. In this paper, we address this problem and propose a fusion model incorporating infrared imagery and Digital Surface Models (DSMs) for semantic segmentation. The goal is to utilize heterogeneous data more accurately and effectively in a single model instead of assembling multiple models. First, the contribution and sensitivity of layers concerning the given classes are quantified by means of their recall in the FCN. The contribution of different modalities to the pixel-wise prediction is then analyzed based on visualization. Finally, an optimized scheme for the fusion of layers with color and elevation information into a single FCN model is derived based on the analysis. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset. Comprehensive evaluations demonstrate the potential of the proposed approach.
Modelling and Simulation of a Manipulator with Stable Viscoelastic Grasping Incorporating Friction
Directory of Open Access Journals (Sweden)
A. Khurshid
2016-12-01
The design, dynamics, and control of a humanoid robotic hand based on anthropological dimensions, with joint friction, is modelled, simulated, and analysed in this paper using computer-aided design and multibody dynamic simulation. A combined joint friction model is incorporated in the joints. Experimental values of the coefficient of friction of grease-lubricated sliding contacts, representative of manipulator joints, are presented. Human fingers deform to the shape of the grasped object (enveloping grasp) at the area of interaction. A mass-spring-damper model of the grasp is developed. The interaction of the viscoelastic gripper of the arm with objects is analysed using the Bond Graph modelling method. Simulations were conducted for several material parameters, and the results were then used to develop a prototype of the proposed gripper. The bond graph model is experimentally validated using the prototype. The gripper is used to successfully transport soft and fragile objects. This paper provides information on the optimisation of friction and its inclusion in both dynamic modelling and simulation to enhance mechanical efficiency.
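The mass-spring-damper grasp idea can be sketched with a linear single-contact model integrated in time; the paper's bond graph model includes nonlinear friction, so this is only a linear sketch, and the mass, stiffness, and damping values are illustrative assumptions.

```python
# Sketch: linear mass-spring-damper contact model m*x'' = -k*x - c*x',
# integrated with semi-implicit Euler. A damped return to equilibrium
# mimics viscoelastic fingertip contact relaxing after compression.

def simulate_grasp(m, k, c, x0, v0, dt, steps):
    """Return the displacement trajectory of the contact point."""
    x, v = x0, v0
    traj = []
    for _ in range(steps):
        a = (-k * x - c * v) / m      # spring + damper restoring force
        v += a * dt                    # update velocity first (stable)
        x += v * dt
        traj.append(x)
    return traj

# Fingertip contact: 0.1 kg effective mass, 500 N/m stiffness, 5 N*s/m
# damping (all illustrative), released from 5 mm compression.
traj = simulate_grasp(m=0.1, k=500.0, c=5.0, x0=0.005, v0=0.0, dt=1e-4, steps=5000)
print(f"final displacement: {traj[-1]:.2e} m")
```

Tuning k and c against measured contact behaviour is the kind of parameter study the simulations above would support before building the prototype.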
Liu, S.; Ng, G. H. C.
2017-12-01
The global plant trait database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that is stochastic in time and space in order to represent plant trait plasticity, the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY database statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf area index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential to advance our understanding of how terrestrial ecosystems adapt to variable environmental conditions.
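Generating correlated trait parameters, the first step described above, can be sketched for two traits with a 2x2 Cholesky factor; the trait names, means, spreads, and target correlation below are hypothetical stand-ins for statistics one might derive from the TRY database, and full spatiotemporal correlation is beyond this sketch.

```python
import random

def correlated_traits(mean_a, sd_a, mean_b, sd_b, rho, n, seed=1):
    """Draw n correlated (trait_a, trait_b) pairs using the 2x2 Cholesky
    factor of the correlation matrix [[1, rho], [rho, 1]]."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        a = mean_a + sd_a * z1
        b = mean_b + sd_b * (rho * z1 + (1.0 - rho * rho) ** 0.5 * z2)
        pairs.append((a, b))
    return pairs

def pearson(pairs):
    """Sample Pearson correlation of the drawn pairs."""
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs)
    va = sum((a - ma) ** 2 for a, _ in pairs)
    vb = sum((b - mb) ** 2 for _, b in pairs)
    return cov / (va * vb) ** 0.5

# Hypothetical traits (e.g. specific leaf area vs leaf N), correlation 0.6
pairs = correlated_traits(10.0, 2.0, 2.0, 0.4, 0.6, n=2000)
print(round(pearson(pairs), 2))  # close to the target 0.6
```

Each ensemble member would draw its own trait realization, and conditioning on soil moisture and LAI observations then amounts to reweighting or filtering these draws.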
Givens, J.; Padowski, J.; Malek, K.; Guzman, C.; Boll, J.; Adam, J. C.; Witinok-Huber, R.
2017-12-01
In the face of climate change and multi-scalar governance objectives, achieving resilience of food-energy-water (FEW) systems requires interdisciplinary approaches. Through coordinated modeling and management efforts, we study "Innovations in the Food-Energy-Water Nexus (INFEWS)" through a case study in the Columbia River Basin. Previous research on FEW system management and resilience includes some attention to social dynamics (e.g., economic, governance); however, more research is needed to better address social science perspectives. Decisions ultimately taken in this river basin would occur among stakeholders encompassing various institutional power structures, including multiple U.S. states, tribal lands, and sovereign nations. The social science lens draws attention to the incompatibility between the engineering definition of resilience (i.e., return to equilibrium or a singular stable state) and the ecological and social system realities, which are more explicit in the ecological interpretation of resilience (i.e., the ability of a system to move into a different, possibly more resilient state). Social science perspectives include but are not limited to differing views on resilience as normative, system persistence versus transformation, and system boundary issues. To expand understanding of resilience and objectives for complex and dynamic systems, concepts related to inequality, heterogeneity, power, agency, trust, values, culture, history, conflict, and system feedbacks must be more tightly integrated into FEW research. We identify gaps in knowledge and data, and the value and complexity of incorporating social components and processes into systems models. We posit that socio-biophysical system resilience modeling would address important complex, dynamic social relationships, including non-linear dynamics of social interactions, to offer an improved understanding of sustainable management in FEW systems. The conceptual modeling presented in our study represents
A Loudness Model for Time-Varying Sounds Incorporating Binaural Inhibition
Directory of Open Access Journals (Sweden)
Brian C. J. Moore
2016-12-01
This article describes a model of loudness for time-varying sounds that incorporates the concept of binaural inhibition, namely, that the signal applied to one ear can reduce the internal response to a signal at the other ear. For each ear, the model includes the following: a filter to allow for the effects of transfer of sound through the outer and middle ear; a short-term spectral analysis with greater frequency resolution at low than at high frequencies; calculation of an excitation pattern, representing the magnitudes of the outputs of the auditory filters as a function of center frequency; application of a compressive nonlinearity to the output of each auditory filter; and smoothing over time of the resulting instantaneous specific loudness pattern using an averaging process resembling an automatic gain control. The resulting short-term specific loudness patterns are used to calculate broadly tuned binaural inhibition functions, the amount of inhibition depending on the relative short-term specific loudness at the two ears. The inhibited specific loudness patterns are summed across frequency to give an estimate of the short-term loudness for each ear. The overall short-term loudness is calculated as the sum of the short-term loudness values for the two ears. The long-term loudness for each ear is calculated by smoothing the short-term loudness for that ear, again by a process resembling automatic gain control, and the overall loudness impression is obtained by summing the long-term loudness across ears. The predictions of the model are more accurate than those of an earlier model that did not incorporate binaural inhibition.
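The binaural inhibition stage can be illustrated with a toy rule. The attenuation function below is an invented placeholder (the published model uses broadly tuned inhibition functions with fitted parameters); it is tuned only to reproduce the qualitative result that a diotic sound is roughly 1.5 times as loud as the same sound heard monaurally.

```python
def binaural_inhibited(n_left, n_right):
    """Toy inhibition rule (not the published parameter set): each ear's
    short-term specific loudness is attenuated in proportion to the
    signal at the other ear, chosen so that a diotic sound sums to
    1.5x the loudness of the same sound presented monaurally."""
    def inhibit(own, other):
        total = own + other
        if total == 0.0:
            return 0.0
        # full weight for own-ear signal, half weight for the other ear
        return own * (own + 0.5 * other) / total
    return inhibit(n_left, n_right), inhibit(n_right, n_left)

# Monaural: (1, 0) -> summed loudness 1.0; diotic: (1, 1) -> summed 1.5
left, right = binaural_inhibited(1.0, 1.0)
```

In the full model this operation is applied per frequency channel to the specific loudness patterns before summation across frequency and across ears.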
A model to incorporate organ deformation in the evaluation of dose/volume relationship
International Nuclear Information System (INIS)
Yan, D.; Jaffray, D.; Wong, J.; Brabbins, D.; Martinez, A. A.
1997-01-01
Purpose: Measurements of internal organ motion have demonstrated that daily organ deformation exists during the course of radiation treatment. However, a model to evaluate the resultant dose delivered to a daily deformed organ remains a difficult challenge. Current methods which model such organ deformation as rigid-body motion in the dose calculation for treatment planning evaluation are incorrect and misleading. In this study, a new model for treatment planning evaluation is introduced which incorporates patient-specific information on daily organ deformation and setup variation. The model was also used to retrospectively analyze actual treatment data measured using daily CT scans for 5 patients undergoing prostate treatment. Methods and Materials: The model assumes that for each patient, the organ of interest can be measured during the first few treatment days. First, the volume of each organ is delineated from each of the daily measurements and accumulated in a 3D bit-map. A tissue occupancy distribution is then constructed, with the 50% isodensity representing the mean, or effective, organ volume. During the course of treatment, each voxel in the effective organ volume is assumed to move inside a local 3D neighborhood with a specific distribution function. The neighborhood and the distribution function are deduced from the positions and shapes of the organ in the first few measurements using a biomechanical model of a viscoelastic body. For each voxel, the local distribution function is then convolved with the spatial dose distribution. The latter also includes the variation in dose due to daily setup error. As a result, the cumulative dose to the voxel incorporates the effects of daily setup variation and organ deformation. A "variation-adjusted" dose-volume histogram, aDVH, for the effective organ volume can then be constructed for the purpose of treatment evaluation and optimization. Up to 20 daily CT scans and daily portal images for 5 patients with prostate
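The convolution step, in which each voxel's displacement distribution is convolved with the spatial dose distribution, can be sketched in one dimension. The field shape, grid spacing, and motion parameters below are invented for illustration; the study itself uses patient-specific 3D distribution functions deduced from daily CT.

```python
import numpy as np

# Illustrative 1-D version of the convolution step: the expected dose to a
# voxel is the static dose profile convolved with the voxel's displacement
# probability distribution (here a discretized Gaussian with invented SD).
x = np.linspace(-30, 30, 121)                  # position (mm), 0.5 mm grid
dose = np.where(np.abs(x) <= 10, 2.0, 0.2)     # flat 2 Gy field, 0.2 Gy scatter
sigma = 3.0                                    # assumed motion SD (mm)
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                         # normalize to a PDF
expected_dose = np.convolve(dose, kernel, mode="same")
```

The expected dose stays close to 2 Gy deep inside the field but is blurred at the field edges, which is exactly the effect that a rigid-body assumption misses for a deforming organ.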
Simulation of Forest Carbon Fluxes Using Model Incorporation and Data Assimilation
Directory of Open Access Journals (Sweden)
Min Yan
2016-07-01
This study improved the simulation of forest carbon fluxes in the Changbai Mountains with a process-based model (Biome-BGC) using model incorporation and data assimilation. Firstly, the original remote sensing-based MODIS MOD_17 GPP model (MOD_17) was optimized using refined input data and biome-specific parameters. The key ecophysiological parameters of the Biome-BGC model were determined through an Extended Fourier Amplitude Sensitivity Test (EFAST) sensitivity analysis. Then the optimized MOD_17 model was used to calibrate the Biome-BGC model by adjusting the sensitive ecophysiological parameters. Once the best match was found between the 8-day GPP estimates from the optimized MOD_17 and from the Biome-BGC for the 10 selected forest plots, the values of the sensitive ecophysiological parameters were determined. The calibrated Biome-BGC model agreed better with the eddy covariance (EC) measurements (R² = 0.87, RMSE = 1.583 gC·m−2·d−1) than the original model did (R² = 0.72, RMSE = 2.419 gC·m−2·d−1). To provide a best estimate of the true state of the model, the Ensemble Kalman Filter (EnKF) was used to assimilate five years (2003-2007) of eight-day Global LAnd Surface Satellite (GLASS) LAI products into the calibrated Biome-BGC model. The results indicated that LAI simulated by the assimilated Biome-BGC agreed well with GLASS LAI. GPP estimates obtained from the assimilated Biome-BGC were further improved and verified against EC measurements at the Changbai Mountains forest flux site (R² = 0.92, RMSE = 1.261 gC·m−2·d−1).
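The EnKF analysis step used to assimilate the GLASS LAI products can be sketched generically. This is a textbook stochastic EnKF update, not the study's actual configuration; the state dimension, observation error, and linear observation operator are assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, seed=0):
    """One stochastic-EnKF analysis step. `ensemble` is (n_members, n_state);
    H maps state to observation space; observations are perturbed per member."""
    rng = np.random.default_rng(seed)
    n, _ = ensemble.shape
    Hx = ensemble @ H.T                              # forecast observations
    X = ensemble - ensemble.mean(axis=0)
    Y = Hx - Hx.mean(axis=0)
    P_hy = X.T @ Y / (n - 1)                         # state-obs covariance
    P_yy = Y.T @ Y / (n - 1) + obs_var * np.eye(len(obs))
    K = P_hy @ np.linalg.inv(P_yy)                   # Kalman gain
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(obs)))
    return ensemble + (obs_pert - Hx) @ K.T

# Toy example: scalar LAI state, prior around 2, observation of 4
prior = np.random.default_rng(1).normal(2.0, 0.5, size=(200, 1))
post = enkf_update(prior, np.array([4.0]), 0.01, np.array([[1.0]]))
```

With a small observation error, the posterior ensemble mean is pulled close to the observation and the ensemble spread contracts, which is how the assimilated Biome-BGC run is kept close to GLASS LAI.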
Directory of Open Access Journals (Sweden)
Azusa Shimizu, MD
2014-03-01
Conclusions: These findings suggest that control-released bFGF incorporated in β-TCP can accelerate bone regeneration in the murine cranial defect model and may be promising for the clinical treatment of cranial defects.
Moreno-Amat, Elena; Rubiales, Juan Manuel; Morales-Molino, César; García-Amorena, Ignacio
2017-08-01
The increasing development of species distribution models (SDMs) using palaeodata has created new prospects to address questions of evolution, ecology and biogeography from wider perspectives. Palaeobotanical data provide information on the past distribution of taxa at a given time and place, and their incorporation into modelling has contributed to advancing the SDM field. This has made it possible, for example, to calibrate models under past climate conditions or to validate projected models calibrated on current species distributions. However, these data also bear certain shortcomings when used in SDMs that may compromise the resulting ecological outcomes and eventually lead to misleading conclusions. Palaeodata may not be equivalent to present data, but instead frequently exhibit limitations and biases regarding species representation, taxonomy and chronological control, and their inclusion in SDMs should be carefully assessed. The limitations of palaeobotanical data applied to SDM studies are infrequently discussed and often neglected in the modelling literature; thus, we argue for the more careful selection and control of these data. We encourage authors to use palaeobotanical data in their SDM studies and, to that end, we propose some recommendations to improve the robustness, reliability and significance of palaeo-SDM analyses.
International Nuclear Information System (INIS)
Hedegaard, Karsten; Balyk, Olexandr
2013-01-01
Individual compression heat pumps constitute a potentially valuable resource in supporting wind power integration due to their economic competitiveness and possibilities for flexible operation. When analysing the system benefits of flexible heat pump operation, effects on investments should be taken into account. In this study, we present a model that facilitates analysing individual heat pumps and complementing heat storages in integration with the energy system, while optimising both investments and operation. The model incorporates thermal building dynamics and covers various heat storage options: passive heat storage in the building structure via radiator heating, active heat storage in concrete floors via floor heating, and use of thermal storage tanks for space heating and hot water. It is shown that the model is well qualified for analysing possibilities and system benefits of operating heat pumps flexibly. This includes prioritising heat pump operation for hours with low marginal electricity production costs, and peak load shaving resulting in a reduced need for peak and reserve capacity investments.
Highlights:
• Model optimising heat pumps and heat storages in integration with the energy system.
• Optimisation of both energy system investments and operation.
• Heat storage in building structure and thermal storage tanks included.
• Model well qualified for analysing system benefits of flexible heat pump operation.
• Covers peak load shaving and operation prioritised for low electricity prices.
A General Framework for Incorporating Stochastic Recovery in Structural Models of Credit Risk
Directory of Open Access Journals (Sweden)
Albert Cohen
2017-12-01
In this work, we introduce a general framework for incorporating stochastic recovery into structural models. The framework extends the approach to recovery modeling developed in Cohen and Costanzino (2015, 2017) and provides a systematic way to include different recovery processes in a structural credit model. The key observation is that the partial information gap between the firm manager and the market is captured via a distortion of the probability of default. This last feature is computed by what is essentially a Girsanov transformation and reflects the untangling of the recovery process from the default probability. Our framework can be thought of as an extension of Ishizaka and Takaoka (2003) and, in the same spirit as their work, we provide several examples of the framework, including bounded recovery and a jump-to-zero model. One of the nice features of our framework is that, given prices from any one-factor structural model, we provide a systematic way to compute the corresponding prices with stochastic recovery. The framework also provides a way to analyze the correlation between Probability of Default (PD) and Loss Given Default (LGD), and the term structure of recovery rates.
Howard, A. M.; Bernardes, S.; Nibbelink, N.; Biondi, L.; Presotto, A.; Fragaszy, D. M.; Madden, M.
2012-07-01
Movement patterns of bearded capuchin monkeys (Cebus (Sapajus) libidinosus) in northeastern Brazil are likely impacted by environmental features such as elevation, vegetation density, or vegetation type. Habitat preferences of these monkeys provide insights regarding the impact of environmental features on species ecology and the degree to which they incorporate these features in movement decisions. In order to evaluate environmental features influencing movement patterns and predict areas suitable for movement, we employed a maximum entropy modelling approach, using observation points along capuchin monkey daily routes as species presence points. We combined these presence points with spatial data on important environmental features from remotely sensed data on land cover and topography. A spectral mixing analysis procedure was used to generate fraction images that represent green vegetation, shade and soil of the study area. A Landsat Thematic Mapper scene of the area of study was geometrically and atmospherically corrected and used as input in a Minimum Noise Fraction (MNF) procedure and a linear spectral unmixing approach was used to generate the fraction images. These fraction images and elevation were the environmental layer inputs for our logistic MaxEnt model of capuchin movement. Our models' predictive power (test AUC) was 0.775. Areas of high elevation (>450 m) showed low probabilities of presence, and percent green vegetation was the greatest overall contributor to model AUC. This work has implications for predicting daily movement patterns of capuchins in our field site, as suitability values from our model may relate to habitat preference and facility of movement.
International Nuclear Information System (INIS)
Sutheerawatthana, Pitch; Minato, Takayuki
2010-01-01
The response of a social group is a missing element in the formal impact assessment model. Previous discussion of the involvement of social groups in an intervention has mainly focused on the formation of the intervention. This article discusses the involvement of social groups in a different way. A descriptive model is proposed by incorporating a social group's response into the concept of second- and higher-order effects. The model is developed based on a cause-effect relationship through the observation of phenomena in case studies. The model clarifies the process by which social groups interact with a lower-order effect and then generate a higher-order effect in an iterative manner. This study classifies social groups' responses into three forms (opposing, modifying, and advantage-taking actions) and places them in six pathways. The model is expected to be used as an analytical tool for investigating and identifying impacts in the planning stage, and as a framework for monitoring social groups' responses during the implementation stage of a policy, plan, program, or project (PPPP).
Matthews, S.; Lovell, M.; Davies, S. J.; Pritchard, T.; Sirju, C.; Abdelkarim, A.
2012-12-01
Heterolithic or 'shaly' sandstone reservoirs constitute a significant proportion of hydrocarbon resources. Petroacoustic models (a combination of petrophysics and rock physics) enhance the ability to extract reservoir properties from seismic data, providing a connection between seismic and fine-scale rock properties. By incorporating sedimentological observations, these models can be better constrained and improved. Petroacoustic modelling is complicated by the unpredictable effects of clay minerals and clay-sized particles on geophysical properties. Such effects are responsible for erroneous results when models developed for "clean" reservoirs - such as Gassmann's equation (Gassmann, 1951) - are applied to heterolithic sandstone reservoirs. Gassmann's equation is arguably the most popular petroacoustic modelling technique in the hydrocarbon industry and is used to model the elastic effects of changing reservoir fluid saturations. Successful implementation of Gassmann's equation requires well-constrained drained rock frame properties, which in heterolithic sandstones are heavily influenced by reservoir sedimentology, particularly clay distribution. The prevalent approach to categorising clay distribution is based on the Thomas-Stieber model (Thomas & Stieber, 1975); this approach is inconsistent with current understanding of 'shaly sand' sedimentology and omits properties such as sorting and grain size. The novel approach presented here demonstrates that characterising reservoir sedimentology constitutes an important modelling phase. As well as incorporating sedimentological constraints, this approach also aims to improve drained frame moduli estimates through more careful consideration of Gassmann's model assumptions and limitations. A key assumption of Gassmann's equation is a pore space in total communication with movable fluids. This assumption is often violated by conventional applications in heterolithic sandstone reservoirs where effective porosity, which
Turner, Sean; Galelli, Stefano; Wilcox, Karen
2015-04-01
Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating
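The regime-switching inflow description above can be sketched as a Markov chain over climate states driving regime-specific AR(1) inflows. All numbers below (transition probabilities, regime means, AR parameters) are invented for illustration and are not fitted to the Australian case study.

```python
import numpy as np

# Hidden climate regime as a Markov chain with three states
# (0 = "normal", 1 = "wet", 2 = "dry"); rows of P are transition probabilities.
P = np.array([[0.8, 0.1, 0.1],
              [0.3, 0.6, 0.1],
              [0.3, 0.1, 0.6]])
mu = np.array([100.0, 150.0, 60.0])   # regime-specific mean inflow (GL/month)
phi, sigma = 0.5, 10.0                # AR(1) persistence and noise SD

def simulate(n_steps, seed=0):
    """Simulate the regime chain and the regime-dependent AR(1) inflow."""
    rng = np.random.default_rng(seed)
    state, q = 0, mu[0]
    states, flows = [], []
    for _ in range(n_steps):
        state = rng.choice(3, p=P[state])
        q = mu[state] + phi * (q - mu[state]) + rng.normal(0.0, sigma)
        states.append(state)
        flows.append(q)
    return np.array(states), np.array(flows)

states, flows = simulate(5000)
```

In the SDP formulation, each regime's AR(1) model supplies the inflow transition probabilities, so the release policy can be conditioned on storage, antecedent inflow, and the inferred climate state.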
International Nuclear Information System (INIS)
Huang, Zhibin; Mayr, Nina A.; Lo, Simon S.; Wang, Jian Z.; Jia Guang; Yuh, William T. C.; Johnke, Roberta
2012-01-01
Purpose: It has been conventionally assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggests that this may not be the case. Rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown in repair would suggest that the exponential repair pattern would not necessarily predict the repair process accurately. As a result, the purpose of this study was to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. Methods: The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage, with Tr as the repair half-time. When a reciprocal pattern of the repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. The published animal data were adopted to test the reciprocal formulas. Results: A generalized LQ model to describe the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced χ²_min of 2.0 vs 4.3, p = 0.11 vs 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, Tr = 3.2-3.9 h for rat feet skin, and α/β = 0.9 Gy, Tr = 1.1 h for rat spinal cord. Conclusions: These results suggest that the generalized LQ model incorporating the reciprocal-time pattern of sublethal damage repair fits the data better than the exponential repair model. These formulas can be used to analyze experimental and clinical data where a slowing-down repair process appears during the course of radiation therapy.
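The role of the repair kernel can be illustrated with a two-fraction special case. The closed form of G derived in the study covers arbitrary schemes; the sketch below uses only the standard split-dose incomplete-repair factor h, comparing an exponential kernel with an assumed reciprocal-time kernel h = 1/(1 + dt/Tr), and the parameter values in the test are invented.

```python
import math

def surviving_fraction(D, dt, alpha, beta, Tr, repair="exponential"):
    """Surviving fraction for two equal fractions of D/2 separated by dt
    hours, with incomplete-repair factor h:
        S = exp(-alpha*D - beta*D^2 * (1 + h) / 2)
    Exponential repair: h = exp(-ln2 * dt / Tr) (Tr = repair half-time).
    Reciprocal repair (illustrative form): h = 1 / (1 + dt / Tr)."""
    if repair == "exponential":
        h = math.exp(-math.log(2.0) * dt / Tr)
    else:
        h = 1.0 / (1.0 + dt / Tr)
    return math.exp(-alpha * D - beta * D * D * (1.0 + h) / 2.0)
```

At dt = 0 both kernels give h = 1 and recover the single-dose LQ survival exp(-αD - βD²); at long intervals the reciprocal kernel decays much more slowly, leaving more residual sublethal damage, which is the slowdown in repair the study models.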
Wu, Xifeng; Ye, Yuanqing; Barcenas, Carlos H; Chow, Wong-Ho; Meng, Qing H; Chavez-MacGregor, Mariana; Hildebrandt, Michelle A T; Zhao, Hua; Gu, Xiangjun; Deng, Yang; Wagar, Elizabeth; Esteva, Francisco J; Tripathy, Debu; Hortobagyi, Gabriel N
2017-07-01
In this study, we developed integrative, personalized prognostic models for breast cancer recurrence and overall survival (OS) that consider receptor subtypes, epidemiological data, quality of life (QoL), and treatment. Data from a total of 15 314 women with stage I to III invasive primary breast cancer treated at The University of Texas MD Anderson Cancer Center between 1997 and 2012 were used to generate prognostic models by Cox regression analysis in a two-stage study. Model performance was assessed by calculating the area under the curve (AUC) and by calibration analysis, and compared with the Nottingham Prognostic Index (NPI) and PREDICT. Host characteristics were assessed for 10 809 women as the discovery population (median follow-up = 6.09 years, 1144 recurrences and 1627 deaths) and 4505 women as the validation population (median follow-up = 7.95 years, 684 recurrences and 1095 deaths). In addition to the known clinical/pathological variables, the model for recurrence included alcohol consumption, while the model for OS included smoking status and physical component summary score. The AUCs for recurrence and OS were 0.813 and 0.810 in the discovery population and 0.807 and 0.803 in the validation population, respectively, compared with AUCs of 0.761 and 0.753 in discovery and 0.777 and 0.751 in validation for NPI. Our model further showed better calibration compared with PREDICT. We also developed race-specific and receptor subtype-specific models with comparable AUCs. Racial disparity was evident in the distributions of many risk factors and in the clinical presentation of the disease. Our integrative prognostic models for breast cancer exhibit high discriminatory accuracy and excellent calibration and are the first to incorporate receptor subtype and epidemiological and QoL data.
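The discriminatory accuracy reported above (an AUC for censored survival data) is conventionally a concordance measure. Below is a minimal sketch of Harrell's C-index, the usual AUC analogue for Cox models; it is illustrative and not the study's actual evaluation code.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: among comparable pairs (where the earlier time is an
    observed event, event flag = 1), count pairs in which the earlier
    failure has the higher risk score; ties in score count half."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# Perfect ranking: earlier events carry higher predicted risk
c = concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1])  # -> 1.0
```

A C-index of 0.5 corresponds to random ranking, so values around 0.81, as reported, indicate strong discrimination.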
Directory of Open Access Journals (Sweden)
Stefan Fürtinger
2014-11-01
Assessing brain activity during complex voluntary motor behaviors that require the recruitment of multiple neural sites is a field of active research. Our current knowledge is primarily based on human brain imaging studies that have clear limitations in terms of temporal and spatial resolution. We developed a physiologically informed non-linear multi-compartment stochastic neural model to simulate functional brain activity coupled with neurotransmitter release during complex voluntary behavior, such as speech production. Due to its state-dependent modulation of neural firing, dopaminergic neurotransmission plays a key role in the organization of functional brain circuits controlling speech and language and thus has been incorporated in our neural population model. A rigorous mathematical proof establishing existence and uniqueness of solutions to the proposed model as well as a computationally efficient strategy to numerically approximate these solutions are presented. Simulated brain activity during the resting state and sentence production was analyzed using functional network connectivity, and graph theoretical techniques were employed to highlight differences between the two conditions. We demonstrate that our model successfully reproduces characteristic changes seen in empirical data between the resting state and speech production, and dopaminergic neurotransmission evokes pronounced changes in modeled functional connectivity by acting on the underlying biological stochastic neural model. Specifically, model and data networks in both speech and rest conditions share task-specific network features: both the simulated and empirical functional connectivity networks show an increase in nodal influence and segregation in speech over the resting state. These commonalities confirm that dopamine is a key neuromodulator of the functional connectome of speech control. Based on reproducible characteristic aspects of empirical data, we suggest a number
Incorporation of caffeine into a quantitative model of fatigue and sleep.
Puckeridge, M; Fulcher, B D; Phillips, A J K; Robinson, P A
2011-03-21
A recent physiologically based model of human sleep is extended to incorporate the effects of caffeine on sleep-wake timing and fatigue. The model includes the sleep-active neurons of the hypothalamic ventrolateral preoptic area (VLPO), the wake-active monoaminergic brainstem populations (MA), their interactions with cholinergic/orexinergic (ACh/Orx) input to MA, and circadian and homeostatic drives. We model two effects of caffeine on the brain due to competitive antagonism of adenosine (Ad): (i) a reduction in the homeostatic drive and (ii) an increase in cholinergic activity. By comparing the model output to experimental data, constraints are determined on the parameters that describe the action of caffeine on the brain. In accord with experiment, the ranges of these parameters imply significant variability in caffeine sensitivity between individuals, with caffeine's effectiveness in reducing fatigue being highly dependent on an individual's tolerance, and past caffeine and sleep history. Although there are wide individual differences in caffeine sensitivity and thus in parameter values, once the model is calibrated for an individual it can be used to make quantitative predictions for that individual. A number of applications of the model are examined, using exemplar parameter values, including: (i) quantitative estimation of the sleep loss and the delay to sleep onset after taking caffeine for various doses and times; (ii) an analysis of the system's stable states showing that the wake state during sleep deprivation is stabilized after taking caffeine; and (iii) successfully comparing model output to experimental values of subjective fatigue reported in a total sleep deprivation study examining the reduction of fatigue with caffeine. This model provides a framework for quantitatively assessing optimal strategies for using caffeine, on an individual basis, to maintain performance during sleep deprivation.
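Effect (i), the caffeine-induced reduction of the homeostatic drive, can be caricatured with first-order pharmacokinetics. The functional form and the sensitivity constant k below are invented placeholders, not the calibrated parameters of the published model.

```python
import math

def effective_homeostatic_drive(H, dose_mg, hours_since_dose,
                                half_life_h=5.0, k=0.002):
    """Toy version of effect (i): caffeine competitively reduces the
    adenosine-mediated homeostatic drive H in proportion to plasma
    caffeine, which is eliminated first-order with a ~5 h half-life.
    k (per mg) is an invented sensitivity constant."""
    plasma = dose_mg * math.exp(-math.log(2.0) * hours_since_dose / half_life_h)
    return H / (1.0 + k * plasma)
```

Shortly after a dose the effective drive is suppressed, delaying sleep onset; as plasma caffeine is eliminated, the drive recovers toward its caffeine-free value, consistent with the qualitative behavior the model is built to capture.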
Directory of Open Access Journals (Sweden)
Zhuo Su
2014-04-01
Phylogenetic inference can be improved by the development and use of better models for inference given the data available, or by gathering more appropriate data given the potential inferences to be made. Numerous studies have demonstrated the crucial importance of selecting a best-fit model for conducting accurate phylogenetic inference given a data set, explicitly revealing how model choice affects the results of phylogenetic inferences. However, the importance of specifying a correct model of evolution for predictions of the best data to be gathered has never been examined. Here, we extend analyses of phylogenetic signal and noise that predict the potential to resolve nodes in a phylogeny to incorporate all time-reversible Markov models of nucleotide substitution. Extending previous results on the canonical four-taxon tree, our theory yields an analytical method that uses estimates of the rates of evolution and the model of molecular evolution to predict the distribution of signal, noise, and polytomy. We applied our methods to a study of 29 taxa of the yeast genus Candida and allied members to predict the power of five markers, COX2, ACT1, RPB1, RPB2, and D1/D2 LSU, to resolve a poorly supported backbone node corresponding to a clade of haploid Candida species, as well as nineteen other nodes that are reasonably short and at least moderately deep in the consensus tree. The use of simple, unrealistic models that did not take into account transition/transversion rate differences led to some discrepancies in predictions, but overall our results demonstrate that predictions of signal and noise in phylogenetics are fairly robust to model specification.
Effect of 5-fluorouracil incorporation into pre-mRNA on RNA splicing in vitro
Energy Technology Data Exchange (ETDEWEB)
Doong, S.L.
1988-01-01
5-Fluorouracil (FUra) has been proven useful in the chemotherapy of a number of cancers. The mechanism underlying its cytotoxicity is controversial. We are interested in studying the FUra effect on the fidelity of the pre-mRNA splicing process. (³²P)-labeled human β-globin pre-mRNA containing the first two exons and the first intervening sequence was synthesized in the presence of UTP, FUTP, or both. The appearance of a new minor spliced product was dependent on both the pH of the splicing reaction and the extent of FUra incorporation into pre-mRNA. At least 84% substitution of U by FUra was required to observe the presence of the abnormal splicing pathway. The new spliced product was sequenced and found to contain an additional 20 bases derived from the 3′ end of the intervening sequence. Nearest neighbor analysis, RNase T₁ fingerprinting, and short primer extension experiments were carried out to assess the extent of transcription infidelity induced by FUra. Site-directed mutagenesis was performed to determine the sequence(s) of FUra substitution which contribute to missplicing in vitro.
Gustafsson, A.; Wörman, A.
2009-04-01
According to recent studies, the volumetric error of the predicted size of the spring flood in Sweden can be as large as 20%. A significant part of this error originates from simplifications in the spatial and hydrodynamic description of watercourse networks, as well as from statistical problems in giving proper weight to extreme flows. Possible ways to improve current hydrological modelling practice are to make models more adapted to varying flow conditions and to increase the coupling between model parameters and physical catchment characteristics. This study formulates a methodology, based on hydrodynamic/hydraulic theory, to investigate how river network characteristics vary with flow stage and how to transfer this information to compartmental hydrological models such as the HBV/HYPE models. This is particularly important during extreme flows, when a significant portion of the water flows outside the normal stream channels. The aim is to combine knowledge about the hydrodynamics and hydro-morphology of watercourse networks to improve the predictions of peak flows. HYPE is a semi-distributed conceptual compartmental hydrological model currently being developed at SMHI as a successor to the HBV model. HYPE is intended to be better adapted to varying flow conditions through the dynamical response functions derived by the methodology described here. The distribution of residence times within the watercourse network, and how it depends on flow stage, is analysed. This information is then incorporated into the response functions of the HYPE model, i.e. the compartmental model receives a dynamic transformation function relating river discharge to storativity within the sub-catchment. This response function hence reflects the topologic and hydromorphologic characteristics of the watercourse network as well as flow stage. Seven subcatchments in Rönne River basin (1900 km2) are studied to show how this approach can improve the prediction of
Incorporation of defects into the central atoms model of a metallic glass
International Nuclear Information System (INIS)
Lass, Eric A.; Zhu Aiwu; Shiflet, G.J.; Joseph Poon, S.
2011-01-01
The central atoms model (CAM) of a metallic glass is extended to incorporate thermodynamically stable defects, similar to vacancies in a crystalline solid, within the amorphous structure. A bond deficiency (BD), which is the proposed defect present in all metallic glasses, is introduced into the CAM equations. Like vacancies in a crystalline solid, BDs are thermodynamically stable entities because of the increase in entropy associated with their creation, and there is an equilibrium concentration present in the glassy phase. When applied to Cu-Zr and Ni-Zr binary metallic glasses, the concentration of thermally induced BDs surrounding Zr atoms reaches a relatively constant value at the glass transition temperature, regardless of composition within a given glass system. Using this 'critical' defect concentration, the predicted temperatures at which the glass transition is expected to occur are in good agreement with the experimentally determined glass transition temperatures for both alloy systems.
Hawkins, Roland B
2018-01-01
An expression for the surviving fraction of a replicating population of cells exposed to permanently incorporated radionuclide is derived from the microdosimetric-kinetic model. It includes dependency on total implant dose, linear energy transfer (LET), decay rate of the radionuclide, the repair rate of potentially lethal lesions in DNA and the volume doubling time of the target population. This is used to obtain an expression for the biologically effective dose (BEDα/β) based on the minimum survival achieved by the implant that is equivalent to, and can be compared and combined with, the BEDα/β calculated for a fractionated course of radiation treatment. Approximate relationships are presented that are useful in the calculation of BEDα/β for alpha- or beta-emitting radionuclides with half-life significantly greater than, or nearly equal to, the approximately 1-h repair half-life of radiation-induced potentially lethal lesions.
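For orientation, the widely used Dale expression for the BED of a permanent implant (first-order decay, first-order repair, proliferation neglected) can be written down directly. This is the standard linear-quadratic permanent-implant formula, not necessarily the paper's microdosimetric-kinetic expression:

```python
import math

def bed_permanent_implant(D, alpha_beta, t_half_decay_h, t_half_repair_h=1.0):
    """Dale-style BED for a permanent implant delivering total dose D (Gy):
    BED = D * [1 + (lam / (mu + lam)) * D / (alpha/beta)],
    with lam the source decay constant and mu the repair constant
    (default repair half-life ~1 h, as in the abstract). Proliferation
    during the implant is ignored in this sketch."""
    lam = math.log(2) / t_half_decay_h
    mu = math.log(2) / t_half_repair_h
    return D * (1.0 + (lam / (mu + lam)) * D / alpha_beta)
```

As expected, a very long-lived source (dose-rate effectively constant and low) gives BED approaching the physical dose, while a half-life comparable to the repair half-life raises the BED sharply.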
Žukovič, M.; Semjan, M.
2018-04-01
Magnetic and magnetocaloric properties of geometrically frustrated antiferromagnetic Ising (IA) and ferromagnetic spin ice (SI) models on a nanocluster with a 'Star of David' topology, including next-nearest-neighbor (NNN) interactions, are studied by an exact enumeration. In an external field applied in characteristic directions of the respective models, depending on the NNN interaction sign and magnitude, the ground state magnetization of the IA model is found to display up to three intermediate plateaus at fractional values of the saturation magnetization, while the SI model shows only one zero-magnetization plateau and only for the antiferromagnetic NNN coupling. A giant magnetocaloric effect is revealed in the IA model with the NNN interaction either absent or equal to the nearest-neighbor coupling. The latter is characterized by abrupt isothermal entropy changes at low temperatures and infinitely fast adiabatic temperature variations for specific entropy values in the processes when the magnetic field either vanishes or tends to the critical values related to the magnetization jumps.
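The exact-enumeration approach used above scales as 2^N and is straightforward for nanoclusters. As a minimal sketch (a frustrated antiferromagnetic triangle rather than the paper's 'Star of David' bond list, which is not reproduced here), enumerating all configurations already exposes a fractional ground-state magnetization plateau before saturation:

```python
from itertools import product

def ground_state_magnetization(J, h, bonds, n):
    """Exact enumeration of an Ising cluster with Hamiltonian
    H = J * sum_<ij> s_i s_j - h * sum_i s_i  (J > 0 antiferromagnetic).
    Returns the magnetization per spin of the lowest-energy configuration."""
    best_e, best_m = None, None
    for spins in product((-1, 1), repeat=n):
        e = J * sum(spins[i] * spins[j] for i, j in bonds) - h * sum(spins)
        if best_e is None or e < best_e:
            best_e, best_m = e, sum(spins) / n
    return best_m

# Frustrated antiferromagnetic triangle: a 1/3 plateau for 0 < h < 2J,
# then saturation at m = 1.
triangle = [(0, 1), (1, 2), (0, 2)]
```

The same enumeration over the 12-site cluster, with the appropriate bond list and field direction, yields the plateau structure and the entropy changes behind the magnetocaloric effect described above.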
Ma, Songling; Hwang, Sungbo; Lee, Sehan; Acree, William E; No, Kyoung Tai
2018-03-30
To describe the physically realistic solvation free energy surface of a molecule in a solvent, a generalized version of the solvation free energy density (G-SFED) calculation method has been developed. In the G-SFED model, the contribution from the hydrogen bond (HB) between a solute and a solvent to the solvation free energy was calculated as the product of the acidity of the donor and the basicity of the acceptor of an HB pair. The acidity and basicity parameters of a solute were derived using the summation of acidities and basicities of the respective acidic and basic functional groups of the solute, and that of the solvent was experimentally determined. Although the contribution of HBs to the solvation free energy could be evenly distributed to grid points on the surface of a molecule, the G-SFED model was still inadequate to describe the angle dependency of the HB of a solute with a polarizable continuum solvent. To overcome this shortcoming of the G-SFED model, the contribution of HBs was formulated using the geometric parameters of the grid points described in the HB coordinate system of the solute. We propose an HB angle dependency incorporated into the G-SFED model, i.e., the G-SFED-HB model, where the angular-dependent acidity and basicity densities are defined and parametrized with experimental data. The G-SFED-HB model was then applied to calculate the solvation free energies of organic molecules in water, various alcohols and ethers, and the log P values of diverse organic molecules, including peptides and a protein. Both the G-SFED model and the G-SFED-HB model reproduced the experimental solvation free energies with similar accuracy, whereas the distributions of the SFED on the molecular surface calculated by the G-SFED and G-SFED-HB models were quite different, especially for molecules having HB donors or acceptors. Since the angle dependency of HBs was included in the G-SFED-HB model, the SFED distribution of the G-SFED-HB model is well described
T.A. Arentze (Theo); B.G.C. Dellaert (Benedict); C.G. Chorus (Casper)
2013-01-01
We introduce an extension of the discrete choice model to take into account individuals' mental representation of a choice problem. We argue that, especially in daily activity and travel choices, the activated needs of an individual have an influence on the benefits he or she pursues in
Barnard, P. A.; Arellano, A. F.
2011-12-01
Data assimilation has emerged as an integral part of numerical weather prediction (NWP). More recently, atmospheric chemistry processes have been incorporated into NWP models to provide forecasts and guidance on air quality. There is, however, a unique opportunity within this coupled system to investigate the additional benefit of constraining model dynamics and physics due to chemistry. Several studies have reported the strong interaction between chemistry and meteorology through radiation, transport, emission, and cloud processes. To examine its importance to NWP, we conduct an ensemble-based sensitivity analysis of meteorological fields to the chemical and aerosol fields within the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) and the Data Assimilation Research Testbed (DART) framework. In particular, we examine the sensitivity of the forecasts of surface temperature and related dynamical fields to the initial conditions of dust and aerosol concentrations in the model over the continental United States within the summer 2008 time period. We use an ensemble of meteorological and chemical/aerosol predictions within WRF-Chem/DART to calculate the sensitivities. This approach is similar to recent ensemble-based sensitivity studies in NWP. The use of an ensemble prediction is appealing because the analysis does not require the adjoint of the model, which to a certain extent becomes a limitation due to the rapidly evolving models and the increasing number of different observations. Here, we introduce this approach as applied to atmospheric chemistry. We also show our initial results of the calculated sensitivities from joint assimilation experiments using a combination of conventional meteorological observations from the National Centers for Environmental Prediction, retrievals of aerosol optical depth from NASA's Moderate Resolution Imaging Spectroradiometer, and retrievals of carbon monoxide from NASA's Measurements of Pollution in the
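The adjoint-free, ensemble-based sensitivity mentioned above reduces, in its univariate form, to a regression of the forecast metric on an initial-condition variable across ensemble members. A minimal sketch (generic statistic, not the WRF-Chem/DART implementation):

```python
import numpy as np

def ensemble_sensitivity(forecast_metric, initial_field):
    """Univariate ensemble-based sensitivity: the linear-regression slope of a
    scalar forecast metric J onto an initial-condition variable x across
    ensemble members, s = cov(J, x) / var(x). No adjoint model is needed."""
    J = np.asarray(forecast_metric, dtype=float)
    x = np.asarray(initial_field, dtype=float)
    cov = np.mean((J - J.mean()) * (x - x.mean()))
    return cov / np.var(x)
```

Applied member-by-member to, e.g., forecast surface temperature versus initial dust loading, this slope field is the quantity the study maps over the continental United States.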
Taylor, Andrew T; Papeş, Monica; Long, James M
2018-02-01
Fluvial fishes face increased imperilment from anthropogenic activities, but the specific factors contributing most to range declines are often poorly understood. For example, the range of the fluvial-specialist shoal bass (Micropterus cataractae) continues to decrease, yet how perceived threats have contributed to range loss is largely unknown. We used species distribution models to determine which factors contributed most to shoal bass range loss. We estimated a potential distribution based on natural abiotic factors and a series of currently occupied distributions that incorporated variables characterizing land cover, non-native species, and river fragmentation intensity (no fragmentation, dams only, and dams and large impoundments). We allowed interspecific relationships between non-native congeners and shoal bass to vary across fragmentation intensities. Results from the potential distribution model estimated shoal bass presence throughout much of their native basin, whereas models of currently occupied distribution showed that range loss increased as fragmentation intensified. Response curves from models of currently occupied distribution indicated a potential interaction between fragmentation intensity and the relationship between shoal bass and non-native congeners, wherein non-natives may be favored at the highest fragmentation intensity. Response curves also suggested that >100 km of interconnected, free-flowing stream fragments were necessary to support shoal bass presence. Model evaluation, including an independent validation, suggested that models had favorable predictive and discriminative abilities. Similar approaches that use readily available, diverse, geospatial data sets may deliver insights into the biology and conservation needs of other fluvial species facing similar threats. © 2017 Society for Conservation Biology.
Energy Technology Data Exchange (ETDEWEB)
Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua; Alfonsi, Andrea; Askin Guler; Tunc Aldemir
2014-11-01
Passive systems, structures and components (SSCs) degrade over their operational life, and this degradation may cause a reduction in the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs within the traditional PRA methodology, [1] considers physics-based models that account for the operating conditions in the plant; however, it does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models, and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as to RELAP5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving large numbers of components in a computationally feasible manner, where the sequencing of events is conditioned on the physical conditions predicted in a simulation
Energy Technology Data Exchange (ETDEWEB)
Ma, Jie; Wang, Bo [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China); Zhao, Shunli [Research Institute, Baoshan Iron & Steel Co., Ltd, Shanghai 201900 (China); Wu, Guangxin [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China); Zhang, Jieyu, E-mail: zjy6162@staff.shu.edu.cn [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China); Yang, Zhiliang [State Key Laboratory of Advanced Special Steel, Shanghai University, Shanghai 200072 (China); Shanghai Engineering Technology Research Center of Special Casting, Shanghai 201605 (China)
2016-05-25
We have extended the dendritic growth model first proposed by Boettinger, Coriell and Trivedi (BCT); the extended model (here termed EBCT) is intended for microstructure simulations of rapidly solidified non-dilute alloys. The temperature-dependent distribution coefficient, obtained from calculations of phase equilibria, and the continuous growth model (CGM) were adopted in the present EBCT model to describe solute trapping behavior. The temperature dependence of the physical properties, which was not considered in previous dendritic growth models, was also included in the present EBCT model. These extensions allow the present EBCT model to be used for microstructure simulations of non-dilute alloys. The comparison of the present EBCT model with the BCT model shows that the treatment of the distribution coefficient and the physical properties is necessary for microstructure simulations, especially for small particles with high undercoolings. Finally, the EBCT model was incorporated into the cellular automaton-finite element (CAFE) model to simulate microstructures of gas-atomized ASP30 high-speed steel particles, which were then compared with experimental results. Both the simulated and experimental results reveal that a columnar dendritic microstructure preferentially forms in small particles and an equiaxed microstructure forms otherwise. The applications of the present EBCT model provide a convenient way to predict the microstructure of non-dilute alloys. - Highlights: • A dendritic growth model was developed considering a non-equilibrium distribution coefficient. • The physical properties were treated with temperature dependence in the extended model. • The extended model can be applied to non-dilute alloys, and the extensions are necessary for small particles. • The microstructure of ASP30 steel was investigated using the present model and verified by experiment.
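The continuous growth model adopted above for solute trapping has a compact closed form (the Aziz velocity-dependent partition coefficient); the sketch below uses that standard formula with illustrative arguments, and does not reproduce the paper's temperature-dependent equilibrium coefficient from phase-equilibrium calculations:

```python
def cgm_partition_coefficient(v, k_e, v_d):
    """Continuous growth model (Aziz) velocity-dependent solute partition
    coefficient: k(v) = (k_e + v/v_d) / (1 + v/v_d). It recovers the
    equilibrium value k_e at slow growth and tends to 1 (complete solute
    trapping) when the interface velocity v greatly exceeds the diffusive
    speed v_d."""
    return (k_e + v / v_d) / (1.0 + v / v_d)
```

This is the mechanism by which high undercoolings in small, rapidly solidifying particles drive the distribution coefficient away from its equilibrium value.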
Incorporation of a Wind Generator Model into a Dynamic Power Flow Analysis
Directory of Open Access Journals (Sweden)
Angeles-Camacho C.
2011-07-01
Wind energy is nowadays one of the most cost-effective and practical options for electric generation from renewable resources. However, increased penetration of wind generation causes power networks to become more dependent on, and vulnerable to, the varying wind speed. Modeling is a tool which can provide valuable information about the interaction between wind farms and the power network to which they are connected. This paper develops a realistic characterization of a wind generator. The wind generator model is incorporated into an algorithm to investigate its contribution to the stability of the power network in the time domain. The tool obtained is termed dynamic power flow. The wind generator model takes into account the wind speed and the reactive power consumption of induction generators. Dynamic power flow analysis is carried out using real wind data at 10-minute intervals collected from one meteorological station. The generation injected at one point into the network provides active power locally and is found to reduce global power losses. However, the power supplied is time-varying and causes fluctuations in voltage magnitude and power flows in transmission lines.
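Converting the 10-minute wind-speed series into active-power injections typically goes through a turbine power curve. The sketch below uses a generic idealized curve (cubic between cut-in and rated speed, constant at rated, zero outside the operating band); the particular speeds and rating are illustrative assumptions, not the characterization developed in the paper:

```python
def wind_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2.0):
    """Idealized turbine power curve (MW): zero below cut-in and above
    cut-out, cubic growth between cut-in and rated wind speed, and constant
    rated output in between. All speeds in m/s; values are illustrative."""
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * ((v - v_cut_in) / (v_rated - v_cut_in)) ** 3
```

Feeding each 10-minute wind sample through such a curve produces the time-varying injection whose voltage and flow fluctuations the dynamic power flow tracks.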
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
Directory of Open Access Journals (Sweden)
Stuart Bartlett
2017-08-01
The lattice Boltzmann method is an efficient computational fluid dynamics technique that can accurately model a broad range of complex systems. As well as single-phase fluids, it can simulate thermohydrodynamic systems and passive scalar advection. In recent years, it has also gained attention as a means of simulating chemical phenomena, as interest in self-organization processes has increased. This paper presents a widely-used and versatile lattice Boltzmann model that can simultaneously incorporate fluid dynamics, heat transfer, buoyancy-driven convection, passive scalar advection, chemical reactions and enthalpy changes. All of these effects interact in a physically accurate framework that is simple to code and readily parallelizable. As well as a complete description of the model equations, several example systems are presented in order to demonstrate the accuracy and versatility of the method. New simulations, which analyzed the effect of a reversible reaction on the transport properties of a convecting fluid, are also described in detail. This extra chemical degree of freedom was utilized by the system to augment its net heat flux. The numerical method outlined in this paper can be readily deployed for a vast range of complex flow problems, spanning a variety of scientific disciplines.
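The collide-and-stream structure common to all lattice Boltzmann schemes can be shown in a few lines. The sketch below is a deliberately minimal D1Q2 BGK model for pure scalar diffusion on a periodic 1-D lattice, far simpler than the thermal/reactive D2Q9 model described above, but with the same two-step update:

```python
import numpy as np

def lbm_diffuse(rho0, omega=1.0, steps=100):
    """Minimal D1Q2 lattice Boltzmann sketch for diffusion of a scalar
    density on a periodic 1-D lattice: BGK collision (relaxation toward the
    local equilibrium rho/2 per direction) followed by streaming. The mass
    sum(rho) is conserved exactly by both steps."""
    rho0 = np.asarray(rho0, dtype=float)
    f_r = 0.5 * rho0.copy()   # right-moving population
    f_l = 0.5 * rho0.copy()   # left-moving population
    for _ in range(steps):
        rho = f_r + f_l
        # BGK collision: relax toward local equilibrium
        f_r += omega * (0.5 * rho - f_r)
        f_l += omega * (0.5 * rho - f_l)
        # Streaming on a periodic lattice
        f_r = np.roll(f_r, 1)
        f_l = np.roll(f_l, -1)
    return f_r + f_l
```

Adding further distribution sets on the same lattice (for temperature, passive scalars, or species with reaction source terms) is what turns this skeleton into the coupled thermohydrodynamic-reactive model of the paper.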
Projecting cancer incidence using age-period-cohort models incorporating restricted cubic splines.
Rutherford, Mark J; Thompson, John R; Lambert, Paul C
2012-11-05
Age-period-cohort models provide a useful method for modeling incidence and mortality rates. There is great interest in estimating the rates of disease at given future time-points in order that plans can be made for the provision of the required future services. In the setting of using age-period-cohort models incorporating restricted cubic splines, a new technique for projecting incidence is proposed. The new technique projects the period and cohort terms linearly from 10 years within the range of the available data in order to give projections that are based on recent trends. The method is validated via a comparison with existing methods in the setting of Finnish cancer registry data. The reasons for the improvements seen for the newly proposed method are twofold. Firstly, improvements are seen due to the finer splitting of the timescale to give a more continuous estimate of the incidence rate. Secondly, the new method uses more recent trends to dictate the future projections than previously proposed methods.
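The restricted-cubic-spline basis underlying such age-period-cohort models has a standard closed form (the Durrleman-Simon construction): cubic between knots, constrained to be linear beyond the boundary knots, which is exactly the property that makes linear projection of the period and cohort terms natural. A minimal sketch of that basis (not the authors' full projection procedure):

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Durrleman-Simon form): returns the
    K-2 nonlinear basis columns for knots k_1..k_K. Together with an
    intercept and a linear term, the fitted curve is cubic between knots
    and linear beyond the boundary knots."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(knots, dtype=float)
    K = len(k)
    pos3 = lambda u: np.maximum(u, 0.0) ** 3
    cols = []
    for j in range(K - 2):
        lam = (k[-1] - k[j]) / (k[-1] - k[-2])
        cols.append(pos3(x - k[j]) - lam * pos3(x - k[-2])
                    + (lam - 1.0) * pos3(x - k[-1]))
    return np.column_stack(cols)
```

Because every basis column is exactly linear beyond the last knot, extrapolating the fitted period and cohort effects forward continues the most recent linear trend, which is the behavior the projection method above exploits.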
Incorporating Human-like Walking Variability in an HZD-Based Bipedal Model.
Martin, Anne E; Gregg, Robert D
2016-08-01
Predictive simulations of human walking could be used to investigate a wide range of questions. Promising moderately complex models have been developed using the robotics control technique hybrid zero dynamics (HZD). Existing simulations of human walking only consider the mean motion, so they cannot be used to investigate fall risk, which is correlated with variability. This work determines how to incorporate human-like variability into an HZD-based healthy human model to generate a more realistic gait. The key challenge is determining how to combine the existing mathematical description of variability with the dynamic model so that the biped is still able to walk without falling. To do so, the commanded motion is augmented with a sinusoidal variability function and a polynomial correction function. The variability function captures the variation in joint angles while the correction function prevents the variability function from growing uncontrollably. The necessity of the correction function and the improvements with a reduction of stance ankle variability are demonstrated via simulations. The variability in temporal measures is shown to be similar to experimental values.
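The idea of a sinusoidal variability term plus a correction that stops the perturbation from accumulating can be sketched with a hypothetical minimal form: a sinusoid on the gait phase plus a linear correction polynomial that zeroes the total perturbation at both stride boundaries. The functional forms and parameters below are illustrative assumptions, not the paper's identified variability model:

```python
import math

def commanded_angle(s, nominal, A=0.02, f=1.3, phase=0.7):
    """Hypothetical sketch of the abstract's construction: augment the
    nominal commanded joint angle with a sinusoidal variability term, then
    add a linear correction polynomial so the perturbation vanishes at both
    ends of the gait phase s in [0, 1] and cannot grow stride over stride."""
    var = A * math.sin(2.0 * math.pi * f * s + phase)
    v0 = A * math.sin(phase)                        # perturbation at s = 0
    v1 = A * math.sin(2.0 * math.pi * f + phase)    # perturbation at s = 1
    correction = -(v0 + (v1 - v0) * s)
    return nominal(s) + var + correction
```

Forcing the perturbation to zero at the stride boundaries plays the role of the paper's correction function: it keeps the varied motion compatible with the periodic HZD gait so the biped can still complete each step.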
Mills, Kyle; Tamblyn, Isaac
2018-03-01
We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4 ×4 Ising model. Using its success at this task, we motivate the study of the larger 8 ×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
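The training target the network learns, the nearest-neighbor Ising energy on a periodic square lattice, is itself a two-line computation. A minimal sketch of that exact reference calculation (the network architecture is not reproduced here):

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Nearest-neighbor Ising energy E = -J * sum_<ij> s_i s_j on a periodic
    square lattice of +/-1 spins: shifting the lattice once along each axis
    counts every bond exactly once."""
    s = np.asarray(spins)
    return -J * np.sum(s * np.roll(s, 1, axis=0) + s * np.roll(s, 1, axis=1))
```

For the 4x4 lattice there are 32 bonds, so the all-up configuration has E = -32J and the checkerboard (fully antialigned) configuration has E = +32J; pairs like these are the labels the convolutional network is trained to regress.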
International Nuclear Information System (INIS)
Musho, M.K.; Kozak, J.J.
1984-01-01
A method is presented for calculating exactly the relative width ⟨σ²⟩^{1/2}/⟨n⟩, the skewness γ₁, and the kurtosis γ₂ characterizing the probability distribution function for three random-walk models of diffusion-controlled processes. For processes in which a diffusing coreactant A reacts irreversibly with a target molecule B situated at a reaction center, three models are considered. The first is the traditional one of an unbiased, nearest-neighbor random walk on a d-dimensional periodic/confining lattice with traps; the second involves the consideration of unbiased, non-nearest-neighbor (i.e., variable-step-length) walks on the same d-dimensional lattice; and the third deals with the case of a biased, nearest-neighbor walk on a d-dimensional lattice (wherein a walker experiences a potential centered at the deep trap site of the lattice). Our method, which has been described in detail elsewhere [P. A. Politowicz and J. J. Kozak, Phys. Rev. B 28, 5549 (1983)], is based on the use of group-theoretic arguments within the framework of the theory of finite Markov processes.
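The first of the three models, an unbiased nearest-neighbor walk to a deep trap, is easy to explore by simple Monte Carlo in one dimension, where the mean walk length averaged over uniform non-trap starting sites has the exact value N(N+1)/6. This sketch estimates moments by sampling rather than by the exact Markov-chain method of the paper:

```python
import numpy as np

def mean_walk_length_to_trap(n_sites=6, n_walks=5000, seed=1):
    """Monte Carlo estimate for model 1 in 1-D: an unbiased nearest-neighbor
    random walk on a periodic lattice with a single deep trap at site 0.
    Returns the mean number of steps to trapping, averaged over uniformly
    chosen non-trap starting sites (exact 1-D value: N(N+1)/6)."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_walks):
        pos = int(rng.integers(1, n_sites))          # random non-trap start
        steps = 0
        while pos != 0:
            pos = (pos + int(rng.integers(0, 2)) * 2 - 1) % n_sites
            steps += 1
        total += steps
    return total / n_walks
```

For N = 6 the exact mean is 7; the sampled estimate converges to it, while the higher moments (width, skewness, kurtosis) quantified exactly in the paper would require many more samples to pin down by simulation.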
Bellmore, J. Ryan; Benjamin, Joseph R.; Newsom, Michael; Bountry, Jennifer A.; Dombroski, Daniel
2017-01-01
Restoration is frequently aimed at the recovery of target species, but also influences the larger food web in which these species participate. Effects of restoration on this broader network of organisms can influence target species both directly and indirectly via changes in energy flow through food webs. To help incorporate these complexities into river restoration planning we constructed a model that links river food web dynamics to in-stream physical habitat and riparian vegetation conditions. We present an application of the model to the Methow River, Washington (USA), a location of on-going restoration aimed at recovering salmon. Three restoration strategies were simulated: riparian vegetation restoration, nutrient augmentation via salmon carcass addition, and side-channel reconnection. We also added populations of nonnative aquatic snails and fish to the modeled food web to explore how changes in food web structure mediate responses to restoration. Simulations suggest that side-channel reconnection may be a better strategy than carcass addition and vegetation planting for improving conditions for salmon in this river segment. However, modeled responses were strongly sensitive to changes in the structure of the food web. The addition of nonnative snails and fish modified pathways of energy through the food web, which negated restoration improvements. This finding illustrates that forecasting responses to restoration may require accounting for the structure of food webs, and that changes in this structure—as might be expected with the spread of invasive species—could compromise restoration outcomes. Unlike habitat-based approaches to restoration assessment that focus on the direct effects of physical habitat conditions on single species of interest, our approach dynamically links the success of target organisms to the success of competitors, predators, and prey. By elucidating the direct and indirect pathways by which restoration affects target species
Bellerby, Tim
2014-05-01
Model Integration System (MIST) is an open-source environmental modelling programming language that directly incorporates data parallelism. The language is designed to enable straightforward programming structures, such as nested loops and conditional statements, to be directly translated into sequences of whole-array (or, more generally, whole-data-structure) operations. MIST thus enables the programmer to use well-understood constructs, directly relating to the mathematical structure of the model, without having to explicitly vectorize code or worry about details of parallelization. A range of common modelling operations are supported by dedicated language structures operating on cell neighbourhoods rather than individual cells (e.g. the 3x3 local neighbourhood needed to implement an averaging image filter can be simply accessed from within a simple loop traversing all image pixels). This facility hides details of inter-process communication behind more mathematically relevant descriptions of model dynamics. The MIST automatic vectorization/parallelization process serves both to distribute work among available nodes and, separately, to control storage requirements for intermediate expressions, enabling operations on very large domains for which memory availability may be an issue. MIST is designed to facilitate efficient interpreter-based implementations. A prototype open-source interpreter is available, coded in standard FORTRAN 95, with tools to rapidly integrate existing FORTRAN 77 or 95 code libraries. The language is formally specified and thus not limited to a FORTRAN implementation or to an interpreter-based approach. A MIST-to-FORTRAN compiler is under development, and volunteers are sought to create an ANSI-C implementation. Parallel processing is currently implemented using OpenMP. However, the parallelization code is fully modularised and could be replaced with implementations using other libraries. GPU implementation is potentially possible.
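The whole-array style that MIST's vectorizer targets can be illustrated in NumPy with the abstract's own example, the 3x3 averaging filter: the per-pixel loop disappears and the computation becomes a short sequence of whole-array shifts and adds (this is an analogy in Python, not MIST syntax, and periodic boundaries are an illustrative choice):

```python
import numpy as np

def mean3x3_whole_array(img):
    """3x3 averaging filter expressed as whole-array operations with
    periodic boundaries: the two small loops run over the nine neighbourhood
    offsets, never over individual pixels."""
    acc = np.zeros_like(img, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += np.roll(np.roll(img, di, axis=0), dj, axis=1)
    return acc / 9.0
```

Because every pixel contributes to exactly nine shifted copies, the filter preserves the image total, and a runtime can stream or tile the shifted arrays to bound intermediate storage, the same concern MIST's vectorizer manages automatically.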
A land use regression model incorporating data on industrial point source pollution.
Chen, Li; Wang, Yuming; Li, Peiwu; Ji, Yaqin; Kong, Shaofei; Li, Zhiyong; Bai, Zhipeng
2012-01-01
Advancing the understanding of the spatial aspects of air pollution in the city regional environment is an area where improved methods can be of great benefit to exposure assessment and policy support. We created land use regression (LUR) models for SO2, NO2 and PM10 for Tianjin, China. Traffic volumes, road networks, land use data, population density, meteorological conditions, physical conditions and satellite-derived greenness, brightness and wetness were used for predicting SO2, NO2 and PM10 concentrations. We incorporated data on industrial point sources to improve LUR model performance. In order to consider the impact of different sources, we calculated the PSIndex, LSIndex and area of different land use types (agricultural land, industrial land, commercial land, residential land, green space and water area) within different buffer radii (1 to 20 km). This method compensates for the lack of consideration of source impact in standard LUR models. Remote sensing-derived variables were significantly correlated with gaseous pollutant concentrations such as SO2 and NO2. R2 values of the multiple linear regression equations for SO2, NO2 and PM10 were 0.78, 0.89 and 0.84, respectively, and the RMSE values were 0.32, 0.18 and 0.21, respectively. Model predictions at validation monitoring sites agreed well with measurements, generally within 15% of measured values. Compared to the relationship between dependent variables and simple variables (such as traffic variables or meteorological condition variables), the relationship between dependent variables and integrated variables was more consistent with a linear relationship. Such integration has a discernible influence on both the overall model prediction and health effects assessment on the spatial distribution of air pollution in the city region.
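The core of a LUR model is an ordinary multiple linear regression of measured concentrations on buffer-derived predictors. A minimal sketch, assuming hypothetical predictors (traffic volume in a 1 km buffer, industrial land area in a 5 km buffer) and synthetic NO2 values; the variable names and numbers are illustrative, not from the study:

```python
import numpy as np

# Hypothetical predictors for six monitoring sites (arbitrary units):
# column 0 = traffic volume within 1 km, column 1 = industrial area within 5 km.
X = np.array([
    [1.0, 0.2], [2.0, 0.1], [3.0, 0.4],
    [4.0, 0.3], [5.0, 0.6], [6.0, 0.5],
])
# Synthetic NO2 concentrations generated as 10 + 2*traffic + 5*industry.
y = 10.0 + 2.0 * X[:, 0] + 5.0 * X[:, 1]

# Ordinary least squares with an intercept column, as in a LUR model.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination (R^2) on the fitted sites.
pred = A @ coef
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

Because the synthetic response is exactly linear in the predictors, the regression recovers the generating coefficients and R² is essentially 1; with real monitoring data the fit degrades toward the 0.78-0.89 range reported above.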
Dzul, Maria C.; Yackulic, Charles B.; Korman, Josh
2017-01-01
Autonomous passive integrated transponder (PIT) tag antenna systems continuously detect individually marked organisms at one or more fixed points over long time periods. Estimating abundance using data from autonomous antennae can be challenging, because these systems do not detect unmarked individuals. Here we pair PIT antenna data from a tributary with mark-recapture sampling data in a mainstem river to estimate the number of fish moving from the mainstem to the tributary. We then use our model to estimate abundance of non-native rainbow trout Oncorhynchus mykiss that move from the Colorado River to the Little Colorado River (LCR), the latter of which is important spawning and rearing habitat for the federally endangered humpback chub Gila cypha. We estimate that 226 rainbow trout (95% CI: 127-370) entered the LCR from October 2013 to April 2014. We discuss the challenges of incorporating detections from autonomous PIT antenna systems into mark-recapture population models, particularly with regard to using information about spatial location to estimate movement and detection probabilities.
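The integrated model described above is considerably richer than textbook estimators, but the basic mark-recapture logic can be illustrated with the classic Chapman (bias-corrected Lincoln-Petersen) estimator. This is a sketch for intuition only; the counts are invented and the paper's actual model additionally handles antenna detections and movement:

```python
def chapman_estimate(n_marked, n_captured, n_recaptured):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.

    n_marked:     animals marked in the first sampling event
    n_captured:   animals captured in the second event
    n_recaptured: captured animals that carried a mark
    """
    return (n_marked + 1) * (n_captured + 1) / (n_recaptured + 1) - 1

# Hypothetical example: 100 fish marked, 100 captured later, 25 recaptures.
abundance = chapman_estimate(100, 100, 25)
```

With these illustrative counts the estimate is about 391 fish; real applications would attach a confidence interval, as the 95% CI of 127-370 in the abstract does for a far more detailed model.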
A systematic procedure for the incorporation of common cause events into risk and reliability models
International Nuclear Information System (INIS)
Fleming, K.N.; Mosleh, A.; Deremer, R.K.
1986-01-01
Common cause events are an important class of dependent events with respect to their contribution to system unavailability and to plant risk. Unfortunately, these events have not been treated with any kind of consistency in applied risk studies over the past decade. Many probabilistic risk assessments (PRA) have not included these events at all, and those that have did not employ the kind of systematic procedures that are needed to achieve consistency, accuracy, and credibility in this area of PRA methodology. In this paper, the authors report on the progress recently made in the development of a systematic approach for incorporating common cause events into applied risk and reliability evaluations. This approach takes advantage of experience from recently completed PRAs and is the result of a project, sponsored by the Electric Power Research Institute (EPRI), in which procedures for dependent events analysis are being developed. Described in this paper is a general framework for system-level common cause failure (CCF) analysis and its application to a three-train auxiliary feedwater system. Within this general framework, three parametric CCF models are compared, including the basic parameter (BP), multiple Greek letter (MGL), and binomial failure rate (BFR) models. Pitfalls of not following the recommended procedure are discussed, and some old issues, such as the benefits of redundancy and diversity, are reexamined. (orig.)
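For a three-train system like the auxiliary feedwater example, the MGL model partitions a component's total failure probability into independent, double, and triple common cause events. A minimal sketch using one common parameterization of the m=3 MGL formulas; the β, γ, and Q values are illustrative, not taken from the paper:

```python
def mgl_probabilities(q_total, beta, gamma):
    """Multiple Greek letter (MGL) CCF probabilities for a 3-train system.

    q_total: total failure probability of one component
    beta:    conditional probability that a failure is shared by >= 2 trains
    gamma:   conditional probability that a shared failure involves all 3
    Returns (q1, q2, q3): probabilities of a specific independent failure,
    a specific 2-component CCF event, and the 3-component CCF event.
    """
    q1 = (1.0 - beta) * q_total
    q2 = 0.5 * beta * (1.0 - gamma) * q_total
    q3 = beta * gamma * q_total
    return q1, q2, q3

# Illustrative values: Qt = 1e-3 per demand, beta = 0.1, gamma = 0.27.
q1, q2, q3 = mgl_probabilities(1e-3, 0.1, 0.27)
```

A useful consistency check on the parameterization is that each component's total failure probability is recovered as Q1 + 2·Q2 + Q3.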
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
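At the heart of the exact CS calculation is combining the conditional means and standard deviations from multiple causal earthquakes and GMPMs, weighted by their deaggregation probabilities, into a single mixture mean and standard deviation. A sketch of that mixture identity (function name and numbers are ours, for illustration only):

```python
import math

def mixture_mean_std(weights, means, stds):
    """Exact mean and standard deviation of a probability mixture.

    In CS terms: weights are deaggregation probabilities of causal
    earthquake/GMPM combinations; means and stds are each combination's
    conditional mean and standard deviation of log spectral acceleration.
    """
    mu = sum(w * m for w, m in zip(weights, means))
    # Total variance = E[sigma^2 + mu_i^2] - mu^2 (within- plus between-
    # component variance).
    var = sum(w * (s * s + m * m)
              for w, m, s in zip(weights, means, stds)) - mu * mu
    return mu, math.sqrt(var)
```

Note that even when every component has zero standard deviation, disagreement between component means contributes variance, which is why the exact CS is generally broader than a single-scenario approximation.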
Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye
2014-01-01
This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
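The hypothesize-and-verify structure described above is generic RANSAC: draw a minimal sample, generate a closed-form model hypothesis, and keep the hypothesis with the most inliers. A minimal sketch using 2D line fitting in place of the paper's bicycle-model motion solver (the data and thresholds are illustrative):

```python
import random

def ransac_line(points, n_iter=100, tol=0.1, seed=0):
    """Fit y = a*x + b by RANSAC: two-point hypotheses scored by inlier count.

    Returns (best_inlier_count, (a, b)). A real visual odometry system would
    replace the two-point line model with the minimal motion hypothesis.
    """
    rng = random.Random(seed)
    best = (0, (0.0, 0.0))
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, no unique line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) < tol)
        if inliers > best[0]:
            best = (inliers, (a, b))
    return best
```

As in the paper's pipeline, the winning hypothesis would then be refined using all of its inliers (there by reprojection-error minimization, here it could be least squares).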
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed in the multi-pollutant setting.
Ground state properties of a spin chain within Heisenberg model with a single lacking spin site
International Nuclear Information System (INIS)
Mebrouki, M.
2011-01-01
The ground state and first excited state energies of an antiferromagnetic spin-1/2 chain with and without a single lacking spin site are computed using the exact diagonalization method within the Heisenberg model. In order to keep both parts of a spin chain with a lacking site connected, next-nearest-neighbor interactions are introduced. The Density Matrix Renormalization Group (DMRG) method is also used to investigate ground state energies of large system sizes, which permits us to examine the effect of system size on the energies. Other quantum quantities such as fidelity and correlation functions are also studied and compared in both cases. Research highlights: ground state and first excited state energies are computed for a spin chain with and without a lacking spin site, with next-nearest-neighbor interactions added to the antiferromagnetic spin-1/2 Heisenberg model; exact diagonalization is used for small systems and DMRG for large systems; quantum fidelity and correlation functions are also computed; E_0/N is computed as a function of N for several values of J_2 for both systems, and the first excited energies are likewise investigated.
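Exact diagonalization of a small Heisenberg chain amounts to building the Hamiltonian as a sum of Kronecker products of spin-1/2 operators and taking its lowest eigenvalue. A minimal dense-matrix sketch for an open chain (no lacking site, no J_2 term; only a toy version of the method used in the paper):

```python
import numpy as np

# Spin-1/2 operators.
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def heisenberg_chain(n, j=1.0):
    """Dense Hamiltonian H = J * sum_i S_i . S_{i+1} of an open AFM chain."""
    def op(site, s):
        # Embed single-site operator s at `site` via Kronecker products.
        mats = [I2] * n
        mats[site] = s
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    h = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for s in (sx, sy, sz):
            h += j * op(i, s) @ op(i + 1, s)
    return h

# Two-site chain: the singlet ground state has energy -3/4 for J = 1.
e0 = np.linalg.eigvalsh(heisenberg_chain(2))[0]
```

The exponential growth of the 2^n-dimensional matrix is exactly why the paper switches to DMRG for large system sizes.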
Phase transitions in an Ising model for monolayers of coadsorbed atoms
International Nuclear Information System (INIS)
Lee, H.H.; Landau, D.P.
1979-01-01
A Monte Carlo method is used to study a simple S=1 Ising (lattice-gas) model appropriate for monolayers composed of two kinds of atoms on cubic metal substrates, H = K_nn Σ_nn S_iz² S_jz² + J_nnn Σ_nnn S_iz S_jz + Δ Σ_i S_iz², where nn denotes nearest-neighbor and nnn next-nearest-neighbor pairs. The phase diagram is determined over a wide range of Δ and T for K_nn/J_nnn = 1/4. For small (or negative) Δ we find an antiferromagnetic 2×1 ordered phase separated from the disordered state by a line of second-order phase transitions. The 2×1 phase is separated by a line of first-order transitions from a c(2×2) phase which appears for larger Δ. The 2×1 and c(2×2) phases become simultaneously critical at a bicritical point, and the phase boundary of the c(2×2)-disordered transition shows a tricritical point
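The Hamiltonian above is straightforward to evaluate on a periodic lattice, and a single-site Metropolis update is the basic Monte Carlo move for such a model. A minimal sketch (parameter values are illustrative; the paper's study used K_nn/J_nnn = 1/4 but far larger lattices and careful sampling):

```python
import math
import random

def energy(spins, k_nn=0.25, j_nnn=1.0, delta=0.5):
    """Energy of H = K_nn sum_nn S_i^2 S_j^2 + J_nnn sum_nnn S_i S_j
    + Delta sum_i S_i^2 on an L x L periodic lattice, S_i in {-1, 0, +1}."""
    L = len(spins)
    e = 0.0
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            # Right and down neighbors count each nn pair exactly once.
            for di, dj in ((0, 1), (1, 0)):
                t = spins[(i + di) % L][(j + dj) % L]
                e += k_nn * s * s * t * t
            # Two of the four diagonals count each nnn pair exactly once.
            for di, dj in ((1, 1), (1, -1)):
                t = spins[(i + di) % L][(j + dj) % L]
                e += j_nnn * s * t
            e += delta * s * s
    return e

def metropolis_step(spins, beta, rng):
    """One single-site Metropolis update (full-energy recomputation sketch)."""
    L = len(spins)
    i, j = rng.randrange(L), rng.randrange(L)
    old = spins[i][j]
    e_old = energy(spins)
    spins[i][j] = rng.choice([-1, 0, 1])
    e_new = energy(spins)
    if e_new > e_old and rng.random() >= math.exp(-beta * (e_new - e_old)):
        spins[i][j] = old  # reject the move
```

A production simulation would of course use the local energy difference rather than recomputing the full lattice energy at every step.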
Directory of Open Access Journals (Sweden)
Wenzhi Wang
2016-07-01
Full Text Available Modeling the random fiber distribution of a fiber-reinforced composite is of great importance for studying the progressive failure behavior of the material on the micro scale. In this paper, we develop a new algorithm for generating random representative volume elements (RVEs) with a statistically equivalent fiber distribution against the actual material microstructure. Realistic statistical data are utilized as inputs of the new method, which is achieved through implementation of the probability equations. Extensive statistical analysis is conducted to examine the capability of the proposed method and to compare it with existing methods. It is found that the proposed method presents a good match with experimental results in all aspects including the nearest neighbor distance, nearest neighbor orientation, Ripley’s K function, and the radial distribution function. Finite element analysis is presented to predict the effective elastic properties of a carbon/epoxy composite, to validate the generated random representative volume elements, and to provide insights into the effect of fiber distribution on the elastic properties. The present algorithm is shown to be highly accurate and can be used to generate statistically equivalent RVEs for not only fiber-reinforced composites but also other materials such as foam materials and particle-reinforced composites.
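The first statistic the abstract mentions, the nearest-neighbor distance distribution, is simple to compute for a set of fiber centers. A brute-force sketch (real RVE validation would also compute nearest-neighbor orientation, Ripley's K, and the radial distribution function):

```python
import math

def nearest_neighbor_distances(points):
    """Nearest-neighbor distance for each 2D point (brute force, O(n^2))."""
    out = []
    for i, p in enumerate(points):
        out.append(min(math.dist(p, q)
                       for j, q in enumerate(points) if i != j))
    return out

# Toy fiber centers; in practice these come from a micrograph or an RVE.
centers = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
mean_nn = sum(nearest_neighbor_distances(centers)) / len(centers)
```

Comparing the observed mean nearest-neighbor distance against the Poisson-process expectation (0.5/√density, the Clark-Evans statistic) is one common way to judge whether a generated fiber arrangement is "random enough".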
Modeling ready biodegradability of fragrance materials.
Ceriani, Lidia; Papa, Ester; Kovarich, Simona; Boethling, Robert; Gramatica, Paola
2015-06-01
In the present study, quantitative structure-activity relationships were developed for predicting ready biodegradability of approximately 200 heterogeneous fragrance materials. Two classification methods, classification and regression tree (CART) and k-nearest neighbors (kNN), were applied to perform the modeling. The models were validated with multiple external prediction sets, and the structural applicability domain was verified by the leverage approach. The best models had good sensitivity (internal ≥80%; external ≥68%), specificity (internal ≥80%; external 73%), and overall accuracy (≥75%). Results from the comparison with BIOWIN global models, based on the group contribution method, show that the specific models developed in the present study perform better in prediction than BIOWIN6, in particular for the correct classification of not readily biodegradable fragrance materials. © 2015 SETAC.
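A kNN classifier of the kind used here assigns a compound the majority label among its k nearest neighbors in descriptor space. A minimal from-scratch sketch with invented two-dimensional "descriptors" and the labels RB (readily biodegradable) / NRB (not readily biodegradable); nothing here reproduces the study's actual descriptors or data:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    train: list of (feature_vector, label) pairs; Euclidean distance.
    """
    neighbors = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical descriptor values for six training compounds.
train = [
    ((0.0, 0.0), "RB"), ((0.0, 1.0), "RB"), ((1.0, 0.0), "RB"),
    ((5.0, 5.0), "NRB"), ((5.0, 6.0), "NRB"), ((6.0, 5.0), "NRB"),
]
```

The leverage-based applicability domain mentioned in the abstract would additionally flag query compounds that fall too far from the training descriptor space for the vote to be trusted.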
Ahmad, Khurshid; Waris, Muhammad; Hayat, Maqsood
2016-06-01
Mitochondrion is the key organelle of the eukaryotic cell, which provides energy for cellular activities. Submitochondrial locations of proteins play a crucial role in understanding different biological processes such as energy metabolism, programmed cell death, and ionic homeostasis. Prediction of submitochondrial locations through conventional methods is expensive and time consuming because of the large number of protein sequences generated in the last few decades. Therefore, it is highly desirable to establish an automated model for identification of submitochondrial locations of proteins. In this regard, the current study was initiated to develop a fast, reliable, and accurate computational model. Various feature extraction methods such as dipeptide composition (DPC), Split Amino Acid Composition, and Composition and Translation were utilized. In order to overcome the issue of bias, the oversampling technique SMOTE was applied to balance the datasets. Several classification learners including K-Nearest Neighbor, Probabilistic Neural Network, and support vector machine (SVM) were used. The jackknife test was applied to assess the performance of the classification algorithms using two benchmark datasets. Among the various classification algorithms, SVM achieved the highest success rates in conjunction with the condensed feature space of DPC, which are 95.20% accuracy on dataset SML3-317 and 95.11% on dataset SML3-983. The empirical results revealed that our proposed model obtained the highest results reported in the literature so far. It is anticipated that our proposed model might be useful for future studies.
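The jackknife test mentioned above is leave-one-out cross-validation: each sequence is held out in turn and classified by a model trained on the rest. A minimal sketch using a 1-nearest-neighbor classifier on invented feature vectors (the study's actual features are DPC-derived and its best learner was an SVM):

```python
import math

def loo_accuracy(data):
    """Leave-one-out (jackknife) accuracy of a 1-nearest-neighbor classifier.

    data: list of (feature_vector, label) pairs; each point is classified
    by the label of its nearest neighbor among the remaining points.
    """
    correct = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        nearest = min(rest, key=lambda fl: math.dist(fl[0], x))
        correct += nearest[1] == y
    return correct / len(data)
```

Because every sample serves once as the test case, the jackknife gives a nearly unbiased (if pessimistic) performance estimate on small benchmark datasets.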
Paulsen, H.; Ilyina, T.; Six, K. D.
2016-02-01
Marine nitrogen fixers play a fundamental role in the oceanic nitrogen and carbon cycles by providing a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Furthermore, nitrogen fixers may regionally have a direct impact on ocean physics and hence the climate system, as they form extensive surface mats which can increase light absorption and surface albedo and reduce the momentum input by wind. Resulting alterations in temperature and stratification may feed back on nitrogen fixers' growth itself. We incorporate nitrogen fixers as a prognostic 3D tracer in the ocean biogeochemical component (HAMOCC) of the Max Planck Institute Earth system model and assess for the first time the impact of related bio-physical feedbacks on biogeochemistry and the climate system. The model successfully reproduces recent estimates of global nitrogen fixation rates, as well as the observed distribution of nitrogen fixers, covering large parts of the tropical and subtropical oceans. First results indicate that including bio-physical feedbacks has considerable effects on the upper ocean physics in this region. Light absorption by nitrogen fixers leads locally to surface heating, subsurface cooling, and mixed layer depth shoaling in the subtropical gyres. As a result, equatorial upwelling is increased, leading to surface cooling at the equator. This signal is damped by the effect of the reduced wind stress due to the presence of cyanobacteria mats, which causes a reduction in the wind-driven circulation, and hence a reduction in equatorial upwelling. The increase in surface albedo due to nitrogen fixers has only negligible effects. The response of nitrogen fixers' growth to the alterations in temperature and stratification varies regionally. Simulations with the fully coupled Earth system model are in progress to assess the implications of the biologically induced changes in upper ocean physics for the global climate system.
Directory of Open Access Journals (Sweden)
Wen-Jeng Huang
2016-02-01
Full Text Available We develop a folding boundary element model in a medium containing a fault and elastic layers to show that anticlines growing over slipping reverse faults can be significantly amplified by mechanical layer buckling under horizontal shortening. Previous studies suggested that folds over blind reverse faults grow primarily during deformation increments associated with slip on the fault during and immediately after earthquakes. Under this assumption, the potential for earthquakes on blind faults can be determined directly from fold geometry, because the amount of slip on the fault can be estimated from the fold geometry using the solution for a dislocation in an elastic half-space. Studies that assume folds grow solely by slip on a fault may therefore significantly overestimate fault slip. Our boundary element technique demonstrates that the fold amplitude produced in a medium containing a fault and elastic layers with free slip, subjected to layer-parallel shortening, can grow to more than twice the fold amplitude produced in a homogeneous medium without mechanical layering under the same amount of shortening. In addition, the fold wavelengths produced by the combined fault slip and buckling mechanisms may be narrower than folds produced by fault slip in an elastic half-space by a factor of two. We also show that the subsurface fold geometry of the Kettleman Hills Anticline in Central California inferred from a seismic reflection image is consistent with a model that incorporates layer buckling over a dipping blind reverse fault, and that the coseismic uplift pattern produced during a 1985 earthquake centered over the anticline forelimb is predicted by the model.
Taatgen, Niels A.; de Weerd, Harmen; Reitter, David; Ritter, Frank
2016-01-01
We present a Swift re-implementation of the ACT-R cognitive architecture, which can be used to quickly build iOS Apps that incorporate an ACT-R model as a core feature. We discuss how this implementation can be used in an example model, and explore the breadth of possibilities by presenting six Apps
Jung, Jae Yup
2013-01-01
This study tested a newly developed model of the cognitive decision-making processes of senior high school students related to university entry. The model incorporated variables derived from motivation theory (i.e. expectancy-value theory and the theory of reasoned action), literature on cultural orientation and occupational considerations. A…
Peter J. Gould; Constance A. Harrington; Bradley J. St Clair
2011-01-01
Models to predict budburst and other phenological events in plants are needed to forecast how climate change may impact ecosystems and for the development of mitigation strategies. Differences among genotypes are important to predicting phenological events in species that show strong clinal variation in adaptive traits. We present a model that incorporates the effects...
Stevens, Andrew W.; Gelfenbaum, Guy; Elias, Edwin; Jones, Craig
2008-01-01
lab with Sedflume, an apparatus for measuring sediment erosion-parameters. In this report, we present results of the characterization of fine-grained sediment erodibility within Capitol Lake. The erodibility data were incorporated into the previously developed hydrodynamic and sediment transport model. Model simulations using the measured erodibility parameters were conducted to provide more robust estimates of the overall magnitudes and spatial patterns of sediment transport resulting from restoration of the Deschutes Estuary.
A diagnostic model incorporating P50 sensory gating and neuropsychological tests for schizophrenia.
Directory of Open Access Journals (Sweden)
Jia-Chi Shan
Full Text Available OBJECTIVES: The use of endophenotypes in schizophrenia research is a contemporary approach to studying this heterogeneous mental illness, and several candidate neurophysiological markers (e.g., P50 sensory gating) and neuropsychological tests (e.g., the Continuous Performance Test (CPT) and the Wisconsin Card Sorting Test (WCST)) have been proposed. However, the clinical utility of a single marker appears to be limited. In the present study, we aimed to construct a diagnostic model incorporating P50 sensory gating with other neuropsychological tests in order to improve the clinical utility. METHODS: We recruited clinically stable outpatients meeting DSM-IV criteria for schizophrenia and age- and gender-matched healthy controls. Participants underwent P50 sensory gating experimental sessions and batteries of neuropsychological tests, including the CPT, WCST and Wechsler Adult Intelligence Scale Third Edition (WAIS-III). RESULTS: A total of 106 schizophrenia patients and 74 healthy controls were enrolled. Compared with healthy controls, the patient group had a significantly larger S2 amplitude, and thus a poorer P50 gating ratio (gating ratio = S2/S1). In addition, schizophrenia patients had poorer performance on neuropsychological tests. We then developed a diagnostic model using multivariable logistic regression analysis to differentiate patients from healthy controls. The final model included the following covariates: abnormal P50 gating (defined as a P50 gating ratio >0.4), three subscales derived from the WAIS-III (Arithmetic, Block Design, and Performance IQ), the sensitivity index from the CPT, and smoking status. This model had adequate accuracy (concordant percentage = 90.4%; c-statistic = 0.904; Hosmer-Lemeshow goodness-of-fit test, p = 0.64 > 0.05). CONCLUSION: To the best of our knowledge, this is the largest study to date using P50 sensory gating in subjects of Chinese ethnicity and the first to use P50 sensory gating along with other neuropsychological tests
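The c-statistic reported above (0.904) is the probability that a randomly chosen patient receives a higher predicted risk than a randomly chosen control, with ties counted one half. A minimal sketch of that computation (scores are invented, not from the study):

```python
def c_statistic(case_scores, control_scores):
    """Concordance (c) statistic for a diagnostic model.

    Probability that a random case scores higher than a random control;
    tied scores contribute one half. Equivalent to the area under the ROC
    curve for a binary outcome.
    """
    pairs = 0.0
    concordant = 0.0
    for c in case_scores:
        for k in control_scores:
            pairs += 1.0
            if c > k:
                concordant += 1.0
            elif c == k:
                concordant += 0.5
    return concordant / pairs
```

In practice the scores would be the fitted probabilities from the multivariable logistic regression, evaluated on the 106 patients and 74 controls.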
International Nuclear Information System (INIS)
Galan, S.F.; Mosleh, A.; Izquierdo, J.M.
2007-01-01
The ω-factor approach is a method that explicitly incorporates organizational factors into probabilistic safety assessment of nuclear power plants. Bayesian networks (BNs) are the underlying formalism used in this approach. They have a structural part formed by a graph whose nodes represent organizational variables, and a parametric part that consists of conditional probabilities, each of them quantifying organizational influences between one variable and its parents in the graph. The aim of this paper is twofold. First, we discuss some important limitations of current procedures in the ω-factor approach for either assessing conditional probabilities from experts or estimating them from data. We illustrate the discussion with an example that uses data from Licensee Event Reports of nuclear power plants for the estimation task. Second, we introduce significant improvements in the way BNs for the ω-factor approach can be constructed, so that parameter acquisition becomes easier and more intuitive. The improvements are based on the use of noisy-OR gates as a model of multicausal interaction between each BN node and its parents.
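A noisy-OR gate reduces parameter acquisition from exponentially many conditional probabilities to one per parent: each active cause independently fails to produce the effect with probability 1 - p_i. A minimal sketch (the parent probabilities below are illustrative, not ω-factor estimates):

```python
def noisy_or(probabilities, active, leak=0.0):
    """Noisy-OR gate: P(effect | parent activations).

    probabilities[i]: probability that parent i alone produces the effect.
    active[i]:        whether parent i is present/true.
    leak:             probability of the effect with no active parents.
    """
    p_fail = 1.0 - leak
    for p, a in zip(probabilities, active):
        if a:
            p_fail *= 1.0 - p
    return 1.0 - p_fail
```

With n parents, the full conditional probability table needs 2^n entries, while the noisy-OR model needs only n (plus an optional leak), which is precisely why it makes expert elicitation "easier and more intuitive".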
The Sznajd Model with Team Work
Li, Hong-Jun; Lin, Lu-Zi; Sun, He; He, Ming-Feng
In 2000, Sznajd-Weron and Sznajd introduced a model for the simulation of a closed democratic community with a two-party system, and found that such a closed community has to evolve either to a dictatorship or a stalemate state. In this paper, we continue the study of this model. All neighboring individuals holding the same opinion are defined as a team, which influences the decisions of its nearest neighbors and drives the opinion evolution. After some time steps a steady state appears, and the stalemate state of the original model is eliminated; moreover, the number of time steps required decreases dramatically. In addition, we analyze the effect of the dispersal degree of the initial opinions on the probability of converging to each steady state. Finally, we analyze the effect of noise on convergence and find that noise resistance increases by a factor of about 1000 compared with the original Sznajd model.
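In the original one-dimensional Sznajd update, a pair of agreeing neighbors imposes its opinion on the two sites flanking the pair. A minimal sketch of that baseline dynamics on a periodic chain (the team-based generalization described above would replace the fixed pair with a run of agreeing sites):

```python
import random

def sznajd_step(opinions, rng):
    """One update of the 1D Sznajd model on a periodic chain.

    A randomly chosen neighboring pair that agrees imposes its opinion
    on the two outer neighbors of the pair; a disagreeing pair does nothing.
    """
    n = len(opinions)
    i = rng.randrange(n)
    j = (i + 1) % n
    if opinions[i] == opinions[j]:
        opinions[i - 1] = opinions[i]        # left outer neighbor (periodic)
        opinions[(j + 1) % n] = opinions[j]  # right outer neighbor

def run(opinions, steps, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        sznajd_step(opinions, rng)
    return opinions
```

Consensus (all +1 or all -1) is absorbing under this rule, while the alternating +1/-1 configuration is the stalemate state that the team-based variant eliminates.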
Le Pichon, A.; Ceranna, L.
2011-12-01
To monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), a dedicated International Monitoring System (IMS) is being deployed. Recent global scale observations recorded by this network confirm that its detection capability is highly variable in space and time. Previous studies estimated the radiated source energy from remote observations using empirical yield-scaling relations which account for the along-path stratospheric winds. Although the empirical wind correction reduces the variance in the explosive energy versus pressure relationship, strong variability remains in the yield estimate. Today, numerical modelling techniques provide a basis to better understand the role of different factors describing the source and the atmosphere that influence propagation predictions. In this study, the effects of the source frequency and the stratospheric wind speed are simulated. In order to characterize fine-scale atmospheric structures which are excluded from the current atmospheric specifications, model predictions are further enhanced by the addition of perturbation terms. Thus, a theoretical attenuation relation is developed from massive numerical simulations using the Parabolic Equation method. Compared with previous studies, our approach provides a more realistic physical description of infrasound propagation. We obtain a new relation combining a near-field and far-field term which account for the effects of both geometrical spreading and dissipation on the pressure wave attenuation. By incorporating real ambient infrasound noise at the receivers which significantly limits the ability to detect and identify signals of interest, the minimum detectable source amplitude can be derived in a broad frequency range. Empirical relations between the source spectrum and the yield of explosions are used to infer detection thresholds in tons of TNT equivalent. In the context of the future verification of the CTBT, the obtained attenuation relation quantifies
Clarke; Bell; Hobbs; George
1999-07-01
This paper synthesizes results of research into the impact that major faults have on dryland salinity and the development of revegetation treatments in the wheatbelt of Western Australia. Currently, landscape planning does not routinely incorporate geology, but this research shows that faults can have a dramatic impact on land and stream salinization and on the effectiveness of revegetation treatments, and evidence exists that other geological features can have a similar influence. This research shows that faults can be identified from airborne magnetic data, that they can be assigned a characteristic hydraulic conductivity based on simple borehole tests, and that four other geological features expected to affect land and stream salinity could be identified in airborne geophysical data. A geological theme map could then be created to which characteristic hydraulic conductivities could be assigned for use in computer groundwater models to improve prediction of the effectiveness of revegetation treatments and thus enhance the landscape planning process. The work highlights the difficulties of using standard sampling and statistical techniques to investigate regional phenomena and presents an integrated approach combining small-scale sampling with broad-scale observations to provide input into a modeling exercise. It is suggested that such approaches are vital if landscape- and regional-scale processes are to be understood and managed. The way in which the problem is perceived (holistically or piecemeal) affects the way treatments are designed and their effectiveness: past approaches have failed to integrate the various scales and processes involved. Effective solutions require an integrated holistic response. KEY WORDS: Dryland salinity; Geology; Landscape; Revegetation; Integration. http://link.springer-ny.com/link/service/journals/00267/bibs/24n1p99.html
International Nuclear Information System (INIS)
Kulik, D.A.
2005-01-01
Full text of publication follows: Computer-aided surface complexation models (SCM) tend to replace the classic adsorption isotherm (AI) analysis in describing mineral-water interface reactions such as radionuclide sorption onto (hydr)oxides and clays. Any site-binding SCM based on the mole balance of surface sites, in fact, reproduces the (competitive) Langmuir isotherm, optionally amended with an electrostatic Coulombic non-ideal term. In most SCM implementations, it is difficult to incorporate real-surface phenomena (site heterogeneity, lateral interactions, surface condensation) described in classic AI approaches other than Langmuir's. Thermodynamic relations between SCMs and AIs that remained obscure in the past have been recently clarified using new definitions of standard and reference states of surface species [1,2]. On this basis, a method for separating the Langmuir AI into ideal (linear) and non-ideal parts [2] was applied to multi-dentate Langmuir, Frumkin, and BET isotherms. The aim of this work was to obtain the surface activity coefficient terms that make the SCM site mole balance constraints obsolete and, in this way, extend thermodynamic SCMs to cover sorption phenomena described by the respective AIs. The multi-dentate Langmuir term accounts for the site saturation with n-dentate surface species, as illustrated by modeling bi-dentate U(VI) complexes on goethite or SiO2 surfaces. The Frumkin term corrects for the lateral interactions of the mono-dentate surface species; in particular, it has the same form as the Coulombic term of the constant-capacitance EDL combined with the Langmuir term. The BET term (three parameters) accounts for more than a monolayer adsorption up to the surface condensation; it can potentially describe the surface precipitation of nickel and other cations on hydroxides and clay minerals. All three non-ideal terms (in the GEM SCM implementation [1,2]) are currently used for non-competing surface species only. Upon 'surface dilution
An integrated modeling approach to age invariant face recognition
Alvi, Fahad Bashir; Pears, Russel
2015-03-01
This research study proposes a novel method for face recognition based on anthropometric features, using an integrated approach comprising a global model and a personalized model. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. The personalized model covers individual aging patterns, while the global model captures general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database's test and training sets. We used the k-nearest neighbor approach for building the personalized and global models, and regression analysis was applied to build them. During the test phase, we resort to voting on different features. We used the FG-Net database to evaluate our technique and achieved a 65 percent rank-1 identification rate.
Superconductivity in a generalized Hubbard model
Arrachea, Liliana; Aligia, A. A.
1997-02-01
We consider a Hubbard model in the square lattice, with a generalized hopping between nearest-neighbor sites for spin up (down), which depends on the total occupation nb of spin down (up) electrons on both sites. We call the hopping parameters tAA, tAB, and tBB for nb = 0, 1 or 2 respectively. Using the Hartree-Fock and Bardeen-Cooper-Schrieffer mean-field approximations to decouple the two-body and three-body interactions, we find that the model exhibits extended s-wave superconductivity in the electron-hole symmetric case tAB > tAA = tBB for small values of the Coulomb repulsion U or small band fillings. For moderate values of U, the antiferromagnetic normal (AFN) state has lower energy. The translationally invariant d-wave superconducting state has always larger energy than the AFN state.
J. Ryan Bellmore; Joseph R. Benjamin; Michael Newsom; Jennifer A. Bountry; Daniel Dombroski
2017-01-01
Restoration is frequently aimed at the recovery of target species, but also influences the larger food web in which these species participate. Effects of restoration on this broader network of organisms can influence target species both directly and indirectly via changes in energy flow through food webs. To help incorporate these complexities into river restoration...
Incorporation of the radioprotective molecule cysteamine in model membranes: an NMR study
International Nuclear Information System (INIS)
Laval, J.D.; Debouzy, J.C.; Fauvelle, F.; Viret, J.; Fatome, M.
1995-01-01
The incorporation of the cysteamine molecule into small unilamellar vesicles was studied using proton NMR techniques. A linear inclusion of the radioprotective molecule was first observed with increasing cysteamine/phospholipid molar ratios, followed by saturation at the highest ratios. Such results may be applied to the study of a new galenic formulation. (authors). 6 refs., 2 figs
76 FR 66617 - Airworthiness Directives; Erickson Air-Crane Incorporated Model S-64F Helicopters
2011-10-27
...-026-AD; Amendment 39-16835; AD 2011-21-12] RIN 2120-AA64 Airworthiness Directives; Erickson Air-Crane.... SUMMARY: We are adopting a new airworthiness directive (AD) for the Erickson Air-Crane (Erickson Air-Crane..., 2011. ADDRESSES: For service information identified in this AD, contact Erickson Air-Crane Incorporated...
Strategies for Incorporating Women-Specific Sexuality Education into Addiction Treatment Models
James, Raven
2007-01-01
This paper advocates for the incorporation of a women-specific sexuality curriculum in the addiction treatment process to aid in sexual healing and provide for aftercare issues. Sexuality in addiction treatment modalities is often approached from a sex-negative stance, or that of sexual victimization. Sexual issues are viewed as addictive in and…
Modeling and knowledge acquisition processes using case-based inference
Directory of Open Access Journals (Sweden)
Ameneh Khadivar
2017-03-01
Full Text Available The acquisition and presentation of organizational process knowledge has been considered by many KM researchers. In this research, a model for process knowledge acquisition and presentation is presented using the case-based reasoning approach. The validity of the presented model was evaluated by conducting an expert panel. Software was then developed based on the presented model and implemented in Eghtesad Novin Bank of Iran. In this bank, following the stages of the presented model, the knowledge-intensive processes were first identified; the process knowledge was then stored in a knowledge base in the format of problem/solution/consequence. Knowledge retrieval was based on the similarity measure of the nearest neighbor algorithm. To validate the implemented system, the system's results were compared with the decisions of the process experts.
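An illustrative sketch (not the paper's implementation) of the retrieval step described above: cases are stored as problem/solution pairs, and the solution of the most similar stored problem is returned by a 1-nearest-neighbor lookup. The attribute names and cases below are hypothetical.

```python
def similarity(a, b):
    """Fraction of attributes on which two problem descriptions agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base, query):
    """Return the solution of the most similar stored case (1-nearest neighbor)."""
    best = max(case_base, key=lambda c: similarity(c["problem"], query))
    return best["solution"]

case_base = [
    {"problem": {"type": "loan", "urgency": "high"}, "solution": "route-A"},
    {"problem": {"type": "loan", "urgency": "low"},  "solution": "route-B"},
    {"problem": {"type": "card", "urgency": "high"}, "solution": "route-C"},
]
print(retrieve(case_base, {"type": "loan", "urgency": "high"}))  # → route-A
```

In practice the similarity measure would weight attributes by importance; the uniform match fraction here is the simplest choice that makes the retrieval logic visible.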
Stripe order from the perspective of the Hubbard model
Energy Technology Data Exchange (ETDEWEB)
Devereaux, Thomas Peter
2018-03-01
A microscopic understanding of the strongly correlated physics of the cuprates must account for the translational and rotational symmetry breaking that is present across all cuprate families, commonly in the form of stripes. Here we investigate emergence of stripes in the Hubbard model, a minimal model believed to be relevant to the cuprate superconductors, using determinant quantum Monte Carlo (DQMC) simulations at finite temperatures and density matrix renormalization group (DMRG) ground state calculations. By varying temperature, doping, and model parameters, we characterize the extent of stripes throughout the phase diagram of the Hubbard model. Our results show that including the often neglected next-nearest-neighbor hopping leads to the absence of spin incommensurability upon electron-doping and nearly half-filled stripes upon hole-doping. The similarities of these findings to experimental results on both electron and hole-doped cuprate families support a unified description across a large portion of the cuprate phase diagram.
Bezbaruah, Achintya N; Zhang, Tian C
2009-01-01
It has long been established that plants play major roles in a treatment wetland. However, the role of plants has not been incorporated into wetland models. This study tries to incorporate wetland plants into a biochemical oxygen demand (BOD) model so that the relative contributions of the aerobic and anaerobic processes to meeting BOD can be quantitatively determined. The classical dissolved oxygen (DO) deficit model has been modified to simulate the DO curve for a field subsurface flow constructed wetland (SFCW) treating municipal wastewater. Sensitivities of model parameters have been analyzed. Based on the model, it is predicted that in the SFCW under study about 64% of the BOD is degraded through aerobic routes and 36% anaerobically. While not exhaustive, this preliminary work should serve as a pointer for further research in wetland model development and to determine the values of some of the parameters used in the modified DO deficit and associated BOD model. It should be noted that the nitrogen cycle and effects of temperature have not been addressed in these models for simplicity of model formulation. This paper should be read with this caveat in mind.
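For reference, the classical DO deficit model the authors start from is the Streeter-Phelps equation; a minimal sketch of that unmodified form follows, with hypothetical parameter values (the paper's wetland-specific modifications are not reproduced here).

```python
import math

def do_deficit(t, L0, D0, kd, kr):
    """Streeter-Phelps DO deficit (mg/L) at time t (days) for initial BOD L0,
    initial deficit D0, deoxygenation rate kd, and reaeration rate kr (1/day)."""
    return (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
        + D0 * math.exp(-kr * t)

# The deficit rises to a maximum (the oxygen "sag") and then recovers.
deficits = [do_deficit(t, L0=20.0, D0=1.0, kd=0.3, kr=0.6) for t in range(8)]
print([round(d, 2) for d in deficits])
```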
Gómez-Puig, Marta; Singh, Manish Kumar; Sosvilla Rivero, Simón, 1961-
2018-01-01
This paper highlights the role of multilateral creditors (i.e., the ECB, IMF, ESM etc.) and their preferred creditor status in explaining the sovereign default risk of peripheral euro area (EA) countries. Incorporating lessons from sovereign debt crises in general, and from the Greek debt restructuring in particular, we define the priority structure of sovereigns' creditors that is most relevant for peripheral EA countries in severe crisis episodes. This new priority structure of creditors, t...
A molecular-thermodynamic model for polyelectrolyte solutions
Energy Technology Data Exchange (ETDEWEB)
Jiang, J.; Liu, H.; Hu, Y. [Thermodynamics Research Laboratory, East China University of Science and Technology, Shanghai 200237 (China); Prausnitz, J.M. [Department of Chemical Engineering, University of California, Berkeley, and Chemical Sciences Division, Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720 (United States)
1998-01-01
Polyelectrolyte solutions are modeled as freely tangent-jointed, charged hard-sphere chains and corresponding counterions in a continuum medium with permittivity ε. By adopting the sticky-point model, the Helmholtz function for polyelectrolyte solutions is derived through the r-particle cavity-correlation function (CCF) for chains of sticky, charged hard spheres. The r-CCF is approximated by a product of effective nearest-neighbor two-particle CCFs; these are determined from the hypernetted-chain and mean-spherical closures (HNC/MSA) inside and outside the hard core, respectively, for the integral equation theory for electrolytes. The colligative properties are given as explicit functions of a scaling parameter Γ that can be estimated by a simple iteration procedure. Osmotic pressures, osmotic coefficients, and activity coefficients are calculated for model solutions with various chain lengths. They are in good agreement with molecular simulation and experimental results. © 1998 American Institute of Physics.
Fuzzy Temporal Logic Based Railway Passenger Flow Forecast Model
Dou, Fei; Jia, Limin; Wang, Li; Xu, Jie; Huang, Yakun
2014-01-01
Passenger flow forecasting is of essential importance to the organization of railway transportation and is one of the most important bases for decision-making on transportation patterns and train operation planning. Passenger flow on high-speed railways features quasi-periodic variations over short times and complex nonlinear fluctuations owing to many influencing factors. In this study, a fuzzy temporal logic based passenger flow forecast model (FTLPFFM) is presented, based on fuzzy logic relationship recognition techniques, that predicts short-term passenger flow for high-speed railways with significantly improved forecast accuracy. An applied case using real-world data illustrates the precision and accuracy of FTLPFFM. For this applied case, the proposed model performs better than the k-nearest neighbor (KNN) and autoregressive integrated moving average (ARIMA) models. PMID:25431586
A MATLAB GUI to study Ising model phase transition
Thornton, Curtislee; Datta, Trinanjan
We have created a MATLAB based graphical user interface (GUI) that simulates the single spin flip Metropolis Monte Carlo algorithm. The GUI has the capability to study temperature and external magnetic field dependence of magnetization, susceptibility, and equilibration behavior of the nearest-neighbor square lattice Ising model. Since the Ising model is a canonical system to study phase transition, the GUI can be used both for teaching and research purposes. The presence of a Monte Carlo code in a GUI format allows easy visualization of the simulation in real time and provides an attractive way to teach the concept of thermal phase transition and critical phenomena. We will also discuss the GUI implementation to study phase transition in a classical spin ice model on the pyrochlore lattice.
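A minimal sketch of the single-spin-flip Metropolis algorithm the GUI visualizes, for the nearest-neighbor square-lattice Ising model. Lattice size, temperature, and sweep count are illustrative choices, not the GUI's defaults.

```python
import math
import random

random.seed(0)

def metropolis_sweep(spins, L, T, J=1.0, h=0.0):
    """One Monte Carlo sweep: L*L attempted single-spin flips."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Sum the four nearest neighbors with periodic boundary conditions.
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * (J * nn + h)   # energy cost of flipping
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1

L, T = 16, 1.0            # T well below Tc ~ 2.27, so order should persist
spins = [[1] * L for _ in range(L)]
for _ in range(200):
    metropolis_sweep(spins, L, T)
m = abs(sum(sum(row) for row in spins)) / (L * L)
print(round(m, 2))        # magnetization per spin stays near 1 at low T
```

Running the same loop at T above the critical temperature (about 2.27 in these units) would drive the magnetization toward zero, which is the phase-transition behavior the GUI is designed to display.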
International Nuclear Information System (INIS)
Jennissen, J.J.
1981-01-01
The mathematical/empirical model developed in this paper helps to determine the incorporated radioactivity from the measured photometric values and the exposure time T. Possible errors of autoradiography due to the exposure time or the preparation are taken into consideration by the empirical model. It is shown that the error of approximately 400% appearing in the sole comparison of the measured photometric values can be corrected. The model is valid for neuroanatomy, as optic nerves, i.e. neuroanatomical material, were used to develop it. Its application to other sections of the central nervous system also seems justified, owing to the reduction of errors thus achieved. (orig.)
Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward
2017-08-01
Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments but is more challenging in a mountainous high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven kinds of air pollutants (gaseous air pollutants CO, NO2, NOx, O3, SO2 and particulate air pollutants PM2.5, PM10) with reference to three different time periods (summertime, wintertime and annual average of 5-year long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. Under the mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometrical analysis. As a result, 269 independent variables were examined to develop the LUR models by using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross validation has been performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance of the annual averaged NO2 concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
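The greedy, stepwise spirit of LUR model building can be sketched as follows. This is a matching-pursuit simplification of stepwise MLR, not the study's method: at each step the candidate predictor most correlated with the current residual is added until no candidate clears a threshold. Predictor names are illustrative, not the study's 269 actual variables.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def forward_select(X, y, names, min_r=0.15):
    """Greedy forward selection by residual correlation (illustrative)."""
    resid, chosen = list(y), []
    for _ in range(len(X)):
        scores = []
        for j, x in enumerate(X):
            if names[j] in chosen:
                continue
            r = dot(x, resid) / (dot(x, x) ** 0.5 * dot(resid, resid) ** 0.5)
            scores.append((abs(r), j))
        best_r, j = max(scores)
        if best_r < min_r:
            break
        beta = dot(X[j], resid) / dot(X[j], X[j])          # simple regression slope
        resid = [e - beta * v for e, v in zip(resid, X[j])]  # update residual
        chosen.append(names[j])
    return chosen

rng = random.Random(0)
n = 300
traffic = [rng.gauss(0, 1) for _ in range(n)]
wind = [rng.gauss(0, 1) for _ in range(n)]
elevation = [rng.gauss(0, 1) for _ in range(n)]
# Synthetic pollutant driven by traffic and wind only.
no2 = [3 * t - 1.5 * w + rng.gauss(0, 0.2) for t, w in zip(traffic, wind)]

X, names = [traffic, wind, elevation], ["traffic", "wind", "elevation"]
sel = forward_select(X, no2, names)
print(sel)
```

The strongest driver ("traffic") enters first, then "wind"; a real stepwise MLR would additionally refit all coefficients jointly at each step and test significance.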
Directory of Open Access Journals (Sweden)
Roman Bauer
Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and the increase of cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.
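The core hypothesis above (tumors arise when some stem cell randomly accumulates enough oncogenic mutations) can be sketched as a toy Monte Carlo simulation. All parameter values here are hypothetical and chosen only to make the late-life rise in incidence visible; the paper's empirically-determined estimates are not reproduced.

```python
import random

rng = random.Random(1)
N_PEOPLE, N_CELLS = 300, 50
MU = 0.01        # per-cell probability of one oncogenic mutation per year
K = 4            # mutations required for tumorigenesis

ages_at_onset = []
for _ in range(N_PEOPLE):
    cells = [0] * N_CELLS        # oncogenic mutation count per stem cell
    onset = None
    for age in range(100):
        for i in range(N_CELLS):
            if rng.random() < MU:
                cells[i] += 1
                if cells[i] >= K:
                    onset = age
                    break
        if onset is not None:
            break
    if onset is not None:
        ages_at_onset.append(onset)

median_age = sorted(ages_at_onset)[len(ages_at_onset) // 2]
print(len(ages_at_onset), "synthetic cases; median onset age", median_age)
```

Because the waiting time for a K-th random hit grows like a Gamma distribution, onsets cluster late in life, which is the qualitative shape the paper's calibrated model matches quantitatively.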
Science and Technology Text Mining Basic Concepts
National Research Council Canada - National Science Library
Losiewicz, Paul
2003-01-01
...). It then presents some of the most widely used data and text mining techniques, including clustering and classification methods, such as nearest neighbor, relational learning models, and genetic...
Trifonova, N; Duplisea, D; Kenny, A; Maxwell, D; Tucker, A
2014-01-01
In this study, dynamic Bayesian networks have been applied to predict future biomass of geographically different but functionally equivalent fish species. A latent variable is incorporated to model functional collapse, where the underlying food web structure dramatically changes irrevocably (known as a regime shift). We examined if the use of a hidden variable can reflect changes in the trophic dynamics of the system and also whether the inclusion of recognised statistical metrics would impro...
CSIR Research Space (South Africa)
Blanchard, R
2012-10-01
Full Text Available % of biodiversity importance. Anticipating potential biodiversity conflicts for future biofuel crops in South Africa: Incorporating land cover information with Species Distribution Models. R BLANCHARD1, DR P O'FARRELL1 AND PROF. D RICHARDSON2. 1CSIR Natural Resources and the Environment, PO Box 320, Stellenbosch, 7599, South Africa. 2Centre for Invasion Biology, Department of Botany and Zoology, Stellenbosch University, Private Bag X1, Matieland 7602, South Africa. Email: rblanchard@csir.co.za www...
Hoyos, David; Mariel, Petr; Hess, Stephane
2015-02-01
Environmental economists are increasingly interested in better understanding how people cognitively organise their beliefs and attitudes towards environmental change in order to identify key motives and barriers that stimulate or prevent action. In this paper, we explore the utility of a commonly used psychometric scale, the awareness of consequences (AC) scale, in order to better understand stated choices. The main contribution of the paper is that it provides a novel approach to incorporate attitudinal information into discrete choice models for environmental valuation: firstly, environmental attitudes are incorporated using a reinterpretation of the classical AC scale recently proposed by Ryan and Spash (2012); and, secondly, attitudinal data is incorporated as latent variables under a hybrid choice modelling framework. This novel approach is applied to data from a survey conducted in the Basque Country (Spain) in 2008 aimed at valuing land-use policies in a Natura 2000 Network site. The results are relevant to policy-making because choice models that are able to accommodate underlying environmental attitudes may help in designing more effective environmental policies. Copyright © 2014 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Sullivan, T.J.
1992-09-01
A project was initiated in March 1992 to (1) incorporate a rigorous organic acid representation, based on empirical data and geochemical considerations, into the MAGIC model of acidification response, and (2) test the revised model using three sets of independent data. After six months of performance, the project is on schedule and the majority of the tasks outlined for Year 1 have been successfully completed. Major accomplishments to date include development of the organic acid modeling approach, using data from the Adirondack Lakes Survey Corporation (ALSC), and coupling the organic acid model with MAGIC for chemical hindcast comparisons. The incorporation of an organic acid representation into MAGIC can account for much of the discrepancy earlier observed between MAGIC hindcasts and paleolimnological reconstructions of preindustrial pH and alkalinity for 33 statistically-selected Adirondack lakes. Additional work is ongoing for model calibration and testing with data from two whole-catchment artificial acidification projects. Results obtained thus far are being prepared as manuscripts for submission to the peer-reviewed scientific literature.
Topological order in an exactly solvable 3D spin model
International Nuclear Information System (INIS)
Bravyi, Sergey; Leemhuis, Bernhard; Terhal, Barbara M.
2011-01-01
Research highlights: ▶ We study an exactly solvable spin model with six-qubit nearest-neighbor interactions on a 3D face-centered cubic lattice. ▶ The ground space of the model exhibits topological quantum order. ▶ Elementary excitations can be geometrically described as the corners of rectangular-shaped membranes. ▶ The ground space can encode 4g qubits where g is the greatest common divisor of the lattice dimensions. ▶ Logical operators acting on the encoded qubits are described in terms of closed strings and closed membranes. - Abstract: We study a 3D generalization of the toric code model introduced recently by Chamon. This is an exactly solvable spin model with six-qubit nearest-neighbor interactions on an FCC lattice whose ground space exhibits topological quantum order. The elementary excitations of this model, which we call monopoles, can be geometrically described as the corners of rectangular-shaped membranes. We prove that the creation of an isolated monopole separated from other monopoles by a distance R requires an operator acting on Ω(R²) qubits. Composite particles that consist of two monopoles (dipoles) and four monopoles (quadrupoles) can be described as end-points of strings. The peculiar feature of the model is that dipole-type strings are rigid, that is, such strings must be aligned with face-diagonals of the lattice. For periodic boundary conditions the ground space can encode 4g qubits where g is the greatest common divisor of the lattice dimensions. We describe a complete set of logical operators acting on the encoded qubits in terms of closed strings and closed membranes.
The transverse spin-1 Ising model with random interactions
Energy Technology Data Exchange (ETDEWEB)
Bouziane, Touria [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco)], E-mail: touria582004@yahoo.fr; Saber, Mohammed [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco); Dpto. Fisica Aplicada I, EUPDS (EUPDS), Plaza Europa, 1, San Sebastian 20018 (Spain)
2009-01-15
The phase diagrams of the transverse spin-1 Ising model with random interactions are investigated using a new technique in the effective field theory that employs a probability distribution within the framework of the single-site cluster theory based on the use of exact Ising spin identities. A model is adopted in which the nearest-neighbor exchange couplings are independent random variables distributed according to the law P(J_ij) = pδ(J_ij - J) + (1 - p)δ(J_ij - αJ). General formulae, applicable to lattices with coordination number N, are given. Numerical results are presented for a simple cubic lattice. The possible reentrant phenomenon displayed by the system, due to the competitive effects between exchange interactions, occurs for the appropriate range of the parameter α.
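Sampling couplings from the bimodal law P(J_ij) = pδ(J_ij - J) + (1 - p)δ(J_ij - αJ) simply means each bond takes the value J with probability p and αJ otherwise. A sketch follows; the numerical values of J, α, and p are illustrative, not the paper's.

```python
import random

def draw_couplings(n_bonds, J, alpha, p, rng=None):
    """Draw nearest-neighbor couplings from the bimodal distribution
    P(J_ij) = p*delta(J_ij - J) + (1 - p)*delta(J_ij - alpha*J)."""
    rng = rng or random.Random(42)
    return [J if rng.random() < p else alpha * J for _ in range(n_bonds)]

Js = draw_couplings(10000, J=1.0, alpha=-0.5, p=0.7)
frac_J = Js.count(1.0) / len(Js)
print(round(frac_J, 2))   # the fraction of J-bonds is close to p
```

A negative α, as in this example, mixes ferromagnetic and antiferromagnetic bonds; it is that competition which can produce the reentrant behavior the abstract mentions.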
Directory of Open Access Journals (Sweden)
Conor P. McGowan
2017-10-01
Full Text Available Assessment of a species' status is a key part of management decision making for endangered and threatened species under the U.S. Endangered Species Act. Predicting the future state of the species is an essential part of species status assessment, and projection models can play an important role in developing predictions. We built a stochastic simulation model that incorporated parametric and environmental uncertainty to predict the probable future status of the Sonoran desert tortoise in the southwestern United States and north central Mexico. The Sonoran desert tortoise was a candidate species for listing under the Endangered Species Act, and decision makers wanted to use model predictions in their decision making process. The model accounted for future habitat loss and possible effects of climate change induced droughts to predict future population growth rates, abundances, and quasi-extinction probabilities. Our model predicts that the population will likely decline over the next few decades, but there is very low probability of quasi-extinction less than 75 years into the future. Increases in drought frequency and intensity may increase extinction risk for the species. Our model helped decision makers predict and characterize uncertainty about the future status of the species in their listing decision. We incorporated complex ecological processes (e.g., climate change effects on tortoises) in transparent and explicit ways tailored to support decision making processes related to endangered species.
Goodsman, Devin W; Aukema, Brian H; McDowell, Nate G; Middleton, Richard S; Xu, Chonggang
2018-01-01
Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
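The rate summation concept the derivation rests on can be sketched in a few lines: development completes when daily temperature-dependent development rates accumulate to 1. The linear rate function and temperatures below are hypothetical, not the mountain pine beetle model's.

```python
def dev_rate(temp_c):
    """Development rate per day above a 10 °C threshold (illustrative)."""
    return max(0.0, (temp_c - 10.0) / 160.0)

def days_to_emergence(daily_temps):
    """First day on which cumulative development reaches 1, else None."""
    total = 0.0
    for day, t in enumerate(daily_temps, start=1):
        total += dev_rate(t)
        if total >= 1.0:
            return day
    return None

print(days_to_emergence([20.0] * 400))  # constant 20 °C: rate 1/16 per day → 16
```

The paper's integral projection models generalize this by summing rates over a distribution of individual phenotypes rather than a single deterministic rate curve.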
He, Yujie; Yang, Jinyan; Zhuang, Qianlai; Harden, Jennifer W.; McGuire, A. David; Liu, Yaling; Wang, Gangsheng; Gu, Lianhong
2015-01-01
Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have seen much growth recently for quantifying this role, yet dormancy as a common strategy used by microorganisms has not usually been represented and tested in these models against field observations. Here we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced better match with field-observed heterotrophic soil CO2 efflux (RH) than the no dormancy model. Our regional modeling results further indicated that models with dormancy were able to produce more realistic magnitude of microbial biomass (dormancy, our modeling results showed that soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = −0.43 to −0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.
Simplex network modeling for press-molded ceramic bodies incorporated with granite waste
International Nuclear Information System (INIS)
Pedroti, L.G.; Vieira, C.M.F.; Alexandre, J.; Monteiro, S.N.; Xavier, G.C.
2012-01-01
Extrusion of a clay body is the most commonly applied process in the ceramic industry for manufacturing structural blocks. Nowadays, the assembly of such blocks through a fitting system that facilitates the final mounting is gaining attention owing to the saving in material and the reduction in the cost of building construction. In this work, the ideal composition of clay bodies incorporated with granite powder waste was investigated for the production of press-molded ceramic blocks. An experimental design was applied to determine the optimum properties and microstructures, involving not only the precursor compositions but also the pressing and temperature conditions. Press loads from 15 tons and temperatures from 850 to 1050°C were considered. The results indicated mechanical strength varying from 2 MPa to 20 MPa and water absorption varying from 19% to 30%. (author)
2016-03-31
Table of Contents: 1. INTRODUCTION; 2.1 Traditional Photospheric Magnetic Flux Synoptic Maps; 2.2 Photospheric Flux Transport Models; 3. WH model evolved synoptic map (latitude vs. longitude)
Incorporating Pass-Phrase Dependent Background Models for Text-Dependent Speaker Verification
Sarkar, A. K.; Tan, Zheng-Hua
2016-01-01
In this paper, we propose pass-phrase dependent background models (PBMs) for text-dependent (TD) speaker verification (SV) to integrate the pass-phrase identification process into the conventional TD-SV system, where a PBM is derived from a text-independent background model through adaptation using the utterances of a particular pass-phrase. During training, pass-phrase specific target speaker models are derived from the particular PBM using the training data for the respective target model. ...
A Mass Balance Model for Designing Green Roof Systems that Incorporate a Cistern for Re-Use
Directory of Open Access Journals (Sweden)
Manoj Chopra
2012-11-01
Green roofs, which have been used for several decades in many parts of the world, offer a unique and sustainable approach to stormwater management. Within this paper, evidence is presented on water retention for an irrigated green roof system; the presented green roof design results in a water retention volume on site. A first-principles mass balance computer model is introduced to assist with the design of green roof systems that incorporate a cistern to capture and reuse runoff waters for irrigation of the green roof. The model is used to estimate yearly stormwater retention volume for different cistern storage volumes. Additionally, the Blaney-Criddle equation is evaluated for estimating monthly evapotranspiration rates for irrigated systems and incorporated into the model, so that evapotranspiration rates can be calculated for regions lacking historical evapotranspiration records, allowing the model to be used anywhere historical weather data are available. The model is developed and discussed within this paper and compared to experimental results.
Using stochastic models to incorporate spatial and temporal variability [Exercise 14]
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...
Brunel, T.P.A.
2015-01-01
This report presents a framework to model density dependent growth for the North East Atlantic mackerel. The model used is the classical von Bertalanffy equation, but modified so that growth is reduced when stock size increases. The model developed was able to reproduce quite closely the trends in
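A minimal sketch of the idea (not the report's fitted model): a von Bertalanffy growth increment in which the asymptotic length shrinks linearly with stock biomass. The functional form and all parameter values are hypothetical.

```python
# Density-dependent von Bertalanffy growth increment (illustrative only):
# the asymptotic length L_inf is reduced as stock biomass rises, so annual
# growth slows at high stock size. All parameters are hypothetical.

def annual_growth(length_cm, biomass, k=0.3, l_inf0=40.0, gamma=0.05, b_ref=1.0):
    """One-year length update: L' = L + k * (L_inf(biomass) - L)."""
    l_inf = l_inf0 * (1.0 - gamma * biomass / b_ref)  # density-dependent asymptote
    return length_cm + k * (l_inf - length_cm)

low = annual_growth(20.0, biomass=0.5)   # low stock size
high = annual_growth(20.0, biomass=3.0)  # high stock size
print(low, high)  # growth is slower at high stock size
```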
Ahmad, Jamal; Javed, Faisal; Hayat, Maqsood
2017-05-01
The Golgi apparatus is one of the core components of a cell, present in both plants and animals, and is involved in protein synthesis: it receives and processes macromolecules and traffics newly processed proteins to their intended destinations. Dysfunction in Golgi proteins is expected to cause many neurodegenerative and inherited diseases that may be treated well if they are detected effectively and in time. Golgi proteins are categorized into two classes: cis-Golgi and trans-Golgi. Identification of Golgi proteins via direct methods is very hard owing to the limited number of recognized structures, so researchers have turned their attention from structures to sequences. Owing to technological advancement, a huge number of sequences has been reported in the databases, and recognizing such large amounts of unprocessed data using conventional methods is very difficult. Therefore, intelligence-based computational models have been applied; these obtained reasonable results, but room for improvement remains. In this regard, an intelligent automatic recognition model is developed in order to enhance the true classification rate of sub-Golgi proteins. In this approach, discrete and evolutionary feature extraction methods are applied to the benchmark Golgi protein datasets to extract salient, profound and variant numerical descriptors. After that, an oversampling technique, the Synthetic Minority Oversampling Technique (SMOTE), is employed to balance the data. Hybrid spaces are also generated from combinations of these feature spaces. Further, the Fisher feature selection method is utilized to remove noisy and redundant features from the feature vector. Finally, the k-nearest neighbor algorithm is used as the learning hypothesis. Three distinct cross-validation tests are used to examine the stability and efficiency of the proposed model. The predicted outcomes of the proposed model are better
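To illustrate only the final learning step named above, here is a minimal pure-Python k-nearest-neighbor classifier; feature extraction, SMOTE balancing and Fisher selection are omitted, and the toy descriptors and labels are invented.

```python
# Minimal k-NN learner (illustrative only, not the paper's pipeline):
# classify a feature vector by majority vote among its k nearest
# training points under Euclidean distance.
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote over the k training points closest to x."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy two-class example with hypothetical numeric descriptors
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = ["cis-Golgi", "cis-Golgi", "trans-Golgi", "trans-Golgi"]
print(knn_predict(X, y, (0.15, 0.15)))  # → cis-Golgi
```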
Interaction of a single mode field cavity with the 1D XY model: Energy spectrum
International Nuclear Information System (INIS)
Tonchev, H; Donkov, A A; Chamati, H
2016-01-01
In this work we use the Jaynes-Cummings model, fundamental in quantum optics, to study the response of a spin-1/2 chain to a single mode of laser light falling on one of the spins, a focused interaction model between the light and the spin chain. For the spin-spin interaction along the chain we use the XY model. We report exact analytical results, obtained with the help of a computer algebra system, for the energy spectrum of this model for chains of up to 4 spins with nearest-neighbor interactions, for either open or cyclic chain configurations. Varying the sign and magnitude of the spin exchange coupling relative to the light-spin interaction, we have investigated both ferromagnetic and antiferromagnetic spin chains. (paper)
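For concreteness, a Hamiltonian of the general form described (a single cavity mode coupled to the first spin, XY coupling along the chain) can be written as below; the notation (couplings g and J, cavity frequency ω, spin splitting ω₀) is ours, not necessarily the paper's:

```latex
H = \hbar\omega\, a^{\dagger}a
  + \frac{\hbar\omega_0}{2}\sum_{j=1}^{N}\sigma_j^{z}
  + g\left(a\,\sigma_1^{+} + a^{\dagger}\sigma_1^{-}\right)
  + J\sum_{j=1}^{N-1}\left(\sigma_j^{x}\sigma_{j+1}^{x} + \sigma_j^{y}\sigma_{j+1}^{y}\right)
```

For the cyclic configuration the XY sum runs to j = N with site N+1 identified with site 1; the Jaynes-Cummings term acts only on spin 1, reflecting the focused light-spin interaction.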
Directory of Open Access Journals (Sweden)
Natalya Pya
2016-02-01
Background: Measurements of tree heights and diameters are essential in forest assessment and modelling. Tree heights are used for estimating timber volume, site index and other important variables related to forest growth and yield, succession and carbon budget models. However, the diameter at breast height (dbh) can be obtained more accurately, and at lower cost, than total tree height. Hence, generalized height-diameter (h-d) models that predict tree height from dbh, age and other covariates are needed. For a more flexible but biologically plausible estimation of covariate effects we use shape-constrained generalized additive models as an extension of existing h-d model approaches. We use causal site parameters such as index of aridity to enhance the generality and causality of the models and to enable predictions under projected changing climatic conditions. Methods: We develop unconstrained generalized additive models (GAM) and shape-constrained generalized additive models (SCAM) for investigating the possible effects of tree-specific parameters such as tree age and relative diameter at breast height, and site-specific parameters such as index of aridity and sum of daily mean temperatures during the vegetation period, on the h-d relationship of forests in Lower Saxony, Germany. Results: Some of the derived effects, e.g. the effects of age, index of aridity and temperature sum, have significantly non-linear patterns. The need for using SCAM results from the fact that some of the model effects show partially implausible patterns, especially at the boundaries of data ranges. The derived model predicts monotonically increasing tree height with increasing age and temperature sum and with decreasing aridity and social rank of a tree within a stand. The definition of constraints leads to only marginal or minor decline in model statistics such as AIC. An observed structured spatial trend in tree height is modelled via a 2-dimensional surface
Bestley, Sophie; Jonsen, Ian D; Hindell, Mark A; Guinet, Christophe; Charrassin, Jean-Benoît
2013-01-07
A fundamental goal in animal ecology is to quantify how environmental (and other) factors influence individual movement, as this is key to understanding responsiveness of populations to future change. However, quantitative interpretation of individual-based telemetry data is hampered by the complexity of, and error within, these multi-dimensional data. Here, we present an integrative hierarchical Bayesian state-space modelling approach where, for the first time, the mechanistic process model for the movement state of animals directly incorporates both environmental and other behavioural information, and observation and process model parameters are estimated within a single model. When applied to a migratory marine predator, the southern elephant seal (Mirounga leonina), we find the switch from directed to resident movement state was associated with colder water temperatures, relatively short dive bottom time and rapid descent rates. The approach presented here can have widespread utility for quantifying movement-behaviour (diving or other)-environment relationships across species and systems.
Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa
2013-05-10
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
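The flavor of the optimization can be sketched with a brute-force stand-in for the integer program: pick the subset of candidate phase 3 trials that maximizes total expected net present value (eNPV) under a budget. All trial names, costs, and eNPV figures below are hypothetical.

```python
# Exhaustive-search stand-in for the paper's integer program (illustrative
# only): choose trials maximizing total eNPV subject to a budget constraint.
from itertools import combinations

def best_portfolio(trials, budget):
    """trials: list of (name, cost, enpv) tuples. Returns (names, total eNPV).
    An IP solver scales far better than this on real portfolios."""
    best, best_value = (), 0.0
    for r in range(len(trials) + 1):
        for subset in combinations(trials, r):
            cost = sum(t[1] for t in subset)
            value = sum(t[2] for t in subset)
            if cost <= budget and value > best_value:
                best, best_value = subset, value
    return [t[0] for t in best], best_value

# Hypothetical candidates: (name, cost in $M, expected NPV in $M)
trials = [("A", 40, 90.0), ("B", 30, 60.0), ("C", 50, 100.0), ("D", 20, 45.0)]
print(best_portfolio(trials, 100))  # → (['B', 'C', 'D'], 205.0)
```

Re-running the search with a revised budget or updated eNPV inputs mirrors the 'what-if' re-optimization the authors describe.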
Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun
2018-03-15
Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive shrinkage amount on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method in several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.
Directory of Open Access Journals (Sweden)
Rogier Westerhoff
2018-01-01
A nationwide model of groundwater recharge for New Zealand (NGRM), as described in this paper, demonstrates the benefits of satellite data and global models for improving the spatial definition of recharge and the estimation of recharge uncertainty. NGRM was inspired by the global-scale WaterGAP model but with the key development of rainfall recharge calculation on scales relevant to national- and catchment-scale studies (i.e., a 1 km × 1 km cell size and a monthly timestep over the period 2000–2014), using satellite data (i.e., MODIS-derived evapotranspiration, AET, and vegetation) in combination with national datasets of rainfall, elevation, soil and geology. The resulting nationwide model calculates groundwater recharge estimates, including their uncertainty, consistently across the country, which makes the model unique compared to all other New Zealand estimates targeted towards groundwater recharge. At the national scale, NGRM estimated an average recharge of 2500 m³/s, or 298 mm/year, with a model uncertainty of 17%. Those results were similar to the WaterGAP model, but the improved input data resulted in better spatial characteristics of the recharge estimates. Multiple uncertainty analyses led to these main conclusions: the NGRM model can give valuable initial estimates in data-sparse areas, since it compared well to most ground-observed lysimeter data and local recharge models; and the nationwide input data of rainfall and geology caused the largest uncertainty in the model equation, which revealed that the satellite data could improve spatial characteristics without significantly increasing the uncertainty. Clearly the increasing volume and availability of large-scale satellite data are creating more opportunities for the application of national-scale models at the catchment, and smaller, scales. This should result in improved utility of these models, including provision of initial estimates in data-sparse areas. Topics for future
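A drastically simplified per-cell water balance conveys the kind of calculation involved; the actual NGRM recharge equation and its coefficients differ, and the runoff coefficient used here is an assumption.

```python
# Simplified monthly water balance for one grid cell (illustrative only):
# recharge = rainfall - actual evapotranspiration - quick runoff, floored
# at zero. The runoff coefficient is a hypothetical parameter.

def monthly_recharge(rainfall_mm, aet_mm, runoff_coeff=0.2):
    """Return recharge in mm for one cell-month."""
    runoff = runoff_coeff * rainfall_mm
    return max(0.0, rainfall_mm - aet_mm - runoff)

print(monthly_recharge(120.0, 70.0))  # 120 - 70 - 24 = 26.0 mm
print(monthly_recharge(30.0, 40.0))   # deficit month → 0.0 mm
```

In a gridded model this function would be evaluated for every 1 km × 1 km cell and month, with rainfall from national datasets and AET from MODIS.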
International Nuclear Information System (INIS)
Ohi, Takao; Miyahara, Kaname; Naito, Morimasa
1996-01-01
The migration of radionuclides through bentonite was analyzed with alternative models considering the precipitation caused by decay-chain ingrowth. In the realistic model, the temporal and spatial isotopic ratio in bentonite was taken into account when determining the shared solubility for each radionuclide. The release rate of radionuclides from the outer surface of the bentonite to the surrounding rock is generally lower in such a realistic analysis considering precipitation in bentonite than that calculated by the model neglecting precipitation. This result shows that the model not considering such effects is mostly conservative for the safety assessment
Directory of Open Access Journals (Sweden)
Dirk Temme
2008-12-01
Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models in enhancing the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism, as well as attitudes such as a desire for flexibility, impact travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.
National Research Council Canada - National Science Library
Skormin, Victor A
2005-01-01
... It is proposed to enhance the system performance through development of an optimal controller extending the conventional state-variable and model reference control laws with dynamic programming...
Brunner, Martin; Lüdtke, Oliver; Trautwein, Ulrich
2008-01-01
The internal/external frame of reference model (I/E model; Marsh, 1986) is a highly influential model of self-concept formation, which predicts that domain-specific abilities have positive effects on academic self-concepts in the corresponding domain and negative effects across domains. Investigations of the I/E model do not typically incorporate general cognitive ability or general academic self-concept. This article investigates alternative measurement models for domain-specific and domain-general cognitive abilities and academic self-concepts within an extended I/E model framework using representative data from 25,301 9th-grade students. Empirical support was found for the external validity of a new measurement model for academic self-concepts with respect to key student characteristics (gender, school satisfaction, educational aspirations, domain-specific interests, grades). Moreover, the basic predictions of the I/E model were confirmed, and the new extension of the traditional I/E model permitted meaningful relations to be drawn between domain-general cognitive ability and domain-general academic self-concept as well as between the domain-specific elements of the model.
Anti-ferromagnetic Heisenberg model on bilayer honeycomb
International Nuclear Information System (INIS)
Shoja, M.; Shahbazi, F.
2012-01-01
A recent experiment on the spin-3/2 bilayer honeycomb lattice antiferromagnet Bi3Mn4O12(NO3) shows spin-liquid behavior down to very low temperatures. This behavior can be ascribed to the frustration effect due to competition between first- and second-nearest-neighbor antiferromagnetic interactions. Motivated by the experiment, we study the J1-J2 antiferromagnetic Heisenberg model using mean-field theory. This calculation shows a highly degenerate ground state. We also calculate the effect of second-nearest neighbors along the z direction and show that these neighbors also increase frustration in these systems. Because of the degenerate ground state, the spins cannot find any ground state to freeze into at low temperatures. This behavior indicates a novel spin-liquid state down to very low temperatures.
Anisotropic Heisenberg model for a semi-infinite crystal
International Nuclear Information System (INIS)
Queiroz, C.A.
1985-11-01
A semi-infinite Heisenberg model with exchange interactions between nearest and next-nearest neighbors in a simple cubic lattice is studied. The free surface is distinguished from the other layers of magnetic ions by choosing a single-ion uniaxial anisotropy at the surface (Ds) different from the anisotropy in the other layers (D). Using the Green function formalism, the behavior of the magnetization as a function of temperature is obtained for each layer, as well as the spectrum of localized magnons, for several values of the ratio Ds/D; a critical value of this ratio exists for the surface magnetization. Above this critical ratio, a ferromagnetic surface layer is obtained while the other layers are already in the paramagnetic phase. In this situation the critical temperature of the surface becomes larger than the critical temperature of the bulk. (Author) [pt]
Walsh, Daniel P.; Norton, Andrew S.; Storm, Daniel J.; Van Deelen, Timothy R.; Heisy, Dennis M.
2018-01-01
Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause-specific mortality provide an example of implicit use of expert knowledge when causes-of-death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause-specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause-of-death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event-time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause-of-death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause-of-death assignment in modeling of cause-specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause-specific survival data for white-tailed deer, and compared results. We demonstrate that model selection
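The data-augmentation idea can be sketched as follows: each mortality event carries observer-elicited cause probabilities, and the latent true cause is imputed by drawing from them (one step of a sampler). The causes and probabilities below are hypothetical.

```python
# Toy illustration of imputing uncertain causes-of-death from elicited
# observer probabilities (one augmentation draw, not a full Bayesian
# hierarchical sampler). Causes and probabilities are hypothetical.
import random

def augment_causes(events, rng):
    """events: list of dicts mapping cause -> elicited probability.
    Returns one imputed cause per event."""
    draws = []
    for probs in events:
        causes, weights = zip(*probs.items())
        draws.append(rng.choices(causes, weights=weights, k=1)[0])
    return draws

rng = random.Random(42)
events = [{"predation": 0.7, "vehicle": 0.2, "disease": 0.1},
          {"predation": 0.1, "vehicle": 0.1, "disease": 0.8}]
print(augment_causes(events, rng))
```

In the full framework described above, such draws occur inside the MCMC loop, so the posterior for cause-specific mortality rates properly reflects the observers' uncertainty.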
Barrios, J. M.; Verstraeten, W. W.; Farifteh, J.; Maes, P.; Aerts, J. M.; Coppin, P.
2012-04-01
Lyme borreliosis (LB) is the most common tick-borne disease in Europe, and incidence growth has been reported in several European countries during the last decade. LB is caused by the bacterium Borrelia burgdorferi, and the main vector of this pathogen in Europe is the tick Ixodes ricinus. LB incidence and spatial spread are greatly dependent on environmental conditions impacting the habitat, demography and trophic interactions of ticks and the wide range of organisms ticks parasitize. The landscape configuration is also a major determinant of tick habitat conditions and, very importantly, of the fashion and intensity of human interaction with vegetated areas, i.e., human exposure to the pathogen. Hence, spatial notions such as distance and adjacency between urban and vegetated environments are related to human exposure to tick bites and, thus, to risk. This work tested the adequacy of a gravity model setting to model the observed spatio-temporal pattern of LB as a function of the location and size of urban and vegetated areas and the seasonal and annual change in vegetation dynamics as expressed by MODIS NDVI. Opting for this approach implies an analogy with Newton's law of universal gravitation, in which the attraction force between two bodies is directly proportional to the bodies' masses and inversely proportional to the distance between them. Similar implementations have proven useful in fields like trade modeling, health care service planning, and disease mapping, among others. In our implementation, the sizes of human settlements and vegetated systems and the distance separating these landscape elements are considered the 'bodies', and the 'attraction' between them is an indicator of exposure to the pathogen. A novel element of this implementation is the incorporation of NDVI to account for the seasonal and annual variation in risk. The importance of incorporating this indicator of vegetation activity resides in the fact that alterations of the LB incidence pattern observed in the last decade have been ascribed
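An illustrative gravity-model exposure indicator in this spirit can be written with NDVI modulating the 'mass' of the vegetated patch; the exponent, the constant, and all input values are assumptions, not the study's fitted model.

```python
# Gravity-model sketch (hypothetical form and numbers): attraction between
# an urban area and a vegetated patch, proportional to their sizes, scaled
# by vegetation activity (NDVI) and inversely proportional to distance^2.

def exposure(urban_size, veg_size, distance_km, ndvi, k=1.0):
    """Exposure indicator for one urban-vegetation pair."""
    return k * urban_size * veg_size * ndvi / distance_km ** 2

# Same settlement and patch, nearer vs. farther apart
near = exposure(5000, 200, 2.0, 0.7)
far = exposure(5000, 200, 8.0, 0.7)
print(near, far)  # quadrupling the distance cuts exposure 16-fold
```

Summing such pairwise terms over all settlement-patch pairs, with NDVI varying by month, yields a spatio-temporal risk surface of the kind the study evaluates.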
Fernández, Estibalitz; Rodríguez, Gelen; Hostachy, Sarah; Clède, Sylvain; Cócera, Mercedes; Sandt, Christophe; Lambert, François; de la Maza, Alfonso; Policar, Clotilde; López, Olga
2015-07-01
A rhenium tris-carbonyl derivative (fac-[Re(CO)3Cl(2-(1-dodecyl-1H-1,2,3-triazol-4-yl)-pyridine)]) was incorporated into phospholipid assemblies, called bicosomes, and the penetration of this molecule into skin was monitored using Fourier-transform infrared (FTIR) microspectroscopy. To evaluate the capacity of bicosomes to promote the penetration of this derivative, the skin penetration of the Re(CO)3 derivative dissolved in dimethyl sulfoxide (DMSO), a typical enhancer, was also studied. Dynamic light scattering (DLS) results showed an increase in the size of the bicosomes with the incorporation of the Re(CO)3 derivative, and FTIR microspectroscopy showed that the Re(CO)3 derivative incorporated in bicosomes penetrated deeper into the skin than when dissolved in DMSO. When this molecule was applied to the skin using the bicosomes, 60% of the Re(CO)3 derivative was retained in the stratum corneum (SC) and 40% reached the epidermis (Epi). In contrast, the application of this molecule via DMSO resulted in 95% of the Re(CO)3 derivative remaining in the SC and only 5% reaching the Epi. Using a Re(CO)3 derivative with a dodecyl chain as a model molecule, it was possible to determine the distribution of molecules with similar physicochemical characteristics in the skin using bicosomes. This fact makes these nanostructures promising vehicles for the application of lipophilic molecules inside the skin. Copyright © 2015 Elsevier B.V. All rights reserved.
Incorporating additional tree and environmental variables in a lodgepole pine stem profile model
John C. Byrne
1993-01-01
A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...
Incorporating Response Times in Item Response Theory Models of Reading Comprehension Fluency
Su, Shiyang
2017-01-01
With the online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized and the literature of modeling times has been growing during the last few decades. Previous studies have tried to formulate models and theories to…
Becky K. Kerns; Miles A. Hemstrom; David Conklin; Gabriel I. Yospin; Bart Johnson; Dominique Bachelet; Scott Bridgham
2012-01-01
Understanding landscape vegetation dynamics often involves the use of scientifically-based modeling tools that are capable of testing alternative management scenarios given complex ecological, management, and social conditions. State-and-transition simulation model (STSM) frameworks and software such as PATH and VDDT are commonly used tools that simulate how landscapes...
LINKING MICROBES TO CLIMATE: INCORPORATING MICROBIAL ACTIVITY INTO CLIMATE MODELS COLLOQUIUM
Energy Technology Data Exchange (ETDEWEB)
DeLong, Edward; Harwood, Caroline; Reid, Ann
2011-01-01
This report explains the connection between microbes and climate, discusses in general terms what modeling is and how it is applied to climate, and discusses the need for knowledge in microbial physiology, evolution, and ecology to contribute to the determination of fluxes and rates in climate models. It recommends a multi-pronged approach to address the gaps.
Building out a Measurement Model to Incorporate Complexities of Testing in the Language Domain
Wilson, Mark; Moore, Stephen
2011-01-01
This paper provides a summary of a novel and integrated way to think about the item response models (most often used in measurement applications in social science areas such as psychology, education, and especially testing of various kinds) from the viewpoint of the statistical theory of generalized linear and nonlinear mixed models. In addition,…
Lowe, James; Carter, Merilyn; Cooper, Tom
2018-01-01
Mathematical models are conceptual processes that use mathematics to describe, explain, and/or predict the behaviour of complex systems. This article is written for teachers of mathematics in the junior secondary years (including out-of-field teachers of mathematics) who may be unfamiliar with mathematical modelling, to explain the steps involved…
A LabVIEW model incorporating an open-loop arterial impedance and a closed-loop circulatory system.
Cole, R T; Lucas, C L; Cascio, W E; Johnson, T A
2005-11-01
While numerous computer models exist for the circulatory system, many are limited in scope, contain unwanted features or incorporate complex components specific to unique experimental situations. Our purpose was to develop a basic, yet multifaceted, computer model of the left heart and systemic circulation in LabVIEW having universal appeal without sacrificing crucial physiologic features. The program we developed employs Windkessel-type impedance models in several open-loop configurations and a closed-loop model coupling a lumped impedance and ventricular pressure source. The open-loop impedance models demonstrate afterload effects on arbitrary aortic pressure/flow inputs. The closed-loop model catalogs the major circulatory waveforms with changes in afterload, preload, and left heart properties. Our model provides an avenue for expanding the use of the ventricular equations through closed-loop coupling that includes a basic coronary circuit. Tested values used for the afterload components and the effects of afterload parameter changes on various waveforms are consistent with published data. We conclude that this model offers the ability to alter several circulatory factors and digitally catalog the most salient features of the pressure/flow waveforms employing a user-friendly platform. These features make the model a useful instructional tool for students as well as a simple experimental tool for cardiovascular research.
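A two-element Windkessel, the simplest of the impedance elements mentioned above, can be sketched with forward-Euler integration; all parameter values here are illustrative, not the paper's.

```python
# Two-element Windkessel sketch: dP/dt = (Q_in(t) - P/R) / C, driven by a
# half-sinusoid systolic inflow each beat. R (resistance), C (compliance),
# inflow amplitude, and timing are hypothetical values for illustration.
import math

def windkessel(R=1.0, C=1.5, dt=0.001, beats=5, period=1.0):
    """Return the pressure trace (mmHg) over the requested number of beats."""
    p = 80.0  # initial arterial pressure, mmHg
    trace = []
    t = 0.0
    steps = int(round(beats * period / dt))
    for _ in range(steps):
        phase = t % period
        # half-sinusoid inflow during the first 0.3 s of each beat (systole)
        q_in = 300.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0
        p += dt * (q_in - p / R) / C  # forward-Euler pressure update
        trace.append(p)
        t += dt
    return trace

trace = windkessel()
print(round(min(trace), 1), round(max(trace), 1))
```

The trace shows the classic pulse-and-decay waveform: a systolic rise while inflow is active and an exponential diastolic runoff with time constant RC.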
Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model
DEFF Research Database (Denmark)
Olivares Hernandez, Roberto
Based on the stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify the metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae the genome-scale model has been..., translation initiation, translation elongation, translation termination, and mRNA decay. Considering this information on the mechanisms of transcription and translation, we will include these stoichiometric reactions in the genome-scale model for S. cerevisiae to obtain the first
Kucharik, C.
2004-12-01
At the scale of individual fields, crop models have long been used to examine the interactions between soils, vegetation, the atmosphere and human management, using varied levels of numerical sophistication. While previous efforts have contributed significantly towards the advancement of modeling tools, the models themselves are not typically applied across larger continental scales due to a lack of crucial data. Furthermore, crop models are often used to study a single quantity, process, or cycle in isolation, limiting their value in considering the important tradeoffs between competing ecosystem services such as food production, water quality, and sequestered carbon. In response to the need for a more integrated agricultural modeling approach across the continental scale, an updated agricultural version of a dynamic biosphere model (IBIS) now integrates representations of land-surface physics and soil physics, canopy physiology, terrestrial carbon and nitrogen balance, crop phenology, solute transport, and farm management into a single framework. This version of the IBIS model (Agro-IBIS) uses a short 20- to 60-minute timestep to simulate the rapid exchange of energy, carbon, water, and momentum between soils, vegetative canopies, and the atmosphere. The model can be driven either by site-specific meteorological data or by gridded climate datasets. Mechanistic crop models for corn, soybean, and wheat use physiologically based representations of leaf photosynthesis, stomatal conductance, and plant respiration. Model validation has been performed using data collected at a variety of temporal scales and at the following spatial scales: (1) the precision-agriculture scale (5 m), (2) the individual field experiment scale (AmeriFlux), and (3) regional and continental scales using annual USDA county-level yield data and monthly satellite (AVHRR) observations of vegetation characteristics at 0.5 degree resolution. To date, the model has been used with great success to
Alzheimer's disease: analysis of a mathematical model incorporating the role of prions
Helal, Mohamed; Hingant, Erwan; Pujo-Menjouet, Laurent; Webb, Glenn F.
2013-01-01
We introduce a mathematical model of the in vivo progression of Alzheimer's disease with focus on the role of prions in memory impairment. Our model consists of differential equations that describe the dynamic formation of β-amyloid plaques based on the concentrations of Aβ oligomers, PrPC proteins, and the Aβ-x-PrPC complex, which are hypothesized to be responsible for synaptic toxicity. We prove the well-posedness of the model and provide stability results for its unique ...
Incorporating Floating Surface Objects into a Fully Dispersive Surface Wave Model
2016-04-19
Decimation and Interpolation (PDI) Method was added to NHWAVE by Shi et al. (2015), who confirmed that the dynamic pressure can be modeled accurately... cluster Farber located at the University of Delaware. Using 48 cores, it took about 8 h for a simulation of 1000 s. The 10 m water depth was selected to re... decimation and interpolation (PDI) method for a baroclinic non-hydrostatic model. Ocean Mod. 96, 265–279. M.D. Orzech et al. / Ocean Modelling 102 (2016
On Optimizing H. 264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics
Directory of Open Access Journals (Sweden)
Jiang Gangyi
2010-01-01
The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved from two aspects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS), such as contrast sensitivity, multichannel theory, and masking effect. Experiments are conducted, and the results show that the improved algorithm can simultaneously enhance overall subjective visual quality and improve rate control precision effectively.
DEFF Research Database (Denmark)
Sanchez, Benjamin J.; Zhang, Xi-Cheng; Nilsson, Avlant
2017-01-01
, which act as limitations on metabolic fluxes, are not taken into account. Here, we present GECKO, a method that enhances a GEM to account for enzymes as part of reactions, thereby ensuring that each metabolic flux does not exceed its maximum capacity, equal to the product of the enzyme's abundance … and turnover number. We applied GECKO to a Saccharomyces cerevisiae GEM and demonstrated that the new model could correctly describe phenotypes that the previous model could not, particularly under high enzymatic pressure conditions, such as yeast growing on different carbon sources in excess, coping … with stress, or overexpressing a specific pathway. GECKO also allows direct integration of quantitative proteomics data; by doing so, we significantly reduced the flux variability of the model in over 60% of metabolic reactions. Additionally, the model gives insight into the distribution of enzyme usage between …
National Research Council Canada - National Science Library
Parker, Anthony
2002-01-01
A new variant of the nonlinear kinematic hardening model is proposed that accommodates both nonlinear and linear strain hardening during initial tensile loading and reduced elastic modulus during initial load reversal...
Forecasting energy consumption using a grey model improved by incorporating genetic programming
International Nuclear Information System (INIS)
Lee, Yi-Shian; Tong, Lee-Ing
2011-01-01
Energy consumption is an important economic index, which reflects the industrial development of a city or a country. Forecasting energy consumption by conventional statistical methods usually requires the making of assumptions such as the normal distribution of energy consumption data or on a large sample size. However, the data collected on energy consumption are often very few or non-normal. Since a grey forecasting model, based on grey theory, can be constructed for at least four data points or ambiguity data, it can be adopted to forecast energy consumption. In some cases, however, a grey forecasting model may yield large forecasting errors. To minimize such errors, this study develops an improved grey forecasting model, which combines residual modification with genetic programming sign estimation. Finally, a real case of Chinese energy consumption is considered to demonstrate the effectiveness of the proposed forecasting model.
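The grey forecasting step described above can be sketched as a textbook GM(1,1) fit; the residual modification and genetic-programming sign estimation that the paper adds are omitted, so this is only the baseline grey model, not the improved method:

```python
import math

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to a short data series x0 (len >= 4)."""
    n = len(x0)
    # Accumulated generating operation (AGO): x1[k] = sum of x0[0..k]
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # Background values: mean of consecutive AGO points
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    # Least-squares estimate of the development coefficient a and grey
    # input b from the grey equation x0[k] = -a * z1[k] + b, k = 1..n-1
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    m = n - 1
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    return a, b

def gm11_predict(x0, a, b, k):
    """Predict x0 at index k (k >= 1) via the GM(1,1) time-response
    function, followed by the inverse AGO (first difference)."""
    x1_hat_k = (x0[0] - b / a) * math.exp(-a * k) + b / a
    x1_hat_km1 = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
    return x1_hat_k - x1_hat_km1
```

On near-exponential data (the case grey models target), the fit reproduces the series closely even from only five points.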
Bifurcation analysis and phase diagram of a spin-string model with buckled states.
Ruiz-Garcia, M; Bonilla, L L; Prados, A
2017-12-01
We analyze a one-dimensional spin-string model, in which string oscillators are linearly coupled to their two nearest neighbors and to Ising spins representing internal degrees of freedom. String-spin coupling induces a long-range ferromagnetic interaction among spins that competes with a spin-spin antiferromagnetic coupling. As a consequence, the complex phase diagram of the system exhibits different flat, rippled, and buckled states, with first- or second-order transition lines between states. This complexity translates to the two-dimensional version of the model, whose numerical solution has been recently used to explain qualitatively the rippled to buckled transition observed in scanning tunneling microscopy experiments with suspended graphene sheets. Here we describe in detail the phase diagram of the simpler one-dimensional model and phase stability using bifurcation theory. This gives additional insight into the physical mechanisms underlying the different phases and the behavior observed in experiments.
International Nuclear Information System (INIS)
Lopez Carvajal, Jaime; Branch Bedoya, John Willian
2005-01-01
The automatic classification of objects is a very interesting approach in several problem domains. This paper outlines results obtained with different classification models to categorize textural patterns of minerals using real digital images. The data set used was small and noisy. The implemented models were a Bayesian classifier, a neural network (2-5-1), a support vector machine, a decision tree, and a 3-nearest-neighbor classifier. The results after applying cross-validation show that the Bayesian model (84%) had better predictive capacity than the others, mainly due to its robustness to noise. The neural network (68%) and the SVM (67%) gave promising results, as they could be improved by increasing the amount of data used, while the decision tree (55%) and k-NN (54%) did not seem adequate for this problem because of their sensitivity to noise.
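The 3-nearest-neighbor baseline above is easy to sketch; Euclidean distance and majority voting are assumed here, since the abstract does not state the metric used:

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points under Euclidean distance (a plain k-NN classifier)."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train, labels)
    )
    vote = Counter(y for _, y in dists[:k])
    return vote.most_common(1)[0][0]
```

With noisy features, such a voting rule is easily swayed by mislabeled neighbors, which is consistent with the poor k-NN score reported above.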
Magnetic properties of Fe–Al for quenched diluted spin-1 Ising model
Energy Technology Data Exchange (ETDEWEB)
Freitas, A.S. [Departamento de Física, Universidade Federal de Sergipe, 49100-000, São Cristovão, SE (Brazil); Coordenadoria de Física, Instituto Federal de Sergipe, 49400-000 Lagarto, SE (Brazil); Albuquerque, Douglas F. de, E-mail: douglas@ufs.br [Departamento de Física, Universidade Federal de Sergipe, 49100-000, São Cristovão, SE (Brazil); Departamento de Matemática, Universidade Federal de Sergipe, 49100-000, São Cristovão, SE (Brazil); Fittipaldi, I.P. [Representação Regional do Ministério da Ciência, Tecnologia e Inovação no Nordeste - ReNE, 50740-540 Recife, PE (Brazil); Moreno, N.O. [Departamento de Física, Universidade Federal de Sergipe, 49100-000, São Cristovão, SE (Brazil)
2014-08-01
We study the phase diagram of Fe_{1−q}Al_q alloys via the quenched site-diluted spin-1 ferromagnetic Ising model by employing effective field theory. We suggest a new approach in which the exchange interaction between nearest-neighbor Fe atoms depends on powers of the Al concentration q, instead of the linear dependence proposed in other papers. Using the same kind of exchange interaction as in iron–nickel alloys, the model yields an excellent theoretical description of the experimental T–q phase diagram for all Al concentrations q. - Highlights: • We apply the quenched spin-1 Ising model to study the properties of Fe–Al. • We employ the EFT and suggest a new approach to the ferromagnetic coupling. • A new probability distribution is considered. • The phase diagram is obtained for all values of q in the T–q plane.
Degenerate and chiral states in the extended Heisenberg model on the kagome lattice
Gómez Albarracín, F. A.; Pujol, P.
2018-03-01
We present a study of the low-temperature phases of the antiferromagnetic extended classical Heisenberg model on the kagome lattice, up to third-nearest neighbors. First, we focus on the degenerate lines in the boundaries of the well-known staggered chiral phases. These boundaries have either semiextensive or extensive degeneracy, and we discuss the partial selection of states by thermal fluctuations. Then, we study the model under an external magnetic field on these lines and in the staggered chiral phases. We pay particular attention to the highly frustrated point, where the three exchange couplings are equal. We show that this point can be mapped to a model with spin-liquid behavior and nonzero chirality. Finally, we explore the effect of Dzyaloshinskii-Moriya (DM) interactions in two ways: a homogeneous and a staggered DM interaction. In both cases, there is a rich low-temperature phase diagram, with different spontaneously broken symmetries and nontrivial chiral phases.
Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons
Directory of Open Access Journals (Sweden)
F. Serhan Daniş
2017-10-01
We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking.
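For one-dimensional distributions such as binned RSSI fingerprints, the Wasserstein-1 distance underlying such an observation model reduces to the area between the two cumulative distributions; a sketch (the common bin layout is an assumption, and the full SMC tracker is not shown):

```python
def wasserstein1d(p, q, bin_width=1.0):
    """Wasserstein-1 distance between two discrete distributions p and q,
    given as probability masses over the same ordered bins (e.g. RSSI bins).
    Equals the integral of |CDF_p - CDF_q| over the support."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    cum_diff, dist = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum_diff += pi - qi          # running CDF difference
        dist += abs(cum_diff) * bin_width
    return dist
```

Unlike bin-wise distances, this metric grows with how *far* mass must move, which is why it interpolates smoothly between fingerprints at nearby locations.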
Modeling of the shape of infrared stimulated luminescence signals in feldspars
DEFF Research Database (Denmark)
Pagonis, Vasilis; Jain, Mayank; Murray, Andrew S.
2012-01-01
This paper presents a new empirical model describing infrared (IR) stimulation phenomena in feldspars. In the model, electrons from the ground state of an electron trap are raised by infrared optical stimulation to the excited state, and subsequently recombine with a nearest-neighbor hole via a tunneling process, leading to the emission of light. The model explains the experimentally observed existence of two distinct time intervals in the luminescence intensity: a rapid initial decay of the signal followed by a much slower gradual decay of the signal with time. The initial fast decay region corresponds to a fast rate of recombination processes taking place along the infrared stimulated luminescence (IRSL) curves. The subsequent decay of the simulated IRSL signal is characterized by a much slower recombination rate, which can be described by a power-law type of equation. Several simulations …
Observation of spatial charge and spin correlations in the 2D Fermi-Hubbard model.
Cheuk, Lawrence W; Nichols, Matthew A; Lawrence, Katherine R; Okan, Melih; Zhang, Hao; Khatami, Ehsan; Trivedi, Nandini; Paiva, Thereza; Rigol, Marcos; Zwierlein, Martin W
2016-09-16
Strong electron correlations lie at the origin of high-temperature superconductivity. Its essence is believed to be captured by the Fermi-Hubbard model of repulsively interacting fermions on a lattice. Here we report on the site-resolved observation of charge and spin correlations in the two-dimensional (2D) Fermi-Hubbard model realized with ultracold atoms. Antiferromagnetic spin correlations are maximal at half-filling and weaken monotonically upon doping. At large doping, nearest-neighbor correlations between singly charged sites are negative, revealing the formation of a correlation hole, the suppressed probability of finding two fermions near each other. As the doping is reduced, the correlations become positive, signaling strong bunching of doublons and holes, in agreement with numerical calculations. The dynamics of the doublon-hole correlations should play an important role for transport in the Fermi-Hubbard model. Copyright © 2016, American Association for the Advancement of Science.
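The nearest-neighbor correlator reported in such site-resolved experiments can be illustrated on a classical ±1 configuration; this sketch only shows how the lattice average is formed, not the quantum measurement itself:

```python
def nn_correlation(spins):
    """Average nearest-neighbor correlation <s_i s_j> over all bonds of an
    n x n square lattice of ±1 values, with periodic boundaries."""
    n = len(spins)
    total, bonds = 0.0, 0
    for i in range(n):
        for j in range(n):
            # each site's right and down neighbors count every bond once
            total += spins[i][j] * (spins[i][(j + 1) % n] + spins[(i + 1) % n][j])
            bonds += 2
    return total / bonds
```

A uniform configuration gives +1 (bunching), while a checkerboard gives -1, the classical analogue of perfect antiferromagnetic nearest-neighbor correlations at half-filling.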
3D Hilbert Space Filling Curves in 3D City Modeling for Faster Spatial Queries
DEFF Research Database (Denmark)
Ujang, Uznir; Antón Castro, Francesc/François; Azri, Suhaibah
2014-01-01
The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using … objects. In this research, the authors propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research they extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested for single object, nearest neighbor and range search queries using a CityGML dataset of 1,000 building blocks and the results …
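The indexing idea can be illustrated with the simpler 3D Morton (Z-order) code; a true 3D Hilbert index adds orientation bookkeeping for better locality, so this is a stand-in for the authors' curve, not their method:

```python
def morton3d(x, y, z, bits=10):
    """Interleave the low `bits` bits of non-negative ints (x, y, z) into a
    single Z-order (Morton) key, mapping 3D coordinates to a 1D index so
    that nearby points tend to receive nearby keys."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code
```

Sorting objects by such a key clusters spatial neighbors in storage, which is what speeds up nearest-neighbor and range queries in the scheme described above.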
Limit sets for natural extensions of Schelling’s segregation model
Singh, Abhinav; Vainchtein, Dmitri; Weiss, Howard
2011-07-01
Thomas Schelling developed an influential demographic model that illustrated how, even with relatively mild assumptions on each individual's nearest neighbor preferences, an integrated city would likely unravel to a segregated city, even if all individuals prefer integration. Individuals in Schelling's model cities are divided into two groups of equal number and each individual is "happy" or "unhappy" when the number of similar neighbors cross a simple threshold. In this manuscript we consider natural extensions of Schelling's original model to allow the two groups to have different sizes and to allow different notions of happiness of an individual. We observe that differences in aggregation patterns of majority and minority groups are highly sensitive to the happiness threshold; for low threshold, the differences are small, and when the threshold is raised, striking new patterns emerge. We also observe that when individuals strongly prefer to live in integrated neighborhoods, the final states exhibit a new tessellated-like structure.
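A toy version of Schelling's dynamics is easy to sketch; the grid size, vacancy rate, threshold, and random-relocation rule below are illustrative choices, not the parameters studied in the paper:

```python
import random

def simulate_schelling(n=20, frac_a=0.5, vacancy=0.1, threshold=0.3,
                       steps=10000, seed=1):
    """Toy Schelling model on an n x n torus: agents with fewer than
    `threshold` similar Moore neighbors move to a random vacant cell.
    Returns the final mean fraction of similar neighbors."""
    rng = random.Random(seed)
    cells = n * n
    agents = int(cells * (1 - vacancy))
    grid = [0] * cells  # 0 = vacant, 1 / 2 = the two groups
    occupied = rng.sample(range(cells), agents)
    for idx, c in enumerate(occupied):
        grid[c] = 1 if idx < agents * frac_a else 2

    def similar_frac(c):
        i, j = divmod(c, n)
        same = total = 0
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                g = grid[((i + di) % n) * n + (j + dj) % n]
                if g:
                    total += 1
                    same += g == grid[c]
        return same / total if total else 1.0

    for _ in range(steps):
        c = rng.randrange(cells)
        if grid[c] and similar_frac(c) < threshold:
            v = rng.choice([k for k in range(cells) if not grid[k]])
            grid[v], grid[c] = grid[c], 0
    occ = [c for c in range(cells) if grid[c]]
    return sum(similar_frac(c) for c in occ) / len(occ)
```

Starting from a random mix (mean similarity near 0.5), the relocation dynamics raises the mean similar-neighbor fraction, the "unraveling" the abstract describes.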
Using recurrent neural network models for early detection of heart failure onset.
Choi, Edward; Schuetz, Andy; Stewart, Walter F; Sun, Jimeng
2017-03-01
We explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality. Data were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches. Using a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP). Deep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje
2017-09-01
Despite an important role the aerosols play in all stages of cloud lifecycle, their representation in numerical weather prediction models is often rather crude. This paper investigates the effects the explicit versus implicit inclusion of aerosols in a microphysics parameterization scheme in Weather Research and Forecasting (WRF) - Advanced Research WRF (WRF-ARW) model has on cloud dynamics and microphysics. The testbed selected for this study is a severe mesoscale convective system with supercells that struck west and central parts of Serbia in the afternoon of July 21, 2014. Numerical products of two model runs, i.e. one with aerosols explicitly (WRF-AE) included and another with aerosols implicitly (WRF-AI) assumed, are compared against precipitation measurements from surface network of rain gauges, as well as against radar and satellite observations. The WRF-AE model accurately captured the transportation of dust from North Africa over the Mediterranean and to the Balkan region. On smaller scales, both models displaced the locations of clouds situated above west and central Serbia towards southeast and under-predicted the maximum values of composite radar reflectivity. Similar to satellite images, WRF-AE shows the mesoscale convective system as a merged cluster of cumulonimbus clouds. Both models over-predicted the precipitation amounts; WRF-AE over-predictions are particularly pronounced in the zones of light rain, while WRF-AI gave larger outliers. Unlike WRF-AI, the WRF-AE approach enables the modelling of time evolution and influx of aerosols into the cloud which could be of practical importance in weather forecasting and weather modification. Several likely causes for discrepancies between models and observations are discussed and prospects for further research in this field are outlined.
Luna, Byron Quan; Remaître, Alexandre; van Asch, Theo; Malet, Jean-Philippe; van Westen, Cees
2010-05-01
Estimating the magnitude and the intensity of rapid landslides like debris flows is fundamental to evaluate quantitatively the hazard in a specific location. Intensity varies through the travelled course of the flow and can be described by physical features such as deposited volume, velocities, height of the flow, impact forces and pressures. Dynamic run-out models are able to characterize the distribution of the material, its intensity and define the zone where the elements will experience an impact. These models can provide valuable inputs for vulnerability and risk calculations. However, most dynamic run-out models assume a constant volume during the motion of the flow, ignoring the important role of material entrained along its path. Consequently, they neglect that the increase of volume enhances the mobility of the flow and can significantly influence the size of the potential impact area. An appropriate erosion mechanism needs to be established in the analyses of debris flows that will improve the results of dynamic modeling and consequently the quantitative evaluation of risk. The objective is to present and test a simple 1D debris flow model with a material entrainment concept based on limit equilibrium considerations and the generation of excess pore water pressure through undrained loading of the in situ bed material. The debris flow propagation model is based on a one dimensional finite difference solution of a depth-averaged form of the Navier-Stokes equations of fluid motions. The flow is treated as a laminar one-phase material, whose behavior is controlled by a visco-plastic Coulomb-Bingham rheology. The model parameters are evaluated and the model performance is tested on a debris flow event that occurred in 2003 in the Faucon torrent (Southern French Alps).
Worby, Colin J.
2013-01-01
Healthcare-associated infections (HCAIs) remain a problem worldwide, and can cause severe illness and death. The increasing level of antibiotic resistance among bacteria that cause HCAIs limits infection treatment options, and is a major concern. Statistical modelling is a vital tool in developing an understanding of HCAI transmission dynamics. In this thesis, stochastic epidemic models are developed and used with the aim of investigating methicillin-resistant Staphylococcus aureus (MRSA) tra...
Creating a process for incorporating epidemiological modelling into outbreak management decisions.
Akselrod, Hana; Mercon, Monica; Kirkeby Risoe, Petter; Schlegelmilch, Jeffrey; McGovern, Joanne; Bogucki, Sandy
2012-01-01
Modern computational models of infectious diseases greatly enhance our ability to understand new infectious threats and assess the effects of different interventions. The recently-released CDC Framework for Preventing Infectious Diseases calls for increased use of predictive modelling of epidemic emergence for public health preparedness. Currently, the utility of these technologies in preparedness and response to outbreaks is limited by gaps between modelling output and information requirements for incident management. The authors propose an operational structure that will facilitate integration of modelling capabilities into action planning for outbreak management, using the Incident Command System (ICS) and Synchronization Matrix framework. It is designed to be adaptable and scalable for use by state and local planners under the National Response Framework (NRF) and Emergency Support Function #8 (ESF-8). Specific epidemiological modelling requirements are described, and integrated with the core processes for public health emergency decision support. These methods can be used in checklist format to align prospective or real-time modelling output with anticipated decision points, and guide strategic situational assessments at the community level. It is anticipated that formalising these processes will facilitate translation of the CDC's policy guidance from theory to practice during public health emergencies involving infectious outbreaks.
Enhanced stability of car-following model upon incorporation of short-term driving memory
Liu, Da-Wei; Shi, Zhong-Ke; Ai, Wen-Huan
2017-06-01
Based on the full velocity difference model, a new car-following model is developed to investigate the effect of short-term driving memory on traffic flow in this paper. Short-term driving memory is introduced as the influence factor of driver's anticipation behavior. The stability condition of the newly developed model is derived and the modified Korteweg-de Vries (mKdV) equation is constructed to describe the traffic behavior near the critical point. The evolution of a small perturbation is first investigated numerically. The results show that the new car-following model improves on the previous ones by enhancing traffic stability. Starting and braking processes of vehicles at a signalized intersection are also investigated. The numerical simulations illustrate that the new model can successfully describe the driver's anticipation behavior, and that the efficiency and safety of the vehicles passing through the signalized intersection are improved by considering short-term driving memory.
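The full velocity difference (FVD) model that the paper extends can be sketched as follows; the optimal-velocity function and parameter values are common illustrative choices, and the short-term memory term of the new model is deliberately omitted:

```python
import math

def fvd_step(xs, vs, dt=0.1, kappa=0.41, lam=0.5, vmax=2.0, hc=4.0, road=100.0):
    """One Euler step of the full velocity difference model on a ring road:
    dv/dt = kappa * (V(gap) - v) + lam * (v_leader - v), where V is an
    optimal-velocity function (illustrative tanh form and parameters)."""
    n = len(xs)
    def V(h):
        return (vmax / 2.0) * (math.tanh(h - hc) + math.tanh(hc))
    new_xs, new_vs = [], []
    for i in range(n):
        lead = (i + 1) % n
        gap = (xs[lead] - xs[i]) % road   # headway on the ring
        dv = vs[lead] - vs[i]
        a = kappa * (V(gap) - vs[i]) + lam * dv
        new_vs.append(max(0.0, vs[i] + a * dt))
        new_xs.append((xs[i] + vs[i] * dt) % road)
    return new_xs, new_vs
```

A uniformly spaced platoon relaxes to the speed V(gap) dictated by the optimal-velocity function, the equilibrium about which the stability condition above is derived.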
International Nuclear Information System (INIS)
Stubbs, J.B.
1992-01-01
As part of the revision by the International Commission on Radiological Protection (ICRP) of its report on Reference Man, an extensive review of the literature regarding anatomy and morphology of the gastrointestinal (GI) tract has been completed. Data on age- and gender-dependent GI physiology and motility may be included in the proposed ICRP report. A new mathematical model describing the transit of substances through the GI tract as well as the absorption and secretion of material in the GI tract has been developed. This mathematical description of GI tract kinetics utilizes more physiologically accurate transit processes than the mathematically simple, but nonphysiological, GI tract model that was used in ICRP Report 30. The proposed model uses a combination of zero- and first-order kinetics to describe motility. Some of the physiological parameters that the new model accounts for include sex, age, pathophysiological condition and meal phase (solid versus liquid). A computer algorithm, written in BASIC, based on this new model has been derived and results are compared to those of the ICRP-30 model.
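The transit kinetics can be illustrated with a purely first-order catenary chain solved by explicit Euler; the compartment labels and rate constants below are placeholders for illustration, not ICRP values, and the zero-order component of the proposed model is omitted:

```python
def gi_transit(rates, dt=0.01, t_end=24.0):
    """Catenary GI-tract sketch: a unit intake passes through successive
    compartments (e.g. stomach -> small intestine -> colon -> excreted)
    with first-order rate constants `rates` (per hour, hypothetical),
    integrated by explicit Euler. Returns final compartment contents."""
    y = [1.0] + [0.0] * len(rates)   # all material starts in compartment 0
    t = 0.0
    while t < t_end:
        flows = [rates[i] * y[i] for i in range(len(rates))]
        for i, f in enumerate(flows):
            y[i] -= f * dt           # outflow from compartment i
            y[i + 1] += f * dt       # inflow to the next compartment
        t += dt
    return y
```

Because every outflow reappears downstream, total mass is conserved, a basic sanity check for any such transit model.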
Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study
Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2013-04-01
The European Union Water Framework Directive (EU-WFD) called its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties in the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows), and the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, first, we identified independent rainfall periods, based on the daily precipitation and stream flow observations and using the Water Engineering Time Series PROcessing tool (WETSPRO). Secondly, we assigned a rainfall multiplier parameter for each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For parameter uncertainty assessment, due to the high number of parameters of the SWAT model, first, we screened out its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique
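The LH-OAT screening step can be sketched as follows: Latin-Hypercube base points plus one-factor-at-a-time perturbations, averaged into one elementary effect per parameter. This is a simplified reading of the technique with illustrative settings, not the SWAT implementation:

```python
import random

def lh_oat(model, bounds, n_points=20, frac=0.05, seed=0):
    """Simplified LH-OAT sensitivity screening: draw Latin-Hypercube base
    points within `bounds`, perturb one factor at a time by `frac`, and
    average the relative elementary effects per parameter."""
    rng = random.Random(seed)
    p = len(bounds)
    # Latin hypercube: one sample per stratum for each parameter, shuffled
    lhs = []
    for lo, hi in bounds:
        strata = [lo + (hi - lo) * (k + rng.random()) / n_points
                  for k in range(n_points)]
        rng.shuffle(strata)
        lhs.append(strata)
    effects = [0.0] * p
    for k in range(n_points):
        x = [lhs[i][k] for i in range(p)]
        base = model(x)
        for i in range(p):
            xp = list(x)
            xp[i] *= 1 + frac
            effects[i] += abs(model(xp) - base) / abs(base * frac)
    return [e / n_points for e in effects]
```

For a model dominated by its first parameter, the first effect comes out clearly larger, which is exactly the ranking used to screen out insensitive parameters.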
Incorporation of the time aspect into the liability-threshold model for case-control-family data
DEFF Research Database (Denmark)
Cederkvist, Luise; Holst, Klaus K.; Andersen, Klaus K.
2017-01-01
Familial aggregation and the role of genetic and environmental factors can be investigated through family studies analysed using the liability-threshold model. The liability-threshold model ignores the timing of events, including the age of disease onset and right censoring, which can lead to estimates that are difficult to interpret and are potentially biased. We incorporate the time aspect into the liability-threshold model for case-control-family data following the same approach that has been applied in the twin setting. Thus, the data are considered as arising from a competing risks setting, and inverse probability of censoring weights are used to adjust for right censoring. In the case-control-family setting, recognising the existence of competing events is highly relevant to the sampling of control probands. Because of the presence of multiple family members who may be censored at different …
International Nuclear Information System (INIS)
Vasilev, V.; Doncheva, B.
1989-01-01
A model is presented for calculating the irradiation of the human foetus during weeks 8-15 of intrauterine development, when the mother chronically incorporates iodine-131. This period is critical for the nervous system of the foetus. Compared to some other authors' models, the proposed method eliminates some uncertainties and takes into account the changes in the activity of the mother's thyroid over time. The model is built on data from the 131I kinetics of pregnant women and of experimental mice. A formula is proposed for calculating the total foetus irradiation, including: the internal γ and β irradiation; the external γ and β irradiation from the mother as a whole; and the external γ irradiation from the mother's thyroid.
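The chronic-intake kinetics behind such a calculation can be sketched with a one-compartment model and an effective (physical plus biological) decay constant; the biological half-life and thyroid uptake fraction below are illustrative placeholders, not values from the model described:

```python
import math

def thyroid_activity(intake_rate, t_days, t_phys=8.02, t_bio=80.0,
                     f_thyroid=0.3):
    """Thyroid activity (Bq) after `t_days` of constant 131-I intake
    (Bq/day) in a one-compartment model. t_phys is the 131-I physical
    half-life; t_bio and f_thyroid are hypothetical illustrative values."""
    lam = math.log(2) / t_phys + math.log(2) / t_bio  # effective decay, 1/day
    return f_thyroid * intake_rate / lam * (1 - math.exp(-lam * t_days))
```

Under constant intake the activity grows toward the plateau f * R / lambda_eff, which is the quantity a chronic-incorporation dose formula would integrate over the critical weeks.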
Lynch, K. A.; Clayton, R.; Roberts, T. M.; Hampton, D. L.; Conde, M.; Zettergren, M. D.; Burleigh, M.; Samara, M.; Michell, R.; Grubbs, G. A., II; Lessard, M.; Hysell, D. L.; Varney, R. H.; Reimer, A.
2017-12-01
The NASA auroral sounding rocket mission Isinglass was launched from Poker Flat, Alaska in winter 2017. This mission consists of two separate multi-payload sounding rockets, over an array of ground-based observations, including radars and filtered cameras. The science goal is to collect two case studies, in two different auroral events, of the gradient scale sizes of auroral disturbances in the ionosphere. Data from the in situ payloads and the ground-based observations will be synthesized and fed into an ionospheric model, and the results will be studied to learn about which scale sizes of ionospheric structuring have significance for magnetosphere-ionosphere auroral coupling. The in situ instrumentation includes thermal ion sensors (at 5 points on the second flight), thermal electron sensors (at 2 points), DC magnetic fields (2 points), DC electric fields (one point, plus the 4 low-resource thermal ion RPA observations of drift on the second flight), and an auroral precipitation sensor (one point). The ground-based array includes filtered auroral imagers, the PFISR and SuperDarn radars, a coherent scatter radar, and a Fabry-Perot interferometer array. The ionospheric model to be used is a 3D electrostatic model including the effects of ionospheric chemistry. One observational and modelling goal for the mission is to move both observations and models of auroral arc systems into the third (along-arc) dimension. Modern assimilative tools combined with multipoint but low-resource observations allow a new view of the auroral ionosphere, that should allow us to learn more about the auroral zone as a coupled system. Conjugate case studies such as the Isinglass rocket flights allow for a test of the models' interpretation by comparing to in situ data. We aim to develop and improve ionospheric models to the point where they can be used to interpret remote sensing data with confidence without the checkpoint of in situ comparison.
Yuan, Kai; Knoop, Victor L.; Hoogendoorn, Serge P.
2017-01-01
On freeways, congestion leads to a capacity drop: the queue discharge rate is lower than the pre-queue capacity. Our recent research findings indicate that the queue discharge rate increases with the speed in congestion, that is, the capacity drop is strongly correlated with the congestion state. Incorporating this varying capacity drop into a kinematic wave model is essential for assessing the consequences of control strategies. However, to the best of the authors' knowledge, no such model exists. This paper fills the research gap by presenting a Lagrangian kinematic wave model; "Lagrangian" denotes that the new model is solved in Lagrangian coordinates. The new model can reproduce the capacity drops accompanying both stop-and-go waves (on homogeneous freeway sections) and standing queues (at nodes) in a network, and it can be applied in network operations. In this Lagrangian kinematic wave model, the queue discharge rate (or the capacity drop) is a function of vehicular speed in traffic jams. Four case studies on links as well as at lane-drop and on-ramp nodes show that the Lagrangian kinematic wave model reproduces capacity drops well, consistent with empirical observations.
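The core relation — a queue discharge rate that rises with the speed inside the jam — can be sketched as a simple function. The linear shape and every constant below are illustrative assumptions, not the paper's calibrated model:

```python
def queue_discharge_rate(v_cong, q_free=2000.0, max_drop=0.3, v_crit=80.0):
    """Queue discharge rate (veh/h/lane) as an increasing function of the
    speed v_cong (km/h) inside the jam: the slower the traffic in
    congestion, the larger the capacity drop. q_free is the pre-queue
    capacity; max_drop is the drop fraction at standstill. The linear
    form and all values are illustrative assumptions only."""
    severity = max(0.0, 1.0 - v_cong / v_crit)  # 1 at standstill, 0 in free flow
    return q_free * (1.0 - max_drop * severity)
```

A standing queue at low speed then discharges well below the pre-queue capacity, while a fast-moving jam shows almost no drop.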
Directory of Open Access Journals (Sweden)
Long Cheng
2014-01-01
Most traditional mode choice models are based on the principle of random utility maximization derived from econometric theory. Alternatively, mode choice modeling can be regarded as a pattern recognition problem in which the explanatory variables determine the choice between alternatives. This paper applies the knowledge discovery technique of rough sets theory to model travel mode choices incorporating household and individual sociodemographics and travel information, and to identify the significance of each attribute. The study uses the detailed travel diary survey data of Changxing County, which contain information on both household and individual travel behaviors, for model estimation and evaluation. The knowledge is presented in the form of easily understood IF-THEN statements, or rules, which reveal how each attribute influences mode choice behavior. These rules are then used to predict travel mode choices from information held about previously unseen individuals, and the classification performance is assessed. The rough sets model shows high robustness and good predictive ability. The most significant condition attributes identified to determine travel mode choices are gender, distance, household annual income, and occupation. Comparative evaluation with the MNL model also shows that the rough sets model gives superior prediction accuracy and coverage in travel mode choice modeling.
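IF-THEN rules of the kind rough sets induction produces can be applied with a first-match rule evaluator like the sketch below; the example rules and attribute values are hypothetical, not the rule set induced from the Changxing survey:

```python
# Hypothetical decision rules: (conditions, decision). Attribute names and
# values are illustrative only.
rules = [
    ({"distance": "short"}, "walk"),
    ({"distance": "medium", "income": "low"}, "bus"),
    ({"distance": "medium", "income": "high"}, "car"),
    ({"distance": "long"}, "car"),
]

def predict_mode(person, rules, default="bus"):
    """Return the decision of the first rule whose conditions all hold;
    fall back to a default mode when no rule fires."""
    for conditions, mode in rules:
        if all(person.get(attr) == val for attr, val in conditions.items()):
            return mode
    return default
```

Prediction for a previously unseen individual is then a single lookup, e.g. `predict_mode({"distance": "medium", "income": "high"}, rules)`.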
Assmus, Frauke; Houston, J Brian; Galetin, Aleksandra
2017-11-15
The prediction of tissue-to-plasma water partition coefficients (Kpu) from in vitro and in silico data using the tissue-composition based model (Rodgers & Rowland, J Pharm Sci. 2005, 94(6):1237-48) is well established. However, distribution of basic drugs, in particular into lysosome-rich lung tissue, tends to be under-predicted by this approach. The aim of this study was to develop an extended mechanistic model for the prediction of Kpu which accounts for lysosomal sequestration and the contribution of different cell types in the tissue of interest. The extended model is based on compound-specific physicochemical properties and tissue composition data to describe drug ionization, distribution into tissue water, and drug binding to neutral lipids, neutral phospholipids and acidic phospholipids in tissues, including lysosomes. Physiological data on the types of cells contributing to lung, kidney and liver, their lysosomal content and lysosomal pH were collated from the literature. The predictive power of the extended mechanistic model was evaluated using a dataset of 28 basic drugs (pKa ≥ 7.8; 17 β-blockers, 11 structurally diverse drugs) for which experimentally determined Kpu data in rat tissue have been reported. Accounting for lysosomal sequestration in the extended mechanistic model improved the accuracy of Kpu predictions in lung compared to the original Rodgers model (56% of drugs within 2-fold and 88% within 3-fold of observed values). Reduction in the extent of Kpu under-prediction was also evident in liver and kidney. However, consideration of lysosomal sequestration increased the occurrence of over-predictions, yielding overall comparable model performances for kidney and liver, with 68% and 54% of Kpu values within 2-fold error, respectively. High lysosomal concentration ratios relative to cytosol (>1000-fold) were predicted for the drugs investigated; the extent differed depending on the lysosomal pH and concentration of acidic phospholipids among …
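The lysosomal sequestration that the extended model accounts for is driven by pH partitioning of ionizable bases. A minimal sketch of the classic ion-trapping ratio for a monoprotic base follows; the pH values are assumptions for illustration, and this is only one ingredient of a full Kpu model, not the paper's method:

```python
def lysosome_to_cytosol_ratio(pKa, pH_lys=5.0, pH_cyt=7.2):
    """pH-partition ('ion trapping') ratio of total drug concentration in
    lysosome vs cytosol for a monoprotic base, assuming only the neutral
    species crosses the lysosomal membrane and equilibrates to equal
    concentrations on both sides. pH values are illustrative assumptions,
    not those collated in the paper."""
    # Total/neutral ratio on each side follows Henderson-Hasselbalch.
    return (1 + 10 ** (pKa - pH_lys)) / (1 + 10 ** (pKa - pH_cyt))
```

For strong bases the ratio approaches 10^(pH_cyt − pH_lys), which is why lysosome-rich tissues accumulate such compounds.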
Energy Technology Data Exchange (ETDEWEB)
He, Yujie [Purdue Univ., West Lafayette, IN (United States). Dept. of Earth, Atmospheric, and Planetary Sciences; Yang, Jinyan [Univ. of Georgia, Athens, GA (United States). Warnell School of Forestry and Natural Resources; Northeast Forestry Univ., Harbin (China). Center for Ecological Research; Zhuang, Qianlai [Purdue Univ., West Lafayette, IN (United States). Dept. of Earth, Atmospheric, and Planetary Sciences; Purdue Univ., West Lafayette, IN (United States). Dept. of Agronomy; Harden, Jennifer W. [U.S. Geological Survey, Menlo Park, CA (United States); McGuire, Anthony D. [Alaska Cooperative Fish and Wildlife Research Unit, U.S. Geological Survey, Univ. of Alaska, Fairbanks, AK (United States). U.S. Geological Survey, Alaska Cooperative Fish and Wildlife Research Unit; Liu, Yaling [Purdue Univ., West Lafayette, IN (United States). Dept. of Earth, Atmospheric, and Planetary Sciences; Wang, Gangsheng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Climate Change Science Inst. and Environmental Sciences Division; Gu, Lianhong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division
2015-11-20
Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have grown in number recently for quantifying this role, yet dormancy, a common strategy used by microorganisms, has not usually been represented and tested in these models against field observations. In this study we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced a better match with field-observed heterotrophic soil CO2 efflux (RH) than the no-dormancy model. Our regional modeling results further indicated that models with dormancy were able to produce more realistic magnitudes of microbial biomass (<2% of soil organic carbon) and soil RH (7.5 ± 2.4 Pg C yr-1). Spatial correlation analysis showed that soil organic carbon content was the dominant factor (correlation coefficient = 0.4-0.6) in the simulated spatial pattern of soil RH with both models. In contrast to the strong temporal and local controls of soil temperature and moisture on microbial dormancy, our modeling results showed that the soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = -0.43 to -0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.
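A toy two-pool sketch illustrates the kind of active/dormant switching such models represent. The rate constants and the single environmental scalar below are assumptions for illustration, not the authors' parameterization:

```python
def step(state, env, dt=1.0):
    """One explicit-Euler step of a toy two-pool (active/dormant) microbial
    biomass model. env in [0, 1] is a single scalar standing in for the
    combined temperature/moisture favourability. All rate constants are
    illustrative assumptions, not the paper's calibrated parameters."""
    A, D = state
    to_dormant = 0.10 * (1.0 - env) * A   # stress drives cells dormant
    to_active  = 0.10 * env * D           # favourable conditions reactivate
    growth     = 0.05 * env * A           # growth of the active pool
    dA = growth + to_active - to_dormant - 0.010 * A  # active maintenance loss
    dD = to_dormant - to_active - 0.001 * D           # far lower dormant cost
    return A + dt * dA, D + dt * dD
```

Under sustained stress the biomass migrates into the low-respiration dormant pool, which is what keeps total microbial biomass (and hence RH) within realistic bounds.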
Dominant Height Model for Site Classification of Eucalyptus grandis Incorporating Climatic Variables
Directory of Open Access Journals (Sweden)
José Roberto Soares Scolforo
2013-01-01
This study tested the effect of inserting climatic variables as covariables into a dominant height model for Eucalyptus grandis; for site index classification, dominant height is usually related to age alone. Dominant height data from stands between 1 and 12 years of age in the Southeast region of Brazil were used, as well as data from 19 automatic meteorological stations in the area. The Chapman-Richards model was chosen to represent dominant height as a function of age. To include the environmental variables, a modifier was added to the asymptote of the model. The asymptote was chosen since this parameter determines the maximum value the dominant height can reach. Of the four environmental variables most responsible for variation in the database, the two with the highest correlation to the mean annual increment in dominant height (mean monthly precipitation and temperature) were selected to compose the asymptote modifier. Model validation showed a gain in precision of 33% (reduction of the standard error of estimate) when the climatic variables were inserted in the model. Possible applications of the method include the estimation of site capacity in regions lacking any planting history, as well as updating forest inventory data based on past climate regimes.
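The modeling idea — a Chapman-Richards curve whose asymptote is scaled by a climate modifier — can be sketched as follows. The linear modifier form and every coefficient here are hypothetical, not the fitted values from the study:

```python
import math

def dominant_height(age, precip, temp, beta0=30.0, b1=0.02, b2=0.3,
                    k=0.25, m=1.4):
    """Chapman-Richards dominant height (m) as a function of age (years),
    H = A * (1 - exp(-k*age))**m, with the asymptote A scaled by a climate
    modifier in mean monthly precipitation (mm) and temperature (deg C).
    The linear modifier and all coefficients are hypothetical assumptions,
    not the values fitted in the study."""
    asymptote = beta0 * (1.0 + b1 * (precip - 100.0)
                         + b2 * (temp - 20.0) / 20.0)
    return asymptote * (1.0 - math.exp(-k * age)) ** m
```

With the modifier in place, two stands of equal age on climatically different sites get different height trajectories, which is exactly what allows site classification without planting history.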
Ultrasonically assisted drilling: A finite-element model incorporating acoustic softening effects
Phadnis, V. A.; Roy, A.; Silberschmidt, V. V.
2013-07-01
Ultrasonically assisted drilling (UAD) is a novel machining technique suitable for drilling in hard-to-machine quasi-brittle materials such as carbon fibre reinforced polymer composites (CFRP). UAD has been shown to possess several advantages compared to conventional drilling (CD), including reduced thrust forces, diminished burr formation at drill exit and an overall improvement in roundness and surface finish of the drilled hole. Recently, our in-house experiments of UAD in CFRP composites demonstrated remarkable reductions in thrust-force and torque measurements (average force reductions in excess of 80%) when compared to CD with the same machining parameters. In this study, a 3D finite-element model of drilling in CFRP is developed. In order to model acoustic (ultrasonic) softening effects, a phenomenological model, which accounts for ultrasonically induced plastic strain, was implemented in ABAQUS/Explicit. The model also accounts for dynamic frictional effects, which also contribute to the overall improved machining characteristics in UAD. The model is validated with experimental findings, where an excellent correlation between the reduced thrust force and torque magnitude was achieved.
Development of a prototype mesoscale computer model incorporating treatment of topography
International Nuclear Information System (INIS)
Apsimon, H.; Kitson, K.; Fawcett, M.; Goddard, A.J.H.
1984-01-01
Models are available for simulating dispersal of accidental releases, using mass-consistent wind-fields and accounting for site-specific topography. These techniques were examined critically to see if they might be improved, and to assess their limitations. An improved model, windfield adjusted for topography (WAFT), was developed (with advantages over MATHEW used in the Atmospheric Release Advisory Capability - ARAC system). To simulate dispersion in the windfields produced by WAFT and to calculate time-integrated air concentrations and dry and wet deposition, the TOMCATS model was developed. It treats the release as an assembly of pseudo-particles, using Monte Carlo techniques to simulate turbulent displacements, and allows for larger eddy effects in the horizontal turbulence spectrum. Wet deposition is calculated using inhomogeneous rainfields evolving in time and space. The models were assessed by applying them to hypothetical releases in complex terrain, using typical data applicable in accident conditions, and undertaking sensitivity studies. One finds considerable uncertainty in results produced by these models. Although the models are useful for post-facto analysis, such limitations cast doubt on their advantages over simpler techniques during an actual emergency.
Modeling & Informatics at Vertex Pharmaceuticals Incorporated: our philosophy for sustained impact.
McGaughey, Georgia; Patrick Walters, W
2017-03-01
Molecular modelers and informaticians have the unique opportunity to integrate cross-functional data using a myriad of tools, methods and visuals to generate information. Using their drug discovery expertise, information is transformed into knowledge that impacts drug discovery. These insights are often formulated locally and then applied more broadly, influencing the discovery of new medicines. This is particularly true in an organization whose members are exposed to projects throughout the organization, as in the case of the global Modeling & Informatics group at Vertex Pharmaceuticals. From its inception, Vertex has been a leader in the development and use of computational methods for drug discovery. In this paper, we describe the Modeling & Informatics group at Vertex and the underlying philosophy which has driven this team to sustain impact on the discovery of first-in-class transformative medicines.
DEFF Research Database (Denmark)
Boegh, E; Gjetterman, B; Abrahamsen, P
2007-01-01
… impact on CO2 sequestration processes complicates the modeling of carbon cycle feedbacks to climate change. The use of remote sensing constitutes a valuable data source to quantify and investigate impacts of bulk leaf N contents; however, information on the vertical leaf N distribution and its … relation to photosynthetic (Rubisco) capacity should also be known to quantify leaf N impacts on canopy photosynthesis. In this study, impacts of the amount and vertical distribution of leaf N contents on canopy photosynthesis were investigated by combining field measurements and photosynthesis modelling. … While most canopy photosynthesis models assume an exponential vertical profile of leaf N contents in the canopy, the field measurements showed that well-fertilized fields may have a uniform or exponential profile, and senescent canopies have reduced levels of N contents in upper leaves. The sensitivity …
DEFF Research Database (Denmark)
Bøgh, E.; Gjettermann, Birgitte; Abrahamsen, Per
2007-01-01
… relation to photosynthetic (Rubisco) capacity should also be known to quantify leaf N impacts on canopy photosynthesis. In this study, impacts of the amount and vertical distribution of leaf N contents on canopy photosynthesis were investigated by combining field measurements and photosynthesis modelling. … While most canopy photosynthesis models assume an exponential vertical profile of leaf N contents in the canopy, the field measurements showed that well-fertilized fields may have a uniform or exponential profile, and senescent canopies have reduced levels of N contents in upper leaves. The sensitivity … of simulated canopy photosynthesis to the different (observed) N profiles was examined using a multi-layer sun/shade biochemically based photosynthesis model and found to be important; i.e., for a well-fertilized barley field, the use of exponential instead of uniform vertical N profiles increased the annual …
Kim, Moon-Jo; Jeong, Hye-Jin; Park, Ju-Won; Hong, Sung-Tae; Han, Heung Nam
2018-01-01
An empirical expression describing electroplastic deformation behavior is suggested, based on the Johnson-Cook (JC) model, by adding several functions to consider both thermal and athermal electric current effects. Tensile tests are carried out on an AZ31 magnesium alloy and an Al-Mg-Si alloy under pulsed electric current at various current densities with a fixed duration of electric current. To describe the flow curves under electric current, a modified JC model is proposed that takes the electric current effect into account. Phenomenological descriptions of the adopted parameters in the equation are made. The modified JC model suggested in the present study is capable of describing the tensile deformation behavior under pulsed electric current reasonably well.
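The structure of such a modification — the baseline JC hardening term multiplied by a current-dependent softening factor — can be sketched as below. The functional form and every constant are illustrative assumptions, not the modified-JC parameters identified in the paper:

```python
def flow_stress(eps, j, A=150.0, B=250.0, n=0.3, a=0.002, b=1.5):
    """Johnson-Cook-style flow stress (MPa) at a fixed strain rate and
    reference temperature, multiplied by a softening term in the current
    density j (A/mm^2) standing in for the athermal electric effect.
    The functional form and all constants are illustrative assumptions."""
    hardening = A + B * eps ** n          # baseline JC strain hardening
    electro = max(1.0 - a * j ** b, 0.0)  # assumed current-induced softening
    return hardening * electro
```

At zero current density the expression reduces to the unmodified JC strain-hardening term, so the baseline flow curve is recovered.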
A Transient 3D-CFD Model Incorporating Biological Processes for Use in Tissue Engineering
DEFF Research Database (Denmark)
Krühne, Ulrich; Wendt, D.; Martin, I.
2010-01-01
… are considered in the model. In a variation of the model, the growth of the biomass is influenced by the fluid-dynamically induced shear stress level to which the cells are exposed. In parallel, an experimental growth of stem cells has been performed in a 3D perfusion reactor system, and the culturing has been stopped … after 2, 8 and 13 days. The development of the cells is compared to the simulated growth of cells, and it is attempted to draw a conclusion about the impact of the shear stress on the cell growth. Keywords: computational fluid dynamics (CFD), micro pores, scaffold, bioreactor, fluid structure interaction
Enlarged symmetry algebras of spin chains, loop models, and S-matrices
International Nuclear Information System (INIS)
Read, N.; Saleur, H.
2007-01-01
The symmetry algebras of certain families of quantum spin chains are considered in detail. The simplest examples possess m states per site (m ≥ 2), with nearest-neighbor interactions with U(m) symmetry, under which the sites transform alternately along the chain in the fundamental representation m and its conjugate m-bar. We find that these spin chains, even with arbitrary coefficients of these interactions, have a symmetry algebra A_m much larger than U(m), which implies that the energy eigenstates fall into sectors that for open chains (i.e., free boundary conditions) can be labeled by j = 0, 1, ..., L, for the 2L-site chain, such that the degeneracies of all eigenvalues in the jth sector are generically the same and increase rapidly with j. For large j, these degeneracies are much larger than those that would be expected from the U(m) symmetry alone. The enlarged symmetry algebra A_m(2L) consists of operators that commute in this space of states with the Temperley-Lieb algebra that is generated by the set of nearest-neighbor interaction terms; A_m(2L) is not a Yangian. There are similar results for supersymmetric chains with gl(m+n|n) symmetry of nearest-neighbor interactions, and a richer representation structure for closed chains (i.e., periodic boundary conditions). The symmetries also apply to the loop models that can be obtained from the spin chains in a spacetime or transfer matrix picture. In the loop language, the symmetries arise because the loops cannot cross. We further define tensor products of representations (for the open chains) by joining chains end to end. The fusion rules for decomposing the tensor product of representations labeled j_1 and j_2 take the same form as the Clebsch-Gordan series for SU(2). This and other structures turn the symmetry algebra A_m into a ribbon Hopf algebra, and we show that this is 'Morita equivalent' to the quantum group U_q(sl_2) for m = q + q^(-1). The open-chain results are extended to the cases |m …
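The SU(2)-like fusion rule for the sector labels is simple enough to state in code; a sketch (the optional cap is an assumption for finite chains, not part of the abstract's statement):

```python
def fuse(j1, j2, j_max=None):
    """Decompose the tensor product of open-chain sectors j1 and j2 into
    the sectors |j1 - j2|, |j1 - j2| + 1, ..., j1 + j2 -- the same form as
    the Clebsch-Gordan series for SU(2). j_max optionally caps the top
    label (e.g. at the chain size L)."""
    top = j1 + j2 if j_max is None else min(j1 + j2, j_max)
    return list(range(abs(j1 - j2), top + 1))
```

For example, joining a sector-2 chain to a sector-3 chain yields the sectors 1 through 5, mirroring the angular-momentum addition rule.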
Smallegange, I.M.; Caswell, H.; Toorians, M.E.M.; de Roos, A.M.
1. Integral projection models (IPMs) provide a powerful approach to investigate ecological and rapid evolutionary change in quantitative life-history characteristics and population dynamics. IPMs are constructed from functions that describe the demographic rates – survival, growth and reproduction –
Ammonia volatilization from treatment lagoons varies widely with the total ammonia concentration, pH, temperature, suspended solids, atmospheric ammonia concentration above the water surface, and wind speed. Ammonia emissions were estimated with a process-based mechanistic model integrating ammonia ...
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, which might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
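Numerical STM computation can be sketched by augmenting the equations of motion with the variational equations and integrating both together. The sketch below uses normalized two-body dynamics and a fixed-step RK4 for brevity, whereas the paper uses an adaptive eighth-order Dormand-Prince pair; the dynamics model and step count are assumptions:

```python
import numpy as np

MU = 1.0  # normalized gravitational parameter (illustrative)

def eom_with_stm(z):
    """Two-body equations of motion augmented with the variational
    equations, so the 6x6 state transition matrix Phi is integrated
    alongside the state. z = [r(3), v(3), Phi(36) row-major]."""
    r, v = z[:3], z[3:6]
    Phi = z[6:].reshape(6, 6)
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3
    G = MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)  # da/dr
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)
    A[3:, :3] = G
    return np.concatenate([v, a, (A @ Phi).ravel()])

def propagate(y0, tf, n_steps=2000):
    """Fixed-step RK4 propagation of state + STM, Phi(0) = identity."""
    z = np.concatenate([y0, np.eye(6).ravel()])
    h = tf / n_steps
    for _ in range(n_steps):
        k1 = eom_with_stm(z)
        k2 = eom_with_stm(z + 0.5 * h * k1)
        k3 = eom_with_stm(z + 0.5 * h * k2)
        k4 = eom_with_stm(z + h * k3)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z[:6], z[6:].reshape(6, 6)
```

The columns of the returned Phi are exactly the constraint-gradient sensitivities an NLP solver needs, replacing noisy finite differences.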
van Oort, N.; Brands, Ties; de Romph, E.; Aceves Flores, J.
2014-01-01
Nowadays, transport demand models do not explicitly evaluate the impacts of service reliability of transit. Service reliability of transit systems is adversely experienced by users, as it causes additional travel time and uncertain arrival times. Because of this, travelers are likely to perceive a …
A model of driver steering control incorporating the driver's sensing of steering torque
Kim, Namho; Cole, David J.
2011-10-01
Steering feel, or steering torque feedback, is widely regarded as an important aspect of the handling quality of a vehicle. Despite this, there is little theoretical understanding of its role. This paper describes an initial attempt to model the role of steering torque feedback arising from lateral tyre forces. The path-following control of a nonlinear vehicle model is implemented using a time-varying model predictive controller. A series of Kalman filters are used to represent the driver's ability to generate estimates of the system states from noisy sensory measurements, including the steering torque. It is found that under constant road friction conditions, the steering torque feedback reduces path-following errors provided the friction is sufficiently high to prevent frequent saturation of the tyres. When the driver model is extended to allow identification of, and adaptation to, a varying friction condition, it is found that the steering torque assists in the accurate identification of the friction condition. The simulation results give insight into the role of steering torque feedback arising from lateral tyre forces. The paper concludes with recommendations for further work.
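The driver model's use of Kalman filters to estimate states from noisy sensory measurements can be illustrated with a minimal scalar filter; the random-walk state model and all noise variances below are arbitrary illustrative values, not the paper's filter bank:

```python
def scalar_kalman(z_seq, x0=0.0, P0=1.0, q=1e-4, r=0.04):
    """Minimal scalar Kalman filter: estimate a slowly varying quantity
    (e.g. a friction-related signal) from noisy measurements z_seq, as a
    stand-in for the bank of Kalman filters in the driver model. q is the
    process-noise variance of the random-walk state, r the measurement
    noise variance; both are illustrative."""
    x, P = x0, P0
    for z in z_seq:
        P += q                  # predict: random-walk state, variance grows
        K = P / (P + r)         # Kalman gain
        x += K * (z - x)        # measurement update
        P *= (1.0 - K)          # posterior variance
    return x
```

Fed a noisy but stationary torque signal, the estimate settles near the true value, which is the mechanism by which steering torque feedback can sharpen the driver's friction estimate.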
Pierce, David M; Unterberger, Michael J; Trobin, Werner; Ricken, Tim; Holzapfel, Gerhard A
2016-02-01
The remarkable mechanical properties of cartilage derive from an interplay of isotropically distributed, densely packed and negatively charged proteoglycans; a highly anisotropic and inhomogeneously oriented fiber network of collagens; and an interstitial electrolytic fluid. We propose a new 3D finite strain constitutive model capable of simultaneously addressing both solid (reinforcement) and fluid (permeability) dependence of the tissue's mechanical response on the patient-specific collagen fiber network. To represent fiber reinforcement, we integrate the strain energies of single collagen fibers, weighted by an orientation distribution function (ODF) defined over a unit sphere, over the distributed fiber orientations in 3D. We define the anisotropic intrinsic permeability of the tissue with a structure tensor based again on the integration of the local ODF over all spatial fiber orientations. By design, our modeling formulation accepts structural data on patient-specific collagen fiber networks as determined via diffusion tensor MRI. We implement our new model in 3D large strain finite elements and study the distributions of interstitial fluid pressure, fluid pressure load support and shear stress within a cartilage sample under indentation. Results show that the fiber network dramatically increases interstitial fluid pressure and focuses it near the surface. Inhomogeneity in the tissue's composition also increases fluid pressure and reduces shear stress in the solid. Finally, a biphasic neo-Hookean material model, as is available in commercial finite element codes, does not capture important features of the intra-tissue response, e.g., distributions of interstitial fluid pressure and principal shear stress.
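A structure tensor built from an ODF can be approximated with straightforward sphere quadrature; the latitude-longitude midpoint grid below is a clarity-first sketch, not the quadrature or implementation used in the paper:

```python
import numpy as np

def structure_tensor(odf, n_theta=40, n_phi=80):
    """Approximate the normalized structure tensor, i.e. the integral of
    rho(n) * outer(n, n) over the unit sphere divided by the integral of
    rho(n), using midpoint quadrature on a latitude-longitude grid.
    odf is any orientation distribution function rho(n) of a unit vector
    n; the plain grid is chosen for clarity, not efficiency."""
    H = np.zeros((3, 3))
    total = 0.0
    d_omega = (np.pi / n_theta) * (2.0 * np.pi / n_phi)
    for th in (np.arange(n_theta) + 0.5) * np.pi / n_theta:
        for ph in (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi:
            n = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            w = odf(n) * np.sin(th) * d_omega  # sin(theta): area element
            H += w * np.outer(n, n)
            total += w
    return H / total
```

An isotropic ODF recovers H = I/3, while an ODF concentrated along one direction pushes H toward the rank-one projector on that direction, which is how the permeability picks up its anisotropy.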
Thermal regimes are a critical factor in models predicting effects of watershed management activities on fish habitat suitability. We have assembled a database of lotic temperature time series across New England (> 7000 station-year combinations) from state and Federal data s...
DEFF Research Database (Denmark)
Iglesias, J. E.; Sabuncu, M. R.; Van Leemput, Koen
2012-01-01
Many successful segmentation algorithms are based on Bayesian models in which prior anatomical knowledge is combined with the available image information. However, these methods typically have many free parameters that are estimated to obtain point estimates only, whereas a faithful Bayesian anal...
Teaching Note--Incorporating Journal Clubs into Social Work Education: An Exploratory Model
Moore, Megan; Fawley-King, Kya; Stone, Susan I.; Accomazzo, Sarah M.
2013-01-01
This article outlines the implementation of a journal club for master's and doctoral social work students interested in mental health practice. It defines educational journal clubs and discusses the history of journal clubs in medical education and the applicability of the model to social work education. The feasibility of implementing…
Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib
2016-01-01
This study examined 30 first-year graphic design students' artworks through critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were examined in the form of mean scores and frequencies to determine students' performance in critical ability.…