Kinetic Models for Topological Nearest-Neighbor Interactions
Blanchet, Adrien; Degond, Pierre
2017-12-01
We consider systems of agents interacting through topological interactions, which have been shown to play an important part in animal and human behavior. More precisely, the system consists of a finite number of particles characterized by their positions and velocities. At random times a randomly chosen particle, the follower, adopts the velocity of its closest neighbor, the leader. We study the limit as the system size goes to infinity and, under the assumption of propagation of chaos, show that the limit kinetic equation is a non-standard spatial diffusion equation for the particle distribution function. We also study the case wherein the particles interact with their K closest neighbors and show that the corresponding kinetic equation is the same. Finally, we prove that these models can be seen as a singular limit of the smooth rank-based model previously studied in Blanchet and Degond (J Stat Phys 163:41-60, 2016). The proofs are based on a combinatorial interpretation of the rank as well as some concentration of measure arguments.
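The follower/leader update rule described above is straightforward to simulate. Below is a minimal one-dimensional sketch (the dimension, particle count, event count, and time step are illustrative assumptions, not taken from the paper): at each random event a uniformly chosen follower copies the velocity of its nearest neighbor, so the set of distinct velocities can only shrink over time.

```python
import random

def step(positions, velocities, rng):
    """One interaction event: a randomly chosen follower adopts the
    velocity of its nearest neighbor, the leader."""
    n = len(positions)
    follower = rng.randrange(n)
    # nearest neighbor by distance, excluding the follower itself
    leader = min((j for j in range(n) if j != follower),
                 key=lambda j: abs(positions[j] - positions[follower]))
    velocities[follower] = velocities[leader]

rng = random.Random(0)
pos = [rng.uniform(0.0, 1.0) for _ in range(20)]
vel = [rng.uniform(-1.0, 1.0) for _ in range(20)]
initial = set(vel)

for _ in range(500):
    step(pos, vel, rng)
    # free transport between interaction events (illustrative time step)
    pos = [p + 0.01 * v for p, v in zip(pos, vel)]

assert set(vel) <= initial            # velocities are only ever copied
assert len(set(vel)) < len(initial)   # consensus builds up over time
```

The shrinking set of surviving velocities is the discrete mechanism behind the velocity-jump process whose many-particle limit the paper analyzes.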
Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2009-01-01
Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting
Zhang, Ningning; Lin, Aijing; Shang, Pengjian
2017-07-01
In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbor (KNN) algorithms are increasingly widely applied to prediction tasks across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot accurately reveal the characteristic information of the signal because of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved version of EMD, resolves this weakness by adding white noise to the original data. With EEMD, components with true physical meaning can be extracted from the time series. Exploiting the advantages of EEMD and MKNN, the proposed EEMD-MKNN model achieves high predictive precision for short-term forecasting. Moreover, we extend this methodology to two dimensions to forecast the closing price and high price of four stock indices (NAS, S&P500, DJI and STI) at the same time. The results indicate that the proposed EEMD-MKNN model has higher forecast precision than the EMD-KNN, KNN and ARIMA methods.
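The KNN forecasting stage admits a compact sketch. EEMD itself needs a signal-processing library, so the toy below shows only the nearest-neighbor pattern matching on a single component: embed the series into length-m windows, find the k historical windows closest to the most recent one, and average their successors. The window length and k are illustrative assumptions, not the paper's settings.

```python
def knn_forecast(series, m=3, k=3):
    """Predict the next value of `series`: find the k historical length-m
    windows closest (squared Euclidean) to the most recent window and
    average the values that immediately followed them."""
    query = series[-m:]
    candidates = []
    for i in range(len(series) - m):
        window = series[i:i + m]
        dist = sum((a - b) ** 2 for a, b in zip(window, query))
        candidates.append((dist, series[i + m]))  # (distance, successor value)
    candidates.sort(key=lambda t: t[0])
    top = candidates[:k]
    return sum(v for _, v in top) / len(top)

# on a purely periodic series the matched windows pinpoint the next value
assert knn_forecast([0, 1, 2, 3] * 5) == 0.0
```

In the paper's two-dimensional extension the same matching would be done on vectors of (closing price, high price) pairs rather than scalars.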
Pini, Maria Gloria; Rettori, Angelo
1993-08-01
The thermodynamical properties of an alternating-spin (S,s) one-dimensional (1D) Ising model with competing nearest- and next-nearest-neighbor interactions are exactly calculated using a transfer-matrix technique. In contrast to the case S=s=1/2, previously investigated by Harada, the alternation of different spins (S≠s) along the chain is found to give rise to two-peaked static structure factors, signaling the coexistence of different short-range-order configurations. The relevance of our calculations to recent experimental data by Gatteschi et al. in quasi-1D molecular magnetic materials, R(hfac)3 NITEt (R=Gd, Tb, Dy, Ho, Er, . . .), is discussed; hfac is hexafluoroacetylacetonate and NITEt is 2-ethyl-4,4,5,5-tetramethyl-4,5-dihydro-1H-imidazolyl-1-oxyl-3-oxide.
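The transfer-matrix technique can be illustrated on the simplest case, a uniform spin-1/2 nearest-neighbor Ising chain at zero field (the alternating (S,s) chain with competing interactions studied above only requires a larger transfer matrix). The sketch below checks the eigenvalue formula against brute-force enumeration over all spin configurations; parameters are arbitrary.

```python
import itertools
import math

def transfer_matrix_Z(J, beta, N):
    """Partition function of a periodic spin-1/2 Ising chain with
    H = -J * sum_i s_i s_{i+1}: Z = Tr(T^N) = lam_+^N + lam_-^N, where
    lam_+ = 2*cosh(beta*J) and lam_- = 2*sinh(beta*J) are the
    eigenvalues of the 2x2 transfer matrix T[s,s'] = exp(beta*J*s*s')."""
    return (2 * math.cosh(beta * J)) ** N + (2 * math.sinh(beta * J)) ** N

def brute_force_Z(J, beta, N):
    """Direct sum over all 2^N spin configurations, for validation."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        Z += math.exp(-beta * E)
    return Z

Z_bf = brute_force_Z(1.0, 0.7, 8)
assert abs(transfer_matrix_Z(1.0, 0.7, 8) - Z_bf) < 1e-9 * Z_bf
```

Free energy, entropy, and structure factors then follow from derivatives of log Z, exactly as in the paper but with a matrix adapted to the alternating spins.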
Rivas, Elena; Lang, Raymond; Eddy, Sean R
2012-02-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
Chin, Wen Cheong; Lee, Min Cherng; Yap, Grace Lee Ching
2016-01-01
High frequency financial data modelling has become one of the important research areas in the field of financial econometrics. However, possible structural breaks in volatile financial time series often trigger inconsistency issues in volatility estimation. In this study, we propose a structural break heavy-tailed heterogeneous autoregressive (HAR) volatility econometric model enhanced with jump-robust estimators. The breakpoints in the volatility are captured by dummy variables after detection by the Bai-Perron sequential multiple-breakpoint procedure. To further deal with possible abrupt jumps in the volatility, the jump-robust volatility estimators are composed using the nearest neighbor truncation approach, namely the minimum and median realized volatility. With the structural break improvements in both the models and the volatility estimators, the empirical findings show that the modified HAR model provides the best in-sample and out-of-sample forecast performance as compared with the standard HAR models. Accurate volatility forecasts have a direct influence on risk management and investment portfolio analysis.
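The nearest neighbor truncation idea is that each squared return is replaced by a minimum over two, or a median over three, adjacent absolute returns, so an isolated jump never enters the sum on its own. A sketch follows; the scaling constants are the asymptotic ones usually quoted for the MinRV and MedRV estimators and are stated here as an assumption rather than taken from this paper.

```python
import math

def min_rv(returns):
    """MinRV: each term is the squared minimum of two adjacent absolute
    returns, which discards an isolated jump."""
    N = len(returns)
    s = sum(min(abs(returns[i]), abs(returns[i + 1])) ** 2
            for i in range(N - 1))
    return (math.pi / (math.pi - 2)) * (N / (N - 1)) * s

def med_rv(returns):
    """MedRV: each term is the squared median of three adjacent absolute
    returns, robust to a single jump inside each triple."""
    N = len(returns)
    s = sum(sorted((abs(returns[i - 1]), abs(returns[i]), abs(returns[i + 1])))[1] ** 2
            for i in range(1, N - 1))
    return (math.pi / (6 - 4 * math.sqrt(3) + math.pi)) * (N / (N - 2)) * s

# a single large jump blows up plain realized variance but barely
# moves the truncated estimators
smooth = [0.01 * (-1) ** i for i in range(100)]
jumpy = list(smooth)
jumpy[50] = 0.5
rv_smooth = sum(r * r for r in smooth)
rv_jumpy = sum(r * r for r in jumpy)
assert rv_jumpy > 20 * rv_smooth
assert med_rv(jumpy) < 2 * med_rv(smooth)
assert min_rv(jumpy) < 2 * min_rv(smooth)
```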
Third nearest neighbor parameterized tight binding model for graphene nano-ribbons
Directory of Open Access Journals (Sweden)
Van-Truong Tran
2017-07-01
The existing tight binding models can very well reproduce the ab initio band structure of a 2D graphene sheet. For graphene nano-ribbons (GNRs), the current sets of tight binding parameters can successfully describe the semi-conducting behavior of all armchair GNRs. However, they still fail to accurately reproduce the slope of the bands, which is directly associated with the group velocity and the effective mass of electrons. In this work, both density functional theory and tight binding calculations were performed, and a new set of tight binding parameters up to the third nearest neighbors, including overlap terms, is introduced. The results obtained with this model offer excellent agreement with the predictions of density functional theory in most cases of ribbon structures, even in the high-energy region. Moreover, this set reproduces the electron-hole asymmetry manifested in density functional theory results. Relevant outcomes are also achieved for armchair ribbons of various widths as well as for zigzag structures, thus opening a route for multi-scale atomistic simulation of large systems that cannot be treated with density functional theory.
Jay M. Ver Hoef; Hailemariam Temesgen; Sergio Gómez
2013-01-01
Forest surveys provide critical information for many diverse interests. Data are often collected from samples, and from these samples, maps of resources and estimates of areal totals or averages are required. In this paper, two approaches for mapping and estimating totals, the spatial linear model (SLM) and k-NN (k-Nearest Neighbor), are compared, theoretically,...
Jurčišinová, E.; Jurčišin, M.
2018-02-01
The influence of the next-nearest-neighbor interaction on the properties of geometrically frustrated antiferromagnetic systems is investigated in the framework of the exactly solvable antiferromagnetic spin-1/2 Ising model in an external magnetic field on the square-kagome recursive lattice, where the next-nearest-neighbor interaction acts between sites within each elementary square of the lattice. The thermodynamic properties of the model are investigated in detail. It is shown that the competition between the nearest-neighbor antiferromagnetic interaction and the next-nearest-neighbor ferromagnetic interaction changes the properties of the single-point ground states but does not change the frustrated character of the basic model. On the other hand, the presence of an antiferromagnetic next-nearest-neighbor interaction enhances the frustration effects, with the formation of additional plateau and single-point ground states at low temperatures. Exact expressions for the magnetizations and residual entropies of all ground states of the model are found. The model exhibits various ground states with the same value of magnetization but different macroscopic degeneracies, as well as ground states with different values of magnetization but the same residual entropy. The specific heat capacity is investigated, and it is shown that the model exhibits Schottky-type anomaly behavior in the vicinity of each single-point ground-state value of the magnetic field. The formation of a field-induced double-peak structure of the specific heat capacity at low temperatures is demonstrated, and it is shown that its existence is directly related to the presence of highly macroscopically degenerate single-point ground states in the model.
International Nuclear Information System (INIS)
Gong, Longyan; Feng, Yan; Ding, Yougen
2017-01-01
Highlights:
• Quasiperiodic lattice models with next-nearest-neighbor hopping are studied.
• Shannon information entropies are used to reflect state localization properties.
• Phase diagrams are obtained for the inverse bronze and golden means, respectively.
• Our studies present a more complete picture than existing works.
Abstract: We explore the reduced relative Shannon information entropies S_R for a quasiperiodic lattice model with nearest- and next-nearest-neighbor hopping, where an irrational number enters the mathematical expression of the incommensurate on-site potentials. Based on S_R, we unveil the phase diagrams for two irrationalities, the inverse bronze mean and the inverse golden mean. The corresponding phase diagrams include regions of a purely localized phase, a purely delocalized phase, a purely critical phase, and regions with mobility edges. The boundaries of the different regions depend on the value of the irrational number. These studies present a more complete picture than existing works.
Wang, L.F.; Bai, L.Y.
2013-01-01
To improve the precision of quantitative structure-activity relationship (QSAR) modeling for aromatic carboxylic acid derivative insect repellents, a novel nonlinear combination forecast model was proposed that integrates support vector regression (SVR) and K-nearest neighbor (KNN): First, the optimal kernel function is searched for and molecular descriptors are nonlinearly selected by SVR under the minimum-MSE rule. Second, the effect of each descriptor on biological activity is illuminated by multi-round enforcement resistance-selection. Third, sub-models are constructed from the predicted values of different KNNs. The optimal kernel and the corresponding retained sub-models are then obtained through subtle selection. Finally, predictions are made with the leave-one-out (LOO) method on the basis of the reserved sub-models. Compared with previously widely used models, our work shows a significant improvement in modeling performance, which demonstrates the superiority of the present combination forecast model.
Lin, J.; Bartal, Y.; Uhrig, R.E.
1995-01-01
The importance of automatic diagnostic systems for nuclear power plants (NPPs) has been discussed in numerous studies, and various such systems have been proposed, but none of them were designed to predict the severity of the diagnosed scenario. Here, a classification and severity prediction system for NPP transients is developed. The system is based on nearest-neighbor modeling, which is optimized using genetic algorithms. The optimization process is used to determine the most important variables for each of the transient types analyzed. An enhanced version of the genetic algorithms is used in which a local downhill search is performed to further increase the accuracy achieved. The genetic algorithm search was implemented on a massively parallel supercomputer, the KSR1-64, to perform the analysis in a reasonable time. The data for this study were supplied by the high-fidelity simulator of the San Onofre unit 1 pressurized water reactor.
Some Observations about the Nearest-Neighbor Model of the Error Threshold
Gerrish, Philip J.
2009-01-01
I explore some aspects of the 'error threshold' - a critical mutation rate above which a population is nonviable. The phase transition that occurs as mutation rate crosses this threshold has been shown to be mathematically equivalent to the loss of ferromagnetism that occurs as temperature exceeds the Curie point. I will describe some refinements and new results based on the simplest of these mutation models, will discuss the commonly unperceived robustness of this simple model, and I will show some preliminary results comparing qualitative predictions with simulations of finite populations adapting at high mutation rates. I will talk about how these qualitative predictions are relevant to biomedical science and will discuss how my colleagues and I are looking for phase-transition signatures in real populations of Escherichia coli that go extinct as a result of excessive mutation.
Quasi-phases and pseudo-transitions in one-dimensional models with nearest neighbor interactions
de Souza, S. M.; Rojas, Onofre
2018-01-01
There are some particular one-dimensional models, such as Ising-Heisenberg spin models with a variety of chain structures, which exhibit unexpected behaviors quite similar to first- and second-order phase transitions and which could naively be confused with an authentic phase transition. In the first derivatives of the free energy, such as the entropy, magnetization, and internal energy, a "sudden" jump that closely resembles a first-order phase transition occurs at finite temperature. However, the second derivatives of the free energy, such as the specific heat and the magnetic susceptibility, behave at finite temperature quite similarly to a second-order phase transition, exhibiting an astonishingly sharp and fine peak. The correlation length also confirms the evidence of this pseudo-transition, with a sharp peak at the pseudo-critical temperature. We also present the necessary conditions for the emergence of these quasi-phases and pseudo-transitions.
Hole motion in the t-J and Hubbard models: Effect of a next-nearest-neighbor hopping
Gagliano, E.; Bacci, S.; Dagotto, E.
1990-01-01
Using exact diagonalization techniques, we study one dynamical hole in the two-dimensional t-J and Hubbard models on a square lattice including a next-nearest-neighbor hopping t'. We present the phase diagram in the parameter space (J/t, t'/t), discussing the ground-state properties of the hole. At J=0, a crossing of levels exists at some value of t' separating a ferromagnetic from an antiferromagnetic ground state. For nonzero J, at least four different regions appear where the system behaves like an antiferromagnet or a (not fully saturated) ferromagnet. We study the quasiparticle behavior of the hole, showing that for small values of |t'| the previously presented string picture is still valid. We also find that, for a realistic set of parameters derived from the Cu-O Hamiltonian, the hole has momentum (π/2,π/2), suggesting an enhancement of the p-wave superconducting mode due to second-neighbor interactions in the spin-bag picture. Results for the t-t'-U model are also discussed, with conclusions similar to those for the t-t'-J model. In general, we find that t'=0 is not a singular point of these models.
Weide Li
2017-05-01
Electric load forecasting plays an important role in electricity markets and power systems. Because electric load time series are complicated and nonlinear, it is very difficult to achieve a satisfactory forecasting accuracy. In this paper, a hybrid model, Wavelet Denoising-Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EWKM), which combines k-Nearest Neighbor (KNN) and Extreme Learning Machine (ELM) based on a wavelet denoising technique, is proposed for short-term load forecasting. The proposed hybrid model first decomposes the time series into a low-frequency main signal and several detail signals associated with high frequencies, then uses KNN to determine the independent and dependent variables from the low-frequency signal. Finally, ELM is used to learn the non-linear relationship between these variables and produce the final prediction of the electric load. Compared with three other models, Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EKM), Wavelet Denoising-Extreme Learning Machine (WKM) and Wavelet Denoising-Back Propagation Neural Network optimized by k-Nearest Neighbor Regression (WNNM), the proposed model improves the accuracy efficiently. New South Wales is the economic powerhouse of Australia, so we use the proposed model to predict electric demand for that region; accurate prediction there has significant practical value.
Frog sound identification using extended k-nearest neighbor classifier
Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati
2017-09-01
Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual-sharing-of-neighborhood concepts, with the aim of improving the classification performance. It makes a prediction based both on which training samples are the nearest neighbors of the testing sample and on which training samples consider the testing sample as one of their own nearest neighbors. To evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, in all cases.
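The mutual-neighborhood idea can be sketched as follows: let a test point be voted on both by its own k nearest training points and by every training point that would count the test point among its k nearest. This is one plausible reading of the concept, not the authors' exact EKNN; all data and the helper names are illustrative.

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def eknn_predict(train, labels, x, k=3):
    """Classify x by majority vote over (a) its k nearest training
    points and (b) training points whose own k-nearest radius reaches x
    (a sketch of the mutual-neighborhood idea)."""
    n = len(train)
    order = sorted(range(n), key=lambda i: euclid(train[i], x))
    voters = set(order[:k])                       # (a) neighbors of x
    for i in range(n):                            # (b) who counts x as a neighbor
        dists = sorted(euclid(train[i], train[j]) for j in range(n) if j != i)
        if euclid(train[i], x) <= dists[k - 1]:
            voters.add(i)
    votes = {}
    for i in voters:
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes, key=votes.get)

train = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0), (0.9, 1.0), (1.0, 0.9)]
labels = ["a", "a", "a", "b", "b", "b"]
assert eknn_predict(train, labels, (0.05, 0.05), k=2) == "a"
assert eknn_predict(train, labels, (0.95, 0.95), k=2) == "b"
```

In the paper the inputs would be MFCC feature vectors extracted from the segmented frog calls rather than 2-D toy points.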
Dimensionality reduction with unsupervised nearest neighbors
Kramer, Oliver
2013-01-01
This book is devoted to a novel approach for dimensionality reduction based on the famous nearest neighbor method, a powerful classification and regression approach. It starts with an introduction to machine learning concepts and a real-world application from the energy domain. Then, unsupervised nearest neighbors (UNN) is introduced as an efficient iterative method for dimensionality reduction. Various UNN models are developed step by step, reaching from a simple iterative strategy for discrete latent spaces to a stochastic kernel-based algorithm for learning submanifolds with independent parameterizations. Extensions that allow the embedding of incomplete and noisy patterns are introduced. Various optimization approaches are compared, from evolutionary to swarm-based heuristics. Experimental comparisons to related methodologies, taking into account artificial test data sets and also real-world data, demonstrate the behavior of UNN in practical scenarios. The book contains numerous color figures to illustr...
Hatsugai, Y.; Kohmoto, M.
1992-01-01
We investigate the energy spectrum and the Hall effect of electrons on the square lattice with next-nearest-neighbor (NNN) hopping as well as nearest-neighbor hopping. General rational values of magnetic flux per unit cell φ=p/q are considered. In the absence of NNN hopping, the two bands at the center touch for q even, so the Hall conductance is not well defined at half filling. An energy gap opens there when NNN hopping is introduced. When φ=1/2, the NNN model coincides with the mean field Hamiltonian for the chiral spin state proposed by Wen, Wilczek and Zee (WWZ). The Hall conductance is calculated from the Diophantine equation and the E-φ diagram. We find that gaps close for other fillings at certain values of the NNN hopping strength. The quantized value of the Hall conductance changes once this phenomenon occurs. In a mean field treatment of the t-J model, the effective Hamiltonian is the same as our NNN model. From this point of view, the statistics of the quasi-particles is not always semionic and depends on the filling and the strength of the mean field.
Nearest neighbors by neighborhood counting.
Wang, Hui
2006-06-01
Finding nearest neighbors is a general idea that underlies many artificial intelligence tasks, including machine learning, data mining, natural language understanding, and information retrieval. This idea is explicitly used in the k-nearest neighbors algorithm (kNN), a popular classification method. In this paper, this idea is adopted in the development of a general methodology, neighborhood counting, for devising similarity functions. We turn our focus from neighbors to neighborhoods, regions in the data space that cover the data point in question. To measure the similarity between two data points, we consider all neighborhoods that cover both data points, and we propose to use the number of such neighborhoods as a measure of similarity. Neighborhoods can be defined for different types of data in different ways. Here, we consider one definition of neighborhood for multivariate data and derive a formula for the resulting similarity, called the neighborhood counting measure, or NCM. NCM was tested experimentally in the framework of kNN. Experiments show that NCM is generally comparable to VDM and its variants, the state-of-the-art distance functions for multivariate data, and at the same time is consistently better for relatively large k values. Additionally, NCM consistently outperforms HEOM (a mixture of Euclidean and Hamming distances), the "standard" and most widely used distance function for multivariate data. NCM has a computational complexity of the same order as the standard Euclidean distance function, is task independent, and works for numerical and categorical data in a conceptually uniform way. The neighborhood counting methodology is thus shown experimentally to be sound for multivariate data. We hope it will work for other types of data as well.
DEFF Research Database (Denmark)
Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune H.
2014-01-01
In combined PET/MR, attenuation correction (AC) is performed indirectly based on the available MR image information. Metal implant-induced susceptibility artifacts and subsequent signal voids challenge MR-based AC. Several papers acknowledge the problem in PET attenuation correction when dental...... artifacts are ignored, but none of them attempts to solve the problem. We propose a clinically feasible correction method which combines Active Shape Models (ASM) and k-Nearest-Neighbors (kNN) into a simple approach which finds and corrects the dental artifacts within the surface boundaries of the patient...... anatomy. ASM is used to locate a number of landmarks in the T1-weighted MR image of a new patient. We calculate a vector of offsets from each voxel within a signal void to each of the landmarks. We then use kNN to classify each voxel as belonging to an artifact or an actual signal void using this offset
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
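FLANN's randomized k-d forests build on the classical k-d tree. As a self-contained illustration of the underlying data structure (this is a textbook exact search, not FLANN's approximate implementation, and all names are ours), here is a k-d tree with branch-and-bound pruning, checked against brute force.

```python
import random

def build_kdtree(points, depth=0):
    """Recursively split the points on alternating coordinates; FLANN's
    randomized forests build many such trees on random split dimensions."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nn_search(node, query, best=None):
    """Exact nearest neighbor: descend toward the query, then revisit the
    far branch only if the splitting plane is closer than the best match."""
    if node is None:
        return best
    d = sum((a - b) ** 2 for a, b in zip(node["point"], query))
    if best is None or d < best[0]:
        best = (d, node["point"])
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nn_search(near, query, best)
    if diff * diff < best[0]:   # the far side could still hold a closer point
        best = nn_search(far, query, best)
    return best

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(200)]
tree = build_kdtree(pts)
q = (0.5, 0.5)
brute = min(pts, key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
assert nn_search(tree, q)[1] == brute
```

Approximate variants such as FLANN's trade the far-branch revisits for a bounded number of leaf inspections, which is what makes them scale to high-dimensional data.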
Sznajd, J.
2016-12-01
The linear perturbation renormalization group (LPRG) is used to study the phase transition of the weakly coupled Ising chains with intrachain (J ) and interchain nearest-neighbor (J1) and next-nearest-neighbor (J2) interactions forming the triangular and rectangular lattices in a field. The phase diagrams with the frustration point at J2=-J1/2 for a rectangular lattice and J2=-J1 for a triangular lattice have been found. The LPRG calculations support the idea that the phase transition is always continuous except for the frustration point and is accompanied by a divergence of the specific heat. For the antiferromagnetic chains, the external field does not change substantially the shape of the phase diagram. The critical temperature is suppressed to zero according to the power law when approaching the frustration point with an exponent dependent on the value of the field.
Lectures on the nearest neighbor method
Biau, Gérard
2015-01-01
This text presents a wide-ranging and rigorous overview of nearest neighbor methods, one of the most important paradigms in machine learning. Now in one self-contained volume, this book systematically covers key statistical, probabilistic, combinatorial and geometric ideas for understanding, analyzing and developing nearest neighbor methods. Gérard Biau is a professor at Université Pierre et Marie Curie (Paris). Luc Devroye is a professor at the School of Computer Science at McGill University (Montreal).
Common Nearest Neighbor Clustering—A Benchmark
Oliver Lemke
2018-02-01
Cluster analyses are often conducted with the goal of characterizing an underlying probability density, for which the data-point density serves as an estimate. We here test and benchmark the common nearest neighbor (CNN) cluster algorithm. This algorithm assigns a spherical neighborhood of radius R to each data point and estimates the data-point density between two data points as the number of data points N in the overlapping region of their neighborhoods (step 1). The main principle in the CNN cluster algorithm is cluster growing: clusters are grown by sequentially adding data points, which effectively positions the border of the clusters along an iso-surface of the underlying probability density. This yields a strict partitioning with outliers, in which each cluster represents a peak in the underlying probability density, termed a core set (step 2). The removal of the outliers on the basis of a threshold criterion is optional (step 3). The benchmark datasets address a series of typical challenges, including datasets with a very high dimensional state space and datasets in which the cluster centroids are aligned along an underlying structure (Birch sets). The performance of the CNN algorithm is evaluated with respect to these challenges. The results indicate that the CNN cluster algorithm can be useful in a wide range of settings. Cluster algorithms are particularly important for the analysis of molecular dynamics (MD) simulations. We demonstrate how the CNN cluster results can be used as a discretization of the molecular state space for the construction of a core-set model of the MD, improving the accuracy compared to conventional full-partitioning models. The software for the CNN clustering is available on GitHub.
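The three steps described above translate almost directly into code. In the sketch below the radius R, the overlap threshold N, and the exact connectivity rule are our reading of the description, not the benchmarked implementation: clusters grow by adding points whose neighborhoods share at least N data points with a cluster member, and isolated points are left as outliers.

```python
def cnn_cluster(points, R, N):
    """Common-nearest-neighbor clustering sketch: i and j are
    density-connected when j lies in i's radius-R neighborhood and at
    least N data points sit in the overlap of their neighborhoods
    (step 1). Clusters are grown by sequentially adding connected points
    (step 2); points that connect to nothing are labeled -1, i.e. treated
    as outliers (step 3)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    n = len(points)
    neigh = [{j for j in range(n)
              if j != i and d2(points[i], points[j]) <= R * R}
             for i in range(n)]
    def connected(i, j):
        return j in neigh[i] and len(neigh[i] & neigh[j]) >= N
    labels = [None] * n
    cluster = 0
    for seed in range(n):
        if labels[seed] is not None:
            continue
        labels[seed] = cluster
        stack, members = [seed], [seed]
        while stack:                      # grow the cluster from the seed
            i = stack.pop()
            for j in range(n):
                if labels[j] is None and connected(i, j):
                    labels[j] = cluster
                    stack.append(j)
                    members.append(j)
        if len(members) == 1:
            labels[seed] = -1             # isolated point -> outlier
        else:
            cluster += 1
    return labels

blob_a = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (0.05, 0.05)]
blob_b = [(x + 5.0, y + 5.0) for x, y in blob_a]
labels = cnn_cluster(blob_a + blob_b + [(10.0, 0.0)], R=1.0, N=2)
assert labels[:5] == [0] * 5
assert labels[5:10] == [1] * 5
assert labels[10] == -1
```

Because membership depends on the density in the neighborhood overlap rather than on a distance to a centroid, the cluster borders follow iso-surfaces of the estimated density, as described above.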
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos
2010-01-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii
Directory of Open Access Journals (Sweden)
Zhen Liu
2017-11-01
Full Text Available The insulated gate bipolar transistor (IGBT) is a kind of excellent-performance switching device used widely in power electronic systems. How to estimate the remaining useful life (RUL) of an IGBT to ensure the safety and reliability of the power electronics system is currently a challenging issue in the field of IGBT reliability. The aim of this paper is to develop a prognostic technique for estimating IGBTs’ RUL. There is a need for an efficient prognostic algorithm that is able to support in-situ decision-making. In this paper, a novel prediction model with a complete structure based on optimally pruned extreme learning machine (OPELM) and Volterra series is proposed to track the IGBT’s degradation trace and estimate its RUL; we refer to this model as the Volterra k-nearest neighbor OPELM prediction (VKOPP) model. This model uses the minimum entropy rate method and Volterra series to reconstruct the phase space for IGBTs’ ageing samples, and a new weight update algorithm, which can effectively reduce the influence of outliers and noise, is utilized to establish the VKOPP network; then a combination of the k-nearest neighbor method (KNN) and the least squares estimation (LSE) method is used to calculate the output weights of OPELM and predict the RUL of the IGBT. The prognostic results show that the proposed approach can predict the RUL of IGBT modules with small error and achieve higher prediction precision and lower time cost than some classic prediction approaches.
Text Categorization Using Weight Adjusted k-Nearest Neighbor Classification
National Research Council Canada - National Science Library
Han, Euihong; Karypis, George; Kumar, Vipin
1999-01-01
.... The authors present a nearest neighbor classification scheme for text categorization in which the importance of discriminating words is learned using mutual information and weight adjustment techniques...
The Islands Approach to Nearest Neighbor Querying in Spatial Networks
DEFF Research Database (Denmark)
Huang, Xuegang; Jensen, Christian Søndergaard; Saltenis, Simonas
2005-01-01
, and versatile approach to k nearest neighbor computation that obviates the need for using several k nearest neighbor approaches for supporting a single service scenario. The experimental comparison with the existing techniques uses real-world road network data and considers both I/O and CPU performance...
Blel, Sonia; Hamouda, Ajmi BH.; Mahjoub, B.; Einstein, T. L.
2017-02-01
In this paper we explore the meandering instability of vicinal steps with a kinetic Monte Carlo (kMC) simulation model including attractive next-nearest-neighbor (NNN) interactions. kMC simulations show that increasing the NNN interaction strength leads to a considerable reduction of the meandering wavelength and to a weaker dependence of the wavelength on the deposition rate F. The dependences of the meandering wavelength on the temperature and the deposition rate obtained with simulations are in good quantitative agreement with the experimental results on the meandering instability of Cu(0 2 24) [T. Maroutian et al., Phys. Rev. B 64, 165401 (2001), 10.1103/PhysRevB.64.165401]. The effective step stiffness is found to depend not only on the strength of the NNN interactions and the Ehrlich-Schwoebel barrier, but also on F. We argue that attractive NNN interactions intensify the incorporation of adatoms at step edges and enhance step roughening. Competition between NNN and nearest-neighbor interactions results in an alternative form of meandering instability which we call "roughening-limited" growth, rather than the attachment-detachment-limited growth that governs the Bales-Zangwill instability. The computed effective wavelength and effective stiffness behave as λ_eff ~ F^(-q) and β̃_eff ~ F^(-p), respectively, with q ≈ p/2.
Introduction to machine learning: k-nearest neighbors.
Zhang, Zhongheng
2016-06-01
Machine learning techniques have been widely used in many scientific fields, but their use in the medical literature is limited, partly because of technical difficulties. k-nearest neighbors (kNN) is a simple method of machine learning. The article introduces some basic ideas underlying the kNN algorithm, and then focuses on how to perform kNN modeling with R. The dataset should be prepared before running the knn() function in R. After prediction of the outcome with the kNN algorithm, the diagnostic performance of the model should be checked. Average accuracy is the most widely used statistic to reflect the performance of the kNN algorithm. Factors such as the k value, the distance calculation and the choice of appropriate predictors all have significant impact on the model performance.
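The article demonstrates kNN with R's knn(); the same majority-vote idea can be written out in a few lines of any language. The following is a minimal sketch assuming Euclidean distance and toy data (the function name and dataset are illustrative, not the article's code):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance."""
    train_X = np.asarray(train_X, dtype=float)
    d = np.linalg.norm(train_X - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]          # indices of the k closest points
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```

As the abstract notes, the choice of k, the distance calculation, and the predictors fed into `train_X` are exactly the levers that determine this model's performance.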
Implementation of Nearest Neighbor using HSV to Identify Skin Disease
Gerhana, Y. A.; Zulfikar, W. B.; Ramdani, A. H.; Ramdhani, M. A.
2018-01-01
Today, Android is one of the most widely used operating systems in the world. Most Android devices have a camera that can capture images, and this feature can be exploited to identify skin disease. Skin disease is a health problem caused by bacteria, fungi, and viruses, and its symptoms are usually visible. In this work, the symptoms captured as an image carry HSV values in every pixel. The HSV values are extracted and used to compute Euclidean distances, which the nearest neighbor algorithm compares between the test image and the training images; the closest match decides the class label, i.e., the type of skin disease. Testing shows that 166 of 200 cases (about 83%) are identified correctly. Factors such as the number of training images and the quality of the Android device's camera influence the result of the classification model.
Dimensional testing for reverse k-nearest neighbor search
DEFF Research Database (Denmark)
Casanova, Guillaume; Englmeier, Elias; Houle, Michael E.
2017-01-01
Given a query object q, reverse k-nearest neighbor (RkNN) search aims to locate those objects of the database that have q among their k-nearest neighbors. In this paper, we propose an approximation method for solving RkNN queries, where the pruning operations and termination tests are guided by a characterization of the intrinsic dimensionality of the data. The method can accommodate any index structure supporting incremental (forward) nearest-neighbor search for the generation and verification of candidates, while avoiding impractically-high preprocessing costs. We also provide experimental evidence...
Takashi, Tonegawa; Makoto, Kaburagi; Takeshi, Nakao; Department of Physics, Faculty of Science, Kobe University; Faculty of Cross-Cultural Studies, Kobe University; Department of Physics, Faculty of Science, Kobe University
1995-01-01
The Haldane to dimer phase transition is studied in the spin-1 Haldane system with bond-alternating nearest-neighbor and uniform next-nearest-neighbor exchange interactions, where both interactions are antiferromagnetic and thus compete with each other. By using a method of exact diagonalization, the ground-state phase diagram on the ratio of the next-nearest-neighbor interaction constant to the nearest-neighbor one versus the bond-alternation parameter of the nearest-neighbor interactions is...
Multiple k Nearest Neighbor Query Processing in Spatial Network Databases
DEFF Research Database (Denmark)
Xuegang, Huang; Jensen, Christian Søndergaard; Saltenis, Simonas
2006-01-01
This paper concerns the efficient processing of multiple k nearest neighbor queries in a road-network setting. The assumed setting covers a range of scenarios such as the one where a large population of mobile service users that are constrained to a road network issue nearest-neighbor queries for points of interest that are accessible via the road network. Given multiple k nearest neighbor queries, the paper proposes progressive techniques that selectively cache query results in main memory and subsequently reuse these for query processing. The paper initially proposes techniques for the case where an upper bound on k is known a priori and then extends the techniques to the case where this is not so. Based on empirical studies with real-world data, the paper offers insight into the circumstances under which the different proposed techniques can be used with advantage for multiple k nearest...
Recursive nearest neighbor search in a sparse and multiscale domain for comparing audio signals
DEFF Research Database (Denmark)
Sturm, Bob L.; Daudet, Laurent
2011-01-01
We investigate recursive nearest neighbor search in a sparse domain at the scale of audio signals. Essentially, to approximate the cosine distance between the signals we make pairwise comparisons between the elements of localized sparse models built from large and redundant multiscale dictionaries...
River Flow Prediction Using the Nearest Neighbor Probabilistic Ensemble Method
Directory of Open Access Journals (Sweden)
H. Sanikhani
2016-02-01
Full Text Available Introduction: In recent years, researchers have become interested in probabilistic forecasting of hydrologic variables such as river flow. A probabilistic approach aims at quantifying the prediction reliability through a probability distribution function or a prediction interval for the unknown future value. The evaluation of the uncertainty associated with the forecast is seen as fundamental information, not only to correctly assess the prediction, but also to compare forecasts from different methods and to evaluate actions and decisions conditionally on the expected values. Several probabilistic approaches have been proposed in the literature, including (1) methods that use resampling techniques to assess parameter and model uncertainty, such as the Metropolis algorithm or the Generalized Likelihood Uncertainty Estimation (GLUE) methodology for an application to runoff prediction, (2) methods based on processing the forecast errors of past data to produce the probability distributions of future values, and (3) methods that evaluate how the uncertainty propagates from the rainfall forecast to the river discharge prediction, as in the Bayesian forecasting system. Materials and Methods: In this study, two different probabilistic methods are used for river flow prediction, and the uncertainty related to the forecast is quantified. One approach is based on linear predictors; in the other, nearest neighbors are used. The nonlinear probabilistic ensemble can be used for nonlinear time series analysis using locally linear predictors, while the NNPE utilizes a method adapted for one-step-ahead nearest neighbor forecasting. In this regard, daily river discharge (twelve years) of the Dizaj and Mashin stations on the Baranduz-Chay basin in West Azerbaijan and the Zard-River basin in Khouzestan provinces were used, respectively. The first six years of data were used for fitting the model, the next three years for calibration, and the remaining three years for testing the models.
Nearest Neighbor Queries in Road Networks
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Kolar, Jan; Pedersen, Torben Bach
2003-01-01
in road networks. Such queries may be of use in many services. Specifically, we present an easily implementable data model that serves well as a foundation for such queries. We also present the design of a prototype system that implements the queries based on the data model. The algorithm used...
k-Nearest Neighbors Algorithm in Profiling Power Analysis Attacks
Directory of Open Access Journals (Sweden)
Z. Martinasek
2016-06-01
Full Text Available Power analysis presents the typical example of successful attacks against trusted cryptographic devices such as RFID (Radio-Frequency IDentification) tags and contact smart cards. In recent years, the cryptographic community has explored new approaches in power analysis based on machine learning models such as the Support Vector Machine (SVM), Random Forest (RF) and Multi-Layer Perceptron (MLP). In this paper, we made an extensive comparison of machine learning algorithms in power analysis. For this purpose, we implemented a verification program that always chooses the optimal settings of the individual machine learning models in order to obtain the best classification accuracy. In our research, we used three datasets, the first containing the power traces of an unprotected AES (Advanced Encryption Standard) implementation. The second and third datasets were created independently from publicly available power traces corresponding to a masked AES implementation (DPA Contest v4). The obtained results revealed some interesting facts, namely that an elementary k-NN (k-Nearest Neighbors) algorithm, which has not been commonly used in power analysis yet, shows great application potential in practice.
Nearest neighbor 3D segmentation with context features
Hristova, Evelin; Schulz, Heinrich; Brosch, Tom; Heinrich, Mattias P.; Nickisch, Hannes
2018-03-01
Automated and fast multi-label segmentation of medical images is challenging and clinically important. This paper builds upon a supervised machine learning framework that uses training data sets with dense organ annotations and vantage point trees to classify voxels in unseen images based on similarity of binary feature vectors extracted from the data. Without explicit model knowledge, the algorithm is applicable to different modalities and organs, and achieves high accuracy. The method is successfully tested on 70 abdominal CT and 42 pelvic MR images. With respect to ground truth, an average Dice overlap score of 0.76 for the CT segmentation of liver, spleen and kidneys is achieved. The mean score for the MR delineation of bladder, bones, prostate and rectum is 0.65. Additionally, we benchmark several variations of the main components of the method and reduce the computation time by up to 47% without significant loss of accuracy. The segmentation results are - for a nearest neighbor method - surprisingly accurate, robust as well as data and time efficient.
Secure Nearest Neighbor Query on Crowd-Sensing Data
Directory of Open Access Journals (Sweden)
Ke Cheng
2016-09-01
Full Text Available Nearest neighbor queries are fundamental in location-based services, and secure nearest neighbor queries mainly focus on how to securely and quickly retrieve the nearest neighbor in the outsourced cloud server. However, the previous big data system structure has changed because of crowd-sensing data. On the one hand, the sensing data terminals acting as data owners are numerous and mistrustful; on the other hand, in most cases the terminals find it difficult to carry out many security operations due to computation and storage capability constraints. In light of the Multi-Owner and Multi-User (MOMU) situation in the crowd-sensing data cloud environment, this paper presents a secure nearest neighbor query scheme based on a proxy server architecture, which is constructed from protocols for secure two-party computation and a secure Voronoi diagram algorithm. It not only preserves data confidentiality and query privacy but also effectively resists collusion between the cloud server and the data owners or users. Finally, extensive theoretical and experimental evaluations are presented to show that our proposed scheme achieves a superior balance between security and query performance compared to other schemes.
Nearest Neighbor Networks: clustering expression data based on gene neighborhoods
Directory of Open Access Journals (Sweden)
Olszewski Kellen L
2007-07-01
Full Text Available Abstract Background The availability of microarrays measuring thousands of genes simultaneously across hundreds of biological conditions represents an opportunity to understand both individual biological pathways and the integrated workings of the cell. However, translating this amount of data into biological insight remains a daunting task. An important initial step in the analysis of microarray data is clustering of genes with similar behavior. A number of classical techniques are commonly used to perform this task, particularly hierarchical and K-means clustering, and many novel approaches have been suggested recently. While these approaches are useful, they are not without drawbacks; these methods can find clusters in purely random data, and even clusters enriched for biological functions can be skewed towards a small number of processes (e.g. ribosomes). Results We developed Nearest Neighbor Networks (NNN), a graph-based algorithm to generate clusters of genes with similar expression profiles. This method produces clusters based on overlapping cliques within an interaction network generated from mutual nearest neighborhoods. This focus on nearest neighbors rather than on absolute distance measures allows us to capture clusters with high connectivity even when they are spatially separated, and requiring mutual nearest neighbors allows genes with no sufficiently similar partners to remain unclustered. We compared the clusters generated by NNN with those generated by eight other clustering methods. NNN was particularly successful at generating functionally coherent clusters with high precision, and these clusters generally represented a much broader selection of biological processes than those recovered by other methods. Conclusion The Nearest Neighbor Networks algorithm is a valuable clustering method that effectively groups genes that are likely to be functionally related. It is particularly attractive due to its simplicity, its success in the
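The mutual-nearest-neighborhood relation that NNN builds its interaction network from can be sketched as follows; the function name and the toy one-dimensional expression profiles are illustrative assumptions, and the full algorithm then searches for overlapping cliques in this graph rather than stopping at the edge set.

```python
import numpy as np

def mutual_nn_edges(X, k=3):
    """Return edges (i, j) with i < j such that i is among j's k nearest
    neighbors AND j is among i's k nearest neighbors; genes with no
    mutual partner get no edges and stay unclustered."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    nn = [set(np.argsort(d[i])[:k]) for i in range(len(X))]
    return {(i, j) for i in range(len(X)) for j in nn[i]
            if i < j and i in nn[j]}
```

Note how the relation is asymmetric before the mutuality requirement: a gene can name a neighbor that does not name it back, and only reciprocated pairs produce edges.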
[Galaxy/quasar classification based on nearest neighbor method].
Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun
2011-09-01
With the wide application of high-quality CCDs in celestial spectrum imagery and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are pouring in like torrential rain. Therefore, to utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various noise. Therefore, it is a typical problem to recognize these two types of spectra in automatic spectra classification. Furthermore, the utilized method, nearest neighbor, is one of the most typical, classic, mature algorithms in pattern recognition and data mining, and is often used as a benchmark in developing novel algorithms. For applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the superiority of NN is that this method does not need to be trained, which is useful in incremental learning and parallel computation in mass spectral data processing. In conclusion, the results in this work are helpful for studying galaxy and quasar spectra classification.
A new approach to very short term wind speed prediction using k-nearest neighbor classification
International Nuclear Information System (INIS)
Yesilbudak, Mehmet; Sagiroglu, Seref; Colak, Ilhami
2013-01-01
Highlights: ► The wind speed parameter was predicted from n-tupled inputs using k-NN classification. ► The effects of input parameters, nearest neighbors and distance metrics were analyzed. ► Many useful and reasonable inferences were uncovered using the developed model. - Abstract: Wind energy is an inexhaustible energy source and wind power production has been growing rapidly in recent years. However, wind power has a non-schedulable nature due to wind speed variations. Hence, wind speed prediction is an indispensable requirement for power system operators. This paper predicts the wind speed parameter from n-tupled inputs using k-nearest neighbor (k-NN) classification and analyzes the effects of the input parameters, the number of nearest neighbors and the distance metrics on wind speed prediction. The k-NN classification model was developed using object-oriented programming techniques and, unlike much of the literature, includes the Manhattan and Minkowski distance metrics in addition to the Euclidean distance metric. The k-NN classification model which uses the wind direction, air temperature, atmospheric pressure and relative humidity parameters in a 4-tupled space achieved the best wind speed prediction for k = 5 with the Manhattan distance metric. Conversely, the k-NN classification model which uses the wind direction, air temperature and atmospheric pressure parameters in a 3-tupled space gave the worst wind speed prediction for k = 1 with the Minkowski distance metric.
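The distance metrics compared in this paper are all members of the Minkowski family, differing only in the exponent p. A minimal sketch (the function name and sample vectors are illustrative):

```python
import numpy as np

def minkowski(a, b, p):
    """Minkowski distance of order p between vectors a and b:
    p = 1 gives the Manhattan metric, p = 2 the Euclidean metric."""
    diff = np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))
    return float(np.sum(diff ** p) ** (1.0 / p))
```

In a k-NN model the choice of p changes which stored weather records count as "nearest", which is why the paper observes different prediction quality for the Manhattan, Euclidean and higher-order Minkowski metrics.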
Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm
Directory of Open Access Journals (Sweden)
E. Parvinnia
2014-01-01
Full Text Available Electroencephalogram (EEG) signals are often used to diagnose diseases such as seizure, Alzheimer's, and schizophrenia. One main problem with the recorded EEG samples is that they are not equally reliable due to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue. It seems that using adaptive classifiers can be useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance nearest neighbor (WDNN) is applied to EEG signal classification to tackle this problem. This classification algorithm assigns a weight to each training sample to control its influence in classifying test samples. The weights of the training samples are used to find the nearest neighbor of an input query pattern. To assess the performance of this scheme, EEG signals of thirteen schizophrenic patients and eighteen normal subjects are analyzed for the classification of these two groups. Several features, including fractal dimension, band power and an autoregressive (AR) model, are extracted from the EEG signals. The classification results are evaluated using leave-one-subject-out cross-validation for reliable estimation. The results indicate that the combination of WDNN and the selected features can significantly outperform the basic nearest-neighbor method and the other methods proposed in the past for the classification of these two groups. Therefore, this method can be a complementary tool for specialists to distinguish schizophrenia disorder.
The nearest neighbor and the bayes error rates.
Loizou, G; Maybank, S J
1987-02-01
The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E(k, l+1) ≤ E*(λ) ≤ E(k, l) ≤ dE*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E(k, l) and dE*(λ) are equal.
Diagnostic tools for nearest neighbors techniques when used with satellite imagery
Ronald E. McRoberts
2009-01-01
Nearest neighbors techniques are non-parametric approaches to multivariate prediction that are useful for predicting both continuous and categorical forest attribute variables. Although some assumptions underlying nearest neighbor techniques are common to other prediction techniques such as regression, other assumptions are unique to nearest neighbor techniques....
Using K-Nearest Neighbor in Optical Character Recognition
Directory of Open Access Journals (Sweden)
Veronica Ong
2016-03-01
Full Text Available The growth in computer vision technology has aided society with various kinds of tasks. One of these tasks is the ability to recognize text contained in an image, usually referred to as Optical Character Recognition (OCR). There are many kinds of algorithms that can be implemented in an OCR. The K-Nearest Neighbor is one such algorithm. This research aims to find out the process behind the OCR mechanism by using the K-Nearest Neighbor algorithm, one of the most influential machine learning algorithms. It also aims to find out how precise the algorithm is in an OCR program. To do that, a simple OCR program to classify alphabets of capital letters is made to produce and compare real results. The result of this research yielded a maximum of 76.9% accuracy with 200 training samples per alphabet. A set of reasons is also given as to why the program is able to reach said level of accuracy.
Attribute Weighting Based K-Nearest Neighbor Using Gain Ratio
Nababan, A. A.; Sitompul, O. S.; Tulus
2018-04-01
K-Nearest Neighbor (KNN) is a good classifier, but several studies have found that the accuracy of KNN is still lower than that of other methods. One of the causes of this low accuracy is that each attribute has the same effect on the classification process, while some less relevant attributes lead to miss-classification of the class assignment for new data. In this research, we propose Attribute Weighting Based K-Nearest Neighbor Using Gain Ratio, where the Gain Ratio serves as a parameter to see the correlation between each attribute in the data and as the basis for weighting each attribute of the dataset. The accuracy of the results is compared to the accuracy acquired from the original KNN method using 10-fold Cross-Validation with several datasets from the UCI Machine Learning Repository and the KEEL-Dataset Repository, such as abalone, glass identification, haberman, hayes-roth and water quality status. Based on the results of the test, the proposed method was able to increase the classification accuracy of KNN: the highest accuracy gain, 12.73%, was obtained on the hayes-roth dataset, and the lowest, 0.07%, on the abalone dataset. Averaged over all datasets, the method increases accuracy by 5.33%.
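The effect of attribute weighting on KNN can be sketched as below, where each attribute is scaled by a precomputed weight (in the paper this would be the attribute's Gain Ratio; here the weight values, toy data, and function name are illustrative assumptions):

```python
import numpy as np
from collections import Counter

def weighted_knn_predict(X, y, query, weights, k=1):
    """kNN in which attribute a is scaled by sqrt(weights[a]), so the
    squared distance contains weights[a] * (x_a - q_a)**2; an attribute
    with weight 0 is ignored entirely."""
    X = np.asarray(X, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float))
    d = np.linalg.norm((X - np.asarray(query, dtype=float)) * w, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y[i] for i in nearest).most_common(1)[0][0]
```

With a noisy second attribute, equal weights let the noise dominate the distance and flip the prediction, while a near-zero weight on the irrelevant attribute recovers the class implied by the informative one, which is precisely the miss-classification mechanism the abstract describes.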
Enhanced Approximate Nearest Neighbor via Local Area Focused Search.
Energy Technology Data Exchange (ETDEWEB)
Gonzales, Antonio [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Blazier, Nicholas Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
Approximate Nearest Neighbor (ANN) algorithms are increasingly important in machine learning, data mining, and image processing applications. There is a large family of space-partitioning ANN algorithms, such as randomized KD-Trees, that work well in practice but are limited by an exponential increase in similarity comparisons required to optimize recall. Additionally, they only support a small set of similarity metrics. We present Local Area Focused Search (LAFS), a method that enhances the way queries are performed using an existing ANN index. Instead of a single query, LAFS performs a number of smaller (fewer similarity comparisons) queries and focuses on a local neighborhood which is refined as candidates are identified. We show that our technique improves performance on several well known datasets and is easily extended to general similarity metrics using kernel projection techniques.
Nearest Neighbor Estimates of Entropy for Multivariate Circular Distributions
Directory of Open Access Journals (Sweden)
Neeraj Misra
2010-05-01
Full Text Available In molecular sciences, the estimation of entropies of molecules is important for the understanding of many chemical and biological processes. Motivated by these applications, we consider the problem of estimating the entropies of circular random vectors and introduce non-parametric estimators based on circular distances between n sample points and their k-th nearest neighbors (NN), where k (≤ n − 1) is a fixed positive integer. The proposed NN estimators are based on two different circular distances, and are proven to be asymptotically unbiased and consistent. The performance of one of the circular-distance estimators is investigated and compared with that of the already established Euclidean-distance NN estimator using Monte Carlo samples from an analytic distribution of six circular variables of exactly known entropy and a large sample of seven internal-rotation angles in the molecule of tartaric acid, obtained by a realistic molecular-dynamics simulation.
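The established Euclidean-distance NN estimator the abstract compares against is commonly given in Kozachenko-Leonenko form. A brute-force sketch follows; the function names and toy check are my own, and the O(n²) distance matrix is only suitable for small samples:

```python
import numpy as np
from math import gamma, log, pi

EULER_GAMMA = 0.5772156649015329

def psi_int(m):
    """Digamma at a positive integer: psi(m) = -gamma + sum_{j<m} 1/j."""
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, m))

def kl_entropy(x, k=1):
    """Kozachenko-Leonenko k-NN entropy estimate (in nats) for an (n, d)
    sample: psi(n) - psi(k) + log V_d + (d/n) * sum_i log r_i, where r_i
    is the Euclidean distance from point i to its k-th nearest neighbor
    and V_d is the volume of the d-dimensional unit ball."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)       # exclude self-distances
    r = np.sort(dist, axis=1)[:, k - 1]  # distance to k-th neighbor
    log_vd = (d / 2.0) * log(pi) - log(gamma(d / 2.0 + 1.0))
    return psi_int(n) - psi_int(k) + log_vd + d * float(np.mean(np.log(r)))
```

The circular estimators in the paper replace the Euclidean distance r_i with circular distances appropriate to angular variables; the asymptotic-unbiasedness argument follows the same template.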
Morphological type correlation between nearest neighbor pairs of galaxies
Yamagata, Tomohiko
1990-01-01
Although the morphological type of galaxies is one of the most fundamental properties of galaxies, its origin and evolutionary processes, if any, are not yet fully understood. It has been established that the galaxy morphology strongly depends on the environment in which the galaxy resides (e.g., Dressler 1980). Galaxy pairs correspond to the smallest scales of galaxy clustering and may provide important clues to how the environment influences the formation and evolution of galaxies. Several investigators pointed out that there is a tendency for pair galaxies to have similar morphological types (Karachentsev and Karachentseva 1974, Page 1975, Noerdlinger 1979). Here, researchers analyze morphological type correlation for 18,364 nearest neighbor pairs of galaxies identified in the magnetic tape version of the Center for Astrophysics Redshift Catalogue.
Designing lattice structures with maximal nearest-neighbor entanglement
Energy Technology Data Exchange (ETDEWEB)
Navarro-Munoz, J C; Lopez-Sandoval, R [Instituto Potosino de Investigacion CientIfica y Tecnologica, Camino a la presa San Jose 2055, 78216 San Luis Potosi (Mexico); Garcia, M E [Theoretische Physik, FB 18, Universitaet Kassel and Center for Interdisciplinary Nanostructure Science and Technology (CINSaT), Heinrich-Plett-Str.40, 34132 Kassel (Germany)
2009-08-07
In this paper, we study the numerical optimization of nearest-neighbor concurrence of bipartite one- and two-dimensional lattices, as well as non-bipartite two-dimensional lattices. These systems are described in the framework of a tight-binding Hamiltonian while the optimization of concurrence was performed using genetic algorithms. Our results show that the concurrence of the optimized lattice structures is considerably higher than that of non-optimized systems. In the case of one-dimensional chains, the concurrence increases dramatically when the system begins to dimerize, i.e., it undergoes a structural phase transition (Peierls distortion). This result is consistent with the idea that entanglement is maximal or shows a singularity near quantum phase transitions. Moreover, the optimization of concurrence in two-dimensional bipartite and non-bipartite lattices is achieved when the structures break into smaller subsystems, which are arranged in geometrically distinguishable configurations.
Credit scoring analysis using weighted k nearest neighbor
Mukid, M. A.; Widiharih, T.; Rusgiyono, A.; Prahutama, A.
2018-05-01
Credit scoring is a quantitative method to evaluate the credit risk of loan applications. Both statistical methods and artificial intelligence are often used by credit analysts to help them decide whether applicants are worthy of credit. These methods aim to predict future behavior in terms of credit risk based on past experience with customers of similar characteristics. This paper reviews the weighted k nearest neighbor (WKNN) method for credit assessment, considering the use of several kernels. We use credit data from a private bank in Indonesia. The results show that the Gaussian and rectangular kernels perform best, each with 82.4% of cases correctly classified.
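Kernel-weighted kNN voting in the spirit of WKNN can be sketched as below. The Gaussian and rectangular kernels follow their standard textbook forms, while the normalization by the (k+1)-th neighbor distance, the function names, and the toy data are assumptions of this illustration, not details from the paper:

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2)

def rectangular_kernel(u):
    return 1.0 if abs(u) <= 1 else 0.0

def wknn_predict(X, y, query, k=3, kernel=gaussian_kernel):
    """Each of the k nearest applicants votes with weight kernel(d_i / D),
    where D is the distance to the (k+1)-th neighbor, so closer past
    cases influence the predicted credit class more."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X - np.asarray(query, dtype=float), axis=1)
    order = np.argsort(d)
    # scale distances by the (k+1)-th neighbor so kernel arguments fall in [0, 1]
    scale = d[order[k]] + 1e-12 if len(d) > k else d[order[-1]] + 1e-12
    votes = {}
    for i in order[:k]:
        votes[y[i]] = votes.get(y[i], 0.0) + kernel(d[i] / scale)
    return max(votes, key=votes.get)
```

Plain kNN is the special case where every neighbor votes with weight 1; the kernel choice only matters when the k neighbors disagree, which is where the paper's comparison of kernels comes in.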
Directory of Open Access Journals (Sweden)
Cobaugh Christian W
2004-08-01
Full Text Available Abstract Background A detailed understanding of an RNA's correct secondary and tertiary structure is crucial to understanding its function and mechanism in the cell. Free energy minimization with energy parameters based on the nearest-neighbor model and comparative analysis are the primary methods for predicting an RNA's secondary structure from its sequence. Version 3.1 of Mfold has been available since 1999. This version contains an expanded sequence dependence of energy parameters and the ability to incorporate coaxial stacking into free energy calculations. We test Mfold 3.1 by performing the largest and most phylogenetically diverse comparison of rRNA and tRNA structures predicted by comparative analysis and Mfold, and we use the results of our tests on 16S and 23S rRNA sequences to assess the improvement between Mfold 2.3 and Mfold 3.1. Results The average prediction accuracy for a 16S or 23S rRNA sequence with Mfold 3.1 is 41%, while the prediction accuracies for the majority of 16S and 23S rRNA structures tested are between 20% and 60%, with some having less than 20% prediction accuracy. The average prediction accuracy was 71% for 5S rRNA and 69% for tRNA. The majority of the 5S rRNA and tRNA sequences have prediction accuracies greater than 60%. The prediction accuracy of 16S rRNA base-pairs decreases exponentially as the number of nucleotides intervening between the 5' and 3' halves of the base-pair increases. Conclusion Our analysis indicates that the current set of nearest-neighbor energy parameters in conjunction with the Mfold folding algorithm are unable to consistently and reliably predict an RNA's correct secondary structure. For 16S or 23S rRNA structure prediction, Mfold 3.1 offers little improvement over Mfold 2.3. However, the nearest-neighbor energy parameters do work well for shorter RNA sequences such as tRNA or 5S rRNA, or for larger rRNAs when the contact distance between the base-pairs is less than 100 nucleotides.
Kenneth B. Pierce; Janet L. Ohmann; Michael C. Wimberly; Matthew J. Gregory; Jeremy S. Fried
2009-01-01
Land managers need consistent information about the geographic distribution of wildland fuels and forest structure over large areas to evaluate fire risk and plan fuel treatments. We compared spatial predictions for 12 fuel and forest structure variables across three regions in the western United States using gradient nearest neighbor (GNN) imputation, linear models (...
Forecasting of steel consumption with use of nearest neighbors method
Directory of Open Access Journals (Sweden)
Rogalewicz Michał
2017-01-01
Full Text Available In the process of building a steel construction, its design is usually commissioned to the design office. Then a quotation is made and the finished offer is delivered to the customer. Its final shape is influenced by steel consumption to a great extent. Correct determination of the potential consumption of this material most often determines the profitability of the project. Because of a long waiting time for a final project from the design office, it is worthwhile to pre-analyze the project’s profitability and feasibility using historical data on already realized orders. The paper presents an innovative approach to decision-making support in one of the Polish construction companies. The authors have defined and prioritized the most important factors that differentiate the executed orders and have the greatest impact on steel consumption. These are, among others: height and width of steel structure, number of aisles, type of roof, etc. Then they applied and adapted the method of k-nearest neighbors to the specificity of the discussed problem. The goal was to search a set of historical orders and find the most similar to the analyzed one. On this basis, consumption of steel can be estimated. The method was programmed within the EXPLOR application.
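The order-matching step the paper describes, finding the k historical orders most similar to a new one and estimating steel consumption from them, amounts to k-nearest-neighbor regression. The sketch below is a generic illustration under assumed features (e.g. structure height, width, number of aisles); it is not the EXPLOR implementation.

```python
import math

def minmax_normalize(rows):
    """Scale each feature column to [0, 1] so that, e.g., heights in
    metres and aisle counts contribute comparably to the distance."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    rng = [max(c) - min(c) or 1.0 for c in cols]
    return [tuple((v - l) / r for v, l, r in zip(row, lo, rng)) for row in rows]

def knn_estimate(history, query, k=3):
    """Estimate steel consumption for a new order as the average
    consumption of the k most similar historical orders.
    `history` is a list of (normalized_feature_vector, consumption) pairs."""
    neighbors = sorted(history, key=lambda rec: math.dist(rec[0], query))[:k]
    return sum(c for _, c in neighbors) / k
```

The factor prioritization described in the paper could be folded in by multiplying each feature column by an importance weight before the distance is computed.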
Predicting Audience Location on the Basis of the k-Nearest Neighbor Multilabel Classification
Directory of Open Access Journals (Sweden)
Haitao Wu
2014-01-01
Full Text Available Understanding audience location information in online social networks is important in designing recommendation systems, improving information dissemination, and so on. In this paper, we focus on predicting the location distribution of audiences on YouTube. We transform this problem into a multilabel classification problem, and we find that three problems arise when the classical k-nearest neighbor based algorithm for multilabel classification (ML-kNN) is used to predict location distribution. Firstly, the feature weights are not considered in measuring the similarity degree. Secondly, it consumes considerable computing time by traversing the entire training set to find similar items. Thirdly, the goal of ML-kNN is to find relevant labels for every sample, which differs from audience location prediction. To solve these problems, we propose methods for measuring weight-based similarity, quickly finding similar items, and ranking a specific number of labels. On the basis of these methods and ML-kNN, the k-nearest neighbor based model for audience location prediction (AL-kNN) is proposed. Experiments based on massive YouTube data show that the proposed model predicts the location of YouTube video audiences more accurately than the ML-kNN, MLNB, and Rank-SVM methods.
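The core of a neighbor-based location predictor, weighting the features, finding the k nearest samples, and ranking a fixed number of labels by their frequency among those neighbors, can be sketched as below. This is a toy stand-in for AL-kNN, not the authors' model; the feature weights and label sets are illustrative.

```python
import math
from collections import Counter

def top_labels(train, query, weights, k=3, n_labels=2):
    """Rank labels among the k nearest training samples.
    `train` is a list of (feature_vector, label_set) pairs; feature
    dimensions are scaled by `weights` before the distance is taken
    (a toy stand-in for the paper's weight-based similarity)."""
    def wdist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))
    nearest = sorted(train, key=lambda rec: wdist(rec[0], query))[:k]
    counts = Counter(l for _, labels in nearest for l in labels)
    # return a fixed number of top-ranked labels rather than all relevant ones
    return [label for label, _ in counts.most_common(n_labels)]
```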
Aftershock identification problem via the nearest-neighbor analysis for marked point processes
Gabrielov, A.; Zaliapin, I.; Wong, H.; Keilis-Borok, V.
2007-12-01
A century of observations of world seismicity has revealed a wide variety of clustering phenomena that unfold in the space-time-energy domain and provide the most reliable information about earthquake dynamics. However, there is neither a unifying theory nor a convenient statistical apparatus that would naturally account for the different types of seismic clustering. In this talk we present a theoretical framework for nearest-neighbor analysis of marked processes and obtain new results on the hierarchical approach to studying seismic clustering introduced by Baiesi and Paczuski (2004). Recall that under this approach one defines an asymmetric distance D in the space-time-energy domain such that the nearest-neighbor spanning graph with respect to D becomes a time-oriented tree. We demonstrate how this approach can be used to detect earthquake clustering. We apply our analysis to the observed seismicity of California and to synthetic catalogs from the ETAS model, and show that the clustered part of seismicity is statistically different from the homogeneous part. This finding may serve as a basis for an objective aftershock identification procedure.
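The asymmetric distance of Baiesi and Paczuski (2004) combines the inter-event time, the epicentral distance, and the magnitude of the earlier event, commonly written as eta = t * r^df * 10^(-b*m). The sketch below computes it and links each event to its nearest earlier neighbor, which forms an edge of the time-oriented spanning tree; the parameter values (b, df) and the flat-geometry distance are illustrative assumptions.

```python
import math

def bp_distance(parent, child, b=1.0, df=1.6):
    """Asymmetric Baiesi-Paczuski distance eta = t * r^df * 10^(-b*m),
    where t is the inter-event time, r the epicentral distance, and m
    the magnitude of the earlier (candidate parent) event."""
    t = child["t"] - parent["t"]
    if t <= 0:
        return float("inf")            # only earlier events can be parents
    r = math.dist((parent["x"], parent["y"]), (child["x"], child["y"]))
    return t * max(r, 1e-9) ** df * 10 ** (-b * parent["m"])

def nearest_parent(catalog, j, **kw):
    """Index of the earlier event minimizing the distance to event j,
    i.e. the parent of j in the time-oriented spanning tree."""
    cands = [(bp_distance(catalog[i], catalog[j], **kw), i)
             for i in range(len(catalog)) if i != j]
    return min(cands)[1]
```

The magnitude term makes large events attract later small events as children even when a smaller event lies closer in space-time, which is what identifies aftershocks of a mainshock.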
Quality and efficiency in high dimensional Nearest neighbor search
Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos
2009-01-01
Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or ad hoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time returns neighbors with much better quality.
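The bucketing idea behind LSH, which the LSB-tree builds on, can be sketched with a single table of p-stable (Gaussian-projection) hashes. This is a textbook LSH sketch under assumed parameters, not the LSB-tree: the projection count and bucket width are illustrative, and a real deployment would combine several tables to boost recall.

```python
import math
import random

class L2LSH:
    """One hash table of p-stable (Gaussian projection) LSH for Euclidean
    NN search: points whose projections fall in the same bucket become
    candidate neighbors, so a query inspects one bucket instead of
    scanning the whole dataset."""
    def __init__(self, dim, n_hashes=4, w=2.0, seed=0):
        rng = random.Random(seed)
        self.w = w
        self.a = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_hashes)]
        self.b = [rng.uniform(0, w) for _ in range(n_hashes)]
        self.buckets = {}

    def _key(self, x):
        # concatenated bucket indices of the n_hashes projections
        return tuple(
            math.floor((sum(ai * xi for ai, xi in zip(a, x)) + b) / self.w)
            for a, b in zip(self.a, self.b)
        )

    def insert(self, idx, x):
        self.buckets.setdefault(self._key(x), []).append((idx, x))

    def query(self, x):
        """Return the index of the closest candidate in x's bucket
        (None if the bucket is empty)."""
        cands = self.buckets.get(self._key(x), [])
        return min(cands, key=lambda c: math.dist(c[1], x), default=(None,))[0]
```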
International Nuclear Information System (INIS)
Jahnel, Benedikt; Külske, Christof; Botirov, Golibjon I.
2014-01-01
We consider a ferromagnetic nearest-neighbor model on a Cayley tree of degree k ⩾ 2 with uncountable local state space [0,1], where the energy function depends on a parameter θ ∊ [0, 1). We show that for 0 ⩽ θ ⩽ 5/(3k) the model has a unique translation-invariant Gibbs measure. If 5/(3k) < θ < 1, there is a phase transition; in particular, there are three translation-invariant Gibbs measures.
Nearest Neighbor Search in the Metric Space of a Complex Network for Community Detection
Directory of Open Access Journals (Sweden)
Suman Saha
2016-03-01
Full Text Available The objective of this article is to bridge the gap between two important research directions: (1) nearest neighbor search, which is a fundamental computational tool for large data analysis; and (2) complex network analysis, which deals with large real graphs but is generally studied via graph-theoretic or spectral analysis. In this article, we study the nearest neighbor search problem in a complex network by developing a suitable notion of nearness. The computation of efficient nearest neighbor search among the nodes of a complex network using metric trees and locality sensitive hashing (LSH) is also studied and evaluated experimentally. To evaluate the proposed nearest neighbor search in a complex network, we applied it to a network community detection problem. Experiments are performed to verify the usefulness of nearness measures for complex networks, the role of metric trees and LSH in computing fast and approximate node nearness, and the efficiency of community detection using nearest neighbor search. We observed that nearest neighbor search between network nodes is a very efficient tool for exploring the community structure of real networks. Several efficient approximation schemes are very useful for large networks: they cause hardly any degradation of results while saving a great deal of computation time, and the nearest neighbor based community detection approach is very competitive in terms of efficiency and time.
Rusdiana, Lili; Marfuah
2017-12-01
The K-Nearest Neighbors method is a classification method that groups data by computing the closest distances. It is used here to classify students' graduation status from the number of course credits taken, the grade point average (AVG), and the mini-thesis grade. The study is conducted to evaluate the results of using the K-Nearest Neighbors method in an application for determining students' graduation status, so that the method, the data, and the constructed application can be analyzed. The aim of this study is to evaluate the application built on the K-Nearest Neighbors concept to determine students' graduation status using data from STMIK Palangkaraya students. The software was developed with Extreme Programming, which was appropriate and precise for finishing this project quickly. The application was created using Microsoft Office Excel 2007 for the training data and Matlab 7 for the implementation. The K-Nearest Neighbors method achieved an accuracy of 92.5% in the application: it correctly determined the graduation predicate for 94 of the records used, drawn from an initial data set of 136 records, with a maximal training set of 50 records.
Dairi, Abdelkader; Harrou, Fouzi; Sun, Ying; Senouci, Mohamed
2018-01-01
Obstacle detection is an essential element for the development of intelligent transportation systems so that accidents can be avoided. In this study, we propose a stereovision-based method for detecting obstacles in urban environments. The proposed method uses a deep stacked auto-encoders (DSA) model that combines greedy learning of features with dimensionality reduction capacity, and employs an unsupervised k-nearest neighbors algorithm (KNN) to accurately and reliably detect the presence of obstacles. We consider obstacle detection as an anomaly detection problem. We evaluated the proposed method using practical data from three publicly available datasets: the Malaga stereovision urban dataset (MSVUD), the Daimler urban segmentation dataset (DUSD), and the Bahnhof dataset. We also compared the efficiency of the DSA-KNN approach to deep belief network (DBN)-based clustering schemes. Results show that the DSA-KNN is suitable for visually monitoring urban scenes.
International Nuclear Information System (INIS)
Pinhal, N.M.; Vugman, N.V.
1983-01-01
Further splitting of the chlorine superhyperfine lines in the EPR spectrum of the [Ir(CN)₄Cl₂]⁴⁻ molecular species in a NaCl lattice indicates a super-superhyperfine interaction with the nearest-neighbor sodium atoms. (Author) [pt
On Competitiveness of Nearest-Neighbor-Based Music Classification: A Methodological Critique
DEFF Research Database (Denmark)
Pálmason, Haukur; Jónsson, Björn Thór; Amsaleg, Laurent
2017-01-01
The traditional role of nearest-neighbor classification in music classification research is that of a straw man opponent for the learning approach of the hour. Recent work in high-dimensional indexing has shown that approximate nearest-neighbor algorithms are extremely scalable, yielding results...... of reasonable quality from billions of high-dimensional features. With such efficient large-scale classifiers, the traditional music classification methodology of aggregating and compressing the audio features is incorrect; instead the approximate nearest-neighbor classifier should be given an extensive data...... collection to work with. We present a case study, using a well-known MIR classification benchmark with well-known music features, which shows that a simple nearest-neighbor classifier performs very competitively when given ample data. In this position paper, we therefore argue that nearest...
Zhang, Zhongzhi; Dong, Yuze; Sheng, Yibin
2015-10-01
Random walks including non-nearest-neighbor jumps appear in many real situations, such as the diffusion of adatoms, and have found numerous applications including the PageRank search algorithm; however, related theoretical results for this dynamical process are much scarcer. In this paper, we present a study of mixed random walks in a family of fractal scale-free networks, where both nearest-neighbor and next-nearest-neighbor jumps are included. We focus on the trapping problem in the network family, which is a particular case of random walks with a perfect trap fixed at the central high-degree node. We derive analytical expressions for the average trapping time (ATT), a quantitative indicator measuring the efficiency of the trapping process, by using two different methods, the results of which are consistent with each other. Furthermore, we analytically determine all the eigenvalues and their multiplicities for the fundamental matrix characterizing the dynamical process. Our results show that although next-nearest-neighbor jumps have no effect on the leading scaling of the trapping efficiency, they can strongly affect the prefactor of the ATT, providing insight into a better understanding of random-walk processes in complex systems.
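The mean trapping time studied here can be illustrated numerically on a toy lattice: for an absorbing Markov chain, the mean times obey T = 1 + P·T on the non-trap nodes. The sketch below solves this system for a ring with mixed nearest- and next-nearest-neighbor jumps; the ring geometry and the jump probability q are illustrative stand-ins for the paper's fractal scale-free network.

```python
def mean_trapping_times(n=6, q=0.3, trap=0):
    """Mean trapping times on an n-cycle with a trap: at each step the
    walker jumps to one of its two nearest neighbors with total
    probability 1-q, or to one of its two next-nearest neighbors with
    total probability q. Solves T = 1 + P*T by Gauss-Jordan elimination."""
    nodes = [i for i in range(n) if i != trap]
    index = {v: r for r, v in enumerate(nodes)}
    # build (I - P) restricted to non-trap nodes; right-hand side is 1
    A = [[0.0] * len(nodes) for _ in nodes]
    rhs = [1.0] * len(nodes)
    for v in nodes:
        r = index[v]
        A[r][r] = 1.0
        for dst, p in [((v - 1) % n, (1 - q) / 2), ((v + 1) % n, (1 - q) / 2),
                       ((v - 2) % n, q / 2), ((v + 2) % n, q / 2)]:
            if dst != trap:
                A[r][index[dst]] -= p
    m = len(nodes)
    for col in range(m):                      # Gauss-Jordan with pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(m):
            if r != col and A[r][col]:
                f = A[r][col] / A[col][col]
                for c in range(col, m):
                    A[r][c] -= f * A[col][c]
                rhs[r] -= f * rhs[col]
    return {v: rhs[index[v]] / A[index[v]][index[v]] for v in nodes}
```

Averaging the returned times over the starting nodes gives the ATT for this toy geometry; with q = 0 the model reduces to the pure nearest-neighbor walk, for which the cycle hitting time i(n-i) is known in closed form.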
Fracton topological order from nearest-neighbor two-spin interactions and dualities
Slagle, Kevin; Kim, Yong Baek
2017-10-01
Fracton topological order describes a remarkable phase of matter, which can be characterized by fracton excitations with constrained dynamics and a ground-state degeneracy that increases exponentially with the length of the system on a three-dimensional torus. However, previous models exhibiting this order require many-spin interactions, which may be very difficult to realize in a real material or cold atom system. In this work, we present a more physically realistic model which has the so-called X-cube fracton topological order [Vijay, Haah, and Fu, Phys. Rev. B 94, 235157 (2016), 10.1103/PhysRevB.94.235157] but only requires nearest-neighbor two-spin interactions. The model lives on a three-dimensional honeycomb-based lattice with one to two spin-1/2 degrees of freedom on each site and a unit cell of six sites. The model is constructed from two orthogonal stacks of Z2 topologically ordered Kitaev honeycomb layers [Kitaev, Ann. Phys. 321, 2 (2006), 10.1016/j.aop.2005.10.005], which are coupled together by a two-spin interaction. It is also shown that a four-spin interaction can be included to instead stabilize 3+1D Z2 topological order. We also find dual descriptions of four quantum phase transitions in our model, all of which appear to be discontinuous first-order transitions.
A Novel Preferential Diffusion Recommendation Algorithm Based on User’s Nearest Neighbors
Directory of Open Access Journals (Sweden)
Fuguo Zhang
2017-01-01
Full Text Available Recommender systems are a very efficient way to deal with the problem of information overload for online users. In recent years, network-based recommendation algorithms have demonstrated much better performance than the standard collaborative filtering methods. However, most network-based algorithms do not give a high enough weight to the influence of the target user's nearest neighbors in the resource diffusion process, while a user or an object with high degree obtains larger influence in the standard mass diffusion algorithm. In this paper, we propose a novel preferential diffusion recommendation algorithm that accounts for the significance of the target user's nearest neighbors, and evaluate it on three real-world data sets: MovieLens 100k, MovieLens 1M, and Epinions. Experimental results demonstrate that the novel preferential diffusion recommendation algorithm based on the user's nearest neighbors can significantly improve recommendation accuracy and diversity.
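The standard mass-diffusion (ProbS) process that network-based recommenders build on can be sketched in a few lines: resource flows from the target user's objects to the users holding them, and then back to objects. This is the baseline the paper modifies, shown here without the proposed preferential weighting toward the target user's nearest neighbors.

```python
def mass_diffusion_scores(ratings, target):
    """Standard mass-diffusion (ProbS) scores on a user-object bipartite
    network: each object collected by `target` sends one unit of
    resource, split equally among the users holding it, then split
    equally again among each user's objects; objects the target has not
    collected are ranked by the resource they receive."""
    users = {u: set(objs) for u, objs in ratings.items()}
    obj_degree = {}
    for objs in users.values():
        for o in objs:
            obj_degree[o] = obj_degree.get(o, 0) + 1
    # step 1: objects -> users
    user_resource = {}
    for o in users[target]:
        for u, objs in users.items():
            if o in objs:
                user_resource[u] = user_resource.get(u, 0.0) + 1.0 / obj_degree[o]
    # step 2: users -> objects (only uncollected objects are scored)
    scores = {}
    for u, res in user_resource.items():
        share = res / len(users[u])
        for o in users[u]:
            if o not in users[target]:
                scores[o] = scores.get(o, 0.0) + share
    return scores
```

The paper's preferential variant would reweight step 2 so that users most similar to the target (its nearest neighbors) redistribute disproportionately more resource.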
Local Order in the Unfolded State: Conformational Biases and Nearest Neighbor Interactions
Directory of Open Access Journals (Sweden)
Siobhan Toal
2014-07-01
Full Text Available The discovery of Intrinsically Disordered Proteins, which contain significant levels of disorder yet perform complex biological functions, as well as of unwanted aggregation, has motivated numerous experimental and theoretical studies aimed at describing residue-level conformational ensembles. Multiple lines of evidence gathered over the last 15 years strongly suggest that amino acid residues display unique and restricted conformational preferences in the unfolded state of peptides and proteins, contrary to one of the basic assumptions of the canonical random coil model. To fully understand residue-level order/disorder, however, one has to gain a quantitative, experimentally based picture of conformational distributions and to determine the physical basis underlying residue-level conformational biases. Here, we review the experimental, computational and bioinformatic evidence for conformational preferences of amino acid residues in (mostly short) peptides that can be utilized as suitable model systems for unfolded states of peptides and proteins. In this context particular attention is paid to the alleged high polyproline II preference of alanine. We discuss how these conformational propensities may be modulated by peptide-solvent interactions and so-called nearest-neighbor interactions. The relevance of conformational propensities for the protein folding problem and the understanding of IDPs is briefly discussed.
Vasylkivska, Veronika S.; Huerta, Nicolas J.
2017-07-01
Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog's inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.
Geometric k-nearest neighbor estimation of entropy and mutual information
Lord, Warren M.; Sun, Jie; Bollt, Erik M.
2018-03-01
Nonparametric estimation of mutual information is used in a wide range of scientific problems to quantify dependence between variables. The k-nearest neighbor (knn) methods are consistent, and therefore expected to work well for a large sample size. These methods use geometrically regular local volume elements. This practice allows maximum localization of the volume elements, but can also induce a bias due to a poor description of the local geometry of the underlying probability measure. We introduce a new class of knn estimators that we call geometric knn estimators (g-knn), which use more complex local volume elements to better model the local geometry of the probability measures. As an example of this class of estimators, we develop a g-knn estimator of entropy and mutual information based on elliptical volume elements, capturing the local stretching and compression common to a wide range of dynamical system attractors. A series of numerical examples in which the thickness of the underlying distribution and the sample sizes are varied suggest that local geometry is a source of problems for knn methods such as the Kraskov-Stögbauer-Grassberger estimator when local geometric effects cannot be removed by global preprocessing of the data. The g-knn method performs well despite the manipulation of the local geometry. In addition, the examples suggest that the g-knn estimators can be of particular relevance to applications in which the system is large, but the data size is limited.
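The classical fixed-geometry knn estimator that g-knn generalizes is the Kozachenko-Leonenko entropy estimator; in one dimension it reads H ≈ ψ(N) − ψ(k) + log 2 + (1/N) Σ log εᵢ, where εᵢ is the distance from sample i to its k-th nearest neighbor. A sketch, using the fact that ψ(N) − ψ(k) = Σ_{i=k}^{N−1} 1/i for integer arguments (so no special-function library is needed):

```python
import math
import random  # used only in the usage example below

def kl_entropy_1d(samples, k=3):
    """Kozachenko-Leonenko differential-entropy estimate for 1-D data:
    H ~ psi(N) - psi(k) + log(2) + (1/N) * sum_i log(eps_i),
    where eps_i is the distance from sample i to its k-th nearest
    neighbor and log(2) is the log-volume of the 1-D unit ball."""
    xs = sorted(samples)
    n = len(xs)
    digamma_diff = sum(1.0 / i for i in range(k, n))  # psi(n) - psi(k)
    log_eps = 0.0
    for i, x in enumerate(xs):
        # in sorted order the k-th nearest neighbor lies within i +/- k
        dists = sorted(abs(x - xs[j])
                       for j in range(max(0, i - k), min(n, i + k + 1)) if j != i)
        log_eps += math.log(dists[k - 1])
    return digamma_diff + math.log(2.0) + log_eps / n
```

For a uniform sample on [0, 1] the estimate should be near the true differential entropy of 0, and for a standard Gaussian near 0.5·log(2πe) ≈ 1.419; the g-knn idea replaces the balls implicit in εᵢ with ellipsoids fitted to the local geometry.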
Reentrant behavior in the nearest-neighbor Ising antiferromagnet in a magnetic field
Neto, Minos A.; de Sousa, J. Ricardo
2004-12-01
Motivated by the H-T phase diagram of the bcc Ising antiferromagnet with nearest-neighbor interactions obtained by Monte Carlo simulation [Landau, Phys. Rev. B 16, 4164 (1977)], which shows reentrant behavior at low temperature, with two critical temperatures at magnetic fields about 2% greater than the critical value Hc=8J, we apply the effective field renormalization group (EFRG) approach to this model on three-dimensional lattices (simple cubic, sc, and body-centered cubic, bcc). We find that the critical curve TN(H) exhibits a maximum point around H≃Hc only in the bcc lattice case. We also discuss the critical behavior obtained by effective field theory in clusters with one (EFT-1) and two (EFT-2) spins, and a reentrant behavior is observed for the sc and bcc lattices. We have compared our EFRG results for the bcc lattice with Monte Carlo and series expansion results, and we observe good agreement between the methods.
Directory of Open Access Journals (Sweden)
Caro, Norma Patricia
2017-12-01
Full Text Available In the present decade, in emerging economies such as those of Latin America, mixed logistic models have begun to be applied to predict the financial failure of companies. However, the methodology has underlying limitations linked to the feasibility of predicting the status of new companies that were not part of the training sample used to estimate the model. In the literature, several prediction methods have been proposed for the random effects that form part of mixed models, among them the nearest neighbor method. This method is applied in a second stage, after estimating a model that explains the financial situation (in crisis or healthy) of companies by considering the behavior of their accounting ratios. In this study, companies from Argentina, Chile, and Peru were considered, estimating the random effects that were significant in the mixed model. We conclude that applying this method identifies companies with financial problems with a correct classification rate above 80%, which is relevant for modeling and predicting this type of risk.
Applying an efficient K-nearest neighbor search to forest attribute imputation
Andrew O. Finley; Ronald E. McRoberts; Alan R. Ek
2006-01-01
This paper explores the utility of an efficient nearest neighbor (NN) search algorithm for applications in multi-source kNN forest attribute imputation. The search algorithm reduces the number of distance calculations between a given target vector and each reference vector, thereby, decreasing the time needed to discover the NN subset. Results of five trials show gains...
Estimating forest attribute parameters for small areas using nearest neighbors techniques
Ronald E. McRoberts
2012-01-01
Nearest neighbors techniques have become extremely popular, particularly for use with forest inventory data. With these techniques, a population unit prediction is calculated as a linear combination of observations for a selected number of population units in a sample that are most similar, or nearest, in a space of ancillary variables to the population unit requiring...
Mapping change of older forest with nearest-neighbor imputation and Landsat time-series
Janet L. Ohmann; Matthew J. Gregory; Heather M. Roberts; Warren B. Cohen; Robert E. Kennedy; Zhiqiang. Yang
2012-01-01
The Northwest Forest Plan (NWFP), which aims to conserve late-successional and old-growth forests (older forests) and associated species, established new policies on federal lands in the Pacific Northwest USA. As part of monitoring for the NWFP, we tested nearest-neighbor imputation for mapping change in older forest, defined by threshold values for forest attributes...
Kenneth B. Jr. Pierce; C. Kenneth Brewer; Janet L. Ohmann
2010-01-01
This study was designed to test the feasibility of combining a method designed to populate pixels with inventory plot data at the 30-m scale with a new national predictor data set. The new national predictor data set was developed by the USDA Forest Service Remote Sensing Applications Center (hereafter RSAC) at the 250-m scale. Gradient Nearest Neighbor (GNN)...
Improved Fuzzy K-Nearest Neighbor Using Modified Particle Swarm Optimization
Jamaluddin; Siringoringo, Rimbun
2017-12-01
Fuzzy k-Nearest Neighbor (FkNN) is one of the most powerful classification methods. The presence of fuzzy concepts in this method successfully improves its performance on almost all classification issues. The main drawback of FkNN is that it is difficult to determine its parameters: the number of neighbors (k) and the fuzzy strength (m). Both parameters are very sensitive, which makes FkNN difficult to control, because no theory or guide can deduce what proper values of 'm' and 'k' should be. This study uses Modified Particle Swarm Optimization (MPSO) to determine the best values of 'k' and 'm'. MPSO is based on the constriction factor method, an improvement of PSO intended to avoid local optima. The model proposed in this study was tested on the German Credit Dataset, which has been standardized by the UCI Machine Learning Repository and is widely applied to classification problems. The application of MPSO to the determination of the FkNN parameters is expected to increase classification performance. The experiments indicate that the model offered in this research yields better classification performance than the plain FkNN model: it has an accuracy of 81%, while the FkNN model alone has an accuracy of 70%. Finally, the proposed model is compared with two other classification models, Naive Bayes and Decision Tree. The proposed model has a better performance level: Naive Bayes has an accuracy of 75%, and the decision tree model has 70%.
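The constriction factor method that MPSO relies on is Clerc and Kennedy's variant of PSO, in which velocities are multiplied by chi = 2/|2 − φ − sqrt(φ² − 4φ)| with φ = c₁ + c₂ > 4. The sketch below applies it to a toy quadratic instead of the FkNN cross-validation objective; the swarm size and iteration count are illustrative assumptions.

```python
import math
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Clerc-Kennedy constriction-factor PSO: velocities are scaled by
    chi = 2/|2 - phi - sqrt(phi^2 - 4*phi)| with phi = c1 + c2 = 4.1,
    which damps oscillations without an explicit velocity clamp."""
    rng = random.Random(seed)
    c1 = c2 = 2.05
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))  # ~0.7298
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = chi * (vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, each particle position would encode a candidate (k, m) pair and `f` would return the FkNN classification error on held-out credit data.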
A Regression-based K nearest neighbor algorithm for gene function prediction from heterogeneous data
Directory of Open Access Journals (Sweden)
Ruzzo Walter L
2006-03-01
Full Text Available Abstract Background As a variety of functional genomic and proteomic techniques become available, there is an increasing need for functional analysis methodologies that integrate heterogeneous data sources. Methods In this paper, we address this issue by proposing a general framework for gene function prediction based on the k-nearest-neighbor (KNN) algorithm. The choice of KNN is motivated by its simplicity, flexibility to incorporate different data types and adaptability to irregular feature spaces. A weakness of traditional KNN methods, especially when handling heterogeneous data, is that performance is subject to the often ad hoc choice of similarity metric. To address this weakness, we apply regression methods to infer a similarity metric as a weighted combination of a set of base similarity measures, which helps to locate the neighbors that are most likely to be in the same class as the target gene. We also suggest a novel voting scheme to generate confidence scores that estimate the accuracy of predictions. The method gracefully extends to multi-way classification problems. Results We apply this technique to gene function prediction according to three well-known Escherichia coli classification schemes suggested by biologists, using information derived from microarray and genome sequencing data. We demonstrate that our algorithm dramatically outperforms the naive KNN methods and is competitive with support vector machine (SVM) algorithms for integrating heterogeneous data. We also show that by combining different data sources, prediction accuracy can improve significantly. Conclusion Our extension of KNN with automatic feature weighting, multi-class prediction, and probabilistic inference enhances prediction accuracy significantly while remaining efficient, intuitive and flexible. This general framework can also be applied to similar classification problems involving heterogeneous datasets.
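The regression step, inferring a combined similarity as a weighted sum of base measures so that pairs of genes sharing a class score near 1 and other pairs near 0, can be sketched for two base measures by solving the normal equations directly. The 0/1 target coding and the two-measure restriction are simplifying assumptions for illustration, not the paper's full procedure.

```python
def fit_similarity_weights(pairs):
    """Least-squares weights for combining two base similarity measures.
    `pairs` is a list of (s1, s2, y) triples, where y is 1.0 when the
    two genes share a functional class and 0.0 otherwise. Solves the
    2x2 normal equations of min_w sum (w1*s1 + w2*s2 - y)^2 directly."""
    a11 = sum(s1 * s1 for s1, s2, y in pairs)
    a12 = sum(s1 * s2 for s1, s2, y in pairs)
    a22 = sum(s2 * s2 for s1, s2, y in pairs)
    b1 = sum(s1 * y for s1, s2, y in pairs)
    b2 = sum(s2 * y for s1, s2, y in pairs)
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

The fitted weights then define the metric used to rank neighbors of a target gene, with a base measure that tracks class membership well receiving the larger weight.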
International Nuclear Information System (INIS)
Bentz, Jonathan L.; Kozak, John J.
2006-01-01
We explore the effect of imposing different constraints (biases, boundary conditions) on the mean time to trapping (or mean walklength) for a particle (excitation) migrating on a finite dendrimer lattice with a centrally positioned trap. By mobilizing the theory of finite Markov processes, we are able to obtain exact analytic expressions for site-specific walklengths as well as the overall walklength for both nearest-neighbor and second-nearest-neighbor displacements. This allows the comparison with and generalization of earlier results [A. Bar-Haim, J. Klafter, J. Phys. Chem. B 102 (1998) 1662; A. Bar-Haim, J. Klafter, J. Lumin. 76, 77 (1998) 197; O. Flomenbom, R.J. Amir, D. Shabat, J. Klafter, J. Lumin. 111 (2005) 315; J.L. Bentz, F.N. Hosseini, J.J. Kozak, Chem. Phys. Lett. 370 (2003) 319]. A novel feature of this work is the establishment of a connection between the random walk models studied here and percolation theory. The full dynamical behavior was also determined via solution of the stochastic master equation, and the results obtained compared with recent spectroscopic experiments.
Polymers with nearest- and next nearest-neighbor interactions on the Husimi lattice
Oliveira, Tiago J.
2016-04-01
The exact grand-canonical solution of a generalized interacting self-avoiding walk (ISAW) model, placed on a Husimi lattice built with squares, is presented. In this model, beyond the traditional interaction ω1 = exp(ε1/kB T) between (nonconsecutive) monomers on nearest-neighbor (NN) sites, an additional energy ε2 is associated with next-NN (NNN) monomers. Three definitions of NNN sites/interactions are considered, where each monomer can have, effectively, at most two, four, or six NNN monomers on the Husimi lattice. The phase diagrams found in all cases have (qualitatively) the same thermodynamic properties: a non-polymerized (NP) and a polymerized (P) phase separated by a critical and a coexistence surface that meet at a tricritical (θ) line. This θ-line is found even when one of the interactions is repulsive, existing for ω1 in the range [0, ∞), i.e., for ε1/kB T in the range [-∞, ∞). Thus, counterintuitively, a θ-point exists even for an infinite repulsion between NN monomers (ω1 = 0), being associated with a coil-'soft globule' transition. In the limit of an infinite repulsive force between NNN monomers, however, the coil-globule transition disappears, and only an NP-P continuous transition is observed. This particular case, with ω2 = 0, is also solved exactly on the square lattice, using a transfer matrix calculation, where a discontinuous NP-P transition is found. For attractive and repulsive forces between NN and NNN monomers, respectively, the model becomes quite similar to the semiflexible-ISAW one, whose crystalline phase is not observed here, as a consequence of the frustration due to competing NN and NNN forces. The mapping of the phase diagrams onto canonical ones is discussed and compared with recent results from Monte Carlo simulations on the square lattice.
Ronald E. McRoberts
2009-01-01
Nearest neighbors techniques have been shown to be useful for predicting multiple forest attributes from forest inventory and Landsat satellite image data. However, in regions lacking good digital land cover information, nearest neighbors selected to predict continuous variables such as tree volume must be selected without regard to relevant categorical variables such...
Multi-strategy based quantum cost reduction of linear nearest-neighbor quantum circuit
Tan, Ying-ying; Cheng, Xue-yun; Guan, Zhi-jin; Liu, Yang; Ma, Haiying
2018-03-01
With the development of reversible and quantum computing, the study of reversible and quantum circuits has also developed rapidly. Due to physical constraints, most quantum circuits require quantum gates to interact on adjacent quantum bits. However, many existing nearest-neighbor quantum circuits have a large quantum cost. Therefore, how to effectively reduce quantum cost is becoming a popular research topic. In this paper, we propose multiple optimization strategies to reduce the quantum cost of the circuit; that is, we reduce quantum cost through MCT gate decomposition, nearest-neighbor arrangement, and circuit simplification, respectively. The experimental results show that the proposed strategies can effectively reduce the quantum cost, with a maximum optimization rate of 30.61% compared to the corresponding previous results.
ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms
DEFF Research Database (Denmark)
Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander
2017-01-01
This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several […] visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters […] for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different […]
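The quality axis that benchmarks of this kind report is typically recall against exact k-NN ground truth. A sketch of that metric (the standard definition, not ANN-Benchmarks' actual code) is:

```python
def recall_at_k(true_neighbors, approx_neighbors, k):
    # Fraction of the true k nearest neighbors that the approximate
    # algorithm actually returned; 1.0 means the answer was exact.
    return len(set(true_neighbors[:k]) & set(approx_neighbors[:k])) / k
```

Plotted against queries per second, this yields the recall/throughput trade-off curves that such tools produce for each algorithm and parameter setting.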
Chaotic Synchronization in Nearest-Neighbor Coupled Networks of 3D CNNs
Serrano-Guerrero, H.; Cruz-Hernández, C.; López-Gutiérrez, R.M.; Cardoza-Avendaño, L.; Chávez-Pérez, R.A.
2013-01-01
In this paper, synchronization of Cellular Neural Networks (CNNs) in nearest-neighbor coupled arrays is numerically studied. Synchronization of multiple chaotic CNNs is achieved by appealing to complex systems theory. In particular, we consider dynamical networks composed of 3D CNNs as interconnected nodes, where the interactions in the networks are defined by coupling the first state of each node. Four cases of interest are considered: i) synchronization without chaotic master, ii) maste...
FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Lu Si; Jie Yu; Shasha Li; Jun Ma; Lei Luo; Qingbo Wu; Yongqi Ma; Zhengji Liu
2017-01-01
The instance selection (IS) technique is used to reduce the data size and improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule...
A Hybrid Instance Selection Using Nearest-Neighbor for Cross-Project Defect Prediction
Institute of Scientific and Technical Information of China (English)
Duksan Ryu; Jong-In Jang; Jongmoon Baik; Member; ACM; IEEE
2015-01-01
Software defect prediction (SDP) is an active research field in software engineering that aims to identify defect-prone modules. Thanks to SDP, limited testing resources can be effectively allocated to defect-prone modules. Although SDP requires sufficient local data within a company, there are cases where local data are not available, e.g., pilot projects. Companies without local data can employ cross-project defect prediction (CPDP) using external data to build classifiers. The major challenge of CPDP is the different distributions between training and test data. To tackle this, instances of source data similar to target data are selected to build classifiers. Software datasets have a class imbalance problem, meaning the ratio of the defective class to the clean class is very low; this usually lowers the performance of classifiers. We propose a Hybrid Instance Selection Using Nearest-Neighbor (HISNN) method that performs a hybrid classification, selectively learning local knowledge (via k-nearest neighbor) and global knowledge (via naïve Bayes). Instances having strong local knowledge are identified via nearest-neighbors with the same class label. Previous studies showed low PD (probability of detection) or high PF (probability of false alarm), which is impractical in use. The experimental results show that HISNN produces high overall performance as well as high PD and low PF.
Collective Behaviors of Mobile Robots Beyond the Nearest Neighbor Rules With Switching Topology.
Ning, Boda; Han, Qing-Long; Zuo, Zongyu; Jin, Jiong; Zheng, Jinchuan
2018-05-01
This paper is concerned with the collective behaviors of robots beyond the nearest neighbor rules, i.e., dispersion and flocking, when robots interact with others by applying an acute angle test (AAT)-based interaction rule. Different from a conventional nearest neighbor rule or its variations, the AAT-based interaction rule allows interactions with some far-neighbors and excludes unnecessary nearest neighbors. The resulting dispersion and flocking hold the advantages of scalability, connectivity, robustness, and effective area coverage. For the dispersion, a spring-like controller is proposed to achieve collision-free coordination. With switching topology, a new fixed-time consensus-based energy function is developed to guarantee the system stability. An upper bound of settling time for energy consensus is obtained, and a uniform time interval is accordingly set so that energy distribution is conducted in a fair manner. For the flocking, based on a class of generalized potential functions taking nonsmooth switching into account, a new controller is proposed to ensure that the same velocity for all robots is eventually reached. A co-optimizing problem is further investigated to accomplish additional tasks, such as enhancing communication performance, while maintaining the collective behaviors of mobile robots. Simulation results are presented to show the effectiveness of the theoretical results.
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei
2010-07-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most existing solutions fail to do. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial...
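The LSB-tree builds on locality-sensitive hashing. The core LSH idea can be illustrated with a hedged sketch (this is not the LSB-tree itself; the random-hyperplane hash family and the single-table lookup are simplifying assumptions):

```python
import random

def hyperplane_hash(point, planes):
    # Sign pattern of dot products with random hyperplanes: points at a
    # small angle from each other tend to share the same bit signature.
    bits = 0
    for plane in planes:
        dot = sum(p * x for p, x in zip(plane, point))
        bits = (bits << 1) | (dot >= 0)
    return bits

def build_lsh_index(points, dim, n_bits=8, seed=42):
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    buckets = {}
    for i, p in enumerate(points):
        buckets.setdefault(hyperplane_hash(p, planes), []).append(i)
    return planes, buckets

def lsh_query(q, points, planes, buckets):
    # Scan only the colliding bucket; fall back to brute force when empty.
    candidates = buckets.get(hyperplane_hash(q, planes), range(len(points)))
    return min(candidates,
               key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], q)))
```

Real implementations query several independent tables (or, here, several LSB-trees) to boost recall, since a single table can miss the true neighbor.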
Allen, Victoria W; Shirasu-Hiza, Mimi
2018-01-01
Despite being pervasive, the control of programmed grooming is poorly understood. We addressed this gap by developing a high-throughput platform that allows long-term detection of grooming in Drosophila melanogaster. In our method, a k-nearest neighbors algorithm automatically classifies fly behavior and finds grooming events with over 90% accuracy in diverse genotypes. Our data show that flies spend ~13% of their waking time grooming, driven largely by two major internal programs. One of these programs regulates the timing of grooming and involves the core circadian clock components cycle, clock, and period. The second program regulates the duration of grooming and, while dependent on cycle and clock, appears to be independent of period. This emerging dual control model in which one program controls timing and another controls duration, resembles the two-process regulatory model of sleep. Together, our quantitative approach presents the opportunity for further dissection of mechanisms controlling long-term grooming in Drosophila. PMID:29485401
Bi, Jiang-lin; Wang, Wei; Li, Qi
2017-07-01
In this paper, the effects of next-nearest-neighbor exchange couplings on the magnetic and thermal properties of the ferrimagnetic mixed-spin (2, 5/2) Ising model on a 3D honeycomb lattice have been investigated by the use of Monte Carlo simulation. In particular, the influences of the exchange couplings (Ja, Jb, Jan) and the single-ion anisotropy (Da) on the phase diagrams, the total magnetization, the sublattice magnetization, the total susceptibility, the internal energy, and the specific heat have been discussed in detail. The results clearly show that the system can exhibit critical and compensation behavior when the next-nearest-neighbor exchange coupling is included. Many types of M curves, such as N-, Q-, P-, and L-types, have been observed, owing to the competition between the exchange coupling and the temperature. Our results are in excellent agreement with other theoretical and experimental works.
Ralko, Arnaud; Mila, Frédéric; Rousochatzakis, Ioannis
2018-03-01
The spin-1/2 Heisenberg model on the kagome lattice, which is closely realized in layered Mott insulators such as ZnCu3(OH)6Cl2, is one of the oldest and most enigmatic spin-1/2 lattice models. While the numerical evidence has accumulated in favor of a quantum spin liquid, the debate is still open as to whether it is a Z2 spin liquid with very short-range correlations (some kind of resonating valence bond spin liquid), or an algebraic spin liquid with power-law correlations. To address this issue, we have pushed the program started by Rokhsar and Kivelson in their derivation of the effective quantum dimer model description of Heisenberg models to unprecedented accuracy for the spin-1/2 kagome, by including all the most important virtual singlet contributions on top of the orthogonalization of the nearest-neighbor valence bond singlet basis. Quite remarkably, the resulting picture is a competition between a Z2 spin liquid and a diamond valence bond crystal with a 12-site unit cell, as in the density-matrix renormalization group simulations of Yan et al. Furthermore, we found that, on cylinders of finite diameter d, there is a transition between the Z2 spin liquid at small d and the diamond valence bond crystal at large d, the prediction of the present microscopic description for the two-dimensional lattice. These results show that, if the ground state of the spin-1/2 kagome antiferromagnet can be described by nearest-neighbor singlet dimers, it is a diamond valence bond crystal, and, a contrario, that, if the system is a quantum spin liquid, it has to involve long-range singlets, consistent with the algebraic spin liquid scenario.
Bernot, K.; Luzon, J.; Caneschi, A.; Gatteschi, D.; Sessoli, R.; Bogani, L.; Vindigni, A.; Rettori, A.; Pini, M. G.
2009-04-01
We investigate theoretically and experimentally the static magnetic properties of single crystals of the molecular-based single-chain magnet of formula [Dy(hfac)3NIT(C6H4OPh)]∞ comprising alternating Dy3+ and organic radicals. The magnetic molar susceptibility χM displays a strong angular variation for sample rotations around two directions perpendicular to the chain axis. A peculiar inversion between maxima and minima in the angular dependence of χM occurs on increasing temperature. Using information regarding the monomeric building block as well as an ab initio estimation of the magnetic anisotropy of the Dy3+ ion, this “anisotropy-inversion” phenomenon can be assigned to weak one-dimensional ferromagnetism along the chain axis. This indicates that antiferromagnetic next-nearest-neighbor interactions between Dy3+ ions dominate, despite the large Dy-Dy separation, over the nearest-neighbor interactions between the radicals and the Dy3+ ions. Measurements of the field dependence of the magnetization, both along and perpendicularly to the chain, and of the angular dependence of χM in a strong magnetic field confirm such an interpretation. Transfer-matrix simulations of the experimental measurements are performed using a classical one-dimensional spin model with antiferromagnetic Heisenberg exchange interaction and noncollinear uniaxial single-ion anisotropies favoring a canted antiferromagnetic spin arrangement, with a net magnetic moment along the chain axis. The fine agreement obtained with experimental data provides estimates of the Hamiltonian parameters, essential for further study of the dynamics of rare-earth-based molecular chains.
A γ dose distribution evaluation technique using the k-d tree for nearest neighbor searching
International Nuclear Information System (INIS)
Yuan Jiankui; Chen Weimin
2010-01-01
Purpose: The authors propose an algorithm based on the k-d tree for nearest neighbor searching to improve the γ calculation time for 2D and 3D dose distributions. Methods: The γ calculation method has been widely used for comparisons of dose distributions in clinical treatment plans and quality assurance. By specifying the acceptable dose and distance-to-agreement criteria, the method provides a quantitative measurement of the agreement between the reference and evaluation dose distributions. The γ value indicates the acceptability: in regions where γ ≤ 1, the predefined criterion is satisfied and thus the agreement is acceptable; otherwise, the agreement fails. Although the concept of the method is not complicated and a quick naive implementation is straightforward, an efficient and robust implementation is not trivial. Recent algorithms based on exhaustive searching within a maximum radius, the geometric Euclidean distance, and the table lookup method have been proposed to improve the computational time for multidimensional dose distributions. Motivated by the fact that finding a nearest neighbor can be an O(log N) operation with a k-d tree, where N is the total number of dose points, the authors propose an algorithm based on the k-d tree for the γ evaluation in this work. Results: In the experiments, the authors found that the average k-d tree construction time per reference point is O(log N), while the nearest neighbor searching time per evaluation point is proportional to O(N^(1/k)), where k is between 2 and 3 for two-dimensional and three-dimensional dose distributions, respectively. Conclusions: Compared with other algorithms such as exhaustive search and the sorted list O(N) method, the k-d tree algorithm for γ evaluation is much more efficient.
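A minimal k-d tree with branch-and-bound nearest-neighbor search, the data structure exploited above for fast γ lookups, can be sketched as follows. This is illustrative only; a clinical implementation would index dose-grid points and combine the dose difference and distance-to-agreement into the γ criterion.

```python
def build_kdtree(points, depth=0):
    # Recursively split on alternating axes; a node is
    # (point, left_subtree, right_subtree, split_axis).
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1),
            axis)

def nearest(node, target, best=None):
    # Branch-and-bound descent: a subtree is pruned when the splitting
    # plane is farther from the target than the current best distance.
    # Returns (point, squared_distance).
    if node is None:
        return best
    point, left, right, axis = node
    d = sum((a - b) ** 2 for a, b in zip(point, target))
    if best is None or d < best[1]:
        best = (point, d)
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, best)
    if diff ** 2 < best[1]:
        best = nearest(far, target, best)
    return best
```

The pruning step (`diff ** 2 < best[1]`) is what brings the average query cost below brute force, matching the sublinear searching time reported above.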
Penerapan Metode K-nearest Neighbor pada Penentuan Grade Dealer Sepeda Motor
Leidiyana, Henny
2017-01-01
Mutually beneficial cooperation is very important for a leasing company and its dealers. Incentives for marketing are given in order to attract as many consumers as possible. But sometimes the surveyor's objectivity is lost due to conspiracy between marketing staff and surveyors in the field. To overcome this, leasing companies use various methods, one of which is ranking the dealers. In this study, the k-Nearest Neighbor method and the Euclidean distance measurement are applied to determine the grade deal...
Fast and Accuracy Control Chart Pattern Recognition using a New cluster-k-Nearest Neighbor
Samir Brahim Belhaouari
2009-01-01
By taking advantage of both k-NN, which is highly accurate, and the K-means cluster, which is able to reduce the classification time, we can introduce Cluster-k-Nearest Neighbor as a "variable k"-NN dealing with the centroid or mean point of all subclasses generated by a clustering algorithm. In general, the K-means clustering algorithm is not stable in terms of accuracy; for that reason we develop another algorithm for clustering our space which gives a higher accuracy than K-means cluster, less ...
Seismic clusters analysis in Northeastern Italy by the nearest-neighbor approach
Peresan, Antonella; Gentili, Stefania
2018-01-01
The main features of earthquake clusters in Northeastern Italy are explored, with the aim of getting new insights on local-scale patterns of seismicity in the area. The study is based on a systematic analysis of robustly and uniformly detected seismic clusters, which are identified by a statistical method based on nearest-neighbor distances of events in the space-time-energy domain. The method permits us to highlight and investigate the internal structure of earthquake sequences, and to differentiate the spatial properties of seismicity according to the different topological features of the cluster structure. To analyze the seismicity of Northeastern Italy, we use information from local OGS bulletins, compiled at the National Institute of Oceanography and Experimental Geophysics since 1977. A preliminary reappraisal of the earthquake bulletins is carried out and the area of sufficient completeness is outlined. Various techniques are considered to estimate the scaling parameters that characterize earthquake occurrence in the region, namely the b-value and the fractal dimension of the epicenter distribution, required for the application of the nearest-neighbor technique. Specifically, average robust estimates of the parameters of the Unified Scaling Law for Earthquakes (USLE) are assessed for the whole outlined region and are used to compute the nearest-neighbor distances. Cluster identification by the nearest-neighbor method turns out to be quite reliable and robust with respect to the minimum magnitude cutoff of the input catalog; the identified clusters are well consistent with those obtained from manual aftershock identification of selected sequences. We demonstrate that the earthquake clusters have distinct preferred geographic locations, and we identify two areas that differ substantially in the examined clustering properties. Specifically, burst-like sequences are associated with the north-western part and swarm-like sequences with the south-eastern part of the study area.
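For orientation, the nearest-neighbor space-time-energy distance used in this family of methods (following Zaliapin and Ben-Zion) has the form η = t · r^df · 10^(−b·m), where t is the inter-event time, r the epicentral distance, df the fractal dimension of epicenters, and b the Gutenberg-Richter b-value. A toy sketch with hypothetical parameter values and a flat-geometry distance:

```python
import math

def nn_distance(parent, child, b=1.0, df=1.6):
    # Space-time-energy distance eta = t * r^df * 10^(-b * m_parent);
    # only events occurring before the child are admissible parents.
    # The b and df values here are illustrative placeholders.
    t = child['t'] - parent['t']
    if t <= 0:
        return float('inf')
    r = math.hypot(child['x'] - parent['x'], child['y'] - parent['y'])
    return t * (r ** df) * 10 ** (-b * parent['m'])

def nearest_parent(catalog, i):
    # Each event's nearest neighbor is the earlier event minimizing eta.
    return min((j for j in range(len(catalog)) if j != i),
               key=lambda j: nn_distance(catalog[j], catalog[i]),
               default=None)
```

Thresholding the resulting η values separates clustered events (aftershocks, swarms) from background seismicity, which is the step that yields the burst-like versus swarm-like classification discussed above.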
A Novel Quantum Solution to Privacy-Preserving Nearest Neighbor Query in Location-Based Services
Luo, Zhen-yu; Shi, Run-hua; Xu, Min; Zhang, Shun
2018-04-01
We present a cheating-sensitive quantum protocol for Privacy-Preserving Nearest Neighbor Query based on Oblivious Quantum Key Distribution and Quantum Encryption. Compared with the related classical protocols, our proposed protocol has higher security, because its security rests on basic physical principles of quantum mechanics instead of assumptions of computational difficulty. In particular, our protocol takes single photons as quantum resources and only needs to perform single-photon projective measurements. Therefore, it is feasible to implement this protocol with present technologies.
International Nuclear Information System (INIS)
Fang Xiaoling; Yu Hongjie; Jiang Zonglai
2009-01-01
The chaotic synchronization of Hindmarsh-Rose neural networks linked by a nonlinear coupling function is discussed. HR neural networks with nearest-neighbor diffusive coupling are treated as numerical examples. By the construction of a special nonlinear coupling term, the chaotic system is coupled symmetrically. For three- and four-neuron networks, a region of coupling strength corresponding to full synchronization is given, and the effects of network structure and noise position are analyzed. For networks of five or more neurons, full synchronization is very difficult to realize. All the results have been verified by calculation of the maximum conditional Lyapunov exponent.
CHIKH, Mohamed Amine; SAIDI, Meryem; SETTOUTI, Nesma
2012-01-01
The use of expert systems and artificial intelligence techniques in disease diagnosis has been increasing gradually. The Artificial Immune Recognition System (AIRS) is one of the methods used in medical classification problems, and AIRS2 is a more efficient version of the AIRS algorithm. In this paper, we use a modified AIRS2 called MAIRS2, where we replace the K-nearest neighbors algorithm with the fuzzy K-nearest neighbors algorithm to improve the diagnostic accuracy of diabetes diseases. The diabetes disea...
Sistem Rekomendasi Pada E-Commerce Menggunakan K-Nearest Neighbor
Directory of Open Access Journals (Sweden)
Chandra Saha Dewa Prasetya
2017-09-01
The growing amount of product information available on the internet brings challenges to both customers and online businesses in the e-commerce environment. Customers often have difficulty when looking for products on the internet because of the sheer number of products sold online. In addition, online businesses often experience difficulties because they hold large amounts of data about products, customers, and transactions, which makes it hard to promote the right product to a particular customer target. Recommendation systems were developed to address these problems with various methods such as Collaborative Filtering, Content-Based, and Hybrid. The collaborative filtering method uses customers' rating data, content-based uses product content such as title or description, and hybrid uses both as the basis of the recommendation. In this research, the k-nearest neighbor algorithm is used to determine the top-n product recommendations for each buyer. The results show that the Content-Based method outperforms the other methods because of data sparsity, that is, the condition where the number of ratings given by customers is relatively small compared to the number of products available in the e-commerce site. Keywords: recommendation system, k-nearest neighbor, collaborative filtering, content based.
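A user-based variant of the k-nearest-neighbor recommendation step can be sketched as below. This is an assumption-laden illustration (cosine similarity over rating dictionaries, similarity-weighted scoring), not the exact method of the study:

```python
import math

def cosine(u, v):
    # Similarity between two users' rating vectors (dicts: item -> rating).
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(r * r for r in u.values()))
           * math.sqrt(sum(r * r for r in v.values())))
    return num / den if den else 0.0

def top_n_recommend(target, others, k=2, n=3):
    # Score items the target has not rated by the similarity-weighted
    # ratings of the k most similar users, then return the top n items.
    neighbors = sorted(others, key=lambda v: cosine(target, v), reverse=True)[:k]
    scores = {}
    for v in neighbors:
        w = cosine(target, v)
        for item, rating in v.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + w * rating
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

An item-based or content-based variant would replace the user-user cosine with similarity between item feature vectors (e.g., title or description terms).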
Directory of Open Access Journals (Sweden)
D.A. Adeniyi
2016-01-01
The major problem of many on-line web sites is the presentation of many choices to the client at a time; this usually results in a strenuous and time-consuming task of finding the right product or information on the site. In this work, we present a study of automatic web usage data mining and a recommendation system based on the current user's behavior through his/her click-stream data on a newly developed Really Simple Syndication (RSS) reader website, in order to provide relevant information to the individual without explicitly asking for it. The K-Nearest-Neighbor (KNN) classification method has been trained to be used on-line and in real time to identify clients'/visitors' click-stream data, matching it to a particular user group and recommending a tailored browsing option that meets the need of the specific user at a particular time. To achieve this, web users' RSS address files were extracted, cleansed, formatted, and grouped into meaningful sessions, and a data mart was developed. Our results show that the K-Nearest Neighbor classifier is transparent, consistent, straightforward, simple to understand, likely to possess desirable qualities, and easier to implement than most other machine learning techniques, specifically when there is little or no prior knowledge about the data distribution.
Sequential nearest-neighbor effects on computed 13Cα chemical shifts
Energy Technology Data Exchange (ETDEWEB)
Vila, Jorge A. [Cornell University, Baker Laboratory of Chemistry and Chemical Biology (United States); Serrano, Pedro; Wuethrich, Kurt [The Scripps Research Institute, Department of Molecular Biology (United States); Scheraga, Harold A., E-mail: has5@cornell.ed [Cornell University, Baker Laboratory of Chemistry and Chemical Biology (United States)
2010-09-15
To evaluate sequential nearest-neighbor effects on quantum-chemical calculations of 13Cα chemical shifts, we selected the structure of the nucleic acid binding (NAB) protein from the SARS coronavirus determined by NMR in solution (PDB id 2K87). NAB is a 116-residue α/β protein, which contains 9 prolines and has 50% of its residues located in loops and turns. Overall, the results presented here show that sizeable nearest-neighbor effects are seen only for residues preceding proline, where Pro introduces an overestimation, on average, of 1.73 ppm in the computed 13Cα chemical shifts. A new ensemble of 20 conformers representing the NMR structure of the NAB, which was calculated with an input containing backbone torsion angle constraints derived from the theoretical 13Cα chemical shifts as supplementary data to the NOE distance constraints, exhibits very similar topology and comparable agreement with the NOE constraints as the published NMR structure. However, the two structures differ in the patterns of differences between observed and computed 13Cα chemical shifts, Δca,i, for the individual residues along the sequence. This indicates that the Δca,i values for the NAB protein are primarily a consequence of the limited sampling by the bundles of 20 conformers used, as in common practice, to represent the two NMR structures, rather than of local flaws in the structures.
International Nuclear Information System (INIS)
Juang, M.T.; Wager, J.F.; Van Vechten, J.A.
1988-01-01
Drain current drift in InP metal-insulator-semiconductor devices displays distinct activation energies and pre-exponential factors. The authors have given evidence that these result from two physical mechanisms: thermionic tunneling of electrons into native oxide traps and phosphorus vacancy nearest-neighbor hopping (PVNNH). They here present a computer simulation of the effect of the PVNNH mechanism on flatband voltage shift vs. bias stress time measurements. The simulation is based on an analysis of the kinetics of the PVNNH defect reaction sequence, in which the electron concentration in the channel is related to the applied bias by a solution of the Poisson equation. The simulation demonstrates quantitatively that the temperature dependence of the flatband shift is associated with PVNNH for temperatures above room temperature.
False-nearest-neighbors algorithm and noise-corrupted time series
International Nuclear Information System (INIS)
Rhodes, C.; Morari, M.
1997-01-01
The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job of predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented. copyright 1997 The American Physical Society
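The FNN test described above can be sketched in a few lines. This is a minimal brute-force illustration, not the paper's exact statistic: the function name, the ratio criterion, and the threshold `r_tol` are simplifying assumptions (the paper is precisely about choosing that threshold correctly under noise).

```python
import math

def false_nearest_neighbors(x, d, tau=1, r_tol=10.0):
    """Fraction of false nearest neighbors at embedding dimension d.

    A neighbor pair counts as 'false' if extending the delay vectors by
    one coordinate stretches their separation by more than r_tol (an
    illustrative threshold, not the theoretically correct choice)."""
    n = len(x) - d * tau  # number of vectors that can still be extended
    vecs = [[x[i + j * tau] for j in range(d)] for i in range(n)]
    false = 0
    for i in range(n):
        # brute-force nearest neighbor of vecs[i] in d dimensions
        best_j, best_dist = -1, float("inf")
        for j in range(n):
            if j != i:
                dist = math.dist(vecs[i], vecs[j])
                if dist < best_dist:
                    best_j, best_dist = j, dist
        # compare the extra (d+1)-th coordinate against the d-dim distance
        extra = abs(x[i + d * tau] - x[best_j + d * tau])
        if best_dist == 0 or extra > r_tol * best_dist:
            false += 1
    return false / n
```

On a noise-free monotone ramp every neighbor stays close after extension, so the false-neighbor fraction is zero even at dimension one.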
Directory of Open Access Journals (Sweden)
Firdaus Firdaus
2017-12-01
Non-invasive blood pressure measurement devices are widely available in the marketplace. Most of these devices use the oscillometric principle: they store and analyze oscillometric waveforms during cuff deflation to obtain the mean arterial pressure, systolic blood pressure, and diastolic blood pressure. These pressure values are determined from the oscillometric waveform envelope. Several methods for detecting the envelope of oscillometric pulses rely on complex algorithms that require a large memory capacity and are therefore difficult to run on a low-memory embedded system. Here, a simple nearest-neighbor interpolation method is applied to oscillometric pulse envelope detection in non-invasive blood pressure measurement using a microcontroller such as the ATmega328. The experiment yields an average computation time of 59 seconds, with a 3.6% average percent error in blood pressure measurement.
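Nearest-neighbor interpolation of the envelope, as used above, amounts to copying the amplitude of the closest detected pulse peak to every sample, which is why it fits on a small microcontroller. A hypothetical sketch (the function name and the `(index, amplitude)` peak representation are assumptions, not the paper's implementation):

```python
def envelope_nn(peaks, n_samples):
    """Nearest-neighbor interpolation of an oscillometric envelope:
    each output sample copies the amplitude of the nearest detected
    pulse peak. peaks is a list of (sample_index, amplitude) pairs."""
    return [min(peaks, key=lambda p: abs(p[0] - i))[1]
            for i in range(n_samples)]
```

Unlike spline or polynomial envelope fits, this needs no extra memory beyond the peak list itself.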
Nearest neighbor spacing distributions of low-lying levels of vibrational nuclei
International Nuclear Information System (INIS)
Abul-Magd, A.Y.; Simbel, M.H.
1996-01-01
Energy-level statistics are considered for nuclei whose Hamiltonian is divided into intrinsic and collective-vibrational terms. The levels are described as a random superposition of independent sequences, each corresponding to a given number of phonons. The intrinsic motion is assumed chaotic. The level spacing distribution is found to be intermediate between the Wigner and Poisson distributions and similar in form to the spacing distribution of a system with classical phase space divided into separate regular and chaotic domains. We have obtained approximate expressions for the nearest neighbor spacing and cumulative spacing distributions that are valid when the level density is described by a constant-temperature formula and involve no additional free parameters. These expressions achieve good agreement with the experimental spacing distributions. copyright 1996 The American Physical Society
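The two limiting distributions between which the paper's result interpolates are standard: the Poisson density exp(-s) for uncorrelated (regular) spectra and the Wigner surmise (π/2)s·exp(-πs²/4) for fully chaotic (GOE) spectra, both normalized with mean spacing 1. The naive weighted mixture below is only an illustration of the two limits; the paper derives a more careful intermediate distribution from superposed sequences.

```python
import math

def poisson_spacing(s):
    """Spacing density of uncorrelated (regular) levels."""
    return math.exp(-s)

def wigner_spacing(s):
    """Wigner surmise for the spacing density of a chaotic (GOE) system."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

def mixture(s, q):
    """Naive linear interpolation between the two limits (illustrative
    only; not the paper's intermediate distribution)."""
    return q * poisson_spacing(s) + (1.0 - q) * wigner_spacing(s)
```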
K-Nearest Neighbor Intervals Based AP Clustering Algorithm for Large Incomplete Data
Directory of Open Access Journals (Sweden)
Cheng Lu
2015-01-01
The Affinity Propagation (AP) algorithm is an effective algorithm for clustering analysis, but it cannot be directly applied to incomplete data. In view of the prevalence of missing data and the uncertainty of missing attributes, we put forward a modified AP clustering algorithm based on K-nearest neighbor intervals (KNNI) for incomplete data. Based on an Improved Partial Data Strategy, the proposed algorithm estimates the KNNI representation of missing attributes by using the attribute distribution information of the available data. The similarity function is then adapted to interval data, making the improved AP algorithm applicable to incomplete data. Experiments on several UCI datasets show that the proposed algorithm achieves impressive clustering results.
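The core KNNI idea, representing a missing attribute as an interval spanned by the K nearest complete records, can be sketched as follows. This is a hypothetical minimal version (function name, tuple representation, and the plain Euclidean distance over observed attributes are assumptions; the paper's Improved Partial Data Strategy is more involved):

```python
import math

def knn_interval(data, row, missing_idx, k=2):
    """KNNI sketch: represent a missing attribute as the [min, max]
    interval of that attribute among the k nearest complete rows,
    comparing rows only on the observed attributes."""
    obs = [i for i in range(len(row)) if i != missing_idx]
    complete = [r for r in data if all(v is not None for v in r)]
    nearest = sorted(
        complete,
        key=lambda r: math.dist([r[i] for i in obs], [row[i] for i in obs]),
    )[:k]
    vals = [r[missing_idx] for r in nearest]
    return (min(vals), max(vals))
```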
Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance
Ruan, Yue; Xue, Xiling; Liu, Heng; Tan, Jianing; Li, Xi
2017-11-01
The K-nearest neighbors (KNN) algorithm is a common classification algorithm, and also a subroutine in various complicated machine learning tasks. In this paper, we present a quantum algorithm (QKNN) implementing this algorithm based on the metric of Hamming distance. We put forward a quantum circuit for computing the Hamming distance between a testing sample and each feature vector in the training set. Taking advantage of this method, we realize a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k nearest neighbors. As a result, QKNN achieves O(n³) performance, which depends only on the dimension of the feature vectors, and high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
Tibi, R.; Young, C. J.; Gonzales, A.; Ballard, S.; Encarnacao, A. V.
2016-12-01
The matched filtering technique, involving the cross-correlation of a waveform of interest with archived signals from a template library, has proven to be a powerful tool for detecting events in regions with repeating seismicity. However, waveform correlation is computationally expensive, and therefore impractical for large template sets unless dedicated distributed computing hardware and software are used. In this study, we introduce an Approximate Nearest Neighbor (ANN) approach that enables the use of very large template libraries for waveform correlation without requiring a complex distributed computing system. Our method begins with a projection into a reduced-dimensionality space based on correlation with a randomized subset of the full template archive. Searching for a specified number of nearest neighbors is accomplished by using randomized K-dimensional trees. We used the approach to search for matches to each of 2700 analyst-reviewed signal detections reported for May 2010 for the IMS station MKAR. The template library in this case consists of a dataset of more than 200,000 analyst-reviewed signal detections for the same station from 2002-2014 (excluding May 2010). Of these signal detections, 60% are teleseismic first P, and 15% regional phases (Pn, Pg, Sn, and Lg). The analyses performed on a standard desktop computer show that the proposed approach searches the large template libraries about 20 times faster than the standard full linear search, while achieving recall rates greater than 80%, with the recall rate increasing for higher correlation values. To decide whether to confirm a match, we use a hybrid method involving a cluster approach for queries with two or more matches, and the correlation score for single matches. Of the signal detections that passed our confirmation process, 52% were teleseismic first P, and 30% were regional phases.
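The first stage of the approach above, projecting every template onto its correlations with a small random subset of the archive and then searching in that reduced space, can be sketched as a toy. All names are hypothetical, the "correlation" is an un-normalized dot product, and brute force replaces the randomized k-d trees the study actually uses:

```python
import math
import random

def project(signal, basis):
    """Reduced-dimensionality representation: the (un-normalized)
    correlation of a signal with each member of a small random subset
    of the template archive."""
    return [sum(a * b for a, b in zip(signal, t)) for t in basis]

def ann_search(templates, query, n_basis=3, seed=0):
    """Toy version of the search stage: find the template whose
    projection is closest to the query's projection. (The paper indexes
    the projections with randomized k-d trees; brute force here.)"""
    random.seed(seed)
    basis = random.sample(templates, n_basis)
    proj = [project(t, basis) for t in templates]
    q = project(query, basis)
    return min(range(len(templates)), key=lambda i: math.dist(proj[i], q))
```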
Distance-Constraint k-Nearest Neighbor Searching in Mobile Sensor Networks.
Han, Yongkoo; Park, Kisung; Hong, Jihye; Ulamin, Noor; Lee, Young-Koo
2015-07-27
The κ-nearest neighbors (κNN) query is an important spatial query in mobile sensor networks. In this work we extend κNN with a distance constraint, calling it an l-distant κ-nearest-neighbors (l-κNN) query, which finds the κ sensor nodes nearest to a query point that are also at least distance l from each other. The query results indicate the objects nearest to the area of interest that are scattered at least l apart. The l-κNN query can be used in most κNN applications when well-distributed query results are needed. To process an l-κNN query, we must discover all sets of κNN sensor nodes and then find, in each set, all pairs of sensor nodes separated by at least distance l. Given the limited battery and computing power of sensor nodes, this l-κNN query processing is prohibitively expensive in terms of energy consumption. In this paper, we propose a greedy approach for l-κNN query processing in mobile sensor networks. The key idea of the proposed approach is to divide the search space into subspaces whose sides all have length l. By selecting κ sensor nodes from the subspaces near the query point, we guarantee accurate query results for l-κNN. In our experiments, we show that the proposed method exhibits superior performance compared with a post-processing method based on the κNN query in terms of energy efficiency, query latency, and accuracy.
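A centralized greedy baseline for the l-κNN semantics is easy to state: scan nodes in order of distance to the query and keep one only if it is at least l from everything kept so far. This sketch ignores the paper's grid subdivision and in-network processing; it only illustrates the query's result contract.

```python
import math

def l_knn(nodes, query, k, l):
    """Greedy l-kNN baseline: nearest-first scan, keeping a node only
    if it is at least l away from every node already selected."""
    result = []
    for p in sorted(nodes, key=lambda p: math.dist(p, query)):
        if all(math.dist(p, q) >= l for q in result):
            result.append(p)
            if len(result) == k:
                break
    return result
```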
Elliptic Painlevé equations from next-nearest-neighbor translations on the E_8^{(1)} lattice
Joshi, Nalini; Nakazono, Nobutaka
2017-07-01
The well known elliptic discrete Painlevé equation of Sakai is constructed by a standard translation on the E_8^{(1)} lattice, given by nearest neighbor vectors. In this paper, we give a new elliptic discrete Painlevé equation obtained by translations along next-nearest-neighbor vectors. This equation is a generic (8-parameter) version of a 2-parameter elliptic difference equation found by reduction from Adler's partial difference equation, the so-called Q4 equation. We also provide a projective reduction of the well known equation of Sakai.
Wang, ShaoPeng; Zhang, Yu-Hang; Lu, Jing; Cui, Weiren; Hu, Jerry; Cai, Yu-Dong
2016-01-01
The development of biochemistry and molecular biology has revealed an increasingly important role of compounds in several biological processes. Like the aptamer-protein interaction, the aptamer-compound interaction attracts increasing attention. However, it is time-consuming to select proper aptamers against compounds using traditional methods, such as exponential enrichment. Thus, there is an urgent need to design effective computational methods for searching for effective aptamers against compounds. This study attempted to extract important features for aptamer-compound interactions using feature selection methods, such as Maximum Relevance Minimum Redundancy, as well as incremental feature selection. Each aptamer-compound pair was represented by properties derived from the aptamer and compound, including frequencies of single nucleotides and dinucleotides for the aptamer, as well as the constitutional, electrostatic, quantum-chemical, and space conformational descriptors of the compounds. As a result, some important features were obtained. To confirm the importance of the obtained features, we further discussed the associations between them and aptamer-compound interactions. Simultaneously, an optimal prediction model based on the nearest neighbor algorithm was built to identify aptamer-compound interactions, which has the potential to be a useful tool for the identification of novel aptamer-compound interactions. The program is available upon request.
Directory of Open Access Journals (Sweden)
Hyung-Ju Cho
2012-01-01
Given two positive parameters k and r, a constrained k-nearest neighbor (CkNN) query returns the k closest objects within a network distance r of the query location in road networks. In terms of the scalability of monitoring these CkNN queries, existing solutions based on central processing at a server suffer from a sudden and sharp rise in server load as well as messaging cost as the number of queries increases. In this paper, we propose a distributed and scalable scheme called DAEMON for the continuous monitoring of CkNN queries in road networks. Our query processing is distributed among clients (query objects) and the server. Specifically, the server evaluates CkNN queries issued at intersections of road segments, retrieves the objects on the road segments between neighboring intersections, and sends responses to the query objects. Finally, each client builds its own query result from this server response. As a result, our distributed scheme achieves close-to-optimal communication costs and scales well to large numbers of monitoring queries. Exhaustive experimental results demonstrate that our scheme substantially outperforms its competitor in terms of query processing time and messaging cost.
International Nuclear Information System (INIS)
Vorob'ev, V.S.
2003-01-01
We suggest a concept of multiple disordering scaling of the crystalline state. Such a scaling procedure applied to a crystal leads to the liquid and (in the low density limit) gas states. This approach provides an explanation for the high value of configurational (common) entropy of liquefied noble gases, which can be deduced from experimental data. We use the generalized nearest-neighbor approach to calculate the free energy and pressure of Lennard-Jones systems after performing this scaling procedure. These thermodynamic functions depend only on one parameter characterizing the disordering. Condensed states of the system (liquid and solid) correspond to small values of this parameter. When this parameter tends to unity, we get an asymptotically exact equation of state for a gas involving the second virial coefficient. A reasonable choice of the values for the disordering parameter (ranging between zero and unity) allows us to find the lines of coexistence between different phase states in the Lennard-Jones systems, which are in good agreement with the available experimental data.
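For reference, the pair potential underlying the Lennard-Jones systems discussed above is the standard 12-6 form, with its minimum of depth ε at r = 2^(1/6)σ. Only the potential is sketched here; the paper's free-energy and pressure calculations are not reproduced.

```python
def lj(r, eps=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones pair potential:
    4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)
```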
An RFID Indoor Positioning Algorithm Based on Bayesian Probability and K-Nearest Neighbor.
Xu, He; Ding, Ye; Li, Peng; Wang, Ruchuan; Li, Yizhu
2017-08-05
The Global Positioning System (GPS) is widely used in outdoor environmental positioning. However, GPS cannot support indoor positioning because there is no positioning signal in an indoor environment. Nowadays, many situations require indoor positioning, such as searching for a book in a library, looking for luggage in an airport, emergency navigation for fire alarms, robot location, etc. Many technologies, such as ultrasound, sensors, Bluetooth, WiFi, magnetic fields, Radio Frequency Identification (RFID), etc., are used to perform indoor positioning. Compared with other technologies, RFID used in indoor positioning is more cost- and energy-efficient. The traditional RFID indoor positioning algorithm LANDMARC utilizes a Received Signal Strength (RSS) indicator to track objects. However, the RSS value is easily affected by environmental noise and other interference. In this paper, our purpose is to reduce the location fluctuation and error caused by multipath and environmental interference in LANDMARC. We propose a novel indoor positioning algorithm based on Bayesian probability and K-Nearest Neighbor (BKNN). The experimental results show that the Gaussian filter can remove some abnormal RSS values. The proposed BKNN algorithm has the smallest location error compared with the Gaussian-based algorithm, LANDMARC, and an improved KNN algorithm. The average error in location estimation is about 15 cm using our method.
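Two building blocks of such RSS-based positioning can be sketched simply: an outlier filter that drops readings far from the sample mean (a plain stand-in for the paper's Gaussian filtering; the cutoff `n_sigma` is my assumption), and a KNN position estimate that averages the positions of the reference tags with the closest RSS vectors (LANDMARC additionally weights them; the Bayesian step is omitted).

```python
import math
import statistics

def gaussian_filter(rss, n_sigma=2.0):
    """Drop RSS samples more than n_sigma standard deviations from the
    mean -- a simple stand-in for the paper's Gaussian filtering step."""
    mu = statistics.mean(rss)
    sigma = statistics.pstdev(rss)
    return [r for r in rss if sigma == 0 or abs(r - mu) <= n_sigma * sigma]

def knn_position(ref_tags, target_rss, k=3):
    """Estimate the target position as the centroid of the k reference
    tags whose RSS vectors are closest to the target's RSS vector."""
    nearest = sorted(ref_tags, key=lambda t: math.dist(t["rss"], target_rss))[:k]
    return (sum(t["pos"][0] for t in nearest) / k,
            sum(t["pos"][1] for t in nearest) / k)
```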
Hu, Weiwei; Tan, Ying
2016-12-01
The nearest neighbor (NN) classifier suffers from high time complexity when classifying a test instance because it must search the whole training set. Prototype generation is a widely used approach to reduce the classification time: it generates a small set of prototypes to classify a test instance instead of using the whole training set. In this paper, particle swarm optimization is applied to prototype generation and two novel methods for improving the classification performance are presented: 1) a fitness function named error rank and 2) a multiobjective (MO) optimization strategy. Error rank is proposed to enhance the generalization ability of the NN classifier by taking the ranks of misclassified instances into consideration when designing the fitness function. The MO optimization strategy pursues the performance on multiple subsets of data simultaneously, in order to keep the classifier from overfitting the training set. Experimental results over 31 UCI data sets and 59 additional data sets show that the proposed algorithm outperforms nearly 30 existing prototype generation algorithms.
Energy Technology Data Exchange (ETDEWEB)
Van de Wiele, Ben [Department of Electrical Energy, Systems and Automation, Ghent University, Technologiepark 913, B-9052 Ghent-Zwijnaarde (Belgium); Fin, Samuele [Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, 44122 Ferrara (Italy); Pancaldi, Matteo [CIC nanoGUNE, E-20018 Donostia-San Sebastian (Spain); Vavassori, Paolo [CIC nanoGUNE, E-20018 Donostia-San Sebastian (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Sarella, Anandakumar [Physics Department, Mount Holyoke College, 211 Kendade, 50 College St., South Hadley, Massachusetts 01075 (United States); Bisero, Diego [Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, 44122 Ferrara (Italy); CNISM, Unità di Ferrara, 44122 Ferrara (Italy)
2016-05-28
Various proposals for future magnetic memories, data processing devices, and sensors rely on a precise control of the magnetization ground state and magnetization reversal process in periodically patterned media. In finite dot arrays, such control is hampered by the magnetostatic interactions between the nanomagnets, which lead to non-uniform magnetization state distributions throughout the sample during reversal. In this paper, we show how, during reversal, typical geometric arrangements of dots in an identical magnetization state appear that originate in the dominance of either global configurational anisotropy or nearest-neighbor magnetostatic interactions, depending on the fields at which the magnetization reversal sets in. Based on our findings, we propose design rules to obtain uniform magnetization state distributions throughout the array, and also suggest future research directions to achieve non-uniform state distributions of interest, e.g., when aiming at guiding spin wave edge-modes through dot arrays. Our insights are based on magneto-optical Kerr effect and magnetic force microscopy measurements as well as extensive micromagnetic simulations.
CATEGORIZATION OF GELAM, ACACIA AND TUALANG HONEY ODOR PROFILE USING K-NEAREST NEIGHBORS
Directory of Open Access Journals (Sweden)
Nurdiyana Zahed
2018-02-01
Honey authenticity with respect to honey type is an issue of great importance and interest in agriculture. Several recent studies document medical uses for specific types of honey. However, it is quite challenging to classify different types of honey with the naked eye. This work demonstrates a successful electronic nose (E-nose) application as an instrument for identifying the odor profile patterns of three honeys common in Malaysia (Gelam, Acacia and Tualang honey). The applied E-nose produces a signal for odor measurement in the form of numeric resistance (Ω). The data readings were pre-processed with a normalization technique to give the unique features a standardized scale. Mean features were extracted, and boxplots were used as the statistical tool to present the data patterns of the three types of honey. The extracted mean features were fed into a K-nearest neighbors classifier as input features and evaluated using several splitting ratios. Excellent results were obtained, with 100% classification accuracy, sensitivity and specificity from KNN using weight k=1, a 90:10 splitting ratio and Euclidean distance. The findings confirm the ability of the KNN classifier to classify different honey types from E-nose calibration. Compared with other classifiers, KNN requires less parameter optimization and still achieves promising results.
Evidence of codon usage in the nearest neighbor spacing distribution of bases in bacterial genomes
Higareda, M. F.; Geiger, O.; Mendoza, L.; Méndez-Sánchez, R. A.
2012-02-01
Statistical analysis of whole genomic sequences usually assumes a homogeneous nucleotide density throughout the genome, an assumption that has been proved incorrect for several organisms, since the nucleotide density is only locally homogeneous. To avoid giving a single numerical value to this variable property, we propose the use of spectral statistics, which characterizes the density of nucleotides as a function of their position in the genome. We show that the cumulative density of bases in bacterial genomes can be separated into an average (or secular) part plus a fluctuating part. Bacterial genomes can be divided into two groups according to the qualitative description of their secular part: linear and piecewise linear. These two groups of genomes show different properties when their nucleotide spacing distribution is studied. In order to analyze genomes having a variable nucleotide density statistically, unfolding is necessary, i.e., a separation between the secular part and the fluctuations. The unfolding allows an adequate comparison with the statistical properties of other genomes. With this methodology, four genomes were analyzed: Burkholderia, Bacillus, Clostridium and Corynebacterium. Interestingly, the nearest neighbor spacing distributions or detrended distance distributions are very similar for species within the same genus, but they are very different for species from different genera. This difference can be attributed to the difference in codon usage.
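For the linear-secular-part case described above, unfolding can be sketched directly: fit a straight line to the cumulative density (position vs. rank), map each position through that line, and take successive differences, which then have mean spacing close to 1. A minimal sketch under that linearity assumption (the paper also treats piecewise-linear secular parts, which this does not handle):

```python
def unfolded_spacings(positions):
    """Unfold a spectrum with a linear secular part: least-squares fit
    of rank against position, map positions through the fit, and
    return successive differences (mean spacing ~ 1)."""
    xs = sorted(positions)
    n = len(xs)
    mx = sum(xs) / n
    my = (n - 1) / 2.0  # mean of the ranks 0..n-1
    slope = (sum((x - mx) * (i - my) for i, x in enumerate(xs))
             / sum((x - mx) ** 2 for x in xs))
    unfolded = [slope * (x - mx) + my for x in xs]
    return [unfolded[i + 1] - unfolded[i] for i in range(n - 1)]
```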
Wang, Xueyi
2012-02-08
The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high-dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the cluster nearest to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction in distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high-dimensional spaces.
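The pruning rule at the heart of kMkNN is the triangle inequality: since d(q, x) ≥ |d(q, c) − d(x, c)|, storing each point's distance to its cluster center lets the search skip exact distance computations whenever that lower bound already exceeds the current best. A simplified sketch (centers are given rather than fit by k-means, and only the 1-nearest-neighbor case is shown):

```python
import math

def build_clusters(points, centers):
    """Buildup stage (simplified): assign each point to its nearest
    center and store sorted (distance-to-center, point) pairs.
    The paper obtains the centers with k-means; here they are given."""
    clusters = [[] for _ in centers]
    for p in points:
        i = min(range(len(centers)), key=lambda c: math.dist(p, centers[c]))
        clusters[i].append((math.dist(p, centers[i]), p))
    for c in clusters:
        c.sort()
    return clusters

def nearest(query, centers, clusters):
    """Search stage: d(q, x) >= |d(q, c) - d(x, c)|, so the stored
    center distances let us skip most exact distance computations."""
    best, best_d = None, float("inf")
    for i, cluster in enumerate(clusters):
        dqc = math.dist(query, centers[i])
        for dxc, x in cluster:
            if abs(dqc - dxc) >= best_d:
                continue  # lower bound already beats nothing: skip
            d = math.dist(query, x)
            if d < best_d:
                best, best_d = x, d
    return best
```

Because every skipped point provably satisfies d(q, x) ≥ best_d, the result is exact, matching a brute-force scan.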
Majid, Abdul; Ali, Safdar; Iqbal, Mubashar; Kausar, Nabeela
2014-03-01
This study proposes a novel prediction approach for human breast and colon cancers using different feature spaces. The proposed scheme consists of two stages: the preprocessor and the predictor. In the preprocessor stage, the mega-trend diffusion (MTD) technique is employed to increase the samples of the minority class, thereby balancing the dataset. In the predictor stage, the machine-learning approaches of K-nearest neighbor (KNN) and support vector machines (SVM) are used to develop hybrid MTD-SVM and MTD-KNN prediction models. The MTD-SVM model provided the best values of accuracy, G-mean and Matthews correlation coefficient, 96.71%, 96.70% and 71.98%, for the cancer/non-cancer, breast/non-breast cancer and colon/non-colon cancer datasets, respectively. We found that hybrid MTD-SVM is the best with respect to prediction performance and computational cost. The MTD-KNN model achieved moderately better prediction than hybrid MTD-NB (Naïve Bayes), but at the expense of higher computing cost. The MTD-KNN model is faster than MTD-RF (random forest), but its prediction is not better than MTD-RF's. To the best of our knowledge, the reported results are the best results, so far, for these datasets. The proposed scheme indicates that the developed models can be used as a tool for the prediction of cancer. This scheme may be useful for the study of any sequential information such as protein sequences or any nucleic acid sequence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Jyuo-Min Shyu
2010-11-01
A great deal of work has been done to develop techniques for odor analysis by electronic nose systems. These analyses mostly focus on identifying a particular odor by comparing it with a known odor dataset. However, in many situations, it would be more practical if each individual odorant could be determined directly. This paper proposes two methods for such odor component analysis for electronic nose systems. First, a K-nearest neighbor (KNN)-based local weighted nearest neighbor (LWNN) algorithm is proposed to determine the components of an odor. According to the component analysis, the odor training data is first categorized into several groups, each of which is represented by its centroid. The examined odor is then classified as the class of the nearest centroid. The distance between the examined odor and the centroid is calculated based on a weighting scheme, which captures the local structure of each predefined group. To further determine the concentration of each component, odor models are built by regressions. Then, a weighted and constrained least-squares (WCLS) method is proposed to estimate the component concentrations. Experiments were carried out to assess the effectiveness of the proposed methods. The LWNN algorithm is able to classify mixed odors with different mixing ratios, while the WCLS method can provide good estimates of component concentrations.
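The classification step above reduces, in its simplest form, to a nearest-centroid rule: represent each odor group by the mean of its training vectors and assign a sample to the class of the closest centroid. The sketch below uses plain Euclidean distance; the paper's LWNN additionally applies a local weighting scheme per group, which is omitted here.

```python
import math

def centroids(train):
    """Represent each odor group by the centroid of its training
    vectors. train is a list of (vector, label) pairs."""
    groups = {}
    for vec, label in train:
        groups.setdefault(label, []).append(vec)
    return {label: tuple(sum(col) / len(col) for col in zip(*vecs))
            for label, vecs in groups.items()}

def classify(train, sample):
    """Assign the sample to the class of the nearest centroid
    (unweighted Euclidean stand-in for the paper's LWNN distance)."""
    cents = centroids(train)
    return min(cents, key=lambda label: math.dist(cents[label], sample))
```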
An Improvement To The k-Nearest Neighbor Classifier For ECG Database
Jaafar, Haryati; Hidayah Ramli, Nur; Nasir, Aimi Salihah Abdul
2018-03-01
The k-nearest neighbor (kNN) is a non-parametric classifier that has been widely used for pattern classification. In practice, however, the performance of kNN often tends to fail due to a lack of information on how the samples are distributed among them. Moreover, kNN is no longer optimal when the training samples are limited. Another problem observed in kNN concerns the weighting issues in assigning the class label before classification. To address these limitations, a new classifier called Mahalanobis fuzzy k-nearest centroid neighbor (MFkNCN) is proposed in this study. Here, a Mahalanobis distance is applied to avoid imbalance in the sample distribution. Then, a surrounding rule is employed to obtain the nearest centroid neighbor based on the distribution of the training samples and their distance to the query point. Finally, the fuzzy membership function is employed to assign the query point to the class label most frequently represented by the nearest centroid neighbors. Experimental studies on electrocardiogram (ECG) signals are carried out. The classification performance is evaluated in two experimental steps, i.e., different values of k and different sizes of feature dimensions. Subsequently, a comparative study of the kNN, kNCN, FkNN and MFkNCN classifiers is conducted. The results show that the performance of MFkNCN consistently exceeds that of kNN, kNCN and FkNN, with a best classification rate of 96.5%.
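The distance that distinguishes the classifier above from plain kNN is the Mahalanobis distance, which rescales each direction by the data's spread. The sketch below assumes a diagonal covariance matrix so no matrix inversion is needed; the full classifier inverts the sample covariance matrix, and the fuzzy-membership and centroid-neighbor steps are not shown.

```python
import math

def mahalanobis_diag(x, y, variances):
    """Mahalanobis distance under a diagonal-covariance assumption:
    each squared coordinate difference is divided by that coordinate's
    variance before summing. With unit variances this reduces to the
    Euclidean distance."""
    return math.sqrt(sum((a - b) ** 2 / v
                         for a, b, v in zip(x, y, variances)))
```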
Using K-Nearest Neighbor Classification to Diagnose Abnormal Lung Sounds
Directory of Open Access Journals (Sweden)
Chin-Hsing Chen
2015-06-01
A reported 30% of people worldwide have abnormal lung sounds, including crackles, rhonchi, and wheezes. To date, the traditional stethoscope remains the most popular tool used by physicians to diagnose such abnormal lung sounds; however, many problems arise with the use of a stethoscope, including the effects of environmental noise, the inability to record and store lung sounds for follow-up or tracking, and the physician's subjective diagnostic experience. This study has developed a digital stethoscope to help physicians overcome these problems when diagnosing abnormal lung sounds. In this digital system, mel-frequency cepstral coefficients (MFCCs) were used to extract the features of lung sounds, the K-means algorithm was then used for feature clustering to reduce the amount of data for computation, and finally the K-nearest neighbor method was used to classify the lung sounds. The proposed system can also be used for home care: if the percentage of abnormal lung sound frames exceeds 30% of the whole test signal, the system automatically warns the user to visit a physician for diagnosis. We also used bend sensors together with an amplification circuit, Bluetooth, and a microcontroller to implement a respiration detector. The respiratory signal extracted by the bend sensors can be transmitted to the computer via Bluetooth to calculate the respiratory cycle, for real-time assessment. If an abnormal status is detected, the device will warn the user automatically. Experimental results indicated that the error in respiratory cycles between measured and actual values was only 6.8%, illustrating the potential of our detector for home care applications.
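The home-care decision rule above is a simple threshold on the fraction of frames classified as abnormal. A sketch (the function name and the `"normal"` label convention are my assumptions; the MFCC extraction and KNN frame classifier are not reproduced):

```python
def should_warn(frame_labels, threshold=0.30):
    """Home-care rule from the abstract: warn the user when more than
    30% of the classified frames are abnormal lung-sound frames."""
    abnormal = sum(1 for lab in frame_labels if lab != "normal")
    return abnormal / len(frame_labels) > threshold
```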
Xia, Wenjun; Mita, Yoshio; Shibata, Tadashi
2016-05-01
Aiming at efficient data condensation and improved accuracy, this paper presents a hardware-friendly template reduction (TR) method for nearest neighbor (NN) classifiers by introducing the concept of critical boundary vectors. A hardware system is also implemented to demonstrate the feasibility of using a field-programmable gate array (FPGA) to accelerate the proposed method. Initially, k-means centers are used as substitutes for the entire template set. Then, to enhance the classification performance, critical boundary vectors are selected by a novel learning algorithm, which is completed within a single iteration. Moreover, to remove noisy boundary vectors that can mislead the classification in a generalized manner, a global categorization scheme has been explored and applied to the algorithm. The global categorization automatically categorizes each classification problem and rapidly selects the boundary vectors according to the nature of the problem. Finally, only the critical boundary vectors and k-means centers are used as the new template set for classification. Experimental results for 24 data sets show that the proposed algorithm can effectively reduce the number of template vectors for classification with a high learning speed. At the same time, it improves the accuracy by an average of 2.17% compared with traditional NN classifiers and also shows greater accuracy than seven other TR methods. We have shown the feasibility of using a proof-of-concept FPGA system of 256 64-D vectors to accelerate the proposed method in hardware. At a 50-MHz clock frequency, the proposed system achieves a 3.86 times higher learning speed than a 3.4-GHz PC, while consuming only 1% of the power used by the PC.
Directory of Open Access Journals (Sweden)
Gede Aditra Pradnyana
2018-01-01
A problem that arises when forming or dividing student classes is the difference in ability among the students in each class, which can render the learning process ineffective. Grouping students of similar ability is therefore very important for improving the quality of teaching and learning: with appropriate grouping, students can help each other during the learning process, and dividing classes according to ability also makes it easier for instructors to choose suitable teaching methods and strategies, which in turn improves the effectiveness of instruction. This study designs a new method for assigning students to course classes by combining the K-means and K-nearest neighbors (KNN) methods. K-means is used to partition students into classes based on the assessment components of the prerequisite courses; the clustering features are the assignment score, midterm exam score, final exam score, and cumulative grade point average (GPA). KNN is used to predict whether a student will pass a course based on historical data, and this prediction is used as an additional feature in the K-means class formation. The development approach used in this study is the Software Development Life Cycle (SDLC) with a waterfall model. Testing showed that the number of clusters (classes) and the amount of data used affect the quality of the clusters formed by the K-means and KNN methods. The highest Silhouette index, 0.534, was obtained when using 100 data points with 10 clusters, which corresponds to a clustering of medium structure.
Chikh, Mohamed Amine; Saidi, Meryem; Settouti, Nesma
2012-10-01
The use of expert systems and artificial intelligence techniques in disease diagnosis has been increasing gradually. The Artificial Immune Recognition System (AIRS) is one of the methods used in medical classification problems, and AIRS2 is a more efficient version of the AIRS algorithm. In this paper, we use a modified AIRS2, called MAIRS2, in which the K-nearest neighbors algorithm is replaced with the fuzzy K-nearest neighbors algorithm to improve the diagnostic accuracy for diabetes. The diabetes disease dataset used in our work is retrieved from the UCI machine learning repository. The performances of AIRS2 and MAIRS2 are evaluated with regard to classification accuracy, sensitivity, and specificity. The highest classification accuracies obtained when applying AIRS2 and MAIRS2 using 10-fold cross-validation were 82.69% and 89.10%, respectively.
Nearest-neighbor Kitaev exchange blocked by charge order in electron-doped α-RuCl3
Koitzsch, A.; Habenicht, C.; Müller, E.; Knupfer, M.; Büchner, B.; Kretschmer, S.; Richter, M.; van den Brink, J.; Börrnert, F.; Nowak, D.; Isaeva, A.; Doert, Th.
2017-10-01
A quantum spin liquid might be realized in α-RuCl3, a honeycomb-lattice magnetic material with substantial spin-orbit coupling. Moreover, α-RuCl3 is a Mott insulator, which implies the possibility that novel exotic phases occur upon doping. Here, we study the electronic structure of this material when intercalated with potassium by photoemission spectroscopy, electron energy loss spectroscopy, and density functional theory calculations. We obtain a stable stoichiometry at K0.5RuCl3. This gives rise to a peculiar charge disproportionation into formally Ru2+ (4d6) and Ru3+ (4d5). Every Ru 4d5 site with one hole in the t2g shell is surrounded by nearest neighbors of 4d6 character, where the t2g level is full and magnetically inert. Thus, each type of Ru site forms a triangular lattice, and nearest-neighbor interactions of the original honeycomb are blocked.
Nearest neighbor imputation using spatial-temporal correlations in wireless sensor networks.
Li, YuanYuan; Parker, Lynne E
2014-01-01
Missing data is common in Wireless Sensor Networks (WSNs), especially with multi-hop communications. There are many reasons for this phenomenon, such as unstable wireless communications, synchronization issues, and unreliable sensors. Unfortunately, missing data creates a number of problems for WSNs. First, since most sensor nodes in the network are battery-powered, it is too expensive to have the nodes retransmit missing data across the network. Data re-transmission may also cause time delays when detecting abnormal changes in an environment. Furthermore, localized reasoning techniques on sensor nodes (such as machine learning algorithms to classify states of the environment) are generally not robust enough to handle missing data. Since sensor data collected by a WSN is generally correlated in time and space, we illustrate how replacing missing sensor values with spatially and temporally correlated sensor values can significantly improve the network's performance. However, our studies show that it is important to determine which nodes are spatially and temporally correlated with each other. Simple techniques based on Euclidean distance are not sufficient for complex environmental deployments. Thus, we have developed a novel Nearest Neighbor (NN) imputation method that estimates missing data in WSNs by learning spatial and temporal correlations between sensor nodes. To improve the search time, we utilize a kd-tree data structure, which is a non-parametric, data-driven binary search tree. Instead of using traditional mean and variance of each dimension for kd-tree construction, and Euclidean distance for kd-tree search, we use weighted variances and weighted Euclidean distances based on measured percentages of missing data. We have evaluated this approach through experiments on sensor data from a volcano dataset collected by a network of Crossbow motes, as well as experiments using sensor data from a highway traffic monitoring application. Our experimental
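The weighted-distance idea above can be illustrated with a much-simplified sketch: a node's missing reading is borrowed from the node whose recent history is closest under a distance divided by a per-node reliability weight (one minus its measured missing-data fraction), so unreliable donors look "farther away". This is only a stand-in for the paper's learned spatio-temporal correlations and weighted kd-tree search; names and data are hypothetical.

```python
import numpy as np

def impute_missing(history, current, miss_pct):
    """Fill NaN readings in `current` from the nearest reliable node.

    history : (n_nodes, t) recent readings per node
    current : (n_nodes,) latest readings, NaN where missing
    miss_pct: per-node fraction of historically missing data
    """
    history = np.asarray(history, dtype=float)
    current = np.array(current, dtype=float)
    w = 1.0 - np.asarray(miss_pct, dtype=float)
    for i in np.where(np.isnan(current))[0]:
        d = np.linalg.norm(history - history[i], axis=1)
        d = d / np.maximum(w, 1e-9)          # unreliable donors penalized
        d[i] = np.inf                        # never borrow from yourself
        d[np.isnan(current)] = np.inf        # nor from other missing nodes
        current[i] = current[int(np.argmin(d))]
    return current
```

A real deployment would index the histories in a kd-tree rather than scanning all nodes, which is the search-time optimization the abstract describes.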
Directory of Open Access Journals (Sweden)
Jianbin Xiong
2015-01-01
It is difficult to distinguish the dimensionless indexes of normal petrochemical rotating machinery from those of equipment with complex faults, and when the conflict of evidence is too great, the diagnosis becomes uncertain. This paper presents a diagnosis method for rotating machinery faults based on dimensionless indexes combined with the K-nearest neighbor (KNN) algorithm. The method uses the KNN algorithm and an evidence-fusion formula to process fuzzy, incomplete, and accurate data: the signals from the petrochemical rotating machinery sensors are mapped to reliability measures using dimensionless indexes and the KNN algorithm, the input information is then integrated by an evidence-synthesis formula to obtain the final data, and the fault type is decided on the basis of these data. The experimental results show that the proposed method can integrate data to provide a more reliable and reasonable result, thereby reducing the decision risk.
Directory of Open Access Journals (Sweden)
S. P. Arunachalam
2018-01-01
Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention, but such signals are often recorded as short time series that challenge existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve the previously developed multiscale entropy (MSE) technique by incorporating a nearest-neighbor moving-average kernel, which can be used for the analysis of nonlinear and non-stationary short physiological time series. The approach was tested for robustness to noise using simulated sinusoidal and ECG waveforms. The feasibility of MSE in discriminating between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on single-lead ECG. In addition, the MSE algorithm was applied to identify the pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique estimated the complexity of the signal more robustly than SE under various noises, discriminated NSR from AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can thus provide efficient complexity analysis for a variety of nonlinear and non-stationary short biomedical time series.
Combining Fourier and lagged k-nearest neighbor imputation for biomedical time series data.
Rahman, Shah Atiqur; Huang, Yuxiao; Claassen, Jan; Heintzman, Nathaniel; Kleinberg, Samantha
2015-12-01
Most clinical and biomedical data contain missing values. A patient's record may be split across multiple institutions, devices may fail, and sensors may not be worn at all times. While these missing values are often ignored, this can lead to bias and error when the data are mined. Further, the data are not simply missing at random. Instead, the measurement of a variable such as blood glucose may depend on its prior values as well as those of other variables. These dependencies exist across time as well, but current methods have yet to incorporate these temporal relationships along with multiple types of missingness. To address this, we propose an imputation method (FLk-NN) that incorporates time-lagged correlations both within and across variables by combining two imputation methods, based on an extension to k-NN and on the Fourier transform. This enables imputation of missing values even when all data at a time point are missing and when there are different types of missingness both within and across variables. In comparison to other approaches on three biological datasets (simulated and actual Type 1 diabetes datasets, and multi-modality neurological ICU monitoring), the proposed method has the highest imputation accuracy. This held for up to half of the data missing and when consecutive missing values formed a significant fraction of the overall time series length.
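The lagged-k-NN half of FLk-NN can be sketched as follows (the Fourier component is omitted): a missing value is predicted from the k complete historical windows most similar to the window immediately preceding it, averaging the values that followed those windows. This is a minimal single-variable sketch with hypothetical names, not the authors' multi-variable implementation.

```python
import numpy as np

def lagged_knn_impute(series, t, lag=3, k=2):
    """Predict the missing series[t] from the k complete past windows
    most similar to the lag-length window just before t."""
    s = np.asarray(series, dtype=float)
    query = s[t - lag:t]
    candidates = []
    for i in range(lag, t):                  # windows strictly in the past
        window, follower = s[i - lag:i], s[i]
        if np.isnan(window).any() or np.isnan(follower):
            continue                         # skip incomplete patterns
        candidates.append((np.linalg.norm(window - query), follower))
    candidates.sort(key=lambda pair: pair[0])
    return float(np.mean([v for _, v in candidates[:k]]))
```

The cross-variable case extends the query window with the lagged values of the other variables, which is how the paper captures dependencies such as blood glucose on prior measurements.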
International Nuclear Information System (INIS)
Kudrawiec, R.; Poloczek, P.; Misiewicz, J.; Korpijaervi, V.-M.; Laukkanen, P.; Pakarinen, J.; Dumitrescu, M.; Guina, M.; Pessa, M.
2009-01-01
The energy fine structure, corresponding to different nitrogen nearest-neighbor environments, was observed in contactless electroreflectance (CER) spectra of as-grown GaInNAs quantum wells (QWs) obtained at various As/III pressure ratios. In the spectral range of the fundamental transition, two CER resonances were detected for samples grown at low As pressures, whereas only one CER resonance was observed for samples obtained at higher As pressures. This resonance corresponds to the most favorable nitrogen nearest-neighbor environment in terms of the total crystal energy. This means that the nitrogen nearest-neighbor environment in GaInNAs QWs can be controlled during the molecular beam epitaxy process by the As/III pressure ratio.
Manganaro, Alberto; Pizzo, Fabiola; Lombardo, Anna; Pogliaghi, Alberto; Benfenati, Emilio
2016-02-01
The ability of a substance to resist degradation and persist in the environment needs to be readily identified in order to protect the environment and human health. Many regulations require the assessment of persistence for substances commonly manufactured and marketed. Besides laboratory-based testing methods, in silico tools may be used to obtain a computational prediction of persistence. We present a new program to develop k-nearest neighbor (k-NN) models. The k-NN algorithm is a similarity-based approach that predicts the property of a substance from the experimental data of its most similar compounds. We employed this software to identify persistence in the sediment compartment. Data on half-life (HL) in sediment were obtained from different sources and, after careful data pruning, the final dataset, containing 297 organic compounds, was divided into four experimental classes. We developed several models with satisfactory performance, with both training and test set accuracy ranging between 0.90 and 0.96. We finally selected one model, which will be made available in the near future in the freely available software platform VEGA. This model offers a valuable in silico tool for fast and inexpensive screening.
Directory of Open Access Journals (Sweden)
CARLOS ALBERTO SILVA
Accurate forest inventory is of great economic importance for optimizing the entire supply chain management in pulp and paper companies. The aim of this study was to estimate stand dominant and mean heights (HD and HM) and tree density (TD) of Pinus taeda plantations located in South Brazil using in-situ measurements, airborne Light Detection and Ranging (LiDAR) data, and nonparametric k-nearest neighbor (k-NN) imputation. Forest inventory attributes and LiDAR-derived metrics were calculated at 53 regular sample plots, and imputation models were used to retrieve the forest attributes at plot and landscape levels. The best LiDAR-derived metrics for predicting HD, HM and TD were H99TH, HSD, SKE and HMIN. The imputation model using the selected metrics was more effective for retrieving height than tree density. The model coefficients of determination (adj. R2) and root mean squared differences (RMSD) for HD, HM and TD were 0.90, 0.94 and 0.38, and 6.99%, 5.70% and 12.92%, respectively. Our results show that LiDAR and k-NN imputation can be used to predict stand heights with high accuracy in Pinus taeda. However, further studies are needed to improve the prediction accuracy for TD and to evaluate and compare the cost of acquiring and processing LiDAR data against conventional inventory procedures.
Silva, Carlos Alberto; Klauberg, Carine; Hudak, Andrew T; Vierling, Lee A; Liesenberg, Veraldo; Bernett, Luiz G; Scheraiber, Clewerson F; Schoeninger, Emerson R
2018-01-01
Accurate forest inventory is of great economic importance for optimizing the entire supply chain management in pulp and paper companies. The aim of this study was to estimate stand dominant and mean heights (HD and HM) and tree density (TD) of Pinus taeda plantations located in South Brazil using in-situ measurements, airborne Light Detection and Ranging (LiDAR) data, and nonparametric k-nearest neighbor (k-NN) imputation. Forest inventory attributes and LiDAR-derived metrics were calculated at 53 regular sample plots, and imputation models were used to retrieve the forest attributes at plot and landscape levels. The best LiDAR-derived metrics for predicting HD, HM and TD were H99TH, HSD, SKE and HMIN. The imputation model using the selected metrics was more effective for retrieving height than tree density. The model coefficients of determination (adj. R2) and root mean squared differences (RMSD) for HD, HM and TD were 0.90, 0.94 and 0.38, and 6.99%, 5.70% and 12.92%, respectively. Our results show that LiDAR and k-NN imputation can be used to predict stand heights with high accuracy in Pinus taeda. However, further studies are needed to improve the prediction accuracy for TD and to evaluate and compare the cost of acquiring and processing LiDAR data against conventional inventory procedures.
Renormalization-group studies of antiferromagnetic chains. I. Nearest-neighbor interactions
International Nuclear Information System (INIS)
Rabin, J.M.
1980-01-01
The real-space renormalization-group method introduced by workers at the Stanford Linear Accelerator Center (SLAC) is used to study one-dimensional antiferromagnetic chains at zero temperature. Calculations using three-site blocks (for the Heisenberg-Ising model) and two-site blocks (for the isotropic Heisenberg model) are compared with exact results. In connection with the two-site calculation a duality transformation is introduced under which the isotropic Heisenberg model is self-dual. Such duality transformations can be defined for models other than those considered here, and may be useful in various block-spin calculations.
Zhang, Xiaoli; Zhang, Guoren; Jia, Ting; Zeng, Zhi; Lin, H. Q.
2016-05-01
We study the abnormal ferromagnetism in α-K2AgF4, which is structurally very similar to the high-TC parent material La2CuO4. We find that electron correlation is very important in determining the insulating property of α-K2AgF4. The Ag(II) 4d9 ion in the octahedral crystal field has the t2g^6 eg^3 electron occupation, with the eg x2-y2 orbital fully occupied and the 3z2-r2 orbital partially occupied. The two eg orbitals are very extended, indicating that both are active in superexchange. Using the Hubbard model combined with the Nth-order muffin-tin orbital (NMTO) downfolding technique, we conclude that the exchange interaction between the eg 3z2-r2 and x2-y2 orbitals of first-nearest-neighbor Ag ions leads to the anomalous ferromagnetism in α-K2AgF4.
Directory of Open Access Journals (Sweden)
Xiaoli Zhang
2016-05-01
We study the abnormal ferromagnetism in α-K2AgF4, which is structurally very similar to the high-TC parent material La2CuO4. We find that electron correlation is very important in determining the insulating property of α-K2AgF4. The Ag(II) 4d9 ion in the octahedral crystal field has the t2g^6 eg^3 electron occupation, with the eg x2-y2 orbital fully occupied and the 3z2-r2 orbital partially occupied. The two eg orbitals are very extended, indicating that both are active in superexchange. Using the Hubbard model combined with the Nth-order muffin-tin orbital (NMTO) downfolding technique, we conclude that the exchange interaction between the eg 3z2-r2 and x2-y2 orbitals of first-nearest-neighbor Ag ions leads to the anomalous ferromagnetism in α-K2AgF4.
Directory of Open Access Journals (Sweden)
Jaime Vitola
2017-02-01
Civil and military structures are susceptible and vulnerable to damage under environmental and operational conditions. Therefore, the implementation of technology that provides robust damage identification (using signals acquired directly from the structure) is required to reduce operational and maintenance costs. In this sense, the use of sensors permanently attached to the structures has demonstrated great versatility and benefit, since the inspection system can be automated. This automation is carried out through signal processing tasks aimed at pattern recognition analysis. This work presents a detailed description of a structural health monitoring (SHM) system based on the use of a piezoelectric (PZT) active system. The SHM system includes: (i) the use of a piezoelectric sensor network to excite the structure and collect the measured dynamic response, in several actuation phases; (ii) data organization; (iii) advanced signal processing techniques to define the feature vectors; and finally (iv) the nearest neighbor algorithm as a machine learning approach to classify different kinds of damage. A description of the experimental setup, the experimental validation, and a discussion of the results from two different structures are included and analyzed.
Wang, Lusheng; Yang, Yong; Lin, Guohui
Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects might be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes and the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pair-wise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are purely described by their distances with each other. Analysis and experiments show that our approaches only need to compute O(logn) objects in order to find the closest object, where n is the total number of objects in the database.
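The pruning idea in this abstract rests on the triangle inequality: if distances from each database object to a few pivots are precomputed, then |d(q,p) − d(o,p)| is a lower bound on d(q,o), so any object whose bound already exceeds the current best distance never needs an expensive on-line distance computation. The sketch below uses hypothetical names and, in the usage test, plain numbers under the absolute-difference metric; it illustrates the pruning principle rather than the authors' specific randomized indexing schemes.

```python
import math

def pivot_search(db, query_dist, pivots, pivot_dist):
    """Return the object in `db` closest to the query, pruning with the
    triangle inequality so the expensive on-line distance `query_dist`
    is called as rarely as possible.

    pivot_dist[o][p] holds the precomputed distance from object o to
    pivot p; |d(q,p) - d(o,p)| <= d(q,o) gives a free lower bound."""
    dq = {p: query_dist(p) for p in pivots}   # probe each pivot once
    best, best_d = None, math.inf
    for o in db:
        lower = max(abs(dq[p] - pivot_dist[o][p]) for p in pivots)
        if lower >= best_d:
            continue                          # cannot beat current best
        d = query_dist(o)
        if d < best_d:
            best, best_d = o, d
    return best, best_d
```

For whole-chromosome edit distances or 3D structure alignments, each avoided `query_dist` call saves a costly computation, which is exactly the quantity the paper minimizes (to O(log n) on average with its randomized indexes).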
International Nuclear Information System (INIS)
Zhang Yanxia; Ma He; Peng Nanbo; Zhao Yongheng; Wu Xuebing
2013-01-01
We apply one of the lazy learning methods, the k-nearest neighbor (kNN) algorithm, to estimate the photometric redshifts of quasars based on various data sets from the Sloan Digital Sky Survey (SDSS), the UKIRT Infrared Deep Sky Survey (UKIDSS), and the Wide-field Infrared Survey Explorer (WISE): the SDSS sample, the SDSS-UKIDSS sample, the SDSS-WISE sample, and the SDSS-UKIDSS-WISE sample. The influence of the k value and of different input patterns on the performance of kNN is discussed; kNN performs best with a k value and input pattern tuned to each particular data set. The best result is obtained for the SDSS-UKIDSS-WISE sample. The experimental results generally show that the more bands contribute information, the better the performance of photometric redshift estimation with kNN. The results also demonstrate that kNN using multiband data can effectively avoid the catastrophic failures of photometric redshift estimation met by many machine learning methods. Compared with various other methods of estimating the photometric redshifts of quasars, kNN based on a KD-tree shows superiority, exhibiting the best accuracy.
Energy Technology Data Exchange (ETDEWEB)
Zhang Yanxia; Ma He; Peng Nanbo; Zhao Yongheng [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, 100012 Beijing (China); Wu Xuebing, E-mail: zyx@bao.ac.cn [Department of Astronomy, Peking University, 100871 Beijing (China)
2013-08-01
We apply one of the lazy learning methods, the k-nearest neighbor (kNN) algorithm, to estimate the photometric redshifts of quasars based on various data sets from the Sloan Digital Sky Survey (SDSS), the UKIRT Infrared Deep Sky Survey (UKIDSS), and the Wide-field Infrared Survey Explorer (WISE): the SDSS sample, the SDSS-UKIDSS sample, the SDSS-WISE sample, and the SDSS-UKIDSS-WISE sample. The influence of the k value and of different input patterns on the performance of kNN is discussed; kNN performs best with a k value and input pattern tuned to each particular data set. The best result is obtained for the SDSS-UKIDSS-WISE sample. The experimental results generally show that the more bands contribute information, the better the performance of photometric redshift estimation with kNN. The results also demonstrate that kNN using multiband data can effectively avoid the catastrophic failures of photometric redshift estimation met by many machine learning methods. Compared with various other methods of estimating the photometric redshifts of quasars, kNN based on a KD-tree shows superiority, exhibiting the best accuracy.
Directory of Open Access Journals (Sweden)
Jiandong Zhao
2018-01-01
Remote transportation microwave sensor (RTMS) technology is being promoted for China’s highways. The spacing between RTMSs is about 2 to 5 km, which leads to missing-data and data-sparseness problems; these two problems seriously restrict the accuracy of travel time prediction. To address the missing-data problem, a tensor completion method based on the multimode characteristics of traffic is proposed to recover the lost RTMS speed and volume data. To address the data-sparseness problem, virtual sensor nodes are set up between real RTMS nodes, and two-dimensional linear interpolation and a piecewise method are applied to estimate the average travel time between two nodes. Next, in comparison with the traditional K-nearest neighbor method, an optimized KNN method is proposed for travel time prediction, with optimization in three aspects. Firstly, the three original state vectors, that is, speed, volume, and time of day, are subdivided into seven periods. Secondly, the traffic congestion level is added as a new state vector. Thirdly, the cross-validation method is used to calibrate the K value to improve the adaptability of the KNN algorithm. All the algorithms are validated on data collected from the Jinggangao highway. The results show that the proposed method can improve data quality and the prediction precision of travel time.
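The third optimization above, calibrating K by cross-validation, can be sketched generically: for each candidate K, hold out folds in turn, predict with kNN regression on the remainder, and keep the K with the lowest mean absolute error. The sketch uses hypothetical names and synthetic data rather than the paper's traffic state vectors.

```python
import numpy as np

def knn_predict(X_train, y_train, query, k):
    """Plain kNN regression: average the targets of the k nearest points."""
    nearest = np.argsort(np.linalg.norm(X_train - query, axis=1))[:k]
    return y_train[nearest].mean()

def calibrate_k(X, y, candidates=(1, 2, 3, 5), folds=5):
    """Pick K by cross-validation: lowest mean absolute held-out error."""
    order = np.arange(len(X))
    errors = {}
    for k in candidates:
        errs = []
        for f in range(folds):
            test = order[f::folds]                 # every folds-th point
            train = np.setdiff1d(order, test)
            for i in test:
                pred = knn_predict(X[train], y[train], X[i], k)
                errs.append(abs(pred - y[i]))
        errors[k] = np.mean(errs)
    return min(errors, key=errors.get)
```

In the paper's setting X would hold the subdivided speed/volume/time-of-day states plus the congestion level, and y the observed travel times.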
Yang, Dongzheng; Hu, Xixi; Zhang, Dong H.; Xie, Daiqian
2018-02-01
Solving the time-independent close coupling equations of a diatom-diatom inelastic collision system by using the rigorous close-coupling approach is numerically difficult because of its expensive matrix manipulation. The coupled-states approximation decouples the centrifugal matrix by neglecting the important Coriolis couplings completely. In this work, a new approximation method based on the coupled-states approximation is presented and applied to time-independent quantum dynamic calculations. This approach only considers the most important Coriolis coupling with the nearest neighbors and ignores weaker Coriolis couplings with farther K channels. As a result, it reduces the computational costs without a significant loss of accuracy. Numerical tests for para-H2+ortho-H2 and para-H2+HD inelastic collision were carried out and the results showed that the improved method dramatically reduces the errors due to the neglect of the Coriolis couplings in the coupled-states approximation. This strategy should be useful in quantum dynamics of other systems.
International Nuclear Information System (INIS)
Fatollahi, Amir H.; Khorrami, Mohammad; Shariati, Ahmad; Aghamohammadi, Amir
2011-01-01
A complete classification is given for one-dimensional chains with nearest-neighbor interactions having two states in each site, for which a matrix product ground state exists. The Hamiltonians and their corresponding matrix product ground states are explicitly obtained.
Self-consistent-field calculations of proteinlike incorporations in polyelectrolyte complex micelles
Lindhoud, S.; Cohen Stuart, M.A.; Norde, W.; Leermakers, F.A.M.
2009-01-01
Self-consistent field theory is applied to model the structure and stability of polyelectrolyte complex micelles with incorporated protein (molten globule) molecules in the core. The electrostatic interactions that drive the micelle formation are mimicked by nearest-neighbor interactions using
International Nuclear Information System (INIS)
Hu, Chao; Jain, Gaurav; Zhang, Puqiang; Schmidt, Craig; Gomadam, Parthasarathy; Gorka, Tom
2014-01-01
Highlights: • We develop a data-driven method for battery capacity estimation. • Five charge-related features that are indicative of the capacity are defined. • The kNN regression model captures the dependency of the capacity on the features. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance by a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure that Li-ion batteries in these devices operate reliably, it is important to be able to assess the battery health condition by estimating the battery capacity over the lifetime. This paper presents a data-driven method for estimating the capacity of a Li-ion battery based on the charge voltage and current curves. The contributions of this paper are three-fold: (i) the definition of five characteristic features of the charge curves that are indicative of the capacity, (ii) the development of a non-linear kernel regression model, based on k-nearest neighbor (kNN) regression, that captures the complex dependency of the capacity on the five features, and (iii) the adaptation of particle swarm optimization (PSO) to find the optimal combination of feature weights for creating a kNN regression model that minimizes the cross-validation (CV) error in the capacity estimation. Verification with 10 years’ continuous cycling data suggests that the proposed method is able to accurately estimate the capacity of a Li-ion battery throughout its whole lifetime.
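The feature-weighted kNN regression at the heart of the method can be sketched in a few lines. Here fixed weights stand in for the PSO-optimized ones, and the features and capacities are synthetic placeholders for the paper's five charge-curve features.

```python
import numpy as np

def wknn_capacity(features, capacities, query, weights, k=3):
    """Estimate capacity as the mean capacity of the k training charge
    curves closest to `query` under a feature-weighted Euclidean distance."""
    features = np.asarray(features, dtype=float)
    weights = np.asarray(weights, dtype=float)
    d = np.sqrt((weights * (features - query) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return float(np.asarray(capacities)[nearest].mean())
```

The role of PSO in the paper is precisely to choose `weights` so that the cross-validation error of this estimator is minimized; any global optimizer over the weight vector could stand in for it in a sketch like this.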
Li, Xiaohui; Yang, Sibo; Fan, Rongwei; Yu, Xin; Chen, Deying
2018-06-01
In this paper, discrimination of soft tissues using laser-induced breakdown spectroscopy (LIBS) in combination with multivariate statistical methods is presented. Fresh pork fat, skin, ham, loin and tenderloin muscle tissues are manually cut into slices and ablated using a 1064 nm pulsed Nd:YAG laser. Discrimination analyses between fat, skin and muscle tissues, and further between the highly similar ham, loin and tenderloin muscle tissues, are performed based on the LIBS spectra in combination with multivariate statistical methods, including principal component analysis (PCA), k-nearest neighbors (kNN) classification, and support vector machine (SVM) classification. Performances of the discrimination models, including accuracy, sensitivity and specificity, are evaluated using 10-fold cross-validation. The classification models are optimized to achieve the best discrimination performance. The fat, skin and muscle tissues can be clearly discriminated using both kNN and SVM classifiers, with accuracy over 99.83%, sensitivity over 0.995 and specificity over 0.998. The highly similar ham, loin and tenderloin muscle tissues can also be discriminated with acceptable performance. The best performance is achieved with the SVM classifier using a Gaussian kernel function, with accuracy of 76.84%, sensitivity over 0.742 and specificity over 0.869. The results show that the LIBS technique combined with multivariate statistical methods could be a powerful tool for the online discrimination of soft tissues, even tissues of high similarity, such as muscles from different parts of the animal body. This technique could be used to discriminate tissues exhibiting minor clinical changes, and may thus advance the diagnosis of early lesions and abnormalities.
DEFF Research Database (Denmark)
Lefmann, K.; Rischel, C.
1996-01-01
We present a numerical diagonalization study of two one-dimensional S=1/2 antiferromagnetic Heisenberg chains, having nearest-neighbor and Haldane-Shastry (1/r(2)) interactions, respectively. We have obtained the T=0 dynamical correlation function, S-alpha alpha(q,omega), for chains of length N=8-28. We have studied S-zz(q,omega) for the Heisenberg chain in zero field, and from finite-size scaling we have obtained a limiting behavior that for large omega deviates from the conjecture proposed earlier by Muller et al. For both chains we describe the behavior of S-zz(q,omega) and S
Surmach, M. A.; Chen, B. J.; Deng, Z.; Jin, C. Q.; Glasbrenner, J. K.; Mazin, I. I.; Ivanov, A.; Inosov, D. S.
2018-03-01
Dilute magnetic semiconductors (DMS) are nonmagnetic semiconductors doped with magnetic transition metals. The recently discovered DMS material (Ba1-xKx)(Zn1-yMny)2As2 offers a unique and versatile control of the Curie temperature TC by decoupling the spin (Mn2+, S = 5/2) and charge (K+) doping in different crystallographic layers. In an attempt to describe from first-principles calculations the role of hole doping in stabilizing ferromagnetic order, it was recently suggested that the antiferromagnetic exchange coupling J between nearest-neighbor Mn ions would experience a nearly twofold suppression upon doping 20% of holes by potassium substitution. At the same time, further-neighbor interactions become increasingly ferromagnetic upon doping, leading to a rapid increase of TC. Using inelastic neutron scattering, we have observed a localized magnetic excitation at about 13 meV associated with the destruction of the nearest-neighbor Mn-Mn singlet ground state. Hole doping results in a notable broadening of this peak, evidencing significant particle-hole damping, but with only a minor change in the peak position. We argue that this unexpected result can be explained by a combined effect of superexchange and double-exchange interactions.
Shariq, Ahmed
2012-01-01
A next nearest neighbor evaluation procedure of atom probe tomography data provides distributions of the distances between atoms. The width of these distributions for the metallic glasses studied so far is a few Angstroms, reflecting the spatial resolution of the analytical technique. However, fitting Gaussian distributions to the distribution of atomic distances yields average distances with statistical uncertainties of 2 to 3 hundredths of an Angstrom. Fe40Ni40B20 metallic glass ribbons are characterized this way in the as-quenched state and for a state heat treated at 350 °C for 1 h, revealing a change in the structure on the sub-nanometer scale. By applying the statistical tool of the χ2 test, a slight deviation from a random distribution of B atoms in the as-quenched sample is perceived, whereas a pronounced elemental inhomogeneity of boron is detected for the annealed state. In addition, the distance distribution of the first fifteen atomic neighbors is determined by using this algorithm for both the annealed and as-quenched states. The next-neighbor evaluation algorithm evinces a steric periodicity of the atoms when the next-neighbor distances are normalized by the first next-neighbor distance. A comparison of the nearest-neighbor atomic distribution for the as-quenched and annealed states shows accumulation of Ni and B. Moreover, it also reveals the tendency of Fe and B to move slightly away from each other, an incipient step toward Ni-rich boride formation. © 2011 Elsevier B.V.
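A minimal sketch of the kind of next-neighbor evaluation described above: compute the distribution of k-th nearest-neighbor distances from 3D atom positions with a brute-force search. The random synthetic coordinates stand in for real atom-probe reconstructions; this is an illustration, not the authors' algorithm.

```python
import math
import random

def kth_nn_distances(points, k):
    """Brute-force distance from each point to its k-th nearest neighbor (O(n^2) sketch)."""
    out = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        out.append(dists[k - 1])
    return out

# Synthetic stand-in for reconstructed atom positions in a unit-cube volume.
random.seed(0)
atoms = [(random.random(), random.random(), random.random()) for _ in range(200)]
d1 = kth_nn_distances(atoms, 1)      # first-nearest-neighbor distances
mean_d1 = sum(d1) / len(d1)          # average spacing, as fitted by Gaussians in the paper
```

A histogram of `d1` (and of higher k) is the distance distribution whose width reflects the spatial resolution discussed in the abstract.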
Datta, A.; Banerjee, S.; Finley, A.O.; Hamm, N.A.S.; Schaap, M.
2016-01-01
Particulate matter (PM) is a class of environmental pollutants known to be detrimental to human health. Regulatory efforts aimed at curbing PM levels in different countries often require high resolution space–time maps that can identify red-flag regions exceeding statutory concentration
Whitmore, Lee; Mavridis, Lazaros; Wallace, B A; Janes, Robert W
2018-01-01
Circular dichroism spectroscopy is a well-used, but simple method in structural biology for providing information on the secondary structure and folds of proteins. DichroMatch (DM@PCDDB) is an online tool that is newly available in the Protein Circular Dichroism Data Bank (PCDDB), which takes advantage of the wealth of spectral and metadata deposited therein, to enable identification of spectral nearest neighbors of a query protein based on four different methods of spectral matching. DM@PCDDB can potentially provide novel information about structural relationships between proteins and can be used in comparison studies of protein homologs and orthologs. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
Directory of Open Access Journals (Sweden)
Brett A McKinney
Full Text Available Relief-F is a nonparametric, nearest-neighbor machine learning method that has been successfully used to identify relevant variables that may interact in complex multivariate models to explain phenotypic variation. While several tools have been developed for assessing differential expression in sequence-based transcriptomics, the detection of statistical interactions between transcripts has received less attention in the area of RNA-seq analysis. We describe a new extension and assessment of Relief-F for feature selection in RNA-seq data. The ReliefSeq implementation adapts the number of nearest neighbors (k) for each gene to optimize the Relief-F test statistics (importance scores) for finding both main effects and interactions. We compare this gene-wise adaptive-k (gwak) Relief-F method with standard RNA-seq feature selection tools, such as DESeq and edgeR, and with the popular machine learning method Random Forests. We demonstrate performance on a panel of simulated data that have a range of distributional properties reflected in real mRNA-seq data including multiple transcripts with varying sizes of main effects and interaction effects. For simulated main effects, gwak-Relief-F feature selection performs comparably to standard tools DESeq and edgeR for ranking relevant transcripts. For gene-gene interactions, gwak-Relief-F outperforms all comparison methods at ranking relevant genes in all but the highest fold change/highest signal situations where it performs similarly. The gwak-Relief-F algorithm outperforms Random Forests for detecting relevant genes in all simulation experiments. In addition, Relief-F is comparable to the other methods based on computational time. We also apply ReliefSeq to an RNA-Seq study of smallpox vaccine to identify gene expression changes between vaccinia virus-stimulated and unstimulated samples. ReliefSeq is an attractive tool for inclusion in the suite of tools used for analysis of mRNA-Seq data; it has power to
Ayu Cyntya Dewi, Dyah; Shaufiah; Asror, Ibnu
2018-03-01
SMS (Short Message Service) is one of the communication services that remains a main choice, even as phones now come with various applications. Along with the development of various other communication media, some countries lowered SMS rates to keep the interest of mobile users. This resulted in increased spam SMS used by several parties, one of them for advertisement. Given the multi-lingual nature of documents in SMS messages, on the Web, and elsewhere, effective multilingual or cross-lingual processing techniques are becoming increasingly important. The steps performed in this research are as follows: the data/messages are first preprocessed, then represented as a graph model, and then scored using the GKNN method. From this research we obtain a maximum accuracy of 98.86%, with training data in Indonesian and testing data in Indonesian, with K = 10 and threshold 0.001.
Directory of Open Access Journals (Sweden)
A. Moosavian
2013-01-01
Full Text Available Vibration analysis is an accepted method in condition monitoring of machines, since it can provide useful and reliable information about the machine's working condition. This paper presents a new scheme for fault diagnosis of main journal-bearings of an internal combustion (IC) engine based on the power spectral density (PSD) technique and two classifiers, namely K-nearest neighbor (KNN) and artificial neural network (ANN). Vibration signals for three different conditions of the journal-bearing (normal, oil starvation, and extreme wear fault) were acquired from an IC engine. PSD was applied to process the vibration signals. Thirty features were extracted from the PSD values of the signals as a feature source for fault diagnosis. KNN and ANN were trained on the training data set and then used as diagnostic classifiers. The K value and hidden neuron count (N) were varied in the range of 1 to 20, with a step size of 1, for KNN and ANN to obtain the best classification results. The roles of the PSD, KNN and ANN techniques were studied. The results show that the performance of ANN is better than that of KNN. The experimental results demonstrate that the proposed diagnostic method can reliably separate different fault conditions in main journal-bearings of an IC engine.
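The KNN classification stage can be sketched in a few lines. The two-dimensional "PSD summary" features and condition labels below are hypothetical placeholders for the paper's thirty PSD features; the majority-vote rule is the standard KNN decision.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k):
    """Classify x by majority vote among its k nearest training samples (Euclidean distance)."""
    neighbors = sorted(zip(train_X, train_y), key=lambda t: math.dist(t[0], x))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# Hypothetical 2-feature PSD summaries for three bearing conditions.
train_X = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9), (0.85, 0.95), (0.5, 0.1), (0.55, 0.15)]
train_y = ["normal", "normal", "wear", "wear", "starved", "starved"]

print(knn_predict(train_X, train_y, (0.12, 0.22), k=3))  # → normal
```

In the paper's setup, k would be swept from 1 to 20 and the value with the best validation accuracy kept.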
He, Runnan; Wang, Kuanquan; Li, Qince; Yuan, Yongfeng; Zhao, Na; Liu, Yang; Zhang, Henggui
2017-12-01
Cardiovascular diseases are associated with high morbidity and mortality. However, it is still a challenge to diagnose them accurately and efficiently. The electrocardiogram (ECG), a bioelectrical signal of the heart, provides crucial information about the dynamical functions of the heart, playing an important role in cardiac diagnosis. As the QRS complex in the ECG is associated with ventricular depolarization, accurate QRS detection is vital for interpreting ECG features. In this paper, we propose a real-time, accurate, and effective algorithm for QRS detection. In the algorithm, a proposed preprocessor with a band-pass filter was first applied to remove baseline wander and power-line interference from the signal. After denoising, a method combining K-Nearest Neighbor (KNN) and Particle Swarm Optimization (PSO) was used for accurate QRS detection in ECGs with different morphologies. The proposed algorithm was tested and validated using 48 ECG records from the MIT-BIH arrhythmia database (MITDB), achieving a high averaged detection accuracy, sensitivity and positive predictivity of 99.43%, 99.69%, and 99.72%, respectively, indicating a notable improvement over extant algorithms reported in the literature.
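The preprocessing idea (band-pass filtering to strip slow baseline wander and smooth fast noise) can be illustrated with a crude stdlib-only sketch. This difference-of-moving-averages filter and its window sizes are our illustration, not the paper's actual band-pass design.

```python
import math

def moving_average(x, w):
    """Centered moving average with window w; windows are truncated at the edges."""
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) / len(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]

def crude_bandpass(sig, slow=51, fast=5):
    """Remove baseline wander (subtract a slow moving average), then smooth
    residual high-frequency noise (apply a fast moving average)."""
    baseline = moving_average(sig, slow)
    detrended = [s - b for s, b in zip(sig, baseline)]
    return moving_average(detrended, fast)

# Slow sinusoidal "baseline wander" plus sparse spikes standing in for QRS complexes.
sig = [math.sin(2 * math.pi * 0.002 * n) + (1 if n % 100 == 0 else 0) for n in range(500)]
clean = crude_bandpass(sig)
```

After this step the spikes remain while the slow drift is largely removed, which is what makes the subsequent KNN/PSO detection stage workable.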
Directory of Open Access Journals (Sweden)
Facundo Barbar
2018-05-01
Full Text Available The introduction of alien species can change food source composition, ultimately restructuring the demography and spatial distribution of native communities. In Argentine Patagonia, the exotic European hare has one of the highest abundances recorded worldwide and is now a widely consumed prey for many predators. We examine the potential relationship between the abundance of this relatively new prey and the abundance and breeding spacing of one of its main consumers, the Black-chested Buzzard-Eagle (Geranoaetus melanoleucus). First we analyzed the abundance of individuals of a raptor guild in relation to hare abundance through a correspondence analysis. We then estimated the Nearest Neighbor Distance (NND) of Black-chested Buzzard-Eagles in the two areas with high hare abundance. Finally, we performed a meta-regression between NND and the body masses of Accipitridae raptors, to evaluate whether the Black-chested Buzzard-Eagle NND deviates from that expected for its mass. We found that eagle abundance was more strongly associated with hare abundance than that of any other raptor species in the study area. Their NND deviated from the expected value, being significantly lower than expected for a raptor species of this size in the two areas with high hare abundance. Our results support the hypothesis that high local abundance of prey leads to a reduction in the breeding spacing of its main predator, which could potentially alter other interspecific interactions, and thus the entire community.
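The NND statistic used in the study is simple to compute; below is a minimal sketch with hypothetical territory coordinates (in arbitrary units) standing in for surveyed locations.

```python
import math

def nearest_neighbor_distances(sites):
    """Distance from each site to its closest neighboring site."""
    return [min(math.dist(p, q) for j, q in enumerate(sites) if j != i)
            for i, p in enumerate(sites)]

# Hypothetical breeding-site coordinates: two loose pairs of territories.
sites = [(0, 0), (1, 0), (5, 5), (6, 5)]
nnd = nearest_neighbor_distances(sites)
mean_nnd = sum(nnd) / len(nnd)  # → 1.0
```

A lower mean NND than expected for the species' body mass is the signature of reduced breeding spacing reported in the abstract.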
Directory of Open Access Journals (Sweden)
Leonhard Suchenwirth
2014-07-01
Full Text Available Among the machine learning tools being used in recent years for environmental applications such as forestry, self-organizing maps (SOM and the k-nearest neighbor (kNN algorithm have been used successfully. We applied both methods for the mapping of organic carbon (Corg in riparian forests due to their considerably high carbon storage capacity. Despite the importance of floodplains for carbon sequestration, a sufficient scientific foundation for creating large-scale maps showing the spatial Corg distribution is still missing. We estimated organic carbon in a test site in the Danube Floodplain based on RapidEye remote sensing data and additional geodata. Accordingly, carbon distribution maps of vegetation, soil, and total Corg stocks were derived. Results were compared and statistically evaluated with terrestrial survey data for outcomes with pure remote sensing data and for the combination with additional geodata using bias and the Root Mean Square Error (RMSE. Results show that SOM and kNN approaches enable us to reproduce spatial patterns of riparian forest Corg stocks. While vegetation Corg has very high RMSEs, outcomes for soil and total Corg stocks are less biased with a lower RMSE, especially when remote sensing and additional geodata are conjointly applied. SOMs show similar percentages of RMSE to kNN estimations.
Yang, Jiaojiao; Guo, Qian; Li, Wenjie; Wang, Suhong; Zou, Ling
2016-04-01
This paper aims to assist the individual clinical diagnosis of children with attention-deficit/hyperactivity disorder using an electroencephalogram signal detection method. Firstly, in our experiments, we obtained and studied the electroencephalogram signals from fourteen attention-deficit/hyperactivity disorder children and sixteen typically developing children during the classic interference control task of Simon-spatial Stroop, and we completed electroencephalogram data preprocessing including filtering, segmentation, removal of artifacts and so on. Secondly, we selected the subset of electroencephalogram electrodes using the principal component analysis (PCA) method, and we collected the common channels of the optimal electrodes whose occurrence rates were more than 90% in each kind of stimulation. We then extracted the latency (200~450 ms) mean amplitude features of the common electrodes. Finally, we used the k-nearest neighbor (KNN) classifier based on Euclidean distance and the support vector machine (SVM) classifier based on a radial basis kernel function to classify. In the experiment, for the same kind of interference control task, the attention-deficit/hyperactivity disorder children showed lower correct response rates and longer reaction times. The N2 emerged in the prefrontal cortex while the P2 presented in the inferior parietal area when all kinds of stimuli were presented. Meanwhile, the children with attention-deficit/hyperactivity disorder exhibited markedly reduced N2 and P2 amplitudes compared to typically developing children. KNN resulted in better classification accuracy than the SVM classifier, and the best classification rate was 89.29% in the StI task. The results showed that the electroencephalogram signals differed in the brain regions of the prefrontal cortex and inferior parietal cortex between attention-deficit/hyperactivity disorder and typically developing children during the interference control task, which provided a scientific basis for the clinical diagnosis of attention
Borghi, Giacomo; Tabacchini, Valerio; Seifert, Stefan; Schaart, Dennis R.
2015-02-01
Monolithic scintillator detectors can achieve excellent spatial resolution and coincidence resolving time. However, their practical use for positron emission tomography (PET) and other applications in the medical imaging field is still limited due to drawbacks of the different methods used to estimate the position of interaction. Common statistical methods for example require the collection of an extensive dataset of reference events with a narrow pencil beam aimed at a fine grid of reference positions. Such procedures are time consuming and not straightforwardly implemented in systems composed of many detectors. Here, we experimentally demonstrate for the first time a new calibration procedure for k-nearest neighbor (k-NN) position estimation that utilizes reference data acquired with a fan beam. The procedure is tested on two detectors consisting of 16 mm × 16 mm × 10 mm and 16 mm × 16 mm × 20 mm monolithic, Ca-codoped LSO:Ce crystals and digital photon counter (DPC) arrays. For both detectors, the spatial resolution and the bias obtained with the new method are found to be practically the same as those obtained with the previously used method based on pencil-beam irradiation, while the calibration time is reduced by a factor of 20. Specifically, a FWHM of 1.1 mm and a FWTM of 2.7 mm were obtained using the fan-beam method with the 10 mm crystal, whereas a FWHM of 1.5 mm and a FWTM of 6 mm were achieved with the 20 mm crystal. Using a fan beam made with a 4.5 MBq 22Na point-source and a tungsten slit collimator with 0.5 mm aperture, the total measurement time needed to acquire the reference dataset was 3 hours for the thinner crystal and 2 hours for the thicker one.
Directory of Open Access Journals (Sweden)
Fuqian Shi
2012-01-01
Full Text Available Emotional cellular (EC), proposed in our previous works, is a kind of semantic cell that contains a kernel and a shell; the kernel is formalized by a triple L = (P, d, δ), where P denotes a typical set of positive examples relative to word L, d is a pseudodistance measure on the emotional two-dimensional space valence-arousal, and δ is a probability density function on the positive real number field. The basic idea of the EC model is to assume that the neighborhood radius of each semantic concept is uncertain, and this uncertainty is measured by the one-dimensional density function δ. In this paper, product form features were evaluated using ECs and, to establish the product style database, a fuzzy case-based reasoning (FCBR) model under a defined similarity measurement based on fuzzy nearest neighbors (FNN) incorporating EC was applied to extract product styles. A mathematically formalized inference system for product style was also proposed, which likewise includes the uncertainty measurement tool emotional cellular. A case study of style acquisition of mobile phones illustrated the effectiveness of the proposed methodology.
Digital terrain model generalization incorporating scale, semantic and cognitive constraints
Partsinevelos, Panagiotis; Papadogiorgaki, Maria
2014-05-01
The research scheme comprises the combination of SOM with variations of other widely used generalization algorithms. For instance, an adaptation of the Douglas-Peucker line simplification method to 3D data is used in order to reduce the initial nodes, while maintaining their actual coordinates. Furthermore, additional methods are deployed, aiming to corroborate and verify the significance of each node, such as mathematical algorithms exploiting each pixel's nearest neighbors. Finally, besides the quantitative evaluation of error vs. information preservation in a DTM, cognitive inputs from geoscience experts are incorporated in order to test, fine-tune and advance our algorithm. Under the described strategy, which incorporates mechanical, topological, semantic and cognitive restraints, results demonstrate the necessity of integrating these characteristics in describing raster DTM surfaces. Acknowledgements: This work is partially supported under the framework of the "Cooperation 2011" project ATLANTAS (11_SYN_6_1937) funded from the Operational Program "Competitiveness and Entrepreneurship" (co-funded by the European Regional Development Fund (ERDF)) and managed by the Greek General Secretariat for Research and Technology.
Norrie disease and MAO genes: nearest neighbors.
Chen, Z Y; Denney, R M; Breakefield, X O
1995-01-01
The Norrie disease and MAO genes are tandemly arranged in the p11.4-p11.3 region of the human X chromosome in the order tel-MAOA-MAOB-NDP-cent. This relationship is conserved in the mouse in the order tel-MAOB-MAOA-NDP-cent. The MAO genes appear to have arisen by tandem duplication of an ancestral MAO gene, but their positional relationship to NDP appears to be random. Distinctive X-linked syndromes have been described for mutations in the MAOA and NDP genes, and in addition, individuals have been identified with contiguous gene syndromes due to chromosomal deletions which encompass two or three of these genes. Loss of function of the NDP gene causes a syndrome of congenital blindness and progressive hearing loss, sometimes accompanied by signs of CNS dysfunction, including variable mental retardation and psychiatric symptoms. Other mutations in the NDP gene have been found to underlie another X-linked eye disease, exudative vitreo-retinopathy. An MAOA deficiency state has been described in one family to date, with features of altered amine and amine metabolite levels, low normal intelligence, apparent difficulty in impulse control and cardiovascular difficulty in affected males. A contiguous gene syndrome in which all three genes are lacking, as well as other as yet unidentified flanking genes, results in severe mental retardation, small stature, seizures and congenital blindness, as well as altered amine and amine metabolites. Issues that remain to be resolved are the function of the NDP gene product, the frequency and phenotype of the MAOA deficiency state, and the possible occurrence and phenotype of an MAOB deficiency state.
Quantum Lattice-Gas Model for the Diffusion Equation
National Research Council Canada - National Science Library
Yepez, J
2001-01-01
.... It is a minimal model with two qubits per node of a one-dimensional lattice and it is suitable for implementation on a large array of small quantum computers interconnected by nearest-neighbor...
Social aggregation in pea aphids: experiment and random walk modeling.
Directory of Open Access Journals (Sweden)
Christa Nilsen
Full Text Available From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.
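The individual-level rules described above (stochastic stop/go transitions plus a correlated random walk whose turning, speed, and stopping probability depend on nearest-neighbor distance) can be sketched as follows. All rate constants and functional forms here are hypothetical placeholders, not the parameters fitted from the aphid data.

```python
import math
import random

def step(state, pos, heading, nnd, rng):
    """One update of a two-state walker: an aphid stops more often, turns more,
    and moves more slowly when its nearest-neighbor distance (nnd) is small."""
    p_stop = 0.5 * math.exp(-nnd)   # hypothetical: stopping likelier near a neighbor
    p_go = 0.2                      # hypothetical: resume-motion probability
    if state == "moving":
        if rng.random() < p_stop:
            return "stopped", pos, heading
        heading += rng.gauss(0.0, 1.0 / (1.0 + nnd))   # turn more when crowded
        speed = 0.5 + 0.5 * (1.0 - math.exp(-nnd))     # faster when isolated
        return "moving", (pos[0] + speed * math.cos(heading),
                          pos[1] + speed * math.sin(heading)), heading
    return ("moving", pos, heading) if rng.random() < p_go else ("stopped", pos, heading)

rng = random.Random(1)
state, pos, heading = "moving", (0.0, 0.0), 0.0
for _ in range(100):  # an isolated aphid (large nnd) walks nearly ballistically
    state, pos, heading = step(state, pos, heading, nnd=5.0, rng=rng)
```

Running the same loop with a small `nnd` gives the slow, tortuous, frequently stopped motion that constitutes the aggregation mechanism in the experiments.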
Tomcho, Jeremy C; Tillman, Magdalena R; Znosko, Brent M
2015-09-01
Predicting the secondary structure of RNA is an intermediate in predicting RNA three-dimensional structure. Commonly, determining RNA secondary structure from sequence uses free energy minimization and nearest neighbor parameters. Current algorithms utilize a sequence-independent model to predict free energy contributions of dinucleotide bulges. To determine if a sequence-dependent model would be more accurate, short RNA duplexes containing dinucleotide bulges with different sequences and nearest neighbor combinations were optically melted to derive thermodynamic parameters. These data suggested energy contributions of dinucleotide bulges were sequence-dependent, and a sequence-dependent model was derived. This model assigns free energy penalties based on the identity of nucleotides in the bulge (3.06 kcal/mol for two purines, 2.93 kcal/mol for two pyrimidines, 2.71 kcal/mol for 5'-purine-pyrimidine-3', and 2.41 kcal/mol for 5'-pyrimidine-purine-3'). The predictive model also includes a 0.45 kcal/mol penalty for an A-U pair adjacent to the bulge and a -0.28 kcal/mol bonus for a G-U pair adjacent to the bulge. The new sequence-dependent model results in predicted values within, on average, 0.17 kcal/mol of experimental values, a significant improvement over the sequence-independent model. This model and new experimental values can be incorporated into algorithms that predict RNA stability and secondary structure from sequence.
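Since the abstract quotes the model's parameters explicitly, the predicted free-energy penalty can be computed directly. The function below is a sketch of that bookkeeping; the function name and nucleotide classification are ours, not from the paper.

```python
PURINES = set("AG")  # A and G are purines; C and U are pyrimidines

def bulge_dG(n1, n2, adjacent_AU=False, adjacent_GU=False):
    """Sequence-dependent free-energy penalty (kcal/mol) for a 5'-n1 n2-3'
    dinucleotide bulge, using the parameters quoted in the abstract."""
    if n1 in PURINES and n2 in PURINES:
        dg = 3.06                       # two purines
    elif n1 not in PURINES and n2 not in PURINES:
        dg = 2.93                       # two pyrimidines
    elif n1 in PURINES:
        dg = 2.71                       # 5'-purine-pyrimidine-3'
    else:
        dg = 2.41                       # 5'-pyrimidine-purine-3'
    if adjacent_AU:
        dg += 0.45                      # penalty for an adjacent A-U pair
    if adjacent_GU:
        dg -= 0.28                      # bonus for an adjacent G-U pair
    return round(dg, 2)

print(bulge_dG("C", "A", adjacent_GU=True))  # → 2.13
```

Such per-motif terms are exactly what free-energy-minimization folding algorithms sum when scoring candidate secondary structures.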
Prototype-Incorporated Emotional Neural Network.
Oyedotun, Oyebade K; Khashman, Adnan
2017-08-15
Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.
Incorporating groundwater flow into the WEPP model
William Elliot; Erin Brooks; Tim Link; Sue Miller
2010-01-01
The water erosion prediction project (WEPP) model is a physically-based hydrology and erosion model. In recent years, the hydrology prediction within the model has been improved for forest watershed modeling by incorporating shallow lateral flow into watershed runoff prediction. This has greatly improved WEPP's hydrologic performance on small watersheds with...
Clustered K nearest neighbor algorithm for daily inflow forecasting
Akbari, M.; Van Overloop, P.J.A.T.M.; Afshar, A.
2010-01-01
Instance based learning (IBL) algorithms are a common choice among data driven algorithms for inflow forecasting. They are based on the similarity principle and prediction is made by the finite number of similar neighbors. In this sense, the similarity of a query instance is estimated according to
Utilization of Singularity Exponent in Nearest Neighbor Based Classifier
Czech Academy of Sciences Publication Activity Database
Jiřina, Marcel; Jiřina jr., M.
2013-01-01
Roč. 30, č. 1 (2013), s. 3-29 ISSN 0176-4268 Grant - others:Czech Technical University(CZ) CZ68407700 Institutional support: RVO:67985807 Keywords : multivariate data * probability density estimation * classification * probability distribution mapping function * probability density mapping function * power approximation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.571, year: 2013
MOST OBSERVATIONS OF OUR NEAREST NEIGHBOR: FLARES ON PROXIMA CENTAURI
Energy Technology Data Exchange (ETDEWEB)
Davenport, James R. A. [Department of Physics and Astronomy, Western Washington University, 516 High Street, Bellingham, WA 98225 (United States); Kipping, David M. [Department of Astronomy, Columbia University, 550 West 120th Street, New York, NY 10027 (United States); Sasselov, Dimitar [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Matthews, Jaymie M. [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada); Cameron, Chris [Department of Mathematics, Physics and Geology, Cape Breton University, 1250 Grand Lake Road, Sydney, NS B1P 6L2 (Canada)
2016-10-01
We present a study of white-light flares from the active M5.5 dwarf Proxima Centauri using the Canadian microsatellite Microvariability and Oscillations of STars (MOST). Using 37.6 days of monitoring data from 2014 to 2015, we have detected 66 individual flare events, the largest number of white-light flares observed to date on Proxima Cen. Flare energies in our sample range from 10^29 to 10^31.5 erg. The flare rate is lower than that of other classic flare stars of a similar spectral type, such as UV Ceti, which may indicate Proxima Cen had a higher flare rate in its youth. Proxima Cen does have an unusually high flare rate given its slow rotation period, however. Extending the observed power-law occurrence distribution down to 10^28 erg, we show that flares with flux amplitudes of 0.5% occur 63 times per day, while superflares with energies of 10^33 erg occur ∼8 times per year. Small flares may therefore pose a great difficulty in searches for transits from the recently announced 1.27 M⊕ Proxima b, while frequent large flares could have significant impact on the planetary atmosphere.
Czech Academy of Sciences Publication Activity Database
Tarasenko, Alexander
2018-01-01
Roč. 95, Jan (2018), s. 37-40 ISSN 1386-9477 R&D Projects: GA MŠk LO1409; GA MŠk LM2015088 Institutional support: RVO:68378271 Keywords : lattice gas systems * kinetic Monte Carlo simulations * diffusion and migration Subject RIV: BE - Theoretical Physics OBOR OECD: Atomic, molecular and chemical physics (physics of atoms and molecules including collision, interaction with radiation, magnetic resonances, Mössbauer effect) Impact factor: 2.221, year: 2016
Incorporating interfacial phenomena in solidification models
Beckermann, Christoph; Wang, Chao Yang
1994-01-01
A general methodology is available for the incorporation of microscopic interfacial phenomena in macroscopic solidification models that include diffusion and convection. The method is derived from a formal averaging procedure and a multiphase approach, and relies on the presence of interfacial integrals in the macroscopic transport equations. In a wider engineering context, these techniques are not new, but their application in the analysis and modeling of solidification processes has largely been overlooked. This article describes the techniques and demonstrates their utility in two examples in which microscopic interfacial phenomena are of great importance.
International Nuclear Information System (INIS)
Jin, Jiahua; Shen, Chen; Chu, Chen; Shi, Lei
2017-01-01
Highlights: • In spatial games, each player incorporates its environment into fitness only when its environment is greater than or equal to its payoff. • The mechanism of incorporating a dominant environment promotes the evolution of cooperation. • The robustness of such a mechanism in promoting cooperation is verified for the snowdrift game and various interaction networks. - Abstract: In spatial evolutionary games, the fitness of each player is usually measured by its inheritance (i.e. the accumulated payoffs by playing the game with all its nearest neighbors), or by the linear combination of its inheritance and its environment (i.e. the average of all its nearest neighbors’ inheritance). However, a rational individual incorporates environment into its fitness to develop itself only when environment is dominant in real life. Here, we redefine the individual fitness as a linear combination of inheritance and environment when environment performs better than inheritance. Multiple Monte Carlo simulation results show that incorporating a dominant environment can improve cooperation compared with the traditional case; furthermore, increasing the proportion of the prevailing environment enhances the cooperative level further. These findings indicate that our mechanism enhances the individual's ability to adapt to its environment, and makes spatial reciprocity more efficient. Besides, we also verify its robustness against different game models and various topology structures.
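The fitness rule described in the highlights (environment blended in only when it dominates inheritance) can be written compactly. The mixing weight `alpha` below is a hypothetical parameter; the abstract does not fix one.

```python
def fitness(inheritance, neighbor_inheritances, alpha=0.5):
    """Fitness as a linear combination of a player's own accumulated payoff
    (inheritance) and its environment (mean neighbor inheritance), applied
    only when the environment is greater than or equal to the payoff."""
    env = sum(neighbor_inheritances) / len(neighbor_inheritances)
    if env >= inheritance:
        return (1 - alpha) * inheritance + alpha * env
    return inheritance

print(fitness(2.0, [4.0, 6.0]))  # env = 5.0 dominates → 3.5
print(fitness(6.0, [1.0, 3.0]))  # env = 2.0 is weaker → 6.0
```

In a full Monte Carlo simulation this fitness would feed the usual strategy-imitation update between neighboring players on the lattice.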
Incorporating neurophysiological concepts in mathematical thermoregulation models
Kingma, Boris R. M.; Vosselman, M. J.; Frijns, A. J. H.; van Steenhoven, A. A.; van Marken Lichtenbelt, W. D.
2014-01-01
Skin blood flow (SBF) is a key player in human thermoregulation during mild thermal challenges. Various numerical models of SBF regulation exist. However, none explicitly incorporates the neurophysiology of thermal reception. This study tested a new SBF model that is in line with experimental data on thermal reception and the neurophysiological pathways involved in thermoregulatory SBF control. Additionally, a numerical thermoregulation model was used as a platform to test the function of the neurophysiological SBF model for skin temperature simulation. The prediction error of the SBF model was quantified by the root-mean-squared residual (RMSR) between simulations and experimental measurement data. Measurement data consisted of SBF (abdomen, forearm, hand), core and skin temperature recordings of young males during three transient thermal challenges (1 development and 2 validation). Additionally, ThermoSEM, a thermoregulation model, was used to simulate body temperatures using the new neurophysiological SBF model. The RMSR between simulated and measured mean skin temperature was used to validate the model. The neurophysiological model predicted SBF accurately in terms of RMSR, showing that human thermoregulation models can be equipped with SBF control functions that are based on neurophysiology without loss of performance. The neurophysiological approach to modelling thermoregulation is preferable to engineering approaches because it is more in line with the underlying physiology.
Directory of Open Access Journals (Sweden)
Drzewiecki Wojciech
2016-12-01
In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was carried out in three study areas, assessing both the accuracy of imperviousness coverage estimates at individual points in time and the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.
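One of the compared methods, plain k-nearest-neighbors regression, is easy to sketch from scratch. This is a minimal illustration of the technique, not the study's actual code; the toy data are invented:

```python
import numpy as np

def knn_regress(X_train, y_train, x, k=3):
    """Predict the mean target of the k training points closest to x
    (Euclidean distance). A minimal k-NN regression sketch."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
    idx = np.argsort(d)[:k]                   # indices of the k nearest
    return y_train[idx].mean()

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 10.0, 20.0, 30.0])
print(knn_regress(X, y, np.array([1.1]), k=2))  # averages targets at x=1 and x=2 -> 15.0
```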
Jafarpour, Farshid; Angheluta, Luiza; Goldenfeld, Nigel
2013-10-01
The dynamics of edge dislocations with parallel Burgers vectors, moving in the same slip plane, is mapped onto Dyson's model of a two-dimensional Coulomb gas confined in one dimension. We show that the tail distribution of the velocity of dislocations is power law in form, as a consequence of the pair interaction of nearest neighbors in one dimension. In two dimensions, we show the presence of a pairing phase transition in a system of interacting dislocations with parallel Burgers vectors. The scaling exponent of the velocity distribution at effective temperatures well below this pairing transition temperature can be derived from the nearest-neighbor interaction, while near the transition temperature, the distribution deviates from the form predicted by the nearest-neighbor interaction, suggesting the presence of collective effects.
Interacting-fermion approximation in the two-dimensional ANNNI model
International Nuclear Information System (INIS)
Grynberg, M.D.; Ceva, H.
1990-12-01
We investigate the effect of including domain-walls interactions in the two-dimensional axial next-nearest-neighbor Ising or ANNNI model. At low temperatures this problem is reduced to a one-dimensional system of interacting fermions which can be treated exactly. It is found that the critical boundaries of the low-temperature phases are in good agreement with those obtained using a free-fermion approximation. In contrast with the monotonic behavior derived from the free-fermion approach, the wall density or wave number displays reentrant phenomena when the ratio of the next-nearest-neighbor and nearest-neighbor interactions is greater than one-half. (author). 17 refs, 2 figs
Energetics and Dynamics of Cu(001)-c(2x2)Cl steps
van Dijk, F.R.; Zandvliet, Henricus J.W.; Poelsema, Bene
2006-01-01
The energetics of the step faceting transition of Cu(001) [copper (001) surface] upon Cl (chloride) adsorption in contact with HCl (hydrogen chloride) solution is modeled in terms of a solid-on-solid model that incorporates both nearest-neighbor and next-nearest-neighbor interactions. It is shown
Green function study of a mixed spin-3/2 and spin-1/2 Heisenberg ferrimagnetic model
International Nuclear Information System (INIS)
Li Jun; Wei Guozhu; Du An
2004-01-01
The magnetic properties of a mixed spin-3/2 and spin-1/2 Heisenberg ferrimagnetic system on a square lattice are investigated theoretically by a multisublattice Green-function technique which takes into account the quantum nature of Heisenberg spins. This model can be relevant for understanding the magnetic behavior of the new class of organometallic materials that exhibit spontaneous magnetic moments at room temperature. We discuss the spontaneous magnetic moments and the finite-temperature phase diagram. We find that there is no compensation point at finite temperature when only the nearest-neighbor interaction and the single-ion anisotropy are included. When the next-nearest-neighbor interaction between spin-1/2 sites is taken into account and exceeds a minimum value, a compensation point appears, and it is essentially unchanged when the other parameters in the Hamiltonian are held fixed. The next-nearest-neighbor interaction between spin-3/2 sites has the effect of changing the compensation temperature.
Incorporating damage mechanics into explosion simulation models
International Nuclear Information System (INIS)
Sammis, C.G.
1993-01-01
The source region of an underground explosion is commonly modeled as a nested series of shells. In the innermost "hydrodynamic regime" pressures and temperatures are sufficiently high that the rock deforms as a fluid and may be described using a PVT equation of state. Just beyond the hydrodynamic regime is the "non-linear regime", in which the rock has shear strength but the deformation is nonlinear. This regime extends out to the "elastic radius", beyond which the deformation is linear. In this paper, we develop a model for the non-linear regime in crystalline source rock where the nonlinearity is mostly due to fractures. We divide the non-linear regime into a "damage regime", in which the stresses are sufficiently high to nucleate new fractures from preexisting ones, and a "crack-sliding" regime, where motion on preexisting cracks produces amplitude-dependent attenuation and other non-linear effects but no new cracks are nucleated. The boundary between these two regimes is called the "damage radius". The micromechanical damage mechanics recently developed by Ashby and Sammis (1990) is used to write an analytic expression for the damage radius in terms of the initial fracture spectrum of the source rock, and to develop an algorithm which may be used to incorporate damage mechanics into computer source models for the damage regime. Effects of water saturation and loading rate are also discussed
Abelian tensor models on the lattice
Chaudhuri, Soumyadeep; Giraldo-Rivera, Victor I.; Joseph, Anosh; Loganayagam, R.; Yoon, Junggi
2018-04-01
We consider a chain of Abelian Klebanov-Tarnopolsky fermionic tensor models coupled through quartic nearest-neighbor interactions. We characterize the gauge-singlet spectrum for small chains (L = 2, 3, 4, 5) and observe that the spectral statistics exhibits strong evidence in favor of quasi-many-body localization.
Energy Technology Data Exchange (ETDEWEB)
Deviren, Bayram [Institute of Science, Erciyes University, Kayseri 38039 (Turkey); Canko, Osman [Department of Physics, Erciyes University, Kayseri 38039 (Turkey); Keskin, Mustafa [Department of Physics, Erciyes University, Kayseri 38039 (Turkey)], E-mail: keskin@erciyes.edu.tr
2008-09-15
The Ising model with three alternative layers on the honeycomb and square lattices is studied by using the effective-field theory with correlations. We consider that the nearest-neighbor spins of each layer are coupled ferromagnetically and the adjacent spins of the nearest-neighbor layers are coupled either ferromagnetically or anti-ferromagnetically depending on the sign of the bilinear exchange interactions. We investigate the thermal variations of the magnetizations and present the phase diagrams. The phase diagrams contain the paramagnetic, ferromagnetic and anti-ferromagnetic phases, and the system also exhibits a tricritical behavior.
Ground state phase diagram of extended attractive Hubbard model
International Nuclear Information System (INIS)
Robaszkiewicz, S.; Chao, K.A.; Micnas, R.
1980-08-01
The ground state phase diagram of the extended Hubbard model with intraatomic attraction has been derived in the Hartree-Fock approximation formulated in terms of the Bogoliubov variational approach. For a given value of electron density, the nature of the ordered ground state depends essentially on the sign and the strength of the nearest neighbor coupling. (author)
Incorporating direct marketing activity into latent attrition models
Schweidel, David A.; Knox, George
2013-01-01
When defection is unobserved, latent attrition models provide useful insights about customer behavior and accurate forecasts of customer value. Yet extant models ignore direct marketing efforts. Response models incorporate the effects of direct marketing, but because they ignore latent attrition,
Incorporating uncertainty in predictive species distribution modelling.
Beale, Colin M; Lennon, Jack J
2012-01-19
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
A Financial Market Model Incorporating Herd Behaviour.
Wray, Christopher M; Bishop, Steven R
2016-01-01
Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market
Incorporating territory compression into population models
Ridley, J; Komdeur, J; Sutherland, WJ; Sutherland, William J.
The ideal despotic distribution, whereby the lifetime reproductive success a territory's owner achieves is unaffected by population density, is a mainstay of behaviour-based population models. We show that the population dynamics of an island population of Seychelles warblers (Acrocephalus
DEFF Research Database (Denmark)
Høst-Madsen, Anders; Shah, Peter Jivan; Hansen, Torben
1987-01-01
Computer-simulation techniques are used to study the domain-growth kinetics of (2×1) ordering in a two-dimensional Ising model with nonconserved order parameter and with variable ratio α of next-nearest- and nearest-neighbor interactions. At zero temperature, persistent growth characterized...
International Nuclear Information System (INIS)
Li Jun; Wei Guozhu; Du An
2005-01-01
The compensation and critical behaviors of a mixed spin-2 and spin-1/2 Heisenberg ferrimagnetic system on a square lattice are investigated theoretically by the two-time Green's function technique, which takes into account the quantum nature of Heisenberg spins. The model can be relevant for understanding the magnetic behavior of the new class of organometallic ferromagnetic materials that exhibit spontaneous magnetic properties at room temperature. We carry out the calculation of the sublattice magnetizations and the spin-wave spectra of the ground state. In particular, we have studied the effects of the nearest- and next-nearest-neighbor interactions, the crystal field and the external magnetic field on the compensation temperature and the critical temperature. When only the nearest-neighbor interactions and the crystal field are included, no compensation temperature exists; when the next-nearest-neighbor interaction between spin-1/2 sites is taken into account and exceeds a minimum value, a compensation point appears, and it is essentially unchanged when the other parameters in the Hamiltonian are held fixed. The next-nearest-neighbor interaction between spin-2 sites and the external magnetic field change the compensation temperature, and a compensation temperature exists only within a narrow range of Hamiltonian parameters.
True dose from incorporated activities. Models for internal dosimetry
International Nuclear Information System (INIS)
Breustedt, B.; Eschner, W.; Nosske, D.
2012-01-01
The assessment of doses after incorporation of radionuclides cannot use direct measurements of the doses, as for example dosimetry in external radiation fields. The only observables are activities in the body or in excretions. Models are used to calculate the doses based on the measured activities. The incorporated activities and the resulting doses can vary by more than seven orders of magnitude between occupational and medical exposures. Nevertheless the models and calculations applied in both cases are similar. Since the models for the different applications have been developed independently by ICRP and MIRD different terminologies have been used. A unified terminology is being developed. (orig.)
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
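The two-step simulation structure described above can be sketched directly: parametric uncertainty is drawn once per replicate in the outer (replication) loop, and temporal variance once per year in the inner (time-step) loop. The growth-rate means and variances below are invented illustrative numbers, not the piping plover estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

def pva(n_reps=1000, n_years=20, n0=50, quasi_ext=10):
    """Quasi-extinction probability from a two-loop PVA simulation.
    Outer loop: one draw of the mean growth rate from its sampling
    distribution per replicate (parametric uncertainty). Inner loop:
    one environmental draw per year (temporal variance)."""
    extinct = 0
    for _ in range(n_reps):
        mu = rng.normal(-0.02, 0.03)       # parametric uncertainty (assumed SE)
        n = float(n0)
        for _ in range(n_years):
            r = rng.normal(mu, 0.10)       # annual (temporal) variance
            n *= np.exp(r)
        if n < quasi_ext:
            extinct += 1
    return extinct / n_reps

print(pva())  # quasi-extinction probability across replicates
```

Dropping the outer draw (fixing `mu` at its point estimate) reproduces the standard analysis the abstract argues understates risk.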
Incorporating functional inter-relationships into protein function prediction algorithms
Directory of Open Access Journals (Sweden)
Kumar Vipin
2009-05-01
Background: Functional classification schemes (e.g. the Gene Ontology) that serve as the basis for annotation efforts in several organisms are often the source of gold standard information for computational efforts at supervised protein function prediction. While successful function prediction algorithms have been developed, few previous efforts have utilized more than the protein-to-functional class label information provided by such knowledge bases. For instance, the Gene Ontology not only captures protein annotations to a set of functional classes, but it also arranges these classes in a DAG-based hierarchy that captures rich inter-relationships between different classes. These inter-relationships present both opportunities, such as the potential for additional training examples for small classes from larger related classes, and challenges, such as a harder-to-learn distinction between similar GO terms, for standard classification-based approaches. Results: We propose a method to enhance the performance of classification-based protein function prediction algorithms by addressing the issue of using these inter-relationships between functional classes constituting functional classification schemes. Using a standard measure for evaluating the semantic similarity between nodes in an ontology, we quantify and incorporate these inter-relationships into the k-nearest neighbor classifier. We present experiments on several large genomic data sets, each of which is used for the modeling and prediction of over a hundred classes from the GO Biological Process ontology. The results show that this incorporation produces more accurate predictions for a large number of the functional classes considered, and also that the classes that benefit most from this approach are those containing the fewest members. In addition, we show how our proposed framework can be used for integrating information from the entire GO hierarchy for improving the accuracy of
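The idea of folding semantic similarity into a k-NN classifier can be sketched as follows: each nearest neighbor votes for a target class not only with that class itself, but with its semantically closest annotated class. All names, the toy similarity function, and the scoring form are illustrative assumptions, not the paper's actual method or API:

```python
def class_score(query_neighbors, target, labels, sim):
    """Score functional class `target` for a query protein. `sim(a, b)`
    stands in for any ontology semantic-similarity measure in [0, 1];
    `labels` maps each neighbor to its annotated classes."""
    score = 0.0
    for nb in query_neighbors:            # the k nearest neighbors of the query
        ann = labels[nb]
        if ann:                           # vote with the most similar annotation
            score += max(sim(target, c) for c in ann)
    return score / max(len(query_neighbors), 1)

# toy ontology: identical classes fully similar, GO:A and GO:B are siblings
sim = lambda a, b: 1.0 if a == b else (0.5 if {a, b} == {"GO:A", "GO:B"} else 0.0)
labels = {"p1": ["GO:A"], "p2": ["GO:B"], "p3": ["GO:C"]}
print(class_score(["p1", "p2", "p3"], "GO:A", labels, sim))  # (1.0 + 0.5 + 0.0)/3 = 0.5
```

A plain k-NN would score only exact label matches (1/3 here); the similarity term lets related classes lend support, which is how small classes gain training signal from larger relatives.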
Violent Intent Modeling: Incorporating Cultural Knowledge into the Analytical Process
Energy Technology Data Exchange (ETDEWEB)
Sanfilippo, Antonio P.; Nibbs, Faith G.
2007-08-24
While culture has a significant effect on the appropriate interpretation of textual data, the incorporation of cultural considerations into data transformations has not been systematic. Recognizing that the successful prevention of terrorist activities could hinge on the knowledge of the subcultures, Anthropologist and DHS intern Faith Nibbs has been addressing the need to incorporate cultural knowledge into the analytical process. In this Brown Bag she will present how cultural ideology is being used to understand how the rhetoric of group leaders influences the likelihood of their constituents to engage in violent or radicalized behavior, and how violent intent modeling can benefit from understanding that process.
A simple spatiotemporal chaotic Lotka-Volterra model
International Nuclear Information System (INIS)
Sprott, J.C.; Wildenberg, J.C.; Azizi, Yousef
2005-01-01
A mathematically simple example of a high-dimensional (many-species) Lotka-Volterra model that exhibits spatiotemporal chaos in one spatial dimension is described. The model consists of a closed ring of identical agents, each competing for fixed finite resources with two of its four nearest neighbors. The model is prototypical of more complicated models in its quasiperiodic route to chaos (including attracting 3-tori), bifurcations, spontaneous symmetry breaking, and spatial pattern formation
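A ring of identical Lotka-Volterra competitors of this kind is simple to integrate numerically. The particular choice of which two of the four nearest neighbors each agent competes with (here i-2 and i+1) and all parameter values are illustrative assumptions, not necessarily the paper's exact model:

```python
import numpy as np

def ring_lv(n=100, dt=0.05, steps=4000, seed=1):
    """Euler integration of dx_i/dt = x_i (1 - x_{i-2} - x_i - x_{i+1})
    on a closed ring of n identical agents with periodic boundaries.
    The asymmetric neighbor choice (i-2, i+1) is an assumption."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.3, n)           # random initial populations
    for _ in range(steps):
        comp = np.roll(x, 2) + x + np.roll(x, -1)   # x[i-2] + x[i] + x[i+1]
        x = x + dt * x * (1.0 - comp)
        x = np.clip(x, 0.0, None)          # populations stay non-negative
    return x

final = ring_lv()
print(final.min(), final.max())  # an inhomogeneous spatial pattern emerges
```

`np.roll` implements the periodic (ring) boundary condition for free, which is why the coupling can be written without index arithmetic.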
A lattice gas model on a tangled chain
International Nuclear Information System (INIS)
Mejdani, R.
1993-04-01
We have used a lattice gas model defined on a tangled chain to study enzyme kinetics by a modified transfer matrix method. Using a simple iterative algorithm, we have obtained different kinds of saturation curves for different configurations of the tangled chain and different types of additional interactions. In some special cases of configurations and interactions we find the same equations for the saturation curves as those obtained previously for the lattice gas model with nearest-neighbor interactions, or with alternate nearest-neighbor interactions, using different techniques such as correlated-walk theory, the partition point technique, or the transfer matrix method. This more general model and the new results could be useful for experimental investigations. (author). 20 refs, 6 figs
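For context, the transfer matrix machinery underlying such saturation curves is standard for a 1D lattice gas with nearest-neighbor interactions. The sketch below is the textbook construction in the grand canonical ensemble, not the tangled-chain model itself; parameter names are illustrative:

```python
import numpy as np

def coverage(mu, eps, beta=1.0):
    """Mean site occupation of a 1D nearest-neighbor lattice gas in the
    thermodynamic limit, via the 2x2 transfer matrix. `mu` is the
    chemical potential, `eps` the pair interaction energy."""
    z = np.exp(beta * mu)                 # fugacity
    def lam(zz):
        # symmetric transfer matrix; largest eigenvalue gives free energy
        T = np.array([[1.0, np.sqrt(zz)],
                      [np.sqrt(zz), zz * np.exp(-beta * eps)]])
        return np.linalg.eigvalsh(T).max()
    # coverage theta = z * d(ln lambda)/dz, by central difference
    h = 1e-6 * max(z, 1.0)
    return z * (np.log(lam(z + h)) - np.log(lam(z - h))) / (2 * h)

print(round(coverage(0.0, 0.0), 3))  # non-interacting check: z/(1+z) = 0.5
```

For eps = 0 the eigenvalues are 0 and 1+z, so the coverage reduces to the Langmuir isotherm z/(1+z), a useful sanity check on the numerics.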
Incorporating nitrogen fixing cyanobacteria in the global biogeochemical model HAMOCC
Paulsen, Hanna; Ilyina, Tatiana; Six, Katharina
2015-04-01
Nitrogen fixation by marine diazotrophs plays a fundamental role in the oceanic nitrogen and carbon cycle as it provides a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Since most global biogeochemical models include nitrogen fixation only diagnostically, they are not able to capture its spatial pattern sufficiently. Here we present the incorporation of an explicit, dynamic representation of diazotrophic cyanobacteria and the corresponding nitrogen fixation in the global ocean biogeochemical model HAMOCC (Hamburg Ocean Carbon Cycle model), which is part of the Max Planck Institute for Meteorology Earth system model (MPI-ESM). The parameterization of the diazotrophic growth is thereby based on available knowledge about the cyanobacterium Trichodesmium spp., which is considered as the most significant pelagic nitrogen fixer. Evaluation against observations shows that the model successfully reproduces the main spatial distribution of cyanobacteria and nitrogen fixation, covering large parts of the tropical and subtropical oceans. Besides the role of cyanobacteria in marine biogeochemical cycles, their capacity to form extensive surface blooms induces a number of bio-physical feedback mechanisms in the Earth system. The processes driving these interactions, which are related to the alteration of heat absorption, surface albedo and momentum input by wind, are incorporated in the biogeochemical and physical model of the MPI-ESM in order to investigate their impacts on a global scale. First preliminary results will be shown.
Methods improvements incorporated into the SAPHIRE ASP models
International Nuclear Information System (INIS)
Sattison, M.B.; Blackman, H.S.; Novack, S.D.; Smith, C.L.; Rasmuson, D.M.
1994-01-01
The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methodology, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements
Methods improvements incorporated into the SAPHIRE ASP models
International Nuclear Information System (INIS)
Sattison, M.B.; Blackman, H.S.; Novack, S.D.
1995-01-01
The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements
Truncated Calogero-Sutherland models on a circle
Tummuru, Tarun R.; Jain, Sudhir R.; Khare, Avinash
2017-12-01
We investigate a quantum many-body system with particles moving on a circle and subject to two-body and three-body potentials. This class of models, in which the range of interaction r can be set to a certain number of neighbors, interpolates between a system with interactions up to next-to-nearest neighbors and the celebrated Calogero-Sutherland model. The exact ground state energy and a part of the excitation spectrum have been obtained.
Incorporating model parameter uncertainty into inverse treatment planning
International Nuclear Information System (INIS)
Lian Jun; Xing Lei
2004-01-01
Radiobiological treatment planning depends not only on the accuracy of the models describing the dose-response relation of different tumors and normal tissues but also on the accuracy of tissue-specific radiobiological parameters in these models. Whereas the general formalism remains the same, different sets of model parameters lead to different solutions and thus critically determine the final plan. Here we describe an inverse planning formalism with inclusion of model parameter uncertainties. This is made possible by using a statistical analysis-based framework developed by our group. In this formalism, the uncertainties of model parameters, such as the parameter a that describes the tissue-specific effect in the equivalent uniform dose (EUD) model, are expressed by a probability density function and are included in the dose optimization process. We found that the final solution strongly depends on the distribution functions of the model parameters. Considering that currently available models for computing biological effects of radiation are simplistic, and the clinical data used to derive the models are sparse and of questionable quality, the proposed technique provides us with an effective tool to minimize the effect caused by the uncertainties in a statistical sense. With the incorporation of the uncertainties, the technique has potential for us to maximally utilize the available radiobiology knowledge for better IMRT treatment
Large-n limit of the Heisenberg model: The decorated lattice and the disordered chain
International Nuclear Information System (INIS)
Khoruzhenko, B.A.; Pastur, L.A.; Shcherbina, M.V.
1989-01-01
The critical temperature of the generalized spherical model (large-component limit of the classical Heisenberg model) on a cubic lattice, whose every bond is decorated by L spins, is found. When L → ∞, the asymptotics of the critical temperature is T_c ∼ aL^(-1). The reduction of the number of spherical constraints for the model is found to be fairly large. The free energy of the one-dimensional generalized spherical model with random nearest-neighbor interaction is calculated.
Transferable tight-binding model for strained group IV and III-V materials and heterostructures
Tan, Yaohua; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy B.; Klimeck, Gerhard
2016-07-01
Capturing the effects of strain and material interfaces is critical for device-level transistor modeling. We introduce a transferable sp3d5s* tight-binding model with nearest-neighbor interactions for arbitrarily strained group IV and III-V materials. The tight-binding model is parametrized with respect to hybrid functional (HSE06) calculations for a variety of strained systems. The tight-binding calculations of ultrasmall superlattices formed by group IV and group III-V materials show good agreement with the corresponding HSE06 calculations. The application of the tight-binding model to superlattices demonstrates that a transferable tight-binding model with nearest-neighbor interactions can be obtained for group IV and III-V materials.
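The nearest-neighbor tight-binding idea behind this abstract can be illustrated with the simplest possible case, a single-orbital one-dimensional chain. This is only a sketch: it is not the paper's multi-orbital sp3d5s* parameterization, and the on-site energy and hopping values below are hypothetical.

```python
import math

def tb_band(k, onsite=0.0, hopping=-1.0, a=1.0):
    """Dispersion E(k) = eps + 2t*cos(ka) of a 1-D single-orbital
    nearest-neighbor tight-binding chain. Illustrative only; the paper's
    sp3d5s* model has many orbitals and HSE06-fitted parameters."""
    return onsite + 2.0 * hopping * math.cos(k * a)

# Band extrema: bottom at k = 0, top at the zone boundary k = pi/a.
bottom = tb_band(0.0)       # eps + 2t
top = tb_band(math.pi)      # eps - 2t
bandwidth = top - bottom    # 4|t|
```

With t = -1 the band runs from -2 to +2, so the bandwidth is 4|t|; fitting such hoppings (here to hybrid-functional data) is what "parametrized with respect to HSE06 calculations" means in practice.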
A mathematical model for incorporating biofeedback into human postural control
Directory of Open Access Journals (Sweden)
Ersal Tulga
2013-02-01
Background Biofeedback of body motion can serve as a balance aid and rehabilitation tool. To date, mathematical models considering the integration of biofeedback into postural control have represented this integration as a sensory addition and limited their application to a single degree-of-freedom representation of the body. This study has two objectives: 1) to develop a scalable method for incorporating biofeedback into postural control that is independent of the model's degrees of freedom, how it handles sensory integration, and the modeling of its postural controller; and 2) to validate this new model using multidirectional perturbation experimental results. Methods Biofeedback was modeled as an additional torque to the postural controller torque. For validation, this biofeedback modeling approach was applied to a vibrotactile biofeedback device and incorporated into a two-link multibody model with full-state-feedback control that represents the dynamics of bipedal stance. Average response trajectories of body sway and center of pressure (COP) to multidirectional surface perturbations of subjects with vestibular deficits were used for model parameterization and validation in multiple perturbation directions and for multiple display resolutions. The quality of fit was quantified using average error and cross-correlation values. Results The mean of the average errors across all tactor configurations and perturbations was 0.24° for body sway and 0.39 cm for COP. The mean of the cross-correlation values was 0.97 for both body sway and COP. Conclusions The biofeedback model developed in this study is capable of capturing experimental response trajectory shapes with low average errors and high cross-correlation values in both the anterior-posterior and medial-lateral directions for all perturbation directions and spatial resolution display configurations considered. The results validate that biofeedback can be modeled as an additional
Uniqueness of Gibbs Measure for Models with Uncountable Set of Spin Values on a Cayley Tree
International Nuclear Information System (INIS)
Eshkabilov, Yu. Kh.; Haydarov, F. H.; Rozikov, U. A.
2013-01-01
We consider models with nearest-neighbor interactions and with the set [0, 1] of spin values, on a Cayley tree of order k ≥ 1. It is known that the ‘splitting Gibbs measures’ of the model can be described by solutions of a nonlinear integral equation. For arbitrary k ≥ 2 we find a sufficient condition under which the integral equation has a unique solution; hence, under this condition, the corresponding model has a unique splitting Gibbs measure.
Fidelity study of superconductivity in extended Hubbard models
Plonka, N.; Jia, C. J.; Wang, Y.; Moritz, B.; Devereaux, T. P.
2015-07-01
The Hubbard model with local on-site repulsion is generally thought to possess a superconducting ground state for appropriate parameters, but the effects of more realistic long-range Coulomb interactions have not been studied extensively. We study the influence of these interactions on superconductivity by including nearest- and next-nearest-neighbor extended Hubbard interactions in addition to the usual on-site terms. Utilizing numerical exact diagonalization, we analyze the signatures of superconductivity in the ground states through the fidelity metric of quantum information theory. We find that nearest and next-nearest neighbor interactions have thresholds above which they destabilize superconductivity regardless of whether they are attractive or repulsive, seemingly due to competing charge fluctuations.
Incorporating modelled subglacial hydrology into inversions for basal drag
Directory of Open Access Journals (Sweden)
C. P. Koziol
2017-12-01
A key challenge in modelling coupled ice-flow–subglacial hydrology is initializing the state and parameters of the system. We address this problem by presenting a workflow for initializing these values at the start of a summer melt season. The workflow depends on running a subglacial hydrology model for the winter season, when the system is not forced by meltwater inputs, and ice velocities can be assumed constant. Key parameters of the winter run of the subglacial hydrology model are determined from an initial inversion for basal drag using a linear sliding law. The state of the subglacial hydrology model at the end of winter is incorporated into an inversion of basal drag using a non-linear sliding law which is a function of water pressure. We demonstrate this procedure in the Russell Glacier area and compare the output of the linear sliding law with two non-linear sliding laws. Additionally, we compare the modelled winter hydrological state to radar observations and find that it is in line with summer rather than winter observations.
An electricity generation planning model incorporating demand response
International Nuclear Information System (INIS)
Choi, Dong Gu; Thomas, Valerie M.
2012-01-01
Energy policies that aim to reduce carbon emissions and change the mix of electricity generation sources, such as carbon cap-and-trade systems and renewable electricity standards, can affect not only the source of electricity generation, but also the price of electricity and, consequently, demand. We develop an optimization model to determine the lowest cost investment and operation plan for the generating capacity of an electric power system. The model incorporates demand response to price change. In a case study for a U.S. state, we show the price, demand, and generation mix implications of a renewable electricity standard, and of a carbon cap-and-trade policy with and without initial free allocation of carbon allowances. This study shows that both the demand moderating effects and the generation mix changing effects of the policies can be the sources of carbon emissions reductions, and also shows that the share of the sources could differ with different policy designs. The case study provides different results when demand elasticity is excluded, underscoring the importance of incorporating demand response in the evaluation of electricity generation policies. - Highlights: ► We develop an electric power system optimization model including demand elasticity. ► Both renewable electricity and carbon cap-and-trade policies can moderate demand. ► Both policies affect the generation mix, price, and demand for electricity. ► Moderated demand can be a significant source of carbon emission reduction. ► For cap-and-trade policies, initial free allowances change outcomes significantly.
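The demand-moderating mechanism this abstract describes can be sketched as a toy fixed-point calculation: constant-elasticity demand set against a merit-order supply stack. All numbers below are hypothetical, and the paper's actual model is a full capacity-expansion optimization, not this iteration.

```python
def equilibrium(demand0, elasticity, p0, supply_steps, iters=50):
    """Fixed-point sketch of price-responsive demand: demand follows
    D(p) = D0 * (p / p0) ** (-elasticity), and price is the marginal
    cost of the last dispatched supply step (steps sorted by cost).
    Hypothetical illustration of demand response, not the paper's model."""
    price = p0
    demand = demand0
    for _ in range(iters):
        demand = demand0 * (price / p0) ** (-elasticity)
        served = 0.0
        for capacity, cost in supply_steps:  # assumed sorted by cost
            price = cost
            served += capacity
            if served >= demand:
                break
    return price, demand

steps = [(50.0, 20.0), (50.0, 40.0), (50.0, 90.0)]  # (MW, $/MWh)
# Perfectly inelastic demand of 120 MW clears on the $90 step;
# with elasticity > 0 the high price would moderate demand downward.
equilibrium(120.0, 0.0, 30.0, steps)
```

Raising `elasticity` above zero makes demand shrink as the marginal unit gets more expensive, which is exactly the demand-moderating effect the case study attributes to the policies.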
Effective model with strong Kitaev interactions for α-RuCl3
Suzuki, Takafumi; Suga, Sei-ichiro
2018-04-01
We use an exact numerical diagonalization method to calculate the dynamical spin structure factors of three ab initio models and one ab initio guided model for a honeycomb-lattice magnet α-RuCl3. We also use thermal pure quantum states to calculate the temperature dependence of the heat capacity, the nearest-neighbor spin-spin correlation function, and the static spin structure factor. From the results obtained from these four effective models, we find that, even when the magnetic order is stabilized at low temperature, the intensity at the Γ point in the dynamical spin structure factors increases with increasing nearest-neighbor spin correlation. In addition, we find that the four models fail to explain heat-capacity measurements whereas two of the four models succeed in explaining inelastic-neutron-scattering experiments. In the four models, when temperature decreases, the heat capacity shows a prominent peak at a high temperature where the nearest-neighbor spin-spin correlation function increases. However, the peak temperature in heat capacity is too low in comparison with that observed experimentally. To address these discrepancies, we propose an effective model that includes strong ferromagnetic Kitaev coupling, and we show that this model quantitatively reproduces both inelastic-neutron-scattering experiments and heat-capacity measurements. To further examine the adequacy of the proposed model, we calculate the field dependence of the polarized terahertz spectra, which reproduces the experimental results: the spin-gapped excitation survives up to an onset field where the magnetic order disappears and the response in the high-field region is almost linear. Based on these numerical results, we argue that the low-energy magnetic excitation in α-RuCl3 is mainly characterized by interactions such as off-diagonal interactions and weak Heisenberg interactions between nearest-neighbor pairs, rather than by the strong Kitaev interactions.
Tantalum strength model incorporating temperature, strain rate and pressure
Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt
Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high-temperature, high-strain-rate and high-pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates the effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and the Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Incorporation of chemical kinetic models into process control
International Nuclear Information System (INIS)
Herget, C.J.; Frazer, J.W.
1981-01-01
An important consideration in chemical process control is to determine the precise rationing of reactant streams, particularly when a large time delay exists between the mixing of the reactants and the measurement of the product. In this paper, a method is described for incorporating chemical kinetic models into the control strategy in order to achieve optimum operating conditions. The system is first characterized by determining a reaction rate surface as a function of all input reactant concentrations over a feasible range. A nonlinear constrained optimization program is then used to determine the combination of reactants which produces the specified yield at minimum cost. This operating condition is then used to establish the nominal concentrations of the reactants. The actual operation is determined through a feedback control system employing a Smith predictor. The method is demonstrated on a laboratory bench scale enzyme reactor
Environment overwhelms both nature and nurture in a model spin glass
Middleton, A. Alan; Yang, Jie
We are interested in exploring what information determines the particular history of the glassy long-term dynamics in a disordered material. We study the effect of initial configurations and the realization of stochastic dynamics on the long time evolution of configurations in a two-dimensional Ising spin glass model. The evolution of nearest neighbor correlations is computed using patchwork dynamics, a coarse-grained numerical heuristic for temporal evolution. The dependence of the nearest neighbor spin correlations at long time on both initial spin configurations and noise histories is studied through cross-correlations of long-time configurations, and the spin correlations are found to be independent of both. We investigate how effectively rigid bond clusters coarsen. Scaling laws are used to study the convergence of configurations and the distribution of sizes of nearly rigid clusters. The implications of the computational results on simulations and phenomenological models of spin glasses are discussed. We acknowledge NSF support under DMR-1410937 (CMMT program).
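The nearest-neighbor spin-correlation observable tracked above can be illustrated with a minimal sketch on a periodic 2-D square lattice of ±1 spins; the patchwork-dynamics heuristic itself is not reproduced here.

```python
def nn_correlation(spins):
    """Average nearest-neighbor spin-spin correlation <s_i s_j> on a
    periodic 2-D square lattice of +/-1 spins (list of lists).
    Each site contributes its right and down bonds, so every bond
    is counted exactly once."""
    L = len(spins)
    total, bonds = 0, 0
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            total += s * spins[(i + 1) % L][j]   # vertical bond
            total += s * spins[i][(j + 1) % L]   # horizontal bond
            bonds += 2
    return total / bonds

aligned = [[1, 1], [1, 1]]       # ferromagnetic: correlation +1
checker = [[1, -1], [-1, 1]]     # checkerboard: correlation -1
```

In a spin-glass study this quantity would be evaluated on configurations evolved from different initial conditions and noise histories, then cross-correlated, as the abstract describes.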
A COMPARISON OF K-NEAREST NEIGHBOR AND NAIVE BAYES FOR CLASSIFYING LAND SUITABILITY FOR TEAK TREE PLANTING
Directory of Open Access Journals (Sweden)
Didik Srianto
2016-10-01
Data mining is the process of analyzing data from different perspectives and summarizing it into useful information that can be used to increase profit, reduce costs, or both. Technically, data mining can be described as the process of finding correlations or patterns among hundreds or thousands of fields in a large relational database. At present, Perum Perhutani KPH SEMARANG still uses a manual procedure to determine the planting type (teak / non-teak). K-Nearest Neighbour (k-NN) is a data mining algorithm that can be used for classification and regression. The Naive Bayes classifier is a technique that can be used for classification. In this study, k-NN and Naive Bayes are used to classify teak-tree data from Perum Perhutani KPH SEMARANG, and their classification results are compared. Testing was carried out using the RapidMiner software. In the tests, k-NN was judged better than Naive Bayes, with accuracies of 96.66% and 82.63%, respectively. Keywords - k-NN, classification, Naive Bayes, teak tree planting
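As a minimal illustration of the k-NN side of this comparison (the study itself used RapidMiner, not custom code), a plain majority-vote k-NN classifier can be sketched as follows; the two-feature samples and the teak / non-teak labels are hypothetical.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points under Euclidean distance. `train` is a list of
    (features, label) pairs. A minimal sketch of the k-NN classifier
    compared against Naive Bayes in the study."""
    ranked = sorted(train, key=lambda fl: math.dist(fl[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "teak"), ((1.2, 0.9), "teak"),
         ((4.0, 4.2), "non-teak"), ((3.8, 4.0), "non-teak")]
knn_predict(train, (1.1, 1.0))   # -> "teak"
```

Accuracy comparisons like the one reported (96.66% vs 82.63%) come from running such a predictor over a held-out test set and counting correct labels.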
A Coupled k-Nearest Neighbor Algorithm for Multi-Label Classification
2015-05-22
classification, an image may contain several concepts simultaneously, such as beach, sunset and kangaroo. Such tasks are usually denoted as multi-label... informatics, a gene can belong to both metabolism and transcription classes; and in music categorization, a song may be labeled as Mozart and sad. In the
Ghinita, Gabriel; Kalnis, Panos; Kantarcıoğlu, Murat; Bertino, Elisa
2010-01-01
Mobile devices with global positioning capabilities allow users to retrieve points of interest (POI) in their proximity. To protect user privacy, it is important not to disclose exact user coordinates to un-trusted entities that provide location-based services. Currently, there are two main approaches to protect the location privacy of users: (i) hiding locations inside cloaking regions (CRs) and (ii) encrypting location data using private information retrieval (PIR) protocols. Previous work focused on finding good trade-offs between privacy and performance of user protection techniques, but disregarded the important issue of protecting the POI dataset D. For instance, location cloaking requires large-sized CRs, leading to excessive disclosure of POIs (O(|D|) in the worst case). PIR, on the other hand, reduces this bound to O(√|D|), but at the expense of high processing and communication overhead. We propose hybrid, two-step approaches for private location-based queries which provide protection for both the users and the database. In the first step, user locations are generalized to coarse-grained CRs which provide strong privacy. Next, a PIR protocol is applied with respect to the obtained query CR. To protect against excessive disclosure of POI locations, we devise two cryptographic protocols that privately evaluate whether a point is enclosed inside a rectangular region or a convex polygon. We also introduce algorithms to efficiently support PIR on dynamic POI sub-sets. We provide solutions for both approximate and exact NN queries. In the approximate case, our method discloses O(1) POI, orders of magnitude fewer than CR- or PIR-based techniques. For the exact case, we obtain optimal disclosure of a single POI, although with slightly higher computational overhead. Experimental results show that the hybrid approaches are scalable in practice, and outperform the pure-PIR approach in terms of computational and communication overhead.
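The plaintext geometric predicate underlying the paper's cryptographic protocols, testing whether a point lies inside a convex polygon, can be sketched with cross products; the private (encrypted) evaluation of this predicate is not reproduced here.

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_convex_polygon(point, vertices):
    """True if `point` lies inside or on a convex polygon whose
    `vertices` are given in counter-clockwise order: the point must
    lie on the left of (or on) every directed edge. This is the
    plaintext predicate the paper evaluates privately."""
    n = len(vertices)
    return all(cross(vertices[i], vertices[(i + 1) % n], point) >= 0
               for i in range(n))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
in_convex_polygon((1, 1), square)   # inside -> True
in_convex_polygon((3, 1), square)   # outside -> False
```

The rectangular-region case is the same test specialized to four axis-aligned edges, i.e. four coordinate comparisons.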
Combining Fourier and lagged k-nearest neighbor imputation for biomedical time series data
Rahman, Shah Atiqur; Huang, Yuxiao; Claassen, Jan; Heintzman, Nathaniel; Kleinberg, Samantha
2015-01-01
Most clinical and biomedical data contain missing values. A patient’s record may be split across multiple institutions, devices may fail, and sensors may not be worn at all times. While these missing values are often ignored, this can lead to bias and error when the data are mined. Further, the data are not simply missing at random. Instead the measurement of a variable such as blood glucose may depend on its prior values as well as that of other variables. These dependencies exist across tim...
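A simplified version of the lagged k-NN idea, filling a gap by matching the values just before it against similar windows elsewhere in the series, can be sketched as follows. The Fourier component of the paper's combined method is omitted, and the toy series is hypothetical.

```python
def lagged_knn_impute(series, lag=2, k=2):
    """Fill each missing value (None) using the k most similar
    histories: compare the `lag` values preceding the gap with every
    other complete length-`lag` window, and average the next values of
    the k closest windows. A simplified sketch of lagged k-NN
    imputation; the paper pairs it with a Fourier-based method."""
    filled = list(series)
    for t, v in enumerate(series):
        if v is not None or t < lag:
            continue
        history = filled[t - lag:t]
        if any(h is None for h in history):
            continue
        candidates = []
        for s in range(lag, len(series)):
            if s == t or series[s] is None:
                continue
            window = series[s - lag:s]
            if any(w is None for w in window):
                continue
            dist = sum((a - b) ** 2 for a, b in zip(window, history))
            candidates.append((dist, series[s]))
        if candidates:
            nearest = sorted(candidates)[:k]
            filled[t] = sum(val for _, val in nearest) / len(nearest)
    return filled

# The gap after the history (3, 4) is filled from the identical
# two-step history later in the series.
lagged_knn_impute([1, 2, 3, 3, 4, None, 1, 2, 3, 3, 4, 5], lag=2, k=1)
```

Because the match is on lagged windows rather than on time alone, the imputation respects the temporal dependencies the abstract emphasizes, instead of assuming values are missing at random.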
IMPROVING COMPUTER PLAYER INTELLIGENCE IN FIGHTING GAMES USING WEIGHTED K-NEAREST NEIGHBOR
Directory of Open Access Journals (Sweden)
M Ihsan Alfani Putera
2018-02-01
Games are among the computer technologies that are developing and changing most rapidly. The purpose of a game is to entertain and give pleasure to its users. One important element of game design is a challenge that is balanced to the level. In this regard, artificial intelligence (AI) is one of the required ingredients in building a game. An AI that does not adapt to the opponent's strategy becomes easy to predict and repetitive, while an AI that is too clever makes the game too difficult for the player; either condition lowers the player's enjoyment. What is needed, therefore, is an AI method that can adapt to the ability of the player, so that the difficulty faced tracks the player's skill and the enjoyment of playing the game is preserved. In previous research, the AI method most often used in fighting games is k-NN, but that method treats all game attributes as equal, which makes the AI's learning results suboptimal. This study proposes a weighted k-NN method for AI in fighting games, in which weighting is applied to give each attribute an influence adjusted to the player's actions. In an evaluation over 50 matches across 3 test scenarios, the proposed weighted k-NN produced an AI intelligence accuracy of 51%, whereas the previous method, unweighted k-NN, produced only 38% and a random method produced 25%.
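The attribute-weighting idea can be sketched generically: distances are computed on weighted features so that attributes judged more relevant dominate the vote. The weights, training samples, and action labels below are hypothetical, not the paper's game-derived ones.

```python
import math
from collections import Counter

def weighted_knn(train, query, weights, k=3):
    """k-NN with per-attribute weights: the Euclidean distance is
    computed on weighted feature differences, so heavily weighted
    attributes dominate neighbor selection. A generic sketch of the
    weighted k-NN idea; the paper's game-specific weighting scheme
    is not reproduced."""
    def dist(x):
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, x, query)))
    nearest = sorted(train, key=lambda fl: dist(fl[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# With weight 0 on the second (noisy) attribute, only the first counts.
train = [((0.0, 9.0), "punch"), ((0.1, 0.0), "punch"),
         ((5.0, 0.1), "kick")]
weighted_knn(train, (0.2, 8.5), weights=(1.0, 0.0), k=1)   # -> "punch"
```

Setting all weights to 1 recovers plain k-NN, which is the baseline the study reports as less accurate.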
Nearest-Neighbor Interactions and Their Influence on the Structural Aspects of Dipeptides
Directory of Open Access Journals (Sweden)
Gunajyoti Das
2013-01-01
In this theoretical study, the role of the side chain moiety of the C-terminal residue in influencing the structural and molecular properties of dipeptides is analyzed by considering a series of seven dipeptides. The C-terminal positions of the dipeptides are varied with seven different amino acid residues, namely Val, Leu, Asp, Ser, Gln, His, and Pyl, while their N-terminal positions are kept constant with Sec residues. Full geometry optimization and vibrational frequency calculations are carried out at the B3LYP/6-311++G(d,p) level in gas and aqueous phase. The stereo-electronic effects of the side chain moieties of the C-terminal residues are found to influence the values of the Φ and Ω dihedrals, the planarity of the peptide planes, and the geometry around the C7 α-carbon atoms of the dipeptides. The gas phase intramolecular H-bond combinations of the dipeptides are similar to those in aqueous phase. The theoretical vibrational spectra of the dipeptides reflect the nature of the intramolecular H-bonds existing in the dipeptide structures. Solvation effects of the aqueous environment are evident on the geometrical parameters related to the amide planes, the dipole moments, the HOMO-LUMO energy gaps, as well as the thermodynamic stability of the dipeptides.
Directory of Open Access Journals (Sweden)
Fachruddin Fachruddin
2017-07-01
Software effort estimation, the process of estimating the cost of a software project, is an important part of carrying out software projects. Previous studies have estimated software effort with various methods, both machine learning and non-machine learning. This study runs a set of attribute-selection experiments on project parameters, using the k-nearest neighbours technique as the estimator, selecting attributes by information gain and mutual information, and examining how to find the most representative project parameters for software effort estimation. The software effort estimation datasets used in the experiments are albrecht, china, kemerer and mizayaki94, which can be obtained from the dedicated Software Effort Estimation data repository at the url http://openscience.us/repo/effort/. The authors then built an attribute-selection application to select the project parameters; the system outputs the selected datasets in ARFF format. The application was written in Java using the NetBeans IDE. The generated datasets, containing the selected parameters, were then compared when performing software effort estimation using the WEKA tool. Feature selection succeeded in lowering the estimation error (represented by the RAE and RMSE values); the lower the error (RAE and RMSE), the more accurate the resulting estimate. Estimation improved after feature selection with both information gain and mutual information. From the resulting errors it can be concluded that the datasets produced by feature selection with information gain are better than those produced with mutual information, although the difference between the two is not very significant.
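The information-gain criterion used above to rank project parameters can be sketched in a few lines. The study used its own Java tool plus WEKA; this is only an illustration on toy data with hypothetical attribute values and labels.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of discrete labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(values, labels):
    """Entropy reduction in `labels` obtained by splitting on a
    discrete attribute `values`: H(labels) - H(labels | attribute).
    This is the criterion used to rank attributes before estimation."""
    n = len(labels)
    cond = 0.0
    for v in set(values):
        subset = [lab for x, lab in zip(values, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# A perfectly predictive attribute recovers the full 1 bit of label
# entropy; an uninformative attribute yields a gain of 0.
information_gain(["a", "a", "b", "b"], ["low", "low", "high", "high"])
```

Ranking attributes by this gain and keeping the top ones is the attribute-selection step; mutual information on discrete variables coincides with this same quantity, computed between attribute and label.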
Nearest neighbor affects G:C to A:T transitions induced by alkylating agents.
Glickman, B W; Horsfall, M J; Gordon, A J; Burns, P A
1987-01-01
The influence of local DNA sequence on the distribution of G:C to A:T transitions induced in the lacI gene of E. coli by a series of alkylating agents has been analyzed. In the case of nitrosoguanidine, two nitrosoureas and a nitrosamine, a strong preference for mutation at sites preceded 5' by a purine base was noted. This preference was observed with both methyl and ethyl donors where the predicted common ultimate alkylating species is the alkyl diazonium ion. In contrast, this preference was not seen following treatment with ethylmethanesulfonate. The observed preference for 5'-PuG-3' sites over 5'-PyG-3' sites corresponds well with alterations observed in the Ha-ras oncogene recovered after treatment with NMU. This indicates that the mutations recovered in the oncogenes are likely the direct consequence of the alkylation treatment and that the local sequence effects seen in E. coli also appear to occur in mammalian cells. PMID:3329097
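The 5'-neighbor classification underlying this analysis, purine (Pu) versus pyrimidine (Py) immediately 5' of a G on the given strand, can be sketched as a simple context counter; the toy sequence below is hypothetical.

```python
from collections import Counter

PURINES = set("AG")

def five_prime_context_of_G(seq):
    """Count G sites preceded 5' by a purine (A/G) versus a
    pyrimidine (C/T) on the given strand; this is the sequence
    feature the study links to G:C -> A:T mutation hotspots.
    Illustrative only, operating on a single strand."""
    counts = Counter()
    for prev, base in zip(seq, seq[1:]):
        if base == "G":
            counts["5'-PuG" if prev in PURINES else "5'-PyG"] += 1
    return counts

# "AGCG" has one purine context (AG) and one pyrimidine context (CG).
five_prime_context_of_G("AGCG")
```

Comparing such context counts between mutated and unmutated G sites is how a 5'-PuG-3' preference like the one reported would show up.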
Nearest neighbor affects G:C to A:T transitions induced by alkylating agents
Energy Technology Data Exchange (ETDEWEB)
Glickman, B.W.; Horsfall, M.J.; Gordon, A.J.E.; Burns, P.A.
1987-12-01
The influence of local DNA sequence on the distribution of G:C to A:T transitions induced in the lacI gene of E. coli by a series of alkylating agents has been analyzed. In the case of nitrosoguanidine, two nitrosoureas and a nitrosamine, a strong preference for mutation at sites preceded 5' by a purine base was noted. This preference was observed with both methyl and ethyl donors where the predicted common ultimate alkylating species is the alkyl diazonium ion. In contrast, this preference was not seen following treatment with ethylmethanesulfonate. The observed preference for 5'-PuG-3' sites over 5'-PyG-3' sites corresponds well with alterations observed in the Ha-ras oncogene recovered after treatment with NMU. This indicates that the mutations recovered in the oncogenes are likely the direct consequence of the alkylation treatment and that the local sequence effects seen in E. coli also appear to occur in mammalian cells.
International Nuclear Information System (INIS)
Biddle, J.; Das Sarma, S.
2010-01-01
Localization properties of noninteracting quantum particles in one-dimensional incommensurate lattices are investigated with an exponential short-range hopping that is beyond the minimal nearest-neighbor tight-binding model. Energy-dependent mobility edges are analytically predicted in this model and verified with numerical calculations. The results are then mapped to the continuum Schrödinger equation, and an approximate analytical expression for the localization phase diagram and the energy-dependent mobility edges in the ground band is obtained.
DEFF Research Database (Denmark)
Schleger, P.; Hardy, W.N.; Casalta, H.
1994-01-01
A lattice-gas model for the high temperature oxygen-ordering thermodynamics in YBa2Cu3O6+x is presented, which assumes constant effective pair interactions between oxygen atoms and includes in a simple fashion the effect of the electron spin and charge degrees of freedom. This is done using a commonly utilized picture relating the creation of mobile electron holes and unpaired spins to the insertion of oxygen into the basal plane. The model is solved using the nearest-neighbor square approximation of the cluster-variation method. In addition, preliminary Monte Carlo results using next-nearest-neighbor interactions are presented. The model is compared to experimental results for the thermodynamic response function, kT(∂x/∂μ)T (μ is the chemical potential), the number of monovalent copper atoms, and the fractional site occupancies. The model drastically improves...
Analytical results for entanglement in the five-qubit anisotropic Heisenberg model
International Nuclear Information System (INIS)
Wang Xiaoguang
2004-01-01
We solve the eigenvalue problem of the five-qubit anisotropic Heisenberg model, without use of Bethe's ansatz, and give analytical results for entanglement and mixedness of two nearest-neighbor qubits. The entanglement takes its maximum at Δ=1 (Δ>1) for the case of zero (finite) temperature with Δ being the anisotropic parameter. In contrast, the mixedness takes its minimum at Δ=1 (Δ>1) for the case of zero (finite) temperature.
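A hedged numerical sketch of the computation described above, assuming a periodic five-qubit XXZ ring in the standard Pauli-operator convention (the paper's exact normalization and boundary conditions may differ): build the Hamiltonian, take the ground state, and evaluate the Wootters concurrence of two nearest-neighbor qubits.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(pauli, site, n=5):
    """Embed a single-qubit operator at `site` in an n-qubit space."""
    mats = [I2] * n
    mats[site] = pauli
    return reduce(np.kron, mats)

delta, n = 1.0, 5  # anisotropy Δ=1 and ring size (illustrative choice)
H = sum(op(sx, i) @ op(sx, (i + 1) % n)
        + op(sy, i) @ op(sy, (i + 1) % n)
        + delta * op(sz, i) @ op(sz, (i + 1) % n) for i in range(n))

_, vecs = np.linalg.eigh(H)
psi = vecs[:, 0]                                      # ground state
rho = psi.reshape(4, 8) @ psi.reshape(4, 8).conj().T  # qubits 0,1 reduced

# Wootters concurrence of the two-qubit reduced density matrix.
yy = np.kron(sy, sy)
lams = np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)))
lams = np.sort(lams)[::-1]
concurrence = max(0.0, lams[0] - lams[1] - lams[2] - lams[3])
```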
A stochastic MILP energy planning model incorporating power market dynamics
International Nuclear Information System (INIS)
Koltsaklis, Nikolaos E.; Nazos, Konstantinos
2017-01-01
Highlights: •Stochastic MILP model for the optimal energy planning of a power system. •Power market dynamics (offers/bids) are incorporated in the proposed model. •Monte Carlo method for capturing the uncertainty of some key parameters. •Analytical supply cost composition per power producer and activity. •Clean dark and spark spreads are calculated for each power unit. -- Abstract: This paper presents an optimization-based methodological approach to address the problem of the optimal planning of a power system at an annual level in competitive and uncertain power markets. More specifically, a stochastic mixed integer linear programming (MILP) model has been developed, combining advanced optimization techniques with the Monte Carlo method in order to deal with uncertainty issues. The main focus of the proposed framework is the dynamic formulation of the strategy followed by all market participants in volatile market conditions, as well as a detailed economic assessment of the power system’s operation. The applicability of the proposed approach has been tested on a real case study of the interconnected Greek power system, quantifying in detail all the relevant technical and economic aspects of the system’s operation. The proposed work identifies, in the form of probability distributions, the optimal power generation mix, electricity trade at a regional level, carbon footprint, as well as the detailed total supply cost composition, according to the assumed market structure. The paper demonstrates that the proposed optimization approach is able to provide important insights into the appropriate energy strategies designed by market participants, as well as into the strategic long-term decisions to be made by investors and/or policy makers at a national and/or regional level, underscoring potential risks and providing appropriate price signals for critical energy projects under real market operating conditions.
Incorporating Context Dependency of Species Interactions in Species Distribution Models.
Lany, Nina K; Zarnetske, Phoebe L; Gouhier, Tarik C; Menge, Bruce A
2017-07-01
Species distribution models typically use correlative approaches that characterize the species-environment relationship using occurrence or abundance data for a single species. However, species distributions are determined by both abiotic conditions and biotic interactions with other species in the community. Therefore, climate change is expected to impact species through direct effects on their physiology and indirect effects propagated through their resources, predators, competitors, or mutualists. Furthermore, the sign and strength of species interactions can change according to abiotic conditions, resulting in context-dependent species interactions that may change across space or with climate change. Here, we incorporated the context dependency of species interactions into a dynamic species distribution model. We developed a multi-species model that uses a time-series of observational survey data to evaluate how abiotic conditions and species interactions affect the dynamics of three rocky intertidal species. The model further distinguishes between the direct effects of abiotic conditions on abundance and the indirect effects propagated through interactions with other species. We apply the model to keystone predation by the sea star Pisaster ochraceus on the mussel Mytilus californianus and the barnacle Balanus glandula in the rocky intertidal zone of the Pacific coast, USA. Our method indicated that biotic interactions between P. ochraceus and B. glandula affected B. glandula dynamics across >1000 km of coastline. Consistent with patterns from keystone predation, the growth rate of B. glandula varied according to the abundance of P. ochraceus in the previous year. The data and the model did not indicate that the strength of keystone predation by P. ochraceus varied with a mean annual upwelling index. Balanus glandula cover increased following years with high phytoplankton abundance measured as mean annual chlorophyll-a. M. californianus exhibited the same
Model for Volatile Incorporation into Soils and Dust on Mars
Clark, B. C.; Yen, A.
2006-12-01
Martian soils with high content of compounds of sulfur and chlorine are ubiquitous on Mars, having been found at all five landing sites. Sulfate and chloride salts are implicated by a variety of evidence, but few conclusive specific identifications have been made. Discovery of jarosite and Mg-Ca sulfates in outcrops at Meridiani Planum (MER mission) and regional-scale beds of kieserite and gypsum (Mars Express mission) notwithstanding, the sulfates in soils are uncertain. Chlorides or other Cl-containing minerals have not been uniquely identified directly by any method. Viking and Pathfinder missions found trends in the elemental analytical data consistent with MgSO4, but Viking results are biased by duricrust samples and Pathfinder by soil contamination of rock surfaces. The Mars Exploration Rovers (MER) missions have taken extensive data on soils with no confirmation of trends implicating any particular cation. In our model of martian dust and soil, the S and Cl are initially incorporated by condensation or chemisorption on grains directly from gas phase molecules in the atmosphere. It is shown by modeling that the coatings thus formed cannot quantitatively explain the apparent elemental composition of these materials, and therefore involve the migration of ions and formation of microscopic weathering rinds. Original cation inventories of unweathered particles are isochemically conserved. Exposed rock surfaces should also have micro rinds, depending upon the length of time of exposure. Martian soils may therefore have unusual chemical properties when interacting with aqueous layers or infused fluids. Potential ramifications to the quantitative accuracy of x-ray fluorescence and Moessbauer spectroscopy on unprocessed samples are also assessed.
Cloud Impacts on Pavement Temperature in Energy Balance Models
Walker, C. L.
2013-12-01
Forecast systems provide decision support for end-users ranging from the solar energy industry to municipalities concerned with road safety. Pavement temperature is an important variable when considering vehicle response to various weather conditions. A complex, yet direct relationship exists between tire and pavement temperatures. Literature has shown that as tire temperature increases, friction decreases which affects vehicle performance. Many forecast systems suffer from inaccurate radiation forecasts resulting in part from the inability to model different types of clouds and their influence on radiation. This research focused on forecast improvement by determining how cloud type impacts the amount of shortwave radiation reaching the surface and subsequent pavement temperatures. The study region was the Great Plains where surface solar radiation data were obtained from the High Plains Regional Climate Center's Automated Weather Data Network stations. Road pavement temperature data were obtained from the Meteorological Assimilation Data Ingest System. Cloud properties and radiative transfer quantities were obtained from the Clouds and Earth's Radiant Energy System mission via Aqua and Terra Moderate Resolution Imaging Spectroradiometer satellite products. An additional cloud data set was incorporated from the Naval Research Laboratory Cloud Classification algorithm. Statistical analyses using a modified nearest neighbor approach were first performed relating shortwave radiation variability with road pavement temperature fluctuations. Then statistical associations were determined between the shortwave radiation and cloud property data sets. Preliminary results suggest that substantial pavement forecasting improvement is possible with the inclusion of cloud-specific information. Future model sensitivity testing seeks to quantify the magnitude of forecast improvement.
Regional Calibration of SCS-CN L-THIA Model: Application for Ungauged Basins
Directory of Open Access Journals (Sweden)
Ji-Hong Jeon
2014-05-01
Estimating surface runoff for ungauged watersheds is an important issue. The Soil Conservation Service Curve Number (SCS-CN) method, developed from long-term experimental data, is widely used to estimate surface runoff from gauged or ungauged watersheds. Many modelers have used the documented SCS-CN parameters without calibration, sometimes resulting in significant errors in estimating surface runoff. Several methods for regionalization of SCS-CN parameters were evaluated. The regionalization methods include: (1) average; (2) land use area weighted average; (3) hydrologic soil group area weighted average; (4) area combined land use and hydrologic soil group weighted average; (5) spatial nearest neighbor; (6) inverse distance weighted average; and (7) global calibration method, and model performance for each method was evaluated with application to 14 watersheds located in Indiana. Eight watersheds were used for calibration and six watersheds for validation. For the validation results, the spatial nearest neighbor method provided the highest average Nash-Sutcliffe (NS) value, at 0.58, for the six watersheds, but it also produced the lowest single NS value, and the variance of NS values for this method was the highest. The global calibration method provided the second highest average NS value, at 0.56, with low variation of NS values. Although the spatial nearest neighbor method provided the highest average NS value, this method was not statistically different from the other methods. However, the global calibration method was significantly different from the other methods except the spatial nearest neighbor method. Therefore, we conclude that the global calibration method is appropriate for regionalizing SCS-CN parameters for ungauged watersheds.
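The SCS-CN runoff relation underlying the abstract above is standard; a minimal implementation in US-customary units, with the usual initial abstraction Ia = 0.2S, is:

```python
def scs_cn_runoff(p_in, cn):
    """SCS-CN direct runoff (inches) for rainfall depth p_in (inches)
    and curve number cn, using initial abstraction Ia = 0.2*S."""
    s = 1000.0 / cn - 10.0       # potential maximum retention (inches)
    ia = 0.2 * s                 # initial abstraction
    if p_in <= ia:
        return 0.0               # all rainfall abstracted: no runoff
    return (p_in - ia) ** 2 / (p_in - ia + s)
```

For example, `scs_cn_runoff(3.0, 80)` gives 1.25 inches of direct runoff (S = 2.5, Ia = 0.5). Regionalization, in the paper's sense, amounts to choosing how the value of `cn` is transferred from calibrated to ungauged watersheds.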
Chu, Weiqi; Li, Xiantao
2018-01-01
We present some estimates for the memory kernel function in the generalized Langevin equation, derived using the Mori-Zwanzig formalism from a one-dimensional lattice model in which the particles interact through nearest and second-nearest neighbors. The kernel function can be explicitly expressed in matrix form. The analysis focuses on the decay properties, both spatially and temporally, revealing a power-law behavior in both cases. The dependence on the level of coarse-graining is also studied.
Shen, Ka
2018-04-01
We study magnon spectra at finite temperature in yttrium iron garnet using a tight-binding model with nearest-neighbor exchange interaction. The spin reduction due to thermal magnon excitation is taken into account via the mean field approximation to the local spin and is found to be different at two sets of iron atoms. The resulting temperature dependence of the spin wave gap shows good agreement with experiment. We find that only two magnon modes are relevant to the ferromagnetic resonance.
Optical phonons in cubic AlxGa1-xN approached by the modified random element isodisplacement model
International Nuclear Information System (INIS)
Liu, M.S.; Bursill, L.A.; Prawer, S.
1998-01-01
The behaviour of longitudinal and transverse optical phonons in cubic AlxGa1-xN is derived theoretically as a function of the concentration x (0≤x≤1). The calculation is based on a Modified Random Element Isodisplacement model which considers the interactions from the nearest-neighbor and second-neighbor atoms. We find one-mode behavior in AlxGa1-xN, where the phonon frequency in general varies continuously and approximately linearly with x. (author)
J1x-J1y-J2 square-lattice anisotropic Heisenberg model
Energy Technology Data Exchange (ETDEWEB)
Pires, A.S.T., E-mail: antpires@frisica.ufmg.br
2017-08-01
Highlights: • We use the SU(3) Schwinger boson formalism. • We present the phase diagram at zero temperature. • We calculate the quadrupole structure factor. - Abstract: The spin-one Heisenberg model with an easy-plane single-ion anisotropy and spatially anisotropic nearest-neighbor coupling, frustrated by a next-nearest-neighbor interaction, is studied at zero temperature using a SU(3) Schwinger boson formalism (sometimes also referred to as flavor wave theory) in a mean field approximation. The local constraint is enforced by introducing a Lagrange multiplier. The enlarged Hilbert space of S = 1 spins leads to a nematic phase that is ubiquitous for S = 1 spins with single-ion anisotropy. The phase diagram shows two magnetically ordered phases, separated by a quantum paramagnetic (nematic) phase.
Pineda, M.; Stamatakis, M.
2017-07-01
Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computational cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
Hiebeler, David E; Millett, Nicholas E
2011-06-21
We investigate a spatial lattice model of a population employing dispersal to nearest and second-nearest neighbors, as well as long-distance dispersal across the landscape. The model is studied via stochastic spatial simulations, ordinary pair approximation, and triplet approximation. The latter method, which uses the probabilities of state configurations of contiguous blocks of three sites as its state variables, is demonstrated to be greatly superior to pair approximations for estimating spatial correlation information at various scales. Correlations between pairs of sites separated by arbitrary distances are estimated by constructing spatial Markov processes using the information from both approximations. These correlations demonstrate why pair approximation misses basic qualitative features of the model, such as decreasing population density as a large proportion of offspring are dropped on second-nearest neighbors, and why triplet approximation is able to include them. Analytical and numerical results show that, excluding long-distance dispersal, the initial growth rate of an invading population is maximized and the equilibrium population density is also roughly maximized when the population spreads its offspring evenly over nearest and second-nearest neighboring sites. Copyright © 2011 Elsevier Ltd. All rights reserved.
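A toy stochastic simulation in the spirit of the dispersal model above (illustrative parameters; the paper's full model also includes long-distance dispersal and is analyzed via pair and triplet approximations):

```python
import random

# Occupied sites on a periodic 2D lattice send offspring to nearest
# neighbors with probability 1-s and to second-nearest (diagonal)
# neighbors with probability s; occupied sites die with probability
# delta per event.  All parameter values here are assumptions.
random.seed(1)
L, s, delta, steps = 30, 0.5, 0.3, 20000
near = [(1, 0), (-1, 0), (0, 1), (0, -1)]
second = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
occ = {(i, j) for i in range(L) for j in range(L) if random.random() < 0.3}

for _ in range(steps):
    if not occ:
        break                      # population went extinct
    site = random.choice(list(occ))
    if random.random() < delta:
        occ.discard(site)          # death event
    else:                          # birth onto a neighboring site
        dx, dy = random.choice(second if random.random() < s else near)
        occ.add(((site[0] + dx) % L, (site[1] + dy) % L))

density = len(occ) / L**2          # equilibrium density estimate
```

Sweeping `s` from 0 to 1 in such a simulation is the direct analogue of the paper's finding that density is roughly maximized when offspring are spread evenly over nearest and second-nearest neighbors.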
International Nuclear Information System (INIS)
Lambotte, Guillaume; Chartrand, Patrice
2011-01-01
Highlights: → We model the Na 2 O-SiO 2 -NaF-SiF 4 reciprocal system based on a comprehensive review of all available experimental data. → The assessment includes Na 2 O-SiO 2 and NaF-SiF 4 binary systems. → Improvements to the Modified Quasichemical Model in the Quadruplet Approximation are presented. → The very strong short-range ordering among first-nearest and second-nearest neighbors in this system is reproduced. → This work constitutes the first assessment for all compositions and temperatures of a reciprocal oxyfluoride system. - Abstract: All available thermodynamic and phase diagram data for the condensed phases of the ternary reciprocal system (NaF + SiF 4 + Na 2 O + SiO 2 ) have been critically assessed. Model parameters for the unary (SiF 4 ), the binary systems and the ternary reciprocal system have been found, which permit to reproduce the most reliable experimental data. The Modified Quasichemical Model in the Quadruplet Approximation was used for the oxyfluoride liquid solution, which exhibits strong first-nearest-neighbor and second-nearest-neighbor short-range ordering. This thermodynamic model takes into account both types of short-range ordering as well as the coupling between them. Model parameters have been estimated for the hypothetical high-temperature liquid SiF 4 .
Emergent 1D Ising Behavior in an Elementary Cellular Automaton Model
Kassebaum, Paul G.; Iannacchione, Germano S.
The fundamental nature of an evolving one-dimensional (1D) Ising model is investigated with an elementary cellular automaton (CA) simulation. The emergent CA simulation employs an ensemble of cells in one spatial dimension, each cell capable of two microstates interacting with simple nearest-neighbor rules and incorporating an external field. The behavior of the CA model provides insight into the dynamics of coupled two-state systems not expressible by exact analytical solutions. For instance, state progression graphs show the causal dynamics of a system through time in relation to the system's entropy. Unique graphical analysis techniques are introduced through difference patterns, diffusion patterns, and state progression graphs of the 1D ensemble visualizing the evolution. All analyses are consistent with the known behavior of the 1D Ising system. The CA simulation and new pattern recognition techniques are scalable (in dimension, complexity, and size) and have many potential applications such as complex design of materials, control of agent systems, and evolutionary mechanism design.
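A minimal sketch of a two-state nearest-neighbor CA with an external field, as a stand-in for the ensemble described above (the update rule and parameters here are illustrative assumptions, not the paper's exact automaton):

```python
import numpy as np

# Spins s_i in {-1,+1} on a ring update toward the sign of the local
# field from the two nearest neighbors plus an external field h
# (a tie leaves the current state unchanged).
rng = np.random.default_rng(0)
n, h, steps = 64, 0.1, 50
s = rng.choice([-1, 1], size=n)

history = [s.copy()]
for _ in range(steps):
    local = np.roll(s, 1) + np.roll(s, -1) + h
    s = np.where(local > 0, 1, np.where(local < 0, -1, s))
    history.append(s.copy())

# A crude "difference pattern": number of sites that flipped per step.
flips = [(history[t] != history[t + 1]).sum() for t in range(steps)]
```

Stacking the rows of `history` as an image reproduces the kind of state-progression graph the abstract describes; `flips` going to zero signals the automaton settling into a fixed pattern.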
Bieniek, Maciej; Korkusiński, Marek; Szulakowska, Ludmiła; Potasz, Paweł; Ozfidan, Isil; Hawrylak, Paweł
2018-02-01
We present here the minimal tight-binding model for a single layer of transition metal dichalcogenides (TMDCs) MX2 (M = metal; X = chalcogen) which illuminates the physics and captures band nesting, massive Dirac fermions, and valley Landé and Zeeman magnetic field effects. TMDCs share the hexagonal lattice with graphene but their electronic bands require much more complex atomic orbitals. Using symmetry arguments, a minimal basis consisting of three metal d orbitals and three chalcogen dimer p orbitals is constructed. The tunneling matrix elements between nearest-neighbor metal and chalcogen orbitals are explicitly derived at the K, -K, and Γ points of the Brillouin zone. The nearest-neighbor tunneling matrix elements connect specific metal and sulfur orbitals, yielding an effective 6×6 Hamiltonian giving the correct composition of metal and chalcogen orbitals but not the direct gap at the K points. The direct gap at K, correct masses, and conduction band minima at the Q points responsible for band nesting are obtained by inclusion of next-nearest-neighbor Mo-Mo tunneling. The parameters of the next-nearest-neighbor model are successfully fitted to MX2 (M = Mo; X = S) density functional ab initio calculations of the highest valence and lowest conduction band dispersion along the K-Γ line in the Brillouin zone. The effective two-band massive Dirac Hamiltonian for MoS2, Landé g factors, and valley Zeeman splitting are obtained.
75 FR 20265 - Airworthiness Directives; Liberty Aerospace Incorporated Model XL-2 Airplanes
2010-04-19
Airworthiness Directives; Liberty Aerospace Incorporated Model XL-2 Airplanes. AGENCY: Federal Aviation... AD 2009-08-05, which applies to certain Liberty Aerospace Incorporated Model XL-2 airplanes. AD 2009-08-05... Office, 1701 Columbia Avenue, College Park, Georgia 30337; telephone: (404) 474-5524; facsimile: (404...
Loss given default models incorporating macroeconomic variables for credit cards
Crook, J.; Bellotti, T.
2012-01-01
Based on UK data for major retail credit cards, we build several models of Loss Given Default based on account level data, including Tobit, a decision tree model, and Beta and fractional logit transformations. We find that Ordinary Least Squares models with macroeconomic variables perform best for forecasting Loss Given Default at the account and portfolio levels on independent hold-out data sets. The inclusion of macroeconomic conditions in the model is important, since it provides a means to m...
Incorporating Contagion in Portfolio Credit Risk Models Using Network Theory
Anagnostou, I.; Sourabh, S.; Kandhai, D.
2018-01-01
Portfolio credit risk models estimate the range of potential losses due to defaults or deteriorations in credit quality. Most of these models perceive default correlation as fully captured by the dependence on a set of common underlying risk factors. In light of empirical evidence, the ability of
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
A statistical model for aggregating judgments by incorporating peer predictions
McCoy, John; Prelec, Drazen
2017-01-01
We propose a probabilistic model to aggregate the answers of respondents answering multiple-choice questions. The model does not assume that everyone has access to the same information, and so does not assume that the consensus answer is correct. Instead, it infers the most probable world state, even if only a minority vote for it. Each respondent is modeled as receiving a signal contingent on the actual world state, and as using this signal to both determine their own answer and predict the ...
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
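A sketch of a two-state MMPP without covariates (illustrative rates; in the paper's framework the arrival rates would additionally be modulated by temperature, sea level pressure, and relative humidity):

```python
import random

# Two-state Markov modulated Poisson process: a hidden Markov chain
# switches between a "dry-ish" state with low tip rate lam[0] and a
# "wet" state with high tip rate lam[1].  All values are assumptions.
random.seed(42)
lam = [0.5, 5.0]          # Poisson arrival rate in each hidden state
q = [[-0.1, 0.1],         # generator matrix of the hidden Markov chain
     [0.2, -0.2]]
t, t_end, state = 0.0, 1000.0, 0
arrivals = []

while t < t_end:
    # Competing exponentials: next arrival vs. next state switch.
    t_arr = random.expovariate(lam[state])
    t_sw = random.expovariate(-q[state][state])
    if t_arr < t_sw:
        t += t_arr
        if t < t_end:
            arrivals.append(t)    # record a bucket-tip time
    else:
        t += t_sw
        state = 1 - state         # hidden state switches
```

Fitting such a model by maximum likelihood, as the paper does, recovers `lam` and `q` from observed tip times; adding covariates amounts to letting `lam` vary with the weather variables.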
Incorporating Responsiveness to Marketing Efforts in Brand Choice Modeling
Directory of Open Access Journals (Sweden)
Dennis Fok
2014-02-01
We put forward a brand choice model with unobserved heterogeneity that concerns responsiveness to marketing efforts. We introduce two latent segments of households. The first segment is assumed to respond to marketing efforts, while households in the second segment do not do so. Whether a specific household is a member of the first or the second segment at a specific purchase occasion is described by household-specific characteristics and characteristics concerning buying behavior. Households may switch between the two responsiveness states over time. When comparing the performance of our model with alternative choice models that account for various forms of heterogeneity for three different datasets, we find better face validity for our parameters. Our model also forecasts better.
Ground-state ordering of the J1-J2 model on the simple cubic and body-centered cubic lattices
Farnell, D. J. J.; Götze, O.; Richter, J.
2016-06-01
The J1-J2 Heisenberg model is a "canonical" model in the field of quantum magnetism in order to study the interplay between frustration and quantum fluctuations as well as quantum phase transitions driven by frustration. Here we apply the coupled cluster method (CCM) to study the spin-half J1-J2 model with antiferromagnetic nearest-neighbor bonds J1>0 and next-nearest-neighbor bonds J2>0 for the simple cubic (sc) and body-centered cubic (bcc) lattices. In particular, we wish to study the ground-state ordering of these systems as a function of the frustration parameter p =z2J2/z1J1 , where z1 (z2) is the number of nearest (next-nearest) neighbors. We wish to determine the positions of the phase transitions using the CCM and we aim to resolve the nature of the phase transition points. We consider the ground-state energy, order parameters, spin-spin correlation functions, as well as the spin stiffness in order to determine the ground-state phase diagrams of these models. We find a direct first-order phase transition at a value of p =0.528 from a state of nearest-neighbor Néel order to next-nearest-neighbor Néel order for the bcc lattice. For the sc lattice the situation is more subtle. CCM results for the energy, the order parameter, the spin-spin correlation functions, and the spin stiffness indicate that there is no direct first-order transition between ground-state phases with magnetic long-range order; rather, it is more likely that two phases with antiferromagnetic long-range order are separated by a narrow region of a spin-liquid-like quantum phase around p =0.55 . Thus the strong frustration present in the J1-J2 Heisenberg model on the sc lattice may open a window for an unconventional quantum ground state in this three-dimensional spin model.
Modeling returns volatility: Realized GARCH incorporating realized risk measure
Jiang, Wei; Ruan, Qingsong; Li, Jianfeng; Li, Ye
2018-06-01
This study applies realized GARCH models by introducing several risk measures of intraday returns into the measurement equation, to model the daily volatility of E-mini S&P 500 index futures returns. Besides using the conventional realized measures, realized volatility and realized kernel, as our benchmarks, we also use generalized realized risk measures: realized absolute deviation, and two realized tail risk measures, realized value-at-risk and realized expected shortfall. The empirical results show that realized GARCH models using the generalized realized risk measures provide better volatility estimation in-sample and substantial improvement in volatility forecasting out-of-sample. In particular, the realized expected shortfall performs best among all of the alternative realized measures. Our empirical results reveal that future volatility may be more attributable to present losses (risk measures). The results are robust to different sample estimation windows.
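The realized measures named above can be sketched from one day of intraday returns as follows (synthetic data and a 5% tail level are assumptions; the realized-GARCH measurement equation itself is omitted):

```python
import numpy as np

# One synthetic trading session of 390 one-minute returns.
rng = np.random.default_rng(7)
r = rng.normal(0.0, 0.001, size=390)

realized_vol = np.sqrt(np.sum(r ** 2))       # realized volatility
realized_abs_dev = np.sum(np.abs(r))         # realized absolute deviation
alpha = 0.05                                 # tail level (assumed)
realized_var = -np.quantile(r, alpha)        # realized value-at-risk
tail = r[r <= np.quantile(r, alpha)]         # worst 5% of intraday returns
realized_es = -tail.mean()                   # realized expected shortfall
```

In a realized GARCH setting, any of these daily measures can serve as the observable in the measurement equation linking latent conditional variance to intraday data.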
Incorporating pushing in exclusion-process models of cell migration.
Yates, Christian A; Parker, Andrew; Baker, Ruth E
2015-05-01
The macroscale movement behavior of a wide range of isolated migrating cells has been well characterized experimentally. Recently, attention has turned to understanding the behavior of cells in crowded environments. In such scenarios it is possible for cells to interact, inducing neighboring cells to move in order to make room for their own movements or progeny. Although the behavior of interacting cells has been modeled extensively through volume-exclusion processes, few models, thus far, have explicitly accounted for the ability of cells to actively displace each other in order to create space for themselves. In this work we consider both on- and off-lattice volume-exclusion position-jump processes in which cells are explicitly allowed to induce movements in their near neighbors in order to create space for themselves to move or proliferate into. We refer to this behavior as pushing. From these simple individual-level representations we derive continuum partial differential equations for the average occupancy of the domain. We find that, for limited amounts of pushing, comparison between the averaged individual-level simulations and the population-level model is nearly as good as in the scenario without pushing. Interestingly, we find that, in the on-lattice case, the diffusion coefficient of the population-level model is increased by pushing, whereas, for the particular off-lattice model that we investigate, the diffusion coefficient is reduced. We conclude, therefore, that it is important to consider carefully the appropriate individual-level model to use when representing complex cell-cell interactions such as pushing.
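A minimal 1D on-lattice sketch of the pushing mechanism described above (hypothetical parameters; the paper treats richer on- and off-lattice processes, including proliferation):

```python
import random

# Agents occupy sites on a 1D ring.  An agent that attempts to hop onto
# an occupied site pushes that occupant one site further along with
# probability p_push, provided the site beyond is empty.
random.seed(3)
L, n_agents, p_push, steps = 100, 40, 0.5, 5000
occ = [False] * L
for i in random.sample(range(L), n_agents):
    occ[i] = True

for _ in range(steps):
    i = random.randrange(L)
    if not occ[i]:
        continue
    d = random.choice([-1, 1])
    j = (i + d) % L
    if not occ[j]:
        occ[i], occ[j] = False, True           # ordinary move
    elif random.random() < p_push:
        k = (j + d) % L                        # site beyond the neighbor
        if not occ[k]:
            occ[j], occ[k] = False, True       # neighbor is pushed on...
            occ[i], occ[j] = False, True       # ...and the agent follows
```

Averaging many such simulations and comparing against a diffusion PDE is the individual-to-population comparison the abstract describes; the pushing branch is what modifies the effective diffusion coefficient.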
Incorporating spiritual beliefs into a cognitive model of worry.
Rosmarin, David H; Pirutinsky, Steven; Auerbach, Randy P; Björgvinsson, Thröstur; Bigda-Peyton, Joseph; Andersson, Gerhard; Pargament, Kenneth I; Krumrei, Elizabeth J
2011-07-01
Cognitive theory and research have traditionally highlighted the relevance of the core beliefs about oneself, the world, and the future to human emotions. For some individuals, however, core beliefs may also explicitly involve spiritual themes. In this article, we propose a cognitive model of worry, in which positive/negative beliefs about the Divine affect symptoms through the mechanism of intolerance of uncertainty. Using mediation analyses, we found support for our model across two studies, in particular, with regards to negative spiritual beliefs. These findings highlight the importance of assessing for spiritual alongside secular convictions when creating cognitive-behavioral case formulations in the treatment of religious individuals. © 2011 Wiley Periodicals, Inc.
Modelling toluene oxidation: Incorporation of mass transfer phenomena
Hoorn, J.A.A.; van Soolingen, J.; Versteeg, G. F.
The kinetics of the oxidation of toluene have been studied in close interaction with the gas-liquid mass transfer occurring in the reactor. Kinetic parameters for a simple model have been estimated on the basis of experimental observations performed under industrial conditions. The conclusions for the
Incorporating pion effects into the naive quark model
International Nuclear Information System (INIS)
Nogami, Y.; Ohtuska, N.
1982-01-01
A hybrid of the naive nonrelativistic quark model and the Chew-Low model is proposed. The pion is treated as an elementary particle which interacts with the "bare baryon" or "baryon core" via the Chew-Low interaction. The baryon core, which is the source of the pion interaction, is described by the naive nonrelativistic quark model. It turns out that the baryon-core radius has to be as large as 0.8 fm, and consequently the cutoff momentum Λ for the pion interaction is ≲ 3m_π, m_π being the pion mass. Because of this small Λ (compared with Λ ≈ nucleon mass in the old Chew-Low model) the effects of the pion cloud are strongly suppressed. The baryon masses, baryon magnetic moments, and the nucleon charge radii can be reproduced quite well. However, we found it singularly difficult to fit the axial-vector weak decay constant g_A.
Do Knowledge-Component Models Need to Incorporate Representational Competencies?
Rau, Martina Angela
2017-01-01
Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…
Acoustic modeling for emotion recognition
Anne, Koteswara Rao; Vankayalapati, Hima Deepthi
2015-01-01
This book presents state-of-the-art research in speech emotion recognition. Readers are first presented with basic research and applications; gradually more advanced information is provided, giving readers comprehensive guidance for classifying emotions through speech. Simulated databases are used and results extensively compared, with the features and the algorithms implemented using MATLAB. Various emotion recognition models such as Linear Discriminant Analysis (LDA), Regularized Discriminant Analysis (RDA), Support Vector Machines (SVM), and K-Nearest Neighbor (KNN) are explored in detail using prosody and spectral features, and feature fusion techniques.
Denys Yemshanov; Frank H Koch; Mark Ducey
2015-01-01
Uncertainty is inherent in model-based forecasts of ecological invasions. In this chapter, we explore how the perceptions of that uncertainty can be incorporated into the pest risk assessment process. Uncertainty changes a decision maker's perceptions of risk; therefore, the direct incorporation of uncertainty may provide a more appropriate depiction of risk. Our...
Workforce scheduling: A new model incorporating human factors
Directory of Open Access Journals (Sweden)
Mohammed Othman
2012-12-01
Purpose: The majority of a company's improvement comes when the right workers with the right skills, behaviors and capacities are deployed appropriately throughout the company. This paper considers a workforce scheduling model including human aspects such as skills, training, workers' personalities, workers' breaks, and workers' fatigue and recovery levels. The model helps to minimize the hiring, firing, training and overtime costs; minimize the number of fired workers with high performance; minimize the break time; and minimize the average worker's fatigue level. Design/methodology/approach: To achieve this objective, a multi-objective mixed integer programming model is developed to determine the amount of hiring, firing, training and overtime for each worker type. Findings: The results indicate that worker differences should be considered in workforce scheduling to generate realistic plans with minimum costs. This paper also investigates the effects of human fatigue and recovery on the performance of production systems. Research limitations/implications: Some assumptions might affect the accuracy of the model, such as the assumption of certain demand in each period and the linearity of the fatigue accumulation and recovery curves. These assumptions can be relaxed in future work. Originality/value: A new model for integrating workers' differences with workforce scheduling is proposed. To the authors' knowledge, this is the first study of the effects of important human factors such as personality, skills, and fatigue and recovery in the workforce scheduling process. This research shows that considering both technical and human factors together can reduce costs in manufacturing systems and ensure the safety of workers.
Incorporating grassland management in a global vegetation model
Chang, Jinfeng; Viovy, Nicolas; Vuichard, Nicolas; Ciais, Philippe; Wang, Tao; Cozic, Anne; Lardy, Romain; Graux, Anne-Isabelle; Klumpp, Katja; Martin, Raphael; Soussana, Jean-François
2013-04-01
Grassland is a widespread vegetation type, covering nearly one-fifth of the world's land surface (24 million km2) and playing a significant role in the global carbon (C) cycle. Most grasslands in Europe are cultivated to feed animals, either directly by grazing or indirectly by grass harvest (cutting). A better understanding of the C fluxes from grassland ecosystems in response to climate and management requires not only field experiments but also the aid of simulation models. The ORCHIDEE process-based ecosystem model, designed for large-scale applications, treats grasslands as unmanaged, with C / water fluxes subject only to atmospheric CO2 and climate changes. Our study describes how management of grasslands is included in ORCHIDEE and how management affects modeled grassland-atmosphere CO2 fluxes. The new model, ORCHIDEE-GM (Grassland Management), is equipped with a management module inspired by a grassland model (PaSim, version 5.0) and accounts for two grassland management practices (cutting and grazing). The evaluation of ORCHIDEE-GM against ORCHIDEE at 11 European sites equipped with eddy covariance and biometric measurements shows that ORCHIDEE-GM can realistically capture the cut-induced seasonal variation in biometric variables (LAI: Leaf Area Index; AGB: Aboveground Biomass) and in CO2 fluxes (GPP: Gross Primary Productivity; TER: Total Ecosystem Respiration; and NEE: Net Ecosystem Exchange). Improvements at grazing sites are only marginal in ORCHIDEE-GM, which relates to the difficulty of accounting for continuous grazing disturbance and the complex animal-vegetation interactions it induces. Both NEE and GPP on monthly to annual timescales are better simulated in ORCHIDEE-GM than in ORCHIDEE without management. At some sites, the model-observation misfit in ORCHIDEE-GM is found to be more related to ill-constrained parameter values than to model structure. Additionally, ORCHIDEE-GM is able to simulate
Incorporating Satellite Time-Series Data into Modeling
Gregg, Watson
2008-01-01
In situ time series observations have provided a multi-decadal view of long-term changes in ocean biology. These observations are sufficiently reliable to enable discernment of even relatively small changes, and provide continuous information on a host of variables. Their key drawback is their limited domain. Satellite observations from ocean color sensors do not suffer the drawback of domain, and simultaneously view the global oceans. This attribute lends credence to their use in global and regional model validation and data assimilation. We focus on these applications using the NASA Ocean Biogeochemical Model. The enhancement of the satellite data using data assimilation is featured and the limitation of long-term satellite data sets is also discussed.
Incorporating Contagion in Portfolio Credit Risk Models Using Network Theory
Directory of Open Access Journals (Sweden)
Ioannis Anagnostou
2018-01-01
Portfolio credit risk models estimate the range of potential losses due to defaults or deteriorations in credit quality. Most of these models perceive default correlation as fully captured by the dependence on a set of common underlying risk factors. In light of empirical evidence, the ability of such a conditional independence framework to accommodate for the occasional default clustering has been questioned repeatedly. Thus, financial institutions have relied on stressed correlations or alternative copulas with more extreme tail dependence. In this paper, we propose a different remedy—augmenting systematic risk factors with a contagious default mechanism which affects the entire universe of credits. We construct credit stress propagation networks and calibrate contagion parameters for infectious defaults. The resulting framework is implemented on synthetic test portfolios wherein the contagion effect is shown to have a significant impact on the tails of the loss distributions.
Incorporation of intraocular scattering in schematic eye models
International Nuclear Information System (INIS)
Navarro, R.
1985-01-01
Beckmann's theory of scattering from rough surfaces is applied to obtain, from the experimental veiling glare functions, a diffuser that when placed at the pupil plane would produce the same scattering halo as the ocular media. This equivalent diffuser is introduced in a schematic eye model, and its influence on the point-spread function and the modulation-transfer function of the eye is analyzed
Constitutive modeling of coronary artery bypass graft with incorporated torsion
Czech Academy of Sciences Publication Activity Database
Horný, L.; Chlup, Hynek; Žitný, R.; Adámek, T.
2009-01-01
Roč. 49, č. 2 (2009), s. 273-277 ISSN 0543-5846 R&D Projects: GA ČR(CZ) GA106/08/0557 Institutional research plan: CEZ:AV0Z20760514 Keywords : coronary artery bypass graft * constitutive model * digital image correlation Subject RIV: BJ - Thermodynamics Impact factor: 0.439, year: 2009 http://web.tuke.sk/sjf-kamam/mmams2009/contents.pdf
Incorporation of ice sheet models into an Earth system model: Focus on methodology of coupling
Rybak, Oleg; Volodin, Evgeny; Morozova, Polina; Nevecherja, Artiom
2018-03-01
Elaboration of a modern Earth system model (ESM) requires incorporation of ice sheet dynamics. Coupling of an ice sheet model (ICM) to an AOGCM is complicated by essential differences in spatial and temporal scales of cryospheric, atmospheric and oceanic components. To overcome this difficulty, we apply two different approaches for the incorporation of ice sheets into an ESM. Coupling of the Antarctic ice sheet model (AISM) to the AOGCM is accomplished via procedures of resampling, interpolation, and assigning to the AISM grid points annually averaged values of the air surface temperature and precipitation fields generated by the AOGCM. Surface melting, which takes place mainly on the margins of the Antarctic peninsula and on ice shelves fringing the continent, is currently ignored. The AISM returns anomalies of surface topography back to the AOGCM. To couple the Greenland ice sheet model (GrISM) to the AOGCM, we use a simple buffer energy- and water-balance model (EWBM-G) to account for orographically-driven precipitation and other sub-grid AOGCM-generated quantities. The output of the EWBM-G consists of surface mass balance and air surface temperature to force the GrISM, and freshwater run-off to force thermohaline circulation in the oceanic block of the AOGCM. Because the coupling procedure for the GrIS is rather complex compared to the AIS, the paper mostly focuses on Greenland.
Models of microbiome evolution incorporating host and microbial selection.
Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen
2017-09-25
Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong
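The acquisition and selection steps described above can be sketched in a few lines. This is a hedged toy, not the authors' agent-based framework: the helper names `offspring_microbiome` and `select_microbes`, the per-microbe acquisition rule, and the fitness-weighted resampling are all assumptions of this sketch.

```python
import random

def offspring_microbiome(parent, environment, n, p_parent):
    """Assemble an offspring's community of n microbes: each microbe is drawn
    from the parent's community with probability p_parent, otherwise from the
    environmental pool (the neutral acquisition step)."""
    return [random.choice(parent) if random.random() < p_parent
            else random.choice(environment)
            for _ in range(n)]

def select_microbes(community, fitness, n):
    """Trait-mediated microbial selection: resample the community with
    probability proportional to each microbe's fitness."""
    weights = [fitness[m] for m in community]
    return random.choices(community, weights=weights, k=n)
```

Chaining these two steps over host generations, with host fitness computed from the resulting communities, reproduces the overall structure of a forward-time host-microbiome simulation.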
Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models
Duffy, Stephen F.
1997-01-01
Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve, a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of its constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni
Incorporating particle creation and annihilation into Bohm's Pilot Wave model
Energy Technology Data Exchange (ETDEWEB)
Sverdlov, Roman [Raman Research Institute, C.V. Raman Avenue, Sadashiva Nagar, Bangalore, Karnataka, 560080 (India)
2011-07-08
The purpose of this paper is to come up with a Pilot Wave model of quantum field theory that incorporates particle creation and annihilation without sacrificing determinism; this theory is subsequently coupled with gravity.
INCORPORATION OF MECHANISTIC INFORMATION IN THE ARSENIC PBPK MODEL DEVELOPMENT PROCESS
INCORPORATING MECHANISTIC INSIGHTS IN A PBPK MODEL FOR ARSENIC. Elaina M. Kenyon, Michael F. Hughes, Marina V. Evans, David J. Thomas, U.S. EPA; Miroslav Styblo, University of North Carolina; Michael Easterling, Analytical Sciences, Inc. A physiologically based phar...
High-Strain Rate Failure Modeling Incorporating Shear Banding and Fracture
2017-11-22
The views, opinions and/or findings contained in this report are those of... Report as of 05-Dec-2017. Agreement Number: W911NF-13-1-0238. Organization: Columbia University.
Incorporating the life course model into MCH nutrition leadership education and training programs.
Haughton, Betsy; Eppig, Kristen; Looney, Shannon M; Cunningham-Sabo, Leslie; Spear, Bonnie A; Spence, Marsha; Stang, Jamie S
2013-01-01
Life course perspective, social determinants of health, and health equity have been combined into one comprehensive model, the life course model (LCM), for strategic planning by US Health Resources and Services Administration's Maternal and Child Health Bureau. The purpose of this project was to describe a faculty development process; identify strategies for incorporation of the LCM into nutrition leadership education and training at the graduate and professional levels; and suggest broader implications for training, research, and practice. Nineteen representatives from 6 MCHB-funded nutrition leadership education and training programs and 10 federal partners participated in a one-day session that began with an overview of the models and concluded with guided small group discussions on how to incorporate them into maternal and child health (MCH) leadership training using obesity as an example. Written notes from group discussions were compiled and coded emergently. Content analysis determined the most salient themes about incorporating the models into training. Four major LCM-related themes emerged, three of which were about training: (1) incorporation by training grants through LCM-framed coursework and experiences for trainees, and similarly framed continuing education and skills development for professionals; (2) incorporation through collaboration with other training programs and state and community partners, and through advocacy; and (3) incorporation by others at the federal and local levels through policy, political, and prevention efforts. The fourth theme focused on anticipated challenges of incorporating the model in training. Multiple methods for incorporating the LCM into MCH training and practice are warranted. Challenges to incorporating include the need for research and related policy development.
Design ensemble machine learning model for breast cancer diagnosis.
Hsieh, Sheau-Ling; Hsieh, Sung-Huai; Cheng, Po-Hsun; Chen, Chi-Huang; Hsu, Kai-Ping; Lee, I-Shun; Wang, Zhenyu; Lai, Feipei
2012-10-01
In this paper, we classify breast cancer from medical diagnostic data. Information gain has been adapted for feature selection. Neural fuzzy (NF), k-nearest neighbor (KNN), and quadratic classifier (QC) schemes have been developed for classification, both as single models and as their associated ensembles. In addition, a combined ensemble model of these three schemes has been constructed for further validation. The experimental results indicate that ensemble learning performs better than the individual single models. Moreover, the combined ensemble model achieves the highest classification accuracy for breast cancer among all models.
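The majority-vote ensemble idea can be sketched compactly. This is not the authors' implementation: the base members here are a plain Euclidean KNN at two values of k and a nearest-class-centroid classifier standing in for the quadratic classifier, all assumptions of this sketch.

```python
from collections import Counter
import math

def knn_predict(train, x, k=3):
    """k-nearest-neighbor vote; train is a list of (features, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def centroid_predict(train, x):
    """Nearest-class-centroid classifier (a simple stand-in base member)."""
    sums, counts = {}, {}
    for feats, label in train:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(feats))
        sums[label] = [a + b for a, b in zip(acc, feats)]
    centroids = {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}
    return min(centroids, key=lambda lab: math.dist(centroids[lab], x))

def ensemble_predict(train, x):
    """Majority vote over the heterogeneous base classifiers."""
    votes = [knn_predict(train, x, 1), knn_predict(train, x, 3),
             centroid_predict(train, x)]
    return Counter(votes).most_common(1)[0][0]
```

With three voters an odd ensemble size avoids ties on binary labels, which is one reason small heterogeneous ensembles are a common design choice.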
Stochastic epidemic-type model with enhanced connectivity: exact solution
International Nuclear Information System (INIS)
Williams, H T; Mazilu, I; Mazilu, D A
2012-01-01
We present an exact analytical solution to a one-dimensional model of the susceptible–infected–recovered (SIR) epidemic type, with infection rates dependent on nearest-neighbor occupations. We use a quantum mechanical approach, transforming the master equation via a quantum spin operator formulation. We calculate exactly the time-dependent density of infected, recovered and susceptible populations for random initial conditions. Our results compare well with those of previous work, validating the model as a useful tool for additional and extended studies in this important area. Our model also provides exact solutions for the n-point correlation functions, and can be extended to more complex epidemic-type models
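Exact results of this kind are often cross-checked against direct simulation. Below is a discrete-time Monte Carlo sketch of a 1D SIR lattice with infection depending on nearest-neighbor occupations; the synchronous update and the parameters `beta` and `gamma` are assumptions of this sketch, not the paper's master-equation formulation.

```python
import random

def sir_step(state, beta, gamma):
    """One synchronous update of a 1D SIR lattice (periodic boundaries).

    state: list of 'S'/'I'/'R' site labels.
    beta:  per-infected-neighbor infection probability per step.
    gamma: recovery probability per step.
    """
    L = len(state)
    new = list(state)
    for i, s in enumerate(state):
        if s == 'S':
            n_inf = sum(state[(i + d) % L] == 'I' for d in (-1, 1))
            # infection probability grows with the number of infected neighbors
            if n_inf and random.random() < 1 - (1 - beta) ** n_inf:
                new[i] = 'I'
        elif s == 'I' and random.random() < gamma:
            new[i] = 'R'
    return new
```

Averaging many such runs gives the time-dependent S/I/R densities that the exact solution predicts analytically.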
DEFF Research Database (Denmark)
Köster, Fritz; Hinrichsen, H.H.; St. John, Michael
2001-01-01
We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing survival of early life stages and varying systematically among spawning sites were incorporated into stock-recruitment models, first for major cod spawning sites and then combined for the entire Central Baltic. Variables identified included potential egg production by the spawning stock, abiotic conditions... cod in these areas, suggesting that key biotic and abiotic processes can be successfully incorporated into recruitment models.
Incorporation of the capillary hysteresis model HYSTR into the numerical code TOUGH
International Nuclear Information System (INIS)
Niemi, A.; Bodvarsson, G.S.; Pruess, K.
1991-11-01
As part of the work performed to model flow in the unsaturated zone at Yucca Mountain Nevada, a capillary hysteresis model has been developed. The computer program HYSTR has been developed to compute the hysteretic capillary pressure -- liquid saturation relationship through interpolation of tabulated data. The code can be easily incorporated into any numerical unsaturated flow simulator. A complete description of HYSTR, including a brief summary of the previous hysteresis literature, detailed description of the program, and instructions for its incorporation into a numerical simulator are given in the HYSTR user's manual (Niemi and Bodvarsson, 1991a). This report describes the incorporation of HYSTR into the numerical code TOUGH (Transport of Unsaturated Groundwater and Heat; Pruess, 1986). The changes made and procedures for the use of TOUGH for hysteresis modeling are documented
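The table-interpolation core of a code like HYSTR can be sketched in a few lines. This is a minimal stand-in, not the actual HYSTR program: the two-branch selection by wetting history and the constant end-point extrapolation are assumptions of this sketch.

```python
from bisect import bisect_left

def interp(table, s):
    """Piecewise-linear interpolation in a table of (saturation, Pc) pairs
    sorted by increasing saturation; constant extrapolation at the ends."""
    xs = [x for x, _ in table]
    if s <= xs[0]:
        return table[0][1]
    if s >= xs[-1]:
        return table[-1][1]
    j = bisect_left(xs, s)
    (x0, y0), (x1, y1) = table[j - 1], table[j]
    return y0 + (y1 - y0) * (s - x0) / (x1 - x0)

def capillary_pressure(s, branch, drainage, imbibition):
    """Hysteretic Pc(s): choose the tabulated branch from the wetting history."""
    return interp(drainage if branch == 'drainage' else imbibition, s)
```

A flow simulator would call `capillary_pressure` at each Newton iteration, switching `branch` whenever the saturation history reverses; that history tracking is the part a full hysteresis model adds on top of this lookup.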
Quantum decoration transformation for spin models
Energy Technology Data Exchange (ETDEWEB)
Braz, F.F.; Rodrigues, F.C.; Souza, S.M. de; Rojas, Onofre, E-mail: ors@dfi.ufla.br
2016-09-15
The extension of decoration transformations to quantum spin models is quite relevant, since most real materials are well described by Heisenberg-type models. Here we propose an exact quantum decoration transformation and show interesting properties such as the persistence of symmetry and symmetry breaking during the transformation. The proposed transformation cannot, in principle, be used to map a quantum spin lattice model exactly onto another quantum spin lattice model, since the operators are non-commutative. However, the mapping is possible in the "classical" limit, establishing an equivalence between the two quantum spin lattice models. To study the validity of this approach, we use the Zassenhaus formula and verify how the correction terms influence the decoration transformation. This correction is of limited use for improving the quantum decoration transformation, because it involves second- and further-nearest-neighbor couplings, which makes establishing the equivalence between the two lattice models a cumbersome task. The correction nevertheless provides valuable information about its own contribution: for most Heisenberg-type models it is irrelevant at least up to the third-order term of the Zassenhaus formula. The transformation is applied to a finite-size Heisenberg chain and compared with exact numerical results; our result is consistent for weak xy-anisotropy coupling. We also apply it to a bond-alternating Ising–Heisenberg chain model, obtaining an accurate result in the limit of the quasi-Ising chain.
On Models with Uncountable Set of Spin Values on a Cayley Tree: Integral Equations
International Nuclear Information System (INIS)
Rozikov, Utkir A.; Eshkobilov, Yusup Kh.
2010-01-01
We consider models with nearest-neighbor interactions and with the set [0, 1] of spin values, on a Cayley tree of order k ≥ 1. We reduce the problem of describing the 'splitting Gibbs measures' of the model to the description of the solutions of some nonlinear integral equation. For k = 1 we show that the integral equation has a unique solution. In the case k ≥ 2 some models (with the set [0, 1] of spin values) which have a unique splitting Gibbs measure are constructed. Also for the Potts model with an uncountable set of spin values it is proven that there is a unique splitting Gibbs measure.
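The reduction to a nonlinear integral equation invites a numerical sketch. The kernel below is purely illustrative (the paper's actual equation depends on the Hamiltonian), and the normalized fixed-point iteration is an assumption of this sketch; under those assumptions, solving a k = 1-type equation f(t) = C ∫₀¹ K(t, u) f(u) du on a grid might look like:

```python
import math

def solve_gibbs_fixed_point(K, n=101, tol=1e-12, max_iter=1000):
    """Solve f(t) = C * integral_0^1 K(t, u) f(u) du on a uniform grid by
    normalized fixed-point iteration (trapezoid rule); the constant C is
    absorbed by keeping the integral of f equal to 1 at every step."""
    h = 1.0 / (n - 1)
    grid = [i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2                 # trapezoid-rule weights
    f = [1.0] * n                        # start from the uniform density
    for _ in range(max_iter):
        g = [sum(wj * K(t, uj) * fj for wj, uj, fj in zip(w, grid, f))
             for t in grid]
        norm = sum(wi * gi for wi, gi in zip(w, g))
        g = [x / norm for x in g]        # renormalize: integral of f = 1
        if max(abs(a - b) for a, b in zip(f, g)) < tol:
            return grid, g
        f = g
    return grid, f
```

For a strictly positive kernel this iteration behaves like power iteration on a Perron-Frobenius operator, so convergence to the unique normalized fixed point is expected, mirroring the uniqueness statement for k = 1.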
Simulation of Forest Carbon Fluxes Using Model Incorporation and Data Assimilation
Min Yan; Xin Tian; Zengyuan Li; Erxue Chen; Xufeng Wang; Zongtao Han; Hong Sun
2016-01-01
This study improved the simulation of forest carbon fluxes in the Changbai Mountains with a process-based model (Biome-BGC) using model incorporation and data assimilation. First, the original remote sensing-based MODIS MOD_17 GPP (MOD_17) model was optimized using refined input data and biome-specific parameters. The key ecophysiological parameters of the Biome-BGC model were determined through Extended Fourier Amplitude Sensitivity Test (EFAST) sensitivity analysis. Then the optimized MOD_17 mo...
Intersite electron correlations in a Hubbard model on inhomogeneous lattices
International Nuclear Information System (INIS)
Takemori, Nayuta; Koga, Akihisa; Hafermann, Hartmut
2016-01-01
We study intersite electron correlations in the half-filled Hubbard model on square lattices with periodic and open boundary conditions by means of a real-space dual fermion approach. By calculating renormalization factors, we clarify that nearest-neighbor intersite correlations already significantly reduce the critical interaction. The Mott transition occurs at U/t ∼ 6.4, where U is the interaction strength and t is the hopping integral. This value is consistent with quantum Monte Carlo results. It shows the importance of short-range intersite correlations, which are taken into account in the framework of the real-space dual fermion approach. (paper)
Panda, Saswati; Sahoo, D. D.; Rout, G. C.
2018-04-01
We report here a tight-binding model for colossal magnetoresistive (CMR) manganites to study the pseudogap (PG) behavior near the Fermi level. In the Kubo-Ohata-type DE model, we consider first and second nearest-neighbor interactions for transverse spin fluctuations in the core band and hopping integrals in the conduction band, in the presence of static band Jahn-Teller distortion. The model Hamiltonian is solved using Zubarev's Green's function technique. The electron density of states (DOS) is obtained from the Green's functions. We observe a clear PG near the Fermi level in the electron DOS.
Incorporation of composite defects from ultrasonic NDE into CAD and FE models
Bingol, Onur Rauf; Schiefelbein, Bryan; Grandin, Robert J.; Holland, Stephen D.; Krishnamurthy, Adarsh
2017-02-01
Fiber-reinforced composites are widely used in the aerospace industry due to their combined properties of high strength and low weight. However, owing to their complex structure, it is difficult to assess the impact of manufacturing defects and service damage on their residual life. While ultrasonic testing (UT) is the preferred NDE method to identify the presence of defects in composites, there are no reasonable ways to model the damage and evaluate the structural integrity of composites. We have developed an automated framework to incorporate flaws and known composite damage automatically into a finite element analysis (FEA) model of composites, ultimately aiding in assessing the residual life of composites and making informed decisions regarding repairs. The framework can be used to generate a layer-by-layer 3D structural CAD model of the composite laminates replicating their manufacturing process. Outlines of structural defects, such as delaminations, are automatically detected from UT of the laminate and are incorporated into the CAD model between the appropriate layers. In addition, the framework allows for direct structural analysis of the resulting 3D CAD models with defects by automatically applying the appropriate boundary conditions. In this paper, we show a working proof-of-concept for the composite model builder with capabilities of incorporating delaminations between laminate layers and automatically preparing the CAD model for structural analysis using FEA software.
A lattice model for influenza spreading.
Directory of Open Access Journals (Sweden)
Antonella Liccardo
We construct a stochastic SIR model for influenza spreading on a D-dimensional lattice, which represents the dynamic contact network of individuals. An age-distributed population is placed on the lattice and moves on it. The displacement from a site to a nearest-neighbor empty site allows individuals to change the number and identities of their contacts. The dynamics on the lattice is governed by an attractive interaction between individuals belonging to the same age class. The parameters, which regulate the pattern dynamics, are fixed by fitting the data on the age-dependent daily contact numbers furnished by the Polymod survey. A simple SIR transmission model with a nearest-neighbor interaction and some very basic adaptive mobility restrictions completes the model. The model is validated against the age-distributed Italian epidemiological data for the influenza A(H1N1) during the [Formula: see text] season, with sensible predictions for the epidemiological parameters. For an appropriate topology of the lattice, we find that, whenever the accordance between the contact patterns of the model and the Polymod data is satisfactory, there is a good agreement between the numerical and the experimental epidemiological data. This result shows how rich the information encoded in the average contact patterns of individuals is with respect to the analysis of the epidemic spreading of an infectious disease.
Incorporating Social Anxiety Into a Model of College Problem Drinking: Replication and Extension
Ham, Lindsay S.; Hope, Debra A.
2006-01-01
Although research has found an association between social anxiety and alcohol use in noncollege samples, results have been mixed for college samples. College students face many novel social situations in which they may drink to reduce social anxiety. In the current study, the authors tested a model of college problem drinking, incorporating social anxiety and related psychosocial variables among 228 undergraduate volunteers. According to structural equation modeling (SEM) results, social anxi...
PWR plant operator training using a full-scope simulator incorporating the MAAP model
International Nuclear Information System (INIS)
Matsumoto, Y.; Tabuchi, T.; Yamashita, T.; Komatsu, Y.; Tsubouchi, K.; Banka, T.; Mochizuki, T.; Nishimura, K.; Iizuka, H.
2015-01-01
As part of its advanced training, NTC works to build understanding of plant behavior during core damage accidents. Following the Fukushima Daiichi Nuclear Power Station accident, we introduced the MAAP model into the PWR operator training full-scope simulator and also built the Severe Accident Visual Display unit. From 2014, we will introduce a new training program for core damage accidents using the PWR operator training full-scope simulator incorporating the MAAP model and the Severe Accident Visual Display unit. (author)
INCORPORATING MULTIPLE OBJECTIVES IN PLANNING MODELS OF LOW-RESOURCE FARMERS
Flinn, John C.; Jayasuriya, Sisira; Knight, C. Gregory
1980-01-01
Linear goal programming provides a means of formally incorporating the multiple goals of a household into the analysis of farming systems. Using this approach, the set of plans which come as close as possible to achieving a set of desired goals under conditions of land and cash scarcity are derived for a Filipino tenant farmer. A challenge in making LGP models empirically operational is the accurate definition of the goals of the farm household being modelled.
Modeling of the shape of infrared stimulated luminescence signals in feldspars
DEFF Research Database (Denmark)
Pagonis, Vasilis; Jain, Mayank; Murray, Andrew S.
2012-01-01
This paper presents a new empirical model describing infrared (IR) stimulation phenomena in feldspars. In the model electrons from the ground state of an electron trap are raised by infrared optical stimulation to the excited state, and subsequently recombine with a nearest-neighbor hole via...... corresponds to a fast rate of recombination processes taking place along the infrared stimulated luminescence (IRSL) curves. The subsequent decay of the simulated IRSL signal is characterized by a much slower recombination rate, which can be described by a power-law type of equation.Several simulations...
DEFF Research Database (Denmark)
Marinakis, Yannis; Dounias, Georgios; Jantzen, Jan
2009-01-01
The term pap-smear refers to samples of human cells stained by the so-called Papanicolaou method. The purpose of the Papanicolaou method is to diagnose pre-cancerous cell changes before they progress to invasive carcinoma. In this paper a metaheuristic algorithm is proposed in order to classify t...... other previously applied intelligent approaches....
Directory of Open Access Journals (Sweden)
Fittria Shofrotun Ni'mah
2018-03-01
Full Text Available Medicinal plants can be used as an alternative natural treatment, instead of chemical drugs. But because there are too many types of plants and a lack of knowledge, it can be difficult to identify these herbs. Computer assistance can be used to facilitate the identification of these herbs. This research proposes the identification of herbal plants based on leaf images using texture analysis. There are 10 types of herbal medicinal plants used in this study. The texture analysis used was GLCM, extracting contrast, correlation, energy, and homogeneity. Classification is done by KNN. The results of the experiment showed that the accuracy of identification using the 9-fold cross validation method was 83.33%, using 9 subsets.
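The GLCM-plus-KNN pipeline described above can be sketched as follows. The co-occurrence matrix (single horizontal offset), the four texture features, and the toy "leaf" images are simplified stand-ins for the paper's data and settings:

```python
import numpy as np

def glcm(img, levels=8):
    """Gray-level co-occurrence matrix for offset (0, 1), symmetric, normalized."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    P = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[i, j] += 1
        P[j, i] += 1          # symmetric counting
    return P / P.sum()

def texture_features(P):
    """Contrast, correlation, energy and homogeneity of a GLCM."""
    n = P.shape[0]
    i, j = np.mgrid[0:n, 0:n]
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * P).sum())
    contrast = ((i - j) ** 2 * P).sum()
    correlation = ((i - mu_i) * (j - mu_j) * P).sum() / (si * sj + 1e-12)
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, correlation, energy, homogeneity])

def knn_predict(X_train, y_train, x, k=3):
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Toy data: smooth-textured vs noisy-textured "leaves" (labels 0 and 1).
rng = np.random.default_rng(0)
smooth = [np.outer(np.linspace(0, 1, 16), np.ones(16)) for _ in range(5)]
noisy = [rng.random((16, 16)) for _ in range(5)]
X = np.array([texture_features(glcm(im)) for im in smooth + noisy])
y = np.array([0] * 5 + [1] * 5)
print(knn_predict(X[:-1], y[:-1], X[-1]))   # classify the held-out noisy sample
```

In practice a library implementation (e.g. scikit-image's GLCM routines) with multiple offsets and angles, plus k-fold cross validation, would replace this hand-rolled version.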
Nearest-neighbor Kitaev exchange blocked by charge order in electron doped $\\alpha$-RuCl$_{3}$
Koitzsch, A.; Habenicht, C.; Mueller, E.; Knupfer, M.; Buechner, B.; Kretschmer, S.; Richter, M.; Brink, J. van den; Boerrnert, F.; Nowak, D.; Isaeva, A.; Doert, Th.
2017-01-01
A quantum spin-liquid might be realized in $\\alpha$-RuCl$_{3}$, a honeycomb-lattice magnetic material with substantial spin-orbit coupling. Moreover, $\\alpha$-RuCl$_{3}$ is a Mott insulator, which implies the possibility that novel exotic phases occur upon doping. Here, we study the electronic structure of this material when intercalated with potassium by photoemission spectroscopy, electron energy loss spectroscopy, and density functional theory calculations. We obtain a stable stoichiometry...
A climatological model for risk computations incorporating site-specific dry deposition influences
International Nuclear Information System (INIS)
Droppo, J.G. Jr.
1991-07-01
A gradient-flux dry deposition module was developed for use in a climatological atmospheric transport model, the Multimedia Environmental Pollutant Assessment System (MEPAS). The atmospheric pathway model computes long-term average contaminant air concentration and surface deposition patterns surrounding a potential release site, incorporating location-specific dry deposition influences. Gradient-flux formulations are used to incorporate site and regional data in the dry deposition module for this atmospheric sector-average climatological model. Application of these formulations provides an effective means of accounting for local surface roughness in deposition computations. Linkage to a risk computation module resulted in a need for separate regional and specific surface deposition computations. 13 refs., 4 figs., 2 tabs
Band structure and orbital character of monolayer MoS2 with eleven-band tight-binding model
Shahriari, Majid; Ghalambor Dezfuli, Abdolmohammad; Sabaeian, Mohammad
2018-02-01
In this paper, based on a tight-binding (TB) model, we first present calculations of the eigenvalues as the band structure, and then present the eigenvectors as probability amplitudes for finding an electron in atomic orbitals, for monolayer MoS2 in the first Brillouin zone. In these calculations we consider hopping processes between the nearest-neighbor Mo-S, the next-nearest-neighbor in-plane Mo-Mo, and the next-nearest-neighbor in-plane and out-of-plane S-S atoms in a three-atom unit cell of two-dimensional rhombic MoS2. The hopping integrals have been solved in terms of Slater-Koster and crystal-field parameters. These parameters are calculated by comparing the TB model with density functional theory (DFT) at the high-symmetry k-points (i.e. the K- and Γ-points). In our TB model all the 4d Mo orbitals and the 3p S orbitals are considered, and a detailed analysis of the orbital character of each energy level at the main high-symmetry points of the Brillouin zone is described. In comparison with DFT calculations, the results of our TB model show very good agreement for bands near the Fermi level. However, for other bands far from the Fermi level, some discrepancies between our TB model and the DFT calculations are observed. Given accurate Slater-Koster and crystal-field parameters, and in contrast to DFT, our model provides enough accuracy to calculate all allowed transitions between energy bands, which are crucial for investigating the linear and nonlinear optical properties of monolayer MoS2.
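The mechanics of such a calculation, building a k-dependent Bloch Hamiltonian from hopping integrals and diagonalizing it at each k-point, can be shown with a deliberately tiny toy model. This is a generic 1D two-orbital chain with nearest-neighbor hopping, not the eleven-band MoS2 Hamiltonian; all numbers are illustrative:

```python
import numpy as np

t, e1, e2, t12 = 1.0, 0.0, 1.5, 0.3   # hoppings and onsite energies (toy values)
ks = np.linspace(-np.pi, np.pi, 201)  # sample the 1D Brillouin zone

bands = []
for k in ks:
    # 2x2 Bloch Hamiltonian: intra-orbital NN hopping on the diagonal,
    # a k-independent inter-orbital coupling off the diagonal.
    H = np.array([[e1 - 2 * t * np.cos(k), t12],
                  [t12, e2 - 0.5 * np.cos(k)]])
    bands.append(np.linalg.eigvalsh(H))   # eigenvalues = band energies at k
bands = np.array(bands)

print(bands.shape, float(bands.min()), float(bands.max()))
```

The real MoS2 model does the same thing with an 11x11 Hamiltonian whose matrix elements are Slater-Koster combinations of the Mo 4d and S 3p orbitals, and the eigenvectors at each k give the orbital character of each band.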
Directory of Open Access Journals (Sweden)
Ismail eAdeniran
2013-07-01
Full Text Available Introduction: Genetic forms of the Short QT Syndrome (SQTS) arise due to cardiac ion channel mutations leading to accelerated ventricular repolarisation, arrhythmias and sudden cardiac death. Results from experimental and simulation studies suggest that changes to refractoriness and tissue vulnerability produce a substrate favourable to re-entry. Potential electromechanical consequences of the SQTS are less well understood. The aim of this study was to utilize electromechanically coupled human ventricle models to explore electromechanical consequences of the SQTS. Methods and results: The Rice et al. mechanical model was coupled to the ten Tusscher et al. ventricular cell model. Previously validated K+ channel formulations for SQT variants 1 and 3 were incorporated. Functional effects of the SQTS mutations on Ca2+ transients, sarcomere length shortening and contractile force at the single cell level were evaluated with and without the consideration of stretch-activated channel current (Isac. Without Isac, the SQTS mutations produced dramatic reductions in the amplitude of Ca2+ transients, sarcomere length shortening and contractile force. When Isac was incorporated, there was a considerable attenuation of the effects of SQTS-associated action potential shortening on Ca2+ transients, sarcomere shortening and contractile force. Single cell models were then incorporated into 3D human ventricular tissue models. The timing of maximum deformation was delayed in the SQTS setting compared to control. Conclusion: The incorporation of Isac appears to be an important consideration in modelling functional effects of SQT 1 and 3 mutations on cardiac electro-mechanical coupling. Whilst there is little evidence of profoundly impaired cardiac contractile function in SQTS patients, our 3D simulations correlate qualitatively with reported evidence for dissociation between ventricular repolarization and the end of mechanical systole.
Improving Watershed-Scale Hydrodynamic Models by Incorporating Synthetic 3D River Bathymetry Network
Dey, S.; Saksena, S.; Merwade, V.
2017-12-01
Digital Elevation Models (DEMs) have an incomplete representation of river bathymetry, which is critical for simulating river hydrodynamics in flood modeling. Generally, DEMs are augmented with field-collected bathymetry data, but such data are available only at individual reaches. Creating a hydrodynamic model covering an entire stream network in a basin requires bathymetry for all streams. This study extends a conceptual bathymetry model, the River Channel Morphology Model (RCMM), to estimate the bathymetry of an entire stream network from a DEM for application in hydrodynamic modeling. It is implemented for two large watersheds with different relief and land use characteristics: the coastal Guadalupe River basin in Texas, with flat terrain, and the relatively urban White River basin in Indiana, with more relief. After bathymetry incorporation, both watersheds are modeled using HEC-RAS (a 1D hydraulic model) and Interconnected Pond and Channel Routing (ICPR), a 2D integrated hydrologic and hydraulic model. A comparison of the streamflow estimated by ICPR at the outlets of the basins indicates that incorporating bathymetry influences streamflow estimates. The inundation maps show that bathymetry has a higher impact on the flat terrain of the Guadalupe River basin than on the White River basin.
Crase, Beth; Liedloff, Adam; Vesk, Peter A; Fukuda, Yusuke; Wintle, Brendan A
2014-08-01
Species distribution models (SDMs) are widely used to forecast changes in the spatial distributions of species and communities in response to climate change. However, spatial autocorrelation (SA) is rarely accounted for in these models, despite its ubiquity in broad-scale ecological data. While spatial autocorrelation in model residuals is known to result in biased parameter estimates and the inflation of type I errors, the influence of unmodeled SA on species' range forecasts is poorly understood. Here we quantify how accounting for SA in SDMs influences the magnitude of range shift forecasts produced by SDMs for multiple climate change scenarios. SDMs were fitted to simulated data with a known autocorrelation structure, and to field observations of three mangrove communities from northern Australia displaying strong spatial autocorrelation. Three modeling approaches were implemented: environment-only models (most frequently applied in species' range forecasts), and two approaches that incorporate SA; autologistic models and residuals autocovariate (RAC) models. Differences in forecasts among modeling approaches and climate scenarios were quantified. While all model predictions at the current time closely matched that of the actual current distribution of the mangrove communities, under the climate change scenarios environment-only models forecast substantially greater range shifts than models incorporating SA. Furthermore, the magnitude of these differences intensified with increasing increments of climate change across the scenarios. When models do not account for SA, forecasts of species' range shifts indicate more extreme impacts of climate change, compared to models that explicitly account for SA. Therefore, where biological or population processes induce substantial autocorrelation in the distribution of organisms, and this is not modeled, model predictions will be inaccurate. These results have global importance for conservation efforts as inaccurate
Making a difference: incorporating theories of autonomy into models of informed consent.
Delany, C
2008-09-01
Obtaining patients' informed consent is an ethical and legal obligation in healthcare practice. Whilst the law provides prescriptive rules and guidelines, ethical theories of autonomy provide moral foundations. Models of practice of consent have been developed in the bioethical literature to assist in understanding and integrating the ethical theory of autonomy and legal obligations into the clinical process of obtaining a patient's informed consent to treatment. To review four models of consent and analyse the way each model incorporates the ethical meaning of autonomy and how, as a consequence, they might change the actual communicative process of obtaining informed consent within clinical contexts. An iceberg framework of consent is used to conceptualise how ethical theories of autonomy are positioned beneath, and underpin, the above-surface, visible clinical communication, including associated legal guidelines and ethical rules. Each model of consent is critically reviewed from the perspective of how it might shape the process of informed consent. All four models would alter the process of obtaining consent. Two models provide structure and guidelines for the content and timing of obtaining patients' consent. The two other models rely on an attitudinal shift in clinicians; they provide ideas for consent by focusing on the underlying values, attitudes and meaning associated with the ethical meaning of autonomy. The paper concludes that models of practice that explicitly incorporate the underlying ethical meaning of autonomy as their basis provide less prescriptive, but more theoretically rich, guidance for healthcare communicative practices.
International Nuclear Information System (INIS)
Sotiralis, P.; Ventikos, N.P.; Hamann, R.; Golyshev, P.; Teixeira, A.P.
2016-01-01
This paper presents an approach that more adequately incorporates human factor considerations into quantitative risk analysis of ship operation. The focus is on the collision accident category, which is one of the main risk contributors in ship operation. The approach is based on the development of a Bayesian Network (BN) model that integrates elements from the Technique for Retrospective and Predictive Analysis of Cognitive Errors (TRACEr) and focuses on the calculation of the collision accident probability due to human error. The model takes into account the human performance in normal, abnormal and critical operational conditions and implements specific tasks derived from the analysis of the task errors leading to the collision accident category. A sensitivity analysis is performed to identify the most important contributors to human performance and ship collision. Finally, the model developed is applied to assess the collision risk of a feeder operating in Dover strait using the collision probability estimated by the developed BN model and an Event tree model for calculation of human, economic and environmental risks. - Highlights: • A collision risk model for the incorporation of human factors into quantitative risk analysis is proposed. • The model takes into account the human performance in different operational conditions leading to the collision. • The most important contributors to human performance and ship collision are identified. • The model developed is applied to assess the collision risk of a feeder operating in Dover strait.
Modeling fraud detection and the incorporation of forensic specialists in the audit process
DEFF Research Database (Denmark)
Sakalauskaite, Dominyka
Financial statement audits are still comparatively poor in fraud detection. Forensic specialists can play a significant role in increasing audit quality. In this paper, based on prior academic research, I develop a model of fraud detection and the incorporation of forensic specialists in the audit...... process. The intention of the model is to identify the reasons why the audit is weak in fraud detection and to provide the analytical framework to assess whether the incorporation of forensic specialists can help to improve it. The results show that such specialists can potentially improve the fraud...... detection in the audit, but might also cause some negative implications. Overall, even though fraud detection is one of the main topics in research there are very few studies done on the subject of how auditors co-operate with forensic specialists. Thus, the paper concludes with suggestions for further...
Directory of Open Access Journals (Sweden)
Anandakumari Chandrasekharan Sunil Sekhar
2016-05-01
Full Text Available Ultra-small gold nanoparticles incorporated in mesoporous silica thin films with accessible pore channels perpendicular to the substrate are prepared by a modified sol-gel method. The simple and easy spin-coating technique is applied here to make homogeneous thin films. Surface characterization using FESEM shows crack-free films with a perpendicular pore arrangement. The applicability of these thin films as catalysts, as well as a robust SERS-active substrate for model catalysis studies, is tested. Compared to a bare silica film, our gold-incorporated silica, GSM-23F, gave an enhancement factor of 10^3 for RhB with a 633 nm laser source. The reduction reaction of p-nitrophenol with sodium borohydride on our thin films shows a decrease in the peak intensity corresponding to the –NO2 group as time proceeds, confirming the catalytic activity. Such model surfaces can potentially bridge the material gap between a real catalytic system and surface science studies.
Exact ground-state phase diagrams for the spin-3/2 Blume-Emery-Griffiths model
International Nuclear Information System (INIS)
Canko, Osman; Keskin, Mustafa; Deviren, Bayram
2008-01-01
We have calculated the exact ground-state phase diagrams of the spin-3/2 Ising model using the method that was proposed and applied to the spin-1 Ising model by Dublenych (2005 Phys. Rev. B 71 012411). The calculated exact ground-state phase diagrams on the diatomic and triangular lattices with the nearest-neighbor (NN) interaction are presented in this paper. We have obtained seven and 15 topologically different ground-state phase diagrams for J>0 and J<0, respectively; the conditions for the existence of uniform and intermediate phases have also been found
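The basic construction behind such ground-state phase diagrams, minimizing a per-bond energy over all spin-3/2 sublattice pairs, can be sketched as below. The Blume-Emery-Griffiths-style Hamiltonian terms, their sign conventions, and the way the single-ion term is shared between the two bond sites are assumptions of this sketch, not taken from the paper:

```python
import itertools

SPINS = [1.5, 0.5, -0.5, -1.5]   # the four spin-3/2 states

def bond_energy(sa, sb, J, K, d_over_z):
    # One bond: bilinear (J) and biquadratic (K) exchange, plus the
    # single-ion term D/z distributed over the z bonds of each site.
    return -J * sa * sb - K * sa**2 * sb**2 + d_over_z * (sa**2 + sb**2)

def ground_state(J, K, d_over_z):
    """Minimize the per-bond energy over all (S_A, S_B) sublattice pairs."""
    return min(itertools.product(SPINS, repeat=2),
               key=lambda p: bond_energy(*p, J, K, d_over_z))

print(ground_state(1.0, 0.0, 0.0))    # ferromagnetic pair for J > 0
print(ground_state(-1.0, 0.0, 0.0))   # antiparallel pair for J < 0
```

Sweeping (J, K, D/z) over a grid and recording which pair minimizes the energy traces out the boundaries between the uniform and intermediate phases.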
Numerical study of the t-J model: Exact ground state and flux phases
International Nuclear Information System (INIS)
Hasegawa, Y.; Poilblanc, D.
1990-01-01
Strongly correlated 2D electrons described by the t-J model are investigated numerically. Exact ground states for one and two holes in a finite cluster with periodic boundary conditions are obtained using the Lanczos algorithm. The effects of Coulomb repulsion between holes on nearest-neighbor sites are taken into account. Commensurate flux phases are investigated for clusters of the same size. They are shown to be a good approximation for the ground state, especially at intermediate values of J/t. (author). 21 refs, 3 figs
The ground-state phase diagrams of the spin-3/2 Ising model
International Nuclear Information System (INIS)
Canko, Osman; Keskin, Mustafa
2003-01-01
The ground-state spin configurations are obtained for the spin-3/2 Ising model Hamiltonian with bilinear and biquadratic exchange interactions and a single-ion crystal field. The interactions are assumed to be only between nearest neighbors. The calculated ground-state phase diagrams are presented on diatomic lattices, such as the square, honeycomb and sc lattices, and on the triangular lattice in the (Δ/z|J|, K/|J|) and (H/z|J|, K/|J|) planes
Haddad, Tarek; Himes, Adam; Thompson, Laura; Irony, Telba; Nair, Rajesh
2017-01-01
Evaluation of medical devices via clinical trial is often a necessary step in the process of bringing a new product to market. In recent years, device manufacturers are increasingly using stochastic engineering models during the product development process. These models have the capability to simulate virtual patient outcomes. This article presents a novel method based on the power prior for augmenting a clinical trial using virtual patient data. To properly inform clinical evaluation, the virtual patient model must simulate the clinical outcome of interest, incorporating patient variability, as well as the uncertainty in the engineering model and in its input parameters. The number of virtual patients is controlled by a discount function which uses the similarity between modeled and observed data. This method is illustrated by a case study of cardiac lead fracture. Different discount functions are used to cover a wide range of scenarios in which the type I error rates and power vary for the same number of enrolled patients. Incorporation of engineering models as prior knowledge in a Bayesian clinical trial design can provide benefits of decreased sample size and trial length while still controlling type I error rate and power.
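The core idea, weighting virtual-patient data by a similarity-driven discount before folding it into the trial analysis, can be sketched for a simple event-rate endpoint. The specific discount function (a Weibull CDF applied to a two-sample test p-value) and all numbers are illustrative choices for this sketch, not the exact functions used in the article:

```python
import numpy as np
from scipy import stats

def discount_weight(observed, virtual):
    """Map observed/virtual similarity (a KS-test p-value) to a borrowing
    weight a0 in [0, 1] via a Weibull-CDF-shaped discount function."""
    p = stats.ks_2samp(observed, virtual).pvalue
    return float(stats.weibull_min.cdf(p, c=3.0, scale=0.5))

def augmented_posterior(obs_events, obs_n, virt_events, virt_n, a0, a=1.0, b=1.0):
    """Beta posterior for an event rate under a power prior with weight a0:
    virtual events count as a0 of a real event each."""
    return (a + obs_events + a0 * virt_events,
            b + (obs_n - obs_events) + a0 * (virt_n - virt_events))

rng = np.random.default_rng(1)
observed = rng.normal(0.0, 1.0, 80)    # e.g. an observed fracture-related metric
virtual = rng.normal(0.1, 1.0, 500)    # virtual patients from the engineering model
a0 = discount_weight(observed, virtual)
alpha, beta = augmented_posterior(8, 80, 45, 500, a0)
print(a0, alpha / (alpha + beta))      # borrowing weight and posterior mean rate
```

The closer the virtual data resemble the observed data, the larger a0 becomes and the more effective sample size the virtual patients contribute, which is how the design trades off sample size against type I error control.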
Directory of Open Access Journals (Sweden)
Wang Yanqing
2016-03-01
Full Text Available A good assignment of code reviewers can effectively utilize intellectual resources, assure code quality and improve programmers' skills in software development. However, little research on reviewer assignment for code review has been found. In this study, a code reviewer assignment model is created based on participants' preferences regarding review assignments. With a constraint on the smallest size of a review group, the model is optimized to maximize review outcomes and avoid the negative impact of a "mutual admiration society". This study shows that reviewer assignment strategies incorporating either the reviewers' preferences or the authors' preferences achieve much greater improvement than a random assignment. The strategy incorporating the authors' preference yields a higher improvement than the one incorporating the reviewers' preference. However, when the reviewers' and authors' preference matrices are merged, the improvement becomes moderate. The study indicates that the majority of the participants have a strong wish to work with reviewers and authors of the highest competence. If we want to satisfy the preferences of both reviewers and authors at the same time, the overall improvement of learning outcomes may not be the best.
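Preference-based assignment of this kind can be posed as an optimal assignment problem. The sketch below uses hypothetical 4x4 preference matrices and the Hungarian algorithm; the paper's actual model, with group-size constraints, is richer than this one-to-one version:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(7)
# Hypothetical scores, higher = stronger wish; rows are authors.
author_pref = rng.integers(0, 4, size=(4, 4))     # authors' scores for reviewers
reviewer_pref = rng.integers(0, 4, size=(4, 4))   # reviewers' scores for authors
merged = author_pref + reviewer_pref              # merge both preference matrices

def assign(pref):
    """One-to-one assignment maximizing total preference (minimize negation)."""
    rows, cols = linear_sum_assignment(-pref)
    return cols, int(pref[rows, cols].sum())

for name, m in [("authors-only", author_pref), ("merged", merged)]:
    cols, total = assign(m)
    print(name, cols, total)   # reviewer index per author, total satisfied preference
```

Optimizing the merged matrix guarantees the best combined satisfaction, but, echoing the abstract's finding, it generally satisfies each side less than optimizing that side's matrix alone would.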
International Nuclear Information System (INIS)
Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.
2006-01-01
Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
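The Markov-model side of this approach amounts to solving p(t) = p(0) e^(Qt) for a continuous-time Markov chain with generator Q. The three-state model and per-hour rates below are illustrative stand-ins for the feedwater level control system, and the DET/SAPHIRE integration step is not shown:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state reliability model of a digital controller:
# 0 = operational, 1 = degraded (e.g. sensor fault), 2 = failed (absorbing).
lam, mu, nu = 1e-3, 5e-4, 1e-4           # illustrative per-hour transition rates
Q = np.array([[-(lam + nu), lam,  nu ],
              [0.0,         -mu,  mu ],
              [0.0,          0.0, 0.0]])  # rows sum to zero; state 2 absorbs

p0 = np.array([1.0, 0.0, 0.0])            # start fully operational
for t in (100.0, 1000.0, 10000.0):        # mission times in hours
    p = p0 @ expm(Q * t)                  # transient state probabilities
    print(t, p.round(4))
```

Branch points of a dynamic event tree can then be placed at times where these state probabilities cross thresholds, which is the kind of output a static fault-tree PRA cannot produce directly.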
Global dynamics of a PDE model for Aedes aegypti mosquitoes incorporating female sexual preference
Parshad, Rana
2011-01-01
In this paper we study the long-time dynamics of a reaction-diffusion system describing the spread of Aedes aegypti mosquitoes, which are the primary cause of dengue infection. The system incorporates a control attempt via the sterile insect technique. The model incorporates female mosquitoes' sexual preference for wild males over sterile males. We show global existence of a strong solution for the system. We then derive uniform estimates to prove the existence of a global attractor in L^2(Ω) for the system. The attractor is shown to be L^∞(Ω) regular and to possess a state of extinction if the injection of sterile males is large enough. We also provide upper bounds on the Hausdorff and fractal dimensions of the attractor.
Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.
2015-12-01
The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, there are several areas of the country with sparse monitoring, both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 modeled estimates from Chemical Transport Models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Due to the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is employed to quantify the efficacy of these models through different metrics of model performance. Currently, evaluation is specific to only locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains. Error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, leading to error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data by themselves.
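A non-linear error correction of the kind described, fitted where monitors exist and applied to every grid cell, can be sketched with synthetic data. The quadratic form, the synthetic bias, and all values here are illustrative; they are not the regionalized correction actually developed in the study:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic paired data for one region: modeled (CMAQ-like) vs observed PM2.5,
# with a concentration-dependent bias plus noise.
modeled = rng.uniform(2.0, 30.0, 200)
observed = 0.8 * modeled + 0.02 * modeled**2 + rng.normal(0.0, 1.0, 200)

# Non-linear (quadratic) correction fitted at monitor locations...
coeffs = np.polyfit(modeled, observed, deg=2)

# ...then applied to modeled values at grid cells that have no monitor.
grid_cells = rng.uniform(2.0, 30.0, 1000)
corrected = np.polyval(coeffs, grid_cells)
print(coeffs.round(3))   # fitted correction coefficients for this region
```

Fitting a separate correction per region and concentration range, then feeding the corrected fields into BME as soft data, is the structure the abstract describes.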
Aryanpour, K.; Pickett, W. E.; Scalettar, R. T.
2006-01-01
We employ dynamical mean field theory (DMFT) with a Quantum Monte Carlo (QMC) atomic solver to investigate the finite temperature Mott transition in the Hubbard model with nearest-neighbor hopping on a triangular lattice at half-filling. We estimate the value of the critical interaction to be $U_c=12.0 \pm 0.5$ in units of the hopping amplitude $t$ through the evolution of the magnetic moment, spectral function, internal energy and specific heat as the interaction $U$ and temperature $T$ ...
Troy, Tara J.; Ines, Amor V. M.; Lall, Upmanu; Robertson, Andrew W.
2013-04-01
Large-scale hydrologic models, such as the Variable Infiltration Capacity (VIC) model, are used for a variety of studies, from drought monitoring to projecting the potential impact of climate change on the hydrologic cycle decades in advance. The majority of these models simulates the natural hydrological cycle and neglects the effects of human activities such as irrigation, which can result in streamflow withdrawals and increased evapotranspiration. In some parts of the world, these activities do not significantly affect the hydrologic cycle, but this is not the case in south Asia where irrigated agriculture has a large water footprint. To address this gap, we incorporate a crop growth model and irrigation model into the VIC model in order to simulate the impacts of irrigated and rainfed agriculture on the hydrologic cycle over south Asia (Indus, Ganges, and Brahmaputra basin and peninsular India). The crop growth model responds to climate signals, including temperature and water stress, to simulate the growth of maize, wheat, rice, and millet. For the primarily rainfed maize crop, the crop growth model shows good correlation with observed All-India yields (0.7) with lower correlations for the irrigated wheat and rice crops (0.4). The difference in correlation is because irrigation provides a buffer against climate conditions, so that rainfed crop growth is more tied to climate than irrigated crop growth. The irrigation water demands induce hydrologic water stress in significant parts of the region, particularly in the Indus, with the streamflow unable to meet the irrigation demands. Although rainfall can vary significantly in south Asia, we find that water scarcity is largely chronic due to the irrigation demands rather than being intermittent due to climate variability.
Vigeant, Michelle C.
Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different
A data-driven model for influenza transmission incorporating media effects.
Mitchell, Lewis; Ross, Joshua V
2016-10-01
Numerous studies have attempted to model the effect of mass media on the transmission of diseases such as influenza; however, quantitative data on media engagement has until recently been difficult to obtain. With the recent explosion of 'big data' coming from online social media and the like, large volumes of data on a population's engagement with mass media during an epidemic are becoming available to researchers. In this study, we combine an online dataset comprising millions of shared messages relating to influenza with traditional surveillance data on flu activity to suggest a functional form for the relationship between the two. Using this data, we present a simple deterministic model for influenza dynamics incorporating media effects, and show that such a model helps explain the dynamics of historical influenza outbreaks. Furthermore, through model selection we show that the proposed media function fits historical data better than other media functions proposed in earlier studies.
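A minimal sketch of a deterministic SIR model with a media-damped transmission rate, in the spirit of the abstract above. The exponential damping form beta0*exp(-k*M) and the assumption that media engagement M simply tracks current prevalence are illustrative stand-ins, not the media function actually fitted in the paper:

```python
import math

def simulate_sir_media(beta0=0.5, gamma=0.2, k=8.0, i0=1e-3,
                       days=120, dt=0.1):
    """Euler-integrate an SIR model whose transmission rate is damped
    by media engagement.  Both the damping form beta0*exp(-k*M) and
    the proxy M = current prevalence are assumptions of this sketch.
    Returns the peak infectious fraction."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        media = i                                # assumed proxy for media engagement
        beta = beta0 * math.exp(-k * media)      # media-damped transmission rate
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak
```

With k = 0 the model reduces to plain SIR; any positive k lowers the epidemic peak, which is the qualitative effect the paper quantifies against surveillance data.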
Towards a functional model of mental disorders incorporating the laws of thermodynamics.
Murray, George C; McKenzie, Karen
2013-05-01
The current paper presents the hypothesis that the understanding of mental disorders can be advanced by incorporating the laws of thermodynamics, specifically relating to energy conservation and energy transfer. These ideas, along with the introduction of the notion that entropic activities are symptomatic of inefficient energy transfer or disorder, were used to propose a model of understanding mental ill health as resulting from the interaction of entropy, capacity and work (environmental demands). The model was applied to Attention Deficit Hyperactivity Disorder, and was shown to be compatible with current thinking about this condition, as well as emerging models of mental disorders as complex networks. A key implication of the proposed model is that it argues that all mental disorders require a systemic functional approach, with the advantage that it offers a number of routes into the assessment, formulation and treatment for mental health problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
Incorporating ligament laxity in a finite element model for the upper cervical spine.
Lasswell, Timothy L; Cronin, Duane S; Medley, John B; Rasoulinejad, Parham
2017-11-01
Predicting physiological range of motion (ROM) using a finite element (FE) model of the upper cervical spine requires the incorporation of ligament laxity. The effect of ligament laxity can be observed only on a macro level of joint motion and is lost once ligaments have been dissected and preconditioned for experimental testing. As a result, although ligament laxity values are recognized to exist, specific values are not directly available in the literature for use in FE models. The purpose of the current study is to propose an optimization process that can be used to determine a set of ligament laxity values for upper cervical spine FE models. Furthermore, an FE model that includes ligament laxity is applied, and the resulting ROM values are compared with experimental data for physiological ROM, as well as experimental data for the increase in ROM when a Type II odontoid fracture is introduced. The upper cervical spine FE model was adapted from a 50th percentile male full-body model developed with the Global Human Body Models Consortium (GHBMC). FE modeling was performed in LS-DYNA, and LS-OPT (Livermore Software Technology Group) was used for ligament laxity optimization. Ordinate-based curve matching was used to minimize the mean squared error (MSE) between computed load-rotation curves and experimental load-rotation curves under flexion, extension, and axial rotation with pure moment loads from 0 to 3.5 Nm. Lateral bending was excluded from the optimization because the upper cervical spine was considered to be primarily responsible for flexion, extension, and axial rotation. Based on recommendations from the literature, four varying inputs representing laxity in select ligaments were optimized to minimize the MSE. Funding was provided by the Natural Sciences and Engineering Research Council of Canada, in part to support the work of one graduate student, as well as by GHBMC.
Incorporating microbiota data into epidemiologic models: examples from vaginal microbiota research.
van de Wijgert, Janneke H; Jespers, Vicky
2016-05-01
Next generation sequencing and quantitative polymerase chain reaction technologies are now widely available, and research incorporating these methods is growing exponentially. In the vaginal microbiota (VMB) field, most research to date has been descriptive. The purpose of this article is to provide an overview of different ways in which next generation sequencing and quantitative polymerase chain reaction data can be used to answer clinical epidemiologic research questions using examples from VMB research. We reviewed relevant methodological literature and VMB articles (published between 2008 and 2015) that incorporated these methodologies. VMB data have been analyzed using ecologic methods, methods that compare the presence or relative abundance of individual taxa or community compositions between different groups of women or sampling time points, and methods that first reduce the complexity of the data into a few variables followed by the incorporation of these variables into traditional biostatistical models. To make future VMB research more clinically relevant (such as studying associations between VMB compositions and clinical outcomes and the effects of interventions on the VMB), it is important that these methods are integrated with rigorous epidemiologic methods (such as appropriate study designs, sampling strategies, and adjustment for confounding). Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
A local non-parametric model for trade sign inference
Blazejewski, Adam; Coggins, Richard
2005-03-01
We investigate a regularity in market order submission strategies for 12 stocks with large market capitalization on the Australian Stock Exchange. The regularity is evidenced by a predictable relationship between the trade sign (trade initiator), size of the trade, and the contents of the limit order book before the trade. We demonstrate this predictability by developing an empirical inference model to classify trades into buyer-initiated and seller-initiated. The model employs a local non-parametric method, k-nearest neighbor, which in the past was used successfully for chaotic time series prediction. The k-nearest neighbor with three predictor variables achieves an average out-of-sample classification accuracy of 71.40%, compared to 63.32% for the linear logistic regression with seven predictor variables. The result suggests that a non-linear approach may produce a more parsimonious trade sign inference model with a higher out-of-sample classification accuracy. Furthermore, for most of our stocks the observed regularity in market order submissions seems to have a memory of at least 30 trading days.
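A minimal sketch of the k-nearest-neighbor vote used for trade sign inference above. The feature values and the plain Euclidean metric are illustrative assumptions; the paper's three predictor variables come from trade size and the limit order book:

```python
import math
from collections import Counter

def knn_trade_sign(train, query, k=3):
    """Classify a trade as buyer-initiated (+1) or seller-initiated (-1)
    by majority vote among its k nearest neighbors in predictor space.
    `train` is a list of (feature_vector, label) pairs; features are
    assumed pre-normalized so Euclidean distance is meaningful."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```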
Self-Organized Criticality in an Anisotropic Earthquake Model
Li, Bin-Quan; Wang, Sheng-Jun
2018-03-01
We have made an extensive numerical study of a modified version of the model proposed by Olami, Feder, and Christensen to describe earthquake behavior. Two situations were considered in this paper. In the first, the energy of an unstable site is redistributed to its nearest neighbors randomly, rather than evenly, and its own energy is reset to zero. In the second, the energy of an unstable site is redistributed to its nearest neighbors randomly while the site retains some energy for itself instead of resetting to zero. Different boundary conditions were considered as well. By analyzing the distribution of earthquake sizes, we found that self-organized criticality can be excited only in the conservative or approximately conservative case in both situations. Some evidence indicated that the critical exponents of both situations and of the original OFC model tend to the same value in the conservative case; the only difference is that the avalanche sizes in the original model are bigger. This result may be closer to the real world since, after all, crustal plates differ in size. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5
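A minimal sketch of the random-redistribution toppling rule described above. Periodic boundaries and a four-neighbor square lattice are assumptions of this illustration (the paper also examines open boundaries); with keep = 0 and periodic boundaries the rule is exactly conservative:

```python
import random

def relax(grid, L, threshold=1.0, keep=0.0, seed=0):
    """Relax an L x L lattice with periodic boundaries: every unstable
    site (energy >= threshold) keeps a fraction `keep` of its energy
    and splits the remainder randomly -- not evenly -- among its four
    nearest neighbors, which may topple in turn.  Returns the
    avalanche size (number of topplings)."""
    rng = random.Random(seed)
    size = 0
    stack = [(i, j) for i in range(L) for j in range(L)
             if grid[i][j] >= threshold]
    while stack:
        i, j = stack.pop()
        e = grid[i][j]
        if e < threshold:            # site may have been listed twice
            continue
        size += 1
        grid[i][j] = keep * e
        weights = [rng.random() for _ in range(4)]
        wsum = sum(weights)
        neighbors = [((i + 1) % L, j), ((i - 1) % L, j),
                     (i, (j + 1) % L), (i, (j - 1) % L)]
        for (x, y), w in zip(neighbors, weights):
            grid[x][y] += (1.0 - keep) * e * w / wsum
            if grid[x][y] >= threshold:
                stack.append((x, y))
    return size
```

Driving the lattice slowly and recording the returned avalanche sizes yields the size distribution analyzed in the paper.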
Shugar, Andrea
2017-04-01
Genetic counselors are trained health care professionals who effectively integrate both psychosocial counseling and information-giving into their practice. Preparing genetic counseling students for clinical practice is a challenging task, particularly when helping them develop effective and active counseling skills. Resistance to incorporating these skills may stem from decreased confidence, fear of causing harm or a lack of clarity of psycho-social goals. The author reflects on the personal challenges experienced in teaching genetic counselling students to work with psychological and social complexity, and proposes a Genetic Counseling Adaptation Continuum model and methodology to guide students in the use of advanced counseling skills.
Incorporation of detailed eye model into polygon-mesh versions of ICRP-110 reference phantoms.
Nguyen, Thang Tat; Yeom, Yeon Soo; Kim, Han Sung; Wang, Zhao Jun; Han, Min Cheol; Kim, Chan Hyeong; Lee, Jai Ki; Zankl, Maria; Petoussi-Henss, Nina; Bolch, Wesley E; Lee, Choonsik; Chung, Beom Sun
2015-11-21
The dose coefficients for the eye lens reported in ICRP Publication 116 (2010) were calculated using both a stylized model and the ICRP-110 reference phantoms, according to the type of radiation, energy, and irradiation geometry. To maintain consistency of lens dose assessment, in the present study we incorporated the ICRP-116 detailed eye model into the converted polygon-mesh (PM) version of the ICRP-110 reference phantoms. After the incorporation, the dose coefficients for the eye lens were calculated and compared with those of the ICRP-116 data. The results generally showed good agreement between the newly calculated lens dose coefficients and the values of ICRP Publication 116. Significant differences were found for some irradiation cases, due mainly to the use of different types of phantoms. Considering that the PM version of the ICRP-110 reference phantoms preserves the original topology of the ICRP-110 reference phantoms, it is believed that the PM version phantoms, along with the detailed eye model, provide more reliable and consistent dose coefficients for the eye lens.
Drzewiecki, Wojciech
2016-12-01
In this work, nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, the obtained results suggest the Cubist algorithm for Landsat-based mapping of imperviousness at single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of the individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time point assessments. In the case of imperviousness change assessment, the ensembles always outperformed single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
International Nuclear Information System (INIS)
Schick, W.C. Jr.; Milani, S.; Duncombe, E.
1980-03-01
A model has been devised for incorporating into the thermal feedback procedure of the PDQ few-group diffusion theory computer program the explicit calculation of depletion and temperature dependent fuel-rod shrinkage and swelling at each mesh point. The model determines the effect on reactivity of the change in hydrogen concentration caused by the variation in coolant channel area as the rods contract and expand. The calculation of fuel temperature, and hence of Doppler-broadened cross sections, is improved by correcting the heat transfer coefficient of the fuel-clad gap for the effects of clad creep, fuel densification and swelling, and release of fission-product gases into the gap. An approximate calculation of clad stress is also included in the model
Murphy, Kelly E.; Hall, Cameron L.; Maini, Philip K.; McCue, Scott W.; McElwain, D. L. Sean
2012-01-13
Fibroblasts and their activated phenotype, myofibroblasts, are the primary cell types involved in the contraction associated with dermal wound healing. Recent experimental evidence indicates that the transformation from fibroblasts to myofibroblasts involves two distinct processes: The cells are stimulated to change phenotype by the combined actions of transforming growth factor β (TGFβ) and mechanical tension. This observation indicates a need for a detailed exploration of the effect of the strong interactions between the mechanical changes and growth factors in dermal wound healing. We review the experimental findings in detail and develop a model of dermal wound healing that incorporates these phenomena. Our model includes the interactions between TGFβ and collagenase, providing a more biologically realistic form for the growth factor kinetics than those included in previous mechanochemical descriptions. A comparison is made between the model predictions and experimental data on human dermal wound healing and all the essential features are well matched. © 2012 Society for Mathematical Biology.
DEFF Research Database (Denmark)
Köster, Fritz; Hinrichsen, H.H.; St. John, Michael
2001-01-01
We investigate whether a process-oriented approach based on the results of field, laboratory, and modelling studies can be used to develop a stock-environment-recruitment model for Central Baltic cod (Gadus morhua). Based on exploratory statistical analysis, significant variables influencing...... affecting survival of eggs, predation by clupeids on eggs, larval transport, and cannibalism. Results showed that recruitment in the most important spawning area, the Bornholm Basin, during 1976-1995 was related to egg production; however, other factors affecting survival of the eggs (oxygen conditions......, predation) were also significant and when incorporated explained 69% of the variation in 0-group recruitment. In other spawning areas, variable hydrographic conditions did not allow for regular successful egg development. Hence, relatively simple models proved sufficient to predict recruitment of 0-group...
International Nuclear Information System (INIS)
Burr, G.W.; Harris, Todd L.; Babbitt, Wm. Randall; Jefferson, C. Michael
2004-01-01
We describe the incorporation of excitation-induced dephasing (EID) into the Maxwell-Bloch numerical simulation of photon echoes. At each time step of the usual numerical integration, stochastic frequency jumps of ions--caused by excitation of neighboring ions--are modeled by convolving each Bloch vector with the Bloch vectors of nearby frequency detunings. The width of this convolution kernel follows the instantaneous change in overall population, integrated over the simulated bandwidth. This approach is validated by extensive comparison against published and original experimental results. The enhanced numerical model is then used to investigate the accuracy of experiments designed to extrapolate to the intrinsic dephasing time T2 from data taken in the presence of EID. Such a modeling capability offers improved understanding of experimental results, and should allow quantitative analysis of engineering tradeoffs in realistic optical coherent transient applications.
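A minimal sketch of the per-time-step broadening step described above: one Bloch-vector component, sampled on a detuning grid, is convolved with a normalized kernel. The circular wrap-around and the fixed triangular kernel are simplifications of this illustration; in the paper the kernel width is set each step from the integrated population change:

```python
def eid_broaden(component, kernel):
    """Circularly convolve one Bloch-vector component, sampled across
    the detuning grid, with a normalized broadening kernel.  A
    normalized kernel conserves the summed component while smearing
    it over neighboring detunings."""
    n, m = len(component), len(kernel)
    half = m // 2
    out = [0.0] * n
    for i in range(n):
        for k in range(m):
            out[i] += kernel[k] * component[(i + k - half) % n]
    return out
```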
Directory of Open Access Journals (Sweden)
Alexander Andrason
2015-12-01
The present paper demonstrates that insights from the affordances perspective can contribute to developing a more comprehensive model of grammaticalization. The authors argue that the grammaticalization process is afforded differently depending on the values of three contributing parameters: the factor (schematized as a qualitative-quantitative map or wave of a gram), the environment (understood as the structure of the stream along which the gram travels), and the actor (narrowed to certain cognitive-epistemological capacities of the users, in particular to the fact of being a native speaker). By relating grammaticalization to these three parameters and by connecting it to the theory of optimization, the proposed model offers a better approximation to realistic cases of grammaticalization: the actor and environment are overtly incorporated into the model, and divergences from canonical grammaticalization paths are both tolerated and explicable.
Incorporating Yearly Derived Winter Wheat Maps Into Winter Wheat Yield Forecasting Model
Skakun, S.; Franch, B.; Roger, J.-C.; Vermote, E.; Becker-Reshef, I.; Justice, C.; Santamaría-Artigas, A.
2016-01-01
Wheat is one of the most important cereal crops in the world. Timely and accurate forecasts of wheat yield and production at the global scale are vital in implementing food security policy. Becker-Reshef et al. (2010) developed a generalized empirical model for forecasting winter wheat production using remote sensing data and official statistics. This model was implemented using static wheat maps. In this paper, we analyze the impact of incorporating yearly wheat masks into the forecasting model. We propose a new approach for producing in-season winter wheat maps, exploiting satellite data and official statistics on crop area only. Validation on independent data showed that the proposed approach reached omission errors of 6% to 23% and commission errors of 10% to 16% when mapping winter wheat 2-3 months before harvest. In general, we found a limited impact of using yearly winter wheat masks over a static mask for the study regions.
Barnett, Tony; Fournié, Guillaume; Gupta, Sunetra; Seeley, Janet
2015-01-01
Incorporation of 'social' variables into epidemiological models remains a challenge. Too much detail and models cease to be useful; too little and the very notion of infection - a highly social process in human populations - may be considered with little reference to the social. The French sociologist Émile Durkheim proposed that the scientific study of society required identification and study of 'social currents'. Such 'currents' are what we might today describe as 'emergent properties', specifiable variables appertaining to individuals and groups, which represent the perspectives of social actors as they experience the environment in which they live their lives. Here we review the ways in which one particular emergent property, hope, relevant to a range of epidemiological situations, might be used in epidemiological modelling of infectious diseases in human populations. We also indicate how such an approach might be extended to include a range of other potential emergent properties to represent complex social and economic processes bearing on infectious disease transmission.
Panda, Rudrashish; Sahu, Sivabrata; Rout, G. C.
2017-05-01
We communicate here a tight-binding theoretical model study of the band-filling effect on the charge gap in graphene-on-substrate. The Hamiltonian consists of nearest-neighbor electron hopping and a substrate-induced gap. Besides this, the Coulomb interaction is considered within the mean-field approximation in the paramagnetic limit. The electron occupancies at the two sublattices are calculated by the Green's function technique and solved self-consistently. Finally, the charge gap, i.e. Δ̄ = U[⟨n_A⟩ − ⟨n_B⟩], is calculated and computed numerically. The results are reported.
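A toy illustration of the self-consistency loop behind the charge gap Δ̄ = U[⟨n_A⟩ − ⟨n_B⟩]: a single 2x2 sublattice Hamiltonian with substrate-induced on-site energies ±delta/2 plus Hartree shifts U⟨n⟩ from the opposite sublattice, filled with one electron (half filling). The single-k-point treatment is an assumption of this sketch; the paper sums over the full band via Green's functions:

```python
import math

def mean_field_gap(t=2.7, delta=0.5, U=2.0, iters=500, tol=1e-12):
    """Iterate the sublattice occupancies of a 2x2 mean-field
    Hamiltonian [[+d/2 + U*nB, -t], [-t, -d/2 + U*nA]] to
    self-consistency and return the charge gap U*(nA - nB).
    One-k-point illustration only."""
    nA, nB = 0.5, 0.5
    for _ in range(iters):
        eA = +delta / 2 + U * nB
        eB = -delta / 2 + U * nA
        mid = 0.5 * (eA + eB)
        half = math.hypot(0.5 * (eA - eB), t)
        lam = mid - half                 # lower-band eigenvalue (filled)
        b_over_a = (eA - lam) / t        # lower-band eigenvector ratio
        w = b_over_a ** 2
        nA_new, nB_new = 1.0 / (1.0 + w), w / (1.0 + w)
        converged = abs(nA_new - nA) < tol and abs(nB_new - nB) < tol
        nA, nB = nA_new, nB_new
        if converged:
            break
    return U * (nA - nB)
```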
Are adverse effects incorporated in economic models? An initial review of current practice.
Craig, D; McDaid, C; Fonseca, T; Stock, C; Duffy, S; Woolacott, N
2009-12-01
To identify methodological research on the incorporation of adverse effects in economic models and to review current practice. Major electronic databases (Cochrane Methodology Register, Health Economic Evaluations Database, NHS Economic Evaluation Database, EconLit, EMBASE, Health Management Information Consortium, IDEAS, MEDLINE and Science Citation Index) were searched from inception to September 2007. Health technology assessment (HTA) reports commissioned by the National Institute for Health Research (NIHR) HTA programme and published between 2004 and 2007 were also reviewed. The reviews of methodological research on the inclusion of adverse effects in decision models and of current practice were carried out according to standard methods. Data were summarised in a narrative synthesis. Of the 719 potentially relevant references in the methodological research review, five met the inclusion criteria; however, they contained little information of direct relevance to the incorporation of adverse effects in models. Of the 194 HTA monographs published from 2004 to 2007, 80 were reviewed, covering a range of research and therapeutic areas. In total, 85% of the reports included adverse effects in the clinical effectiveness review and 54% of the decision models included adverse effects in the model; 49% included adverse effects in the clinical review and model. The link between adverse effects in the clinical review and model was generally weak; only 3/80 (manipulation. Of the models including adverse effects, 67% used a clinical adverse effects parameter, 79% used a cost of adverse effects parameter, 86% used one of these and 60% used both. Most models (83%) used utilities, but only two (2.5%) used solely utilities to incorporate adverse effects and were explicit that the utility captured relevant adverse effects; 53% of those models that included utilities derived them from patients on treatment and could therefore be interpreted as capturing adverse effects. In total
International Nuclear Information System (INIS)
Roberts, Stephen A.; Hendry, Jolyon H.
1998-01-01
Purpose: To investigate the role of intertumor heterogeneity in clinical tumor control datasets and the relationship to in vitro measurements of tumor biopsy samples. Specifically, to develop a modified linear-quadratic (LQ) model incorporating such heterogeneity that it is practical to fit to clinical tumor-control datasets. Methods and Materials: We developed a modified version of the linear-quadratic (LQ) model for tumor control, incorporating a (lagged) time factor to allow for tumor cell repopulation. We explicitly took into account the interpatient heterogeneity in clonogen number, radiosensitivity, and repopulation rate. Using this model, we could generate realistic TCP curves using parameter estimates consistent with those reported from in vitro studies, subject to the inclusion of a radiosensitivity (or dose)-modifying factor. We then demonstrated that the model was dominated by the heterogeneity in α (tumor radiosensitivity) and derived an approximate simplified model incorporating this heterogeneity. This simplified model is expressible in a compact closed form, which it is practical to fit to clinical datasets. Using two previously analysed datasets, we fit the model using direct maximum-likelihood techniques and obtained parameter estimates that were, again, consistent with the experimental data on the radiosensitivity of primary human tumor cells. This heterogeneity model includes the same number of adjustable parameters as the standard LQ model. Results: The modified model provides parameter estimates that can easily be reconciled with the in vitro measurements. The simplified (approximate) form of the heterogeneity model is a compact, closed-form probit function that can readily be fitted to clinical series by conventional maximum-likelihood methodology. This heterogeneity model provides a slightly better fit to the datasets than the conventional LQ model, with the same numbers of fitted parameters. The parameter estimates of the clinically
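A minimal sketch of the dominant ingredient identified above: tumor control probability averaged over Gaussian inter-patient heterogeneity in the radiosensitivity α. Individual control follows Poisson statistics, TCP = exp(−N₀·exp(−αD)); the β (quadratic) term and repopulation are omitted for brevity, and the numerical values are illustrative, not the paper's fitted estimates:

```python
import math

def tcp_heterogeneous(dose, n0=1e7, alpha_mean=0.3, alpha_sd=0.06,
                      npts=201):
    """Population-averaged TCP with Gaussian heterogeneity in alpha,
    computed by a simple weighted sum over a +/-4 sigma grid."""
    lo = alpha_mean - 4 * alpha_sd
    hi = alpha_mean + 4 * alpha_sd
    h = (hi - lo) / (npts - 1)
    total, wsum = 0.0, 0.0
    for i in range(npts):
        a = lo + i * h
        w = math.exp(-0.5 * ((a - alpha_mean) / alpha_sd) ** 2)
        total += w * math.exp(-n0 * math.exp(-a * dose))  # Poisson TCP
        wsum += w
    return total / wsum
```

As the abstract notes, the heterogeneity flattens the dose-response curve: at a dose where the homogeneous model predicts near-certain control, the population average is pulled down by low-α patients.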
Incorporating remote sensing-based ET estimates into the Community Land Model version 4.5
Directory of Open Access Journals (Sweden)
D. Wang
2017-07-01
Land surface models bear substantial biases in simulating surface water and energy budgets despite the continuous development and improvement of model parameterizations. To reduce model biases, Parr et al. (2015) proposed a method incorporating satellite-based evapotranspiration (ET) products into land surface models. Here we apply this bias correction method to the Community Land Model version 4.5 (CLM4.5) and test its performance over the conterminous US (CONUS). We first calibrate a relationship between the observational ET from the Global Land Evaporation Amsterdam Model (GLEAM) product and the model ET from CLM4.5, and assume that this relationship holds beyond the calibration period. During the validation or application period, a simulation using the default CLM4.5 (CLM) is conducted first, and its output is combined with the calibrated observational-vs.-model ET relationship to derive a corrected ET; an experiment (CLMET) is then conducted in which the model-generated ET is overwritten with the corrected ET. Using the observations of ET, runoff, and soil moisture content as benchmarks, we demonstrate that CLMET greatly improves the hydrological simulations over most of the CONUS; the improvement is stronger in the eastern CONUS than in the western CONUS and is strongest over the Southeast CONUS. For any specific region, the degree of the improvement depends on whether the relationship between observational and model ET remains time-invariant (a fundamental hypothesis of the Parr et al. (2015) method) and on whether water is the limiting factor in places where ET is underestimated. While the bias correction method improves hydrological estimates without improving the physical parameterization of land surface models, results from this study do provide guidance for physically based model development efforts.
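The calibrate-then-overwrite workflow above can be sketched in two steps. A simple linear observational-vs.-model ET relationship is assumed here for illustration; the actual calibrated relationship in the paper need not be linear:

```python
def calibrate_et(model_et, obs_et):
    """Least-squares fit obs = a*model + b over the calibration
    period (the bias-calibration step; linear form is an assumption
    of this sketch)."""
    n = len(model_et)
    mx = sum(model_et) / n
    my = sum(obs_et) / n
    sxx = sum((x - mx) ** 2 for x in model_et)
    sxy = sum((x - mx) * (y - my) for x, y in zip(model_et, obs_et))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def apply_correction(model_et, a, b):
    """Overwrite model ET with the calibrated estimate during the
    application period (the CLMET step)."""
    return [a * x + b for x in model_et]
```

The method's fundamental hypothesis is visible in the code: the pair (a, b) fitted on the calibration period is assumed to remain valid afterwards.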
A constitutive mechanical model for gas hydrate bearing sediments incorporating inelastic mechanisms
Sánchez, Marcelo
2016-11-30
Gas hydrate bearing sediments (HBS) are natural soils formed in permafrost and sub-marine settings where the temperature and pressure conditions are such that gas hydrates are stable. If these conditions shift out of the hydrate stability zone, hydrates dissociate and move from the solid to the gas phase. Hydrate dissociation is accompanied by significant changes in sediment structure and strongly affects its mechanical behavior (e.g., sediment stiffness, strength and dilatancy). The mechanical behavior of HBS is very complex and its modeling poses great challenges. This paper presents a new geomechanical model for hydrate bearing sediments. The model incorporates the concept of partition stress, plus a number of inelastic mechanisms proposed to capture the complex behavior of this type of soil. This constitutive model is especially well suited to simulate the behavior of HBS upon dissociation. The model was applied and validated against experimental data from triaxial and oedometric tests conducted on manufactured and natural specimens involving different hydrate saturation, hydrate morphology, and confinement conditions. Particular attention was paid to modeling the HBS behavior during hydrate dissociation under loading. The model performance was highly satisfactory in all the cases studied. It managed to properly capture the main features of HBS mechanical behavior and it also assisted in interpreting the behavior of this type of sediment under different loading and hydrate conditions.
Incorporating vehicle mix in stimulus-response car-following models
Directory of Open Access Journals (Sweden)
Saidi Siuhi
2016-06-01
The objective of this paper is to incorporate vehicle mix in stimulus-response car-following models. Separate models were estimated for acceleration and deceleration responses to account for vehicle mix via both movement state and vehicle type. For each model, three sub-models were developed for different pairs of following vehicles including "automobile following automobile," "automobile following truck," and "truck following automobile." The estimated model parameters were then validated against other data from a similar region and roadway. The results indicated that drivers' behaviors were significantly different among the different pairs of following vehicles. Also, the magnitude of the estimated parameters depends on the type of vehicle being driven and/or followed. These results demonstrated the need to use separate models depending on movement state and vehicle type. The differences in parameter estimates confirmed in this paper highlight traffic safety and operational issues of mixed traffic operation on a single lane. The findings of this paper can assist transportation professionals in improving the traffic simulation models used to evaluate the impact of different strategies on the safety and performance of highways. In addition, driver response time lag estimates can be used in roadway design to calculate important design parameters such as stopping sight distance on horizontal and vertical curves for both automobiles and trucks.
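A minimal sketch of the classic Gazis-Herman-Rothery stimulus-response form that this family of car-following models builds on: the follower's acceleration (after its reaction lag) is proportional to the relative speed, scaled by its own speed and the spacing. The parameter values are illustrative; in the paper, separate (lambda, m, l) sets are fitted per vehicle pair and per movement state (acceleration vs. deceleration):

```python
def ghr_response(v_follower, dv, dx, lam, m=1.0, l=1.0):
    """GHR stimulus-response acceleration: a = lam * v^m * dv / dx^l,
    where dv is leader speed minus follower speed and dx is the
    spacing.  Positive dv yields acceleration, negative dv yields
    deceleration; smaller spacing amplifies the response."""
    return lam * (v_follower ** m) * dv / (dx ** l)
```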
Fuzzy Logic-Based Model That Incorporates Personality Traits for Heterogeneous Pedestrians
Directory of Open Access Journals (Sweden)
Zhuxin Xue
2017-10-01
Most models designed to simulate pedestrian dynamical behavior are based on the assumption that human decision-making can be described using precise values. This study proposes a new pedestrian model that incorporates fuzzy logic theory into a multi-agent system to address cognitive behavior that introduces uncertainty and imprecision during decision-making. We present a concept of decision preferences to represent the intrinsic control factors of decision-making. To realize the different decision preferences of heterogeneous pedestrians, the Five-Factor (OCEAN) personality model is introduced to model the psychological characteristics of individuals. Then, a fuzzy logic-based approach is adopted for mapping the relationships between the personality traits and the decision preferences. Finally, we have developed an application using our model to simulate pedestrian dynamical behavior in several normal or non-panic scenarios, including a single-exit room, a hallway with obstacles, and a narrowing passage. The effectiveness of the proposed model is validated with a user study. The results show that the proposed model can generate more reasonable and heterogeneous behavior in the simulation and indicate that individual personality has a noticeable effect on pedestrian dynamical behavior.
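A minimal sketch of the trait-to-preference mapping described above: triangular fuzzy memberships over two OCEAN traits feed a tiny Mamdani-style rule base with weighted-average defuzzification. The rule base and the two-trait restriction are invented for illustration; the paper relates all five traits to several decision preferences:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def risk_preference(neuroticism, extraversion):
    """Map two OCEAN traits (in [0, 1]) to a 'risk-taking' decision
    preference.  Hypothetical rules: low neuroticism or high
    extraversion -> risky (0.9); high neuroticism -> cautious (0.1)."""
    low_n = tri(neuroticism, -0.5, 0.0, 0.6)
    high_n = tri(neuroticism, 0.4, 1.0, 1.5)
    high_e = tri(extraversion, 0.4, 1.0, 1.5)
    w_risky = max(low_n, high_e)
    w_cautious = high_n
    if w_risky + w_cautious == 0.0:
        return 0.5                      # no rule fires: neutral preference
    return (0.9 * w_risky + 0.1 * w_cautious) / (w_risky + w_cautious)
```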
Li, Ping; Jiang, Li Jun; Bagci, Hakan
2018-01-01
It is well known that graphene demonstrates spatial dispersion properties, i.e., its conductivity is nonlocal and a function of the spectral wave number (momentum operator) q. In this paper, to account for the effects of spatial dispersion on the transmission of high-speed signals along graphene nano-ribbon (GNR) interconnects, a discontinuous Galerkin time-domain (DGTD) algorithm is proposed. The atomically thick GNR is modeled using a nonlocal transparent surface impedance boundary condition (SIBC) incorporated into the DGTD scheme. Since the conductivity is a complicated function of q (and one cannot find an analytical Fourier transform pair between q and spatial differential operators), an exact time-domain SIBC model cannot be derived. To overcome this problem, the conductivity is approximated by its Taylor series in the spectral domain under a low-q assumption. This approach permits expressing the time-domain SIBC in the form of a second-order partial differential equation (PDE) in current density and electric field intensity. To permit easy incorporation of this PDE into the DGTD algorithm, three auxiliary variables, which reduce the second-order (temporal and spatial) differential operators to first-order ones, are introduced. To account for temporal dispersion effects, the auxiliary differential equation (ADE) method is utilized to eliminate the expensive temporal convolutions. To demonstrate the applicability of the proposed scheme, numerical results, which involve characterization of spatial dispersion effects on the transfer impedance matrix of GNR interconnects, are presented.
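The low-q Taylor-truncation step can be illustrated numerically with a toy Lorentzian-type model conductivity (not graphene's actual nonlocal dispersion): truncating sigma(q) to second order in q means that, in real space, q^2 becomes -d^2/dx^2 and the nonlocal boundary condition turns into a local second-order PDE.

```python
import numpy as np

# Toy nonlocal "conductivity" sigma(q) = s0 / (1 + (q*lam)^2), truncated to
# sigma(q) ~ s0 * (1 - (q*lam)^2); both are applied spectrally to a smooth
# (low-q) field and compared. All parameters are illustrative.
N, L = 512, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
q = 2 * np.pi * np.fft.fftfreq(N, d=L/N)   # angular wave numbers

s0, lam = 1.0, 0.1                          # hypothetical model parameters
f = np.exp(-x**2 / 2.0)                     # smooth field, spectrum near q=0

F = np.fft.fft(f)
exact  = np.fft.ifft(s0 / (1 + (q*lam)**2) * F).real   # full nonlocal operator
taylor = np.fft.ifft(s0 * (1 - (q*lam)**2) * F).real   # 2nd-order truncation

rel_err = np.linalg.norm(exact - taylor) / np.linalg.norm(exact)
print(f"relative L2 error of low-q truncation: {rel_err:.2e}")
```

For a field whose spectrum is concentrated at small q*lam, the truncation error is fourth order in q*lam, which is the regime the SIBC approximation relies on.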
Mezlini, Aziz M; Goldenberg, Anna
2017-10-01
Discovering genetic mechanisms driving complex diseases is a hard problem. Existing methods often lack the power to identify the set of responsible genes. Protein-protein interaction networks have been shown to boost power when detecting gene-disease associations. We introduce a Bayesian framework, Conflux, to find disease-associated genes from exome sequencing data using networks as a prior. There are two main advantages to using networks within a probabilistic graphical model. First, networks are noisy and incomplete, a substantial impediment to gene discovery; incorporating networks into the structure of a probabilistic model for gene inference has less impact on the solution than relying on the noisy network structure directly. Second, using a Bayesian framework we can keep track of the uncertainty of each gene being associated with the phenotype rather than returning a fixed list of genes. We first show that using networks clearly improves gene detection compared to individual gene testing. We then show consistently improved performance of Conflux compared to the state-of-the-art diffusion network-based method Hotnet2 and a variety of other network and variant aggregation methods, using randomly generated and literature-reported gene sets. We test Hotnet2 and Conflux on several network configurations to reveal biases and patterns of false positives and false negatives in each case. Our experiments show that our novel Bayesian framework Conflux incorporates many of the advantages of the current state-of-the-art methods, while offering more flexibility and improved power in many gene-disease association scenarios.
Energy Technology Data Exchange (ETDEWEB)
Sullivan, P.; Eurek, K.; Margolis, R.
2014-07-01
Because solar power is a rapidly growing component of the electricity system, robust representations of solar technologies should be included in capacity-expansion models. This is a challenge because modeling the electricity system--and, in particular, modeling solar integration within that system--is a complex endeavor. This report highlights the major challenges of incorporating solar technologies into capacity-expansion models and shows examples of how specific models address those challenges. These challenges include modeling non-dispatchable technologies, determining which solar technologies to model, choosing a spatial resolution, incorporating a solar resource assessment, and accounting for solar generation variability and uncertainty.
DEFF Research Database (Denmark)
Hedegaard, Karsten; Balyk, Olexandr
2013-01-01
Individual compression heat pumps constitute a potentially valuable resource in supporting wind power integration due to their economic competitiveness and possibilities for flexible operation. When analysing the system benefits of flexible heat pump operation, effects on investments should be taken into account. In this study, we present a model that facilitates analysing individual heat pumps and complementing heat storages in integration with the energy system, while optimising both investments and operation. The model incorporates thermal building dynamics and covers various heat storage options: passive heat storage in the building structure via radiator heating, active heat storage in concrete floors via floor heating, and use of thermal storage tanks for space heating and hot water. The model is well qualified for analysing the possibilities and system benefits of operating heat pumps flexibly. This includes prioritising heat pump operation for hours with low marginal electricity production costs, and peak load shaving resulting in a reduced need for peak and reserve capacity investments.
A MULTI-RESOLUTION FUSION MODEL INCORPORATING COLOR AND ELEVATION FOR SEMANTIC SEGMENTATION
Directory of Open Access Journals (Sweden)
W. Zhang
2017-05-01
Full Text Available In recent years, the developments of Fully Convolutional Networks (FCN) have led to great improvements for semantic segmentation in various applications, including fused remote sensing data. There is, however, a lack of an in-depth study inside FCN models which would lead to an understanding of the contribution of individual layers to specific classes and their sensitivity to different types of input data. In this paper, we address this problem and propose a fusion model incorporating infrared imagery and Digital Surface Models (DSM) for semantic segmentation. The goal is to utilize heterogeneous data more accurately and effectively in a single model instead of assembling multiple models. First, the contribution and sensitivity of layers concerning the given classes are quantified by means of their recall in FCN. The contribution of different modalities to the pixel-wise prediction is then analyzed based on visualization. Finally, an optimized scheme for the fusion of layers with color and elevation information into a single FCN model is derived based on the analysis. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset. Comprehensive evaluations demonstrate the potential of the proposed approach.
Modelling and Simulation of a Manipulator with Stable Viscoelastic Grasping Incorporating Friction
Directory of Open Access Journals (Sweden)
A. Khurshid
2016-12-01
Full Text Available The design, dynamics and control of a humanoid robotic hand based on anthropological dimensions, with joint friction, are modelled, simulated and analysed in this paper using computer-aided design and multibody dynamic simulation. A combined joint friction model is incorporated in the joints. Experimental values of the coefficient of friction of grease-lubricated sliding contacts representative of manipulator joints are presented. Human fingers deform to the shape of the grasped object (enveloping grasp) at the area of interaction. A mass-spring-damper model of the grasp is developed. The interaction of the viscoelastic gripper of the arm with objects is analysed using the Bond Graph modelling method. Simulations were conducted for several material parameters. The results of the simulation are then used to develop a prototype of the proposed gripper. The bond graph model is experimentally validated using the prototype. The gripper is used to successfully transport soft and fragile objects. This paper provides information on the optimisation of friction and its inclusion in both dynamic modelling and simulation to enhance mechanical efficiency.
Givens, J.; Padowski, J.; Malek, K.; Guzman, C.; Boll, J.; Adam, J. C.; Witinok-Huber, R.
2017-12-01
In the face of climate change and multi-scalar governance objectives, achieving resilience of food-energy-water (FEW) systems requires interdisciplinary approaches. Through coordinated modeling and management efforts, we study "Innovations in the Food-Energy-Water Nexus (INFEWS)" through a case study in the Columbia River Basin. Previous research on FEW system management and resilience includes some attention to social dynamics (e.g., economic, governance); however, more research is needed to better address social science perspectives. Decisions ultimately taken in this river basin would occur among stakeholders encompassing various institutional power structures including multiple U.S. states, tribal lands, and sovereign nations. The social science lens draws attention to the incompatibility between the engineering definition of resilience (i.e., return to equilibrium or a singular stable state) and the ecological and social system realities, more explicit in the ecological interpretation of resilience (i.e., the ability of a system to move into a different, possibly more resilient state). Social science perspectives include but are not limited to differing views on resilience as normative, system persistence versus transformation, and system boundary issues. To expand understanding of resilience and objectives for complex and dynamic systems, concepts related to inequality, heterogeneity, power, agency, trust, values, culture, history, conflict, and system feedbacks must be more tightly integrated into FEW research. We identify gaps in knowledge and data, and the value and complexity of incorporating social components and processes into systems models. We posit that socio-biophysical system resilience modeling would address important complex, dynamic social relationships, including non-linear dynamics of social interactions, to offer an improved understanding of sustainable management in FEW systems. The conceptual modeling presented in our study represents
Liu, S.; Ng, G. H. C.
2017-12-01
The global plant database has revealed that plant traits can vary more within a plant functional type (PFT) than among different PFTs, indicating that the current paradigm in ecohydrological models of specifying fixed parameters based solely on PFT could potentially bias simulations. Although some recent modeling studies have attempted to incorporate this observed plant trait variability, many failed to consider uncertainties due to sparse global observations, or they omitted spatial and/or temporal variability in the traits. Here we present a stochastic parameterization for prognostic vegetation simulations that is stochastic in time and space in order to represent plant trait plasticity - the process by which trait differences arise. We have developed the new PFT parameterization within the Community Land Model 4.5 (CLM 4.5) and tested the method for a desert shrubland watershed in the Mojave Desert, where fixed parameterizations cannot represent acclimation to desert conditions. Spatiotemporally correlated plant trait parameters were first generated based on TRY statistics and were then used to implement ensemble runs for the study area. The new PFT parameterization was then further conditioned on field measurements of soil moisture and remotely sensed observations of leaf-area-index to constrain uncertainties in the sparse global database. Our preliminary results show that incorporating data-conditioned, variable PFT parameterizations strongly affects simulated soil moisture and water fluxes, compared with default simulations. The results also provide new insights about correlations among plant trait parameters and between traits and environmental conditions in the desert shrubland watershed. Our proposed stochastic PFT parameterization method for ecohydrological models has great potential to advance our understanding of how terrestrial ecosystems are predicted to adapt to variable environmental conditions.
A model to incorporate organ deformation in the evaluation of dose/volume relationship
International Nuclear Information System (INIS)
Yan, D.; Jaffray, D.; Wong, J.; Brabbins, D.; Martinez, A. A.
1997-01-01
Purpose: Measurements of internal organ motion have demonstrated that daily organ deformation exists during the course of radiation treatment. However, a model to evaluate the resultant dose delivered to a daily deformed organ remains a difficult challenge. Current methods, which model such organ deformation as rigid body motion in the dose calculation for treatment planning evaluation, are incorrect and misleading. In this study, a new model for treatment planning evaluation is introduced which incorporates patient-specific information on daily organ deformation and setup variation. The model was also used to retrospectively analyze actual treatment data measured using daily CT scans for 5 patients with prostate treatment. Methods and Materials: The model assumes that for each patient, the organ of interest can be measured during the first few treatment days. First, the volume of each organ is delineated from each of the daily measurements and accumulated in a 3D bit-map. A tissue occupancy distribution is then constructed, with the 50% isodensity representing the mean, or effective, organ volume. During the course of treatment, each voxel in the effective organ volume is assumed to move inside a local 3D neighborhood with a specific distribution function. The neighborhood and the distribution function are deduced from the positions and shapes of the organ in the first few measurements using a biomechanical model of a viscoelastic body. For each voxel, the local distribution function is then convolved with the spatial dose distribution. The latter also includes the variation in dose due to daily setup error. As a result, the cumulative dose to the voxel incorporates the effects of daily setup variation and organ deformation. A ''variation adjusted'' dose volume histogram, aDVH, for the effective organ volume can then be constructed for the purpose of treatment evaluation and optimization. Up to 20 daily CT scans and daily portal images for 5 patients with prostate
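The convolution at the heart of this model (the planned dose blurred by each voxel's displacement distribution) can be sketched in one dimension; the flat field and Gaussian motion model below are illustrative simplifications, not the paper's viscoelastic-body distributions:

```python
import numpy as np

# 1-D sketch of a 'variation adjusted' dose: the planned dose profile is
# convolved with a voxel displacement PDF, so the expected per-fraction dose
# reflects setup error and deformation. All numbers are illustrative.
x = np.linspace(-5, 5, 501)                     # position, cm
dose = np.where(np.abs(x) < 2.0, 2.0, 0.0)      # idealized 2 Gy/day flat field

sigma = 0.4                                      # cm, daily displacement spread
kern = np.exp(-x**2 / (2 * sigma**2))
kern /= kern.sum()                               # discrete displacement PDF

dose_eff = np.convolve(dose, kern, mode="same")  # expected per-fraction dose
print("dose at field centre:", dose_eff[250], "Gy; at field edge:", dose_eff[350], "Gy")
```

Deep inside the field the expected dose is unchanged, while at the field edge it drops to roughly half the prescription, which is exactly the penumbral blurring a rigid-motion model would misrepresent for a deforming organ.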
Incorporating time-delays in S-System model for reverse engineering genetic networks.
Chowdhury, Ahsan Raja; Chetty, Madhu; Vinh, Nguyen Xuan
2013-06-18
In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interactions simultaneously; as a result, they fail to detect important interactions of the other type. The S-System model, a differential-equation-based approach which has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all existing S-System-based modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to positive integer values (corresponding to time stamps in the data) but can also take fractional values. Moreover, we also propose a new criterion for model evaluation that exploits the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and systematically balances the effect of network accuracy and complexity during optimization. The four well-known performance measures applied to the experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision. The experiments carried out on two well-known real-life networks, namely IRMA and SOS DNA repair network in
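The delay-differential S-System form can be sketched for a hypothetical two-gene network; all parameter values (kinetic orders, rate constants, delays) are invented for illustration, and the fractional delay echoes the paper's point that delays need not align with the data's time stamps:

```python
import numpy as np

# Time-delayed S-System sketch (2 genes, illustrative parameters):
#   dx_i/dt = alpha_i * prod_j x_j(t - tau_g[i,j])**g[i,j]
#           - beta_i  * prod_j x_j(t - tau_h[i,j])**h[i,j]
alpha = np.array([2.0, 1.5]); beta = np.array([1.2, 1.0])
g = np.array([[0.0, -0.8], [0.5, 0.0]])      # production kinetic orders
h = np.array([[0.7,  0.0], [0.0, 0.6]])      # degradation kinetic orders
tau_g = np.array([[0.0, 0.5], [0.3, 0.0]])   # delays in hours (fractional)
tau_h = np.zeros((2, 2))

dt, T = 0.01, 20.0
n = int(T / dt)
x = np.ones((n + 1, 2))                      # history buffer; x(t <= 0) = 1

def delayed(k, tau):
    """State at step k minus delay tau, clamped to the initial history."""
    return x[max(k - int(round(tau / dt)), 0)]

for k in range(n):                           # explicit Euler with delay lookup
    dx = np.zeros(2)
    for i in range(2):
        prod_p, prod_d = 1.0, 1.0
        for j in range(2):
            prod_p *= delayed(k, tau_g[i, j])[j] ** g[i, j]
            prod_d *= delayed(k, tau_h[i, j])[j] ** h[i, j]
        dx[i] = alpha[i] * prod_p - beta[i] * prod_d
    x[k + 1] = np.maximum(x[k] + dt * dx, 1e-9)   # keep concentrations positive
print("final state:", x[-1])
```

Inference in the TDSS setting then amounts to searching over (alpha, beta, g, h, tau) so that trajectories like this one reproduce the observed expression time series.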
Schumann, Andreas; Oppel, Henning
2017-04-01
To represent the hydrological behaviour of catchments, a model should reflect the hydrologically most relevant catchment characteristics. These are heterogeneously distributed within a watershed but often interrelated and subject to a certain spatial organisation. Since common models are mostly based on fundamental assumptions about hydrological processes, reducing the variance of catchment properties as well as incorporating the spatial organisation of the catchment is desirable. We have developed a method that combines the idea of the width-function, used for determination of the geomorphologic unit hydrograph, with information about soil or topography. With this method we are able to assess the spatial organisation of selected catchment characteristics. An algorithm was developed that structures a watershed into sub-basins and other spatial units to minimise its heterogeneity. The outcomes of this algorithm are used for the spatial setup of a semi-distributed model. Since the spatial organisation of a catchment is not bound to a single characteristic, we have to embed information on multiple catchment properties. For this purpose we applied a fuzzy-based method to combine the spatial setups for multiple single characteristics into a joint, optimal spatial differentiation. Utilising this method, we are able to propose a spatial structure for a semi-distributed hydrological model, comprising the definition of sub-basins and a zonal classification within each sub-basin. Besides the improved spatial structuring, the performed analysis improves modelling in another way: the spatial variability of catchment characteristics, which is reduced to a minimum of heterogeneity within the zones, can be considered in a parameter-constrained calibration scheme. In a case study, both options were used to explore the benefits of incorporating the spatial organisation and derived parameter constraints for the parametrisation of a HBV-96 model. We use two benchmark
Simulation of Forest Carbon Fluxes Using Model Incorporation and Data Assimilation
Directory of Open Access Journals (Sweden)
Min Yan
2016-07-01
Full Text Available This study improved simulation of forest carbon fluxes in the Changbai Mountains with a process-based model (Biome-BGC) using model incorporation and data assimilation. Firstly, the original remote sensing-based MODIS MOD_17 GPP (MOD_17) model was optimized using refined input data and biome-specific parameters. The key ecophysiological parameters of the Biome-BGC model were determined through the Extended Fourier Amplitude Sensitivity Test (EFAST) sensitivity analysis. Then the optimized MOD_17 model was used to calibrate the Biome-BGC model by adjusting the sensitive ecophysiological parameters. Once the best match was found for the 10 selected forest plots between the 8-day GPP estimates from the optimized MOD_17 and from the Biome-BGC, the values of the sensitive ecophysiological parameters were determined. The calibrated Biome-BGC model agreed better with the eddy covariance (EC) measurements (R2 = 0.87, RMSE = 1.583 gC·m−2·d−1) than the original model did (R2 = 0.72, RMSE = 2.419 gC·m−2·d−1). To provide a best estimate of the true state of the model, the Ensemble Kalman Filter (EnKF) was used to assimilate five years (2003-2007) of eight-day Global LAnd Surface Satellite (GLASS) LAI products into the calibrated Biome-BGC model. The results indicated that LAI simulated by the assimilated Biome-BGC agreed well with GLASS LAI. GPP estimates obtained from the assimilated Biome-BGC were further improved and verified by EC measurements at the Changbai Mountains forest flux site (R2 = 0.92, RMSE = 1.261 gC·m−2·d−1).
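The EnKF analysis step used to merge a remote-sensing LAI observation into the model state can be sketched as follows; the two-variable state, ensemble statistics, and observation values are invented for illustration and are not Biome-BGC outputs:

```python
import numpy as np

# Perturbed-observation EnKF analysis step for one scalar LAI observation.
# Each ensemble member carries a toy state [LAI, GPP]; observing LAI also
# updates GPP through the sample cross-covariance.
rng = np.random.default_rng(0)
Ne = 100
X = rng.normal([3.0, 6.0], [0.6, 1.5], size=(Ne, 2)).T   # forecast ensemble (2 x Ne)
H = np.array([[1.0, 0.0]])                               # observe LAI only
y, R = 3.8, 0.2**2                                       # GLASS-like obs, error variance

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                   # ensemble anomalies
P = A @ A.T / (Ne - 1)                       # sample state covariance
K = P @ H.T / (H @ P @ H.T + R)              # Kalman gain (scalar obs case)
yp = y + rng.normal(0.0, np.sqrt(R), Ne)     # perturbed observations
Xa = X + K @ (yp[None, :] - H @ X)           # analysis ensemble

print("forecast LAI mean:", X[0].mean(), "-> analysis LAI mean:", Xa[0].mean())
```

Because the assumed observation error is much smaller than the forecast spread, the analysis mean is pulled strongly toward the observation and the analysis spread shrinks, which is the mechanism by which assimilating GLASS LAI constrains the simulated GPP.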
International Nuclear Information System (INIS)
Mazzarella, G.; Giampaolo, S. M.; Illuminati, F.
2006-01-01
For systems of interacting, ultracold spin-zero neutral bosonic atoms, harmonically trapped and subject to an optical lattice potential, we derive an Extended Bose-Hubbard (EBH) model by developing a systematic expansion for the Hamiltonian of the system in powers of the lattice parameters and of a scale parameter, the lattice attenuation factor. We identify the dominant terms that need to be retained in realistic experimental conditions, up to nearest-neighbor interactions and nearest-neighbor hoppings conditioned by the on-site occupation numbers. In the mean-field approximation, we determine the free energy of the system and study the phase diagram both at zero and at finite temperature. At variance with the standard on-site Bose-Hubbard model, the zero-temperature phase diagram of the EBH model possesses a dual structure in the Mott insulating regime. Namely, for specific ranges of the lattice parameters, a density wave phase characterizes the system at integer fillings, with domains of alternating mean occupation numbers that are the atomic counterparts of the domains of staggered magnetizations in an antiferromagnetic phase. We show as well that in the EBH model a zero-temperature quantum phase transition to pair superfluidity is, in principle, possible, but completely suppressed at the lowest order in the lattice attenuation factor. Finally, we determine the possible occurrence of the different phases as a function of the experimentally controllable lattice parameters.
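A Hamiltonian of the extended Bose-Hubbard type sketched above, including an occupation-conditioned hopping term, might be written schematically as follows; the coefficients t, U, V, t', mu stand in for the expansion coefficients the paper derives from the lattice parameters, and the conditioned-hopping operator is one common form rather than the paper's exact expression:

```latex
\hat{H}_{\mathrm{EBH}} =
  -t \sum_{\langle i,j \rangle} \left( \hat{b}_i^{\dagger} \hat{b}_j + \mathrm{h.c.} \right)
  + \frac{U}{2} \sum_i \hat{n}_i \left( \hat{n}_i - 1 \right)
  + V \sum_{\langle i,j \rangle} \hat{n}_i \hat{n}_j
  - t' \sum_{\langle i,j \rangle} \left[ \hat{b}_i^{\dagger} \hat{b}_j
      \left( \hat{n}_i + \hat{n}_j \right) + \mathrm{h.c.} \right]
  - \mu \sum_i \hat{n}_i
```

The nearest-neighbor interaction V is what opens the density-wave (staggered-occupation) lobes absent from the standard on-site model, while the conditioned hopping t' is the term suppressed at lowest order in the lattice attenuation factor.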
International Nuclear Information System (INIS)
Hedegaard, Karsten; Balyk, Olexandr
2013-01-01
Individual compression heat pumps constitute a potentially valuable resource in supporting wind power integration due to their economic competitiveness and possibilities for flexible operation. When analysing the system benefits of flexible heat pump operation, effects on investments should be taken into account. In this study, we present a model that facilitates analysing individual heat pumps and complementing heat storages in integration with the energy system, while optimising both investments and operation. The model incorporates thermal building dynamics and covers various heat storage options: passive heat storage in the building structure via radiator heating, active heat storage in concrete floors via floor heating, and use of thermal storage tanks for space heating and hot water. It is shown that the model is well qualified for analysing possibilities and system benefits of operating heat pumps flexibly. This includes prioritising heat pump operation for hours with low marginal electricity production costs, and peak load shaving resulting in a reduced need for peak and reserve capacity investments. - Highlights: • Model optimising heat pumps and heat storages in integration with the energy system. • Optimisation of both energy system investments and operation. • Heat storage in building structure and thermal storage tanks included. • Model well qualified for analysing system benefits of flexible heat pump operation. • Covers peak load shaving and operation prioritised for low electricity prices
International Nuclear Information System (INIS)
Sutheerawatthana, Pitch; Minato, Takayuki
2010-01-01
The response of a social group is a missing element in the formal impact assessment model. Previous discussion of the involvement of social groups in an intervention has mainly focused on the formation of the intervention. This article discusses the involvement of social groups in a different way. A descriptive model is proposed by incorporating a social group's response into the concept of second- and higher-order effects. The model is developed based on a cause-effect relationship through the observation of phenomena in case studies. The model clarifies the process by which social groups interact with a lower-order effect and then generate a higher-order effect in an iterative manner. This study classifies social groups' responses into three forms (opposing, modifying, and advantage-taking actions) and places them in six pathways. The model is expected to be used as an analytical tool for investigating and identifying impacts in the planning stage and as a framework for monitoring social groups' responses during the implementation stage of a policy, plan, program, or project (PPPPs).
Moreno-Amat, Elena; Rubiales, Juan Manuel; Morales-Molino, César; García-Amorena, Ignacio
2017-08-01
The increasing development of species distribution models (SDMs) using palaeodata has created new prospects to address questions of evolution, ecology and biogeography from wider perspectives. Palaeobotanical data provide information on the past distribution of taxa at a given time and place, and their incorporation into modelling has contributed to advancing the SDM field. This has allowed, for example, calibrating models under past climate conditions or validating projected models calibrated on current species distributions. However, these data also bear certain shortcomings when used in SDMs that may hinder the resulting ecological outcomes and eventually lead to misleading conclusions. Palaeodata may not be equivalent to present data, but instead frequently exhibit limitations and biases regarding species representation, taxonomy and chronological control, and their inclusion in SDMs should be carefully assessed. The limitations of palaeobotanical data applied to SDM studies are infrequently discussed and often neglected in the modelling literature; thus, we argue for the more careful selection and control of these data. We encourage authors to use palaeobotanical data in their SDM studies and, for doing so, we propose some recommendations to improve the robustness, reliability and significance of palaeo-SDM analyses.
Turner, Sean; Galelli, Stefano; Wilcox, Karen
2015-04-01
Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating
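The hidden-climate-state inflow idea can be sketched as a three-regime Markov chain driving regime-specific AR(1) inflow models; the transition probabilities and AR parameters below are invented for illustration, not the study's fitted values for South-East Australia:

```python
import numpy as np

# Hidden climate regime ("normal", "wet", "dry") as a Markov chain; each
# regime has its own AR(1) inflow model. All numbers are illustrative.
rng = np.random.default_rng(1)
P = np.array([[0.80, 0.10, 0.10],   # rows: from-state; cols: to-state
              [0.30, 0.60, 0.10],
              [0.30, 0.05, 0.65]])
mu  = np.array([100.0, 150.0, 60.0])  # regime-dependent mean inflow
phi = np.array([0.5, 0.4, 0.6])       # AR(1) persistence per regime
sig = np.array([15.0, 25.0, 10.0])    # innovation std dev per regime

T, s, q = 500, 0, 100.0
flows = np.empty(T)
for t in range(T):
    s = rng.choice(3, p=P[s])                                  # regime switch
    q = mu[s] + phi[s] * (q - mu[s]) + rng.normal(0, sig[s])   # AR(1) inflow
    flows[t] = q
print("mean inflow:", flows.mean())
```

In the SDP formulation, these regime-conditional inflow transition probabilities enter the backward recursion, so the release policy becomes a function of storage, antecedent inflow, and the inferred climate state.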
International Nuclear Information System (INIS)
Huang, Zhibin; Mayr, Nina A.; Lo, Simon S.; Wang, Jian Z.; Jia Guang; Yuh, William T. C.; Johnke, Roberta
2012-01-01
Purpose: It has been conventionally assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggests that this may not be the case. Rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown would suggest that the exponential repair pattern does not necessarily predict the repair process accurately. The purpose of this study was therefore to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. Methods: The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage, with T_r as the repair half-time. When a reciprocal pattern of the repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. Published animal data were adopted to test the reciprocal formulas. Results: A generalized LQ model describing the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced minimum χ² of 2.0 vs 4.3, p = 0.11 vs 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, T_r = 3.2-3.9 h for rat feet skin, and α/β = 0.9 Gy, T_r = 1.1 h for rat spinal cord. Conclusions: These results suggest that the generalized LQ model incorporating reciprocal-time repair of sublethal damage fits experimental data better than the exponential repair model. The formulas can be used to analyze experimental and clinical data where a slowing-down repair process appears during the course of radiation therapy.
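The role of the repair kernel can be illustrated with a split-dose LQ calculation; the reciprocal kernel below is an assumed illustrative form (the paper derives its own closed-form G for arbitrary schemes), and the radiobiological parameter values are placeholders:

```python
import numpy as np

# Split-dose LQ sketch: two acute fractions d1, d2 separated by interval dt.
# Lea-Catcheside factor for two instantaneous fractions:
#   G * D^2 = d1^2 + d2^2 + 2*d1*d2*theta(dt),
# where theta(dt) is the fraction of sublethal damage unrepaired after dt.
# theta_exp is the classical exponential kernel; theta_rec is a reciprocal-time
# kernel assumed here for illustration only.
alpha, beta, Tr = 0.3, 0.03, 1.5   # Gy^-1, Gy^-2, hours (illustrative values)

def survival(d1, d2, dt, theta):
    D = d1 + d2
    G = (d1**2 + d2**2 + 2 * d1 * d2 * theta(dt)) / D**2
    return np.exp(-(alpha * D + G * beta * D**2))

theta_exp = lambda t: np.exp(-np.log(2) * t / Tr)   # half-time Tr
theta_rec = lambda t: 1.0 / (1.0 + t / Tr)          # slower decay at long t

for dt in (0.5, 2.0, 8.0):
    print(dt, survival(2, 2, dt, theta_exp), survival(2, 2, dt, theta_rec))
```

Because the reciprocal kernel decays more slowly at long intervals, more unrepaired damage persists, so the predicted survival recovers less with increasing inter-fraction time than under exponential repair, which is the qualitative behavior the animal data favor.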
Matthews, S.; Lovell, M.; Davies, S. J.; Pritchard, T.; Sirju, C.; Abdelkarim, A.
2012-12-01
Heterolithic or 'shaly' sandstone reservoirs constitute a significant proportion of hydrocarbon resources. Petroacoustic models (a combination of petrophysics and rock physics) enhance the ability to extract reservoir properties from seismic data, providing a connection between seismic and fine-scale rock properties. By incorporating sedimentological observations, these models can be better constrained and improved. Petroacoustic modelling is complicated by the unpredictable effects of clay minerals and clay-sized particles on geophysical properties. Such effects are responsible for erroneous results when models developed for "clean" reservoirs - such as Gassmann's equation (Gassmann, 1951) - are applied to heterolithic sandstone reservoirs. Gassmann's equation is arguably the most popular petroacoustic modelling technique in the hydrocarbon industry and is used to model the elastic effects of changing reservoir fluid saturations. Successful implementation of Gassmann's equation requires well-constrained drained rock frame properties, which in heterolithic sandstones are heavily influenced by reservoir sedimentology, particularly clay distribution. The prevalent approach to categorising clay distribution is based on the Thomas-Stieber model (Thomas & Stieber, 1975); this approach is inconsistent with current understanding of 'shaly sand' sedimentology and omits properties such as sorting and grain size. The novel approach presented here demonstrates that characterising reservoir sedimentology constitutes an important modelling phase. As well as incorporating sedimentological constraints, this novel approach also aims to improve drained frame moduli estimates through more careful consideration of Gassmann's model assumptions and limitations. A key assumption of Gassmann's equation is a pore space in total communication with movable fluids. This assumption is often violated by conventional applications in heterolithic sandstone reservoirs where effective porosity, which
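Gassmann's equation itself, whose assumptions the abstract scrutinises, computes the saturated bulk modulus from the drained frame modulus, mineral modulus, fluid modulus and porosity; a minimal sketch with illustrative values for a clean, well-connected sandstone (precisely the conditions that are strained in heterolithic reservoirs):

```python
# Gassmann fluid substitution: saturated bulk modulus K_sat from the drained
# frame modulus K_dry, mineral modulus K_min, fluid modulus K_fl and porosity.
# Input values are illustrative, not from the study.
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    b = 1.0 - k_dry / k_min
    return k_dry + b**2 / (phi / k_fl + (1 - phi) / k_min - k_dry / k_min**2)

k_min, phi, k_dry = 36.0, 0.25, 12.0   # GPa (quartz), fraction, GPa
k_brine, k_gas = 2.8, 0.05             # GPa

print("brine-saturated K_sat:", gassmann_ksat(k_dry, k_min, k_brine, phi), "GPa")
print("gas-saturated   K_sat:", gassmann_ksat(k_dry, k_min, k_gas, phi), "GPa")
```

The strong dependence of K_sat on K_dry is the abstract's point: if the drained frame modulus is poorly constrained because clay distribution is mischaracterised, the fluid-substitution prediction inherits that error.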
Directory of Open Access Journals (Sweden)
Stefan Fürtinger
2014-11-01
Assessing brain activity during complex voluntary motor behaviors that require the recruitment of multiple neural sites is a field of active research. Our current knowledge is primarily based on human brain imaging studies that have clear limitations in terms of temporal and spatial resolution. We developed a physiologically informed non-linear multi-compartment stochastic neural model to simulate functional brain activity coupled with neurotransmitter release during complex voluntary behavior, such as speech production. Due to its state-dependent modulation of neural firing, dopaminergic neurotransmission plays a key role in the organization of functional brain circuits controlling speech and language and thus has been incorporated in our neural population model. A rigorous mathematical proof establishing existence and uniqueness of solutions to the proposed model as well as a computationally efficient strategy to numerically approximate these solutions are presented. Simulated brain activity during the resting state and sentence production was analyzed using functional network connectivity, and graph theoretical techniques were employed to highlight differences between the two conditions. We demonstrate that our model successfully reproduces characteristic changes seen in empirical data between the resting state and speech production, and dopaminergic neurotransmission evokes pronounced changes in modeled functional connectivity by acting on the underlying biological stochastic neural model. Specifically, model and data networks in both speech and rest conditions share task-specific network features: both the simulated and empirical functional connectivity networks show an increase in nodal influence and segregation in speech over the resting state. These commonalities confirm that dopamine is a key neuromodulator of the functional connectome of speech control. Based on reproducible characteristic aspects of empirical data, we suggest a number
Energy Technology Data Exchange (ETDEWEB)
Tucker, Susan L., E-mail: sltucker@mdanderson.org [Department of Bioinformatics and Computational Biology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li Minghuan [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China); Xu Ting; Gomez, Daniel [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Yuan Xianglin [Department of Oncology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan (China); Yu Jinming [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China); Liu Zhensheng; Yin Ming; Guan Xiaoxiang; Wang Lie; Wei Qingyi [Department of Epidemiology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Mohan, Radhe [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Vinogradskiy, Yevgeniy [University of Colorado School of Medicine, Aurora, Colorado (United States); Martel, Mary [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Liao Zhongxing [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)
2013-01-01
Purpose: To determine whether single-nucleotide polymorphisms (SNPs) in genes associated with DNA repair, cell cycle, transforming growth factor-β, tumor necrosis factor and receptor, folic acid metabolism, and angiogenesis can significantly improve the fit of the Lyman-Kutcher-Burman (LKB) normal-tissue complication probability (NTCP) model of radiation pneumonitis (RP) risk among patients with non-small cell lung cancer (NSCLC). Methods and Materials: Sixteen SNPs from 10 different genes (XRCC1, XRCC3, APEX1, MDM2, TGFβ, TNFα, TNFR, MTHFR, MTRR, and VEGF) were genotyped in 141 NSCLC patients treated with definitive radiation therapy, with or without chemotherapy. The LKB model was used to estimate the risk of severe (grade ≥3) RP as a function of mean lung dose (MLD), with SNPs and patient smoking status incorporated into the model as dose-modifying factors. Multivariate analyses were performed by adding significant factors to the MLD model in a forward stepwise procedure, with significance assessed using the likelihood-ratio test. Bootstrap analyses were used to assess the reproducibility of results under variations in the data. Results: Five SNPs were selected for inclusion in the multivariate NTCP model based on MLD alone. SNPs associated with an increased risk of severe RP were in genes for TGFβ, VEGF, TNFα, XRCC1 and APEX1. With smoking status included in the multivariate model, the SNPs significantly associated with increased risk of RP were in genes for TGFβ, VEGF, and XRCC3. Bootstrap analyses selected a median of 4 SNPs per model fit, with the 6 genes listed above selected most often. Conclusions: This study provides evidence that SNPs can significantly improve the predictive ability of the Lyman MLD model. With a small number of SNPs, it was possible to distinguish cohorts with >50% risk vs <10% risk of RP when they were exposed to high MLDs.
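The LKB model used here is a probit function of (effective) mean lung dose. A minimal sketch with invented parameter values; the generic dose-modifying factor stands in for a risk SNP or smoking-status effect, as in the abstract's multivariate model:

```python
from math import erf, sqrt

def lkb_ntcp(mld, td50, m, dose_modifying_factor=1.0):
    """LKB NTCP as a probit function of mean lung dose (MLD). A
    dose-modifying factor > 1 (a stand-in for a risk SNP or smoking
    status) shifts the effective dose upward. Parameter values in the
    example below are illustrative, not the study's fitted values."""
    t = (mld * dose_modifying_factor - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

baseline = lkb_ntcp(mld=20.0, td50=30.0, m=0.4)
with_snp = lkb_ntcp(mld=20.0, td50=30.0, m=0.4, dose_modifying_factor=1.5)
```

This shows how a single multiplicative factor can separate low-risk and high-risk cohorts at the same physical MLD.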
Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei
2016-04-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration is reliant on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-qualitative information into robust quantitative constraints of model states and fluxes, and combine these sources of information together to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat
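The rejection idea described above (keep only parameter sets whose simulated states fall inside every acceptability interval, i.e. inside the hyper-volume in state space) can be sketched with a toy model; `toy_model`, its parameters, and the intervals are all invented for illustration:

```python
import random

def behavioural(params, simulate, constraints):
    """A parameter set is behavioural only if every simulated state or
    flux lies inside its acceptability interval; the intervals jointly
    define a hyper-volume in state space."""
    states = simulate(params)
    return all(lo <= states[name] <= hi for name, (lo, hi) in constraints.items())

# Toy stand-in for a distributed catchment model (names/intervals invented).
def toy_model(p):
    return {"outlet_q": 2.0 * p["k"], "gw_level": p["s"] - p["k"]}

constraints = {"outlet_q": (1.0, 3.0), "gw_level": (0.0, 1.0)}
rng = random.Random(1)
sample = [{"k": rng.uniform(0.0, 2.0), "s": rng.uniform(0.0, 3.0)}
          for _ in range(1000)]
kept = [p for p in sample if behavioural(p, toy_model, constraints)]
```

Each added interval (e.g. a relative groundwater-level constraint from stakeholder knowledge) shrinks the behavioural set further than outlet discharge alone.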
Incorporation of defects into the central atoms model of a metallic glass
International Nuclear Information System (INIS)
Lass, Eric A.; Zhu Aiwu; Shiflet, G.J.; Joseph Poon, S.
2011-01-01
The central atoms model (CAM) of a metallic glass is extended to incorporate thermodynamically stable defects, similar to vacancies in a crystalline solid, within the amorphous structure. A bond deficiency (BD), which is the proposed defect present in all metallic glasses, is introduced into the CAM equations. Like vacancies in a crystalline solid, BDs are thermodynamically stable entities because of the increase in entropy associated with their creation, and there is an equilibrium concentration present in the glassy phase. When applied to Cu-Zr and Ni-Zr binary metallic glasses, the concentration of thermally induced BDs surrounding Zr atoms reaches a relatively constant value at the glass transition temperature, regardless of composition within a given glass system. Using this 'critical' defect concentration, the predicted temperatures at which the glass transition is expected to occur are in good agreement with the experimentally determined glass transition temperatures for both alloy systems.
Hawkins, Roland B
2018-01-01
An expression for the surviving fraction of a replicating population of cells exposed to a permanently incorporated radionuclide is derived from the microdosimetric-kinetic model. It includes dependency on total implant dose, linear energy transfer (LET), decay rate of the radionuclide, the repair rate of potentially lethal lesions in DNA and the volume doubling time of the target population. This is used to obtain an expression for the biologically effective dose (BED_α/β) based on the minimum survival achieved by the implant that is equivalent to, and can be compared and combined with, the BED_α/β calculated for a fractionated course of radiation treatment. Approximate relationships are presented that are useful in the calculation of BED_α/β for alpha- or beta-emitting radionuclides with half-life significantly greater than, or nearly equal to, the approximately 1-h repair half-life of radiation-induced potentially lethal lesions.
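For orientation, the standard permanent-implant BED approximation from the brachytherapy literature (a Dale-type formula, neglecting repopulation) captures the competition between the radionuclide decay rate and the lesion repair rate; it is a simpler stand-in for the paper's full microdosimetric-kinetic expression, and the numbers below are illustrative:

```python
from math import log

def bed_permanent_implant(total_dose, decay_rate, repair_rate, alpha_beta):
    """BED for a permanent radionuclide implant in the standard
    Dale-type approximation (repopulation neglected): the protraction
    factor shrinks as repair outpaces dose delivery."""
    g = total_dose * decay_rate / ((repair_rate + decay_rate) * alpha_beta)
    return total_dose * (1.0 + g)

# Illustrative numbers: ~60-day radionuclide half-life vs ~1-h repair half-life.
lam = log(2.0) / (60.0 * 24.0)   # decay rate per hour
mu = log(2.0) / 1.0              # repair rate per hour
bed = bed_permanent_implant(100.0, lam, mu, alpha_beta=3.0)
# when decay is much slower than repair, BED barely exceeds the physical dose
```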
Ma, Songling; Hwang, Sungbo; Lee, Sehan; Acree, William E; No, Kyoung Tai
2018-04-23
To describe the physically realistic solvation free energy surface of a molecule in a solvent, a generalized version of the solvation free energy density (G-SFED) calculation method has been developed. In the G-SFED model, the contribution from the hydrogen bond (HB) between a solute and a solvent to the solvation free energy was calculated as the product of the acidity of the donor and the basicity of the acceptor of an HB pair. The acidity and basicity parameters of a solute were derived using the summation of acidities and basicities of the respective acidic and basic functional groups of the solute, and that of the solvent was experimentally determined. Although the contribution of HBs to the solvation free energy could be evenly distributed to grid points on the surface of a molecule, the G-SFED model was still inadequate to describe the angle dependency of the HB of a solute with a polarizable continuum solvent. To overcome this shortcoming of the G-SFED model, the contribution of HBs was formulated using the geometric parameters of the grid points described in the HB coordinate system of the solute. We propose an HB angle dependency incorporated into the G-SFED model, i.e., the G-SFED-HB model, where the angular-dependent acidity and basicity densities are defined and parametrized with experimental data. The G-SFED-HB model was then applied to calculate the solvation free energies of organic molecules in water, various alcohols and ethers, and the log P values of diverse organic molecules, including peptides and a protein. Both the G-SFED model and the G-SFED-HB model reproduced the experimental solvation free energies with similar accuracy, whereas the distributions of the SFED on the molecular surface calculated by the G-SFED and G-SFED-HB models were quite different, especially for molecules having HB donors or acceptors. Since the angle dependency of HBs was included in the G-SFED-HB model, the SFED distribution of the G-SFED-HB model is well described
Directory of Open Access Journals (Sweden)
Stuart Bartlett
2017-08-01
The lattice Boltzmann method is an efficient computational fluid dynamics technique that can accurately model a broad range of complex systems. As well as single-phase fluids, it can simulate thermohydrodynamic systems and passive scalar advection. In recent years, it has also gained attention as a means of simulating chemical phenomena, as interest in self-organization processes has increased. This paper will present a widely-used and versatile lattice Boltzmann model that can simultaneously incorporate fluid dynamics, heat transfer, buoyancy-driven convection, passive scalar advection, chemical reactions and enthalpy changes. All of these effects interact in a physically accurate framework that is simple to code and readily parallelizable. As well as a complete description of the model equations, several example systems will be presented in order to demonstrate the accuracy and versatility of the method. New simulations, which analyzed the effect of a reversible reaction on the transport properties of a convecting fluid, will also be described in detail. This extra chemical degree of freedom was utilized by the system to augment its net heat flux. The numerical method outlined in this paper can be readily deployed for a vast range of complex flow problems, spanning a variety of scientific disciplines.
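The collide-and-stream structure underlying all lattice Boltzmann schemes can be shown in its simplest form: a D1Q3 model for pure passive-scalar diffusion with periodic boundaries (a minimal sketch, not the paper's full thermohydrodynamic model):

```python
W = [1 / 6, 2 / 3, 1 / 6]   # D1Q3 weights
C = [-1, 0, 1]              # lattice velocities
TAU = 1.0                   # relaxation time; diffusivity = (TAU - 0.5) / 3

def lbm_step(f):
    """One collide-and-stream update of a D1Q3 lattice Boltzmann scheme
    for pure diffusion of a passive scalar, periodic boundaries."""
    n = len(f[0])
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    # BGK collision: relax each population toward the equilibrium W[i] * rho
    post = [[f[i][x] + (W[i] * rho[x] - f[i][x]) / TAU for x in range(n)]
            for i in range(3)]
    # streaming: population i advects C[i] lattice sites per time step
    return [[post[i][(x - C[i]) % n] for x in range(n)] for i in range(3)]

# A density spike diffuses outward while total mass is conserved.
n = 32
f = [[W[i] * (1.0 if x == n // 2 else 0.0) for x in range(n)] for i in range(3)]
for _ in range(100):
    f = lbm_step(f)
rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
```

The fuller models in the paper add more velocity directions and coupled distributions for temperature and reacting species, but keep this same local update, which is what makes the method readily parallelizable.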
Incorporation of a Wind Generator Model into a Dynamic Power Flow Analysis
Directory of Open Access Journals (Sweden)
Angeles-Camacho C.
2011-07-01
Wind energy is nowadays one of the most cost-effective and practical options for electric generation from renewable resources. However, increased penetration of wind generation causes the power networks to be more dependent on, and vulnerable to, the varying wind speed. Modeling is a tool which can provide valuable information about the interaction between wind farms and the power network to which they are connected. This paper develops a realistic characterization of a wind generator. The wind generator model is incorporated into an algorithm to investigate its contribution to the stability of the power network in the time domain. The tool obtained is termed dynamic power flow. The wind generator model takes into account the wind speed and the reactive power consumption by induction generators. Dynamic power flow analysis is carried out using real wind data at 10-minute intervals collected for one meteorological station. The generation injected at one point into the network provides active power locally and is found to reduce global power losses. However, the power supplied is time-varying and causes fluctuations in voltage magnitude and power flows in transmission lines.
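The time-varying injection that makes the power flow "dynamic" can be illustrated with a generic turbine power curve; the cut-in, rated, and cut-out values below are invented for illustration, not the paper's generator model:

```python
def wind_power_mw(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2.0):
    """Illustrative turbine power curve: cubic rise between cut-in and
    rated wind speed, constant at rated power, zero outside the
    operating band. All thresholds are hypothetical."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_p
    return rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3

# A 10-minute wind-speed series (m/s) maps to a time-varying injection (MW).
series = [2.0, 6.0, 13.0, 26.0]
injections = [wind_power_mw(v) for v in series]
```

Feeding such a series into a load-flow solver at each time step, together with the induction generator's reactive demand, yields the fluctuating voltages and line flows the abstract describes.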
International Nuclear Information System (INIS)
Ma, Jie; Wang, Bo; Zhao, Shunli; Wu, Guangxin; Zhang, Jieyu; Yang, Zhiliang
2016-01-01
We have extended the dendritic growth model first proposed by Boettinger, Coriell and Trivedi (here termed EBCT) for microstructure simulations of rapidly solidified non-dilute alloys. The temperature-dependent distribution coefficient, obtained from calculations of phase equilibria, and the continuous growth model (CGM) were adopted in the present EBCT model to describe the solute trapping behaviors. The temperature dependence of the physical properties, which were not used in previous dendritic growth models, were also considered in the present EBCT model. These extensions allow the present EBCT model to be used for microstructure simulations of non-dilute alloys. The comparison of the present EBCT model with the BCT model proves that the considerations of the distribution coefficient and physical properties are necessary for microstructure simulations, especially for small particles with high undercoolings. Finally, the EBCT model was incorporated into the cellular automaton-finite element (CAFE) model to simulate microstructures of gas-atomized ASP30 high speed steel particles that were then compared with experimental results. Both the simulated and experimental results reveal that a columnar dendritic microstructure preferentially forms in small particles and an equiaxed microstructure forms otherwise. The applications of the present EBCT model provide a convenient way to predict the microstructure of non-dilute alloys. - Highlights: • A dendritic growth model was developed considering non-equilibrium distribution coefficient. • The physical properties with temperature dependence were considered in the extended model. • The extended model can be used to non-dilute alloys and the extensions are necessary in small particles. • Microstructure of ASP30 steel was investigated using the present model and verified by experiment.
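The continuous growth model (CGM) adopted here for solute trapping is commonly written in Aziz's form, in which the partition coefficient rises from its equilibrium value toward unity as the interface velocity exceeds the diffusive speed; values in the example are illustrative:

```python
def cgm_partition_coefficient(v, k_e, v_d):
    """Velocity-dependent partition coefficient of the continuous growth
    model (Aziz form): k -> k_e for slow growth, k -> 1 (complete solute
    trapping) when the interface velocity v greatly exceeds the
    diffusive speed v_d. Parameter values below are illustrative."""
    return (k_e + v / v_d) / (1.0 + v / v_d)

# Partitioning weakens (k rises toward 1) with increasing growth velocity:
ks = [cgm_partition_coefficient(v, k_e=0.3, v_d=1.0) for v in (0.0, 1.0, 100.0)]
```

In the extended model the equilibrium coefficient k_e is itself temperature-dependent, taken from phase-equilibrium calculations rather than held constant.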
Mills, Kyle; Tamblyn, Isaac
2018-03-01
We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4×4 Ising model. Using its success at this task, we motivate the study of the larger 8×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
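The regression target the network learns is just the nearest-neighbor Ising energy of a spin configuration. A minimal reference implementation on a periodic L×L lattice (the labels a training set would be built from):

```python
def ising_energy(spins):
    """Nearest-neighbor Ising energy E = -J * sum over bonds of s_i s_j
    with J = 1, on an L x L lattice with periodic boundaries -- the
    quantity the network is trained to predict."""
    L = len(spins)
    e = 0
    for i in range(L):
        for j in range(L):
            s = spins[i][j]
            e -= s * spins[(i + 1) % L][j]   # bond to the neighbor below
            e -= s * spins[i][(j + 1) % L]   # bond to the neighbor to the right
    return e

# Fully aligned 4x4 configuration: 2 bonds per site, so E = -2 * 16 = -32.
aligned = [[1] * 4 for _ in range(4)]
```

Counting only the "down" and "right" bond per site visits each of the 2L² bonds exactly once.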
Quantum entanglement and criticality of the antiferromagnetic Heisenberg model in an external field
International Nuclear Information System (INIS)
Liu Guanghua; Li Ruoyan; Tian Guangshan
2012-01-01
By Lanczos exact diagonalization and the infinite time-evolving block decimation (iTEBD) technique, the two-site entanglement as well as the bipartite entanglement, the ground state energy, the nearest-neighbor correlations, and the magnetization in the antiferromagnetic Heisenberg (AFH) model under an external field are investigated. With increasing external field, the small-size system shows some distinct upward magnetization stairsteps, accompanied synchronously by some downward two-site entanglement stairsteps. In the thermodynamic limit, the two-site entanglement, as well as the bipartite entanglement, the ground state energy, the nearest-neighbor correlations, and the magnetization are calculated, and the critical magnetic field h_c = 2.0 is determined exactly. Our numerical results show that the quantum entanglement is sensitive to the subtle changing of the ground state, and can be used to describe the magnetization and quantum phase transition. Based on the discontinuous behavior of the first-order derivative of the entanglement entropy and fidelity per site, we think that the quantum phase transition in this model should belong to the second-order category. Furthermore, in the magnon existence region (h < 2.0), a logarithmically divergent behavior of block entanglement which can be described by a free bosonic field theory is observed, and the central charge c is determined to be 1.
International Nuclear Information System (INIS)
Musho, M.K.; Kozak, J.J.
1984-01-01
A method is presented for calculating exactly the relative width (σ²)^{1/2}/⟨n⟩, the skewness γ₁, and the kurtosis γ₂ characterizing the probability distribution function for three random-walk models of diffusion-controlled processes. For processes in which a diffusing coreactant A reacts irreversibly with a target molecule B situated at a reaction center, three models are considered. The first is the traditional one of an unbiased, nearest-neighbor random walk on a d-dimensional periodic/confining lattice with traps; the second involves the consideration of unbiased, non-nearest-neighbor (i.e., variable-step length) walks on the same d-dimensional lattice; and the third deals with the case of a biased, nearest-neighbor walk on a d-dimensional lattice (wherein a walker experiences a potential centered at the deep trap site of the lattice). Our method, which has been described in detail elsewhere [P. A. Politowicz and J. J. Kozak, Phys. Rev. B 28, 5549 (1983)], is based on the use of group theoretic arguments within the framework of the theory of finite Markov processes.
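The first of the three models (an unbiased nearest-neighbor walk ending at a trap) is easy to simulate. The Monte Carlo below estimates moments of the walk-length distribution on a 1-D periodic lattice for brevity, rather than computing them exactly via finite Markov chains as the paper does:

```python
import random

def walk_length_to_trap(n_sites, rng):
    """Steps taken by an unbiased nearest-neighbor walker on a 1-D
    periodic lattice before first reaching the trap at site 0, starting
    from a uniformly random non-trap site."""
    site = rng.randrange(1, n_sites)
    steps = 0
    while site != 0:
        site = (site + rng.choice((-1, 1))) % n_sites
        steps += 1
    return steps

# Monte Carlo estimates of the mean and variance of the walk length
# (N = 8 sites; the exact mean over a uniform start is N*(N+1)/6 = 12).
rng = random.Random(0)
lengths = [walk_length_to_trap(8, rng) for _ in range(2000)]
mean = sum(lengths) / len(lengths)
var = sum((x - mean) ** 2 for x in lengths) / len(lengths)
```

The relative width, skewness, and kurtosis in the abstract are the corresponding standardized moments of this same distribution, obtained there exactly rather than by sampling.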
International Nuclear Information System (INIS)
Koltsaklis, Nikolaos E.; Georgiadis, Michael C.
2015-01-01
Highlights: • A short-term structured investment planning model has been developed. • Unit commitment problem is incorporated into the long-term planning horizon. • Inherent intermittency of renewables is modelled in a comprehensive way. • The impact of CO_2 emission pricing in long-term investment decisions is quantified. • The evolution of system’s marginal price is evaluated for all the planning horizon. - Abstract: This work presents a generic mixed integer linear programming (MILP) model that integrates the unit commitment problem (UCP), i.e., daily energy planning with the long-term generation expansion planning (GEP) framework. Typical daily constraints at an hourly level such as start-up and shut-down related decisions (start-up type, minimum up and down time, synchronization, soak and desynchronization time constraints), ramping limits, system reserve requirements are combined with representative yearly constraints such as power capacity additions, power generation bounds of each unit, peak reserve requirements, and energy policy issues (renewables penetration limits, CO_2 emissions cap and pricing). For modelling purposes, a representative day (24 h) of each month over a number of years has been employed in order to determine the optimal capacity additions, electricity market clearing prices, and daily operational planning of the studied power system. The model has been tested on an illustrative case study of the Greek power system. Our approach aims to provide useful insight into strategic and challenging decisions to be determined by investors and/or policy makers at a national and/or regional level by providing the optimal energy roadmap under real operating and design constraints.
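The flavor of the unit-commitment layer in such a formulation can be conveyed by a deliberately tiny example, solved here by exhaustive enumeration instead of MILP; all capacities, costs, and demands are invented for illustration:

```python
from itertools import product

DEMAND = [60, 100, 140, 90]          # MWh per hour (invented)
UNITS = [                            # (capacity MW, fuel cost/MWh, start-up cost)
    (100, 20.0, 0.0),                # cheap baseload unit
    (80, 35.0, 500.0),               # expensive peaker
]

def schedule_cost(on):
    """Cost of a commitment schedule on[u][t] (1 = unit u committed in
    hour t), or None if committed capacity cannot cover demand."""
    cost = 0.0
    for t, d in enumerate(DEMAND):
        cap = sum(UNITS[u][0] for u in range(2) if on[u][t])
        if cap < d:
            return None
        remaining = d
        # dispatch committed units in merit order (cheapest fuel first)
        for u in sorted(range(2), key=lambda u: UNITS[u][1]):
            if on[u][t]:
                gen = min(remaining, UNITS[u][0])
                cost += gen * UNITS[u][1]
                remaining -= gen
        for u in range(2):
            if on[u][t] and (t == 0 or not on[u][t - 1]):
                cost += UNITS[u][2]          # start-up cost
    return cost

schedules = ((a, b) for a in product((0, 1), repeat=4)
             for b in product((0, 1), repeat=4))
best = min((c, on) for on in schedules if (c := schedule_cost(on)) is not None)
```

The real model adds minimum up/down times, ramping, reserves, and the yearly investment variables, which is why a MILP solver is needed instead of enumeration.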
Pal, David; Jaffe, Peter
2015-04-01
Estimates of global CH4 emissions from wetlands indicate that wetlands are the largest natural source of CH4 to the atmosphere. In this paper, we propose that there is a missing component to these models that should be addressed. CH4 is produced in wetland sediments from the microbial degradation of organic carbon through multiple fermentation steps and methanogenesis pathways. There are multiple sources of carbon for methanogenesis; in vegetated wetland sediments, microbial communities consume root exudates as a major source of organic carbon. In many methane models propionate is used as a model carbon molecule. This short-chain fatty acid is fermented into acetate and H2; the acetate is transformed to methane and CO2, while the H2 and CO2 are used to form an additional CH4 molecule. The hydrogenotrophic pathway involves the equilibrium of two dissolved gases, CH4 and H2. In an effort to limit CH4 emissions from wetlands, there has been growing interest in finding ways to limit plant transport of soil gases through root systems. Changing planted species, or genetically modifying new species of plants, may control this transport of soil gases. While this may decrease the direct emissions of methane, there is little understanding of how H2 dynamics may feed back into overall methane production. The results of an incubation study were combined with a new model of propionate degradation for methanogenesis that also examines other natural parameters (i.e. gas transport through plants). This presentation examines how we would expect this model to behave in a natural field setting with changing sulfate and carbon loading schemes. These changes can be controlled through new plant species and other management practices. Next, we compare the behavior of two variations of this model, with or without the incorporation of H2 interactions, with changing sulfate, carbon loading and root volatilization. Results show that while the models behave similarly there may be a discrepancy of nearly
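The stoichiometric bookkeeping behind the pathway described above, assuming the textbook reactions (this is yield accounting, not the kinetic model of the paper):

```python
# Assumed reactions (moles per mole of propionate):
#   propionate + 3 H2O -> acetate + HCO3- + H+ + 3 H2
#   acetate -> CH4 + CO2                  (acetoclastic methanogenesis)
#   4 H2 + CO2 -> CH4 + 2 H2O            (hydrogenotrophic methanogenesis)

def ch4_per_propionate(h2_fraction_to_methanogens=1.0):
    """CH4 yield per propionate. The fraction of H2 actually reaching
    hydrogenotrophic methanogens can drop below 1, e.g. through
    root-mediated volatilization or competition from sulfate reducers."""
    acetoclastic = 1.0
    hydrogenotrophic = 3.0 * h2_fraction_to_methanogens / 4.0
    return acetoclastic + hydrogenotrophic
```

This makes the abstract's point concrete: diverting or venting H2 removes up to 3/4 of a mole of CH4 per mole of propionate, which is why ignoring H2 dynamics can bias emission estimates.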
Dzul, Maria C.; Yackulic, Charles B.; Korman, Josh
2017-01-01
Autonomous passive integrated transponder (PIT) tag antenna systems continuously detect individually marked organisms at one or more fixed points over long time periods. Estimating abundance using data from autonomous antennae can be challenging, because these systems do not detect unmarked individuals. Here we pair PIT antenna data from a tributary with mark-recapture sampling data in a mainstem river to estimate the number of fish moving from the mainstem to the tributary. We then use our model to estimate the abundance of non-native rainbow trout Oncorhynchus mykiss that move from the Colorado River to the Little Colorado River (LCR), the latter of which is important spawning and rearing habitat for the federally endangered humpback chub Gila cypha. We estimate that 226 rainbow trout (95% CI: 127-370) entered the LCR from October 2013 to April 2014. We discuss the challenges of incorporating detections from autonomous PIT antenna systems into mark-recapture population models, particularly in regard to using information about spatial location to estimate movement and detection probabilities.
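The basic logic of turning marked/recaptured counts into an abundance estimate can be shown with the Chapman-corrected Lincoln-Petersen estimator; this is far simpler than the paper's model, and the counts below are hypothetical:

```python
def chapman_abundance(n_marked, n_captured, n_recaptured):
    """Chapman-corrected Lincoln-Petersen abundance estimator: the
    marked fraction of a later sample scales up the number marked.
    Assumes a closed population and equal catchability."""
    return (n_marked + 1) * (n_captured + 1) / (n_recaptured + 1) - 1

# Hypothetical numbers: 50 PIT-tagged fish, 60 captured later, 12 recaptures.
est = chapman_abundance(50, 60, 12)
```

The paper's problem is harder precisely because autonomous antennae supply only the "recapture" side for marked fish, so movement and detection probabilities must be modeled jointly.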
Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye
2014-01-01
This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity in solving equations involving trigonometric functions. All inliers found are used to refine the winner solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
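The hypothesize-vote-refine loop the abstract describes is generic RANSAC. A minimal sketch on a toy line-fitting problem standing in for motion estimation (the data and fitting function are invented; the paper's closed-form solution plays the role of `fit` here):

```python
import random

def ransac(data, fit_minimal, residual, min_samples, threshold, iters=200,
           rng=random):
    """Generic RANSAC: a minimal sample generates a closed-form
    hypothesis, all points vote via their residuals, and the winning
    hypothesis is refit on its full inlier set."""
    best_inliers = []
    for _ in range(iters):
        sample = rng.sample(data, min_samples)
        model = fit_minimal(sample)
        inliers = [d for d in data if residual(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit_minimal(best_inliers), best_inliers

# Toy problem: recover a slope of 0.5 despite gross outliers.
random.seed(2)
points = [(x, 0.5 * x + random.gauss(0, 0.01)) for x in range(1, 21)]
points += [(x, random.uniform(-5.0, 5.0)) for x in range(1, 7)]

def fit(sample):
    # least-squares slope of a line through the origin (the 'closed form')
    return sum(x * y for x, y in sample) / sum(x * x for x, y in sample)

slope, inliers = ransac(points, fit,
                        lambda m, d: abs(d[1] - m * d[0]),
                        min_samples=2, threshold=0.1)
```

In the paper the final refinement minimizes reprojection error over all inliers rather than reusing the minimal-sample fit.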
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.
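The simplest of the listed methods, regression calibration under classical measurement error, can be sketched on simulated data (all variables and values are invented; real applications estimate the reliability ratio from validation data):

```python
import random

def slope(u, v):
    """Ordinary least-squares slope of v regressed on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v)) /
            sum((a - mu) ** 2 for a in u))

# Simulated cohort: true exposure x, error-prone modeled exposure w, outcome y.
random.seed(3)
n, beta_true = 5000, 2.0
x = [random.gauss(0.0, 1.0) for _ in range(n)]
w = [xi + random.gauss(0.0, 1.0) for xi in x]          # classical error
y = [beta_true * xi + random.gauss(0.0, 0.5) for xi in x]

beta_naive = slope(w, y)        # attenuated toward zero by measurement error
lam = slope(w, x)               # reliability ratio, from 'validation' data
beta_corrected = beta_naive / lam
```

With an error variance equal to the exposure variance, the naive slope is attenuated by about half, and dividing by the reliability ratio recovers the true effect, at the cost of a wider standard error, as the review notes.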
Wang, Zi Shuai; Sha, Wei E. I.; Choy, Wallace C. H.
2016-12-01
Modeling the charge-generation process is highly important to understand device physics and optimize power conversion efficiency of bulk-heterojunction organic solar cells (OSCs). Free carriers are generated by both ultrafast exciton delocalization and slow exciton diffusion and dissociation at the heterojunction interface. In this work, we developed a systematic numerical simulation to describe the charge-generation process by a modified drift-diffusion model. The transport, recombination, and collection of free carriers are incorporated to fully capture the device response. The theoretical results match well with the state-of-the-art high-performance organic solar cells. It is demonstrated that the increase of exciton delocalization ratio reduces the energy loss in the exciton diffusion-dissociation process, and thus, significantly improves the device efficiency, especially for the short-circuit current. By changing the exciton delocalization ratio, OSC performances are comprehensively investigated under the conditions of short-circuit and open-circuit. Particularly, bulk recombination dependent fill factor saturation is unveiled and understood. As a fundamental electrical analysis of the delocalization mechanism, our work is important to understand and optimize the high-performance OSCs.
Khan, Tanvir R.; Perlinger, Judith A.
2017-10-01
the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
Directory of Open Access Journals (Sweden)
T. R. Khan
2017-10-01
µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
Spinal motor control system incorporates an internal model of limb dynamics.
Shimansky, Y P
2000-10-01
The existence and utilization of an internal representation of the controlled object is one of the most important features of the functioning of neural motor control systems. This study demonstrates that this property already exists at the level of the spinal motor control system (SMCS), which is capable of generating motor patterns for reflex rhythmic movements, such as locomotion and scratching, without the aid of peripheral afferent feedback, but substantially modifies the generated activity in response to peripheral afferent stimuli. The SMCS is presented as an optimal control system whose optimality requires that it incorporate an internal model (IM) of the controlled object's dynamics. A novel functional mechanism for the integration of peripheral sensory signals with the corresponding predictive output from the IM, termed the summation of information precision (SIP), is proposed. In contrast to other models in which the correction of the internal representation of the controlled object's state is based on the calculation of a mismatch between the internal and external information sources, the SIP mechanism merges the information from these sources in order to optimize the precision of the estimate of the controlled object's state. It is demonstrated, using scratching in decerebrate cats as an example of the spinal control of goal-directed movements, that the results of computer modeling agree with experimental observations of the SMCS's reactions to phasic and tonic peripheral afferent stimuli. It is also shown that the functional requirements imposed by the mathematical model of the SMCS comply with current knowledge about the related properties of spinal neuronal circuitry. The crucial role of the spinal presynaptic inhibition mechanism in the neuronal implementation of SIP is elucidated. Important differences between the IM and a state predictor employed to compensate for a neural reflex time delay are discussed.
Truncated Calogero-Sutherland models
Pittman, S. M.; Beau, M.; Olshanii, M.; del Campo, A.
2017-05-01
A one-dimensional quantum many-body system consisting of particles confined in a harmonic potential and subject to finite-range two-body and three-body inverse-square interactions is introduced. The range of the interactions is set by truncation beyond a number of neighbors and can be tuned to interpolate between the Calogero-Sutherland model and a system with nearest- and next-nearest-neighbor interactions discussed by Jain and Khare. The model also includes the Tonks-Girardeau gas describing impenetrable bosons, as well as an extension of it with truncated interactions. While the ground-state wave function takes a truncated Bijl-Jastrow form, collective modes of the system are found in terms of multivariable symmetric polynomials. We numerically compute the density profile, one-body reduced density matrix, and momentum distribution of the ground state as a function of the range r and the interaction strength.
Andre, B. J.; Rajaram, H.; Silverstein, J.
2010-12-01
diffusion model at the scale of a single rock is developed incorporating the proposed kinetic rate expressions. Simulations of initiation, washout and AMD flows are discussed to gain a better understanding of the role of porosity, effective diffusivity and reactive surface area in generating AMD. Simulations indicate that flow boundary conditions control generation of acid rock drainage as porosity increases.
Phase diagram of the Kondo-Heisenberg model on honeycomb lattice with geometrical frustration
Li, Huan; Song, Hai-Feng; Liu, Yu
2016-11-01
We calculated the phase diagram of the Kondo-Heisenberg model on a two-dimensional honeycomb lattice with both nearest-neighbor and next-nearest-neighbor antiferromagnetic spin exchanges, to investigate the interplay between RKKY and Kondo interactions in the presence of magnetic frustration. Within a mean-field decoupling technique in the slave-fermion representation, we derived the zero-temperature phase diagram as a function of the Kondo coupling J_k and frustration strength Q. Geometrical frustration can destroy the magnetic order, driving the original antiferromagnetic (AF) phase to non-magnetic valence bond solids (VBS); in addition, we found two distinct VBS phases. As J_k is increased, a phase transition from the AF to the Kondo paramagnetic (KP) phase occurs, without the intermediate phase, found in square-lattice systems, in which AF order coexists with Kondo screening. In the KP phase, increasing frustration weakens the Kondo screening effect, resulting in a phase transition from KP to VBS. We also found that the AF order can be recovered from the VBS by increasing J_k over a wide range of frustration strengths. Our work may provide predictions for future experimental observation of new quantum phase transitions in frustrated heavy-fermion compounds.
Phase transitions in an Ising model for monolayers of coadsorbed atoms
International Nuclear Information System (INIS)
Lee, H.H.; Landau, D.P.
1979-01-01
A Monte Carlo method is used to study a simple S=1 Ising (lattice-gas) model appropriate for monolayers composed of two kinds of atoms on cubic metal substrates, H = K_nn Σ_nn S_iz² S_jz² + J_nnn Σ_nnn S_iz S_jz + Δ Σ_i S_iz² (where nn denotes nearest-neighbor and nnn next-nearest-neighbor pairs). The phase diagram is determined over a wide range of Δ and T for K_nn/J_nnn = 1/4. For small (or negative) Δ we find an antiferromagnetic 2×1 ordered phase separated from the disordered state by a line of second-order phase transitions. The 2×1 phase is separated by a line of first-order transitions from a c(2×2) phase which appears for larger Δ. The 2×1 and c(2×2) phases become simultaneously critical at a bicritical point, and the phase boundary of the c(2×2) → disordered transition shows a tricritical point.
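A single-site Metropolis sketch for a Hamiltonian of this form might look like the following (assuming a periodic square lattice with four nearest and four next-nearest neighbors; the lattice size, temperature, and sweep count are illustrative, not the values from the study):

```python
import math, random

def local_energy(spins, i, j, K, J, delta):
    """Energy terms involving site (i, j) in
    H = K Σ_nn S_iz² S_jz² + J Σ_nnn S_iz S_jz + Δ Σ_i S_iz²
    on a periodic square lattice, with S ∈ {-1, 0, +1}."""
    L = len(spins)
    s = spins[i][j]
    nn = [spins[(i + 1) % L][j], spins[(i - 1) % L][j],
          spins[i][(j + 1) % L], spins[i][(j - 1) % L]]
    nnn = [spins[(i + 1) % L][(j + 1) % L], spins[(i + 1) % L][(j - 1) % L],
           spins[(i - 1) % L][(j + 1) % L], spins[(i - 1) % L][(j - 1) % L]]
    return (K * s * s * sum(t * t for t in nn)
            + J * s * sum(nnn)
            + delta * s * s)

def metropolis_sweep(spins, T, K, J, delta):
    """One sweep: propose a new spin value at a random site and accept it
    with the Metropolis probability min(1, exp(-ΔE/T))."""
    L = len(spins)
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        old = spins[i][j]
        e_old = local_energy(spins, i, j, K, J, delta)
        spins[i][j] = random.choice([-1, 0, 1])
        e_new = local_energy(spins, i, j, K, J, delta)
        if e_new > e_old and random.random() >= math.exp(-(e_new - e_old) / T):
            spins[i][j] = old  # reject the proposed move

random.seed(42)
L = 8
spins = [[random.choice([-1, 0, 1]) for _ in range(L)] for _ in range(L)]
for _ in range(50):
    metropolis_sweep(spins, T=1.0, K=0.25, J=1.0, delta=0.0)  # K_nn/J_nnn = 1/4
```

Mapping out the phase diagram would then amount to measuring order parameters for the 2×1 and c(2×2) structures over a grid of (Δ, T) values.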
From localized to extended states in a time-dependent quantum model
International Nuclear Information System (INIS)
Jose, J.V.
1986-01-01
The problem of a particle inside a rigid box with one of the walls oscillating periodically in time is studied quantum mechanically. In the classical limit, this model was introduced by Fermi in the context of cosmic-ray physics. The classical solutions can go from quasiperiodic to chaotic as a function of the amplitude of the wall oscillation. In the quantum case, the authors calculate the spectral properties of the corresponding evolution operator, i.e. the quasi-energy eigenvalues and eigenvectors. The specific form of the wall oscillation, e.g. ι(t) = √(1 + 2δ|t|) with |t| ≤ 1/2 and ι(t + 1) = ι(t), is essential to the solutions presented here. It is found that as h increases with δ fixed, the nearest-neighbor separation between quasi-energy eigenvalues changes from showing no energy-level repulsion to showing energy-level repulsion. This transition, from Poisson-like statistics to Gaussian-Orthogonal-Ensemble-like statistics, is tested by examining the distribution of nearest-neighbor quasi-energy level separations and the Δ_e(L) statistics. These results are also correlated with a transition from localized to extended states in energy space. The possible relevance of the results presented here to experiments in quasi-one-dimensional atoms is also discussed.
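The spacing-statistics diagnostic used here is generic: after normalizing spacings to unit mean, Poisson statistics indicate no level repulsion, while the GOE (Wigner) surmise vanishes linearly at zero spacing. A small illustrative sketch (not the authors' analysis code):

```python
import math

def spacing_distribution(levels):
    """Nearest-neighbor spacings of a sorted spectrum, normalized to unit mean."""
    levels = sorted(levels)
    spacings = [b - a for a, b in zip(levels, levels[1:])]
    mean = sum(spacings) / len(spacings)
    return [s / mean for s in spacings]

def poisson(s):
    """P(s) = exp(-s): uncorrelated levels, no repulsion (localized regime)."""
    return math.exp(-s)

def wigner(s):
    """GOE surmise P(s) = (π s / 2) exp(-π s² / 4): level repulsion, P(0) = 0."""
    return (math.pi * s / 2) * math.exp(-math.pi * s * s / 4)

# A rigid "picket-fence" spectrum has all unit spacings after normalization.
rigid = spacing_distribution([0.1 * n for n in range(20)])
```

Comparing a histogram of the computed quasi-energy spacings against these two reference curves distinguishes the localized from the extended regime.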
Charge-spin-orbital dynamics of one-dimensional two-orbital Hubbard model
Energy Technology Data Exchange (ETDEWEB)
Onishi, Hiroaki [Advanced Science Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan)
2010-01-15
We study the real-time evolution of a charge-excited state in a one-dimensional e_g-orbital degenerate Hubbard model, by a time-dependent density-matrix renormalization group method. Considering a chain along the z direction, electrons hop between adjacent 3z²-r² orbitals, while x²-y² orbitals are localized. For the charge-excited state, a holon-doublon pair is introduced into the ground state at quarter filling. At the initial time, there is no electron at the holon site, while a pair of electrons occupies the 3z²-r² orbital at the doublon site. As time evolves, the holon motion is governed by the nearest-neighbor hopping, but the electron pair can transfer between the 3z²-r² and x²-y² orbitals through the pair hopping in addition to the nearest-neighbor hopping. Thus the holon and doublon propagate at different speeds due to the pair hopping that is characteristic of multi-orbital systems.
Ground state properties of a spin chain within Heisenberg model with a single lacking spin site
International Nuclear Information System (INIS)
Mebrouki, M.
2011-01-01
The ground-state and first-excited-state energies of an antiferromagnetic spin-1/2 chain with and without a single lacking spin site are computed using the exact diagonalization method, within the Heisenberg model. In order to keep both parts of a spin chain with a lacking site connected, next-nearest-neighbor interactions are introduced. The Density Matrix Renormalization Group (DMRG) method is also used to investigate ground-state energies of large systems, which allows us to examine the effect of large system sizes on the energies. Other quantum quantities such as fidelity and correlation functions are also studied and compared in both cases. - Research highlights: → In this paper we compute the ground-state and first-excited-state energies of a spin chain with and without a lacking spin site. Next-nearest-neighbor interactions are introduced into the antiferromagnetic spin-1/2 Heisenberg model. → Exact diagonalization is used for small systems, while the DMRG method is used to compute energies for large systems. Other quantities such as quantum fidelity and correlations are also computed. → Results are presented in figures with comments. → E_0/N is computed as a function of N for several values of J_2 and for both systems. First-excited energies are also investigated.
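For small chains, the exact-diagonalization step can be sketched in a few lines: encode σ^z basis states as bit strings, build the dense Hamiltonian from S_i·S_j bond terms, and diagonalize. The bond list and chain length below are illustrative (a lacking site would be modeled by removing its bonds and bridging the gap with next-nearest-neighbor exchanges), not the systems from the paper:

```python
import numpy as np

def heisenberg_hamiltonian(N, bonds):
    """Dense Hamiltonian H = Σ_(i,j,J) J * S_i · S_j for N spin-1/2 sites,
    in the 2^N basis of σ^z product states encoded as bit strings."""
    dim = 1 << N
    H = np.zeros((dim, dim))
    for state in range(dim):
        for i, j, J in bonds:
            bi, bj = (state >> i) & 1, (state >> j) & 1
            if bi == bj:
                H[state, state] += J * 0.25          # SzSz term, parallel spins
            else:
                H[state, state] -= J * 0.25          # SzSz term, antiparallel
                flipped = state ^ ((1 << i) | (1 << j))
                H[flipped, state] += J * 0.5         # (S+S- + S-S+)/2 spin flip
    return H

# Open chain of N = 8 sites with nearest-neighbor exchange J1 = 1.
N = 8
bonds = [(i, i + 1, 1.0) for i in range(N - 1)]
E = np.linalg.eigvalsh(heisenberg_hamiltonian(N, bonds))
ground, first_excited = E[0], E[1]
```

Dense diagonalization scales as 2^N and is only feasible for small N, which is exactly why the paper switches to DMRG for large systems.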
Directory of Open Access Journals (Sweden)
Wenzhi Wang
2016-07-01
Modeling the random fiber distribution of a fiber-reinforced composite is of great importance for studying the progressive failure behavior of the material on the micro scale. In this paper, we develop a new algorithm for generating random representative volume elements (RVEs) with a fiber distribution statistically equivalent to the actual material microstructure. Realistic statistical data are used as inputs to the new method, which is achieved through implementation of the probability equations. Extensive statistical analysis is conducted to examine the capability of the proposed method and to compare it with existing methods. The proposed method matches experimental results well in all aspects, including the nearest-neighbor distance, nearest-neighbor orientation, Ripley's K function, and the radial distribution function. Finite element analysis is presented to predict the effective elastic properties of a carbon/epoxy composite, to validate the generated random representative volume elements, and to provide insight into the effect of fiber distribution on the elastic properties. The present algorithm is shown to be highly accurate and can be used to generate statistically equivalent RVEs not only for fiber-reinforced composites but also for other materials such as foams and particle-reinforced composites.
Ahmad, Khurshid; Waris, Muhammad; Hayat, Maqsood
2016-06-01
The mitochondrion is the key organelle of the eukaryotic cell, providing energy for cellular activities. Submitochondrial locations of proteins play a crucial role in understanding different biological processes such as energy metabolism, programmed cell death, and ionic homeostasis. Determining submitochondrial locations by conventional methods is expensive and time-consuming because of the large number of protein sequences generated in the last few decades. It is therefore highly desirable to establish an automated model for the identification of submitochondrial locations of proteins. In this regard, the current study was initiated to develop a fast, reliable, and accurate computational model. Various feature extraction methods such as dipeptide composition (DPC), Split Amino Acid Composition, and Composition and Translation were utilized. To overcome the issue of bias, the oversampling technique SMOTE was applied to balance the datasets. Several classification learners, including K-nearest neighbor, probabilistic neural network, and support vector machine (SVM), were used. The jackknife test was applied to assess the performance of the classification algorithms on two benchmark datasets. Among the classification algorithms, SVM achieved the highest success rates in conjunction with the condensed feature space of DPC: 95.20% accuracy on dataset SML3-317 and 95.11% on dataset SML3-983. The empirical results show that our proposed model achieves the best results reported in the literature to date. It is anticipated that the proposed model may be useful for future studies.
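Of the feature extraction methods listed, dipeptide composition has a simple generic form: a 400-dimensional vector of normalized frequencies of each ordered amino-acid pair. A sketch (the example sequence is made up, and this is not the authors' implementation):

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]  # 400 pairs

def dipeptide_composition(sequence):
    """400-dimensional DPC feature vector: the frequency of each ordered
    amino-acid pair among the overlapping dipeptides of the sequence."""
    counts = dict.fromkeys(DIPEPTIDES, 0)
    for a, b in zip(sequence, sequence[1:]):
        pair = a + b
        if pair in counts:           # skip nonstandard residues
            counts[pair] += 1
    total = max(1, len(sequence) - 1)
    return [counts[d] / total for d in DIPEPTIDES]

features = dipeptide_composition("MKVLAAGICK")  # hypothetical toy sequence
```

Vectors of this form would then be fed to the SVM (or other learner), with SMOTE applied beforehand to balance the classes.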
Modeling ready biodegradability of fragrance materials.
Ceriani, Lidia; Papa, Ester; Kovarich, Simona; Boethling, Robert; Gramatica, Paola
2015-06-01
In the present study, quantitative structure-activity relationships were developed for predicting the ready biodegradability of approximately 200 heterogeneous fragrance materials. Two classification methods, classification and regression tree (CART) and k-nearest neighbors (kNN), were applied to perform the modeling. The models were validated with multiple external prediction sets, and the structural applicability domain was verified by the leverage approach. The best models had good sensitivity (internal ≥80%; external ≥68%), specificity (internal ≥80%; external 73%), and overall accuracy (≥75%). Results from the comparison with BIOWIN global models, based on the group contribution method, show that the specific models developed in the present study perform better in prediction than BIOWIN6, in particular for the correct classification of not readily biodegradable fragrance materials. © 2015 SETAC.
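The kNN classifier used here reduces to a majority vote among the k nearest training points in descriptor space. A minimal sketch with invented toy data (two made-up descriptors per material; not the paper's descriptors or model):

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points. `train` is a list of (feature_vector, label) pairs."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    neighbors = sorted(train, key=lambda xy: dist(xy[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical training set: descriptor vectors with biodegradability labels.
train = [((0.1, 0.2), "ready"), ((0.2, 0.1), "ready"),
         ((0.9, 0.8), "not_ready"), ((0.8, 0.9), "not_ready")]
label = knn_classify(train, (0.15, 0.15), k=3)   # → "ready"
```

The leverage-based applicability domain check mentioned above would additionally flag queries that fall too far from the training descriptors for the vote to be trustworthy.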
Paulsen, H.; Ilyina, T.; Six, K. D.
2016-02-01
Marine nitrogen fixers play a fundamental role in the oceanic nitrogen and carbon cycles by providing a major source of `new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Furthermore, nitrogen fixers may regionally have a direct impact on ocean physics and hence the climate system, as they form extensive surface mats which can increase light absorption and surface albedo and reduce the momentum input by wind. Resulting alterations in temperature and stratification may feed back on the growth of the nitrogen fixers themselves. We incorporate nitrogen fixers as a prognostic 3D tracer in the ocean biogeochemical component (HAMOCC) of the Max Planck Institute Earth system model and assess for the first time the impact of the related bio-physical feedbacks on biogeochemistry and the climate system. The model successfully reproduces recent estimates of global nitrogen fixation rates, as well as the observed distribution of nitrogen fixers, covering large parts of the tropical and subtropical oceans. First results indicate that including bio-physical feedbacks has considerable effects on the upper-ocean physics in this region. Light absorption by nitrogen fixers leads locally to surface heating, subsurface cooling, and mixed-layer-depth shoaling in the subtropical gyres. As a result, equatorial upwelling is increased, leading to surface cooling at the equator. This signal is damped by the reduced wind stress due to the presence of cyanobacteria mats, which causes a reduction in the wind-driven circulation and hence in equatorial upwelling. The increase in surface albedo due to nitrogen fixers has only a negligible effect. The response of nitrogen fixers' growth to the alterations in temperature and stratification varies regionally. Simulations with the fully coupled Earth system model are in progress to assess the implications of the biologically induced changes in upper-ocean physics for the global climate system.
Taatgen, Niels A.; de Weerd, Harmen; Reitter, David; Ritter, Frank
2016-01-01
We present a Swift re-implementation of the ACT-R cognitive architecture, which can be used to quickly build iOS Apps that incorporate an ACT-R model as a core feature. We discuss how this implementation can be used in an example model, and explore the breadth of possibilities by presenting six Apps
Jung, Jae Yup
2013-01-01
This study tested a newly developed model of the cognitive decision-making processes of senior high school students related to university entry. The model incorporated variables derived from motivation theory (i.e. expectancy-value theory and the theory of reasoned action), literature on cultural orientation and occupational considerations. A…
Peter J. Gould; Constance A. Harrington; Bradley J. St Clair
2011-01-01
Models to predict budburst and other phenological events in plants are needed to forecast how climate change may impact ecosystems and for the development of mitigation strategies. Differences among genotypes are important to predicting phenological events in species that show strong clinal variation in adaptive traits. We present a model that incorporates the effects...
A diagnostic model incorporating P50 sensory gating and neuropsychological tests for schizophrenia.
Directory of Open Access Journals (Sweden)
Jia-Chi Shan
OBJECTIVES: The use of endophenotypes in schizophrenia research is a contemporary approach to studying this heterogeneous mental illness, and several candidate neurophysiological markers (e.g. P50 sensory gating) and neuropsychological tests (e.g. the Continuous Performance Test (CPT) and Wisconsin Card Sorting Test (WCST)) have been proposed. However, the clinical utility of a single marker appears to be limited. In the present study, we aimed to construct a diagnostic model incorporating P50 sensory gating with other neuropsychological tests in order to improve clinical utility. METHODS: We recruited clinically stable outpatients meeting DSM-IV criteria for schizophrenia and age- and gender-matched healthy controls. Participants underwent P50 sensory gating experimental sessions and batteries of neuropsychological tests, including the CPT, WCST, and Wechsler Adult Intelligence Scale Third Edition (WAIS-III). RESULTS: A total of 106 schizophrenia patients and 74 healthy controls were enrolled. Compared with healthy controls, the patient group had a significantly larger S2 amplitude, and thus a poorer P50 gating ratio (gating ratio = S2/S1). In addition, schizophrenia patients performed more poorly on the neuropsychological tests. We then developed a diagnostic model using multivariable logistic regression analysis to differentiate patients from healthy controls. The final model included the following covariates: abnormal P50 gating (defined as P50 gating ratio > 0.4), three subscales derived from the WAIS-III (Arithmetic, Block Design, and Performance IQ), the sensitivity index from the CPT, and smoking status. This model had adequate accuracy (concordant percentage = 90.4%; c-statistic = 0.904; Hosmer-Lemeshow goodness-of-fit test, p = 0.64 > 0.05). CONCLUSION: To the best of our knowledge, this is the largest study to date using P50 sensory gating in subjects of Chinese ethnicity and the first to use P50 sensory gating along with other neuropsychological tests.
Stevens, Andrew W.; Gelfenbaum, Guy; Elias, Edwin; Jones, Craig
2008-01-01
lab with Sedflume, an apparatus for measuring sediment erosion-parameters. In this report, we present results of the characterization of fine-grained sediment erodibility within Capitol Lake. The erodibility data were incorporated into the previously developed hydrodynamic and sediment transport model. Model simulations using the measured erodibility parameters were conducted to provide more robust estimates of the overall magnitudes and spatial patterns of sediment transport resulting from restoration of the Deschutes Estuary.
Incorporating human-water dynamics in a hyper-resolution land surface model
Vergopolan, N.; Chaney, N.; Wanders, N.; Sheffield, J.; Wood, E. F.
2017-12-01
The increasing demand for water, energy, and food is leading to unsustainable groundwater and surface water exploitation. As a result, human interactions with the environment, through alteration of land and water resources dynamics, need to be reflected in hydrologic and land surface models (LSMs). Advancements in representing human-water dynamics still face challenges related to the lack of water use data, water allocation algorithms, and modeling scales. This leads to an over-simplistic representation of human water use in large-scale models; this in turn leads to an inability to capture the signatures of extreme events and to provide reliable information at stakeholder-relevant spatial scales. The emergence of hyper-resolution models allows one to address these challenges by simulating the hydrological processes and their interactions with human impacts at field scales. We integrated human-water dynamics into HydroBlocks - a hyper-resolution, field-scale resolving LSM. HydroBlocks explicitly solves the field-scale spatial heterogeneity of land surface processes through interacting hydrologic response units (HRUs); its HRU-based model parallelization allows computationally efficient long-term simulations as well as ensemble predictions. The implemented human-water dynamics include groundwater and surface water abstraction to meet agricultural, domestic, and industrial water demands. Furthermore, a supply-demand water allocation scheme based on relative costs helps to determine sectoral water use requirements and tradeoffs. A set of HydroBlocks simulations over the Midwest United States (daily, at 30-m spatial resolution for 30 years) is used to quantify the irrigation impacts on water availability. The model captures large reductions in total soil moisture and water table levels, as well as spatiotemporal changes in evapotranspiration and runoff peaks, with their intensity related to the adopted water management strategy. By incorporating human-water dynamics in
Incorporation of GRACE Data into a Bayesian Model for Groundwater Drought Monitoring
Slinski, K.; Hogue, T. S.; McCray, J. E.; Porter, A.
2015-12-01
Groundwater drought, defined as the sustained occurrence of below average availability of groundwater, is marked by below average water levels in aquifers and reduced flows to groundwater-fed rivers and wetlands. The impact of groundwater drought on ecosystems, agriculture, municipal water supply, and the energy sector is an increasingly important global issue. However, current drought monitors heavily rely on precipitation and vegetative stress indices to characterize the timing, duration, and severity of drought events. The paucity of in situ observations of aquifer levels is a substantial obstacle to the development of systems to monitor groundwater drought in drought-prone areas, particularly in developing countries. Observations from the NASA/German Space Agency's Gravity Recovery and Climate Experiment (GRACE) have been used to estimate changes in groundwater storage over areas with sparse point measurements. This study incorporates GRACE total water storage observations into a Bayesian framework to assess the performance of a probabilistic model for monitoring groundwater drought based on remote sensing data. Overall, it is hoped that these methods will improve global drought preparedness and risk reduction by providing information on groundwater drought necessary to manage its impacts on ecosystems, as well as on the agricultural, municipal, and energy sectors.
Energy Technology Data Exchange (ETDEWEB)
Galan, S.F. [Dpto. de Inteligencia Artificial, E.T.S.I. Informatica (UNED), Juan del Rosal, 16, 28040 Madrid (Spain)]. E-mail: seve@dia.uned.es; Mosleh, A. [2100A Marie Mount Hall, Materials and Nuclear Engineering Department, University of Maryland, College Park, MD 20742 (United States)]. E-mail: mosleh@umd.edu; Izquierdo, J.M. [Area de Modelado y Simulacion, Consejo de Seguridad Nuclear, Justo Dorado, 11, 28040 Madrid (Spain)]. E-mail: jmir@csn.es
2007-08-15
The ω-factor approach is a method that explicitly incorporates organizational factors into probabilistic safety assessment of nuclear power plants. Bayesian networks (BNs) are the underlying formalism used in this approach. They have a structural part formed by a graph whose nodes represent organizational variables, and a parametric part that consists of conditional probabilities, each of them quantifying organizational influences between one variable and its parents in the graph. The aim of this paper is twofold. First, we discuss some important limitations of current procedures in the ω-factor approach for either assessing conditional probabilities from experts or estimating them from data. We illustrate the discussion with an example that uses data from Licensee Events Reports of nuclear power plants for the estimation task. Second, we introduce significant improvements in the way BNs for the ω-factor approach can be constructed, so that parameter acquisition becomes easier and more intuitive. The improvements are based on the use of noisy-OR gates as model of multicausal interaction between each BN node and its parents.
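The noisy-OR gate mentioned at the end has a simple closed form: each active parent independently causes the effect with its own probability, and the effect is absent only if every active cause fails. A hedged sketch (the parameter values are illustrative, not figures from the paper):

```python
def noisy_or(active_parent_probs, leak=0.0):
    """Noisy-OR gate: each active parent i independently causes the effect
    with probability p_i; `leak` is the probability of the effect occurring
    with no active cause. P(Y=1) = 1 - (1 - leak) * Π_i (1 - p_i)."""
    prob_no_cause = 1.0 - leak
    for p in active_parent_probs:
        prob_no_cause *= (1.0 - p)
    return 1.0 - prob_no_cause

# Two active organizational causes with strengths 0.8 and 0.5, leak 0.1:
p = noisy_or([0.8, 0.5], leak=0.1)  # 1 - 0.9 * 0.2 * 0.5 = 0.91
```

The practical appeal, as the abstract notes, is parameter economy: a node with n parents needs only n cause strengths (plus a leak) instead of a full conditional probability table with 2^n entries.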
International Nuclear Information System (INIS)
Galan, S.F.; Mosleh, A.; Izquierdo, J.M.
2007-01-01
The ω-factor approach is a method that explicitly incorporates organizational factors into probabilistic safety assessment of nuclear power plants. Bayesian networks (BNs) are the underlying formalism used in this approach. They have a structural part formed by a graph whose nodes represent organizational variables, and a parametric part that consists of conditional probabilities, each of them quantifying organizational influences between one variable and its parents in the graph. The aim of this paper is twofold. First, we discuss some important limitations of current procedures in the ω-factor approach for either assessing conditional probabilities from experts or estimating them from data. We illustrate the discussion with an example that uses data from Licensee Events Reports of nuclear power plants for the estimation task. Second, we introduce significant improvements in the way BNs for the ω-factor approach can be constructed, so that parameter acquisition becomes easier and more intuitive. The improvements are based on the use of noisy-OR gates as model of multicausal interaction between each BN node and its parents.
A Novel Hybrid Similarity Calculation Model
Directory of Open Access Journals (Sweden)
Xiaoping Fan
2017-01-01
This paper addresses the problems of similarity calculation in traditional nearest-neighbor collaborative filtering recommendation algorithms, especially their failure to describe dynamic user preferences. Starting from the problem of user interest drift, a new hybrid similarity calculation model is proposed. The model consists of two parts: on the one hand, it uses function fitting to describe users' rating behaviors and rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. The paper then combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model's prediction precision is higher than that of traditional recommendation algorithms.
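The two-part structure can be illustrated by a weighted blend of a rating-based similarity and an attribute-based similarity. The cosine and attribute-matching components below are deliberate simplifications standing in for the paper's function-fitting and Random Forest parts, and all data values are invented:

```python
def rating_similarity(u, v, common_items):
    """Cosine similarity over co-rated items (stand-in for the paper's
    function-fitting component). `u` and `v` map item id -> rating."""
    num = sum(u[i] * v[i] for i in common_items)
    den = (sum(u[i] ** 2 for i in common_items) ** 0.5
           * sum(v[i] ** 2 for i in common_items) ** 0.5)
    return num / den if den else 0.0

def attribute_similarity(a, b):
    """Fraction of matching user attributes (stand-in for the Random Forest
    attribute weighting)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def hybrid_similarity(u, v, common, attrs_u, attrs_v, alpha=0.7):
    """Weighted blend of the rating-based and attribute-based components."""
    return (alpha * rating_similarity(u, v, common)
            + (1 - alpha) * attribute_similarity(attrs_u, attrs_v))

# Hypothetical users: ratings on items 1 and 2, plus two profile attributes.
sim = hybrid_similarity({1: 5, 2: 3}, {1: 4, 2: 2}, [1, 2],
                        ("F", "student"), ("F", "teacher"))
```

A time-decay weight on older ratings would be one simple way to make the rating component sensitive to interest drift, which is the problem the paper's function-fitting part targets.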
The Sznajd Model with Team Work
Li, Hong-Jun; Lin, Lu-Zi; Sun, He; He, Ming-Feng
In 2000, Sznajd-Weron and Sznajd introduced a model for the simulation of a closed democratic community with a two-party system, and found that such a community has to evolve either to a dictatorship or to a stalemate state. In this paper, we continue the study of this model. All neighboring individuals holding the same opinion are defined as a team, which influences its nearest neighbors' decisions and drives the opinion evolution. After some time steps, a steady state appears and the stalemate state of the original model is eliminated. Moreover, the number of time steps required decreases dramatically. In addition, we analyze the effect of the degree of dispersal of the initial opinions on the probability of converging to each steady state. Finally, we analyze the effect of noise on convergence and find that noise resistance increases by a factor of about 1000 compared with the original Sznajd model.
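For context, the pair-based update of the original Sznajd model can be sketched in a few lines. The team variant described above generalizes the agreeing pair to any maximal run of equal opinions; the simplified sketch below keeps only the agreeing-pair rule, and the chain length and step count are arbitrary:

```python
import random

def sznajd_step(opinions):
    """One update of a simplified 1D Sznajd rule: a randomly chosen pair of
    neighbors sharing the same opinion imposes it on the two sites flanking
    the pair. (The disagreeing-pair rule of the original model is omitted.)"""
    N = len(opinions)
    i = random.randrange(N - 1)
    if opinions[i] == opinions[i + 1]:
        if i - 1 >= 0:
            opinions[i - 1] = opinions[i]
        if i + 2 < N:
            opinions[i + 2] = opinions[i]

random.seed(1)
ops = [random.choice([-1, 1]) for _ in range(50)]
for _ in range(20000):
    sznajd_step(ops)
# Consensus states are absorbing; a perfectly alternating chain is frozen
# under this rule, which is the stalemate the team variant eliminates.
```

Replacing the fixed pair with the whole run of equal opinions containing site i would turn this sketch into the team rule studied in the paper.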
Ahern, Verity; Klein, Linda; Bentvelzen, Adam; Garlan, Karen; Jeffery, Heather
2011-04-01
Many radiation oncology registrars have no exposure to paediatrics during their training. To address this, the Paediatric Special Interest Group of the Royal Australian and New Zealand College of Radiologists has convened a biennial teaching course since 1997. The 2009 course incorporated the use of a Structured, Clinical, Objective-Referenced, Problem-orientated, Integrated and Organized (SCORPIO) teaching model for small group tutorials. This study evaluates whether the paediatric radiation oncology curriculum can be adapted to the SCORPIO teaching model and assesses the revised course from the registrars' perspective. Teaching and learning resources included a pre-course reading list, a lecture series programme and a SCORPIO workshop. Three evaluation instruments were developed: an overall Course Evaluation Survey for all participants, a SCORPIO Workshop Survey for registrars and a Teacher's SCORPIO Workshop Survey. Forty-five radiation oncology registrars, 14 radiation therapists and five paediatric oncology registrars attended. Seventy-three per cent (47/64) of all participants completed the Course Evaluation Survey and 95% (38/40) of registrars completed the SCORPIO Workshop Survey. All teachers completed the Teacher's SCORPIO Survey (10/10). The overall educational experience was rated as good or excellent by 93% (43/47) of respondents. Ratings of satisfaction with lecture sessions were predominantly good or excellent. Registrars gave the SCORPIO workshop high ratings on each of 10 aspects of quality, with 82% allocating an excellent rating overall for the SCORPIO activity. Both registrars and teachers recommended more time for the SCORPIO stations. The 2009 course met the educational needs of the radiation oncology registrars and the SCORPIO workshop was a highly valued educational component. © 2011 The Authors. Journal of Medical Imaging and Radiation Oncology © 2011 The Royal Australian and New Zealand College of Radiologists.
A sequence-dependent rigid-base model of DNA
Gonzalez, O.; Petkevičiutė, D.; Maddocks, J. H.
2013-02-01
A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. For example, due to the presence of frustration, the model can
Global dynamics of a PDE model for Aedes aegypti mosquitoes incorporating female sexual preference
Parshad, Rana; Agusto, Folashade B.
2011-01-01
In this paper we study the long time dynamics of a reaction diffusion system, describing the spread of Aedes aegypti mosquitoes, which are the primary cause of dengue infection. The system incorporates a control attempt via the sterile insect
International Nuclear Information System (INIS)
Kulik, D.A.
2005-01-01
Full text of publication follows: Computer-aided surface complexation models (SCM) tend to replace the classic adsorption isotherm (AI) analysis in describing mineral-water interface reactions such as radionuclide sorption onto (hydr)oxides and clays. Any site-binding SCM based on the mole balance of surface sites, in fact, reproduces the (competitive) Langmuir isotherm, optionally amended with an electrostatic Coulombic non-ideal term. In most SCM implementations, it is difficult to incorporate real-surface phenomena (site heterogeneity, lateral interactions, surface condensation) described in classic AI approaches other than Langmuir's. Thermodynamic relations between SCMs and AIs that remained obscure in the past have been recently clarified using new definitions of standard and reference states of surface species [1,2]. On this basis, a method for separating the Langmuir AI into ideal (linear) and non-ideal parts [2] was applied to multi-dentate Langmuir, Frumkin, and BET isotherms. The aim of this work was to obtain the surface activity coefficient terms that make the SCM site mole balance constraints obsolete and, in this way, extend thermodynamic SCMs to cover sorption phenomena described by the respective AIs. The multi-dentate Langmuir term accounts for site saturation with n-dentate surface species, as illustrated by modeling bi-dentate U(VI) complexes on goethite or SiO2 surfaces. The Frumkin term corrects for the lateral interactions of the mono-dentate surface species; in particular, it has the same form as the Coulombic term of the constant-capacitance EDL combined with the Langmuir term. The BET term (three parameters) accounts for more than a monolayer adsorption up to the surface condensation; it can potentially describe the surface precipitation of nickel and other cations on hydroxides and clay minerals. All three non-ideal terms (in the GEM SCM implementation [1,2]) are currently used for non-competing surface species only. Upon 'surface dilution
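The Langmuir and Frumkin isotherms discussed in this record have simple closed forms. A minimal sketch (function names and parameter values are our own illustration, not from the publication; the Frumkin lateral-interaction term is solved by fixed-point iteration):

```python
import math

def langmuir(c, K):
    """Fractional surface coverage theta for the Langmuir isotherm.
    c: solution concentration, K: binding constant."""
    return K * c / (1.0 + K * c)

def frumkin(c, K, g, tol=1e-12, max_iter=200):
    """Frumkin isotherm: Langmuir amended with a lateral-interaction
    factor exp(-g*theta), solved by fixed-point iteration.
    g > 0 models repulsive interactions (coverage is suppressed)."""
    theta = langmuir(c, K)  # Langmuir value as the starting guess
    for _ in range(max_iter):
        kc_eff = K * c * math.exp(-g * theta)
        new = kc_eff / (1.0 + kc_eff)
        if abs(new - theta) < tol:
            break
        theta = new
    return theta
```

For g = 0 the Frumkin form reduces exactly to Langmuir, and repulsive interactions (g > 0) push the coverage below the Langmuir value, which is the qualitative behavior the abstract's non-ideal term captures.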
An integrated modeling approach to age invariant face recognition
Alvi, Fahad Bashir; Pears, Russel
2015-03-01
This research study proposes a novel method for face recognition based on anthropometric features, using an integrated approach comprising global and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers individual aging patterns, while a global model captures general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k nearest neighbor approach for building the personalized and global models, and regression analysis was applied to build them. During the test phase, we resort to voting on different features. We used the FG-NET database for checking the results of our technique and achieved a 65 percent rank-1 identification rate.
The media effect in Axelrod's model explained
Peres, L. R.; Fontanari, J. F.
2011-11-01
We revisit the problem of introducing an external global field —the mass media— in Axelrod's model of social dynamics, where in addition to their nearest neighbors, the agents can interact with a virtual neighbor whose cultural features are fixed from the outset. The finding that this apparently homogenizing field actually increases the cultural diversity has been considered a puzzle since the phenomenon was first reported more than a decade ago. Here we offer a simple explanation for it, which is based on the pedestrian observation that Axelrod's model exhibits more cultural diversity, i.e., more distinct cultural domains, when the agents are allowed to interact solely with the media field than when they can interact with their neighbors as well. In this perspective, it is the local homogenizing interactions that work towards making the absorbing configurations less fragmented as compared with the extreme situation in which the agents interact with the media only.
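The interaction rule at the heart of Axelrod's model is concrete enough to state in a few lines. A minimal sketch (the feature encoding and function name are our own; the "media field" of this record is simply a fixed culture vector passed in as the partner):

```python
import random

def axelrod_step(agent, partner, rng):
    """One Axelrod interaction: with probability equal to the cultural
    overlap, the agent copies one differing feature from the partner.
    Cultures are lists of F integer features; zero overlap means no
    interaction, full overlap leaves nothing to copy."""
    n = len(agent)
    overlap = sum(a == p for a, p in zip(agent, partner)) / n
    differing = [i for i in range(n) if agent[i] != partner[i]]
    if differing and rng.random() < overlap:
        f = rng.choice(differing)
        agent[f] = partner[f]
    return agent
```

Interacting with the media amounts to calling `axelrod_step(agent, media_vector, rng)` with the same fixed `media_vector` for every agent, which is the setup whose homogenizing (or, paradoxically, fragmenting) effect the paper analyzes.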
A short introduction to fibonacci anyon models
International Nuclear Information System (INIS)
Trebst, Simon; Wang, Zhenghan; Troyer, Matthias; Ludwig, Andreas W.W.
2009-01-01
We discuss how to construct models of interacting anyons by generalizing quantum spin Hamiltonians to anyonic degrees of freedom. The simplest interactions energetically favor pairs of anyons to fuse into the trivial ('identity') channel, similar to the quantum Heisenberg model favoring pairs of spins to form spin singlets. We present an introduction to the theory of anyons and discuss in detail how basis sets and matrix representations of the interaction terms can be obtained, using non-Abelian Fibonacci anyons as an example. Besides discussing the 'golden chain', a one-dimensional system of anyons with nearest neighbor interactions, we also present the derivation of more complicated interaction terms, such as three-anyon interactions in the spirit of the Majumdar-Ghosh spin chain, longer range interactions and two-leg ladders. We also discuss generalizations to anyons with general non-Abelian SU(2) k statistics. The k→∞ limit of the latter yields ordinary SU(2) spin chains. (author)
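The size of the anyonic Hilbert space underlying such chains can be counted directly from the Fibonacci fusion rule τ × τ = 1 + τ. A small sketch (our own illustration, not taken from the paper) counting fusion-tree paths with trivial total charge:

```python
def fusion_dim(n):
    """Count fusion paths for a chain of n Fibonacci anyons whose total
    charge is trivial.  The intermediate charge walks on {1, tau} under
    tau x tau = 1 + tau; the counts grow as Fibonacci numbers."""
    # paths[label] = number of fusion trees ending with that charge
    paths = {"tau": 1}  # the first anyon carries charge tau
    for _ in range(n - 1):
        paths = {
            "1": paths.get("tau", 0),                        # tau x tau -> 1
            "tau": paths.get("1", 0) + paths.get("tau", 0),  # 1 x tau and tau x tau -> tau
        }
    return paths.get("1", 0)
```

The trivial-channel dimension of four anyons is 2, which is why four Fibonacci anyons encode one topological qubit.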
State-space prediction model for chaotic time series
Alparslan, A. K.; Sayar, M.; Atilgan, A. R.
1998-08-01
A simple method for predicting the continuation of scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique, in connection with time-delayed embedding, is employed to reconstruct the state space. A local forecasting model based upon the time evolution of topological neighbors in the reconstructed phase space is suggested. A moving root-mean-square error is utilized to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that, for approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
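The two ingredients, time-delay embedding and prediction from the nearest state-space neighbor, can be sketched in a few lines. This is a zeroth-order local model under assumed embedding parameters, not the authors' exact scheme:

```python
import math

def delay_embed(series, dim, tau):
    """Time-delay embedding: map a scalar series into dim-dimensional
    state vectors x_t = (s_t, s_{t-tau}, ..., s_{t-(dim-1)*tau})."""
    start = (dim - 1) * tau
    return [tuple(series[t - k * tau] for k in range(dim))
            for t in range(start, len(series))]

def nn_forecast(series, dim=3, tau=1):
    """Predict the next value by finding the past state nearest to the
    current one and returning that state's observed successor."""
    states = delay_embed(series, dim, tau)
    current = states[-1]
    best, best_d = None, float("inf")
    for i, s in enumerate(states[:-1]):  # exclude the current state itself
        d = math.dist(s, current)
        if d < best_d:
            best_d, best = d, i
    # successor of the matched state in the original series
    return series[(dim - 1) * tau + best + 1]
```

In practice the embedding dimension `dim` would be chosen with the false-nearest-neighbors criterion described in the abstract; here it is simply fixed.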
International Nuclear Information System (INIS)
Chatterjee, Bishu; Sharp, Peter A.
2006-01-01
Electric transmission and other rate cases use a form of the discounted cash flow model with a single long-term growth rate to estimate rates of return on equity. Such a model cannot incorporate information about the appropriate time horizon over which analysts' estimates of earnings growth have predictive power. Only a non-constant growth model can explicitly recognize the importance of the time horizon in an ROE calculation. (author)
Strategies for Incorporating Women-Specific Sexuality Education into Addiction Treatment Models
James, Raven
2007-01-01
This paper advocates for the incorporation of a women-specific sexuality curriculum in the addiction treatment process to aid in sexual healing and provide for aftercare issues. Sexuality in addiction treatment modalities is often approached from a sex-negative stance, or that of sexual victimization. Sexual issues are viewed as addictive in and…
Speech emotion recognition based on statistical pitch model
Institute of Scientific and Technical Information of China (English)
WANG Zhiping; ZHAO Li; ZOU Cairong
2006-01-01
A modified Parzen-window method, which keeps high resolution at low frequencies and smoothness at high frequencies, is proposed to obtain the statistical pitch model. A gender classification method utilizing this statistical model is then proposed, which achieves 98% gender classification accuracy when long sentences are processed. After separating male and female voices, the means and standard deviations of speech training samples with different emotions are used to create the corresponding emotion models. The Bhattacharyya distances between the test sample and the statistical pitch models are then utilized for emotion recognition in speech. Normalization of pitch for male and female voices is also considered, in order to map them into a uniform space. Finally, a speech emotion recognition experiment based on K nearest neighbors shows that a correct rate of 81% is achieved, compared with only 73.85% when the traditional parameters are utilized.
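The Bhattacharyya distance between two univariate Gaussian pitch models has a closed form, which makes the nearest-model classification step easy to sketch (the emotion labels and parameter values below are invented for illustration):

```python
import math

def bhattacharyya_gauss(mu1, sig1, mu2, sig2):
    """Bhattacharyya distance between two univariate Gaussians
    N(mu1, sig1^2) and N(mu2, sig2^2)."""
    v1, v2 = sig1 ** 2, sig2 ** 2
    return (0.25 * (mu1 - mu2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * sig1 * sig2)))

def classify(mu, sig, models):
    """Pick the label whose Gaussian pitch model is closest in
    Bhattacharyya distance.  models: {label: (mu, sigma)}."""
    return min(models, key=lambda m: bhattacharyya_gauss(mu, sig, *models[m]))
```

The distance is zero for identical distributions and penalizes both mean shifts and variance mismatches, which is why it suits comparing pitch statistics rather than raw pitch values.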
Modeling and knowledge acquisition processes using case-based inference
Directory of Open Access Journals (Sweden)
Ameneh Khadivar
2017-03-01
Full Text Available The method of acquiring and presenting organizational process knowledge has been considered by many KM researchers. In this research, a model for process knowledge acquisition and presentation is presented using a case-based reasoning approach. The presented model was validated by conducting an expert panel. A software system was then developed based on the model and implemented in Eghtesad Novin Bank of Iran. Following the stages of the presented model, the knowledge-intensive processes were first identified, and the process knowledge was then stored in a knowledge base in a problem/solution/consequent format. Knowledge retrieval was based on nearest neighbor similarity. To validate the implemented system, its results were compared with the decisions made by the process experts.
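Nearest-neighbor case retrieval of the kind described here reduces to a weighted similarity score over problem attributes. A toy sketch (the attribute names, weights, and banking examples are invented placeholders, not the study's actual case base):

```python
def retrieve(query, case_base, weights):
    """1-nearest-neighbor case retrieval: score each stored case by
    weighted attribute match against the query and return the best.
    Cases are dicts of the form {"problem": {...}, "solution": ...}."""
    total = sum(weights.values())

    def similarity(problem):
        return sum(w for attr, w in weights.items()
                   if problem.get(attr) == query.get(attr)) / total

    return max(case_base, key=lambda c: similarity(c["problem"]))
```

A matched case's `solution` (and `consequent`) fields are then reused or adapted, which is the retrieve step of the standard CBR cycle.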
Stripe order from the perspective of the Hubbard model
Energy Technology Data Exchange (ETDEWEB)
Devereaux, Thomas Peter
2018-03-01
A microscopic understanding of the strongly correlated physics of the cuprates must account for the translational and rotational symmetry breaking that is present across all cuprate families, commonly in the form of stripes. Here we investigate the emergence of stripes in the Hubbard model, a minimal model believed to be relevant to the cuprate superconductors, using determinant quantum Monte Carlo (DQMC) simulations at finite temperatures and density matrix renormalization group (DMRG) ground state calculations. By varying temperature, doping, and model parameters, we characterize the extent of stripes throughout the phase diagram of the Hubbard model. Our results show that including the often-neglected next-nearest-neighbor hopping leads to the absence of spin incommensurability upon electron doping and nearly half-filled stripes upon hole doping. The similarities of these findings to experimental results on both electron- and hole-doped cuprate families support a unified description across a large portion of the cuprate phase diagram.
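The next-nearest-neighbor hopping t′ highlighted in this abstract enters through the kinetic term, whose square-lattice dispersion is simple to write down. A sketch (t′ = −0.3t is a commonly assumed cuprate-like value, not a figure taken from this paper):

```python
import math

def dispersion(kx, ky, t=1.0, tp=-0.3):
    """Square-lattice tight-binding dispersion with nearest-neighbor (t)
    and next-nearest-neighbor (t') hopping, the kinetic part of the
    Hubbard Hamiltonian:
        eps(k) = -2t (cos kx + cos ky) - 4t' cos kx cos ky."""
    return (-2.0 * t * (math.cos(kx) + math.cos(ky))
            - 4.0 * tp * math.cos(kx) * math.cos(ky))
```

With t′ = 0 the band satisfies eps(k) = -eps(k + (π, π)); a nonzero t′ breaks this particle-hole symmetry, which is why electron and hole doping behave differently once the term is included.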
Gómez-Puig, Marta; Singh, Manish Kumar; Sosvilla Rivero, Simón, 1961-
2018-01-01
This paper highlights the role of multilateral creditors (i.e., the ECB, IMF, ESM etc.) and their preferred creditor status in explaining the sovereign default risk of peripheral euro area (EA) countries. Incorporating lessons from sovereign debt crises in general, and from the Greek debt restructuring in particular, we define the priority structure of sovereigns' creditors that is most relevant for peripheral EA countries in severe crisis episodes. This new priority structure of creditors, t...
Bezbaruah, Achintya N; Zhang, Tian C
2009-01-01
It has long been established that plants play major roles in a treatment wetland. However, the role of plants has not been incorporated into wetland models. This study tries to incorporate wetland plants into a biochemical oxygen demand (BOD) model so that the relative contributions of the aerobic and anaerobic processes to meeting BOD can be quantitatively determined. The classical dissolved oxygen (DO) deficit model has been modified to simulate the DO curve for a field subsurface flow constructed wetland (SFCW) treating municipal wastewater. Sensitivities of model parameters have been analyzed. Based on the model, it is predicted that in the SFCW under study about 64% of BOD is degraded through aerobic routes and 36% anaerobically. While not exhaustive, this preliminary work should serve as a pointer for further research in wetland model development and to determine the values of some of the parameters used in the modified DO deficit and associated BOD model. It should be noted that the nitrogen cycle and the effects of temperature have not been addressed in these models for simplicity of model formulation. This paper should be read with this caveat in mind.
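The classical DO deficit model that the authors modify is the Streeter-Phelps equation. A sketch of the unmodified form with illustrative rate constants (the paper's plant-incorporating modification is not reproduced here):

```python
import math

def do_deficit(t, L0, D0, k1, k2):
    """Classical Streeter-Phelps dissolved-oxygen deficit (mg/L) at
    travel time t, balancing first-order BOD decay (rate k1) against
    atmospheric reaeration (rate k2); requires k1 != k2.
    L0: initial BOD, D0: initial deficit."""
    return ((k1 * L0 / (k2 - k1)) * (math.exp(-k1 * t) - math.exp(-k2 * t))
            + D0 * math.exp(-k2 * t))
```

The deficit starts at D0, rises to the critical sag point where decay and reaeration balance, and then decays back toward zero far downstream.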
International Nuclear Information System (INIS)
Jennissen, J.J.
1981-01-01
The mathematical/empirical model developed in this paper helps to determine the incorporated radioactivity from the measured photometric values and the exposure time T. Possible autoradiography errors due to the exposure time or the preparation are taken into account by the empirical model. It is shown that the error of approximately 400% appearing in the sole comparison of the measured photometric values can be corrected. The model is valid for neuroanatomy, as optical nerves, i.e. neuroanatomical material, were used to develop it. Its application to other sections of the central nervous system also seems justified given the reduction of errors thus achieved. (orig.) [de]
Chaotic and stable perturbed maps: 2-cycles and spatial models
Braverman, E.; Haroutunian, J.
2010-06-01
As the growth rate parameter increases in the Ricker, logistic and some other maps, the models exhibit an irreversible period doubling route to chaos. If a constant positive perturbation is introduced, then the Ricker model (but not the classical logistic map) experiences period doubling reversals; the break of chaos finally gives birth to a stable two-cycle. We outline the maps which demonstrate a similar behavior and also study relevant discrete spatial models where the value in each cell at the next step is defined only by the values at the cell and its nearest neighbors. The stable 2-cycle in a scalar map does not necessarily imply 2-cyclic-type behavior in each cell for the spatial generalization of the map.
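The unperturbed and perturbed Ricker dynamics described here are easy to probe numerically. A sketch (the parameter values and the rounding-based period estimate are our own choices, not the paper's):

```python
import math

def ricker(x, r, d=0.0):
    """Perturbed Ricker map: x -> x * exp(r * (1 - x)) + d."""
    return x * math.exp(r * (1.0 - x)) + d

def attractor(r, d=0.0, n_transient=2000, n_sample=64, x0=0.5):
    """Iterate past the transient, then collect rounded states; the set
    size estimates the attractor's period (large for chaos)."""
    x = x0
    for _ in range(n_transient):
        x = ricker(x, r, d)
    seen = set()
    for _ in range(n_sample):
        x = ricker(x, r, d)
        seen.add(round(x, 6))
    return sorted(seen)
```

With d = 0 this reproduces the familiar route to chaos (stable fixed point below r = 2, stable 2-cycle just above, many visited states at r = 3); increasing the constant perturbation d is the mechanism the paper shows can reverse the doubling back to a stable 2-cycle.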
A molecular-thermodynamic model for polyelectrolyte solutions
Energy Technology Data Exchange (ETDEWEB)
Jiang, J.; Liu, H.; Hu, Y. [Thermodynamics Research Laboratory, East China University of Science and Technology, Shanghai 200237 (China); Prausnitz, J.M. [Department of Chemical Engineering, University of California, Berkeley, and Chemical Sciences Division, Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720 (United States)
1998-01-01
Polyelectrolyte solutions are modeled as freely tangent-jointed, charged hard-sphere chains and corresponding counterions in a continuum medium with permittivity ε. By adopting the sticky-point model, the Helmholtz function for polyelectrolyte solutions is derived through the r-particle cavity-correlation function (CCF) for chains of sticky, charged hard spheres. The r-CCF is approximated by a product of effective nearest-neighbor two-particle CCFs; these are determined from the hypernetted-chain and mean-spherical closures (HNC/MSA) inside and outside the hard core, respectively, for the integral equation theory for electrolytes. The colligative properties are given as explicit functions of a scaling parameter Γ that can be estimated by a simple iteration procedure. Osmotic pressures, osmotic coefficients, and activity coefficients are calculated for model solutions with various chain lengths. They are in good agreement with molecular simulation and experimental results. © 1998 American Institute of Physics.
Directory of Open Access Journals (Sweden)
Roman Bauer
Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase in cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.
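The core stochastic argument, oncogenic mutations accumulating in a pool of dividing stem cells, can be sketched with a Poisson tail under an independent-hit approximation (all parameter values in the example are invented placeholders, not the paper's empirical estimates, and the model below ignores the age-dependence of cell number and division rate):

```python
import math

def p_transformed(age_years, n_cells, divisions_per_year, mu, hits):
    """Probability that at least one stem cell lineage accumulates
    `hits` oncogenic mutations by a given age, with mutation
    probability mu per division (independent-hit approximation)."""
    lam = age_years * divisions_per_year * mu  # expected mutations per lineage
    # P(one lineage has >= hits mutations), Poisson tail
    p_cell = 1.0 - sum(math.exp(-lam) * lam ** k / math.factorial(k)
                       for k in range(hits))
    # P(at least one of n_cells independent lineages is transformed)
    return 1.0 - (1.0 - p_cell) ** n_cells
```

Even this crude version reproduces the qualitative point of the abstract: with a fixed small per-division mutation rate, the probability of accumulating four or five hits rises steeply with age.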
Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward
2017-08-01
Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments but is more challenging in a mountainous high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven kinds of air pollutants (the gaseous air pollutants CO, NO2, NOx, O3 and SO2, and the particulate air pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime and the annual average of 5-year long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. In the mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometrical analysis. As a result, 269 independent variables were examined to develop the LUR models by using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross validation has been performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance of the annual averaged NO2 concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
Science and Technology Text Mining Basic Concepts
National Research Council Canada - National Science Library
Losiewicz, Paul
2003-01-01
...). It then presents some of the most widely used data and text mining techniques, including clustering and classification methods, such as nearest neighbor, relational learning models, and genetic...
A mechano-regulatory bone-healing model incorporating cell-phenotype specific activity
Isaksson, H.E.; Donkelaar, van C.C.; Huiskes, R.; Ito, K.
2008-01-01
Phenomenological computational models of tissue regeneration and bone healing have been only partially successful in predicting experimental observations. This may be a result of simplistic modeling of cellular activity. Furthermore, phenomenological models are limited when considering the effects
Energy Technology Data Exchange (ETDEWEB)
Sullivan, T.J.
1992-09-01
A project was initiated in March 1992 to (1) incorporate a rigorous organic acid representation, based on empirical data and geochemical considerations, into the MAGIC model of acidification response, and (2) test the revised model using three sets of independent data. After six months, the project is on schedule and the majority of the tasks outlined for Year 1 have been successfully completed. Major accomplishments to date include development of the organic acid modeling approach, using data from the Adirondack Lakes Survey Corporation (ALSC), and coupling of the organic acid model with MAGIC for chemical hindcast comparisons. The incorporation of an organic acid representation into MAGIC can account for much of the discrepancy earlier observed between MAGIC hindcasts and paleolimnological reconstructions of preindustrial pH and alkalinity for 33 statistically-selected Adirondack lakes. Additional work is ongoing for model calibration and testing with data from two whole-catchment artificial acidification projects. Results obtained thus far are being prepared as manuscripts for submission to the peer-reviewed scientific literature.
Energy Technology Data Exchange (ETDEWEB)
Fang, Yuan, E-mail: yuan.fang@fda.hhs.gov [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 and Department of Electrical and Computer Engineering, The University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada); Karim, Karim S. [Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada); Badano, Aldo [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States)
2014-01-15
Purpose: The authors describe the modification to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). Simple first-hit (FH) and more detailed but computationally expensive nearest-neighbor (NN) recombination algorithms are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/μm, while the recombination fraction decreased for SSA compared to the SUV model at 30 V/μm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated with the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature. For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation
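The two burst geometries compared here (SUV vs. SSA) differ only in how the radial coordinate of each generated pair is drawn. A sketch of that sampling step (function and parameter names are our own, not the ARTEMIS API):

```python
import math
import random

def sample_burst(n, radius, model, rng):
    """Sample n electron-hole pair positions for a burst centered at the
    origin: 'SUV' = uniform inside a spherical volume, 'SSA' = uniform
    on the spherical surface of the given radius."""
    points = []
    for _ in range(n):
        # isotropic direction via normalized Gaussian components
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        # uniform-in-volume needs the cube-root law for the radius
        r = radius if model == "SSA" else radius * rng.random() ** (1.0 / 3.0)
        points.append(tuple(c / norm * r for c in v))
    return points
```

The cube-root transform is what keeps the SUV cloud uniform in volume; sampling the radius uniformly instead would concentrate pairs near the center and change the recombination statistics.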
Arul, V; Masilamoni, J G; Jesudason, E P; Jaji, P J; Inayathullah, M; Dicky John, D G; Vignesh, S; Jayakumar, R
2012-05-01
Impaired wound healing in diabetes is a well-documented phenomenon. Emerging data favor the involvement of free radicals in the pathogenesis of diabetic wound healing. We investigated the beneficial role of the sustained release of reactive oxygen species (ROS) in diabetic dermal wound healing. In order to achieve sustained delivery of ROS in the wound bed, we incorporated glucose oxidase in the collagen matrix (GOIC), which is applied to the healing diabetic wound. Our in vitro proteolysis studies on incorporated GOIC show increased stability against proteases in the collagen matrix. In this study, GOIC film and collagen film (CF) were used as dressing material on the wounds of streptozotocin-induced diabetic rats. A significant increase in ROS (p < 0.05) was observed in the fibroblasts of the GOIC group during the inflammation period compared to the CF and control groups. This elevated level upregulated the antioxidant status in the granulation tissue and improved cellular proliferation in the GOIC group. Interestingly, our biochemical parameters (nitric oxide, hydroxyproline, uronic acid, protein, and DNA content in the healing wound) showed an increase in cell proliferation in the GOIC group when compared to the control and CF groups. In addition, evidence from wound contraction and histology reveals faster healing in the GOIC group. Our observations document that GOIC matrices could be effectively used for diabetic wound healing therapy.
Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan
2015-10-01
Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies.
Topological order in an exactly solvable 3D spin model
International Nuclear Information System (INIS)
Bravyi, Sergey; Leemhuis, Bernhard; Terhal, Barbara M.
2011-01-01
Research highlights: ► We study an exactly solvable spin model with six-qubit nearest-neighbor interactions on a 3D face-centered cubic lattice. ► The ground space of the model exhibits topological quantum order. ► Elementary excitations can be geometrically described as the corners of rectangular-shaped membranes. ► The ground space can encode 4g qubits, where g is the greatest common divisor of the lattice dimensions. ► Logical operators acting on the encoded qubits are described in terms of closed strings and closed membranes. - Abstract: We study a 3D generalization of the toric code model introduced recently by Chamon. This is an exactly solvable spin model with six-qubit nearest-neighbor interactions on an FCC lattice whose ground space exhibits topological quantum order. The elementary excitations of this model, which we call monopoles, can be geometrically described as the corners of rectangular-shaped membranes. We prove that the creation of an isolated monopole separated from other monopoles by a distance R requires an operator acting on Ω(R²) qubits. Composite particles that consist of two monopoles (dipoles) and four monopoles (quadrupoles) can be described as end-points of strings. The peculiar feature of the model is that dipole-type strings are rigid, that is, such strings must be aligned with face-diagonals of the lattice. For periodic boundary conditions the ground space can encode 4g qubits, where g is the greatest common divisor of the lattice dimensions. We describe a complete set of logical operators acting on the encoded qubits in terms of closed strings and closed membranes.
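The encoding count stated above (4g logical qubits, with g the greatest common divisor of the lattice dimensions) is a one-liner to evaluate; the lattice sizes used below are purely illustrative.

```python
from functools import reduce
from math import gcd

def encoded_qubits(lx, ly, lz):
    """Number of logical qubits for periodic boundary conditions,
    per the abstract: 4g, with g = gcd of the lattice dimensions."""
    return 4 * reduce(gcd, (lx, ly, lz))
```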
McGowan, Conor P.; Allan, Nathan; Servoss, Jeff; Hedwall, Shaula J.; Wooldridge, Brian
2017-01-01
Assessment of a species' status is a key part of management decision making for endangered and threatened species under the U.S. Endangered Species Act. Predicting the future state of the species is an essential part of species status assessment, and projection models can play an important role in developing predictions. We built a stochastic simulation model that incorporated parametric and environmental uncertainty to predict the probable future status of the Sonoran desert tortoise in the southwestern United States and North Central Mexico. Sonoran desert tortoise was a Candidate species for listing under the Endangered Species Act, and decision makers wanted to use model predictions in their decision making process. The model accounted for future habitat loss and possible effects of climate change induced droughts to predict future population growth rates, abundances, and quasi-extinction probabilities. Our model predicts that the population will likely decline over the next few decades, but there is very low probability of quasi-extinction less than 75 years into the future. Increases in drought frequency and intensity may increase extinction risk for the species. Our model helped decision makers predict and characterize uncertainty about the future status of the species in their listing decision. We incorporated complex ecological processes (e.g., climate change effects on tortoises) in transparent and explicit ways tailored to support decision making processes related to endangered species.
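The projection logic can be sketched as a simple Monte Carlo over annual growth-rate draws with episodic droughts. This is an illustrative sketch, not the authors' calibrated model: the parameter names, the lognormal growth step, and the drought treatment are assumptions.

```python
import numpy as np

def quasi_extinction_prob(n0, years, mean_r, sd_r, drought_p,
                          drought_effect, threshold, n_sims, seed=0):
    """Fraction of simulated trajectories that ever fall below the
    quasi-extinction threshold.  Environmental uncertainty enters via
    annual growth-rate draws; droughts depress growth in random years."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        n, below = float(n0), False
        for _ in range(years):
            r = rng.normal(mean_r, sd_r)
            if rng.random() < drought_p:
                r += drought_effect          # drought year: lower growth
            n *= np.exp(r)
            if n < threshold:
                below = True
                break
        hits += below
    return hits / n_sims
```

Parametric uncertainty could be layered on top by drawing mean_r and sd_r themselves from distributions in an outer loop.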
The transverse spin-1 Ising model with random interactions
Energy Technology Data Exchange (ETDEWEB)
Bouziane, Touria [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco)], E-mail: touria582004@yahoo.fr; Saber, Mohammed [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco); Dpto. Fisica Aplicada I, EUPDS (EUPDS), Plaza Europa, 1, San Sebastian 20018 (Spain)
2009-01-15
The phase diagrams of the transverse spin-1 Ising model with random interactions are investigated using a new technique in the effective field theory that employs a probability distribution within the framework of the single-site cluster theory based on the use of exact Ising spin identities. A model is adopted in which the nearest-neighbor exchange couplings are independent random variables distributed according to the law P(J_ij) = p δ(J_ij − J) + (1 − p) δ(J_ij − αJ). General formulae, applicable to lattices with coordination number N, are given. Numerical results are presented for a simple cubic lattice. The possible reentrant phenomenon displayed by the system due to the competitive effects between exchange interactions occurs for the appropriate range of the parameter α.
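Drawing bond couplings from this bimodal law is straightforward to sketch; J, α and p are the abstract's parameters, with the concrete values below chosen only for illustration.

```python
import numpy as np

def sample_couplings(n_bonds, J, alpha, p, seed=0):
    """Draw independent couplings from
    P(J_ij) = p δ(J_ij − J) + (1 − p) δ(J_ij − αJ)."""
    rng = np.random.default_rng(seed)
    take_J = rng.random(n_bonds) < p     # True with probability p
    return np.where(take_J, J, alpha * J)
```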
A Model for Brand Competition Within a Social Network
Huerta-Quintanilla, R.; Canto-Lugo, E.; Rodríguez-Achach, M.
An agent-based model was built representing an economic environment in which m brands are competing for a product market. The agents represent companies that interact within a social network, in which an agent persuades others to update or shift the brands of the products they are using. Decision rules were established that caused each agent to react according to the economic benefits it would receive; agents updated or shifted only if it was beneficial. Each agent can have only one of the m possible brands, and it can interact with its two nearest neighbors and another set of agents chosen according to a particular set of rules in the network topology. An absorbing state was always reached in which a single brand monopolized the network (known as condensation). The variation of the condensation time as a function of the model parameters is studied, including an analysis of brand competition on different network topologies.
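The imitation dynamics leading to condensation can be sketched as a voter-model-style simulation on a ring. This is a deliberately stripped-down stand-in: the paper's economic decision rules and the extra non-neighbor links are omitted, and agents here simply copy a random nearest neighbor.

```python
import random

def condensation_time(n_agents, n_brands, seed=3, max_steps=500_000):
    """Run single-agent updates (copy a random nearest neighbor's brand)
    until one brand monopolizes the ring; return the number of updates,
    or None if the cap is reached first."""
    rng = random.Random(seed)
    brands = [i % n_brands for i in range(n_agents)]   # alternating start
    for step in range(1, max_steps + 1):
        i = rng.randrange(n_agents)
        j = (i + rng.choice((-1, 1))) % n_agents       # one of two neighbors
        brands[i] = brands[j]
        if len(set(brands)) == 1:
            return step
    return None
```

For a ring of N agents the expected condensation time of such pure-imitation dynamics scales roughly with N², so small systems condense quickly.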
Role of spin-orbit coupling in the Kugel-Khomskii model on the honeycomb lattice
Koga, Akihisa; Nakauchi, Shiryu; Nasu, Joji
2018-03-01
We study the effective spin-orbital model for honeycomb-layered transition metal compounds, applying the second-order perturbation theory to the three-orbital Hubbard model with the anisotropic hoppings. This model is reduced to the Kitaev model in the strong spin-orbit coupling limit. Combining the cluster mean-field approximations with the exact diagonalization, we treat the Kugel-Khomskii type superexchange interaction and spin-orbit coupling on an equal footing to discuss ground-state properties. We find that a zigzag ordered state is realized in the model within nearest-neighbor interactions. We clarify how the ordered state competes with the nonmagnetic state, which is adiabatically connected to the quantum spin liquid state realized in a strong spin-orbit coupling limit. Thermodynamic properties are also addressed. The present paper should provide another route to account for the Kitaev-based magnetic properties in candidate materials.
Simplex network modeling for press-molded ceramic bodies incorporated with granite waste
International Nuclear Information System (INIS)
Pedroti, L.G.; Vieira, C.M.F.; Alexandre, J.; Monteiro, S.N.; Xavier, G.C.
2012-01-01
Extrusion of a clay body is the most commonly applied process in the ceramic industries for manufacturing structural blocks. Nowadays, the assembly of such blocks through a fitting system that facilitates the final mounting is gaining attention owing to the savings in material and the reduction in the cost of building construction. In this work, the ideal composition of clay bodies incorporated with granite powder waste was investigated for the production of press-molded ceramic blocks. An experimental design was applied to determine the optimum properties and microstructures, involving not only the precursor compositions but also the pressing and temperature conditions. Press loads from 15 tons and temperatures from 850 to 1050°C were considered. The results indicated mechanical strength varying from 2 MPa to 20 MPa and water absorption varying from 19% to 30%. (author)
Lizarralde, I; Fernández-Arévalo, T; Brouckaert, C; Vanrolleghem, P; Ikumi, D S; Ekama, G A; Ayesa, E; Grau, P
2015-05-01
This paper introduces a new general methodology for incorporating physico-chemical and chemical transformations into multi-phase wastewater treatment process models in a systematic and rigorous way under a Plant-Wide modelling (PWM) framework. The methodology presented in this paper requires the selection of the relevant biochemical, chemical and physico-chemical transformations taking place and the definition of the mass transport for the co-existing phases. As an example a mathematical model has been constructed to describe a system for biological COD, nitrogen and phosphorus removal, liquid-gas transfer, precipitation processes, and chemical reactions. The capability of the model has been tested by comparing simulated and experimental results for a nutrient removal system with sludge digestion. Finally, a scenario analysis has been undertaken to show the potential of the obtained mathematical model to study phosphorus recovery. Copyright © 2015 Elsevier Ltd. All rights reserved.
Using stochastic models to incorporate spatial and temporal variability [Exercise 14]
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...
A Mass Balance Model for Designing Green Roof Systems that Incorporate a Cistern for Re-Use
Directory of Open Access Journals (Sweden)
Manoj Chopra
2012-11-01
Green roofs, which have been used for several decades in many parts of the world, offer a unique and sustainable approach to stormwater management. Within this paper, evidence is presented on water retention for an irrigated green roof system. The presented green roof design results in a water retention volume on site. A first principle mass balance computer model is introduced to assist with the design of these green roof systems which incorporate a cistern to capture and reuse runoff waters for irrigation of the green roof. The model is used to estimate yearly stormwater retention volume for different cistern storage volumes. Additionally, the Blaney and Criddle equation is evaluated for estimation of monthly evapotranspiration rates for irrigated systems and incorporated into the model. This is done so evapotranspiration rates can be calculated for regions where historical data does not exist, allowing the model to be used anywhere historical weather data are available. This model is developed and discussed within this paper as well as compared to experimental results.
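A first-principles cistern mass balance of the kind described can be sketched on a daily timestep. This is a simplified illustration: the runoff coefficient, the use of the ET deficit as irrigation demand, and the Blaney-Criddle form ET₀ = p(0.46T + 8) in mm/day are assumptions of the sketch, not the paper's model.

```python
def blaney_criddle_et(mean_temp_c, day_frac_p):
    """Blaney-Criddle reference ET (mm/day): ET0 = p(0.46*T + 8),
    where p is the mean daily percentage of annual daytime hours."""
    return day_frac_p * (0.46 * mean_temp_c + 8.0)

def simulate_cistern(rainfall_mm, et_mm, roof_area_m2, runoff_coeff,
                     capacity_m3):
    """Daily mass balance: roof runoff fills the cistern, irrigation
    demand (ET not met by rain) draws it down, excess overflows."""
    storage = overflow = supplied = 0.0
    for rain, et in zip(rainfall_mm, et_mm):
        inflow = runoff_coeff * roof_area_m2 * rain / 1000.0   # m3
        demand = roof_area_m2 * max(et - rain, 0.0) / 1000.0   # m3
        storage += inflow
        if storage > capacity_m3:                # spill any excess
            overflow += storage - capacity_m3
            storage = capacity_m3
        draw = min(demand, storage)              # irrigate from storage
        storage -= draw
        supplied += draw
    return storage, overflow, supplied
```

By construction the balance closes exactly: total inflow equals final storage plus overflow plus irrigation supplied.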
Incorporating Cold Cap Behavior in a Joule-heated Waste Glass Melter Model
Energy Technology Data Exchange (ETDEWEB)
Varija Agarwal; Donna Post Guillen
2013-08-01
In this paper, an overview of Joule-heated waste glass melters used in the vitrification of high level waste (HLW) is presented, with a focus on the cold cap region. This region, in which feed-to-glass conversion reactions occur, is critical in determining the melting properties of any given glass melter. An existing 1D computer model of the cold cap, implemented in MATLAB, is described in detail. This model is a standalone model that calculates cold cap properties based on boundary conditions at the top and bottom of the cold cap. Efforts to couple this cold cap model with a 3D STAR-CCM+ model of a Joule-heated melter are then described. The coupling is being implemented in ModelCenter, a software integration tool. The ultimate goal of this model is to guide the specification of melter parameters that optimize glass quality and production rate.
Directory of Open Access Journals (Sweden)
Natalya Pya
2016-02-01
Background: Measurements of tree heights and diameters are essential in forest assessment and modelling. Tree heights are used for estimating timber volume, site index and other important variables related to forest growth and yield, succession and carbon budget models. However, the diameter at breast height (dbh) can be more accurately obtained, and at lower cost, than total tree height. Hence, generalized height-diameter (h-d) models that predict tree height from dbh, age and other covariates are needed. For a more flexible but biologically plausible estimation of covariate effects we use shape constrained generalized additive models as an extension of existing h-d model approaches. We use causal site parameters such as index of aridity to enhance the generality and causality of the models and to enable predictions under projected changeable climatic conditions. Methods: We develop unconstrained generalized additive models (GAM) and shape constrained generalized additive models (SCAM) for investigating the possible effects of tree-specific parameters such as tree age, relative diameter at breast height, and site-specific parameters such as index of aridity and sum of daily mean temperature during the vegetation period, on the h-d relationship of forests in Lower Saxony, Germany. Results: Some of the derived effects, e.g. the effects of age, index of aridity and sum of daily mean temperature, have significantly non-linear patterns. The need for using SCAM results from the fact that some of the model effects show partially implausible patterns, especially at the boundaries of data ranges. The derived model predicts monotonically increasing levels of tree height with increasing age and temperature sum and decreasing aridity and social rank of a tree within a stand. The definition of constraints leads only to marginal or minor decline in model statistics like AIC. An observed structured spatial trend in tree height is modelled via 2-dimensional surface
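The core idea of a shape constraint (here, height non-decreasing in dbh) can be illustrated with the pool-adjacent-violators algorithm applied to heights ordered by diameter. This is a minimal stand-in for the spline-based SCAM machinery, not the authors' implementation.

```python
import numpy as np

def isotonic_fit(y, w=None):
    """Pool-adjacent-violators: the least-squares fit to y constrained to
    be non-decreasing -- the simplest form of shape-constrained smoothing."""
    y = np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    level, weight = list(y), list(w)
    blocks = [[i] for i in range(len(y))]
    i = 0
    while i < len(level) - 1:
        if level[i] > level[i + 1]:          # monotonicity violated: merge
            tot = weight[i] + weight[i + 1]
            level[i] = (weight[i] * level[i]
                        + weight[i + 1] * level[i + 1]) / tot
            weight[i] = tot
            blocks[i] += blocks[i + 1]
            del level[i + 1], weight[i + 1], blocks[i + 1]
            i = max(i - 1, 0)                # merged block may violate left
        else:
            i += 1
    fit = np.empty(len(y))
    for lev, idx in zip(level, blocks):
        fit[idx] = lev
    return fit
```

A full SCAM replaces these step levels with smooth monotone spline bases, but the constraint logic is the same.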
Zeng, Huawei; Botnen, James H; Johnson, Luann K
2008-01-01
Assessing the ability of a selenium (Se) sample to induce cellular glutathione peroxidase (GPx) activity in Se-deficient animals is the most commonly used method to determine Se bioavailability. Our goal is to establish a Se-deficient cell culture model with differential incorporation of Se chemical forms into GPx, which may complement the in vivo studies. In the present study, we developed a Se-deficient Caco-2 cell model with a serum gradual reduction method. It is well recognized that selenomethionine (SeMet) is the major nutritional source of Se; therefore, SeMet, selenite, or methylselenocysteine (SeMSC) was added to cell culture media with different concentrations and treatment time points. We found that selenite and SeMSC induced GPx more rapidly than SeMet. However, SeMet was better retained as it is incorporated into proteins in place of methionine; compared with 8-, 24-, or 48-h treatment, 72-h Se treatment was a more sensitive time point to measure the potential of GPx induction in all tested concentrations. Based on induction of GPx activity, the cellular bioavailability of Se from an extract of selenobroccoli after a simulated gastrointestinal digestion was comparable with that of SeMSC and SeMet. These in vitro data are, for the first time, consistent with previous published data regarding selenite and SeMet bioavailability in animal models and Se chemical speciation studies with broccoli. Thus, Se-deficient Caco-2 cell model with differential incorporation of chemical or food forms of Se into GPx provides a new tool to study the cellular mechanisms of Se bioavailability.
Phosphorus vacancy cluster model for phosphorus diffusion gettering of metals in Si
Energy Technology Data Exchange (ETDEWEB)
Chen, Renyu; Trzynadlowski, Bart; Dunham, Scott T. [Department of Electrical Engineering, University of Washington, Seattle, Washington 98195 (United States)
2014-02-07
In this work, we develop models for the gettering of metals in silicon by high phosphorus concentration. We first performed ab initio calculations to determine favorable configurations of complexes involving phosphorus and transition metals (Fe, Cu, Cr, Ni, Ti, Mo, and W). Our ab initio calculations found that the P₄V cluster, a vacancy surrounded by 4 nearest-neighbor phosphorus atoms, which is the most favorable inactive P species in heavily doped Si, strongly binds metals such as Cu, Cr, Ni, and Fe. Based on the calculated binding energies, we build continuum models to describe the P deactivation and Fe gettering processes with model parameters calibrated against experimental data. In contrast to previous models assuming metal-P₁V or metal-P₂V as the gettered species, the binding of metals to P₄V satisfactorily explains the experimentally observed strong gettering behavior at high phosphorus concentrations.
Interaction of a single mode field cavity with the 1D XY model: Energy spectrum
International Nuclear Information System (INIS)
Tonchev, H; Donkov, A A; Chamati, H
2016-01-01
In this work we use the Jaynes-Cummings model, fundamental in quantum optics, to study the response of a spin-1/2 chain to a single mode of laser light falling on one of the spins, a focused interaction model between the light and the spin chain. For the spin-spin interaction along the chain we use the XY model. We report here the exact analytical results, obtained with the help of a computer algebra system, for the energy spectrum in this model for chains of up to 4 spins with nearest-neighbor interactions, for either open or cyclic chain configurations. Varying the sign and magnitude of the spin exchange coupling relative to the light-spin interaction we have investigated both cases of ferromagnetic and antiferromagnetic spin chains. (paper)
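A numerical counterpart of such a Hamiltonian is easy to assemble with tensor products: a truncated cavity mode Jaynes-Cummings-coupled to the first spin of an XY chain. This is a toy construction, not the computer-algebra treatment of the paper; the parameter names (ω, g, J), the rotating-wave coupling form and the boson truncation n_max are assumptions of the sketch.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex) / 2
sy = np.array([[0, -1j], [1j, 0]], complex) / 2
sp = sx + 1j * sy            # spin raising operator
sm = sx - 1j * sy            # spin lowering operator

def kron_all(ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def hamiltonian(L, J, omega, g, n_max, cyclic=False):
    """Cavity (truncated at n_max photons) JC-coupled to spin 0 of an
    L-site XY chain with nearest-neighbor exchange J."""
    nb = n_max + 1
    a = np.diag(np.sqrt(np.arange(1, nb)), 1)          # boson annihilation
    id_s, id_b = np.eye(2), np.eye(nb)
    dim = nb * 2**L
    H = np.zeros((dim, dim), complex)
    # cavity photon energy: omega * a†a
    H += kron_all([omega * (a.conj().T @ a)] + [id_s] * L)
    # JC light-spin coupling on site 0: g (a σ+ + a† σ−)
    H += g * (kron_all([a, sp] + [id_s] * (L - 1))
              + kron_all([a.conj().T, sm] + [id_s] * (L - 1)))
    # XY exchange: J (SxSx + SySy) on each bond
    bonds = [(i, i + 1) for i in range(L - 1)]
    if cyclic and L > 2:
        bonds.append((L - 1, 0))
    for (i, j) in bonds:
        for s in (sx, sy):
            ops = [id_b] + [id_s] * L
            ops[1 + i] = s
            ops[1 + j] = s
            H += J * kron_all(ops)
    return H
```

The spectrum then follows from np.linalg.eigvalsh(H); varying the sign of J switches between the ferromagnetic and antiferromagnetic cases.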
Bestley, Sophie; Jonsen, Ian D; Hindell, Mark A; Guinet, Christophe; Charrassin, Jean-Benoît
2013-01-07
A fundamental goal in animal ecology is to quantify how environmental (and other) factors influence individual movement, as this is key to understanding responsiveness of populations to future change. However, quantitative interpretation of individual-based telemetry data is hampered by the complexity of, and error within, these multi-dimensional data. Here, we present an integrative hierarchical Bayesian state-space modelling approach where, for the first time, the mechanistic process model for the movement state of animals directly incorporates both environmental and other behavioural information, and observation and process model parameters are estimated within a single model. When applied to a migratory marine predator, the southern elephant seal (Mirounga leonina), we find the switch from directed to resident movement state was associated with colder water temperatures, relatively short dive bottom time and rapid descent rates. The approach presented here can have widespread utility for quantifying movement-behaviour (diving or other)-environment relationships across species and systems.
Directory of Open Access Journals (Sweden)
Dirk Temme
2008-12-01
Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.
Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model
DEFF Research Database (Denmark)
Olivares Hernandez, Roberto
Based on stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify the metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae the genome-scale model has been constructed…
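The flux-quantification idea behind genome-scale models can be sketched as a tiny flux balance analysis problem: maximize a biomass flux subject to the steady-state constraint S·v = 0 and capacity bounds. The three-reaction network below is hypothetical, not the S. cerevisiae model, and scipy is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network around one internal metabolite A:
#   v1: uptake -> A,  v2: A -> biomass,  v3: A -> maintenance drain
#                  v1    v2    v3
S = np.array([[1.0, -1.0, -1.0]])     # steady-state balance of A

c = [0.0, -1.0, 0.0]                  # linprog minimizes, so maximize v2
bounds = [(0.0, 10.0),                # uptake capacity
          (0.0, None),                # biomass flux unbounded above
          (1.0, 1.0)]                 # fixed maintenance demand

res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds)
print(res.x)                          # optimal flux distribution
```

With uptake capped at 10 and a maintenance drain of 1, the steady-state constraint forces the optimal biomass flux to 9, which is the kind of quantitative statement a genome-scale model makes at much larger scale.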
International Nuclear Information System (INIS)
McLoughlin, R.F.; Ryan, M.V.; Heuston, P.M.; McCoy, C.T.; Masterson, J.B.
1992-01-01
The purpose of this study was to construct and evaluate a statistical model for the quantitative analysis of computed tomographic brain images. Data were derived from standard sections in 34 normal studies. A model representing the intracranial pure tissue and partial volume areas, with allowance for beam hardening, was developed. The average percentage error in estimation of areas, derived from phantom tests using the model, was 28.47%. We conclude that our model is not sufficiently accurate to be of clinical use, even though allowance was made for partial volume and beam hardening effects. (author)
Directory of Open Access Journals (Sweden)
Rogier Westerhoff
2018-01-01
A nationwide model of groundwater recharge for New Zealand (NGRM), as described in this paper, demonstrated the benefits of satellite data and global models to improve the spatial definition of recharge and the estimation of recharge uncertainty. NGRM was inspired by the global-scale WaterGAP model but with the key development of rainfall recharge calculation on scales relevant to national- and catchment-scale studies (i.e., a 1 km × 1 km cell size and a monthly timestep in the period 2000–2014), provided by satellite data (i.e., MODIS-derived evapotranspiration (AET) and vegetation) in combination with national datasets of rainfall, elevation, soil and geology. The resulting nationwide model calculates groundwater recharge estimates, including their uncertainty, consistent across the country, which makes the model unique compared to all other New Zealand estimates targeted towards groundwater recharge. At the national scale, NGRM estimated an average recharge of 2500 m³/s, or 298 mm/year, with a model uncertainty of 17%. Those results were similar to the WaterGAP model, but the improved input data resulted in better spatial characteristics of recharge estimates. Multiple uncertainty analyses led to these main conclusions: the NGRM model could give valuable initial estimates in data-sparse areas, since it compared well to most ground-observed lysimeter data and local recharge models; and the nationwide input data of rainfall and geology caused the largest uncertainty in the model equation, which revealed that the satellite data could improve spatial characteristics without significantly increasing the uncertainty. Clearly the increasing volume and availability of large-scale satellite data is creating more opportunities for the application of national-scale models at the catchment, and smaller, scales. This should result in improved utility of these models including provision of initial estimates in data-sparse areas. Topics for future
Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa
2013-05-10
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
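The 0/1 project-selection core of such an IP model can be illustrated by brute force on a toy portfolio. The project names, costs and expected NPVs below are invented for illustration, and the real model additionally optimizes sample sizes and trial schedules.

```python
from itertools import combinations

def optimize_portfolio(projects, budget):
    """Exhaustively choose the subset of projects maximizing total
    expected NPV subject to a budget constraint -- a brute-force
    stand-in for the integer program (fine for small portfolios).
    projects: list of (name, cost, expected_npv) tuples."""
    best, best_val = (), 0.0
    for r in range(len(projects) + 1):
        for combo in combinations(projects, r):
            cost = sum(p[1] for p in combo)
            val = sum(p[2] for p in combo)
            if cost <= budget and val > best_val:
                best, best_val = combo, val
    return [p[0] for p in best], best_val
```

Re-optimizing over time, as the stochastic IP extension does, amounts to re-running the selection as budgets and the set of available drugs change.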
Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun
2018-03-15
Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces self-adaptive shrinkage amount on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.
Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.
2017-12-01
The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment for broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. When sound sources are determined, for the sound propagation, Acoustic Perturbation Equations (APE-4) are solved in the time domain. Results of the method's application for two aerofoil benchmark cases, with both sharp and blunt trailing edges, are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and incorporated. Encouraging results have been obtained for benchmark test cases using the new technique, which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.
International Nuclear Information System (INIS)
Lee, Timothy; Yao, Runming
2013-01-01
The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures. - Highlights: ► Long term energy models are reviewed with a focus on UK domestic stock models. ► Existing models are found weak in modelling green technology buying behaviour. ► Agent models, Markov chains and neural networks are considered as solutions. ► Agent-based modelling (ABM) is found to be the most promising approach. ► A prototype ABM is developed and testing indicates a lot of potential.
Walsh, Daniel P.; Norton, Andrew S.; Storm, Daniel J.; Van Deelen, Timothy R.; Heisy, Dennis M.
2018-01-01
Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause-specific mortality provide an example of implicit use of expert knowledge when causes-of-death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause-specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause-of-death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event-time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause-of-death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause-of-death assignment in modeling of cause-specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause-specific survival data for white-tailed deer, and compared results. We demonstrate that model selection
Barrios, J. M.; Verstraeten, W. W.; Farifteh, J.; Maes, P.; Aerts, J. M.; Coppin, P.
2012-04-01
Lyme borreliosis (LB) is the most common tick-borne disease in Europe and incidence growth has been reported in several European countries during the last decade. LB is caused by the bacterium Borrelia burgdorferi and the main vector of this pathogen in Europe is the tick Ixodes ricinus. LB incidence and spatial spread are greatly dependent on environmental conditions impacting habitat, demography and trophic interactions of ticks and the wide range of organisms that ticks parasitize. The landscape configuration is also a major determinant of tick habitat conditions and, very importantly, of the fashion and intensity of human interaction with vegetated areas, i.e. human exposure to the pathogen. Hence, spatial notions such as distance and adjacency between urban and vegetated environments are related to human exposure to tick bites and, thus, to risk. This work tested the adequacy of a gravity model setting to model the observed spatio-temporal pattern of LB as a function of location and size of urban and vegetated areas and the seasonal and annual change in the vegetation dynamics as expressed by MODIS NDVI. Opting for this approach implies an analogy with Newton's law of universal gravitation, in which the attraction forces between two bodies are directly proportional to the bodies' masses and inversely proportional to distance. Similar implementations have proven useful in fields like trade modeling, health care service planning, and disease mapping, among others. In our implementation, the size of human settlements and vegetated systems and the distance separating these landscape elements are considered the 'bodies'; and the 'attraction' between them is an indicator of exposure to the pathogen. A novel element of this implementation is the incorporation of NDVI to account for the seasonal and annual variation in risk. The importance of incorporating this indicator of vegetation activity resides in the fact that alterations of the LB incidence pattern observed during the last decade have been ascribed
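The gravity-model analogy described above can be sketched in a few lines; the functional form, parameter values, and function name are illustrative assumptions, not the exact formulation of the study:

```python
def exposure_indicator(urban_size, veg_size, distance_km, ndvi):
    """Gravity-type 'attraction' between a human settlement and a vegetated
    patch: proportional to the sizes of both 'bodies', inversely proportional
    to squared distance, and modulated by vegetation activity (NDVI)."""
    return ndvi * urban_size * veg_size / distance_km ** 2

# A nearby patch contributes more exposure than a distant one of equal size,
# and a seasonal rise in NDVI raises the indicator for the same geometry.
near = exposure_indicator(urban_size=5000, veg_size=200, distance_km=2.0, ndvi=0.7)
far = exposure_indicator(urban_size=5000, veg_size=200, distance_km=8.0, ndvi=0.7)
```

Summing such terms over all settlement-patch pairs would give a settlement-level exposure surface that varies through the year with NDVI.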
Anisotropic Heisenberg model for a semi-infinite crystal
International Nuclear Information System (INIS)
Queiroz, C.A.
1985-11-01
We study a semi-infinite Heisenberg model with exchange interactions between nearest and next-nearest neighbors in a simple cubic lattice. The free surface is distinguished from the other layers of magnetic ions by choosing a single-ion uniaxial anisotropy at the surface (Ds) different from the anisotropy in the other layers (D). Using the Green function formalism, we obtain the behavior of the magnetization as a function of temperature for each layer, as well as the spectrum of localized magnons, for several values of the ratio Ds/D. Above a critical value of this ratio, a ferromagnetic surface layer is obtained while the other layers are already in the paramagnetic phase. In this situation the critical temperature of the surface becomes larger than the critical temperature of the bulk. (Author) [pt
Anti-ferromagnetic Heisenberg model on bilayer honeycomb
International Nuclear Information System (INIS)
Shoja, M.; Shahbazi, F.
2012-01-01
Recent experiments on the spin-3/2 bilayer honeycomb lattice antiferromagnet Bi3Mn4O12(NO3) show spin-liquid behavior down to very low temperatures. This behavior can be ascribed to the frustration effect due to competition between first- and second-nearest-neighbor antiferromagnetic interactions. Motivated by the experiment, we study the J1-J2 antiferromagnetic Heisenberg model using mean-field theory. This calculation shows a highly degenerate ground state. We also calculate the effect of second-nearest neighbors along the z direction and show that these neighbors also increase the frustration in these systems. Because of the degenerate ground state, the spins cannot find any configuration in which to freeze at low temperatures. This behavior indicates a novel spin-liquid state down to very low temperatures.
On Rationality of Decision Models Incorporating Emotion-Related Valuing and Hebbian Learning
Treur, J.; Umair, M.
2011-01-01
In this paper an adaptive decision model based on predictive loops through feeling states is analysed from the perspective of rationality. Four different variations of Hebbian learning are considered for different types of connections in the decision model. To assess the extent of rationality, a
Becky K. Kerns; Miles A. Hemstrom; David Conklin; Gabriel I. Yospin; Bart Johnson; Dominique Bachelet; Scott Bridgham
2012-01-01
Understanding landscape vegetation dynamics often involves the use of scientifically-based modeling tools that are capable of testing alternative management scenarios given complex ecological, management, and social conditions. State-and-transition simulation model (STSM) frameworks and software such as PATH and VDDT are commonly used tools that simulate how landscapes...
Incorporating additional tree and environmental variables in a lodgepole pine stem profile model
John C. Byrne
1993-01-01
A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...
Lowe, James; Carter, Merilyn; Cooper, Tom
2018-01-01
Mathematical models are conceptual processes that use mathematics to describe, explain, and/or predict the behaviour of complex systems. This article is written for teachers of mathematics in the junior secondary years (including out-of-field teachers of mathematics) who may be unfamiliar with mathematical modelling, to explain the steps involved…
Incorporating Response Times in Item Response Theory Models of Reading Comprehension Fluency
Su, Shiyang
2017-01-01
With the online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized and the literature of modeling times has been growing during the last few decades. Previous studies have tried to formulate models and theories to…
Wilson, Kaitlyn P.
2013-01-01
Purpose: Video modeling is an intervention strategy that has been shown to be effective in improving the social and communication skills of students with autism spectrum disorders, or ASDs. The purpose of this tutorial is to outline empirically supported, step-by-step instructions for the use of video modeling by school-based speech-language…
LINKING MICROBES TO CLIMATE: INCORPORATING MICROBIAL ACTIVITY INTO CLIMATE MODELS COLLOQUIUM
Energy Technology Data Exchange (ETDEWEB)
DeLong, Edward; Harwood, Caroline; Reid, Ann
2011-01-01
This report explains the connection between microbes and climate, discusses in general terms what modeling is and how it is applied to climate, and discusses the need for knowledge in microbial physiology, evolution, and ecology to contribute to the determination of fluxes and rates in climate models. It recommends a multi-pronged approach to address the gaps.
A model for arsenic anti-site incorporation in GaAs grown by hydride vapor phase epitaxy
Energy Technology Data Exchange (ETDEWEB)
Schulte, K. L.; Kuech, T. F. [Department of Chemical and Biological Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States)
2014-12-28
GaAs growth by hydride vapor phase epitaxy (HVPE) has regained interest as a potential route to low cost, high efficiency thin film photovoltaics. In order to attain the highest efficiencies, deep level defect incorporation in these materials must be understood and controlled. The arsenic anti-site defect, As_Ga or EL2, is the predominant deep level defect in HVPE-grown GaAs. In the present study, the relationships between HVPE growth conditions and incorporation of EL2 in GaAs epilayers were determined. Epitaxial n-GaAs layers were grown under a wide range of deposition temperatures (T_D) and gallium chloride partial pressures (P_GaCl), and the EL2 concentration, [EL2], was determined by deep level transient spectroscopy. [EL2] agreed with equilibrium thermodynamic predictions in layers grown under conditions in which the growth rate, R_G, was controlled by conditions near thermodynamic equilibrium. [EL2] fell below equilibrium levels when R_G was controlled by surface kinetic processes, with the disparity increasing as R_G decreased. The surface chemical composition during growth was determined to have a strong influence on EL2 incorporation. Under thermodynamically limited growth conditions, e.g., high T_D and/or low P_GaCl, the surface vacancy concentration was high and the bulk crystal was close to equilibrium with the vapor phase. Under kinetically limited growth conditions, e.g., low T_D and/or high P_GaCl, the surface attained a high GaCl coverage, blocking As adsorption. This competitive adsorption process reduced the growth rate and also limited the amount of arsenic that incorporated as As_Ga. A defect incorporation model which accounted for the surface concentration of arsenic as a function of the growth conditions, was developed. This model was used to identify optimal growth parameters for the growth of thin films for photovoltaics, conditions in which a high growth rate and low [EL2] could be
A LabVIEW model incorporating an open-loop arterial impedance and a closed-loop circulatory system.
Cole, R T; Lucas, C L; Cascio, W E; Johnson, T A
2005-11-01
While numerous computer models exist for the circulatory system, many are limited in scope, contain unwanted features or incorporate complex components specific to unique experimental situations. Our purpose was to develop a basic, yet multifaceted, computer model of the left heart and systemic circulation in LabVIEW having universal appeal without sacrificing crucial physiologic features. The program we developed employs Windkessel-type impedance models in several open-loop configurations and a closed-loop model coupling a lumped impedance and ventricular pressure source. The open-loop impedance models demonstrate afterload effects on arbitrary aortic pressure/flow inputs. The closed-loop model catalogs the major circulatory waveforms with changes in afterload, preload, and left heart properties. Our model provides an avenue for expanding the use of the ventricular equations through closed-loop coupling that includes a basic coronary circuit. Tested values used for the afterload components and the effects of afterload parameter changes on various waveforms are consistent with published data. We conclude that this model offers the ability to alter several circulatory factors and digitally catalog the most salient features of the pressure/flow waveforms employing a user-friendly platform. These features make the model a useful instructional tool for students as well as a simple experimental tool for cardiovascular research.
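As a rough illustration of the Windkessel-type afterload components such models employ, here is a minimal two-element Windkessel sketch (the pulse shape, parameter values, and units are invented for illustration and are not those of the LabVIEW model):

```python
import math

def simulate_windkessel(R=1.0, C=1.5, T=0.8, dt=1e-4, n_beats=20):
    """Two-element Windkessel: C*dP/dt = Q(t) - P/R, driven by a half-sine
    systolic inflow pulse. Returns the pressure samples of the last beat,
    after transients from the initial condition have decayed."""
    t_sys = 0.3 * T  # systolic ejection duration

    def q(t):
        phase = t % T
        return 400.0 * math.sin(math.pi * phase / t_sys) if phase < t_sys else 0.0

    p = 80.0  # initial pressure (arbitrary units)
    last_beat = []
    for i in range(int(n_beats * T / dt)):
        t = i * dt
        p += dt * (q(t) - p / R) / C  # forward-Euler step
        if t >= (n_beats - 1) * T:
            last_beat.append(p)
    return last_beat

pressures = simulate_windkessel()
```

Plotting `pressures` shows the familiar systolic rise and exponential diastolic decay with time constant R*C; the three-element (Rc-C-Rp) variant adds a characteristic impedance in series.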
Druce, Donald J.
1990-01-01
A monthly stochastic dynamic programing model was recently developed and implemented at British Columbia (B.C.) Hydro to provide decision support for short-term energy exports and, if necessary, for flood control on the Peace River in northern British Columbia. The model establishes the marginal cost of supplying energy from the B.C. Hydro system, as well as a monthly operating policy for the G.M. Shrum and Peace Canyon hydroelectric plants and the Williston Lake storage reservoir. A simulation model capable of following the operating policy then determines the probability of refilling Williston Lake and possible spill rates and volumes. Reservoir inflows are input to both models in daily and monthly formats. The results indicate that flood control can be accommodated without sacrificing significant export revenue.
Incorporating Pass-Phrase Dependent Background Models for Text-Dependent Speaker verification
DEFF Research Database (Denmark)
Sarkar, Achintya Kumar; Tan, Zheng-Hua
2018-01-01
In this paper, we propose pass-phrase dependent background models (PBMs) for text-dependent (TD) speaker verification (SV) to integrate the pass-phrase identification process into the conventional TD-SV system, where a PBM is derived from a text-independent background model through adaptation using the utterances of a particular pass-phrase. During training, pass-phrase specific target speaker models are derived from the particular PBM using the training data for the respective target model. While testing, the best PBM is first selected for the test utterance in the maximum likelihood (ML) sense... We show that the proposed method significantly reduces the error rates of text-dependent speaker verification for the non-target types, target-wrong and impostor-wrong, while it maintains comparable TD-SV performance with respect to the conventional system when impostors speak a correct utterance.
Incorporation of sedimentological data into a calibrated groundwater flow and transport model
International Nuclear Information System (INIS)
Williams, N.J.; Young, S.C.; Barton, D.H.; Hurst, B.T.
1997-01-01
Analysis suggests that a high hydraulic conductivity (K) zone is associated with a former river channel at the Portsmouth Gaseous Diffusion Plant (PORTS). Two-dimensional (2-D) and three-dimensional (3-D) groundwater flow models were developed based on a sedimentological model to demonstrate the performance of a horizontal well for plume capture. The model produced a flow field with magnitudes and directions consistent with flow paths inferred from historical trichloroethylene (TCE) plume data. The most dominant feature affecting the well's performance was the preferential high- and low-K zones. Based on results from the calibrated flow and transport model, a passive groundwater collection system was designed and built. Initial flow rates and concentrations measured from a gravity-drained horizontal well agree closely with predicted values
DEFF Research Database (Denmark)
Sanchez, Benjamin J.; Zhang, Xi-Cheng; Nilsson, Avlant
2017-01-01
..., which act as limitations on metabolic fluxes, are not taken into account. Here, we present GECKO, a method that enhances a GEM to account for enzymes as part of reactions, thereby ensuring that each metabolic flux does not exceed its maximum capacity, equal to the product of the enzyme's abundance and turnover number. We applied GECKO to a Saccharomyces cerevisiae GEM and demonstrated that the new model could correctly describe phenotypes that the previous model could not, particularly under high enzymatic pressure conditions, such as yeast growing on different carbon sources in excess, coping with stress, or overexpressing a specific pathway. GECKO also allows one to directly integrate quantitative proteomics data; by doing so, we significantly reduced the flux variability of the model in over 60% of metabolic reactions. Additionally, the model gives insight into the distribution of enzyme usage between...
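The capacity constraint at the heart of the GECKO approach, that a flux cannot exceed the product of enzyme abundance and turnover number, can be sketched as follows (the unit conventions and example numbers are illustrative assumptions, not values from the paper):

```python
def max_flux(kcat_per_s, enzyme_mmol_per_gDW):
    """GECKO-style enzyme capacity bound: the upper limit on a reaction's
    flux (mmol/gDW/h) is kcat (1/s, converted to 1/h) times the enzyme's
    abundance (mmol/gDW)."""
    return kcat_per_s * 3600.0 * enzyme_mmol_per_gDW

# An enzyme with kcat = 100/s present at 1e-5 mmol/gDW caps its reaction
# at 3.6 mmol/gDW/h; doubling abundance doubles the cap.
cap = max_flux(100.0, 1e-5)
```

In a full implementation, one such inequality is added per enzyme-catalyzed reaction, which is how proteomics measurements shrink the model's feasible flux space.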
RADMAP: ''as-built'' CAD models incorporating geometrical, radiological and material information
International Nuclear Information System (INIS)
Piotrowski, L.; Lubawy, J.L.
2001-01-01
EDF intends to achieve successful and cost-effective dismantling of its obsolete nuclear plants. To reach this goal, EDF is currently extending its ''as-built'' 3-D modelling system to also include the location and characteristics of gamma sources in the geometrical models of its nuclear installations. The resulting system (called RADMAP) is a complete CAD chain covering 3-D and gamma data acquisitions, CAD modelling and exploitation of the final model. Its aim is to describe completely the geometrical and radiological state of a particular nuclear environment. This paper presents an overall view of RADMAP. The technical and functional characteristics of each element of the chain are indicated and illustrated using real (EDF) environments/applications. (author)
Incorporation of the time aspect into the liability-threshold model for case-control-family data
DEFF Research Database (Denmark)
Cederkvist, Luise; Holst, Klaus K.; Andersen, Klaus K.
2017-01-01
... to estimates that are difficult to interpret and are potentially biased. We incorporate the time aspect into the liability-threshold model for case-control-family data, following the same approach that has been applied in the twin setting. Thus, the data are considered as arising from a competing risks setting... We investigate the approach using simulation studies and apply it in the analysis of two Danish register-based case-control-family studies: one on cancer diagnosed in childhood and adolescence, and one on early-onset breast cancer.
On Optimizing H. 264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics
Directory of Open Access Journals (Sweden)
Jiang Gangyi
2010-01-01
The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved in two respects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS), such as contrast sensitivity, multichannel theory, and the masking effect. Experiments are conducted, and the results show that the improved algorithm simultaneously enhances the overall subjective visual quality and improves the rate control precision.
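The quadratic R-D model used in JVT-G012-style rate control relates the bit budget R to the quantization step Q roughly as R = x1*MAD/Q + x2*MAD/Q^2, where MAD is the predicted residual complexity. A minimal sketch of solving this for Q (the coefficient values in the example are arbitrary, not the standard's):

```python
def qstep_from_target(target_bits, mad, x1, x2):
    """Solve the quadratic R-D model  R = x1*MAD/Q + x2*MAD/Q^2  for the
    quantization step Q, taking the positive root of
    R*Q^2 - x1*MAD*Q - x2*MAD = 0."""
    a = target_bits
    b = -x1 * mad
    c = -x2 * mad
    disc = b * b - 4.0 * a * c
    return (-b + disc ** 0.5) / (2.0 * a)

# With the quadratic term switched off (x2 = 0) the model collapses to
# the linear relation Q = x1*MAD/R.
q_linear = qstep_from_target(100.0, 10.0, 5.0, 0.0)
```

A rate controller would map the resulting Q to the nearest integer QP and update x1, x2 by regression after each coded frame.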
A qualitative comparison of fire spread models incorporating wind and slope effects
David R. Weise; Gregory S. Biging
1997-01-01
Wind velocity and slope are two critical variables that affect wildland fire rate of spread. The effects of these variables on rate of spread are often combined in rate-of-spread models using vector addition. The various methods used to combine wind and slope effects have seldom been validated or compared due to differences in the models or to lack of data. In this...
Lompar, Miloš; Ćurić, Mladjen; Romanic, Djordje
2017-09-01
Despite an important role the aerosols play in all stages of cloud lifecycle, their representation in numerical weather prediction models is often rather crude. This paper investigates the effects the explicit versus implicit inclusion of aerosols in a microphysics parameterization scheme in Weather Research and Forecasting (WRF) - Advanced Research WRF (WRF-ARW) model has on cloud dynamics and microphysics. The testbed selected for this study is a severe mesoscale convective system with supercells that struck west and central parts of Serbia in the afternoon of July 21, 2014. Numerical products of two model runs, i.e. one with aerosols explicitly (WRF-AE) included and another with aerosols implicitly (WRF-AI) assumed, are compared against precipitation measurements from surface network of rain gauges, as well as against radar and satellite observations. The WRF-AE model accurately captured the transportation of dust from the north Africa over the Mediterranean and to the Balkan region. On smaller scales, both models displaced the locations of clouds situated above west and central Serbia towards southeast and under-predicted the maximum values of composite radar reflectivity. Similar to satellite images, WRF-AE shows the mesoscale convective system as a merged cluster of cumulonimbus clouds. Both models over-predicted the precipitation amounts; WRF-AE over-predictions are particularly pronounced in the zones of light rain, while WRF-AI gave larger outliers. Unlike WRF-AI, the WRF-AE approach enables the modelling of time evolution and influx of aerosols into the cloud which could be of practical importance in weather forecasting and weather modification. Several likely causes for discrepancies between models and observations are discussed and prospects for further research in this field are outlined.
Luna, Byron Quan; Remaître, Alexandre; van Asch, Theo; Malet, Jean-Philippe; van Westen, Cees
2010-05-01
Estimating the magnitude and the intensity of rapid landslides like debris flows is fundamental to evaluating quantitatively the hazard at a specific location. Intensity varies along the travelled course of the flow and can be described by physical features such as deposited volume, velocities, height of the flow, impact forces and pressures. Dynamic run-out models are able to characterize the distribution of the material, its intensity and define the zone where the elements will experience an impact. These models can provide valuable inputs for vulnerability and risk calculations. However, most dynamic run-out models assume a constant volume during the motion of the flow, ignoring the important role of material entrained along its path. Consequently, they neglect that the increase of volume enhances the mobility of the flow and can significantly influence the size of the potential impact area. An appropriate erosion mechanism needs to be established in the analyses of debris flows to improve the results of dynamic modeling and, consequently, the quantitative evaluation of risk. The objective is to present and test a simple 1D debris flow model with a material entrainment concept based on limit equilibrium considerations and the generation of excess pore water pressure through undrained loading of the in situ bed material. The debris flow propagation model is based on a one-dimensional finite difference solution of a depth-averaged form of the Navier-Stokes equations of fluid motion. The flow is treated as a laminar one-phase material, whose behavior is controlled by a visco-plastic Coulomb-Bingham rheology. The model parameters are evaluated and the model performance is tested on a debris flow event that occurred in 2003 in the Faucon torrent (Southern French Alps).
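A common depth-averaged form of the Coulomb-Bingham resistance combines a frictional term, a yield stress, and a viscous term; the sketch below shows that closure only (parameter values and the exact form are illustrative assumptions, not the paper's calibrated rheology):

```python
def basal_resistance(h, u, rho=1800.0, g=9.81, tan_phi=0.1,
                     tau_y=100.0, mu=50.0):
    """Depth-averaged Coulomb-viscous (Bingham) basal resistance (Pa) for a
    flow of depth h (m) moving at mean velocity u (m/s):
    Coulomb friction + yield stress + viscous shear term."""
    coulomb = rho * g * h * tan_phi   # frictional, grows with flow depth
    viscous = 3.0 * mu * u / h        # Bingham viscous term for a parabolic profile
    return coulomb + tau_y + viscous

slow = basal_resistance(1.0, 1.0)
fast = basal_resistance(1.0, 2.0)
```

In the 1D finite-difference model this resistance enters the depth-averaged momentum balance as the retarding stress opposing the gravitational driving stress.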
Creating a process for incorporating epidemiological modelling into outbreak management decisions.
Akselrod, Hana; Mercon, Monica; Kirkeby Risoe, Petter; Schlegelmilch, Jeffrey; McGovern, Joanne; Bogucki, Sandy
2012-01-01
Modern computational models of infectious diseases greatly enhance our ability to understand new infectious threats and assess the effects of different interventions. The recently-released CDC Framework for Preventing Infectious Diseases calls for increased use of predictive modelling of epidemic emergence for public health preparedness. Currently, the utility of these technologies in preparedness and response to outbreaks is limited by gaps between modelling output and information requirements for incident management. The authors propose an operational structure that will facilitate integration of modelling capabilities into action planning for outbreak management, using the Incident Command System (ICS) and Synchronization Matrix framework. It is designed to be adaptable and scalable for use by state and local planners under the National Response Framework (NRF) and Emergency Support Function #8 (ESF-8). Specific epidemiological modelling requirements are described, and integrated with the core processes for public health emergency decision support. These methods can be used in checklist format to align prospective or real-time modelling output with anticipated decision points, and guide strategic situational assessments at the community level. It is anticipated that formalising these processes will facilitate translation of the CDC's policy guidance from theory to practice during public health emergencies involving infectious outbreaks.
International Nuclear Information System (INIS)
Stubbs, J.B.
1992-01-01
As part of the revision by the International Commission on Radiological Protection (ICRP) of its report on Reference Man, an extensive review of the literature regarding anatomy and morphology of the gastrointestinal (GI) tract has been completed. Data on age- and gender-dependent GI physiology and motility may be included in the proposed ICRP report. A new mathematical model describing the transit of substances through the GI tract as well as the absorption and secretion of material in the GI tract has been developed. This mathematical description of GI tract kinetics utilizes more physiologically accurate transit processes than the mathematically simple, but nonphysiological, GI tract model that was used in ICRP Report 30. The proposed model uses a combination of zero- and first-order kinetics to describe motility. Some of the physiological parameters that the new model accounts for include sex, age, pathophysiological condition and meal phase (solid versus liquid). A computer algorithm, written in BASIC, based on this new model has been derived and results are compared to those of the ICRP-30 model
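The combination of zero- and first-order kinetics for transit can be illustrated with a toy two-compartment sketch; the compartments, rate constants, and time scales below are invented for illustration and are not the proposed ICRP model:

```python
def gi_transit(meal=1.0, k_zero=0.02, k_first=0.05, dt=1.0, t_end=120.0):
    """Toy GI transit: zero-order gastric emptying of a solid meal into the
    small intestine (constant amount per minute, clipped so the stomach
    content cannot go negative), followed by first-order removal from the
    intestine. Returns (stomach, intestine) contents at t_end minutes."""
    stomach, intestine = meal, 0.0
    t = 0.0
    while t < t_end:
        emptied = min(k_zero * dt, stomach)   # zero-order emptying
        removed = k_first * intestine * dt    # first-order absorption/transit
        stomach -= emptied
        intestine += emptied - removed
        t += dt
    return stomach, intestine

stomach, intestine = gi_transit()
```

Swapping the zero-order emptying for a first-order one (e.g. for a liquid meal phase) is a one-line change, which is the kind of phase- and subject-dependent switching the model described above allows.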
Enhanced stability of car-following model upon incorporation of short-term driving memory
Liu, Da-Wei; Shi, Zhong-Ke; Ai, Wen-Huan
2017-06-01
Based on the full velocity difference model, a new car-following model is developed in this paper to investigate the effect of short-term driving memory on traffic flow. Short-term driving memory is introduced as an influence factor of the driver's anticipation behavior. The stability condition of the newly developed model is derived and the modified Korteweg-de Vries (mKdV) equation is constructed to describe the traffic behavior near the critical point. Using numerical methods, the evolution of a small perturbation is first investigated. The results show that the new model improves on previous car-following models by enhancing traffic stability. Starting and braking processes of vehicles at a signalized intersection are also investigated. The numerical simulations illustrate that the new model can successfully describe the driver's anticipation behavior, and that the efficiency and safety of the vehicles passing through the signalized intersection are improved by considering short-term driving memory.
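A full velocity difference acceleration law with a short-term memory term might be sketched as follows; the optimal-velocity function, the memory formulation, and the weight `gamma` are assumptions for illustration, not the paper's exact model:

```python
import math

def optimal_velocity(headway, v_max=30.0, h_c=25.0):
    """Standard optimal-velocity function of the headway (m -> m/s)."""
    return v_max * (math.tanh(headway / h_c - 1.0) + math.tanh(1.0)) / 2.0

def fvd_memory_accel(v, dv, headway, headway_past,
                     alpha=0.5, lam=0.3, gamma=0.2):
    """Full-velocity-difference acceleration augmented with a short-term
    memory term: the driver also reacts to how the headway has changed over
    a memory window (headway - headway_past). gamma is an assumed weight."""
    return (alpha * (optimal_velocity(headway) - v)   # relaxation to OV
            + lam * dv                                # velocity difference term
            + gamma * (headway - headway_past))       # memory / anticipation

# In uniform flow (v at the optimal velocity, no velocity difference,
# unchanged headway) the acceleration vanishes.
a_eq = fvd_memory_accel(optimal_velocity(25.0), 0.0, 25.0, 25.0)
```

A shrinking headway (headway below its remembered value) adds a decelerating contribution, which is the anticipation effect the paper attributes to driving memory.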
Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study
Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2013-04-01
The European Union Water Framework Directive (EU-WFD) called on its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties on the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and on the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, first, we identified independent rainfall periods, based on the daily precipitation and stream flow observations and using the Water Engineering Time Series PROcessing tool (WETSPRO). Secondly, we assigned a rainfall multiplier parameter to each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For parameter uncertainty assessment, due to the high number of parameters of the SWAT model, we first identified its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique
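The per-period rainfall multiplier idea, one multiplicative correction factor per independent rainfall period, treated as a latent calibration parameter, can be sketched as below (function name and the example series are hypothetical):

```python
def apply_rainfall_multipliers(rainfall, period_breaks, multipliers):
    """Scale a daily rainfall series with one multiplier per independent
    rainfall period. period_breaks holds the start index of each period
    after the first; multipliers has len(period_breaks) + 1 entries and
    acts as a multiplicative input-error corruption."""
    corrected = []
    for day, depth in enumerate(rainfall):
        period = sum(1 for b in period_breaks if day >= b)  # which period this day is in
        corrected.append(depth * multipliers[period])
    return corrected

# Two periods (days 0-1 and days 2-3), second period scaled up by 2x.
series = apply_rainfall_multipliers([1.0, 1.0, 1.0, 1.0], [2], [1.0, 2.0])
```

In the uncertainty analysis, the `multipliers` vector would be sampled jointly with the SWAT parameters, so rainfall input error propagates into the predictive intervals.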
Magnetic properties of Fe–Al for quenched diluted spin-1 Ising model
International Nuclear Information System (INIS)
Freitas, A.S.; Albuquerque, Douglas F. de; Fittipaldi, I.P.; Moreno, N.O.
2014-01-01
We study the phase diagram of Fe(1−q)Al(q) alloys via the quenched site-diluted spin-1 ferromagnetic Ising model by employing effective field theory. We suggest a new approach to the exchange interaction between nearest neighbors of Fe that depends on powers of the Al concentration q, instead of the linear dependence proposed in other papers. With the same kind of exchange interaction as proposed for iron–nickel alloys, the model gives an excellent theoretical description of the experimental T–q phase diagram for all Al concentrations q. - Highlights: • We apply the quenched spin-1 Ising model to study the properties of Fe–Al. • We employ the EFT and suggest a new approach to the ferromagnetic coupling. • A new probability distribution is considered. • The phase diagram is obtained for all values of q in the T–q plane.
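The qualitative effect of a power-law versus a linear concentration dependence of the exchange can be seen already at the mean-field level. The sketch below uses the simple mean-field estimate k_B T_c = (2/3) z J for a spin-1 Ising magnet, not the paper's effective field theory, and the exponent n, coupling J0 and coordination z are illustrative placeholders, not fitted values.

```python
import numpy as np

# Mean-field estimate for a site-diluted spin-1 Ising magnet: dilution by
# the Al fraction q removes magnetic sites (factor 1-q), and the exchange
# J(q) is taken either linear in (1-q) or as a power of it.
def tc_curve(q, J0=1.0, z=8, n=2, power_law=True):
    J = J0 * (1.0 - q) ** n if power_law else J0 * (1.0 - q)
    return (2.0 / 3.0) * z * (1.0 - q) * J   # k_B T_c in units of J0

q = np.linspace(0.0, 0.5, 6)
tc_lin = tc_curve(q, power_law=False)        # linear dependence
tc_pow = tc_curve(q, power_law=True)         # power-law dependence
```

The power-law coupling suppresses T_c faster with increasing q than the linear one, which is the kind of curvature in the T–q diagram the abstract alludes to.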
A statistical-thermodynamic model for ordering phenomena in thin film intermetallic structures
International Nuclear Information System (INIS)
Semenova, Olga; Krachler, Regina
2008-01-01
Ordering phenomena in bcc (110) binary thin film intermetallics are studied by a statistical-thermodynamic model. The system is modeled by an Ising approach that includes only nearest-neighbor chemical interactions and is solved in a mean-field approximation. Vacancies and anti-structure atoms are considered on both sublattices. The model describes long-range ordering and simultaneously short-range ordering in the thin film. It is applied to NiAl thin films with B2 structure. Vacancy concentrations, thermodynamic activity profiles and the virtual critical temperature of order-disorder as a function of film composition and thickness are presented. The results point to an important role of vacancies in near-stoichiometric and Ni-rich NiAl thin films.
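The mean-field treatment of long-range order can be illustrated with the classic Bragg-Williams fixed point. This sketch omits the vacancies, anti-structure atoms and film-thickness profiles that the paper tracks explicitly; it only shows how the order parameter closes on itself in a nearest-neighbor mean-field model.

```python
import math

# Bragg-Williams sketch of B2 long-range order: in the simplest
# nearest-neighbor mean-field Ising picture the order parameter eta
# satisfies eta = tanh(eta * T_c / T), solved here by fixed-point
# iteration at a given reduced temperature T/T_c.
def order_parameter(t_reduced, tol=1e-12, max_iter=100000):
    """Long-range order parameter at reduced temperature T/T_c."""
    eta = 1.0
    for _ in range(max_iter):
        new = math.tanh(eta / t_reduced)
        if abs(new - eta) < tol:
            return new
        eta = new
    return eta
```

Below T_c the iteration converges to a nonzero eta; above T_c it collapses to zero, which is the mean-field order-disorder transition the abstract's "virtual critical temperature" generalizes to finite films.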
International Nuclear Information System (INIS)
Lopez Carvajal, Jaime; Branch Bedoya, John Willian
2005-01-01
The automatic classification of objects is of interest in several problem domains. This paper outlines results obtained with different classification models used to categorize textural patterns of minerals in real digital images. The data set used was small and noisy. The implemented models were a Bayesian classifier, a neural network (2-5-1), a support vector machine, a decision tree and a 3-nearest-neighbors classifier. The results after applying cross-validation show that the Bayesian model (84%) had better predictive capacity than the others, mainly due to its robustness to noise. The neural network (68%) and the SVM (67%) gave promising results, as they could be improved by increasing the amount of data used, while the decision tree (55%) and K-NN (54%) did not seem adequate for this problem because of their sensitivity to noise.
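The k-NN-with-cross-validation protocol used in the comparison can be sketched as follows. The synthetic two-class data below merely stands in for the mineral texture features (which are not available here), and the fold count and k are the usual conventional choices, not the paper's exact configuration.

```python
import numpy as np

def knn_predict(X_tr, y_tr, X_te, k=3):
    # majority vote among the k nearest training points (Euclidean)
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in y_tr[nn]])

def cross_val_accuracy(X, y, n_folds=5, k=3, seed=0):
    # shuffle, split into folds, and average held-out accuracy
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    accs = []
    for fold in np.array_split(idx, n_folds):
        mask = np.ones(len(X), bool)
        mask[fold] = False
        pred = knn_predict(X[mask], y[mask], X[fold], k)
        accs.append(np.mean(pred == y[fold]))
    return float(np.mean(accs))

# noisy two-class toy data standing in for the texture features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1.5, (60, 4)), rng.normal(2, 1.5, (60, 4))])
y = np.repeat([0, 1], 60)
acc = cross_val_accuracy(X, y)
```

Raising the noise level in the toy data degrades the k-NN score quickly, mirroring the sensitivity to noise the abstract reports for K-NN and the decision tree.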
Degenerate and chiral states in the extended Heisenberg model on the kagome lattice
Gómez Albarracín, F. A.; Pujol, P.
2018-03-01
We present a study of the low-temperature phases of the antiferromagnetic extended classical Heisenberg model on the kagome lattice, up to third-nearest neighbors. First, we focus on the degenerate lines in the boundaries of the well-known staggered chiral phases. These boundaries have either semiextensive or extensive degeneracy, and we discuss the partial selection of states by thermal fluctuations. Then, we study the model under an external magnetic field on these lines and in the staggered chiral phases. We pay particular attention to the highly frustrated point, where the three exchange couplings are equal. We show that this point can be mapped to a model with spin-liquid behavior and nonzero chirality. Finally, we explore the effect of Dzyaloshinskii-Moriya (DM) interactions in two ways: a homogeneous and a staggered DM interaction. In both cases, there is a rich low-temperature phase diagram, with different spontaneously broken symmetries and nontrivial chiral phases.
Using recurrent neural network models for early detection of heart failure onset.
Choi, Edward; Schuetz, Andy; Stewart, Walter F; Sun, Jimeng
2017-03-01
We explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality. Data were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches. Using a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP). Deep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
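The GRU machinery behind the RNN model can be illustrated with a minimal forward pass. This is a didactic numpy stand-in with random weights, not the paper's trained model; the event-vector dimensions, sequence length and the sum-based read-out are all placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal numpy GRU cell using the standard update/reset-gate
    equations; weights are random, purely for illustration."""
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(n_hid)
        self.Wz = rng.uniform(-s, s, (n_hid, n_in + n_hid))  # update gate
        self.Wr = rng.uniform(-s, s, (n_hid, n_in + n_hid))  # reset gate
        self.Wh = rng.uniform(-s, s, (n_hid, n_in + n_hid))  # candidate

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                 # how much to update
        r = sigmoid(self.Wr @ xh)                 # how much past to expose
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde          # gated state mixture

# run a sequence of (hypothetical) time-stamped event vectors through
# the cell, the way EHR events would be fed in chronological order
cell = GRUCell(n_in=5, n_hid=8)
h = np.zeros(8)
for x in np.random.default_rng(2).random((12, 5)):
    h = cell.step(x, h)
risk = sigmoid(h.sum())       # toy read-out in place of a trained layer
```

The gating is what lets the model carry information across long gaps between events, which is the temporal structure the conventional baselines in the study ignore.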
International Nuclear Information System (INIS)
Xiong, J.J.; Shenoi, R.A.
2009-01-01
This paper outlines a new durability model to assess the first inspection and maintenance period for structures. Practical scatter factor formulae are presented to determine the safe fatigue crack initiation and propagation lives from the results of a single full-scale test of a complete structure. New theoretical solutions are proposed to determine the S_a-S_m-N surfaces of fatigue crack initiation and propagation. Prediction techniques are then developed to establish the relationship between safe fatigue crack initiation and propagation lives at a specific reliability level, using a two-stage fatigue damage cumulation rule. A new durability model incorporating safe-life and damage-tolerance design approaches is derived to assess the first inspection and maintenance period. Finally, the proposed models are applied to assess the first inspection and maintenance period of a fastening structure at the root of a helicopter blade.
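The scatter-factor idea of deriving a safe life from a single full-scale test can be sketched as follows. This is a textbook-style formula assuming log-normal fatigue lives, not necessarily the paper's exact scatter factor, and the standard deviation, reliability and confidence values are illustrative.

```python
import math
from statistics import NormalDist

def scatter_factor(sigma_log, reliability=0.999, confidence=0.95, n_tests=1):
    """Scatter factor under a log-normal fatigue-life assumption: the
    test life divided by this factor gives a safe life at the stated
    reliability and confidence, from n_tests full-scale tests."""
    u_r = NormalDist().inv_cdf(reliability)   # reliability quantile
    u_c = NormalDist().inv_cdf(confidence)    # confidence quantile
    return 10.0 ** (sigma_log * (u_r + u_c / math.sqrt(n_tests)))

def safe_life(test_life_hours, sigma_log=0.17, **kw):
    # safe life derived from a single full-scale test result
    return test_life_hours / scatter_factor(sigma_log, **kw)
```

In a two-stage scheme, a safe crack-initiation life computed this way would set the first inspection, with the safe propagation life pacing subsequent inspection intervals.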
International Nuclear Information System (INIS)
Ivascu, M.
1983-10-01
Computer codes incorporating advanced nuclear models (optical, statistical and pre-equilibrium decay nuclear reaction models) were used to calculate neutron cross sections needed for fusion reactor technology. The elastic and inelastic scattering, (n,2n), (n,p), (n,n'p), (n,d) and (n,γ) cross sections for the stable molybdenum isotopes Mo-92, 94, 95, 96, 97, 98 and 100, for incident neutron energies from about 100 keV (or from threshold) to 20 MeV, were calculated using a consistent set of input parameters. The hydrogen production cross section, which determines the radiation damage in structural materials of fusion reactors, can be simply deduced from the presented results. More elaborate microscopic models of the nuclear level density are required for high-accuracy calculations.
International Nuclear Information System (INIS)
Vasilev, V.; Doncheva, B.
1989-01-01
A model is presented for calculating the irradiation of the human foetus during weeks 8-15 of intrauterine development, when the mother chronically incorporates iodine-131. This period is critical for the nervous system of the foetus. Compared to some other authors' models, the proposed method eliminates some uncertainties and takes into account the changes in the activity of the mother's thyroid over time. The model is built on data from the I-131 kinetics of pregnant women and of experimental mice. A formula is proposed for calculating the total foetal irradiation, including: the internal γ and β irradiation; the external γ and β irradiation from the mother as a whole; and the external γ irradiation from the mother's thyroid.
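The structure of such a calculation can be sketched in MIRD style: integrate the maternal-thyroid activity over the critical window and convert the cumulated activity to foetal dose with an S value. All numbers below (effective half-life, intake rate, S value) are placeholders, not the paper's data, and only one of the three irradiation pathways is shown.

```python
import numpy as np

LAMBDA_EFF = np.log(2) / 7.3   # assumed effective half-life of I-131 in the
                               # maternal thyroid, ~7.3 days (placeholder)

def thyroid_activity(t_days, intake_bq_per_day):
    # chronic constant intake: activity builds up toward equilibrium
    return intake_bq_per_day / LAMBDA_EFF * (1.0 - np.exp(-LAMBDA_EFF * t_days))

def foetal_dose(intake_bq_per_day, t0=56.0, t1=105.0, s_value=1e-14, n=5000):
    """Cumulated maternal-thyroid activity over weeks 8-15 of gestation
    (roughly days 56-105), converted to foetal absorbed dose (Gy) with
    an assumed MIRD-style S value (Gy per Bq*s); all illustrative."""
    t, dt = np.linspace(t0, t1, n, retstep=True)
    a_cum_bq_s = thyroid_activity(t, intake_bq_per_day).sum() * dt * 86400.0
    return a_cum_bq_s * s_value
```

The full model of the abstract would add analogous terms for the whole-body external γ and β contributions and for the internal dose, each with its own S value.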