and Models: A Self-Similar Approach
Directory of Open Access Journals (Sweden)
José Antonio Belinchón
2013-01-01
equations (FEs) admit self-similar solutions. The methods employed allow us to obtain general results that are valid not only for the FRW metric, but also for all the Bianchi types as well as for the Kantowski-Sachs model (under the self-similarity hypothesis and the power-law hypothesis for the scale factors).
Similarity transformation approach to identifiability analysis of nonlinear compartmental models.
Vajda, S; Godfrey, K R; Rabitz, H
1989-04-01
Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.
A Model-Based Approach to Constructing Music Similarity Functions
Directory of Open Access Journals (Sweden)
Lamere Paul
2007-01-01
Several authors have presented systems that estimate the audio similarity of two pieces of music by calculating a distance metric, such as the Euclidean distance, between spectral features computed from the audio and related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features, such as zero-crossing rates, beat histograms, or fluctuation patterns, to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
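The distance-metric step described above can be sketched as follows (the feature vectors are invented stand-ins for spectral summaries such as mean MFCCs):

```python
import numpy as np

def timbre_distance(feat_a, feat_b):
    """Euclidean distance between two per-track feature vectors
    (e.g. mean MFCCs); smaller means more similar."""
    return float(np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b)))

# Hypothetical 4-dimensional timbre summaries for three tracks.
track_a = [0.2, 1.1, -0.3, 0.5]
track_b = [0.1, 1.0, -0.2, 0.6]   # close to track_a
track_c = [2.0, -1.0, 1.5, -0.8]  # far from track_a

assert timbre_distance(track_a, track_b) < timbre_distance(track_a, track_c)
```

Cultural labels would then be used to warp or re-weight this raw feature space, which is the contribution the abstract describes.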
Bianchi VI{sub 0} and III models: self-similar approach
Energy Technology Data Exchange (ETDEWEB)
Belinchon, Jose Antonio, E-mail: abelcal@ciccp.e [Departamento de Fisica, ETS Arquitectura, UPM, Av. Juan de Herrera 4, Madrid 28040 (Spain)
2009-09-07
We study several cosmological models with Bianchi VI{sub 0} and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive with respect to the equation of state. We also study a perfect fluid model with time-varying constants, G and LAMBDA. As in other models studied, we find that the behaviours of G and LAMBDA are related: if G behaves as a growing function of time then LAMBDA is a positive decreasing function of time, but if G is decreasing then LAMBDA{sub 0} is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.
Dedík, Ladislav; Durisová, Mária
2002-07-01
System-approach based modeling methods are used to model dynamic systems describing in vitro dissolutions of drug dosage formulations. Employing the models of these systems, model-dependent criteria are proposed for testing similarity between in vitro dissolutions of different drug dosage formulations. The criteria proposed are exemplified and compared with the criterion called the similarity factor f(2), commonly used in the field of biomedicine. Advantages of the criteria proposed over this factor are presented.
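For comparison, the similarity factor f2 mentioned above has a standard closed form; a minimal sketch (the dissolution profiles are invented):

```python
import math

def f2_similarity(reference, test):
    """Similarity factor f2 for two dissolution profiles given as percent
    dissolved at matched time points; f2 >= 50 is conventionally taken to
    indicate similar profiles."""
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n  # mean squared difference
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

ref  = [20, 40, 60, 80, 90]   # hypothetical % dissolved over time
test = [19, 41, 62, 79, 91]
assert f2_similarity(ref, test) > 50  # profiles judged similar
```

Identical profiles give f2 = 100, and increasing point-wise differences drive f2 down, which is the behaviour the model-dependent criteria in the paper are compared against.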
A New Approach to Satisfy Dynamic Similarity for Model Submarine Maneuvers
2007-11-28
scale jam recovery, a steady approach speed is required for RCM correlation maneuvers.
Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models
Directory of Open Access Journals (Sweden)
Jin Dai
2014-01-01
The similarity between objects is a core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish the conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative text concepts extracted from the same category are jumped up into a whole category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparison among different text classifiers over different feature selection sets shows that CCJU-TC not only adapts well to different text features but also outperforms traditional classifiers.
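For contrast with the cloud-model approach, a bare VSM baseline, assigning a text to the category of its most similar training document under cosine similarity, can be sketched as follows (documents and labels are invented):

```python
import math
from collections import Counter

def cosine_sim(doc_a, doc_b):
    """Cosine similarity between two bag-of-words term vectors."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def classify(text, labelled_docs):
    """Assign text to the category of its most similar training document."""
    return max(labelled_docs, key=lambda pair: cosine_sim(text, pair[0]))[1]

train = [("goals match league football", "sport"),
         ("stocks market shares trading", "finance")]
label = classify("the football match had three goals", train)
```

The cloud-model classifier replaces the crisp term vectors with normal cloud concepts, precisely to soften the hard word-overlap behaviour visible here.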
Directory of Open Access Journals (Sweden)
L. Boeckli
2012-01-01
explanatory variables are MAAT and PISR. The linear regression achieves a root mean square error (RMSE) of 1.6 °C. The final model combines the two sub-models and accounts for the different scales used for model calibration.
The modelling approach provides a theoretical basis for estimating mountain permafrost distribution over larger mountain ranges and can be expanded to more surface types and sub-models than considered here. The analyses performed with the Alpine data set further provide quantitative insight into larger-area patterns as well as the model coefficients for a later spatial application. The transfer into a map-based product, however, requires further steps such as the definition of offset terms that usually contain a degree of subjectivity.
Exact Solutions for Stokes' Flow of a Non-Newtonian Nanofluid Model: A Lie Similarity Approach
Aziz, Taha; Aziz, A.; Khalique, C. M.
2016-07-01
The fully developed time-dependent flow of an incompressible, thermodynamically compatible non-Newtonian third-grade nanofluid is investigated. The classical Stokes model is considered, in which the flow is generated by the motion of the plate in its own plane with an impulsive velocity. The Lie symmetry approach is utilised to convert the governing nonlinear partial differential equation into different linear and nonlinear ordinary differential equations. The reduced ordinary differential equations are then solved using the compatibility and generalised group methods. Exact solutions for the model equation are deduced in the form of closed-form exponential functions which were not previously available in the literature. In addition, we also derive the conservation laws associated with the governing model. Finally, the physical features of the pertinent parameters are discussed in detail through several graphs.
Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati
2017-04-01
The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for extended-range prediction of SW monsoon rainfall. The approach rests on the assumption that training with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. Here, the training dataset is selected by matching the present-day condition against an archived dataset: the days with the most similar conditions are identified and used to train the model, and the coefficients thus generated are used for rainfall prediction. Precipitation forecasts from four general circulation models (GCMs), viz. the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the National Centers for Environmental Prediction (NCEP) and the China Meteorological Administration (CMA), were used to develop the SMME forecasts. Forecasts for days 1-5, 6-10 and 11-15 were generated for each pentad of June-September during 2008-2013, and the skill of the model was analysed using verification scores, viz. the equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis shows that the SMME forecasts have superior skill compared to the conventional MME and the individual models for all forecast ranges, viz. 1-5, 6-10 and 11-15 days.
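The similarity-selected training step can be sketched as follows (the state vectors, the Euclidean similarity metric, and the least-squares combination of model forecasts are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def smme_weights(current_state, past_states, past_model_fcsts, past_obs, k=5):
    """Pick the k past days most similar to the current state (Euclidean
    distance) and fit least-squares weights for combining the member
    model forecasts on those analog days only."""
    d = np.linalg.norm(past_states - current_state, axis=1)
    idx = np.argsort(d)[:k]                        # k most similar days
    A, y = past_model_fcsts[idx], past_obs[idx]    # (k x n_models), (k,)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

rng = np.random.default_rng(0)
past_states = rng.normal(size=(50, 3))        # 50 archived days, 3 state variables
truth_w = np.array([0.7, 0.3])                # hidden "true" blend of two models
past_fcsts = rng.normal(size=(50, 2))
past_obs = past_fcsts @ truth_w               # synthetic observations
w = smme_weights(past_states[0], past_states, past_fcsts, past_obs, k=10)
```

Because the synthetic observations are an exact linear blend, the recovered weights match the hidden blend, illustrating what training on analog days is meant to achieve.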
Directory of Open Access Journals (Sweden)
PAULO COSTA
2016-12-01
Contemporaneously, and with the successive paradigmatic revolutions inherent to management since the 17th century, we are witnessing a new era marked by a structural rupture in the way organizations are perceived. Market globalization, cemented by quick technological evolutions and associated with economic, cultural, political and social transformations, characterizes a reality where uncertainty is the only certainty for organizations and managers. Knowledge management has been interpreted by managers and academics as a viable alternative in a logic of creation and conservation of sustainable competitive advantages. However, there are several barriers to the implementation and development of knowledge management programs in organizations, with organizational culture being one of the most preponderant. In this sense, in this article we analyze and compare the Knowledge Creation and Conversion Model proposed by Nonaka and Takeuchi (1995) and Quinn and Rohrbaugh's Competing Values Framework (1983), since both have convergent conceptual lines that can assist managers in different sectors to guide their organizations towards productivity, quality and market competitiveness.
Directory of Open Access Journals (Sweden)
Sergey B. Kuznetsov
2017-06-01
Objective: to obtain dimensionless criteria, economic indices characterizing the national economy and not depending on its size. Methods: mathematical modeling, theory of dimensions, processing of statistical data. Results: basing on differential equations describing the national economy with account of economic environment resistance, two dimensionless criteria are obtained which allow comparing economies regardless of their size. With the theory of dimensions we show that the obtained indices are not accidental. We demonstrate the implementation of the obtained dimensionless criteria for the analysis of the behavior of certain countries' economies. Scientific novelty: dimensionless criteria are obtained, economic indices which allow comparing economies regardless of their size and analyzing the dynamic changes in the economies over time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their size.
Notions of similarity for computational biology models
Waltemath, Dagmar
2016-03-21
Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY
Directory of Open Access Journals (Sweden)
Q. X. Xu
2012-08-01
Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects and/or concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. The fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that the correlations between these models are very high, possibly because all of them are designed to simulate the similarity judgement of the human mind.
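The feature model named above is usually traced to Tversky's contrast/ratio formulation over feature sets; a minimal sketch (the land-cover feature sets are invented):

```python
def tversky(a_feats, b_feats, alpha=0.5, beta=0.5):
    """Tversky's feature-contrast ratio model over feature sets;
    alpha = beta = 0.5 gives the symmetric Dice coefficient."""
    a, b = set(a_feats), set(b_feats)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

forest  = {"vegetated", "natural", "tree-covered"}
wetland = {"vegetated", "natural", "water-influenced"}
urban   = {"built-up", "impervious"}

assert tversky(forest, wetland) > tversky(forest, urban)
```

The geometric model, by contrast, would embed the same concepts as points in a metric space and use distance; the high inter-model correlations reported above suggest both encodings capture much of the same judgement.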
Similarity and Modeling in Science and Engineering
Kuneš, Josef
2012-01-01
The present text sets itself in relief to other titles on the subject in that it addresses the means and methodologies versus a narrow specific-task-oriented approach. Concepts and their developments which evolved to meet the changing needs of applications are addressed. This approach provides the reader with a general tool-box to apply to their specific needs. Two important tools are presented: dimensional analysis and the similarity analysis methods. The fundamental point of view, enabling one to sort all models, is that of information flux between a model and an original, expressed by similarity and abstraction. Each chapter includes original examples and applications. In this respect, the models can be divided into several groups. The following models are dealt with separately by chapter: mathematical and physical models, physical analogues, deterministic, stochastic, and cybernetic computer models. The mathematical models are divided into asymptotic and phenomenological models. The phenomenological m...
Similarity-Based Approaches to Natural Language Processing
Lee, L
1997-01-01
This thesis presents two similarity-based approaches to sparse data problems. The first approach is to build soft, hierarchical clusters: soft, because each event belongs to each cluster with some probability; hierarchical, because cluster centroids are iteratively split to model finer distinctions. Our second approach is a nearest-neighbor approach: instead of calculating a centroid for each class, as in the hierarchical clustering approach, we in essence build a cluster around each word. We compare several such nearest-neighbor approaches on a word sense disambiguation task and find that as a whole, their performance is far superior to that of standard methods. In another set of experiments, we show that using estimation techniques based on the nearest-neighbor model enables us to achieve perplexity reductions of more than 20 percent over standard techniques in the prediction of low-frequency events, and statistically significant speech recognition error-rate reduction.
MOST: most-similar ligand based approach to target prediction.
Huang, Tao; Mi, Hong; Lin, Cheng-Yuan; Zhao, Ling; Zhong, Linda L D; Liu, Feng-Bin; Zhang, Ge; Lu, Ai-Ping; Bian, Zhao-Xiang
2017-03-11
Many computational approaches have been used for target prediction, including machine learning, reverse docking, bioactivity spectra analysis, and chemical similarity searching. Recent studies have suggested that chemical similarity searching may be driven by the most-similar ligand. However, the extent of bioactivity of most-similar ligands has been oversimplified or even neglected in these studies, and this has impaired the prediction power. Here we propose the MOst-Similar ligand-based Target inference approach, namely MOST, which uses fingerprint similarity and the explicit bioactivity of the most-similar ligands to predict targets of the query compound. The performance of MOST was evaluated using combinations of different fingerprint schemes, machine learning methods, and bioactivity representations. In sevenfold cross-validation with a benchmark Ki dataset from ChEMBL release 19, containing 61,937 bioactivity data points for 173 human targets, MOST achieved high average prediction accuracy (0.95 for pKi ≥ 5, and 0.87 for pKi ≥ 6). The Morgan fingerprint was shown to be slightly better than FP2. Logistic Regression and Random Forest methods performed better than Naïve Bayes. In a temporal validation, the Ki dataset from ChEMBL19 was used to train models and predict the bioactivity of ligands newly deposited in ChEMBL20. MOST also performed well, with high accuracy (0.90 for pKi ≥ 5, and 0.76 for pKi ≥ 6), when Logistic Regression and the Morgan fingerprint were employed. Furthermore, the p values associated with explicit bioactivity were found to be a robust index for removing false positive predictions; implicit bioactivity did not offer this capability. Finally, p values generated with Logistic Regression, the Morgan fingerprint and explicit activity were integrated with a false discovery rate (FDR) control procedure to reduce false positives in the multiple-target prediction scenario, and the success of this strategy was demonstrated with a case study of fluanisone.
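The fingerprint-similarity core of such an approach can be sketched with set-based fingerprints and the Tanimoto coefficient (the fingerprint bit sets, target names and pKi values below are invented, and real pipelines would use a cheminformatics toolkit to compute Morgan fingerprints):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprint bit sets."""
    a, b = set(fp_a), set(fp_b)
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter) if (a or b) else 0.0

def most_similar_ligand(query_fp, ligands):
    """ligands: list of (fingerprint, target, pKi) records. Return the
    target and explicit activity of the single most similar ligand,
    as in a MOST-style lookup."""
    fp, target, pki = max(ligands, key=lambda lig: tanimoto(query_fp, lig[0]))
    return target, pki

ligands = [({1, 4, 9, 16}, "5-HT2A", 7.2),
           ({2, 3, 5, 7},  "D2",     6.1)]
target, pki = most_similar_ligand({1, 4, 9, 15}, ligands)
```

Carrying the most-similar ligand's explicit pKi forward, rather than a bare similarity score, is the point the abstract emphasises.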
Ahn, Kwang Woo; Kosoy, Michael; Chan, Kung-Sik
2014-06-01
We developed a two-strain susceptible-infected-recovered (SIR) model that provides a framework for inferring the cross-immunity between two strains of a bacterial species in the host population with discretely sampled co-infection time-series data. Moreover, the model accounts for seasonality in host reproduction. We illustrate an approach using a dataset describing co-infections by several strains of bacteria circulating within a population of cotton rats (Sigmodon hispidus). Bartonella strains were clustered into three genetically close groups, between which the divergence is correspondent to the accepted level of separate bacterial species. The proposed approach revealed no cross-immunity between genetic clusters while limited cross-immunity might exist between subgroups within the clusters. Copyright © 2014. Published by Elsevier B.V.
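The two-strain framework above generalises the textbook single-strain SIR system; a minimal forward-Euler sketch of that single-strain core (all rates and the step size are invented, and the paper's model adds a second strain, cross-immunity and seasonal birth terms):

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of a basic SIR model with population
    fractions s + i + r = 1, transmission rate beta, recovery rate gamma."""
    new_inf = beta * s * i * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 0.99, 0.01, 0.0
for _ in range(1000):               # integrate to t = 100
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.1, dt=0.1)

assert abs(s + i + r - 1.0) < 1e-9  # population is conserved
```

With R0 = beta/gamma = 5 the epidemic runs to completion, leaving most of the population in the recovered class.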
Integrated Semantic Similarity Model Based on Ontology
Institute of Scientific and Technical Information of China (English)
LIU Ya-Jun; ZHAO Yun
2004-01-01
To solve the problem of the inadequacy of semantic processing in intelligent question answering systems, an integrated semantic similarity model which calculates semantic similarity using geometric distance and information content is presented in this paper. With the help of the interrelationships between concepts, the information content of concepts, and the strength of the edges in the ontology network, we can calculate the semantic similarity between two concepts and provide information for the further calculation of the semantic similarity between a user's question and the answers in the knowledge base. Experiments on the prototype have shown that the semantic problem in natural language processing can also be solved with the help of the knowledge and the abundant semantic information in an ontology. More than 90% accuracy with less than 50 ms average search time has been reached in the ontology-based intelligent question answering prototype system. These results are very satisfactory.
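A standard way to combine ontology structure with information content is Lin's measure; a minimal sketch over an invented toy ontology (the paper's exact blend of geometric distance and information content may differ):

```python
import math

# Hypothetical ontology: child -> parent, with cumulative corpus counts
# (a concept's count subsumes the counts of its descendants).
PARENT = {"dog": "mammal", "cat": "mammal", "mammal": "animal", "animal": None}
COUNT  = {"dog": 10, "cat": 10, "mammal": 40, "animal": 100}
TOTAL  = 100  # count at the root

def ancestors(c):
    """Path from a concept up to the root, inclusive."""
    out = []
    while c is not None:
        out.append(c)
        c = PARENT[c]
    return out

def ic(c):
    """Information content: -log p(concept)."""
    return -math.log(COUNT[c] / TOTAL)

def lin_similarity(a, b):
    """Lin similarity: 2*IC(lcs) / (IC(a) + IC(b)), where lcs is the
    nearest common ancestor of a and b."""
    b_anc = set(ancestors(b))
    lcs = next(c for c in ancestors(a) if c in b_anc)
    return 2 * ic(lcs) / (ic(a) + ic(b))
```

Here `lin_similarity("dog", "cat")` is moderate because their nearest shared ancestor, `mammal`, is fairly informative, while any concept compared with itself scores 1.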
Personalized Predictive Modeling and Risk Factor Identification using Patient Similarity.
Ng, Kenney; Sun, Jimeng; Hu, Jianying; Wang, Fei
2015-01-01
Personalized predictive models are customized for an individual patient and trained using information from similar patients. Compared to global models trained on all patients, they have the potential to produce more accurate risk scores and capture more relevant risk factors for individual patients. This paper presents an approach for building personalized predictive models and generating personalized risk factor profiles. A locally supervised metric learning (LSML) similarity measure is trained for diabetes onset and used to find clinically similar patients. Personalized risk profiles are created by analyzing the parameters of the trained personalized logistic regression models. A 15,000-patient data set, derived from electronic health records, is used to evaluate the approach. The predictive results show that the personalized models can outperform the global model. Cluster analysis of the risk profiles shows groups of patients with similar risk factors, differences in the top risk factors for different groups of patients, and differences between the individual and global risk factors.
Apostol, Tom M. (Editor)
1990-01-01
In this 'Project Mathematics!' series, sponsored by the California Institute of Technology (Caltech), the mathematical concept of similarity is presented. The history of similarity and its real-life applications are discussed using actual film footage and computer animation. Terms used and various concepts of size, shape, ratio, area, and volume are demonstrated. The similarity of polygons, solids, congruent triangles, internal ratios, perimeters, and line segments is shown using the previously mentioned concepts.
New approach for distinguishing the similarity of links
Institute of Scientific and Technical Information of China (English)
Jianyun XIANG; Maozhong GE; Zhiping WANG
2008-01-01
Based on the problem of distinguishing the similarity of links in the regenerative innovation design of a kinematic chain, a new approach using the standard power matrix of the adjacency matrix is presented in this paper. The implementation of the approach is illustrated with an example. This method solves a technically baffling problem in mechanism type synthesis, reduces redundant design schemes, and raises the reliability and efficiency of the regenerative innovation design of the kinematic chain.
EVOLVING FRIENDSHIP NETWORKS - AN INDIVIDUAL-ORIENTED APPROACH IMPLEMENTING SIMILARITY
ZEGGELINK, E
1995-01-01
This article is an extension to Zeggelink (1994) which introduced the individual-oriented approach to model the evolution of networks. In this approach, the dynamics of friendship network structure are considered as a result of individual choices with regard to friendship relationships. Individuals
Bounding SAR ATR performance based on model similarity
Boshra, Michael; Bhanu, Bir
1999-08-01
Similarity between model targets plays a fundamental role in determining the performance of target recognition. We analyze the effect of model similarity on the performance of a vote-based approach for target recognition from SAR images. In such an approach, each model target is represented by a set of SAR views sampled at a variety of azimuth angles and a specific depression angle. Both model and data views are represented by locations of scattering centers, which are peak features. The model hypothesis (view of a specific target and associated location) corresponding to a given data view is chosen to be the one with the highest number of data-supported model features (votes). We address three issues in this paper. Firstly, we present a quantitative measure of the similarity between a pair of model views. Such a measure depends on the degree of structural overlap between the two views, and the amount of uncertainty. Secondly, we describe a similarity-based framework for predicting an upper bound on recognition performance in the presence of uncertainty, occlusion and clutter. Thirdly, we validate the proposed framework using MSTAR public data, which are obtained under different depression angles, configurations and articulations.
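The vote-counting step can be sketched as follows (the peak coordinates, the tolerance, and the target names are invented for illustration):

```python
def votes(data_peaks, model_peaks, tol=1.0):
    """Count model scattering centres that lie within `tol` (in pixels,
    per axis) of some data peak; the model view with most votes wins."""
    def close(p, q):
        return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol
    return sum(any(close(m, d) for d in data_peaks) for m in model_peaks)

def recognize(data_peaks, model_views):
    """model_views: dict mapping (target, azimuth) -> list of peaks."""
    return max(model_views, key=lambda v: votes(data_peaks, model_views[v]))

views = {("T72", 30):  [(10, 10), (12, 15), (20, 8)],
         ("BMP2", 30): [(40, 40), (42, 45), (50, 38)]}
data = [(10.4, 9.8), (12.2, 15.1), (33, 33)]
assert recognize(data, views) == ("T72", 30)
```

Two highly similar model views share many mutually supportable peaks, which is exactly why inter-model similarity bounds the achievable recognition performance.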
Directory of Open Access Journals (Sweden)
Kazi Abdur Rouf
2017-01-01
This paper compares and contrasts different training evaluation models and theories, and in particular applies Kirkpatrick and Kirkpatrick's (K&K) four-level training evaluation model to the Grameen Bank (GB) training program. The objective of the paper is to review the training evaluation models offered by Kirkpatrick and others, and their applications and usefulness to Grameen Bank and other microfinance institutions (MFIs). The paper uses a literature review to analyse the different models of training evaluation. Although how the K&K model measures training depends upon the training program and the demands of top management, including stakeholders and donors, its four-level evaluation steps are simple enough that Grameen Bank Bangladesh and other microfinance institutions can adopt them in the GB training evaluation process, which can assist GB and other micro-credit management processes in monitoring employees' performances.
CONDITIONAL SIMILARITY REDUCTION APPROACH:JIMBO-MIWA EQUATION
Institute of Scientific and Technical Information of China (English)
楼森岳; 唐晓艳
2001-01-01
The direct method developed by Clarkson and Kruskal (1989 J. Math. Phys. 30 2201) for finding the symmetry reductions of a nonlinear system is extended to find conditional similarity solutions. Applying the method to the Jimbo-Miwa (JM) equation, we find that three well-known (2+1)-dimensional models, the asymmetric Nizhnik-Novikov-Veselov equation, the breaking soliton equation and the Kadomtsev-Petviashvili equation, can all be obtained as conditional similarity reductions of the JM equation.
Anderson, Andrew James; Zinszer, Benjamin D; Raizada, Rajeev D S
2016-03-01
Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.
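The RSA side of the comparison can be sketched as follows (the patterns are synthetic, and plain Pearson correlation stands in for the rank correlation often used in practice):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of stimulus patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(model_patterns, neural_patterns):
    """Agreement between the upper triangles of the two RDMs."""
    iu = np.triu_indices(len(model_patterns), k=1)
    return float(np.corrcoef(rdm(model_patterns)[iu],
                             rdm(neural_patterns)[iu])[0, 1])

rng = np.random.default_rng(0)
model = rng.normal(size=(6, 20))   # 6 stimuli x 20 model features
neural = 3.0 * model + 0.5         # toy "voxels" preserving similarity structure
noise = rng.normal(size=(6, 50))   # unrelated activity patterns

assert rsa_score(model, neural) > rsa_score(model, noise)
```

Note that no weights are fitted anywhere: only the second-order structure of pairwise dissimilarities is compared, which is the property the proposed similarity-based encoding builds on.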
CELL FORMATION IN GROUP TECHNOLOGY: A SIMILARITY ORDER CLUSTERING APPROACH
Directory of Open Access Journals (Sweden)
Godfrey C. Onwubolu
2012-01-01
Grouping parts into families which can be produced by a cluster of machine cells is the cornerstone of cellular manufacturing, which in turn is the building block for flexible manufacturing systems. Cellular manufacturing is a group technology (GT) concept that has recently attracted the attention of manufacturing firms operating in a job-shop environment to consider redesigning their manufacturing systems, so as to take advantage of increased throughput and reductions in work-in-progress, set-up time, and lead times, leading to improved product quality and customer satisfaction. The paper presents a generalised approach for machine cell formation from a job shop, using a similarity order clustering technique for preliminary cell grouping and considering machine utilisation for the design of non-intergroup material handling using the single-pass heuristic. The work addresses the shortcomings of cellular manufacturing system designs and implementations which ignore machine utilisation, group sizes and intergroup moves.
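A closely related classic, King's rank order clustering, shows how a binary machine-part incidence matrix is reordered to expose cells (a simpler stand-in for the similarity order clustering technique used in the paper; the incidence matrix is invented):

```python
def rank_order_clustering(matrix):
    """King's rank order clustering on a binary machine-part incidence
    matrix: repeatedly sort rows, then columns, by the binary value of
    their bit patterns until the ordering is stable. Returns the row
    and column orderings, which expose block-diagonal cells."""
    rows = list(range(len(matrix)))
    cols = list(range(len(matrix[0])))
    while True:
        new_rows = sorted(rows, key=lambda r: [matrix[r][c] for c in cols],
                          reverse=True)
        new_cols = sorted(cols, key=lambda c: [matrix[r][c] for r in new_rows],
                          reverse=True)
        if new_rows == rows and new_cols == cols:
            return rows, cols
        rows, cols = new_rows, new_cols

incidence = [[0, 1, 0, 1],   # machine 0 makes parts 1, 3
             [1, 0, 1, 0],   # machine 1 makes parts 0, 2
             [0, 1, 0, 1],   # machine 2 makes parts 1, 3
             [1, 0, 1, 0]]   # machine 3 makes parts 0, 2
rows, cols = rank_order_clustering(incidence)
```

After reordering, machines 1 and 3 with parts 0 and 2 form one cell and machines 0 and 2 with parts 1 and 3 the other, with no intergroup moves in this idealised case.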
A novel similarity comparison approach for dynamic ECG series.
Yin, Hong; Zhu, Xiaoqian; Ma, Shaodong; Yang, Shuqiang; Chen, Liqian
2015-01-01
The heart sound signal is a reflection of heart and vascular system motion. Long-term continuous electrocardiogram (ECG) contains important information which can be helpful to prevent heart failure. A single piece of a long-term ECG recording usually consists of more than one hundred thousand data points in length, making it difficult to derive hidden features that may be reflected through dynamic ECG monitoring, which is also very time-consuming to analyze. In this paper, a Dynamic Time Warping based on MapReduce (MRDTW) is proposed to make prognoses of possible lesions in patients. Through comparison of a real-time ECG of a patient with the reference sets of normal and problematic cardiac waveforms, the experimental results reveal that our approach not only retains high accuracy, but also greatly improves the efficiency of the similarity measure in dynamic ECG series.
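A plain (non-MapReduce) dynamic time warping distance, the building block that MRDTW parallelises, can be sketched as:

```python
import math

def dtw_distance(x, y):
    """Classic O(n*m) dynamic-time-warping distance between two 1-D
    series, using absolute difference as the local cost."""
    n, m = len(x), len(y)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

beat    = [0, 2, 4, 2, 0]
shifted = [0, 0, 2, 4, 2, 0]   # same shape, slightly stretched in time
other   = [5, 1, 5, 1, 5]
assert dtw_distance(beat, shifted) == 0.0   # warping absorbs the stretch
```

For long-term ECG the quadratic cost of this table is exactly what motivates splitting the computation across a MapReduce cluster.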
Spectral similarity approach for mapping turbidity of an inland waterbody
Garg, Vaibhav; Senthil Kumar, A.; Aggarwal, S. P.; Kumar, Vinay; Dhote, Pankaj R.; Thakur, Praveen K.; Nikam, Bhaskar R.; Sambare, Rohit S.; Siddiqui, Asfa; Muduli, Pradipta R.; Rastogi, Gurdeep
2017-07-01
Turbidity is an important quality parameter of water from its optical property point of view. It varies spatio-temporally over large waterbodies, and well-distributed measurement of it in the field is tedious and time-consuming. Generally, normalized difference turbidity index (NDTI), band ratio, or regression analysis between turbidity concentration and band reflectance approaches have been adopted to retrieve turbidity using multispectral remote sensing data. These techniques usually provide qualitative rather than quantitative estimates of turbidity. In the present study, however, spectral similarity analysis between the spectral characteristics of spaceborne hyperspectral remote sensing data and a spectral library generated in the field was carried out to quantify turbidity in part of Chilika Lake, Odisha, India. The spatial-spectral contextual image analysis technique, spectral angle mapper (SAM), was evaluated for this purpose. The SAM spectral matching technique has been widely used in geological applications (mineral mapping); however, the application of such techniques in water quality studies is limited due to the non-availability of reference spectral libraries. A spectral library was generated in the field for different concentrations of turbidity using well-calibrated instruments such as a field spectroradiometer, a turbidity meter and a handheld global positioning system. The field spectra were classified into 7 classes of turbidity concentration (up to 100 NTU) for analysis. Analysis reveals that at each location in the lake under consideration, the field spectra matched the image spectra with a SAM score of 0.8 or more. The observed turbidity at each location also fell within the estimated turbidity class range. It was observed that the spectral similarity approach provides a more quantitative estimate of turbidity than NDTI.
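The SAM score used above is derived from the angle between an image spectrum and a reference (field) spectrum; a minimal sketch of that angle computation (the band values in the test are illustrative, not the study's):

```python
import math

def spectral_angle(r, t):
    """Angle (radians) between reference spectrum r and test spectrum t.

    Small angles mean similar spectral shape; the measure is insensitive
    to overall brightness (illumination) because scaling cancels out.
    """
    dot = sum(x * y for x, y in zip(r, t))
    nr = math.sqrt(sum(x * x for x in r))
    nt = math.sqrt(sum(y * y for y in t))
    # clamp to [-1, 1] to guard acos against floating-point round-off
    return math.acos(max(-1.0, min(1.0, dot / (nr * nt))))
```

A spectrum matched against a uniformly scaled copy of itself yields an angle of zero, which is the illumination invariance that makes SAM attractive for matching field and image spectra.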
Machine Learning Approaches for Predicting Protein Complex Similarity.
Farhoodi, Roshanak; Akbal-Delibas, Bahar; Haspel, Nurit
2017-01-01
Discriminating native-like structures from false positives with high accuracy is one of the biggest challenges in protein-protein docking. While there is agreement on the existence of a relationship between various favorable intermolecular interactions (e.g., van der Waals, electrostatic, and desolvation forces) and the similarity of a conformation to its native structure, the precise nature of this relationship is not known. Existing protein-protein docking methods typically formulate this relationship as a weighted sum of selected terms and calibrate their weights by using a training set to evaluate and rank candidate complexes. Despite improvements in the predictive power of recent docking methods, even state-of-the-art methods often produce a large number of false positives, leading to failure in predicting the correct binding of many complexes. With the aid of machine learning methods, we tested several approaches that not only rank candidate structures relative to each other but also predict how similar each candidate is to the native conformation. We trained a two-layer neural network, a multilayer neural network, and a network of Restricted Boltzmann Machines on extensive data sets of unbound complexes generated by RosettaDock and PyDock. We validated these methods with a set of refinement candidate structures. We were able to predict the root mean squared deviations (RMSDs) of protein complexes with a very small error margin, often less than 1.5 Å, when trained with structures that have RMSD values of up to 7 Å. In our most recent experiments with protein samples having RMSD values of up to 27 Å, the average prediction error was still relatively small, attesting to the potential of our approach in predicting the correct binding of protein-protein complexes.
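As an illustration of the regression idea (candidate features in, predicted RMSD out), here is a toy two-layer network trained by gradient descent on synthetic data; it stands in for, and is far simpler than, the networks trained on RosettaDock/PyDock decoys, and all feature names and targets are fabricated for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "energy term" features (e.g. vdW, electrostatics, desolvation)
# and a synthetic RMSD-like target; purely illustrative data
X = rng.normal(size=(200, 3))
y = 2.0 + X @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=200)

# two-layer network: 3 inputs -> 8 tanh hidden units -> 1 output
W1 = rng.normal(scale=0.1, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)
lr = 0.01

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, (H @ W2 + b2).ravel()

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)          # mean-squared error before training
for _ in range(2000):
    H, pred = forward(X)
    g = 2 * (pred - y)[:, None] / len(y)   # dLoss/dpred
    gW2 = H.T @ g; gb2 = g.sum(0)
    gH = (g @ W2.T) * (1 - H ** 2)         # backprop through tanh
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, pred1 = forward(X)
loss1 = np.mean((pred1 - y) ** 2)          # error after training
```

The point of the sketch is only the training loop shape: unlike a scoring function that merely ranks, the network outputs an absolute similarity estimate per candidate.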
Mazandu, Gaston K; Mulder, Nicola J
2013-01-01
Several approaches have been proposed for computing term information content (IC) and semantic similarity scores within the gene ontology (GO) directed acyclic graph (DAG). These approaches have contributed to improving protein analyses at the functional level. Considering the recent proliferation of these approaches, a unified theory in a well-defined mathematical framework is necessary in order to provide a theoretical basis for validating them. We review the existing IC-based ontological similarity approaches developed in the biomedical and bioinformatics fields to propose a general framework and unified description of all these measures. We have conducted an experimental evaluation to assess the impact of IC approaches, different normalization models, and correction factors on the performance of a functional similarity metric. Results reveal that considering only parents or only children of terms when assessing information content or semantic similarity scores negatively impacts the approach under consideration. This study produces a unified framework for current and future GO semantic similarity measures and provides a theoretical basis for comparing different approaches. The experimental evaluation of different approaches based on different term information content models paves the way towards a solution to the issue of scoring a term's specificity in the GO DAG. PMID:24078912
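A minimal sketch of the IC machinery such frameworks unify, using Lin's measure over toy annotation counts (the counts and term names are illustrative assumptions, not real GO data):

```python
import math

# toy annotation frequencies in a corpus; p(t) = freq(t) / freq(root)
counts = {"GO:root": 100, "GO:binding": 40, "GO:dna_binding": 5}

def information_content(term):
    """IC(t) = -log p(t): rarer (more specific) terms carry more information."""
    return -math.log(counts[term] / counts["GO:root"])

def lin_similarity(t1, t2, mica):
    """Lin's measure: 2*IC(MICA) / (IC(t1) + IC(t2)).

    `mica` is the most informative common ancestor of t1 and t2 in the DAG
    (assumed to be supplied by a DAG traversal not shown here).
    """
    denom = information_content(t1) + information_content(t2)
    return 2 * information_content(mica) / denom if denom else 0.0
```

The root has IC 0, a term compared with itself scores 1, and siblings score strictly between 0 and 1, which is the behavior the normalization models discussed above are meant to preserve.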
Conditional Similarity Reductions of Jimbo-Miwa Equation via the Classical Lie Group Approach
Institute of Scientific and Technical Information of China (English)
TANG Xiao-Yan; LIN Ji
2003-01-01
Recently, the Clarkson and Kruskal direct method has been modified to find new similarity reductions (conditional similarity reductions) of nonlinear systems, and the results obtained by the modified direct method cannot be obtained by the current classical and/or non-classical Lie group approaches. In this paper, we show that the conditional similarity reductions of the Jimbo-Miwa equation can be reobtained by first adding an additional constraint equation to the original model to form a conditional equation system, and then solving that system by means of the classical Lie group approach.
Similarity-based semi-local estimation of EMOS models
Lerch, Sebastian
2015-01-01
Weather forecasts are typically given in the form of forecast ensembles obtained from multiple runs of numerical weather prediction models with varying initial conditions and physics parameterizations. Such ensemble predictions tend to be biased and underdispersive and thus require statistical postprocessing. In the ensemble model output statistics (EMOS) approach, a probabilistic forecast is given by a single parametric distribution with parameters depending on the ensemble members. This article proposes two semi-local methods for estimating the EMOS coefficients where the training data for a specific observation station are augmented with corresponding forecast cases from stations with similar characteristics. Similarities between stations are determined using either distance functions or clustering based on various features of the climatology, forecast errors, ensemble predictions and locations of the observation stations. In a case study on wind speed over Europe with forecasts from the Grand Limited Area...
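The distance-based variant can be sketched as follows; the station features used here (climatology mean, mean ensemble error, altitude) are illustrative stand-ins for the features named in the article:

```python
import math

def standardize(rows):
    """Z-score each feature column so no single feature dominates distances."""
    cols = list(zip(*rows))
    mus = [sum(c) / len(c) for c in cols]
    sds = [max(1e-12, math.sqrt(sum((x - m) ** 2 for x in c) / len(c)))
           for c, m in zip(cols, mus)]
    return [[(x - m) / s for x, m, s in zip(r, mus, sds)] for r in rows]

def similar_stations(features, target, k=2):
    """Return the k stations most similar to `target` by Euclidean distance.

    features: dict mapping station name -> feature vector, e.g.
    [climatology mean, mean ensemble error, altitude] (assumed features).
    Training data from these stations would then augment the EMOS fit.
    """
    names = list(features)
    Z = standardize([features[n] for n in names])
    zt = Z[names.index(target)]
    dist = {n: math.dist(z, zt) for n, z in zip(names, Z)}
    return sorted((n for n in names if n != target), key=dist.get)[:k]
```

The returned neighbours supply the extra forecast cases for the semi-local EMOS coefficient estimation.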
Lie algebraic similarity transformed Hamiltonians for lattice model systems
Wahlen-Strothman, Jacob M.; Jiménez-Hoyos, Carlos A.; Henderson, Thomas M.; Scuseria, Gustavo E.
2015-01-01
We present a class of Lie algebraic similarity transformations generated by exponentials of two-body on-site Hermitian operators whose Hausdorff series can be summed exactly without truncation. The correlators are defined over the entire lattice and include the Gutzwiller factor ni ↑ni ↓ , and two-site products of density (ni ↑+ni ↓) and spin (ni ↑-ni ↓) operators. The resulting non-Hermitian many-body Hamiltonian can be solved in a biorthogonal mean-field approach with polynomial computational cost. The proposed similarity transformation generates locally weighted orbital transformations of the reference determinant. Although the energy of the model is unbounded, projective equations in the spirit of coupled cluster theory lead to well-defined solutions. The theory is tested on the one- and two-dimensional repulsive Hubbard model, where it yields accurate results for small and medium-sized interaction strengths.
Towards Modelling Variation in Music as Foundation for Similarity
Volk, A.; de Haas, W.B.; van Kranenburg, P.; Cambouropoulos, E.; Tsougras, C.; Mavromatis, P.; Pastiadis, K.
2012-01-01
This paper investigates the concept of variation in music from the perspective of music similarity. Music similarity is a central concept in Music Information Retrieval (MIR); however, no comprehensive approach to music similarity exists yet. As a consequence, MIR faces the challenge of how to
Salient object detection: manifold-based similarity adaptation approach
Zhou, Jingbo; Ren, Yongfeng; Yan, Yunyang; Gao, Shangbing
2014-11-01
A saliency detection algorithm based on manifold-based similarity adaptation is proposed. The proposed algorithm is divided into three steps. First, we segment an input image into superpixels, which are represented as the nodes of a graph. Second, a new similarity measurement is used: the weight matrix of the graph, which encodes the similarities between the nodes, is built with a similarity-based method that also captures the manifold structure of the image patches, with the graph edges determined in a data-adaptive manner in terms of both similarity and manifold structure. Then, we use a local reconstruction method as a diffusion method to obtain the saliency maps. The objective function in the proposed method is based on local reconstruction, with which the estimated weights capture the manifold structure. Experiments on four benchmark databases demonstrate the accuracy and robustness of the proposed method.
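A common way to realize such a similarity-based weight matrix is a Gaussian kernel over superpixel feature vectors on data-adaptive edges; a minimal sketch (the feature vectors, edge set, and σ below are illustrative, not the paper's exact construction):

```python
import math

def gaussian_weights(features, edges, sigma=0.5):
    """Symmetric graph weights w_ij = exp(-||f_i - f_j||^2 / (2 sigma^2)).

    features: node id -> feature vector (e.g. mean Lab color of a superpixel)
    edges: iterable of (i, j) node pairs chosen in a data-adaptive manner
    """
    W = {}
    for i, j in edges:
        d2 = sum((a - b) ** 2 for a, b in zip(features[i], features[j]))
        W[(i, j)] = W[(j, i)] = math.exp(-d2 / (2 * sigma ** 2))
    return W
```

Nodes with identical features receive weight 1, and the weight decays smoothly with feature distance, which is what lets a diffusion step propagate saliency along the manifold.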
A synthesis of similarity and eddy-viscosity models
Verstappen, R.; Friedrich, R; Geurts, BJ; Metais, O
2004-01-01
In large-eddy simulation, a low-pass spatial filter is usually applied to the Navier-Stokes equations. The resulting commutator of the filter and the nonlinear term is usually modelled by an eddy-viscosity model, by a similarity model or by a mix thereof. Similarity models possess the proper mathema
An optimization approach to the similarity criteria of flows and its application
Institute of Scientific and Technical Information of China (English)
(no author listed)
2006-01-01
In the present paper, we propose an optimization approach to investigate the similarity criteria of complex flows. With this approach, we can identify the dominant dimensionless variables governing complex flows through numerical sensitivity analysis. Firstly, we define the sensitivity factor and examine its dependence on the dimensionless variables. Then, we apply this approach to study the similarity criteria of porous-media flow in a presumed oil reservoir. The similarity principle obtained from the numerical sensitivity analysis is in agreement with the theoretical law, demonstrating the feasibility of the proposed optimization approach. Further explanation is given by analyzing the deviation of the pressure distribution in a model from that of its prototype. In addition, we examine the effects of flow parameter variation on the sensitivity factors and find that the dominant dimensionless variables may change for different sets of parameters.
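A sensitivity factor of this kind can be approximated numerically by perturbing each dimensionless variable in turn; a minimal finite-difference sketch (the normalized form below is an assumption for illustration, not necessarily the authors' exact definition):

```python
def sensitivity_factors(model, pi, h=1e-3):
    """Normalized sensitivities S_k = (pi_k / y) * dy/dpi_k via forward differences.

    model: callable mapping a list of dimensionless variables to a scalar output
    pi: baseline values of the dimensionless variables
    A large |S_k| marks pi_k as a dominant variable for similarity.
    """
    y = model(pi)
    S = []
    for k, v in enumerate(pi):
        up = list(pi)
        up[k] = v * (1 + h)          # relative perturbation of one variable
        S.append((model(up) - y) / (y * h))
    return S
```

For a power-law model y = pi_0^2 * pi_1, the factors come out close to the exponents 2 and 1, which is the sanity check used in the test.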
A Novel Approach For Syntactic Similarity Between Two Short Text
Directory of Open Access Journals (Sweden)
Anterpreet Kaur
2015-06-01
Full Text Available ABSTRACT Syntactic similarity is an important task in fields such as text-document data mining, natural language processing, and information retrieval. Natural language processing (NLP) gives machines the ability to work with text in natural languages such as English as well as in computer languages such as C. Web mining is used for tasks such as document clustering and community mining performed on the web. However, finding the similarity between two documents is a difficult task. With the increasing scope of NLP, techniques are therefore required for dealing with many aspects of language, in particular syntax, semantics and paradigms.
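A common baseline for short-text similarity of this kind is word-level Jaccard overlap; a minimal sketch (a generic baseline, not the specific method of the paper):

```python
def jaccard_similarity(s1, s2):
    """Word-level Jaccard overlap between two short texts, in [0, 1]."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    union = w1 | w2
    return len(w1 & w2) / len(union) if union else 1.0
```

Because it compares surface word sets only, this measure captures syntactic overlap but not semantics, which is exactly the gap the NLP techniques discussed above try to close.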
Modeling of Hysteresis in Piezoelectric Actuator Based on Segment Similarity
Directory of Open Access Journals (Sweden)
Rui Xiong
2015-11-01
Full Text Available To successfully exploit the full potential of piezoelectric actuators in micro/nano positioning systems, it is essential to model their hysteresis behavior accurately. A novel hysteresis model for piezoelectric actuators is proposed in this paper. Firstly, segment similarity, which describes the similarity relationship between hysteresis curve segments with different turning points, is proposed. Time-scale similarity, which describes the similarity relationship between hysteresis curves traversed at different rates, is used to address dynamic effects. The proposed model is formulated using these similarities. Finally, experiments are performed on a micro/nanometer movement platform system. The effectiveness of the proposed model is verified by comparison with the Preisach model. The experimental results show that the proposed model is able to precisely predict the hysteresis trajectories of piezoelectric actuators and performs better than the Preisach model.
Similarity between neonatal profile and socioeconomic index: a spatial approach
Directory of Open Access Journals (Sweden)
d'Orsi Eleonora
2005-01-01
Full Text Available This study aims to compare neonatal characteristics and socioeconomic conditions in Rio de Janeiro city neighborhoods in order to identify priority areas for intervention. The study design was ecological. Two databases were used: the Brazilian Population Census and the Live Birth Information System, aggregated by neighborhood. Spatial analysis, multivariate cluster classification, and Moran's I statistic for detecting spatial clustering were used. A similarity index was created to compare socioeconomic clusters with the neonatal profile in each neighborhood. The proportions of Apgar scores above 8 and of cesarean sections showed positive spatial correlation and high similarity with the socioeconomic index. The proportion of low birth weight infants showed a random spatial distribution, indicating that at this scale of analysis, birth weight is not sufficiently sensitive to discriminate subtler differences among population groups. The observed relationship between the neighborhoods' neonatal profile (particularly Apgar score and mode of delivery) and socioeconomic conditions shows evidence of a change in infant health profile, where the possibility for intervention shifts to medical services and the Apgar score assumes growing significance as a risk indicator.
An Emerge Approach in Inter Cluster Similarity for Quality Clusters
Directory of Open Access Journals (Sweden)
H. Venkateswara Reddy
2013-04-01
Full Text Available Measuring the relationship between datasets has been one of the most important issues in recent years. Recent methods are based mostly on numerical data, but they are not suitable for real-world data such as web pages and business transactions, which are known as categorical data. It is difficult to find relationships in categorical data. In this paper, a new approach is proposed for finding the relationship between categorical data, and hence the relationship between clusters. The main aim is to identify quality clusters based on the relationships between clusters: if there is no relationship between clusters, those clusters are treated as quality clusters.
Institute of Scientific and Technical Information of China (English)
卢达; 钱忆平; 谢铭培; 浦炜
2002-01-01
This paper presents a fuzzy logic approach to efficiently perform unsupervised character classification for improvement in the robustness, correctness and speed of a character recognition system. The characters are first split into eight typographical categories. The classification scheme uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. The fuzzy unsupervised character classification, which is natural in the representation of prototypes for character matching, is developed, and a weighted fuzzy similarity measure is explored. The characteristics of the fuzzy model are discussed and used to speed up the classification process. After classification, character recognition, which is applied to a smaller set of fuzzy prototypes, becomes much easier and less time-consuming.
An alternative approach to measure similarity between two deterministic transient signals
Shin, Kihong
2016-06-01
In many practical engineering applications, it is often required to measure the similarity of two signals to gain insight into the conditions of a system. For example, an application that monitors machinery can regularly measure the vibration signal and compare it to a healthy reference signal in order to monitor whether any fault symptom is developing. Also, in modal analysis, a frequency response function (FRF) from a finite element model (FEM) is often compared with an FRF from experimental modal analysis. Many different similarity measures are applicable in such cases; correlation-based measures are perhaps the most frequently used, such as the correlation coefficient in the time domain and the frequency response assurance criterion (FRAC) in the frequency domain. Although correlation-based similarity measures may be particularly useful for random signals because they are based on probability and statistics, we frequently deal with signals that are largely deterministic and transient. Thus, it may be useful to develop a similarity measure that properly takes the characteristics of deterministic transient signals into account. In this paper, an alternative approach to measuring the similarity between two deterministic transient signals is proposed. The newly proposed similarity measure is based on a fictitious-system frequency response function, and it consists of a magnitude similarity and a shape similarity. Finally, a few examples are presented to demonstrate the use of the proposed similarity measure.
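The FRAC mentioned above can be computed directly from two complex FRFs sampled at the same frequencies; a minimal sketch (this is the standard correlation-based criterion, not the paper's own magnitude/shape measure):

```python
def frac(H1, H2):
    """Frequency response assurance criterion between two complex FRF vectors.

    FRAC = |sum(conj(H1)*H2)|^2 / (sum|H1|^2 * sum|H2|^2), in [0, 1];
    1 means identical FRF shape up to a complex scale factor.
    """
    num = abs(sum(a.conjugate() * b for a, b in zip(H1, H2))) ** 2
    den = sum(abs(a) ** 2 for a in H1) * sum(abs(b) ** 2 for b in H2)
    return num / den
```

Note that FRAC is blind to an overall scale difference between the two FRFs, which is one motivation for a separate magnitude similarity in the proposed approach.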
Model Wind Turbines Tested at Full-Scale Similarity
Miller, M. A.; Kiefer, J.; Westergaard, C.; Hultmark, M.
2016-09-01
The enormous length scales associated with modern wind turbines complicate any efforts to predict their mechanical loads and performance. Both experiments and numerical simulations are constrained by the large Reynolds numbers governing the full-scale aerodynamics. The limited fundamental understanding of Reynolds number effects, in combination with the lack of empirical data, affects our ability to predict, model, and design improved turbines and wind farms. A new experimental approach is presented which utilizes a highly pressurized wind tunnel (up to 220 bar). It allows exact matching of the Reynolds number (no matter how it is defined), tip speed ratio, and Mach number on a geometrically similar, small-scale model. The design of a measurement and instrumentation stack to control the turbine and measure the loads in the pressurized environment is discussed. Results are then presented in the form of power coefficients as a function of Reynolds number and tip speed ratio. Due to gearbox power loss, a preliminary study has also been completed to determine the gearbox efficiency, and the resulting correction has been applied to the data set.
Self-Similar Symmetry Model and Cosmic Microwave Background
Directory of Open Access Journals (Sweden)
Tomohide Sonoda
2016-05-01
Full Text Available In this paper, we present the self-similar symmetry (SSS) model that describes the hierarchical structure of the universe. The model is based on the concept of self-similarity, which explains the symmetry of the cosmic microwave background (CMB). The approximate length and time scales of the six hierarchies of the universe (grand unification, electroweak unification, the atom, the pulsar, the solar system, and the galactic system) are derived from the SSS model. In addition, the model implies that the electron mass and gravitational constant could vary with the CMB radiation temperature.
Simulation and similarity using models to understand the world
Weisberg, Michael
2013-01-01
In the 1950s, John Reber convinced many Californians that the best way to solve the state's water shortage problem was to dam up the San Francisco Bay. Against massive political pressure, Reber's opponents persuaded lawmakers that doing so would lead to disaster. They did this not by empirical measurement alone, but also through the construction of a model. Simulation and Similarity explains why this was a good strategy while simultaneously providing an account of modeling and idealization in modern scientific practice. Michael Weisberg focuses on concrete, mathematical, and computational models in his consideration of the nature of models, the practice of modeling, and nature of the relationship between models and real-world phenomena. In addition to a careful analysis of physical, computational, and mathematical models, Simulation and Similarity offers a novel account of the model/world relationship. Breaking with the dominant tradition, which favors the analysis of this relation through logical notions suc...
Burridge-Knopoff model and self-similarity
Akishin, P G; Budnik, A D; Ivanov, V V; Antoniou, I
1997-01-01
Seismic processes are well known to be self-similar in both their spatial and temporal behavior. At the same time, the Burridge-Knopoff (BK) model of earthquake fault dynamics, one of the basic models of theoretical seismicity, does not possess self-similarity. In this article, an extension of the BK model is presented which directly accounts for the self-similarity of the earth crust's elastic properties by introducing nonlinear terms for the inter-block springs of the BK model. The phase-space analysis of the model has shown it to behave like a system of coupled randomly kicked oscillators. The nonlinear stiffness terms cause the synchronization of collective motion and produce stronger seismic events.
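The stick-slip mechanism at the heart of the BK model can be illustrated with a toy one-block spring-slider (a drastic reduction of the block chain, with a linear spring and constant friction levels; it does not include the nonlinear stiffness extension described above, and all parameter values are arbitrary):

```python
def spring_slider(steps=20000, dt=1e-3, k=1.0, m=1.0, v=0.5, Fs=1.0, Fd=0.5):
    """Count slip events of one block dragged through a spring by a plate.

    The block sticks until the spring force exceeds static friction Fs,
    then slips against dynamic friction Fd (< Fs) until it stops again.
    Each slip episode is the toy analogue of one seismic event.
    """
    x, u, load = 0.0, 0.0, 0.0      # block position, velocity, plate position
    sticking, events = True, 0
    for _ in range(steps):
        load += v * dt              # plate advances at constant speed
        F = k * (load - x)          # spring force on the block
        if sticking:
            if F > Fs:              # static friction exceeded: slip begins
                sticking = False
                events += 1
        else:
            a = (F - Fd) / m        # friction opposes forward slip (u >= 0 here)
            u += a * dt
            x += u * dt
            if u <= 0.0:            # block has stopped: re-stick
                u, sticking = 0.0, True
    return events
```

With these parameters the simulation produces a few recurring stick-slip cycles; the full BK chain couples many such blocks through inter-block springs, which is where the collective, earthquake-like statistics come from.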
Similar Constructive Method for Solving a nonlinearly Spherical Percolation Model
Directory of Open Access Journals (Sweden)
WANG Yong
2013-01-01
Full Text Available In view of the nonlinear spherical percolation problem of a dual-porosity reservoir, a mathematical model considering three types of outer boundary conditions (closed, constant pressure, and infinite) was established in this paper. The mathematical model was linearized by a change of variable and, by Laplace transformation, became a boundary value problem of an ordinary differential equation in Laplace space. It was verified that such boundary value problems with each type of outer boundary have a similar structure of solution, and a new method, the Similar Constructive Method, was obtained for solving such boundary value problems. By this method, solutions with similar structure for the other two outer boundary conditions were obtained. The Similar Constructive Method raises the efficiency of solving such percolation models.
Losada, David E.; Barreiro, Alvaro
2003-01-01
Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…
A topic clustering approach to finding similar questions from large question and answer archives.
Directory of Open Access Journals (Sweden)
Wei-Nan Zhang
Full Text Available With the blooming of Web 2.0, Community Question Answering (CQA) services such as Yahoo! Answers (http://answers.yahoo.com), WikiAnswer (http://wiki.answers.com), and Baidu Zhidao (http://zhidao.baidu.com) have emerged as alternatives for knowledge and information acquisition. Over time, a large number of question and answer (Q&A) pairs of high quality contributed by human intelligence have been accumulated as a comprehensive knowledge base. Unlike search engines, which return long lists of results, searching in CQA services can obtain correct answers to question queries by automatically finding similar questions that have already been answered by other users. Hence, it greatly improves the efficiency of online information retrieval. However, given a question query, finding similar and well-answered questions is a non-trivial task. The main challenge is the word mismatch between the question query (query) and the candidate question for retrieval (question). To investigate this problem, in this study we capture the word-level semantic similarity between query and question by introducing a topic modeling approach. We then propose an unsupervised machine-learning approach to finding similar questions in CQA Q&A archives. The experimental results show that our proposed approach significantly outperforms state-of-the-art methods.
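Question retrieval of this kind reduces to ranking archived questions by a similarity score; a bag-of-words cosine baseline is sketched below (the paper's topic-model smoothing, which addresses the word-mismatch problem, is not reproduced):

```python
import math
from collections import Counter

def cosine_sim(q1, q2):
    """Bag-of-words cosine similarity between two questions, in [0, 1]."""
    c1, c2 = Counter(q1.lower().split()), Counter(q2.lower().split())
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def most_similar(query, questions):
    """Return the archived question that best matches the query."""
    return max(questions, key=lambda q: cosine_sim(query, q))
```

A pure surface-form measure like this scores zero for paraphrases with no shared words, which is exactly where projecting both texts into a shared topic space helps.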
Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano
2016-07-01
The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and simulation results compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue not to use
Thomson, Ron I; Nearey, Terrance M; Derwing, Tracey M
2009-09-01
This study describes a statistical approach to measuring crosslinguistic vowel similarity and assesses its efficacy in predicting L2 learner behavior. In the first experiment, using linear discriminant analysis, relevant acoustic variables from vowel productions of L1 Mandarin and L1 English speakers were used to train a statistical pattern recognition model that simultaneously comprised both Mandarin and English vowel categories. The resulting model was then used to determine what categories novel Mandarin and English vowel productions most resembled. The extent to which novel cases were classified as members of a competing language category provided a means for assessing the crosslinguistic similarity of Mandarin and English vowels. In a second experiment, L2 English learners imitated English vowels produced by a native speaker of English. The statistically defined similarity between Mandarin and English vowels quite accurately predicted L2 learner behavior; the English vowel elicitation stimuli deemed most similar to Mandarin vowels were more likely to elicit L2 productions that were recognized as a Mandarin category; English stimuli that were less similar to Mandarin vowels were more likely to elicit L2 productions that were recognized as new or emerging categories.
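The pooled-covariance form of linear discriminant analysis used for such crosslinguistic classification can be sketched on toy two-formant data (the prototype formant values, noise scale, and feature set below are illustrative assumptions, not the study's measurements):

```python
import numpy as np

def fit_lda(X, y):
    """Pooled-covariance linear discriminant classifier with equal priors."""
    y = np.asarray(y)
    classes = sorted(set(y.tolist()))
    means = {c: X[y == c].mean(axis=0) for c in classes}
    d = X.shape[1]
    S = np.zeros((d, d))
    for c in classes:                      # pooled within-class scatter
        Xc = X[y == c] - means[c]
        S += Xc.T @ Xc
    S /= (len(X) - len(classes))
    Sinv = np.linalg.inv(S)

    def predict(x):
        # linear discriminant score per class; highest score wins
        scores = {c: x @ Sinv @ m - 0.5 * (m @ Sinv @ m)
                  for c, m in means.items()}
        return max(scores, key=scores.get)
    return predict

# toy "vowel" data: two classes scattered around (F1, F2) prototypes in Hz
rng = np.random.default_rng(0)
protos = {"i": np.array([300.0, 2200.0]), "a": np.array([800.0, 1200.0])}
X = np.vstack([protos[c] + rng.normal(scale=20.0, size=(10, 2))
               for c in ("i", "a")])
y = ["i"] * 10 + ["a"] * 10
predict = fit_lda(X, y)
```

Training the model on categories from both languages at once, as in the study, lets the proportion of cross-language classifications serve as the similarity measure.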
Morphological similarities between DBM and a microeconomic model of sprawl
Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle
2011-03-01
We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and the road network self-organises, emerging from the agents' individual decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that they can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.
2.5-dimensional solution of the advective accretion disk: a self-similar approach
Institute of Scientific and Technical Information of China (English)
Shubhrangshu Ghosh; Banibrata Mukhopadhyay
2009-01-01
We provide a 2.5-dimensional solution to a complete set of viscous hydrodynamical equations describing accretion-induced outflows and plausible jets around black holes/compact objects. We prescribe a self-consistent advective disk-outflow coupling model which explicitly includes the information of vertical flux. The inter-connecting dynamics of the inflow-outflow system essentially upholds the conservation laws. We provide a family of analytical solutions through a self-similar approach. The flow parameters of the disk-outflow system depend strongly on the viscosity parameter α and the cooling factor f.
A Fuzzy Similarity Based Concept Mining Model for Text Classification
Puri, Shalini
2012-01-01
Text classification is a challenging and rapidly growing field with great importance in text categorization applications. A lot of research work has been done in this field, but there is still a need to categorize a collection of text documents into mutually exclusive categories by extracting the concepts or features using a supervised learning paradigm and different classification algorithms. In this paper, a new Fuzzy Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set of text documents into pre-defined Category Groups (CG) by training them at the sentence, document and integrated corpora levels, with feature reduction and ambiguity removal at each level to achieve high system performance. A Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted feature of the Integrated Corpora Feature Vector (ICFV) against the corresponding categories or classes. The model uses a Support Vector Machine Classifier (SVMC) to classify correctly the training data patterns into two groups, i.e., +1 and −1, thereby producing accurate and correct results.
APPLICABILITY OF SIMILARITY CONDITIONS TO ANALOGUE MODELLING OF TECTONIC STRUCTURES
Directory of Open Access Journals (Sweden)
Mikhail A. Goncharov
2015-09-01
The publication is aimed at comparing the concepts of V.V. Belousov and M.V. Gzovsky, outstanding researchers who established the fundamentals of tectonophysics in Russia, specifically similarity conditions as applied to tectonophysical modelling. Quotations from their publications illustrate the differences in their views. In this respect, we can reckon V.V. Belousov a «realist», as he supported «the liberal point of view» [Methods of modelling…, 1988, p. 21–22], whereas M.V. Gzovsky can be regarded as an «idealist», as he believed that similarity conditions should be mandatorily applied to ensure correctness of physical modelling of tectonic deformations and structures [Gzovsky, 1975, pp. 88 and 94]. The objectives of the present publication are (1) to be another reminder about the desirability of compliance with similarity conditions in experimental tectonics; (2) to point out difficulties in ensuring such compliance; (3) to give examples which bring out the fact that similarity conditions are often met per se, i.e. automatically observed; and (4) to show that modelling can be simplified in some cases without compromising quantitative estimations of the parameters of structure formation. (1) Physical modelling of tectonic deformations and structures should be conducted, if possible, in compliance with conditions of geometric and physical similarity between experimental models and the corresponding natural objects. In any case, a researcher should have a clear vision of the conditions applicable to each particular experiment. (2) Application of similarity conditions is often challenging due to unavoidable difficulties caused by the following: (a) imperfection of experimental equipment and technologies (Fig. 1 to 3); (b) uncertainties in estimating the parameters of formation of natural structures, including the main ones: structure size (Fig. 4), time of formation (Fig. 5), and deformation properties of the medium wherein such structures are formed, including, first of all, viscosity (Fig. 6)
Self-similar two-particle separation model
DEFF Research Database (Denmark)
Lüthi, Beat; Berg, Jacob; Ott, Søren
2007-01-01
We present a new stochastic model for relative two-particle separation in turbulence. Inspired by material line stretching, we suggest that a similar process also occurs beyond the viscous range, with time scaling according to the longitudinal second-order structure function S2(r), e.g. in the inertial range as ε^(−1/3) r^(2/3). Particle separation is modeled as a Gaussian process without invoking information on Eulerian acceleration statistics or the precise shapes of Eulerian velocity distribution functions. The time scale is a function of S2(r) and thus of the Lagrangian evolving separation. The model predictions agree with numerical and experimental results for various initial particle separations. We present model results for fixed-time and fixed-scale statistics. We find that for the Richardson-Obukhov law, i.e., ⟨r²⟩ = g ε t³, to hold and also be observed in experiments, high Reynolds numbers…
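A crude numerical sketch of such a separation process is easy to write down. The snippet below integrates dr = √(S2(r)) dW with the inertial-range scaling S2(r) ∝ (εr)^(2/3); the constants, step size, and reflecting boundary are simplifying assumptions of this sketch, not the model above, but the diffusion it produces grows like Richardson-Obukhov ⟨r²⟩ ∝ t³.

```python
# Euler-Maruyama integration of a toy separation process whose diffusion
# time scale follows the inertial-range structure function S2(r) ~ (eps*r)^(2/3).
import numpy as np

rng = np.random.default_rng(4)
eps, dt, n_steps, n_pairs = 1.0, 1e-3, 5000, 500
r = np.full(n_pairs, 0.01)                 # initial pair separations

for _ in range(n_steps):
    s2 = (eps * r) ** (2.0 / 3.0)          # longitudinal structure function
    r = np.abs(r + np.sqrt(s2 * dt) * rng.normal(size=n_pairs))  # reflect at 0

print(f"mean separation grew from 0.01 to {r.mean():.3f}")
```

Because d⟨r²⟩/dt ∝ ⟨r^(2/3)⟩ for this process, the ensemble spreads super-diffusively, consistent with the t³ law mentioned in the abstract.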
Detection of synchronization between chaotic signals: An adaptive similarity-based approach
Chen, Shyan-Shiou; Chen, Li-Fen; Wu, Yu-Te; Wu, Yu-Zu; Lee, Po-Lei; Yeh, Tzu-Chen; Hsieh, Jen-Chuen
2007-12-01
We present an adaptive similarity-based approach to detect generalized synchronization (GS) with n:m phase synchronization (PS), where n and m are integers and one of them is 1. This approach is based on the similarity index (SI) and a Gaussian mixture model with the minimum description length criterion. The clustering method, which is shown to be superior to the closeness and connectivity of a continuous function, is employed in this study to detect the existence of GS with n:m PS. We conducted a computer simulation and a finger-lifting experiment to illustrate the effectiveness of the proposed method. In the simulation of a Rössler-Lorenz system, our method outperformed the conventional SI, and GS with 2:1 PS within the coupled system was found. In the experiment of self-paced finger-lifting movement, cortico-muscular GS with 1:2 and 1:3 PS was found between the surface electromyogram signals on the first dorsal interossei muscle and the magnetoencephalographic data in the motor area. The GS with n:m PS (n or m = 1) has been simultaneously resolved from both simulation and experiment. The proposed approach thereby provides a promising means for advancing research into both nonlinear dynamics and brain science.
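The model-selection step, fitting Gaussian mixtures of increasing size and keeping the one preferred by an MDL-style criterion, can be sketched with scikit-learn. BIC is used here as the MDL-like criterion, and the 1-D similarity-index values are synthetic; neither is the paper's exact construction.

```python
# Sketch: choose the number of mixture components for similarity-index values
# by minimizing BIC, an MDL-style model-selection criterion.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two well-separated 1-D clusters of hypothetical similarity-index values.
x = np.concatenate([rng.normal(0.2, 0.02, 200),
                    rng.normal(0.8, 0.02, 200)]).reshape(-1, 1)

bics = {k: GaussianMixture(k, random_state=0).fit(x).bic(x) for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)
print("selected number of components:", best_k)
```

Detecting two clusters of SI values is what signals distinct synchronization regimes in this kind of pipeline.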
RNA and protein 3D structure modeling: similarities and differences.
Rother, Kristian; Rother, Magdalena; Boniecki, Michał; Puton, Tomasz; Bujnicki, Janusz M
2011-09-01
In analogy to proteins, the function of RNA depends on its structure and dynamics, which are encoded in the linear sequence. While there are numerous methods for computational prediction of protein 3D structure from sequence, there have been very few such methods for RNA. This review discusses template-based and template-free approaches for macromolecular structure prediction, with special emphasis on comparison between the already tried-and-tested methods for protein structure modeling and the very recently developed "protein-like" modeling methods for RNA. We highlight analogies between many successful methods for modeling of these two types of biological macromolecules and argue that RNA 3D structure can be modeled using "protein-like" methodology. We also highlight the areas where the differences between RNA and proteins require the development of RNA-specific solutions.
A framework for similarity recognition of CAD models
Directory of Open Access Journals (Sweden)
Leila Zehtaban
2016-07-01
A designer is mainly supported by two essential factors in design decisions: intelligence and experience, which aid the designer by predicting the interconnection between the required design parameters. Through classification of product data and similarity recognition between new and existing designs, it is partially possible to substitute for the experience an inexperienced designer lacks. In this context, the current paper presents a framework for recognition and flexible retrieval of similar models in product design. The idea is to establish an infrastructure for transferring design know-how, as well as the required PLM (Product Lifecycle Management) knowledge, to the design phase of product development in order to reduce design time. Furthermore, the method can also serve as a brainstorming aid for new and creative product development. The proposed framework has been tested and benchmarked, showing promising results.
Robust hashing with local models for approximate similarity search.
Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang
2014-07-01
Similarity search plays an important role in many applications involving high-dimensional data. Due to the well-known curse of dimensionality, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method employing ℓ2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets that have hash codes similar to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
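The bucket-lookup search pattern the abstract describes is easiest to see in the randomized baseline it contrasts with. Below is a plain random-hyperplane LSH sketch, not RHLM itself: hash codes come from signs of random projections, and a query only probes the bucket sharing its code.

```python
# Random-hyperplane LSH baseline: sign-of-projection hash codes and
# bucket lookup for approximate similarity search.
import numpy as np

rng = np.random.default_rng(2)
dim, n_bits = 32, 16
planes = rng.normal(size=(n_bits, dim))          # random hyperplanes

def hash_code(v):
    return tuple((planes @ v > 0).astype(int))   # 16-bit sign code

data = rng.normal(size=(1000, dim))
buckets = {}
for i, v in enumerate(data):
    buckets.setdefault(hash_code(v), []).append(i)

query = data[42]                                 # query identical to item 42
candidates = buckets[hash_code(query)]
print(42 in candidates)                          # → True
```

Near-duplicate queries usually land in the same bucket as well, since only a sign flip on one of the 16 projections can separate them; RHLM replaces the random planes with hash functions learned from local structure.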
Self-similar infall models for cold dark matter haloes
Le Delliou, Morgan Patrick
2002-04-01
How can we understand the mechanisms of relaxation and the constitution of the density profile in CDM halo formation? Can the old Self-Similar Infall Model (SSIM) be made to contain all the elements essential for this understanding? In this work, we have explored and improved the SSIM, showing it can at once explain large N-body simulations and indirect observations of real haloes alike. With the use of a carefully-crafted simple shell code, we have followed the accretion of secondary infalls in different settings, ranging from a model for mergers to a distribution of angular momentum for the shells, through the modelling of a central black hole. We did not assume self-similar accretion from initial conditions but allowed it to develop, and used coordinates that make it evident. We found self-similar accretion to appear very prominently in CDM halo formation as an intermediate stable (quasi-equilibrium) stage of large-scale structure formation. Dark matter halo density profiles are shown to be primarily influenced by non-radial motion. The merger paradigm reveals itself through the SSIM to be a secondary but non-trivial factor in those density profiles: it drives the halo profile towards a unique attractor, but the main factor for universality is still the self-similarity. The innermost density-cusp flattening observed in some dwarf and Low Surface Brightness galaxies finds a natural and simple explanation in the SSIM embedding a central black hole. Relaxation in cold collisionless collapse is clarified by the SSIM: it is a continuous process involving only the newly-accreted particles for just a few dynamical times. Not all memory of initial energy is lost, so relaxation is only moderately violent. A sharp cut-off, or population inversion, originates in initial conditions and is maintained through relaxation; it characterises moderately violent relaxation in the system's distribution function. Finally, the SSIM has shown this relaxation to arise from phase…
Meeting your match: How attractiveness similarity affects approach behavior in mixed-sex dyads
Straaten, I. van; Engels, R.C.M.E.; Finkenauer, C.; Holland, R.W.
2009-01-01
This experimental study investigated approach behavior toward opposite-sex others of similar versus dissimilar physical attractiveness. Furthermore, it tested the moderating effects of sex. Single participants interacted with confederates of high and low attractiveness. Observers rated their behavior…
Extending the similarity-based XML multicast approach with digital signatures
DEFF Research Database (Denmark)
Azzini, Antonia; Marrara, Stefania; Jensen, Meiko
2009-01-01
This paper investigates the interplay between similarity-based SOAP message aggregation and digital signature application. An overview of the approaches resulting from the different orders for the tasks of signature application, verification, similarity aggregation and splitting is provided. Depending on the intersection between similarity-aggregated and signed SOAP message parts, the paper discusses three different cases of signature application and sketches their applicability and performance implications. Copyright 2009 ACM.
Multiple Model Approaches to Modelling and Control
DEFF Research Database (Denmark)
…on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating Regime approaches, and so on, differing mainly in the ease of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model…
An Axiomatic Approach to the notion of Similarity of individual Sequences and their Classification
Ziv, Jacob
2011-01-01
An axiomatic approach to the notion of similarity of sequences, which seems natural in many cases (e.g. phylogenetic analysis), is proposed. Even though it is not assumed that the sequences are realizations of a probabilistic process (e.g. a variable-order Markov process), it is demonstrated that any classifier that fully complies with the proposed similarity axioms must be based on modeling of the training data contained in a (long) individual training sequence via a suffix tree with no more than O(N) leaves (or, alternatively, a table with O(N) entries), where N is the length of the test sequence. Some common classification algorithms may be slightly modified to comply with the proposed axiomatic conditions and the resulting organization of the training data, thus yielding a formal justification for their good empirical performance without relying on any a-priori (sometimes unjustified) probabilistic assumption. One such case is discussed in detail.
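A common concrete instance of training-sequence-based similarity in the Lempel-Ziv spirit is cross-parsing: count how few phrases are needed to parse the test sequence out of substrings of the training sequence. The greedy parser below is a deliberate simplification for illustration, not the paper's suffix-tree construction.

```python
# Illustrative LZ-style cross-parsing count: fewer phrases means the test
# sequence is more "similar" to (better modeled by) the training sequence.
def cross_parse_count(test: str, train: str) -> int:
    """Greedily split `test` into phrases, each the longest prefix of the
    remainder occurring as a substring of `train` (or a single character)."""
    i, phrases = 0, 0
    while i < len(test):
        j = i + 1
        while j <= len(test) and test[i:j] in train:
            j += 1
        i = max(j - 1, i + 1)   # consume the matched phrase; always advance
        phrases += 1
    return phrases

# A test sequence parses into fewer phrases against similar training data.
print(cross_parse_count("abcabc", "abcabcabc"))  # → 1
print(cross_parse_count("abcabc", "xyz"))        # → 6
```

A suffix tree over the training sequence would make each longest-match lookup efficient, which is where the O(N)-leaves bound in the abstract comes in.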
Similarities between obesity in pets and children: the addiction model.
Pretlow, Robert A; Corbee, Ronald J
2016-09-01
Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates - for example, poor nutrition and sedentary activity - are being challenged. Obesity interventions in both pets and children have produced modest short-term but poor long-term results. New strategies are needed. A novel theory posits that obesity in pets and children is due to 'treats' and excessive meal amounts given by the 'pet-parent' and child-parent to obtain affection from the pet/child, which enables 'eating addiction' in the pet/child and results in parental 'co-dependence'. Pet-parents and child-parents may even become hostage to the treats/food to avoid the ire of the pet/child. Eating addiction in the pet/child also may be brought about by emotional factors such as stress, independent of parental co-dependence. An applicable treatment for child obesity has been trialled using classic addiction withdrawal/abstinence techniques, as well as behavioural addiction methods, with significant results. Both the child and the parent progress through withdrawal from specific 'problem foods', next from snacking (non-specific foods) and finally from excessive portions at meals (gradual reductions). This approach should adapt well for pets and pet-parents. Pet obesity is more 'pure' than child obesity, in that contributing factors and treatment points are essentially under the control of the pet-parent. Pet obesity might thus serve as an ideal test bed for the treatment and prevention of child obesity, with focus primarily on parental behaviours. Sharing information between the fields of pet and child obesity would be mutually beneficial.
Similarity-based search of model organism, disease and drug effect phenotypes
Hoehndorf, Robert
2015-02-19
Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.
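The search pattern behind such systems, ranking candidate models by the similarity of their phenotype profiles to a patient's profile, can be sketched with a set-based Jaccard measure. PhenomeNET uses ontology-aware semantic similarity instead, and the HPO identifiers and model names below are invented for illustration.

```python
# Sketch: rank model-organism phenotype profiles by Jaccard similarity
# to a patient's phenotype profile (a stand-in for semantic similarity).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

patient = {"HP:0001250", "HP:0001263", "HP:0011968"}
models = {
    "mouse_model_A": {"HP:0001250", "HP:0001263"},
    "mouse_model_B": {"HP:0000478"},
}
ranked = sorted(models, key=lambda m: jaccard(patient, models[m]), reverse=True)
print(ranked[0])  # → mouse_model_A
```

Ontology-based measures refine this by crediting partial matches between related (ancestor/descendant) phenotype terms rather than requiring exact identifier overlap.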
An approach to large scale identification of non-obvious structural similarities between proteins
Directory of Open Access Journals (Sweden)
Cherkasov Artem
2004-05-01
Background: A new sequence-independent bioinformatics approach allowing genome-wide search for proteins with similar three-dimensional structures has been developed. By utilizing the numerical output of sequence threading, it establishes putative non-obvious structural similarities between proteins. When applied to a testing set of proteins with known three-dimensional structures, the developed approach was able to recognize structurally similar proteins with high accuracy. Results: The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from sequence-structure threading. It identifies potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion: Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence.
QSAR models based on quantum topological molecular similarity.
Popelier, P L A; Smith, P J
2006-07-01
A method called quantum topological molecular similarity (QTMS) was fairly recently proposed [J. Chem. Inf. Comp. Sc., 41, 2001, 764] to construct a variety of medicinal, ecological and physical organic QSAR/QSPRs. The QTMS method uses quantum chemical topology (QCT) to define electronic descriptors drawn from modern ab initio wave functions of geometry-optimised molecules. It was shown that the current abundance of computing power can be utilised to inject realistic descriptors into QSAR/QSPRs. In this article we study seven datasets of medicinal interest: the dissociation constants (pKa) for a set of substituted imidazolines, the pKa of imidazoles, the ability of a set of indole derivatives to displace [3H]flunitrazepam from binding to bovine cortical membranes, the influenza inhibition constants for a set of benzimidazoles, the interaction constants for a set of amides and the enzyme liver alcohol dehydrogenase, the natriuretic activity of sulphonamide carbonic anhydrase inhibitors, and the toxicity of a series of benzyl alcohols. A partial least squares analysis in conjunction with a genetic algorithm delivered excellent models. They are also able to highlight the active site of the ligand, i.e. the part of the molecule whose structure determines the activity. The advantages and limitations of QTMS are discussed.
Document Representation and Clustering with WordNet Based Similarity Rough Set Model
Directory of Open Access Journals (Sweden)
Koichi Yamada
2011-09-01
Most studies on document clustering to date use the Vector Space Model (VSM) to represent documents in the document space, where documents are denoted by a vector in a word vector space. The standard VSM does not take into account the semantic relatedness between terms. Thus, terms with some semantic similarity are dealt with in the same way as terms with no semantic relatedness. Since this disregard for semantics reduces the quality of clustering results, many studies have proposed various approaches to introduce knowledge of semantic relatedness into the VSM. Those approaches give better results than the standard VSM, but they still have their own issues. We propose a new approach combining two of them, one of which uses Rough Set theory and the co-occurrence of terms, while the other uses WordNet knowledge, to solve these issues. Experiments for its evaluation show the advantage of the proposed approach over the others.
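The limitation of the standard VSM, and the generic fix of folding a term-term similarity matrix into the document dot product, can be shown in a few lines. The similarity values below are assumed for illustration and stand in for the paper's Rough Set / WordNet machinery.

```python
# Sketch: plain VSM scores "car" vs "automobile" documents as unrelated;
# a term-term semantic similarity matrix S recovers their relatedness.
import numpy as np

vocab = ["car", "automobile", "banana"]
d1 = np.array([1.0, 0.0, 0.0])            # document mentioning "car"
d2 = np.array([0.0, 1.0, 0.0])            # document mentioning "automobile"

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(d1, d2))                         # → 0.0 under the standard VSM

S = np.array([[1.0, 0.9, 0.0],             # assumed car ~ automobile
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
sim = d1 @ S @ d2 / np.sqrt((d1 @ S @ d1) * (d2 @ S @ d2))
print(round(sim, 2))                       # → 0.9
```

Any clustering algorithm run on the S-weighted similarities will then group semantically related documents that the standard VSM keeps apart.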
Directory of Open Access Journals (Sweden)
Fatima Zohra Benkaddour
2016-12-01
In the spunlace nonwovens industry, the maintenance task is very complex and requires collaboration between experts and operators. In this paper, we propose a new approach integrating agent-based modelling with case-based reasoning that utilizes similarity measures and a preferences module. The main purpose of our study is to compare and evaluate the most suitable similarity measure for our case. Furthermore, operators, who are usually geographically dispersed, have to collaborate and negotiate to achieve mutual agreements, especially when their proposals (diagnoses) lead to a conflicting situation. The experimentation shows that the suggested agent-based approach is very interesting and efficient for operators and experts who collaborate in the INOTIS enterprise.
Bergen, K.; Yoon, C. E.; O'Reilly, O. J.; Beroza, G. C.
2015-12-01
Recent improvements in computational efficiency for waveform correlation-based detections achieved by new methods such as Fingerprint and Similarity Thresholding (FAST) promise to allow large-scale blind search for similar waveforms in long-duration continuous seismic data. Waveform similarity search applied to datasets of months to years of continuous seismic data will identify significantly more events than traditional detection methods. With the anticipated increase in number of detections and associated increase in false positives, manual inspection of the detection results will become infeasible. This motivates the need for new approaches to process the output of similarity-based detection. We explore data mining techniques for improved detection post-processing. We approach this by considering similarity-detector output as a sparse similarity graph with candidate events as vertices and similarities as weighted edges. Image processing techniques are leveraged to define candidate events and combine results individually processed at multiple stations. Clustering and graph analysis methods are used to identify groups of similar waveforms and assign a confidence score to candidate detections. Anomaly detection and classification are applied to waveform data for additional false detection removal. A comparison of methods will be presented and their performance will be demonstrated on a suspected induced and non-induced earthquake sequence.
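The graph view of detector output can be sketched with SciPy: candidate events are vertices, pairwise waveform similarities are weighted edges, and connected components give a first grouping of similar waveforms. The similarity values below are assumed, and component analysis stands in for the richer clustering and confidence scoring described above.

```python
# Sketch: treat similarity-detector output as a sparse graph and group
# candidate detections by connected components.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Pairwise similarities among 6 candidate detections (assumed values).
edges = {(0, 1): 0.9, (1, 2): 0.8, (3, 4): 0.7}   # event 5 is isolated
rows, cols, vals = zip(*[(i, j, s) for (i, j), s in edges.items()])
g = csr_matrix((vals, (rows, cols)), shape=(6, 6))

n_groups, labels = connected_components(g, directed=False)
print(n_groups, labels.tolist())
```

Isolated vertices (like event 5) form singleton components and are natural candidates for false-detection screening, while large components suggest repeating event families.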
de Wolff, Marianne S.; Vogels, Anton G. C.; Reijneveld, Sijmen A.
2014-01-01
The DSM-oriented approach of the Child Behavior Checklist (CBCL) is a relatively new classification of problem behavior in children and adolescents. Given the clinical and scientific relevance of the CBCL, this study examines similarities and dissimilarities between the empirical and the
Li, Chenyang; Verma, Prakash; Hannon, Kevin P; Evangelista, Francesco A
2017-08-21
We propose an economical state-specific approach to evaluate electronic excitation energies based on the driven similarity renormalization group truncated to second order (DSRG-PT2). Starting from a closed-shell Hartree-Fock wave function, a model space is constructed that includes all single or single and double excitations within a given set of active orbitals. The resulting VCIS-DSRG-PT2 and VCISD-DSRG-PT2 methods are introduced and benchmarked on a set of 28 organic molecules [M. Schreiber et al., J. Chem. Phys. 128, 134110 (2008)]. Taking CC3 results as reference values, mean absolute deviations of 0.32 and 0.22 eV are observed for VCIS-DSRG-PT2 and VCISD-DSRG-PT2 excitation energies, respectively. Overall, VCIS-DSRG-PT2 yields results with accuracy comparable to those from time-dependent density functional theory using the B3LYP functional, while VCISD-DSRG-PT2 gives excitation energies comparable to those from equation-of-motion coupled cluster with singles and doubles.
Minimal axiom group of similarity-based rough set model
Institute of Scientific and Technical Information of China (English)
DAI Jian-hua; PAN Yun-he
2006-01-01
Rough set axiomatization is one aspect of rough set study whose aim is to characterize rough set theory using dependable and minimal axiom groups. Thus, rough set theory can be studied by logic and axiom system methods. The classical rough set theory is based on an equivalence relation, but rough set theory based on a similarity relation has wide applications in the real world. To characterize similarity-based rough set theory, an axiom group named S, consisting of 3 axioms, is proposed. The reliability of the axiom group, which shows that this characterization of rough set theory based on a similarity relation is rational, is proved. Simultaneously, the minimization of the axiom group, which requires that each axiom be an equation and independent, is proved. The axiom group is helpful for researching rough set theory by logic and axiom system methods.
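For a concrete reading of what a similarity-based rough set is, the standard lower and upper approximations (not the paper's axioms themselves) can be computed directly: the relation R is reflexive but, unlike an equivalence relation, need not be transitive. The small universe below is invented.

```python
# Standard similarity-based rough approximations on a toy universe:
# lower(X) = objects whose similarity class lies entirely inside X,
# upper(X) = objects whose similarity class meets X.
U = {1, 2, 3, 4}
R = {1: {1, 2}, 2: {1, 2, 3}, 3: {2, 3}, 4: {4}}   # R[x]: objects similar to x
X = {1, 2}

lower = {x for x in U if R[x] <= X}
upper = {x for x in U if R[x] & X}
print(lower, upper)   # → {1} {1, 2, 3}
```

Note that object 2 is in the upper but not the lower approximation: its similarity class leaks outside X, which is exactly the kind of boundary region the axiomatization has to characterize.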
MAC/FAC: A Model of Similarity-Based Retrieval
1994-10-01
A Clustering Ensemble approach based on the similarities in 2-mode social networks
Institute of Scientific and Technical Information of China (English)
SU Bao-ping; ZHANG Meng-jie
2014-01-01
For a particular clustering problem, selecting the best clustering method is a challenge. Research suggests that integrating multiple clusterings can greatly improve the accuracy of the ensemble. A new clustering ensemble approach based on similarities in 2-mode networks is proposed in this paper. First, the data objects and the initial clusters are transformed into 2-mode networks; then the similarities in the 2-mode networks are used to calculate the similarity between different clusters iteratively to refine the adjacency matrix; finally, the K-means algorithm is applied to obtain the final clustering results. The method effectively uses the similarity between different clusters, and an example shows its feasibility.
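The generic consensus pipeline behind such ensembles, fusing several base clusterings through a pairwise similarity (co-association) matrix and then running K-means, can be sketched as follows. This is the common co-association variant, not the paper's 2-mode-network similarity iteration, and the base clusterings are invented.

```python
# Sketch of a clustering ensemble: build a co-association matrix from
# several base clusterings, then extract a consensus partition with K-means.
import numpy as np
from sklearn.cluster import KMeans

base = np.array([  # three base clusterings of 6 objects (label vectors)
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
])
# co[i, j] = fraction of base clusterings that place objects i and j together
co = np.mean(base[:, :, None] == base[:, None, :], axis=0)

consensus = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(co)
print(consensus.tolist())
```

Objects that every base clustering keeps together (0 and 1, or 4 and 5 above) end up in the same consensus cluster, while disagreements (objects 2 and 3) are resolved by their overall similarity profile.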
Patient Similarity in Prediction Models Based on Health Data: A Scoping Review
Sharafoddini, Anis; Dubin, Joel A
2017-01-01
Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients’ future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts then analyzing the full texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results. Results After duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health
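The neighborhood-based approaches the review highlights can be sketched minimally as a k-nearest-neighbor prediction over patient feature vectors; this is a generic, hypothetical illustration, not code from any reviewed study:

```python
# Hypothetical sketch: predict an index patient's outcome from the k most
# similar patients, with similarity taken as Euclidean distance over features.
import math

def predict_outcome(index_patient, records, k=3):
    """records: list of (feature_tuple, outcome); returns mean outcome of k nearest."""
    nearest = sorted(records, key=lambda r: math.dist(r[0], index_patient))[:k]
    return sum(outcome for _, outcome in nearest) / k
```

Real patient-similarity systems would of course use richer similarity measures (coded diagnoses, temporal data) and handle missing values, which the review identifies as key design choices.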
Modelling clinical systemic lupus erythematosus: similarities, differences and success stories.
Celhar, Teja; Fairhurst, Anna-Marie
2016-12-24
Mouse models of SLE have been indispensable tools to study disease pathogenesis, to identify genetic susceptibility loci and targets for drug development, and for preclinical testing of novel therapeutics. Recent insights into immunological mechanisms of disease progression have boosted a revival in SLE drug development. Despite promising results in mouse studies, many novel drugs have failed to meet clinical end points. This is probably because of the complexity of the disease, which is driven by polygenic predisposition and diverse environmental factors, resulting in a heterogeneous clinical presentation. Each mouse model recapitulates limited aspects of lupus, especially in terms of the mechanism underlying disease progression. The main mouse models have been fairly successful for the evaluation of broad-acting immunosuppressants. However, the advent of targeted therapeutics calls for a selection of the most appropriate model(s) for testing and, ultimately, identification of patients who will be most likely to respond.
Modelling clinical systemic lupus erythematosus: similarities, differences and success stories
Celhar, Teja
2017-01-01
Abstract Mouse models of SLE have been indispensable tools to study disease pathogenesis, to identify genetic susceptibility loci and targets for drug development, and for preclinical testing of novel therapeutics. Recent insights into immunological mechanisms of disease progression have boosted a revival in SLE drug development. Despite promising results in mouse studies, many novel drugs have failed to meet clinical end points. This is probably because of the complexity of the disease, which is driven by polygenic predisposition and diverse environmental factors, resulting in a heterogeneous clinical presentation. Each mouse model recapitulates limited aspects of lupus, especially in terms of the mechanism underlying disease progression. The main mouse models have been fairly successful for the evaluation of broad-acting immunosuppressants. However, the advent of targeted therapeutics calls for a selection of the most appropriate model(s) for testing and, ultimately, identification of patients who will be most likely to respond. PMID:28013204
An Approach of System Similarity Measurement Based on Segmented-Digital-Fingerprint
Directory of Open Access Journals (Sweden)
Liao Gen-Wei
2013-06-01
Full Text Available Analysis and identification of software infringement is a time-consuming and complicated task, usually done in a lab. However, quickly checking whether suspect software infringes another's copyright is a necessity in software infringement cases. An approach to copyright checking based on digital fingerprints is provided in this study, which computes system similarity by segmenting the files to be compared, searching boundaries with a sliding window, and finding matching digital fingerprints of data blocks using simple and complex hashes. The approach is suited to finding preliminary evidence on the spot in law enforcement of software infringement cases, making it both efficient and reliable.
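A minimal sketch of the idea (our own simplification, with fixed-size blocks instead of the paper's sliding-window boundary search): hash each block of a file and score similarity as the Jaccard overlap of the two fingerprint sets:

```python
# Hypothetical sketch: segmented digital fingerprints with fixed-size blocks.
import hashlib

def block_fingerprints(data, block=4):
    """Set of hashes of consecutive fixed-size blocks of `data` (bytes)."""
    return {hashlib.sha256(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)}

def similarity(a, b, block=4):
    """Jaccard similarity of the two files' block-fingerprint sets."""
    fa, fb = block_fingerprints(a, block), block_fingerprints(b, block)
    return len(fa & fb) / len(fa | fb)
```

The paper's sliding-window boundary detection makes the scheme robust to insertions and deletions, which fixed-size blocks are not; this sketch only conveys the fingerprint-matching step.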
Similarities between obesity in pets and children: the addiction model
Pretlow, Robert A.; Corbee, Ronald J.
2016-01-01
Obesity in pets is a frustrating, major health problem. Obesity in human children is similar. Prevailing theories accounting for the rising obesity rates – for example, poor nutrition and sedentary activity – are being challenged. Obesity interventions in both pets and children have produced modest short-term but poor long-term results. New strategies are needed. A novel theory posits that obesity in pets and children is due to ‘treats’ and excessive meal amounts given by the ‘pet–parent’ and...
Similarity solutions for systems arising from an Aedes aegypti model
Freire, Igor Leite; Torrisi, Mariano
2014-04-01
In a recent paper, a new model for Aedes aegypti mosquito dispersal dynamics was proposed and its Lie point symmetries were investigated. According to the group classification carried out, the maximal symmetry Lie algebra of the nonlinear cases is reached whenever the advection term vanishes. In this work we analyze the family of systems obtained when the wind effects on the proposed model are neglected. Wide new classes of solutions to the systems under consideration are obtained.
Energy Technology Data Exchange (ETDEWEB)
Szpigel, S. [Centro de Ciencias e Humanidades, Universidade Presbiteriana Mackenzie, Sao Paulo, SP (Brazil); Timoteo, V.S. [Faculdade de Tecnologia, Universidade Estadual de Campinas, Limeira, SP (Brazil); Duraes, F. de O [Centro de Ciencias e Humanidades, Universidade Presbiteriana Mackenzie, Sao Paulo, SP (Brazil)
2010-02-15
In this work we study the Similarity Renormalization Group (SRG) evolution of effective nucleon-nucleon (NN) interactions derived using the Subtracted Kernel Method (SKM) approach. We present the results for the phaseshifts in the {sup 1}S{sub 0} channel calculated using a SRG potential evolved from an initial effective potential obtained by implementing the SKM scheme for the leading-order NN interaction in chiral effective field theory (ChEFT).
Meeting your match: how attractiveness similarity affects approach behavior in mixed-sex dyads.
van Straaten, Ischa; Engels, Rutger C M E; Finkenauer, Catrin; Holland, Rob W
2009-06-01
This experimental study investigated approach behavior toward opposite-sex others of similar versus dissimilar physical attractiveness. Furthermore, it tested the moderating effects of sex. Single participants interacted with confederates of high and low attractiveness. Observers rated their behavior in terms of relational investment (i.e., behavioral efforts related to the improvement of interaction fluency, communication of positive interpersonal affect, and positive self-presentation). As expected, men displayed more relational investment behavior if their own physical attractiveness was similar to that of the confederate. For women, no effects of attractiveness similarity on relational investment behavior were found. Results are discussed in the light of positive assortative mating, preferences for physically attractive mates, and sex differences in attraction-related interpersonal behaviors.
On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.
Yang, Harry; Novick, Steven; Burdick, Richard K
Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, currently the U.S. Food and Drug Administration (FDA) recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk and approaches of varying statistical rigor are subsequently used for the three-tier quality attributes. Key to the analyses of Tiers 1 and 2 quality attributes is the establishment of an equivalence acceptance criterion and quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5 σR, where σR is the reference product variability estimated by the sample standard deviation SR from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄R ± K × σR where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of test product must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation SR underestimates the true reference product variability σR. As a result, substituting SR for σR in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate risk of correlation among the reference products lots are discussed. A biosimilar is
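The Tier 2 quality-range rule described above is mechanical enough to sketch directly; this is an illustrative toy (function name and example values are ours), not the FDA's or the authors' implementation:

```python
# Hypothetical sketch of the Tier 2 quality-range check:
# pass if at least `required` of test lots fall in X̄R ± K * s_R.
import statistics

def tier2_quality_range(reference_lots, test_lots, K=3.0, required=0.9):
    """Return (pass/fail, (lower, upper)) for the Tier 2 quality range."""
    mean_r = statistics.mean(reference_lots)
    s_r = statistics.stdev(reference_lots)  # the S_R the paper shows is biased
    lo, hi = mean_r - K * s_r, mean_r + K * s_r
    frac = sum(lo <= x <= hi for x in test_lots) / len(test_lots)
    return frac >= required, (lo, hi)
```

The paper's point is precisely that when reference lots are correlated, `s_r` here underestimates σR, narrowing the range and unfairly failing similar products.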
The waveform similarity approach to identify dependent events in instrumental seismic catalogues
Barani, S.; Ferretti, G.; Massa, M.; Spallarossa, D.
2007-01-01
In this paper, waveform similarity analysis is adapted and implemented in a declustering procedure to identify foreshocks and aftershocks, to obtain instrumental catalogues that are cleaned of dependent events, and to perform an independent check of the results of traditional declustering techniques. Unlike traditional declustering methods (i.e. windowing techniques), the application of cross-correlation analysis allows definition of groups of dependent events (multiplets) characterized by similar location, fault mechanism and propagation pattern. In this way, the chaining of related events is guided by the seismogenic features of the earthquakes. Furthermore, a time-selection criterion is used to define time-independent seismic episodes eventually joined (on the basis of waveform similarity) into a single multiplet. The results, obtained by applying our procedure to a test data set, show that the declustered catalogue is drawn from a Poisson distribution with a higher degree of confidence than when using the Gardner and Knopoff method. The catalogues declustered by these two approaches are similar with respect to the frequency-magnitude distribution and the number of earthquakes. Nevertheless, the application of our approach leads to declustered catalogues properly related to the seismotectonic background and the rheology of the investigated area, and the success of the procedure is ensured by the independence of the results from the estimated location errors of the events collected in the raw catalogue.
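The core cross-correlation test for grouping events into multiplets can be sketched as follows; this is a generic illustration (zero-lag only, with a threshold parameter of our choosing), not the authors' pipeline:

```python
# Hypothetical sketch: waveform similarity via normalized cross-correlation.
import math

def normalized_xcorr(a, b):
    """Zero-lag normalized cross-correlation of two equal-length waveforms."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def are_multiplets(a, b, threshold=0.9):
    """Group two events into a multiplet when their waveforms correlate highly."""
    return normalized_xcorr(a, b) >= threshold
```

In practice one scans over time lags and compares records at a common station, so that similar location and mechanism translate into high correlation.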
Novel Approach to Classify Plants Based on Metabolite-Content Similarity
Directory of Open Access Journals (Sweden)
Kang Liu
2017-01-01
Full Text Available Secondary metabolites are bioactive substances with diverse chemical structures. Depending on the ecological environment within which they are living, higher plants use different combinations of secondary metabolites for adaptation (e.g., defense against attacks by herbivores or pathogenic microbes). This suggests that the similarity in metabolite content is applicable to assess phylogenic similarity of higher plants. However, such a chemical taxonomic approach has limitations of incomplete metabolomics data. We propose an approach for successfully classifying 216 plants based on their known incomplete metabolite content. Structurally similar metabolites have been clustered using the network clustering algorithm DPClus. Plants have been represented as binary vectors, implying relations with structurally similar metabolite groups, and classified using Ward’s method of hierarchical clustering. Despite incomplete data, the resulting plant clusters are consistent with the known evolutional relations of plants. This finding reveals the significance of metabolite content as a taxonomic marker. We also discuss the predictive power of metabolite content in exploring nutritional and medicinal properties in plants. As a byproduct of our analysis, we could predict some currently unknown species-metabolite relations.
A New Approach to Change Vector Analysis Using Distance and Similarity Measures
Directory of Open Access Journals (Sweden)
Alan R. Gillespie
2011-11-01
Full Text Available The need to monitor the Earth’s surface over a range of spatial and temporal scales is fundamental in ecosystems planning and management. Change-Vector Analysis (CVA) is a bi-temporal method of change detection that considers the magnitude and direction of the change vector. However, many multispectral applications do not make use of the direction component. The procedure most used to calculate the direction component from multiband data is the direction cosine, but the number of output direction-cosine images is equal to the number of original bands, and their interpretation is complex. This paper proposes a new approach to calculate the spectral direction of change, using the Spectral Angle Mapper and Spectral Correlation Mapper spectral-similarity measures. The chief advantage of this approach is that it generates a single image of change information that is insensitive to illumination variation. In this paper the magnitude component of the spectral similarity was calculated in two ways: as the standard Euclidean distance and as the Mahalanobis distance. In this test the best magnitude measure was the Euclidean distance and the best similarity measure was the Spectral Angle Mapper. The results show that the distance and similarity measures are complementary and need to be applied together.
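The two components the paper combines, a Euclidean change magnitude and a Spectral Angle Mapper direction, can be sketched per pixel as follows (an illustrative toy, not the authors' implementation):

```python
# Hypothetical sketch: per-pixel CVA components from two spectra (band vectors).
import math

def spectral_angle(a, b):
    """Spectral Angle Mapper: angle (radians) between two spectra.

    Scaling a spectrum leaves the angle unchanged, which is what makes
    the measure insensitive to illumination variation."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def change_magnitude(a, b):
    """Standard Euclidean magnitude of the change vector."""
    return math.dist(a, b)
```

A change map would apply both functions to every pixel pair of the two acquisition dates and interpret magnitude and angle jointly.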
Novel Approach to Classify Plants Based on Metabolite-Content Similarity.
Liu, Kang; Abdullah, Azian Azamimi; Huang, Ming; Nishioka, Takaaki; Altaf-Ul-Amin, Md; Kanaya, Shigehiko
2017-01-01
Secondary metabolites are bioactive substances with diverse chemical structures. Depending on the ecological environment within which they are living, higher plants use different combinations of secondary metabolites for adaptation (e.g., defense against attacks by herbivores or pathogenic microbes). This suggests that the similarity in metabolite content is applicable to assess phylogenic similarity of higher plants. However, such a chemical taxonomic approach has limitations of incomplete metabolomics data. We propose an approach for successfully classifying 216 plants based on their known incomplete metabolite content. Structurally similar metabolites have been clustered using the network clustering algorithm DPClus. Plants have been represented as binary vectors, implying relations with structurally similar metabolite groups, and classified using Ward's method of hierarchical clustering. Despite incomplete data, the resulting plant clusters are consistent with the known evolutional relations of plants. This finding reveals the significance of metabolite content as a taxonomic marker. We also discuss the predictive power of metabolite content in exploring nutritional and medicinal properties in plants. As a byproduct of our analysis, we could predict some currently unknown species-metabolite relations.
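The paper clusters binary metabolite-presence vectors with Ward's method; as a minimal, hypothetical sketch (our own, not the authors' code), the distance such a clustering could start from is the Jaccard distance between two plants' metabolite profiles:

```python
# Hypothetical sketch: Jaccard distance between binary metabolite-presence
# vectors (1 = metabolite group present in the plant, 0 = absent).
def jaccard_distance(a, b):
    """1 - |intersection| / |union| of the two presence sets."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return 1.0 - inter / union
```

Feeding the full pairwise distance matrix to a hierarchical clustering routine would then reproduce the kind of plant dendrogram the paper evaluates against known phylogeny.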
Directory of Open Access Journals (Sweden)
William R Swindell
Full Text Available Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the human disease, and standardized validation criteria for psoriasis mouse models have not been widely applied. In this study, whole-genome transcriptional profiling is used to compare gene expression patterns manifested by human psoriatic skin lesions with those that occur in five psoriasis mouse models (K5-Tie2, imiquimod, K14-AREG, K5-Stat3C and K5-TGFbeta1). While the cutaneous gene expression profiles associated with each mouse phenotype exhibited statistically significant similarity to the expression profile of psoriasis in humans, each model displayed distinctive sets of similarities and differences in comparison to human psoriasis. For all five models, correspondence to the human disease was strong with respect to genes involved in epidermal development and keratinization. Immune and inflammation-associated gene expression, in contrast, was more variable between models as compared to the human disease. These findings support the value of all five models as research tools, each with identifiable areas of convergence to and divergence from the human disease. Additionally, the approach used in this paper provides an objective and quantitative method for evaluation of proposed mouse models of psoriasis, which can be strategically applied in future studies to score strengths of mouse phenotypes relative to specific aspects of human psoriasis.
Institute of Scientific and Technical Information of China (English)
Pan Jiayi; Chin-Pang Jack Cheng; Gloria T. Lau; Kincho H. Law
2008-01-01
The objective of this paper is to introduce three semi-automated approaches for ontology mapping using relatedness analysis techniques. In the architecture, engineering, and construction (AEC) industry, there exist a number of ontological standards to describe the semantics of building models. Although the standards share similar scopes of interest, the task of comparing and mapping concepts among standards is challenging due to their differences in terminologies and perspectives. Ontology mapping is therefore necessary to achieve information interoperability, which allows two or more information sources to exchange data and to re-use the data for further purposes. The attribute-based approach, corpus-based approach, and name-based approach presented in this paper adopt the statistical relatedness analysis techniques to discover related concepts from heterogeneous ontologies. A pilot study is conducted on IFC and CIS/2 ontologies to evaluate the approaches. Preliminary results show that the attribute-based approach outperforms the other two approaches in terms of precision and F-measure.
An Efficient Similarity Digests Database Lookup - A Logarithmic Divide & Conquer Approach
Directory of Open Access Journals (Sweden)
Frank Breitinger
2014-09-01
Full Text Available Investigating seized devices within digital forensics represents a challenging task due to the increasing amount of data. Common procedures utilize automated file identification, which reduces the amount of data an investigator has to examine manually. In the past years the research field of approximate matching has arisen to detect similar data. However, if n denotes the number of similarity digests in a database, then the lookup for a single similarity digest is of complexity O(n). This paper presents a concept to extend existing approximate matching algorithms, which reduces the lookup complexity from O(n) to O(log(n)). Our proposed approach is based on the well-known divide and conquer paradigm and builds a Bloom filter-based tree data structure in order to enable an efficient lookup of similarity digests. Further, it is demonstrated that the presented technique is highly scalable, offering a trade-off between storage requirements and computational efficiency. We perform a theoretical assessment based on recently published results and reasonable magnitudes of input data, and show that the complexity reduction achieved by the proposed technique yields a 220-fold acceleration of look-up costs.
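The divide-and-conquer structure can be sketched as a binary tree in which every node carries a Bloom filter summarizing all digests beneath it, so whole subtrees are skipped on lookup. This is an illustrative toy (our own class and parameters, not the paper's data structure):

```python
# Hypothetical sketch: Bloom-filter tree for logarithmic digest lookup.
import hashlib

class Bloom:
    """Tiny Bloom filter over an integer bitmask (not tuned for real use)."""
    def __init__(self, size=4096, k=3):
        self.size, self.k, self.bits = size, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

def build_tree(digests):
    """Binary tree: each node holds a Bloom filter of all digests below it."""
    node = {"leaf": digests[0]} if len(digests) == 1 else {
        "left": build_tree(digests[: len(digests) // 2]),
        "right": build_tree(digests[len(digests) // 2:]),
    }
    bloom = Bloom()
    for d in digests:
        bloom.add(d)
    node["bloom"] = bloom
    return node

def lookup(node, digest):
    """Descend only into subtrees whose Bloom filter may contain the digest."""
    if not node["bloom"].might_contain(digest):
        return False
    if "leaf" in node:
        return node["leaf"] == digest
    return lookup(node["left"], digest) or lookup(node["right"], digest)
```

Bloom-filter false positives only cost an extra descent; membership answers remain exact because each leaf does a final equality check.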
Institute of Scientific and Technical Information of China (English)
Suparerk JANJARASJITT
2014-01-01
Self-similarity or scale-invariance is a fascinating characteristic found in various signals including electroencephalogram (EEG) signals. A common measure used for characterizing self-similarity or scale-invariance is the spectral exponent. In this study, a computational method for estimating the spectral exponent based on wavelet transform was examined. A series of Daubechies wavelet bases with various numbers of vanishing moments were applied to analyze the self-similar characteristics of intracranial EEG data corresponding to different pathological states of the brain, i.e., ictal and interictal states, in patients with epilepsy. The computational results show that the spectral exponents of intracranial EEG signals obtained during epileptic seizure activity tend to be higher than those obtained during non-seizure periods. This suggests that the intracranial EEG signals obtained during epileptic seizure activity tend to be more self-similar than those obtained during non-seizure periods. The computational results obtained using the wavelet-based approach were validated by comparison with results obtained using the power spectrum method.
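A stripped-down version of the wavelet-based estimation can be sketched with the Haar wavelet (the study used Daubechies bases with various vanishing moments; Haar is the simplest member and our choice purely for illustration): compute detail-coefficient energies level by level and fit the log-log slope.

```python
# Hypothetical sketch: spectral-exponent estimate from Haar wavelet energies.
import math

def haar_detail_energies(signal, levels):
    """Mean energy of Haar detail coefficients at each dyadic level."""
    x = list(signal)
    energies = []
    for _ in range(levels):
        d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        x = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        energies.append(sum(v * v for v in d) / len(d))
    return energies

def spectral_exponent(signal, levels):
    """Least-squares slope of log2(detail energy) vs. level: a crude estimate."""
    ys = [math.log2(e) for e in haar_detail_energies(signal, levels)]
    xs = list(range(1, levels + 1))
    mx, my = sum(xs) / levels, sum(ys) / levels
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
```

For a self-similar signal the energies scale as a power law across levels, so the slope characterizes the degree of self-similarity the study compares between ictal and interictal EEG.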
Comparison Latent Semantic and WordNet Approach for Semantic Similarity Calculation
Wicaksana, I Wayan Simri
2011-01-01
Information exchange among the many sources on the Internet is increasingly autonomous, dynamic and free. This situation drives differing views of concepts among sources. For example, the word 'bank' means an economic institution in the economic domain, but in the ecology domain it is defined as the slope of a river or lake. In this paper, we evaluate latent semantic and WordNet approaches to calculating semantic similarity. The evaluation is run on concepts from different domains, with reference judgments provided by human experts. The results of the evaluation can contribute to concept mapping, query rewriting, interoperability, etc.
A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis
Directory of Open Access Journals (Sweden)
Ferrane Isabelle
2011-01-01
Full Text Available Abstract We propose a novel approach for video classification that is based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that may well characterize its content and its structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments are conducted on a set of 242 video documents, and the results show the efficiency of our proposals.
Accretion disk dynamics. α-viscosity in self-similar self-gravitating models
Kubsch, Marcus; Illenseer, Tobias F.; Duschl, Wolfgang J.
2016-04-01
Aims: We investigate the suitability of α-viscosity in self-similar models for self-gravitating disks with a focus on active galactic nuclei (AGN) disks. Methods: We use a self-similar approach to simplify the partial differential equations arising from the evolution equation, which are then solved using numerical standard procedures. Results: We find a self-similar solution for the dynamical evolution of self-gravitating α-disks and derive the significant quantities. In the Keplerian part of the disk our model is consistent with standard stationary α-disk theory, and self-consistent throughout the self-gravitating regime. Positive accretion rates throughout the disk demand a high degree of self-gravitation. Combined with the temporal decline of the accretion rate and its low amount, the model prohibits the growth of large central masses. Conclusions: α-viscosity cannot account for the evolution of the whole mass spectrum of super-massive black holes (SMBH) in AGN. However, considering the involved scales it seems suitable for modelling protoplanetary disks.
Teixeira, Ana L; Falcao, Andre O
2014-07-28
Structurally similar molecules tend to have similar properties, i.e. closer molecules in the molecular space are more likely to yield similar property values while distant molecules are more likely to yield different values. Based on this principle, we propose the use of a new method that takes into account the high dimensionality of the molecular space, predicting chemical, physical, or biological properties based on the most similar compounds with measured properties. This methodology uses ordinary kriging coupled with three different molecular similarity approaches (based on molecular descriptors, fingerprints, and atom matching) which creates an interpolation map over the molecular space that is capable of predicting properties/activities for diverse chemical data sets. The proposed method was tested in two data sets of diverse chemical compounds collected from the literature and preprocessed. One of the data sets contained dihydrofolate reductase inhibition activity data, and the second molecules for which aqueous solubility was known. The overall predictive results using kriging for both data sets comply with the results obtained in the literature using typical QSPR/QSAR approaches. However, the procedure did not involve any type of descriptor selection or even minimal information about each problem, suggesting that this approach is directly applicable to a large spectrum of problems in QSAR/QSPR. Furthermore, the predictive results improve significantly with the similarity threshold between the training and testing compounds, allowing the definition of a confidence threshold of similarity and error estimation for each case inferred. The use of kriging for interpolation over the molecular metric space is independent of the training data set size, and no reparametrizations are necessary when more compounds are added or removed from the set, and increasing the size of the database will consequentially improve the quality of the estimations. Finally it is shown
Accretion disk dynamics: α-viscosity in self-similar self-gravitating models
Kubsch, Marcus; Duschl, W J
2016-01-01
Aims: We investigate the suitability of α-viscosity in self-similar models for self-gravitating disks with a focus on active galactic nuclei (AGN) disks. Methods: We use a self-similar approach to simplify the partial differential equations arising from the evolution equation, which are then solved using numerical standard procedures. Results: We find a self-similar solution for the dynamical evolution of self-gravitating α-disks and derive the significant quantities. In the Keplerian part of the disk our model is consistent with standard stationary α-disk theory, and self-consistent throughout the self-gravitating regime. Positive accretion rates throughout the disk demand a high degree of self-gravitation. Combined with the temporal decline of the accretion rate and its low amount, the model prohibits the growth of large central masses. Conclusions: α-viscosity cannot account for the evolution of the whole mass spectrum of super-massive black holes (SMBH) in AGN. However, conside...
Self-similarity of phase-space networks of frustrated spin models and lattice gas models
Peng, Yi; Wang, Feng; Han, Yilong
2013-03-01
We studied the self-similar properties of the phase spaces of two frustrated spin models and two lattice gas models. The frustrated spin models included (1) the anti-ferromagnetic Ising model on a two-dimensional triangular lattice (1a) at the ground states and (1b) above the ground states and (2) the six-vertex model. The two lattice gas models were (3) the one-dimensional lattice gas model and (4) the two-dimensional lattice gas model. The phase spaces were mapped to networks so that the fractal analysis of complex networks could be applied, i.e. the box-covering method and the cluster-growth method. These phase spaces, in turn, establish new classes of networks with unique self-similar properties. Models 1a, 2, and 3, with long-range power-law correlations in real space, exhibit fractal phase spaces, while models 1b and 4, with short-range exponential correlations in real space, exhibit nonfractal phase spaces. This behavior agrees with one of the untested assumptions in Tsallis nonextensive statistics. Hong Kong GRC grants 601208 and 601911
Training of tonal similarity ratings in non-musicians: a "rapid learning" approach.
Oechslin, Mathias S; Läge, Damian; Vitouch, Oliver
2012-01-01
Although cognitive music psychology has a long tradition of expert-novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based "rapid learning" paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, intended to display mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), and were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for learning research in music and other domains. Results are discussed in the context of the "giftedness" debate.
Training of Tonal Similarity Ratings in Non-Musicians: A “Rapid Learning” Approach
Oechslin, Mathias S.; Läge, Damian; Vitouch, Oliver
2012-01-01
Although cognitive music psychology has a long tradition of expert–novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based “rapid learning” paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, intended to display mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), and were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for learning research in music and other domains. Results are discussed in the context of the “giftedness” debate. PMID:22629252
Training of tonal similarity ratings in non-musicians: a rapid learning approach
Directory of Open Access Journals (Sweden)
Mathias S Oechslin
2012-05-01
Full Text Available Although music psychology has a long tradition of expert-novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based rapid learning paradigm, participants had to decide for single tones and chords whether paired sounds matched each other well. Before and after the training sessions, they provided similarity judgments for a complete set of sound pairs. From these similarity matrices, individual relational sound maps, aiming to map the mental representations, were calculated by means of non-metric multidimensional scaling (NMDS), and were compared to an expert model through procrustean transformation. Approximately half of the novices showed substantial learning success, with some participants even reaching the level of professional musicians. Results speak for a fundamental ability to quickly train an understanding of harmony, show inter-individual differences in learning success, and demonstrate the suitability of the scaling method used for music psychological research. Results are discussed in the context of the giftedness debate.
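The comparison step described above — aligning an individual's scaled sound map to the expert model via procrustean transformation — can be sketched as follows. This is a minimal 2-D illustration of the alignment step only (the NMDS embedding itself is omitted), with point sets and the closed-form 2-D rotation chosen for the example:

```python
import math

def procrustes_2d(expert, novice):
    """Align `novice` to `expert` (lists of (x, y) points) via translation,
    uniform scaling, and rotation; return the residual sum of squares."""
    n = len(expert)

    def center(pts):
        # Translate a configuration so its centroid is at the origin.
        cx = sum(p[0] for p in pts) / n
        cy = sum(p[1] for p in pts) / n
        return [(x - cx, y - cy) for x, y in pts]

    def scale(pts):
        # Scale a configuration to unit Frobenius norm.
        s = math.sqrt(sum(x * x + y * y for x, y in pts))
        return [(x / s, y / s) for x, y in pts]

    A = scale(center(expert))
    B = scale(center(novice))
    # Optimal 2-D rotation angle has a closed form.
    num = sum(bx * ay - by * ax for (ax, ay), (bx, by) in zip(A, B))
    den = sum(bx * ax + by * ay for (ax, ay), (bx, by) in zip(A, B))
    t = math.atan2(num, den)
    rotated = [(bx * math.cos(t) - by * math.sin(t),
                bx * math.sin(t) + by * math.cos(t)) for bx, by in B]
    return sum((ax - rx) ** 2 + (ay - ry) ** 2
               for (ax, ay), (rx, ry) in zip(A, rotated))
```

A residual near zero means the novice's map has (up to position, size, and orientation) the same relational structure as the expert model.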
Mapping the Similarities of Spectra: Global and Locally-biased Approaches to SDSS Galaxy Data
Lawlor, David; Mahoney, Michael W
2016-01-01
We apply a novel spectral graph technique, that of locally-biased semi-supervised eigenvectors, to study the diversity of galaxies. This technique permits us to characterize empirically the natural variations in observed spectral data, and we illustrate how this approach can be used in an exploratory manner to highlight both large-scale global as well as small-scale local structure in Sloan Digital Sky Survey (SDSS) data. We use this method in a way that simultaneously takes into account the measurements of spectral lines as well as the continuum shape. Unlike Principal Component Analysis, this method does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity between all spectra, but instead it only assumes that local difference information between similar spectra is reliable. Moreover, unlike other nonlinear dimensionality reduction methods, this method can be used to characterize very finely both small-scale local as well as large-scale global properties of realistic nois...
How similar are nut-cracking and stone-flaking? A functional approach to percussive technology.
Bril, Blandine; Parry, Ross; Dietrich, Gilles
2015-11-19
Various authors have suggested similarities between tool use in early hominins and chimpanzees. This has been particularly evident in studies of nut-cracking which is considered to be the most complex skill exhibited by wild apes, and has also been interpreted as a precursor of more complex stone-flaking abilities. It has been argued that there is no major qualitative difference between what the chimpanzee does when he cracks a nut and what early hominins did when they detached a flake from a core. In this paper, similarities and differences between skills involved in stone-flaking and nut-cracking are explored through an experimental protocol with human subjects performing both tasks. We suggest that a 'functional' approach to percussive action, based on the distinction between functional parameters that characterize each task and parameters that characterize the agent's actions and movements, is a fruitful method for understanding those constraints which need to be mastered to perform each task successfully, and subsequently, the nature of skill involved in both tasks.
Pohle, Ina; Glendell, Miriam; Stutter, Marc I.; Helliwell, Rachel C.
2017-04-01
An understanding of catchment response to climate and land use change at a regional scale is necessary for the assessment of mitigation and adaptation options addressing diffuse nutrient pollution. It is well documented that the physicochemical properties of a river ecosystem respond to change in a non-linear fashion. This is particularly important when threshold water concentrations, relevant to national and EU legislation, are exceeded. Large scale (regional) model assessments required for regulatory purposes must represent the key processes and mechanisms that are more readily understood in catchments with water quantity and water quality data monitored at high spatial and temporal resolution. While daily discharge data are available for most catchments in Scotland, nitrate and phosphorus are mostly available on a monthly basis only, as typified by regulatory monitoring. However, high resolution (hourly to daily) water quantity and water quality data exist for a limited number of research catchments. To successfully implement adaptation measures across Scotland, an upscaling from data-rich to data-sparse catchments is required. In addition, the widespread availability of spatial datasets affecting hydrological and biogeochemical responses (e.g. soils, topography/geomorphology, land use, vegetation etc.) provide an opportunity to transfer predictions between data-rich and data-sparse areas by linking processes and responses to catchment attributes. Here, we develop a framework of catchment typologies as a prerequisite for transferring information from data-rich to data-sparse catchments by focusing on how hydrological catchment similarity can be used as an indicator of grouped behaviours in water quality response. As indicators of hydrological catchment similarity we use flow indices derived from observed discharge data across Scotland as well as hydrological model parameters. For the latter, we calibrated the lumped rainfall-runoff model TUWModel using multiple
Wang, Mian; Helbling, Damian E
2016-10-01
There is growing concern over the formation of new types of disinfection byproducts (DBPs) from pharmaceuticals and other emerging contaminants during drinking water production. Free chlorine is a widely used disinfectant that reacts non-selectively with organic molecules to form a variety of byproducts. In this research, we aimed to investigate the DBPs formed from three structurally similar sulfonamide antibiotics (sulfamethoxazole, sulfathiazole, and sulfadimethoxine) to determine how chemical structure influences the types of chlorination reactions observed. We conducted free chlorination experiments and developed a non-target approach to extract masses from the experimental dataset that represent the masses of candidate DBPs. Structures were assigned to the candidate DBPs based on analytical data and knowledge of chlorine chemistry. Confidence levels were assigned to each proposed structure according to conventions in the field. In total, 11, 12, and 15 DBP structures were proposed for sulfamethoxazole, sulfathiazole, and sulfadimethoxine, respectively. The structures of the products suggest a variety of reaction types including chlorine substitution, S-C cleavage, S-N hydrolysis, desulfonation, oxidation/hydroxylation, and conjugation reactions. Some reaction types were common to all of the sulfonamide antibiotics, but unique reaction types were also observed for each sulfonamide antibiotic, suggesting that selective prediction of DBP structures of other sulfonamide antibiotics based on chemical structure is unlikely to be possible based on these data alone. This research offers an approach to comprehensively identify DBPs of organic molecules and fills in much needed data on the formation of specific DBPs from three environmentally relevant sulfonamide antibiotics.
Directory of Open Access Journals (Sweden)
Yogapriya Jaganathan
2013-01-01
Full Text Available For the past few years, substantial progress has been made in the field of Content Based Medical Image Retrieval (CBMIR) for effective utilization of medical images based on visual feature analysis for the purpose of diagnosis and educational research. The existing medical image retrieval systems are still not optimal to solve the feature dimensionality reduction problem, which increases the computational complexity and decreases the speed of a retrieval process. The proposed CBMIR uses a hybrid approach based on feature extraction, optimization of feature vectors, classification of features, and similarity measurements. This type of CBMIR is called the Feature Optimized Classification Similarity (FOCS) framework. The selected features are textures using Gray Level Co-occurrence Matrix features (GLCM) and Tamura features (TF), in which extracted features are formed as a feature vector database. The Fuzzy based Particle Swarm Optimization (FPSO) technique is used to reduce the feature vector dimensionality, and classification is performed using a Fuzzy based Relevance Vector Machine (FRVM) to form groups of relevant image features that provide a natural way to classify dimensionally reduced feature vectors of images. The Euclidean Distance (ED) is used as the similarity measurement to measure the significance between the query image and the target images. This FOCS approach takes a query from the user and retrieves the needed images from the databases. The retrieval algorithm performances are estimated in terms of precision and recall. This FOCS framework comprises several benefits when compared to existing CBMIR. GLCM and TF are used to extract texture features and form a feature vector database. Fuzzy-PSO is used to reduce the feature vector dimensionality issues while selecting the important features in the feature vector database in which computational complexity is decreased. Fuzzy based RVM is used for feature classification in which it increases the
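The final stages of the pipeline above — Euclidean-distance ranking of feature vectors, then precision and recall over the retrieved set — can be sketched in a few lines. The feature vectors and image names here are illustrative toys, not the GLCM/Tamura features of the FOCS framework:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, database, k):
    """Rank database images (name -> feature vector) by Euclidean
    distance to the query vector; return the k nearest names."""
    ranked = sorted(database, key=lambda name: euclidean(query, database[name]))
    return ranked[:k]

def precision_recall(retrieved, relevant):
    """Standard retrieval metrics over a retrieved list and a ground-truth
    relevant set."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)
```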
Kurtz, Camille; Beaulieu, Christopher F; Napel, Sandy; Rubin, Daniel L
2014-06-01
Computer-assisted image retrieval applications could assist radiologist interpretations by identifying similar images in large archives as a means to providing decision support. However, the semantic gap between low-level image features and their high level semantics may impair system performance. Indeed, it can be challenging to comprehensively characterize the images using low-level imaging features to fully capture the visual appearance of diseases on images, and recently the use of semantic terms has been advocated to provide semantic descriptions of the visual contents of images. However, most of the existing image retrieval strategies do not consider the intrinsic properties of these terms during the comparison of the images beyond treating them as simple binary (presence/absence) features. We propose a new framework that includes semantic features in images and that enables retrieval of similar images in large databases based on their semantic relations. It is based on two main steps: (1) annotation of the images with semantic terms extracted from an ontology, and (2) evaluation of the similarity of image pairs by computing the similarity between the terms using the Hierarchical Semantic-Based Distance (HSBD) coupled to an ontological measure. The combination of these two steps provides a means of capturing the semantic correlations among the terms used to characterize the images that can be considered as a potential solution to deal with the semantic gap problem. We validate this approach in the context of the retrieval and the classification of 2D regions of interest (ROIs) extracted from computed tomographic (CT) images of the liver. Under this framework, retrieval accuracy of more than 0.96 was obtained on a 30-image dataset using the Normalized Discounted Cumulative Gain (NDCG) index that is a standard technique used to measure the effectiveness of information retrieval algorithms when a separate reference standard is available. Classification
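The NDCG index used above to score retrieval effectiveness is straightforward to compute; a minimal sketch of the standard definition (gain discounted by log rank, normalized by the ideal ordering) follows:

```python
import math

def ndcg(relevances):
    """Normalized Discounted Cumulative Gain for a ranked list of
    graded relevance scores (higher is better).

    DCG = sum_i rel_i / log2(i + 2); NDCG divides by the DCG of the
    ideal (descending) ordering, so a perfect ranking scores 1.0."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```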
Mahdizadeh, Mousa; Heydari, Abbas; Moonaghi, Hossien Karimi
2015-01-01
Introduction: So far, various models of interdisciplinary collaboration in clinical nursing have been presented; however, a comprehensive model is not yet available. The purpose of this study is to review the evidence that presented a model or framework, with a qualitative approach, of interdisciplinary collaboration in clinical nursing. Methods: All articles and theses published from 1990 to 10 June 2014, in English or Persian, that presented a model or framework of clinical collaboration were searched using the databases ProQuest, Scopus, PubMed, Science Direct, and the Iranian databases SID, Magiran, and Iranmedex. In this review, for published articles and theses, keywords consistent with MeSH, such as nurse-physician relations, care team, collaboration, interdisciplinary relations, and their Persian equivalents, were used. Results: In this study, contexts, processes, and outcomes of interdisciplinary collaboration were extracted as findings. One major component affecting collaboration that most of the models emphasized was the background of collaboration. Most studies suggested that the outcomes of collaboration were improved care, doctors' and nurses' satisfaction, cost control, reduced clinical errors, and improved patient safety. Conclusion: Models and frameworks had different structures, backgrounds, and conditions, but the outcomes were similar. Organizational structure, culture, and social factors are important aspects of clinical collaboration. To improve the quality and effectiveness of clinical collaboration, it is therefore necessary that these factors be considered. PMID:26153158
A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space
Directory of Open Access Journals (Sweden)
Jinjun Li
2011-01-01
Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate disparity map for stereo images. Local multi-model monogenic image features include local orientation and instantaneous phase of the gray monogenic signal, local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of analytic signal to gray level image using Dirac operator and Laplace equation, consists of local amplitude, local orientation, and instantaneous phase of 2D image signal. The color monogenic signal is the extension of monogenic signal to color image based on Clifford algebras. The local color phase can be estimated by computing geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experiment results on the synthetic and natural stereo images show the performance of the proposed approach.
Directory of Open Access Journals (Sweden)
David N Quan
Full Text Available Bacterial cell-cell communication is mediated by small signaling molecules known as autoinducers. Importantly, autoinducer-2 (AI-2) is synthesized via the enzyme LuxS in over 80 species, some of which mediate their pathogenicity by recognizing and transducing this signal in a cell density dependent manner. AI-2 mediated phenotypes are not well understood however, as the means for signal transduction appears varied among species, while AI-2 synthesis processes appear conserved. Approaches to reveal the recognition pathways of AI-2 will shed light on pathogenicity as we believe recognition of the signal is likely as important, if not more, than the signal synthesis. LMNAST (Local Modular Network Alignment Similarity Tool) uses a local similarity search heuristic to study gene order, generating homology hits for the genomic arrangement of a query gene sequence. We develop and apply this tool for the E. coli lac and LuxS regulated (Lsr) systems. Lsr is of great interest as it mediates AI-2 uptake and processing. Both test searches generated results that were subsequently analyzed through a number of different lenses, each with its own level of granularity, from a binary phylogenetic representation down to trackback plots that preserve genomic organizational information. Through a survey of these results, we demonstrate the identification of orthologs, paralogs, hitchhiking genes, gene loss, gene rearrangement within an operon context, and also horizontal gene transfer (HGT). We found a variety of operon structures that are consistent with our hypothesis that the signal can be perceived and transduced by homologous protein complexes, while their regulation may be key to defining subsequent phenotypic behavior.
Model-observer similarity, error modeling and social learning in rhesus macaques.
Monfardini, Elisabetta; Hadj-Bouziane, Fadila; Meunier, Martine
2014-01-01
Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.
Hayat, Tasawar; Ijaz Khan, Muhammad; Imtiaz, Maria; Alsaedi, Ahmed; Waqas, Muhammad
2016-10-01
A simple model of chemical reactions for two-dimensional ferrofluid flows is constructed. The impact of magnetic dipole and mixed convection is further analyzed. Flow is caused by linear stretching of the sheet. A similarity transformation is adopted to convert the partial differential equations into ordinary differential equations, which are then solved by Euler's explicit method. The characteristics of sundry parameters on the velocity, temperature, and concentration fields are graphically elaborated. It is noted that the impact of the magneto-thermomechanical interaction is to slow down the fluid motion. The skin friction coefficient is enhanced, affecting the rate of heat transfer. For higher values of the ferrohydrodynamic interaction parameter, the velocity shows decreasing behavior. Further, the effect of the Prandtl number on temperature is opposite to that of thermal radiation and the ferrohydrodynamic interaction.
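The general recipe above — a similarity transformation reduces the boundary-layer PDEs to ODEs, which are then integrated by the explicit Euler method — can be illustrated on the classic Blasius similarity equation f''' = -0.5 f f'' rather than the paper's ferrofluid system (the ferrofluid equations and parameters are not reproduced here; this is only a sketch of the numerical approach):

```python
def blasius_euler(fpp0=0.332057, h=1e-3, eta_max=10.0):
    """Integrate the Blasius similarity ODE f''' = -0.5*f*f'' with the
    explicit Euler method from f(0) = f'(0) = 0, f''(0) = fpp0.

    Returns f'(eta_max), which should approach the free-stream value 1
    when fpp0 is the correct wall shear (about 0.332)."""
    f, fp, fpp = 0.0, 0.0, fpp0
    for _ in range(int(eta_max / h)):
        # Tuple assignment evaluates the whole right-hand side with the
        # old values, i.e. a genuine explicit Euler step.
        f, fp, fpp = (f + h * fp,
                      fp + h * fpp,
                      fpp + h * (-0.5 * f * fpp))
    return fp
```

In practice the wall value f''(0) is found by shooting; here it is supplied as a known constant to keep the sketch short.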
Directory of Open Access Journals (Sweden)
S. Arivazhagan
2014-03-01
Full Text Available Biometric security artifacts for establishing the identity of a person with high confidence have evoked enormous interest in security and access control applications for the past few years. Biometric systems based solely on unimodal biometrics often suffer from problems such as noise, intra-class variations and spoof attacks. This paper presents a novel multimodal biometric recognition system by integrating three biometric traits namely iris, fingerprint and face using weighted similarity approach. In this work, the multi-resolution features are extracted independently from query images using curvelet and ridgelet transforms, and are then compared to the enrolled templates stored in the database containing features of each biometric trait. The final decision is made by normalizing the feature vectors, assigning different weights to the modalities and fusing the computed scores using score combination techniques. This system is tested with the public unimodal databases such as CASIA-Iris-V3-Interval, FVC2004, ORL and self-built multimodal databases. Experimental results show that the designed system achieves an excellent recognition rate of 98.75 per cent and 100 per cent for the public and self-built databases respectively, and provides far higher security than unimodal biometric systems. Defence Science Journal, 2014, 64(2), pp. 106-114. DOI: http://dx.doi.org/10.14429/dsj.64.3469
Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model
Directory of Open Access Journals (Sweden)
Salha M. Alzahrani
2015-07-01
Full Text Available Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties when using existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on the part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and suspicious texts of short lengths, which implement the semantic relatedness between words as a membership function to a fuzzy set. In order to minimize the number of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7%) handmade paraphrases, 54,815 (55.4%) artificial plagiarism cases, and 32,578 (32.9%) plagiarism-free cases. We conduct extensive experimental verifications, including the study of the effects of different segmentation schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. The statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of fuzzy semantic-based model to detect plagiarism cases beyond the literal plagiarism. Additionally, the analysis of variance (ANOVA) statistical test shows the effectiveness of different segmentation schemes used with the proposed approach.
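The core idea — treating word-pair semantic relatedness as membership in a fuzzy set and thresholding an aggregated score with a permission threshold — can be sketched as below. The trapezoidal membership function, the mean aggregation, and all threshold values are illustrative stand-ins, not the paper's WordNet-based measures or learned thresholds:

```python
def similar_membership(word_sim, low=0.2, high=0.8):
    """Membership of a word-pair similarity score in the fuzzy set
    'semantically similar': 0 below `low`, 1 above `high`, linear
    in between (a simple trapezoidal shape; bounds are illustrative)."""
    if word_sim <= low:
        return 0.0
    if word_sim >= high:
        return 1.0
    return (word_sim - low) / (high - low)

def sentence_similarity(pair_sims):
    """Aggregate word-pair memberships into one fuzzy score for a
    source/suspicious sentence pair (mean used here for simplicity)."""
    ms = [similar_membership(s) for s in pair_sims]
    return sum(ms) / len(ms)

def is_plagiarism_candidate(pair_sims, permission=0.65):
    """Flag a sentence pair when the fuzzy score clears the
    (hypothetical) permission threshold."""
    return sentence_similarity(pair_sims) >= permission
```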
van Zalk, Maarten; Denissen, Jaap
2015-07-01
In the current studies, the authors examined how peers influence friendship choices through individuals' perceptions of similarity between their own and others' Big Five traits. Self-reported and peer-reported data were gathered from 3 independent samples using longitudinal round-robin designs. Peers' ratings of how similar 2 persons appeared in extraversion and agreeableness predicted friendship formation likelihood between these 2 persons in all samples. This association was mediated by perceived similarity. Furthermore, another mediation effect was found for similarity in interaction style: Persons who were viewed by peers as having similar extraversion and agreeableness levels became more similar in interaction styles. Thus, the current studies indicate that extraversion and agreeableness influence the emergence of social relationships through intrapersonal perceptions of similarity and interpersonal social interactions. We encourage researchers to look at specific similarity effects that influence interpersonal and intrapersonal processes to understand how relationships are formed. (c) 2015 APA, all rights reserved.
Achieving Full Dynamic Similarity with Small-Scale Wind Turbine Models
Miller, Mark; Kiefer, Janik; Westergaard, Carsten; Hultmark, Marcus
2016-11-01
Power and thrust data as a function of Reynolds number and Tip Speed Ratio are presented at conditions matching those of a full scale turbine. Such data has traditionally been very difficult to acquire due to the large length-scales of wind turbines, and the limited size of conventional wind tunnels. Ongoing work at Princeton University employs a novel, high-pressure wind tunnel (up to 220 atmospheres of static pressure) which uses air as the working fluid. This facility allows adjustment of the Reynolds number (via the fluid density) independent of the Tip Speed Ratio, up to a Reynolds number (based on chord and velocity at the tip) of over 3 million. Achieving dynamic similarity using this approach implies very high power and thrust loading, which results in mechanical loads greater than 200 times those experienced by a similarly sized model in a conventional wind tunnel. In order to accurately report the power coefficients, a series of tests were carried out on a specially designed model turbine drive-train using an external testing bench to replicate tunnel loading. An accurate map of the drive-train performance at various operating conditions was determined. Finally, subsequent corrections to the power coefficient are discussed in detail. Supported by: National Science Foundation Grant CBET-1435254 (program director Gregory Rorrer).
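The key scaling idea above is that at fixed temperature an ideal gas's density grows linearly with static pressure, so pressurizing the tunnel raises the chord-based Reynolds number Re = ρVc/μ by the same factor while leaving velocity, and hence the tip speed ratio, untouched. A back-of-envelope sketch (velocity, chord, and viscosity values are illustrative, not the facility's actual parameters):

```python
def reynolds(rho, velocity, chord, mu=1.8e-5):
    """Chord-based Reynolds number Re = rho * V * c / mu
    (mu is a typical dynamic viscosity of air in Pa*s)."""
    return rho * velocity * chord / mu

# Ideal-gas density scales linearly with static pressure at fixed
# temperature, so 220 atm gives ~220x the sea-level density.
rho_1atm = 1.2  # kg/m^3, approximate sea-level air density
re_ambient = reynolds(rho_1atm, velocity=60.0, chord=0.05)
re_pressurized = reynolds(220 * rho_1atm, velocity=60.0, chord=0.05)
# Same tip speed (same tip speed ratio), ~220x the Reynolds number.
```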
Landau, Arie
2013-07-07
This paper presents a new method for calculating spectroscopic properties in the framework of response theory utilizing a sequence of similarity transformations (STs). The STs are performed using the coupled cluster (CC) and Fock-space coupled cluster operators. The linear and quadratic response functions of the new similarity transformed CC response (ST-CCR) method are derived. The poles of the linear response yield excitation-energy (EE) expressions identical to the ones in the similarity transformed equation-of-motion coupled cluster (STEOM-CC) approach. ST-CCR and STEOM-CC complement each other, in analogy to the complementarity of CC response (CCR) and equation-of-motion coupled cluster (EOM-CC). ST-CCR/STEOM-CC and CCR/EOM-CC yield size-extensive and size-intensive EEs, respectively. Other electronic properties, e.g., transition dipole strengths, are also size-extensive within ST-CCR, in contrast to STEOM-CC. Moreover, analysis suggests that in comparison with CCR, the ST-CCR expressions may be confined to a smaller subspace; however, the precise scope of the truncation can only be determined numerically. In addition, reformulation of the time-independent STEOM-CC using the same parameterization as in ST-CCR, as well as an efficient truncation scheme, is presented. The demonstrated convergence of the time-dependent and time-independent expressions displays the completeness of the presented formalism.
Content-Based Search on a Database of Geometric Models: Identifying Objects of Similar Shape
Energy Technology Data Exchange (ETDEWEB)
XAVIER, PATRICK G.; HENRY, TYSON R.; LAFARGE, ROBERT A.; MEIRANS, LILITA; RAY, LAWRENCE P.
2001-11-01
The Geometric Search Engine is a software system for storing and searching a database of geometric models. The database may be searched for modeled objects similar in shape to a target model supplied by the user. The database models are generally derived from CAD models, while the target model may be either a CAD model or a model generated from range data collected from a physical object. This document describes key generation, database layout, and search of the database.
A New Retrieval Model Based on TextTiling for Document Similarity Search
Institute of Scientific and Technical Information of China (English)
Xiao-Jun Wan; Yu-Xin Peng
2005-01-01
Document similarity search is to find documents similar to a given query document and return a ranked list of similar documents to users, which is widely used in many text and web systems, such as digital library, search engine, etc. Traditional retrieval models, including the Okapi's BM25 model and the Smart's vector space model with length normalization, could handle this problem to some extent by taking the query document as a long query. In practice, the Cosine measure is considered as the best model for document similarity search because of its good ability to measure similarity between two documents. In this paper, the quantitative performances of the above models are compared using experiments. Because the Cosine measure is not able to reflect the structural similarity between documents, a new retrieval model based on TextTiling is proposed in the paper. The proposed model takes into account the subtopic structures of documents. It first splits the documents into text segments with TextTiling and calculates the similarities for different pairs of text segments in the documents. Lastly the overall similarity between the documents is returned by combining the similarities of different pairs of text segments with optimal matching method. Experiments are performed and results show: 1) the popular retrieval models (the Okapi's BM25 model and the Smart's vector space model with length normalization) do not perform well for document similarity search; 2) the proposed model based on TextTiling is effective and outperforms other models, including the Cosine measure; 3) the methods for the three components in the proposed model are validated to be appropriately employed.
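The segment-level scheme above — score every pair of text segments, then combine the pairwise scores with an optimal one-to-one matching — can be sketched as follows. TextTiling itself is omitted (segments are given as inputs), term-frequency cosine stands in for the segment similarity, and the optimal matching is brute-forced over permutations, which is fine only for a handful of segments:

```python
import itertools
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def doc_similarity(segments_a, segments_b):
    """Overall document similarity via the best one-to-one matching
    of segments (exhaustive search; a Hungarian-algorithm solver
    would replace this for larger inputs)."""
    sims = [[cosine(Counter(x.split()), Counter(y.split()))
             for y in segments_b] for x in segments_a]
    n = min(len(segments_a), len(segments_b))
    best = max(sum(sims[i][perm[i]] for i in range(n))
               for perm in itertools.permutations(range(len(segments_b)), n))
    return best / n
```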
A Stabilized Scale-Similarity Model for Explicitly-Filtered LES
Edoh, Ayaboe; Karagozian, Ann; Sankaran, Venkateswaran
2016-11-01
Accurate simulation of the filtered-scales in LES is affected by the competing presence of modeling and discretization errors. In order to properly assess modeling techniques, it is imperative to minimize the influence of the numerical scheme. The current investigation considers the inclusion of resolved and un-resolved sub-filter stress ([U]RSFS) components in the governing equations, which is suggestive of a mixed-model approach. Taylor-series expansions of discrete filter stencils are used to inform proper scaling of a Scale-Similarity model representation of the RSFS term, and accompanying stabilization is provided by tunable and scale-discriminant filter-based artificial dissipation techniques that represent the URSFS term implicitly. Effective removal of numerical error from the LES solution is studied with respect to the 1D Burgers equation with synthetic turbulence, and extension to 3D Navier-Stokes system computations is motivated. Distribution A: Approved for public release, distribution unlimited. Supported by AFOSR (PMs: Drs. Chiping Li and Michael Kendra).
Grimm, Fabian A; Iwata, Yasuhiro; Sirenko, Oksana; Chappell, Grace A; Wright, Fred A; Reif, David M; Braisted, John; Gerhold, David L; Yeakley, Joanne M; Shepard, Peter; Seligmann, Bruce; Roy, Tim; Boogaard, Peter J; Ketelslegers, Hans B; Rohde, Arlean M; Rusyn, Ivan
2016-08-21
Comparative assessment of potential human health impacts is a critical step in evaluating both chemical alternatives and existing products on the market. Most alternatives assessments are conducted on a chemical-by-chemical basis and it is seldom acknowledged that humans are exposed to complex products, not individual substances. Indeed, substances of Unknown or Variable composition, Complex reaction products, and Biological materials (UVCBs) are ubiquitous in commerce yet they present a major challenge for registration and health assessments. Here, we present a comprehensive experimental and computational approach to categorize UVCBs according to global similarities in their bioactivity using a suite of in vitro models. We used petroleum substances, an important group of UVCBs which are grouped for regulatory approval and read-across primarily on physico-chemical properties and the manufacturing process, and only partially based on toxicity data, as a case study. We exposed induced pluripotent stem cell-derived cardiomyocytes and hepatocytes to DMSO-soluble extracts of 21 petroleum substances from five product groups. Concentration-response data from high-content imaging in cardiomyocytes and hepatocytes, as well as targeted high-throughput transcriptomic analysis of the hepatocytes, revealed distinct groups of petroleum substances. Data integration showed that bioactivity profiling affords clustering of petroleum substances in a manner similar to the manufacturing process-based categories. Moreover, we observed a high degree of correlation between bioactivity profiles and physico-chemical properties, as well as improved groupings when chemical and biological data were combined. Altogether, we demonstrate how novel in vitro screening approaches can be effectively utilized in combination with physico-chemical characteristics to group complex substances and enable read-across. This approach allows for rapid and scientifically-informed evaluation of health impacts of
Improved similarity criterion for seepage erosion using mesoscopic coupled PFC-CFD model
Institute of Scientific and Technical Information of China (English)
倪小东; 王媛; 陈珂; 赵帅龙
2015-01-01
Conventional model tests and centrifuge tests are frequently used to investigate seepage erosion. However, the centrifugal test method may not be efficient, according to the results of hydraulic conductivity tests and piping erosion tests. This work first discusses why seepage deformation in model tests may deviate from similarity. The similarity criterion for seepage deformation in porous media was then improved based on the extended Darcy-Brinkman-Forchheimer equation. Finally, a coupled particle flow code-computational fluid dynamics (PFC-CFD) model at the mesoscopic level was proposed to verify the derived similarity criterion. The proposed model exploits the discrete element method to simulate seepage erosion and satisfies the similarity criterion by adjusting particle size. The numerical simulations yielded results consistent with the prototype, indicating that a PFC-CFD model satisfying the improved similarity criterion can accurately reproduce the processes of seepage erosion at the mesoscopic level.
An Improved Modeling for Network Traffic Based on Alpha-Stable Self-similar Processes
Institute of Scientific and Technical Information of China (English)
GE Xiaohu; ZHU Guangxi; ZHU Yaoting
2003-01-01
This paper presents an improved model based on alpha-stable processes. First, it introduces the basics of self-similarity and explains why alpha-stable processes have been used for self-similar network traffic modeling. Second, it reviews the research in this field and analyzes the drawbacks of the S4 model, supported by mathematical proof and experimental confirmation. To make up for these drawbacks and to accurately describe the variety of heavy-tailed distributions, an improved network traffic model is proposed. Comparison of simulation data (from both the S4 model and the improved model) with actual data demonstrates the advantage of the improved model. Finally, the significance of the self-similar network traffic model is discussed together with future work.
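The abstract above does not reproduce the S4 model's details. As a minimal illustration of the underlying building block, the sketch below draws samples from an alpha-stable law with the classic Chambers-Mallows-Stuck transform; the function name and parameters are our own, not from the paper. Heavy tails appear for alpha < 2.

```python
import numpy as np

def stable_sample(alpha, beta, size, rng=None):
    """Draw samples from a standard alpha-stable law (alpha != 1)
    using the Chambers-Mallows-Stuck transform."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform phase
    w = rng.exponential(1.0, size)                 # unit-mean exponential
    zeta = beta * np.tan(np.pi * alpha / 2)
    b = np.arctan(zeta) / alpha
    s = (1 + zeta ** 2) ** (1 / (2 * alpha))
    return (s * np.sin(alpha * (u + b)) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * (u + b)) / w) ** ((1 - alpha) / alpha))

# e.g. a symmetric 1.5-stable sample, reproducible via a seed
x = stable_sample(1.5, 0.0, 5000, rng=42)
```

For alpha = 2 the law reduces to a Gaussian; smaller alpha gives progressively heavier tails, which is the property exploited in self-similar traffic models.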
A new k-epsilon model consistent with Monin-Obukhov similarity theory
DEFF Research Database (Denmark)
van der Laan, Paul; Kelly, Mark C.; Sørensen, Niels N.
2016-01-01
A new k-ε model is introduced that is consistent with Monin-Obukhov similarity theory (MOST). The proposed k-ε model is compared with another k-ε model that was developed in an attempt to maintain inlet profiles compatible with MOST. It is shown that the previous k-ε model is not consistent with ...
Wood, D.J.; Vlieg, J. de; Wagener, M.; Ritschel, T.
2012-01-01
Bioisosteres have been defined as structurally different molecules or substructures that can form comparable intermolecular interactions, and therefore, fragments that bind to similar protein structures exhibit a degree of bioisosterism. We present KRIPO (Key Representation of Interaction in POckets
A technical study and analysis on fuzzy similarity based models for text classification
Puri, Shalini; 10.5121/ijdkp.2012.2201
2012-01-01
In the current era of rapidly advancing technology, efficient and effective text document classification, which categorizes text documents into mutually exclusive categories, is a challenging and much-needed capability. Fuzzy similarity provides a way to measure the similarity of features among various documents. This paper gives a technical review of various fuzzy-similarity-based models; the models are discussed and compared to frame their use and necessity, and a tour of the fuzzy-similarity-related methodologies they build on is provided. It shows how text and web documents are categorized efficiently into different categories. Experimental results for these models are also discussed, and the technical comparisons among each model's parameters are shown in the form of a 3-D chart. Such a review provides a strong base for research on fuzzy-similarity-based text document categorization.
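As a minimal illustration of the fuzzy-similarity idea these models build on (the exact measures vary by model; this particular min/max form is an assumption, not taken from the review):

```python
def fuzzy_similarity(a, b):
    """Fuzzy (min/max) similarity between two term-weight dicts.

    Each dict maps a term to a membership weight in [0, 1];
    the score is sum(min)/sum(max), a fuzzy Jaccard index."""
    terms = set(a) | set(b)
    num = sum(min(a.get(t, 0.0), b.get(t, 0.0)) for t in terms)
    den = sum(max(a.get(t, 0.0), b.get(t, 0.0)) for t in terms)
    return num / den if den else 1.0

# hypothetical term-membership vectors for two short documents
doc1 = {"fuzzy": 0.9, "similarity": 0.8, "text": 0.5}
doc2 = {"fuzzy": 0.7, "similarity": 0.8, "web": 0.4}
score = fuzzy_similarity(doc1, doc2)
```

Identical documents score 1.0 and documents with no shared terms score 0.0, so the measure behaves like a graded set overlap.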
Layer Decomposition: An Effective Structure-based Approach for Scientific Workflow Similarity
Starlinger, Johannes; Cohen-Boulakia, Sarah; Khanna, Sanjeev; Davidson, Susan; Leser, Ulf
2014-01-01
Scientific workflows have become a valuable tool for large-scale data processing and analysis. This has led to the creation of specialized online repositories to facilitate workflow sharing and reuse. Over time, these repositories have grown to sizes that call for advanced methods to support workflow discovery, in particular for effective similarity search. Here, we present a novel and intuitive workflow similarity measure that is based on layer decomposition. Layer de...
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl...
Directory of Open Access Journals (Sweden)
Mikael Collan
2015-01-01
Full Text Available This paper introduces new closeness coefficients for fuzzy similarity based TOPSIS. The new closeness coefficients are based on multidistance or fuzzy entropy, are able to take into consideration the level of similarity between analysed criteria, and can be used to account for the consistency or homogeneity of, for example, performance measuring criteria. The commonly known OWA operator is used in the aggregation process over the fuzzy similarity values. A range of orness values is considered in creating a fuzzy overall ranking for each object, after which the fuzzy rankings are ordered to find a final linear ranking. The presented method is numerically applied to a research and development project selection problem and the effect of using two new closeness coefficients based on multidistance and fuzzy entropy is numerically illustrated.
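A sketch of the OWA aggregation step described above (the function and example values are hypothetical; only the operator itself is from the paper):

```python
def owa(values, weights):
    """Ordered Weighted Averaging: weights are applied to the values
    sorted in descending order, not to fixed argument positions."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to one
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))
```

The weight vector controls the "orness" of the aggregation: all weight on the first position recovers the maximum, all weight on the last recovers the minimum, and uniform weights recover the mean, which is how a range of orness values can produce a family of rankings.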
van Zalk, M.H.W.; Denissen, J.J.A.
2015-01-01
In the current studies, the authors examined how peers influence friendship choices through individuals' perceptions of similarity between their own and others' Big Five traits. Self-reported and peer-reported data were gathered from 3 independent samples using longitudinal round-robin designs. Peer
DEFF Research Database (Denmark)
Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero
In this paper we address the problem of visualizing the proportions and the similarities attached to a set of individuals. We represent this information using a rectangular map, i.e., a subdivision of a rectangle into rectangular portions so that each portion is associated with one individual, th...
My Understanding of the Main Similarities and Differences between the Three Translation Models
Institute of Scientific and Technical Information of China (English)
支志
2009-01-01
In this paper, the author argues that the three translation models have both similarities and differences: they are similar in that all of them address faithful versus free translation and the status of the reader, while they differ in that their focuses are quite different and their influence upon present translation theory and practice varies.
Numerical verification of similar Cam-clay model based on generalized potential theory
Institute of Scientific and Technical Information of China (English)
钟志辉; 杨光华; 傅旭东; 温勇; 张玉成
2014-01-01
From mathematical principles, the generalized potential theory can be employed to create constitutive models of geomaterials directly. The similar Cam-clay model, which is created based on the generalized potential theory, has fewer assumptions, a clearer mathematical basis, and better computational accuracy. Theoretically, it is more scientific than the traditional Cam-clay models. The particle flow code PFC3D was used to run numerical tests to verify the rationality and practicality of the similar Cam-clay model. The verification process was as follows: 1) creating the soil sample for the numerical test in PFC3D, and then simulating the conventional triaxial compression test, isotropic compression test, and isotropic unloading test in PFC3D; 2) determining the parameters of the similar Cam-clay model from the results of the above tests; 3) predicting the sample's behavior in triaxial tests under different stress paths with the similar Cam-clay model, and comparing the predictions with those of the Cam-clay model and the modified Cam-clay model. The analysis results show that the similar Cam-clay model has relatively high prediction accuracy, as well as good practical value.
Grice, Kliti; Melendez, Ines; Tulipani, Svenja
2015-04-01
WA Organic and Isotope Geochemistry Centre, The Institute for Geoscience Research, Department of Chemistry, Curtin University, GPO Box U1987 Perth, WA 6845, Australia Photic zone euxinia in ancient seas has proven significant for elucidating biogeochemical changes that occurred during three of the five Phanerozoic mass extinctions, viz. the Permian/Triassic [1], Triassic/Jurassic [2] and Late Givetian (Devonian) [3] events, including the conditions associated with exceptional fossil preservation [4,5]. The series of events preceding, during and following the Triassic/Jurassic event is remarkably similar to that reported for the Permian/Triassic extinction, the largest of the Phanerozoic Era. For the Late Givetian event, the first forests evolved and reef-building communities and associated fauna in tropical, marine settings were largely affected [6]. Sedimentary rocks on the margins of the Devonian reef slope in the Canning Basin, WA, contain novel biomarker, isotopic and palynological evidence for the existence of a persistently stratified water-column (comprising a freshwater lens overlying a more saline hypolimnion), with prevailing anoxia and PZE [7]. Also from the Canning Basin, the exceptional preservation of a suite of biomarkers in a Devonian invertebrate fossil within a carbonate concretion supports rapid encasement of the crustacean (identified by % of C27 steroids) enhanced by sulfate reducing bacteria under PZE conditions. PZE plays a critical role in fossil (including soft tissue) and biomarker preservation. In the same sample, the oldest occurrence of intact sterols shows that they have been preserved for ca. 380 Ma [5]. The exceptional preservation of this biomass is attributed to microbially induced carbonate encapsulation, preventing full decomposition and transformation, thus extending the record of sterol occurrences in the geosphere by 250 Ma. A suite of ca. 50 diagenetic transformation products of sterols is also reported, showing the unique
Oliveri, Maria E.; Ercikan, Kadriye
2011-01-01
In this study, we examine the degree of construct comparability and possible sources of incomparability of the English and French versions of the Programme for International Student Assessment (PISA) 2003 problem-solving measure administered in Canada. Several approaches were used to examine construct comparability at the test- (examination of…
Are Approaches to Learning in Kindergarten Associated with Academic and Social Competence Similarly?
Razza, Rachel A.; Martin, Anne; Brooks-Gunn, Jeanne
2015-01-01
Background: Approaches to learning (ATL) is a key domain of school readiness with important implications for children's academic trajectories. Interestingly, however, the impact of early ATL on children's social competence has not been examined. Objective: This study examines associations between children's ATL at age 5 and academic achievement…
Training of Tonal Similarity Ratings in Non-Musicians: A “Rapid Learning” Approach
Oechslin, Mathias S.; Läge, Damian; Vitouch, Oliver
2012-01-01
Although cognitive music psychology has a long tradition of expert–novice comparisons, experimental training studies are rare. Studies on the learning progress of trained novices in hearing harmonic relationships are still largely lacking. This paper presents a simple training concept using the example of tone/triad similarity ratings, demonstrating the gradual progress of non-musicians compared to musical experts: In a feedback-based “rapid learning” paradigm, participants had to decide for ...
A Fast Method for Measuring the Similarity between a 3D Model and a 3D Point Cloud
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point-cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. To avoid the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that can only measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
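Under the definitions above, DistMC and SimMC can be sketched as follows (a hedged reconstruction; the sampling scheme, weight choice and function names are ours, not the authors'):

```python
import numpy as np
from scipy.spatial import cKDTree

def dist_mc(model_samples, cloud, weights=None):
    """Weighted mean of nearest-neighbour distances from points
    sampled on the model surface to the point cloud (DistMC)."""
    d, _ = cKDTree(cloud).query(model_samples)  # nearest cloud point per sample
    if weights is None:
        weights = np.full(len(model_samples), 1.0 / len(model_samples))
    return float(np.dot(weights, d))

def sim_mc(surface_area, model_samples, cloud, eps=1e-12):
    """SimMC as the ratio of model surface area to DistMC
    (eps guards against division by zero for a perfect match)."""
    return surface_area / (dist_mc(model_samples, cloud) + eps)
```

The KD-tree makes each nearest-neighbour query logarithmic in the cloud size, which is where the speed advantage over cloud-to-model distances (DistCM) comes from.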
Energy Technology Data Exchange (ETDEWEB)
Hooper, Sean D.; Anderson, Iain J; Pati, Amrita; Dalevi, Daniel; Mavromatis, Konstantinos; Kyrpides, Nikos C
2009-01-01
In order to simplify and meaningfully categorize large sets of protein sequence data, it is commonplace to cluster proteins based on the similarity of those sequences. However, it quickly becomes clear that the sequence flexibility allowed for a given protein varies significantly among different protein families. The degree to which sequences are conserved not only differs for each protein family, but also is affected by the phylogenetic divergence of the source organisms. Clustering techniques that use similarity thresholds for protein families do not always allow for these variations and thus cannot be confidently used for applications such as automated annotation and phylogenetic profiling. In this work, we applied a spectral bipartitioning technique to all proteins from 53 archaeal genomes. Comparisons between different taxonomic levels allowed us to study the effects of phylogenetic distances on cluster structure. Likewise, by associating functional annotations and phenotypic metadata with each protein, we could compare our protein similarity clusters with both protein function and associated phenotype. Our clusters can be analyzed graphically and interactively online.
Applying Statistical Models and Parametric Distance Measures for Music Similarity Search
Lukashevich, Hanna; Dittmar, Christian; Bastuck, Christoph
Automatically deriving similarity relations between music pieces is an inherent field of music information retrieval research. Due to the nearly unrestricted amount of musical data, real-world similarity search algorithms have to be highly efficient and scalable. A possible solution is to represent each music excerpt with a statistical model (e.g., a Gaussian mixture model) and thus reduce the computational costs by applying parametric distance measures between the models. In this paper we discuss combinations of different parametric modelling techniques and distance measures and weigh the benefits of each against the others.
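For single Gaussians one common parametric distance has a closed form; for Gaussian mixtures it is usually approximated. As an illustrative sketch (not the paper's specific measure), the Kullback-Leibler divergence between two multivariate Gaussians:

```python
import numpy as np

def kl_gaussian(m0, S0, m1, S1):
    """Closed-form KL divergence KL(N0 || N1) between two multivariate
    Gaussians, a common parametric distance between statistical models."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0)        # covariance mismatch
                  + diff @ S1_inv @ diff       # mean separation
                  - k                          # dimensionality offset
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

Since KL divergence is asymmetric, similarity search systems often symmetrize it, e.g. as KL(N0||N1) + KL(N1||N0), before using it as a distance.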
Radev, Dimitar; Lokshina, Izabella
2010-11-01
The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
A Control Chart Approach for Representing and Mining Data Streams with Shape Based Similarity
Energy Technology Data Exchange (ETDEWEB)
Omitaomu, Olufemi A [ORNL
2014-01-01
The mining of data streams for online condition monitoring is a challenging task in several domains, including the (electric) power grid, intelligent manufacturing, and consumer science. Consider a power grid application in which thousands of sensors, called phasor measurement units, are deployed on the power grid network to continuously collect streams of digital data for real-time situational awareness and system management. Depending on design, each sensor may stream between ten and sixty data samples per second. The myriad of sensory data captured could convey deeper insights about sequences of events in real time, before major damage is done. However, the timely processing and analysis of these high-velocity and high-volume data streams is a challenge. Hence, a new data processing and transformation approach, based on the concept of control charts, for representing sequences of data streams from sensors is proposed. In addition, an application of the proposed approach to enhancing data mining tasks such as clustering, using real-world power grid data streams, is presented. The results indicate that the proposed approach is very efficient for data stream storage and manipulation.
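A minimal sketch of the control-chart idea for encoding a stream (the three-symbol encoding and parameter names are illustrative assumptions, not the paper's exact transformation):

```python
import numpy as np

def control_chart_encode(stream, baseline, k=3.0):
    """Encode a data stream against control-chart limits derived from
    a baseline window: -1 below the lower control limit (LCL),
    0 in control, +1 above the upper control limit (UCL)."""
    mu, sigma = np.mean(baseline), np.std(baseline)
    ucl, lcl = mu + k * sigma, mu - k * sigma
    return np.where(stream > ucl, 1, np.where(stream < lcl, -1, 0))
```

Reducing raw samples to a small symbol alphabet like this compresses the stream drastically, while shape-based similarity (e.g. for clustering) can still be computed on the symbol sequences.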
Self-similar approach to the explosion of droplets by a high energy laser beam
Energy Technology Data Exchange (ETDEWEB)
Chitanvis, S.M.
1987-09-25
We have constructed a model in which a small droplet is exploded by the absorption of energy from a high energy laser beam. The beam flux is so high that we assume the formation of a plasma. We have a single-fluid model of a plasma droplet interacting with laser radiation. Self-similarity is invoked to reduce the spherically symmetric problem involving hydrodynamics and Maxwell's equations to quadrature. We show analytically that our model reproduces in a qualitative manner certain features observed experimentally by Eickmans et al.
Model Construct Based Enterprise Model Architecture and Its Modeling Approach
Institute of Scientific and Technical Information of China (English)
(no author listed)
2002-01-01
In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study set in one-kind-product machinery manufacturing enterprises is presented. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.
Nava, Jaime
2015-01-01
This book demonstrates how to describe and analyze a system's behavior and extract the desired prediction and control algorithms from this analysis. A typical prediction is based on observing similar situations in the past, knowing the outcomes of these past situations, and expecting that the future outcome of the current situation will be similar to these past observed outcomes. In mathematical terms, similarity corresponds to symmetry, and similarity of outcomes to invariance. This book shows how symmetries can be used in all classes of algorithmic problems of sciences and engineering: from analysis to prediction to control. Applications cover chemistry, geosciences, intelligent control, neural networks, quantum physics, and thermal physics. Specifically, it is shown how the approach based on symmetry and similarity can be used in the analysis of real-life systems, in the algorithms of prediction, and in the algorithms of control.
Cabell, R.; Delle Monache, L.; Alessandrini, S.; Rodriguez, L.
2015-12-01
Climate-based studies require large amounts of data in order to produce accurate and reliable results. Many of these studies have used 30-plus year data sets in order to produce stable and high-quality results, and as a result many such data sets are available, generally in the form of global reanalyses. While the analysis of these data leads to high-fidelity results, processing them can be very computationally expensive. This computational burden prevents the utilization of these data sets for certain applications, e.g., when rapid response is needed in crisis management and disaster planning scenarios resulting from a release of toxic material into the atmosphere. We have developed a methodology to reduce large climate datasets to more manageable sizes while retaining statistically similar results when used to produce ensembles of possible outcomes. We do this by employing a Self-Organizing Map (SOM) algorithm to analyze general patterns of meteorological fields over a regional domain of interest, producing a small set of "typical days" with which to generate the model ensemble. The SOM algorithm takes as input a set of vectors and generates a 2D map of representative vectors deemed most similar to the input set and to each other. Input predictors are selected that are correlated with the model output, which in our case is an Atmospheric Transport and Dispersion (T&D) model that is highly dependent on surface winds and boundary layer depth. To choose a subset of "typical days," each input day is assigned to its closest SOM map node vector and then ranked by distance. Each node vector is treated as a distribution, and days are sampled from it by percentile. Using a 30-node SOM, with sampling every 20th percentile, we have been able to reduce 30 years of Climate Forecast System Reanalysis (CFSR) data for the month of October to 150 "typical days." To estimate the skill of this approach, the "Measure of Effectiveness" (MOE) metric is used to compare area and overlap
Ambroise, Jérôme; Robert, Annie; Macq, Benoit; Gala, Jean-Luc
2012-01-06
An important challenge in systems biology is the inference of biological networks from postgenomic data. Among these biological networks, a gene transcriptional regulatory network focuses on interactions existing between transcription factors (TFs) and their corresponding target genes. A large number of reverse engineering algorithms have been proposed to infer such networks from gene expression profiles, but most current methods have relatively low predictive performance. In this paper, we introduce the novel TNIFSED method (Transcriptional Network Inference from Functional Similarity and Expression Data), which infers a transcriptional network from the integration of correlations and partial correlations of gene expression profiles and gene functional similarities through a supervised classifier. In the current work, TNIFSED was applied to predict the transcriptional networks in Escherichia coli and in Saccharomyces cerevisiae, using datasets of 445 and 170 Affymetrix arrays, respectively. Using the area under the curve of the receiver operating characteristics and the F-measure as indicators, we showed the predictive performance of TNIFSED to be better than unsupervised state-of-the-art methods. TNIFSED performed slightly worse than the supervised SIRENE algorithm for target gene identification for TFs with a wide range of already identified target genes, but better for TFs with only a few identified target genes. Our results indicate that TNIFSED is complementary to the SIRENE algorithm, and particularly suitable to discover target genes of "orphan" TFs.
On Measuring Process Model Similarity Based on High-Level Change Operations
Li, C.; Reichert, M.U.; Wombacher, A.
2008-01-01
For various applications there is the need to compare the similarity between two process models. For example, given the as-is and to-be models of a particular business process, we would like to know how much they differ from each other and how we can efficiently transform the as-is to the to-be model.
On Measuring Process Model Similarity based on High-level Change Operations
Li, C.; Reichert, M.U.; Wombacher, A.
2007-01-01
For various applications there is the need to compare the similarity between two process models. For example, given the as-is and to-be models of a particular business process, we would like to know how much they differ from each other and how we can efficiently transform the as-is to the to-be model.
Directory of Open Access Journals (Sweden)
Jun Li
2014-08-01
Full Text Available Paper similarity detection depends on grammatical and semantic analysis, word segmentation, similarity detection, document summarization and other technologies, involving multiple disciplines. However, the existing detection models have several problems: incomplete segmentation preprocessing specifications, sensitivity to changes in semantic order, weak evaluation of near-synonyms, and difficulties in paper backtracking. This paper therefore presents a two-step segmentation model based on special identifiers and the Shapley value, which improves segmentation accuracy. For similarity comparison, a distance-matrix model with a row-column order penalty factor is proposed, which recognizes new words through search engine indices. This model integrates the characteristics of vector detection, Hamming distance and the longest common substring, and handles near-synonyms, word deletion and changes in word order by redefining the distance matrix and adding ordinal measures, making sentence similarity detection in terms of semantics and backbone word segmentation more effective. Compared with traditional paper similarity retrieval, the present method has advantages in word segmentation accuracy, low computation, reliability and efficiency, which is of great academic significance for word segmentation, similarity detection and document summarization.
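The distance-matrix model itself is not reproduced in the abstract. As a simplified stand-in, the following sketch combines word overlap with an order factor based on the longest common subsequence, so that word-order changes lower the score; the names and the exact formula are our assumptions, not the paper's:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    m = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            m[i][j] = m[i-1][j-1] + 1 if x == y else max(m[i-1][j], m[i][j-1])
    return m[len(a)][len(b)]

def sentence_similarity(s1, s2):
    """Word-overlap (Jaccard) score scaled by an order factor, so
    reordered sentences score lower than identically ordered ones."""
    t1, t2 = s1.split(), s2.split()
    overlap = len(set(t1) & set(t2)) / len(set(t1) | set(t2))
    order = lcs_len(t1, t2) / max(len(t1), len(t2))
    return overlap * order
```

An identical sentence scores 1.0, while a fully reversed sentence keeps full overlap but is penalized by the order factor, which is the behavior the row-column order penalty is meant to capture.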
DEFF Research Database (Denmark)
van Opbroek, Annegreet; Ikram, M. Arfan; Vernooij, Meike W.;
2013-01-01
Many successful methods for biomedical image segmentation are based on supervised learning, where a segmentation algorithm is trained on manually labeled training data. For supervised-learning algorithms to perform well, this training data has to be representative of the target data. In practice, however, due to differences between scanners such representative training data is often not available. We therefore present a segmentation algorithm in which labeled training data does not necessarily need to be representative of the target data, which allows for the use of training data from ... -tissue segmentation with training and target data from four substantially different studies, our method improved mean classification errors by up to 25% compared to common supervised-learning approaches. © 2013 Springer International Publishing.
Stepinski, T. F.; Netzel, P.; Jasiewicz, J.
2014-12-01
We have developed a novel method for the classification and search of climate over the global land surface excluding Antarctica. Our method classifies climate on the basis of time series segmentation and clustering. We use the WorldClim 30 arc sec. (approx. 1 km) resolution grid data, which is based on 50 years of climatic observations. Each cell in the grid is assigned a 12-month series consisting of 50-year monthly averages of mean, maximum, and minimum temperatures as well as total precipitation. The presented method introduces several innovations in comparison with existing data-driven methods of world climate classification. First, it uses only climatic rather than bioclimatic data. Second, it employs an object-oriented methodology: the grid is first segmented before climatic segments are classified. Third, and most importantly, the similarity between the climates of two given cells is computed using the dynamic time warping (DTW) measure instead of the Euclidean distance. DTW is known to be superior to the Euclidean distance for time series, but has not been utilized before in the classification of global climate. To account for the computational expense of DTW, we use the highly efficient GeoPAT software (http://sil.uc.edu/gitlist/) that, in the first step, segments the grid into local regions of uniform climate. In the second step, the segments are classified. We also introduce climate search, a GeoWeb-based method for interactive presentation of global climate information in the form of query-and-retrieval. A user selects a geographical location and the system returns a global map indicating the level of similarity between local climates and the climate at the selected location. The results of the search for the location "University of Cincinnati, Main Campus" are presented on the attached map. We have compared the results of our method to the Koeppen classification scheme
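The DTW measure used above can be sketched in a few lines (the textbook dynamic-programming form, with absolute difference as the local cost; this is not GeoPAT's implementation):

```python
from math import inf

def dtw(a, b):
    """Dynamic time warping distance between two sequences.

    D[i][j] holds the cost of the best alignment of a[:i] with b[:j];
    each step may advance either sequence or both (the warping)."""
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match step
    return D[n][m]
```

Unlike the Euclidean distance, DTW can align two climate series whose seasonal features are shifted or stretched in time, which is why it suits 12-month climatic profiles.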
An Exactly Soluble Hierarchical Clustering Model Inverse Cascades, Self-Similarity, and Scaling
Gabrielov, A; Turcotte, D L
1999-01-01
We show how clustering as a general hierarchical dynamical process proceeds via a sequence of inverse cascades to produce self-similar scaling, as an intermediate asymptotic, which then truncates at the largest spatial scales. We show how this model can provide a general explanation for the behavior of several models that have been described as ``self-organized critical,'' including forest-fire, sandpile, and slider-block models.
Leshinskaya, Anna; Contreras, Juan Manuel; Caramazza, Alfonso; Mitchell, Jason P
2017-01-01
The present experiment identified neural regions that represent a class of concepts that are independent of perceptual or sensory attributes. During functional magnetic resonance imaging scanning, participants viewed names of social groups (e.g. Atheists, Evangelicals, and Economists) and performed a one-back similarity judgment according to 1 of 2 dimensions of belief attributes: political orientation (Liberal to Conservative) or spiritualism (Spiritualist to Materialist). By generalizing across a wide variety of social groups that possess these beliefs, these attribute concepts did not coincide with any specific sensory quality, allowing us to target conceptual, rather than perceptual, representations. Multi-voxel pattern searchlight analysis was used to identify regions in which activation patterns distinguished the 2 ends of both dimensions: Conservative from Liberal social groups when participants focused on the political orientation dimension, and spiritual from Materialist groups when participants focused on the spiritualism dimension. A cluster in right precuneus exhibited such a pattern, indicating that it carries information about belief-attribute concepts and forms part of semantic memory-perhaps a component particularly concerned with psychological traits. This region did not overlap with the theory of mind network, which engaged nearby, but distinct, parts of precuneus. These findings have implications for the neural organization of conceptual knowledge, especially the understanding of social groups. © The Author 2017. Published by Oxford University Press.
On two-layer models and the similarity functions for the PBL
Brown, R. A.
1982-01-01
An operational Planetary Boundary Layer model which employs similarity principles and two-layer patching to provide state-of-the-art parameterization for the PBL flow is used to study the popularly used similarity functions, A and B. The expected trends with stratification are shown. The effects of baroclinicity, secondary flow, humidity, latitude, surface roughness variation and choice of characteristic height scale are discussed.
Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model
Salha M. Alzahrani; Naomie Salim; Vasile Palade
2015-01-01
Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties when using existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on the part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and su...
Decoding levels of representation in reading: A representational similarity approach.
Fischer-Baum, Simon; Bruggemann, Dorothy; Gallego, Ivan Felipe; Li, Donald S P; Tamez, Emilio R
2017-05-01
Multiple levels of representation are involved in reading single words: visual representations of letter shape, orthographic representations of letter identity and order, phonological representations of the word's pronunciation, and semantic representations of its meaning. Previous lesion and neuroimaging studies have identified a network of regions recruited during word reading, including ventral occipital-temporal regions and the angular gyrus (AG). However, there is still debate about what information is being represented and processed in these regions. This study has two aims. The first is to help adjudicate between competing hypotheses concerning the role of ventral occipital cortex in reading. The second is to adjudicate between competing hypotheses concerning the role of the AG in reading. Participants read words in the scanner while performing a proper name detection task, and we used a multivariate pattern analysis technique for analyzing fMRI data, representational similarity analysis (RSA), to decode the type of information being represented in these regions based on computationally explicit theories. Distributed patterns of activation in the left ventral occipitotemporal cortex (lvOT) and the AG show evidence of some type of orthographic processing, while the right-hemisphere homologues of the vOT support visual, but not orthographic, information processing of letter strings. In addition, there is evidence of left-lateralized semantic processing in the lvOT and evidence of top-down feedback in the lvOT. Taken together, these results suggest an interactive activation theory of visual word processing in which both the lvOT and lAG are neural loci of an orthographic level of representation. Copyright © 2017 Elsevier Ltd. All rights reserved.
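The core of the RSA technique mentioned above, comparing condition-by-condition dissimilarity structure rather than raw activations, can be sketched as follows. This is a generic illustration, not the study's pipeline: the Pearson-based dissimilarity, function names, and data shapes are assumptions.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix over conditions:
    1 - Pearson correlation between rows (condition x voxel patterns)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# usage: a noisy copy of a pattern set should yield a high RSA score
rng = np.random.default_rng(0)
neural = rng.normal(size=(6, 50))                  # 6 conditions x 50 voxels
noisy = neural + rng.normal(scale=0.1, size=(6, 50))
score = rsa_score(rdm(neural), rdm(noisy))
```

In a searchlight analysis this comparison is repeated for the voxels inside a small sphere centred on each location, and the scores form a brain map.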
Hydraulic Modeling of Lock Approaches
2016-08-01
cation was that the guidewall design changed from a solid wall to one on pilings in which water was allowed to flow through and/or under the wall ... develops innovative solutions in civil and military engineering, geospatial sciences, water resources, and environmental sciences for the Army, the ... magnitudes and directions at lock approaches for open river conditions. The meshes were developed using the Surface-water Modeling System. The two
LP Approach to Statistical Modeling
Mukhopadhyay, Subhadeep; Parzen, Emanuel
2014-01-01
We present an approach to statistical data modeling and exploratory data analysis called "LP Statistical Data Science." It aims to generalize and unify traditional and novel statistical measures, methods, and exploratory tools. This article outlines fundamental concepts along with real-data examples to illustrate how the "LP Statistical Algorithm" can systematically tackle different varieties of data types, data patterns, and data structures under a coherent theoretical framework. A fundament...
LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.
Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl
2015-08-01
Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. For TV user prediction with new TV programs, the average
A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure
Directory of Open Access Journals (Sweden)
Yang Zhou
2016-01-01
Full Text Available Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant characteristics of the image, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is feasible. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT).
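The similarity measure described above, a Gaussian of the Euclidean distance, and the recruitment rule can be sketched minimally as follows. The threshold value, function names, and the recruitment criterion are illustrative assumptions, not the paper's exact formulation.

```python
import math

def similarity(feat_a, feat_b, sigma=1.0):
    """Gaussian of the Euclidean distance between two feature vectors;
    sigma controls the width of a place cell's firing field."""
    d2 = sum((a - b) ** 2 for a, b in zip(feat_a, feat_b))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def perceive_and_recruit(features, cells, threshold=0.6, sigma=1.0):
    """Recruit a new place cell when no existing cell fires above threshold."""
    rates = [similarity(features, c, sigma) for c in cells]
    if not rates or max(rates) < threshold:
        cells.append(list(features))     # new cell centred on current view
    return rates

# usage: two nearby views share a cell, a distant view recruits a new one
cells = []
perceive_and_recruit([0.0, 0.0], cells)
perceive_and_recruit([0.0, 0.1], cells)
perceive_and_recruit([5.0, 5.0], cells)
```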
Similar extrusion and mapping optimization of die cavity modeling for special-shaped products
Institute of Scientific and Technical Information of China (English)
QI Hong-yuan; WANG Shuang-xin; ZHU Heng-jun
2006-01-01
To address the modeling issues in the design and rapid processing of extrusion dies for special-shaped products, a Conformal Mapping function is determined, with the help of Conformal Mapping theory, by the given method of numerical trigonometric interpolation. Three-dimensional forming problems are transformed into two-dimensional ones, and a mathematical model of the die cavity surface is established based on different kinds of vertical curves, together with a mathematical model of plastic flow in the extrusion deformation of special-shaped products. Using the upper bound method, both the vertical curves of the die cavity and their parameters are optimized. Combining the optimized model with the latest NC technology, the NC program of the die cavity and its CAM can be realized. Taking the similar extrusion of square-shaped products with arc radius as an instance, both metal plastic similar extrusion and die cavity optimization are carried out.
Experimental Study of Dowel Bar Alternatives Based on Similarity Model Test
Directory of Open Access Journals (Sweden)
Chichun Hu
2017-01-01
Full Text Available In this study, a small-scale accelerated loading test based on similarity theory and the Accelerated Pavement Analyzer was developed to evaluate dowel bars with different materials and cross-sections. A jointed concrete specimen consisting of one dowel was designed as the scaled model for the test, and each specimen was subjected to 864 thousand loading cycles. Deflections between jointed slabs were measured with dial indicators, and strains of the dowel bars were monitored with strain gauges. The load transfer efficiency, differential deflection, and dowel-concrete bearing stress for each case were calculated from these measurements. The test results indicated that the effect of the dowel modulus on load transfer efficiency can be characterized by the similarity model test developed in the study. Moreover, the round steel dowel was found to perform similarly to the larger FRP dowel, and the elliptical dowel can be preferentially considered in practice.
A Bayesian Shrinkage Approach for AMMI Models.
da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio
2015-01-01
Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criterion for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible intervals for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, with more of the GEI pattern retained in the first two components, and the models selected were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and by leave-one-out cross-validation. The resulting model chosen by the posterior distribution of the singular values was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible intervals for the AMMI biplot plus the choice of AMMI model based on direct posterior
Numerical model of a non-steady atmospheric planetary boundary layer, based on similarity theory
DEFF Research Database (Denmark)
Zilitinkevich, S.S.; Fedorovich, E.E.; Shabalova, M.V.
1992-01-01
A numerical model of a non-stationary atmospheric planetary boundary layer (PBL) over a horizontally homogeneous flat surface is derived on the basis of similarity theory. The two most typical turbulence regimes are reproduced: one corresponding to a convectively growing PBL and another correspon...
Similarity Reduction and Integrability for the Nonlinear Wave Equations from EPM Model
Institute of Scientific and Technical Information of China (English)
YAN ZhenYa
2001-01-01
Four types of similarity reductions are obtained for the nonlinear wave equation arising in the elasto-plastic microstructure model by using both the direct method due to Clarkson and Kruskal and the improved direct method due to Lou. As a result, the nonlinear wave equation is shown not to be integrable.
Approaches to Modeling of Recrystallization
Directory of Open Access Journals (Sweden)
Håkan Hallberg
2011-10-01
Full Text Available Control of the material microstructure in terms of the grain size is a key component in tailoring material properties of metals and alloys and in creating functionally graded materials. To exert this control, reliable and efficient modeling and simulation of the recrystallization process whereby the grain size evolves is vital. The present contribution is a review paper, summarizing the current status of various approaches to modeling grain refinement due to recrystallization. The underlying mechanisms of recrystallization are briefly recollected and different simulation methods are discussed. Analytical and empirical models, continuum mechanical models and discrete methods as well as phase field, vertex and level set models of recrystallization will be considered. Such numerical methods have been reviewed previously, but with the present focus on recrystallization modeling and with a rapidly increasing amount of related publications, an updated review is called for. Advantages and disadvantages of the different methods are discussed in terms of applicability, underlying assumptions, physical relevance, implementation issues and computational efficiency.
Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira
2015-01-01
Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typical complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Incorrectly specifying the template outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound
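The cross-correlation template comparison at the heart of the approach described above can be sketched with a normalized sliding-window scan. This is a generic illustration under assumed names and a synthetic waveform, not the published Waveform Similarity Analysis code.

```python
import numpy as np

def waveform_similarity(signal, template):
    """At each lag, the cosine similarity between the mean-removed template
    and the mean-removed window of the signal (normalized cross-correlation)."""
    n = len(template)
    t = template - template.mean()
    t /= np.linalg.norm(t)
    scores = np.empty(len(signal) - n + 1)
    for lag in range(len(scores)):
        w = signal[lag:lag + n] - signal[lag:lag + n].mean()
        norm = np.linalg.norm(w)
        scores[lag] = np.dot(w, t) / norm if norm > 0 else 0.0
    return scores

def detect_events(scores, threshold=0.8):
    """Latencies where the similarity exceeds threshold at a local maximum."""
    return [i for i in range(1, len(scores) - 1)
            if scores[i] >= threshold
            and scores[i] >= scores[i - 1] and scores[i] >= scores[i + 1]]

# usage: recover the latency of a known waveform embedded in a quiet trace
template = np.sin(np.linspace(0.0, 2.0 * np.pi, 20))
signal = np.zeros(100)
signal[30:50] += 2.0 * template          # event at latency 30, doubled amplitude
scores = waveform_similarity(signal, template)
```

Because the window is mean-removed and normalized, the score is insensitive to amplitude, which is why event magnitude would be read off separately (e.g. from the window norm).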
Riccati-coupled similarity shock wave solutions for multispeed discrete Boltzmann models
Energy Technology Data Exchange (ETDEWEB)
Cornille, H. (Service de Physique Theorique, Gif-sur-Yvette (France)); Platkowski, T. (Warsaw Univ. (Poland))
1993-05-01
The authors study nonstandard shock wave similarity solutions for three multispeed discrete Boltzmann models: (1) the square 8υ_i model with speeds 1 and √2 with the x axis along one median, (2) the Cabannes cubic 14υ_i model with speeds 1 and √3 and the x axis perpendicular to one face, and (3) another 14υ_i model with speeds 1 and √2. These models have five independent densities and two nonlinear Riccati-coupled equations. The standard similarity shock waves, solutions of scalar Riccati equations, are monotonic, and the same behavior holds for the conservative macroscopic quantities. First, the exact similarity shock-wave solutions of the coupled Riccati equations are determined, and nonmonotonic behavior for one density, with a smaller effect for one conservative macroscopic quantity, is observed when a violation of microreversibility is allowed. Second, new results are obtained on the Whitham weak shock wave propagation. Third, the corresponding dynamical system is numerically solved, with microreversibility satisfied or not, and the analogous nonmonotonic behavior is observed. 9 refs., 2 figs., 1 tab.
Content-based similarity for 3D model retrieval and classification
Institute of Scientific and Technical Information of China (English)
Ke Lü; Ning He; Jian Xue
2009-01-01
With the rapid development of 3D digital shape information, content-based 3D model retrieval and classification has become an important research area. This paper presents a novel 3D model retrieval and classification algorithm. For feature representation, a method combining a distance histogram and moment invariants is proposed to improve retrieval performance. The major advantage of using a distance histogram is its invariance to the transforms of scaling, translation and rotation. Based on the premise that two similar objects should have high mutual information, the querying of 3D data should convey a great deal of information on the shape of the two objects, and so we propose a mutual information distance measurement to perform the similarity comparison of 3D objects. The proposed algorithm is tested with a 3D model retrieval and classification prototype, and the experimental evaluation demonstrates satisfactory retrieval results and classification accuracy.
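The invariance property claimed for the distance histogram is easy to illustrate: distances from the centroid are unchanged by rotation and translation, and normalizing by the mean distance removes uniform scale. The binning choices below are assumptions for the sketch, not the paper's parameters.

```python
import numpy as np

def distance_histogram(points, bins=16):
    """Histogram of point-to-centroid distances, normalized by the mean
    distance, so the descriptor is invariant to translation, rotation
    and uniform scaling of the point set."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    d /= d.mean()                                  # scale normalization
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 3.0), density=True)
    return hist

# usage: a rotated, scaled and translated copy yields the same descriptor
rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
pts2 = 2.5 * pts @ R.T + np.array([5.0, -3.0, 1.0])
```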
Scaling and interaction of self-similar modes in models of high Reynolds number wall turbulence
Sharma, A. S.; Moarref, R.; McKeon, B. J.
2017-03-01
Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations on the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined.
Image Denoising via Bandwise Adaptive Modeling and Regularization Exploiting Nonlocal Similarity.
Xiong, Ruiqin; Liu, Hangfan; Zhang, Xinfeng; Zhang, Jian; Ma, Siwei; Wu, Feng; Gao, Wen
2016-09-27
This paper proposes a new image denoising algorithm based on adaptive signal modeling and regularization. It improves the quality of images by regularizing each image patch using bandwise distribution modeling in the transform domain. Instead of using a global model for all the patches in an image, it employs content-dependent adaptive models to address the non-stationarity of image signals and also the diversity among different transform bands. The distribution model is adaptively estimated for each patch individually. It varies from one patch location to another and also varies for different bands. In particular, we consider the estimated distribution to have non-zero expectation. To estimate the expectation and variance parameters for every band of a particular patch, we exploit the nonlocal correlation in the image to collect a set of highly similar patches as the data samples that form the distribution. Irrelevant patches are excluded so that such an adaptively learned model is more accurate than a global one. The image is ultimately restored via bandwise adaptive soft-thresholding, based on a Laplacian approximation of the distribution of similar-patch group transform coefficients. Experimental results demonstrate that the proposed scheme outperforms several state-of-the-art denoising methods in both the objective and the perceptual qualities.
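The bandwise soft-thresholding step under a Laplacian model with non-zero expectation reduces to shrinking each coefficient's deviation from the band expectation. A minimal sketch follows; in practice the threshold tau would be derived per band from the estimated variance and the noise level, which is omitted here.

```python
import numpy as np

def soft_threshold(coeffs, mean, tau):
    """Shrink each coefficient's deviation from the band expectation by tau;
    deviations smaller than tau collapse onto the expectation itself."""
    dev = coeffs - mean
    return mean + np.sign(dev) * np.maximum(np.abs(dev) - tau, 0.0)

# usage: zero-mean band, then a band with non-zero expectation
restored = soft_threshold(np.array([3.0, -1.0, 0.5, 2.2]), 0.0, 1.0)
shifted = soft_threshold(np.array([3.0]), 2.0, 0.5)
```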
STUDY ON SIMILARITY LAWS OF A DISTORTED RIVER MODEL WITH A MOVABLE BED
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
In this study, by considering the scale ratio related to the specific gravity of the submerged bed material and introducing a degree of distortion, n, the similarity laws for a distorted river model with a movable bed were derived under the conditions that the values of the dual dimensionless parameters in a regime-criterion diagram for the bars are the same in the model as in the prototype, and that a resistance law such as the Manning-Strickler-type formula holds for both model and prototype. The usefulness of the similarity laws derived in this study was verified by comparing the bed forms from the distorted model experiments with the bed forms from the 1/50-scale undistorted model experiments performed by the Hokkaido Development Bureau (H.D.B.), Japan, to examine the tentative plan for the improvement of a low-flow channel in the Chubetsu River, a tributary of the Ishikari River. The distorted model experiments are considered to be valid with either sand or lightweight bed material.
Ceré, Raphaël; Kaiser, Christian
2015-04-01
models (DEM) or individual building vector layers. Morphological properties can be calculated for different scales using different moving window sizes. Multi-scale measures such as fractal or lacunarity can be integrated into the analysis. Other properties such as different densities and ratios are also easy to calculate and include. Based on a rather extensive set of properties or features, a feature selection or extraction method such as Principal Component Analysis can be used to obtain a subset of relevant properties. In a second step, an unsupervised classification algorithm such as Self-Organizing Maps can be used to group similar locations together, and criteria such as the intra-group distance and geographic distribution can be used for selecting relevant locations to be displayed in an interactive data exploration interface along with a given main location. A case study for a part of Switzerland illustrates the presented approach within a working interactive tool, showing the feasibility and allowing for an investigation of the usefulness of our method.
Non-frontal model based approach to forensic face recognition
Dutta, Abhishek; Veldhuis, Raymond; Spreeuwers, Luuk
2012-01-01
In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having a similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having a pose similar to the surveillance view
Short-Term Power Forecasting Model for Photovoltaic Plants Based on Historical Similarity
Directory of Open Access Journals (Sweden)
M. Sonia Terreros-Olarte
2013-05-01
Full Text Available This paper proposes a new model for short-term forecasting of electric energy production in a photovoltaic (PV) plant. The model is called the HIstorical SImilar MIning (HISIMI) model; its final structure is optimized by using a genetic algorithm, based on data mining techniques applied to historical cases composed of past forecast values of weather variables, obtained from numerical weather prediction tools, and of past production of electric power in the PV plant. The HISIMI model is able to supply spot values of power forecasts, and also the uncertainty, or probabilities, associated with those spot values, providing new useful information to users with respect to traditional forecasting models for PV plants. Such probabilities enable analysis and evaluation of the risk associated with the spot forecasts, for example, in offers of energy sale for electricity markets. The spot forecasting results of an illustrative example, obtained with the HISIMI model for a real-life grid-connected PV plant that shows high intra-hour variability of its actual power output, with forecasting horizons covering the following day, improve on those of two other spot forecasting models, namely a persistence model and an artificial neural network model.
A Version-Similarity Based Trust Degree Computation Model for Crowdsourcing Geographic Data
Zhou, Xiaoguang; Zhao, Yijiang
2016-06-01
Quality evaluation and control has become the main concern of VGI. In this paper, trust is used as a proxy of VGI quality, and a version-similarity based trust degree computation model for crowdsourcing geographic data is presented. This model is based on the assumption that the quality of a VGI object is mainly determined by the professional skill and integrity of its contributors (called reputation in this paper), and that a contributor's reputation is movable. The contributor's reputation is calculated using the degree of similarity among the multiple versions of the same entity state. The trust degree of a VGI object is determined by the trust degree of its previous version, the reputation of the last contributor and the modification proportion. In order to verify the presented model, a prototype system for computing the trust degree of VGI objects was developed in Visual C# 2010. The historical data of Berlin from OpenStreetMap (OSM) are employed for experiments. The experimental results demonstrate that the quality of crowdsourcing geographic data is highly positively correlated with its trustworthiness. As the evaluation is based on version similarity, not on direct subjective evaluation among users, the evaluation result is objective. Furthermore, as the movability property of contributors' reputation is used in the presented method, our method achieves higher assessment coverage than existing methods.
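The abstract names the three inputs to an object's trust degree: the previous version's trust, the last contributor's reputation, and the modification proportion. A hypothetical combination rule, a proportion-weighted blend, is sketched below together with a version-similarity measure based on the standard library's `difflib`; the paper's exact formulas may differ.

```python
from difflib import SequenceMatcher

def version_similarity(a, b):
    """Similarity in [0, 1] between two serialized versions of an entity,
    usable as a building block for reputation estimation."""
    return SequenceMatcher(None, a, b).ratio()

def update_trust(prev_trust, reputation, mod_proportion):
    """Hypothetical rule: the unmodified share of the object keeps the
    previous version's trust; the modified share takes on the last
    contributor's reputation."""
    p = min(max(mod_proportion, 0.0), 1.0)
    return (1.0 - p) * prev_trust + p * reputation
```

Under this rule a small edit by a low-reputation contributor lowers an object's trust only slightly, while a rewrite transfers most of the trust assessment to that contributor.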
Nie, Lin-Fei; Teng, Zhi-Dong; Nieto, Juan J.; Jung, Il Hyo
2015-07-01
For reasons of preserving endangered languages, we propose, in this paper, a novel two-language competitive model with bilingualism and interlinguistic similarity, in which state-dependent impulsive control strategies are introduced. The novel control model includes two control threshold values, which differ from those in previous state-dependent impulsive differential equations. Using a qualitative analysis method, we show that the control model exhibits two stable positive order-1 periodic solutions under some general conditions. Moreover, numerical simulations clearly illustrate the main theoretical results and the feasibility of the state-dependent impulsive control strategies. Meanwhile, numerical simulations also show that the state-dependent impulsive control strategy can be applied to other general two-language competitive models and obtain the desired result. The results indicate that the fractions of two competitive languages can be kept within a reasonable level under almost any circumstances. A theoretical basis for finding a new control measure to protect endangered languages is offered.
A model for cross-referencing and calculating similarity of metal alloys
Directory of Open Access Journals (Sweden)
Svetlana Pocajt
2013-12-01
Full Text Available This paper presents an innovative model for the comparison and cross-referencing of metal alloys, in order to determine their interchangeability in engineering, manufacturing and material sourcing. The model uses a large alloy database and a statistical approach to estimate missing composition and mechanical-property parameters and to calculate property intervals. A classification of metals and fuzzy logic are then applied to compare metal alloys. The model and its algorithm have been implemented and tested in real-life applications. In this paper, an application of the model to finding unknown equivalent metals by comparing their compositions and mechanical properties in a very large metals database is described, and possibilities for further research and new applications are presented.
Modeling of 3D-structure for regular fragments of low similarity unknown structure proteins
Institute of Scientific and Technical Information of China (English)
Peng Zhihong; Chen Jie; Lin Xiwen; Sang Yanchao
2007-01-01
Because it is hard to search for similar structures for low-similarity unknown-structure proteins directly from the Protein Data Bank (PDB) database, 3D structure is modeled in this paper for secondary-structure regular fragments (α-helices, β-strands) of such proteins by means of protein secondary structure prediction software, the Basic Local Alignment Search Tool (BLAST) and the side-chain construction software SCWRL3. First, the protein secondary structure prediction software is adopted to extract secondary-structure fragments from the unknown-structure proteins. Then, regular fragments are regulated by BLAST based on comparative modeling, providing main-chain configurations. Finally, SCWRL3 is applied to assemble side chains for the regular fragments, so that the 3D structure of regular fragments of a low-similarity unknown-structure protein is obtained. Regular fragments of several neurotoxins are used for testing. Simulation results show that the prediction errors are less than 0.06 nm for regular fragments of fewer than 10 amino acids, implying the simplicity and effectiveness of the proposed method.
Possible Implications of a Vortex Gas Model and Self-Similarity for Tornadogenesis and Maintenance
Dokken, Doug; Shvartsman, Misha; Bělík, Pavel; Potvin, Corey; Dahl, Brittany; McGovern, Amy
2014-01-01
We describe tornado genesis and maintenance using the 3-dimensional vortex gas model presented in Chorin (1994). High-energy vortices with negative temperature in the sense of Onsager (1949) play an important role in the model. We speculate that the formation of high-temperature vortices is related to the helicity inherited as they form or tilt into the vertical. We also exploit the notion of self-similarity to justify power laws derived from observations of weak and strong tornadoes presented in Cai (2005), Wurman and Gill (2000), and Wurman and Alexander (2005). Analysis of a Bryan Cloud Model (CM1) simulation of a tornadic supercell reveals scaling consistent with the observational studies.
A new approach for Bayesian model averaging
Institute of Scientific and Technical Information of China (English)
TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun
2012-01-01
Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the constraint that the BMA weights must add to one, and then use a limited-memory quasi-Newton algorithm to solve the resulting nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments with three land surface models show that the performance of BMA-BFGS is similar to that of the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially lower than that of MCMC and almost equivalent to that of EM.
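The core idea (optimize the BMA likelihood with an unconstrained parameterization so the sum-to-one constraint never has to be imposed on the optimizer) can be sketched as below. The exponential reparameterization of the weights is one common trick and is our assumption, not necessarily the paper's exact modification; the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic ensemble: three "models" forecasting a signal with different noise.
truth = rng.normal(size=200)
forecasts = np.stack([truth + rng.normal(0.0, s, 200) for s in (0.3, 0.6, 1.2)])

def neg_log_lik(theta):
    # Unconstrained parameters: exponentiate raw weights and normalize inside
    # the likelihood, so no explicit sum-to-one constraint is needed.
    raw_w, log_sigma = theta[:3], theta[3:]
    w = np.exp(raw_w); w /= w.sum()
    sigma = np.exp(log_sigma)
    dens = sum(w[k] * norm.pdf(truth, forecasts[k], sigma[k]) for k in range(3))
    return -np.log(dens + 1e-300).sum()

# Limited-memory quasi-Newton solve, as in the BMA-BFGS idea.
res = minimize(neg_log_lik, x0=np.zeros(6), method="L-BFGS-B")
weights = np.exp(res.x[:3]); weights /= weights.sum()
```

As expected, the lowest-noise ensemble member should end up with the largest BMA weight.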
Rahman, Md Mahmudur; Antani, Sameer K; Thoma, George R
2011-07-01
This paper presents a classification-driven biomedical image retrieval framework based on image filtering and similarity fusion, employing supervised learning techniques. In this framework, the probabilistic outputs of a multiclass support vector machine (SVM) classifier, as category predictions for query and database images, are first exploited to filter out irrelevant images, thereby reducing the search space for similarity matching. Images are classified at a global level according to their modalities based on different low-level, concept, and keypoint-based features. It is difficult to find a single feature that compares images effectively for all types of queries. Hence, a query-specific adaptive linear combination of similarity matching is proposed, relying on the image classification and feedback information from users. Based on the predicted category of a query image, individual precomputed weights of different features are adjusted online. The prediction of the classifier may be inaccurate in some cases, and a user might have a different semantic interpretation of the retrieved images. Hence, the weights are finally determined by considering both precision and rank-order information of each individual feature representation, using the top retrieved relevant images as judged by the users. As a result, the system can adapt itself to individual searches to produce query-specific results. Experiments were performed on a diverse collection of 5,000 biomedical images of different modalities, body parts, and orientations, demonstrating the efficiency (about half the computation time compared to searching the entire collection) and effectiveness (about 10%-15% improvement in precision at each recall level) of the retrieval approach.
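The two stages above (category prediction to prune the search space, then weighted similarity ranking) can be sketched as follows. This is a toy reduction under our own assumptions: a 2-D synthetic feature space stands in for the paper's low-level/concept/keypoint features, and the fixed `feature_weights` stand in for the online-adapted weights.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy "image features": two modalities (classes) in a 2-D feature space.
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(3.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(probability=True, random_state=0).fit(X, y)

def retrieve(query, database, labels, feature_weights, top_k=5):
    """Filter by predicted category, then rank by weighted feature distance."""
    probs = clf.predict_proba(query.reshape(1, -1))[0]
    category = int(np.argmax(probs))
    candidates = np.flatnonzero(labels == category)  # search-space reduction
    w = np.asarray(feature_weights)
    dists = np.sqrt((((database[candidates] - query) ** 2) * w).sum(axis=1))
    return candidates[np.argsort(dists)[:top_k]]

query = np.array([2.9, 3.1])
hits = retrieve(query, X, y, feature_weights=[0.7, 0.3])
```

Restricting the distance computation to one predicted category is what yields the roughly halved search cost reported in the abstract.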
Directory of Open Access Journals (Sweden)
Linliang Guo
2017-04-01
Full Text Available To satisfy the validation requirements of flight control laws for advanced aircraft, wind-tunnel-based virtual flight testing has been implemented in a low-speed wind tunnel. A 3-degree-of-freedom gimbal, ventrally installed in the model, was used in conjunction with an actively controlled, dynamically similar aircraft model equipped with an inertial measurement unit, an attitude and heading reference system, an embedded computer and servo-actuators. The model, which could be rotated freely around its center of gravity by the aerodynamic moments, together with the flow field, the operator and the real-time control system, made up the closed-loop testing circuit. The model is statically unstable in the longitudinal direction, yet it flies stably in the wind tunnel under the control augmentation of the flight control laws. The experimental results indicate that the model responds well to the operator's instructions. The response of the model in the tests shows reasonable agreement with the simulation results, with a difference in the angle-of-attack response of less than 0.5°. The effect of the stability augmentation and attitude control laws was validated in the tests, and the feasibility of the virtual flight test technique as a preliminary evaluation tool for advanced flight vehicle configuration research was also verified.
Muniandy, S V; Lim, S C
2001-04-01
Fractional Brownian motion (FBM) is widely used in the modeling of phenomena with a power spectral density of power-law type. However, FBM has its limitations, since it can only describe phenomena with a monofractal structure, that is, a uniform degree of irregularity characterized by a constant Hölder exponent. For more realistic modeling, it is necessary to take into consideration the local variation of irregularity, with the Hölder exponent allowed to vary with time (or space). One way to achieve such a generalization is to extend the standard FBM to multifractional Brownian motion (MBM), indexed by a Hölder exponent that is a function of time. This paper proposes an alternative generalization to MBM based on the FBM defined by the Riemann-Liouville type of fractional integral. The local properties of the Riemann-Liouville MBM (RLMBM) are studied and found to be similar to those of the standard MBM. A numerical scheme to simulate the locally self-similar sample paths of the RLMBM for various types of time-varying Hölder exponents is given. The local scaling exponents are estimated based on the local growth of the variance and the wavelet scalogram methods. Finally, an example of a possible application of RLMBM to the modeling of multifractal time series is illustrated.
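A straightforward discretization of the Riemann-Liouville construction can be sketched as below: the path value at each time is a kernel-weighted sum of past Wiener increments, with the kernel exponent taken from the current Hölder value. Evaluating the kernel at sub-interval midpoints (to sidestep the endpoint singularity for H < 1/2) is our simplification, not necessarily the paper's exact scheme.

```python
import numpy as np
from math import gamma

def rl_mbm(holder, n_steps=512, T=1.0, seed=42):
    """Sample path of Riemann-Liouville multifractional Brownian motion.

    holder: function t -> H(t) in (0, 1). The kernel (t - s)**(H(t) - 1/2)
    is evaluated at sub-interval midpoints to avoid the s = t singularity.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)   # Wiener increments
    mid = t[:-1] + dt / 2                        # midpoints of [t_k, t_{k+1}]
    path = np.zeros(n_steps + 1)
    for n in range(1, n_steps + 1):
        H = holder(t[n])
        kernel = (t[n] - mid[:n]) ** (H - 0.5)
        path[n] = kernel @ dW[:n] / gamma(H + 0.5)
    return t, path

# Hölder exponent drifting from rough (0.3) toward smooth (0.8) behavior.
t, path = rl_mbm(lambda s: 0.3 + 0.5 * s)
```

The early part of such a path should look visibly rougher than the later part, mirroring the time-varying local self-similarity discussed in the abstract.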
Locally self-similar phase diagram of the disordered Potts model on the hierarchical lattice.
Anglès d'Auriac, J-Ch; Iglói, Ferenc
2013-02-01
We study the critical behavior of the random q-state Potts model in the large-q limit on the diamond hierarchical lattice with an effective dimensionality d(eff)>2. By varying the temperature and the strength of the frustration, the system exhibits a phase transition line between the paramagnetic and the ferromagnetic phases which is controlled by four different fixed points. According to our renormalization group study, the phase boundary in the vicinity of the multicritical point is self-similar; it is well represented by a logarithmic spiral. We expect an infinite number of reentrances in the thermodynamic limit; consequently, one cannot define standard thermodynamic phases in this region.
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and many measurements would be difficult or costly to obtain; the lack of data can then be addressed by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model, showing similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
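The essence of a Bayesian merge of elevation surfaces can be sketched as a per-cell conjugate Gaussian update: each cell's posterior height is the precision-weighted mean of the observations and an optional prior surface. This is a minimal sketch under our own assumptions (known per-cell variances, independent Gaussian errors); the paper's model, with its entropy-derived roof prior, is more elaborate.

```python
import numpy as np

def merge_dsms(dsm_a, var_a, dsm_b, var_b, prior_mean=None, prior_var=None):
    """Per-cell Gaussian fusion of two DSMs, optionally with a prior surface.

    Returns the posterior mean surface and posterior variance.
    """
    precision = 1.0 / var_a + 1.0 / var_b
    weighted = dsm_a / var_a + dsm_b / var_b
    if prior_mean is not None:
        precision = precision + 1.0 / prior_var
        weighted = weighted + prior_mean / prior_var
    return weighted / precision, 1.0 / precision

# Two hypothetical 2x2 DSM tiles (heights in metres) with per-cell variances.
dsm_a = np.array([[10.2, 10.4], [10.1, 10.3]])
dsm_b = np.array([[10.0, 10.6], [10.3, 10.1]])
merged, post_var = merge_dsms(dsm_a, 0.04, dsm_b, 0.16)
```

The merged surface leans toward the more precise DSM (here `dsm_a`), and the posterior variance is always smaller than either input variance, which is the sense in which merging "improves the quality" of the DSMs.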
Self-similarities of periodic structures for a discrete model of a two-gene system
Energy Technology Data Exchange (ETDEWEB)
Souza, S.L.T. de, E-mail: thomaz@ufsj.edu.br [Departamento de Física e Matemática, Universidade Federal de São João del-Rei, Ouro Branco, MG (Brazil); Lima, A.A. [Escola de Farmácia, Universidade Federal de Ouro Preto, Ouro Preto, MG (Brazil); Caldas, I.L. [Instituto de Física, Universidade de São Paulo, São Paulo, SP (Brazil); Medrano-T, R.O. [Departamento de Ciências Exatas e da Terra, Universidade Federal de São Paulo, Diadema, SP (Brazil); Guimarães-Filho, Z.O. [Aix-Marseille Univ., CNRS PIIM UMR6633, International Institute for Fusion Science, Marseille (France)
2012-03-12
We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system, described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system by using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows denominated Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both kinds of periodic windows. We also identify Fibonacci-type series and the golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. Highlights: the existence of noticeable periodic windows has been reported recently for several nonlinear systems; the periodic window distributions appear highly organized in two-parameter space; we characterize self-similar properties of Arnold tongues and shrimps for a two-gene model; we determine the period of the Arnold tongues, recognizing a Fibonacci-type sequence; and we explore self-similar features of the shrimps, identifying multiple period-three structures.
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The problem of state space explosion is still an outstanding challenge in Markovian performance analysis of multiserver multiqueue (MSMQ) systems. The behavior of an MSMQ system is described using stochastic high-level Petri net (SHLPN) models, and an approximate performance analysis technique is proposed based on decomposition and refinement methods as well as an iteration technique. A real MSMQ system, a Web-server cluster, is investigated. The performance of an integrated scheme of request dispatching and scheduling is analyzed with both Poisson and self-similar request arrivals. The study shows that the approximate analysis technique significantly reduces the complexity of the model solution while preserving the accuracy of the numerical results.
Modeling the self-organization of vocabularies under phonological similarity effects
Vera, Javier
2016-01-01
This work develops a computational model (by Automata Networks) of short-term memory constraints involved in the formation of linguistic conventions on artificial populations of speakers. The individuals confound phonologically similar words according to a predefined parameter. The main hypothesis of this paper is that there is a critical range of working memory capacities, in particular, a critical phonological degree of confusion, which implies drastic changes in the final consensus of the entire population. A theoretical result proves the convergence of a particular case of the model. Computer simulations describe the evolution of an energy function that measures the amount of local agreement between individuals. The main finding is the appearance of sudden changes in the energy function at critical parameters. Finally, the results are related to previous work on the absence of stages in the formation of languages.
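The dynamics described above can be illustrated with a minimal naming-game simulation. This is a sketch under our own assumptions, not the paper's automata-network model: confusion is modeled as a probability that the hearer fails to recognize an uttered word even when it knows it, and the "energy" is the fraction of agent pairs sharing no word (0 means full consensus).

```python
import random

def naming_game(n_agents=50, confusion=0.1, steps=20000, seed=3):
    """Minimal naming game for one object, with a confusion parameter.

    `confusion` crudely stands in for the paper's phonological-similarity
    confusion: it is the probability that a known word is misrecognized.
    """
    rng = random.Random(seed)
    vocab = [{f"w{i}"} for i in range(n_agents)]  # each agent invents a word
    for _ in range(steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        word = rng.choice(sorted(vocab[speaker]))
        if word in vocab[hearer] and rng.random() >= confusion:
            vocab[speaker] = {word}               # success: both collapse to it
            vocab[hearer] = {word}
        else:
            vocab[hearer].add(word)               # failure: hearer memorizes it
    pairs = [(i, j) for i in range(n_agents) for j in range(i + 1, n_agents)]
    energy = sum(not (vocab[i] & vocab[j]) for i, j in pairs) / len(pairs)
    return vocab, energy

vocab, energy = naming_game()
```

Sweeping `confusion` toward 1 in such a simulation is one way to probe for the kind of abrupt transition in the final consensus that the paper reports at a critical confusion degree.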
Hard state of the urban canopy layer turbulence and its self-similar multiplicative cascade models
Institute of Scientific and Technical Information of China (English)
HU; Fei; CHENG; Xueling; ZHAO; Songnian; QUAN; Lihong
2005-01-01
It is found by experiment that, under thermal convection conditions, the temperature fluctuation in urban canopy layer turbulence has the hard-state character, and the temperature difference between two points follows an exponential probability density function. At the same time, the turbulent energy dissipation rate fits a log-normal distribution, in accordance with the hypothesis proposed by Kolmogorov in 1962 and many reported experimental results. In this paper, the scaling law of the hard-state temperature n-th order structure function is derived from the self-similar multiplicative cascade models. The theoretical formula is S_n = n/3 - μ{n(n+6)/72 + [2 ln n! - n ln 2]/(2 ln 6)}, where μ is the intermittency exponent. The formula fits the experimental results up to eighth-order exponents, and is superior to the predictions of the Kolmogorov theory, the β-model and the log-normal model.
Wilderjans, T F; Ceulemans, E; Kuppens, P
2012-06-01
In many areas of the behavioral sciences, different groups of objects are measured on the same set of binary variables, resulting in coupled binary object × variable data blocks. Take, as an example, success/failure scores for different samples of testees, with each sample belonging to a different country, regarding a set of test items. When dealing with such data, a key challenge consists of uncovering the differences and similarities between the structural mechanisms that underlie the different blocks. To tackle this challenge for the case of a single data block, one may rely on HICLAS, in which the variables are reduced to a limited set of binary bundles that represent the underlying structural mechanisms, and the objects are given scores for these bundles. In the case of multiple binary data blocks, one may perform HICLAS on each data block separately. However, such an analysis strategy obscures the similarities and, in the case of many data blocks, also the differences between the blocks. To resolve this problem, we proposed the new Clusterwise HICLAS generic modeling strategy. In this strategy, the different data blocks are assumed to form a set of mutually exclusive clusters. For each cluster, different bundles are derived. As such, blocks belonging to the same cluster have the same bundles, whereas blocks of different clusters are modeled with different bundles. Furthermore, we evaluated the performance of Clusterwise HICLAS by means of an extensive simulation study and by applying the strategy to coupled binary data regarding emotion differentiation and regulation.
Directory of Open Access Journals (Sweden)
Burt Susan
2006-11-01
Full Text Available Abstract Background The protection, promotion and support of breastfeeding are now major public health priorities. It is well established that skilled support, voluntary or professional, proactively offered to women who want to breastfeed, can increase the initiation and/or duration of breastfeeding. Low levels of breastfeeding uptake and continuation amongst adolescent mothers in industrialised countries suggest that this is a group in particular need of breastfeeding support. Using qualitative methods, the present study aimed to investigate the similarities and differences in the approaches of midwives and qualified breastfeeding supporters (the Breastfeeding Network, BfN) in supporting breastfeeding adolescent mothers. Methods The study was conducted in the North West of England between September 2001 and October 2002. The supportive approaches of 12 midwives and 12 BfN supporters were evaluated using vignettes: short descriptions of an event designed to obtain specific information from participants about their knowledge, perceptions and attitudes to a particular situation. Responses to the vignettes were analysed using thematic networks analysis, involving the extraction of basic themes by analysing each script line by line. The basic themes were then grouped to form organising themes and finally central global themes. Discussion and consensus were reached regarding the systematic development of the three levels of theme. Results Five components of support were identified: emotional, esteem, instrumental, informational and network support. Whilst the supportive approaches of both groups incorporated elements of each of the five components of support, BfN supporters placed greater emphasis upon providing emotional and esteem support and highlighted the need to elicit the mothers' existing knowledge, checking understanding through use of open questions and utilising more tentative language. Midwives were more directive and gave more
Otero-Espinar, M. V.; Seoane, L. F.; Nieto, J. J.; Mira, J.
2013-12-01
An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. Previous numerical simulations of the model showed that, depending on the parameters, coexistence might lead to survival of both languages within monolingual speakers along with a bilingual community, or to extinction of the weaker tongue. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and previous results are refined with some modifications. From the present analysis it is possible to almost completely determine the number and nature of the equilibrium points of the model, which depend on its parameters, as well as to build a phase space based on them. We also obtain conclusions on the way the languages evolve with time. Our rigorous considerations suggest ways to further improve the model and facilitate the comparison of its consequences with those from other approaches or with real data.
Directory of Open Access Journals (Sweden)
Xiuli Sang
2012-01-01
Full Text Available We constructed a similarity model (based on Euclidean distance) between rainfall and runoff to study the time-correlated characteristics of rainfall-runoff similar patterns in the upstream Red River Basin, and we present a detailed evaluation of the time correlation of rainfall-runoff similarity. The rainfall-runoff similarity was used to determine the optimum similarity. The results showed that the time-correlated model is capable of predicting rainfall-runoff similarity in the upstream Red River Basin in a satisfactory way. Both the noised time series and the series denoised by thresholding the wavelet coefficients were applied to verify the accuracy of the model, and the corresponding optimum similarity sets, obtained as solutions of the model equations, showed an interesting and stable trend. On the whole, the annual mean similarity presented a gradually rising trend, quantitatively reflecting the combined influence of climate change and human activities on rainfall-runoff similarity.
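A Euclidean-distance similarity between a rainfall series and a runoff series can be sketched as follows. The normalization and the mapping of distance to a (0, 1] score are our own illustrative choices, and the series are synthetic, not Red River Basin data.

```python
import numpy as np

def similarity(rainfall, runoff):
    """Euclidean-distance similarity between standardized series, in (0, 1]."""
    r = (rainfall - rainfall.mean()) / rainfall.std()
    q = (runoff - runoff.mean()) / runoff.std()
    return 1.0 / (1.0 + np.linalg.norm(r - q))

rng = np.random.default_rng(7)
rain = rng.gamma(2.0, 10.0, 365)                 # hypothetical daily rainfall
flow = 0.6 * rain + rng.normal(0.0, 5.0, 365)    # correlated runoff + noise
s_corr = similarity(rain, flow)
s_rand = similarity(rain, rng.permutation(flow))  # correlation destroyed
```

A genuinely rainfall-driven runoff series should score noticeably higher than a shuffled one, which is the basic property such a similarity model relies on when tracking year-to-year trends.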
Self-Similar Models for the Mass Profiles of Early-type Lens Galaxies
Rusin, D; Keeton, C R
2003-01-01
We introduce a self-similar mass model for early-type galaxies, and constrain it using the aperture mass-radius relations determined from the geometries of 22 gravitational lenses. The model consists of two components: a concentrated component which traces the light distribution, and a more extended power-law component (ρ ∝ r^-n) which represents the dark matter. We find that lens galaxies have total mass profiles which are nearly isothermal, or slightly steeper, on the several-kiloparsec radial scale spanned by the lensed images. In the limit of a single-component, power-law radial profile, the model implies n = 2.07 ± 0.13, consistent with isothermal (n = 2). Models in which mass traces light are excluded at >99 percent confidence. An n = 1 cusp (such as the Navarro-Frenk-White profile) requires a projected dark matter mass fraction of f_cdm = 0.22 ± 0.10 inside 2 effective radii. These are the best statistical constraints yet obtained on the mass profiles of lenses, and provide clear evidence for a small ...
Validation of Modeling Flow Approaching Navigation Locks
2013-08-01
[Figure-list fragment: Figure 9, Tools and instrumentation, bracket attached to rail; Figure 10, Tools and instrumentation, direction vernier; Figure 11, Plan A lock approach, upstream approach.] Numerical model
Energy Technology Data Exchange (ETDEWEB)
Yoon, Sae Rom [Dept of Quantum Energy Chemical Engineering, Korea University of Science and Technology (KUST), Daejeon (Korea, Republic of); Choi, Sung Yeol [Ulsan National Institute of Science and Technology, Ulju (Korea, Republic of); Ko, Wonil [Nonproliferation System Development Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2017-02-15
The focus on the issues surrounding spent nuclear fuel and the lifetime extension of old nuclear power plants continues to grow. A transparent decision-making process to identify the most suitable nuclear fuel cycle (NFC) is considered the key task in the current situation. In this study, an attempt is made to develop an equilibrium model of the NFC to calculate the material flows based on 1 TWh of electricity production, and to perform integrated multicriteria decision-making analyses via the analytic hierarchy process, the technique for order of preference by similarity to ideal solution (TOPSIS), the preference ranking organization method for enrichment evaluation (PROMETHEE), and multiattribute utility theory methods. This comparative study is aimed at screening and ranking three selected NFC options against five aspects: sustainability, environmental friendliness, economics, proliferation resistance, and technical feasibility. The selected fuel cycle options are the pressurized water reactor (PWR) once-through cycle, the PWR mixed oxide cycle, and the pyroprocessing sodium-cooled fast reactor cycle. A sensitivity analysis was performed to prove the robustness of the results and explore the influence of the criteria on the obtained ranking. As a result of the comparative analysis, the pyroprocessing sodium-cooled fast reactor cycle is determined to be the most competitive option among the NFC scenarios.
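The TOPSIS step of such an analysis can be sketched as below: normalize the decision matrix, weight it, and rank alternatives by relative closeness to the ideal solution. The fuel-cycle scores and criteria weights here are made-up placeholders, not the study's data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness coefficients in [0, 1]; higher means closer to the ideal.

    matrix: alternatives x criteria; benefit[j] is True when higher is better.
    """
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)            # vector normalization
    v = norm * np.asarray(weights)                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)        # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Illustrative (made-up) scores for the three fuel-cycle options against the
# five criteria named in the abstract, all treated as benefit criteria here.
scores = [[3, 4, 8, 7, 9],    # PWR once-through
          [5, 5, 5, 5, 6],    # PWR MOX
          [8, 7, 4, 6, 3]]    # pyroprocessing SFR
closeness = topsis(scores, weights=[0.25, 0.2, 0.2, 0.2, 0.15],
                   benefit=[True] * 5)
ranking = np.argsort(closeness)[::-1]               # best alternative first
```

In the study, the weights themselves come from the analytic hierarchy process, and PROMETHEE and MAUT provide independent rankings to cross-check the TOPSIS result.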
Fraune, Johanna; Wiesner, Miriam; Benavente, Ricardo
2014-03-20
The synaptonemal complex (SC) is an evolutionarily well-conserved structure that mediates chromosome synapsis during prophase of the first meiotic division. Although its structure is conserved, the characterized protein components in the current metazoan meiosis model systems (Drosophila melanogaster, Caenorhabditis elegans, and Mus musculus) show no sequence homology, challenging the question of a single evolutionary origin of the SC. However, our recent studies revealed the monophyletic origin of the mammalian SC protein components, many of which are ancient in Metazoa and already present in the cnidarian Hydra. Remarkably, a comparison between different model systems disclosed a great similarity between the SC components of Hydra and mammals, while the proteins of the ecdysozoan systems (D. melanogaster and C. elegans) differ significantly. In this review, we introduce the basal-branching metazoan species Hydra as a potential novel invertebrate model system for meiosis research, and particularly for the investigation of SC evolution, function and assembly. Available methods for SC research in Hydra are also summarized.
3D simulations of disc-winds extending radially self-similar MHD models
Stute, Matthias; Vlahakis, Nektarios; Tsinganos, Kanaris; Mignone, Andrea; Massaglia, Silvano
2014-01-01
Disc-winds originating from the inner parts of accretion discs are considered the basic component of magnetically collimated outflows. The only available analytical MHD solutions describing disc-driven jets are those characterized by the symmetry of radial self-similarity. However, radially self-similar MHD jet models in general have three geometrical shortcomings: (i) a singularity at the jet axis, (ii) the necessary assumption of axisymmetry, and (iii) the non-existence of an intrinsic radial scale, i.e. the jets formally extend to radial infinity. Hence, numerical simulations are necessary to extend the analytical solutions towards the axis, by solving the full three-dimensional equations of MHD, and to impose a termination radius at finite radial distance. We focus here on studying the effects of relaxing assumption (ii), axisymmetry, i.e. of performing full 3D numerical simulations of a disc-wind crossing all magnetohydrodynamic critical surfaces. We compare the results of these runs with previou...
Murray, Robin M; Sham, Pak; Van Os, Jim; Zanelli, Jolanta; Cannon, Mary; McDonald, Colm
2004-12-01
Schizophrenia and mania have a number of symptoms and epidemiological characteristics in common, and both respond to dopamine blockade. Family, twin and molecular genetic studies suggest that the reason for these similarities may be that the two conditions share certain susceptibility genes. On the other hand, individuals with schizophrenia have more obvious brain structural and neuropsychological abnormalities than those with bipolar disorder; and pre-schizophrenic children are characterised by cognitive and neuromotor impairments, which are not shared by children who later develop bipolar disorder. Furthermore, the risk-increasing effect of obstetric complications has been demonstrated for schizophrenia but not for bipolar disorder. Perinatal complications such as hypoxia are known to result in smaller volume of the amygdala and hippocampus, which have been frequently reported to be reduced in schizophrenia; familial predisposition to schizophrenia is also associated with decreased volume of these structures. We suggest a model to explain the similarities and differences between the disorders and propose that, on a background of shared genetic predisposition to psychosis, schizophrenia, but not bipolar disorder, is subject to additional genes or early insults, which impair neurodevelopment, especially of the medial temporal lobe.
Similar pattern of peripheral neuropathy in mouse models of type 1 diabetes and Alzheimer's disease.
Jolivalt, C G; Calcutt, N A; Masliah, E
2012-01-27
There is an increasing awareness that diabetes has an impact on the CNS and that diabetes is a risk factor for Alzheimer's disease (AD). Links between AD and diabetes point to impaired insulin signaling as a common mechanism leading to defects in the brain. However, diabetes is predominantly characterized by peripheral, rather than central, neuropathy, and despite the common central mechanisms linking AD and diabetes, little is known about the effect of AD on the peripheral nervous system (PNS). In this study, we compared indexes of peripheral neuropathy and investigated insulin signaling in the sciatic nerve of insulin-deficient mice and amyloid precursor protein (APP) overexpressing transgenic mice. Insulin-deficient and APP transgenic mice displayed similar patterns of peripheral neuropathy with decreased motor nerve conduction velocity, thermal hypoalgesia, and loss of tactile sensitivity. Phosphorylation of the insulin receptor and glycogen synthase kinase 3β (GSK3β) was similarly affected in insulin-deficient and APP transgenic mice despite significantly different blood glucose and plasma insulin levels, and nerve of both models showed accumulation of Aβ-immunoreactive protein. Although diabetes and AD have different primary etiologies, both diseases share many abnormalities in both the brain and the PNS. Our data point to common deficits in the insulin-signaling pathway in both neurodegenerative diseases and support the idea that AD may cause disorders outside the higher CNS.
Model Mapping Approach Based on Ontology Semantics
Directory of Open Access Journals (Sweden)
Jinkui Hou
2013-09-01
Full Text Available The mapping relations between different models are the foundation for model transformation in model-driven software development. On the basis of ontology semantics, model mappings between different levels are classified by using the structural semantics of modeling languages. The general definition process for mapping relations is explored, and the principles of structure mapping are proposed subsequently. The approach is further illustrated by the mapping relations from a class model of an object-oriented modeling language to C programming code. The application research shows that the approach provides theoretical guidance for the realization of model mapping and thus effectively supports model-driven software development.
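As a minimal illustration of the kind of structure mapping the abstract describes (an object-oriented class model mapped to C code), the sketch below applies an explicit rule table to a toy class element. The rule table, class representation, and function names are assumptions for illustration, not taken from the paper.

```python
# Hypothetical mapping rules from model-level attribute types to C types.
TYPE_MAP = {"int": "int", "float": "double", "string": "char *"}

def class_to_c(name, attributes):
    """Map one class-model element to a C struct declaration.

    attributes: list of (attribute_name, model_type) pairs.
    """
    fields = "\n".join(f"    {TYPE_MAP[t]} {a};" for a, t in attributes)
    return f"typedef struct {{\n{fields}\n}} {name};"

# A toy class model element mapped to C code:
print(class_to_c("Point", [("x", "float"), ("y", "float")]))
```

Running this emits a `typedef struct` with two `double` fields named `x` and `y`; real model-to-code mappings would of course also cover operations, associations, and inheritance.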
Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella
2015-10-30
The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is, cluster formation, cluster selection, and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different
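The flat WkNN step described above can be sketched in a few lines. The sketch below uses inverse Euclidean distance in RSS space as the assumed similarity metric (the paper's point is precisely that this metric is one of several candidates and can be swapped out); all names and data are illustrative.

```python
import math

def wknn_estimate(rss, fingerprints, k=3):
    """Estimate a 2D position by weighted k-nearest neighbors.

    rss: measured RSS vector at the target device.
    fingerprints: {(x, y): stored RSS vector} for each reference point.
    Similarity here is inverse Euclidean distance between RSS vectors,
    one common choice among those compared in the paper.
    """
    scored = []
    for pos, fp in fingerprints.items():
        d = math.dist(rss, fp)  # Euclidean distance in RSS space
        scored.append((d, pos))
    nearest = sorted(scored)[:k]              # k most similar RPs
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]  # closer, heavier
    total = sum(weights)
    x = sum(w * pos[0] for w, (_, pos) in zip(weights, nearest)) / total
    y = sum(w * pos[1] for w, (_, pos) in zip(weights, nearest)) / total
    return x, y

# Toy radio map with three reference points and a measurement taken
# exactly at the first RP, so the estimate should land near (0, 0).
fps = {(0.0, 0.0): [-40, -70], (4.0, 0.0): [-70, -40], (0.0, 4.0): [-60, -60]}
est = wknn_estimate([-40, -70], fps, k=2)
```

In the two-step variant, `fingerprints` would contain only cluster representatives, so the loop runs over far fewer candidates.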
Similarity dark energy models in Bianchi type-I space-time
Ali, Ahmad T; Alzahrani, Abdulah K
2015-01-01
We investigate some new similarity solutions of anisotropic dark energy and perfect fluid in Bianchi type-I space-time. Three different time dependent skewness parameters along the spatial directions are introduced to quantify the deviation of pressure from isotropy. We consider the case when the dark energy is minimally coupled to the perfect fluid as well as direct interaction with it. The Lie symmetry generators that leave the equation invariant are identified and we generate an optimal system of one-dimensional subalgebras. Each element of the optimal system is used to reduce the partial differential equation to an ordinary differential equation which is further analyzed. We solve the Einstein field equations, described by a system of non-linear partial differential equations (NLPDEs), by using the Lie point symmetry analysis method. The geometrical and kinematical features of the models and the behavior of the anisotropy of dark energy, are examined in detail.
Directory of Open Access Journals (Sweden)
Gianni Pagnini
2012-01-01
inhomogeneity and nonstationarity properties of the medium. For instance, when this superposition is applied to the time-fractional diffusion process, the resulting Master Equation emerges to be the governing equation of the Erdélyi-Kober fractional diffusion, that describes the evolution of the marginal distribution of the so-called generalized grey Brownian motion. This motion is a parametric class of stochastic processes that provides models for both fast and slow anomalous diffusion: it is made up of self-similar processes with stationary increments and depends on two real parameters. The class includes the fractional Brownian motion, the time-fractional diffusion stochastic processes, and the standard Brownian motion. In this framework, the M-Wright function (known also as the Mainardi function) emerges as a natural generalization of the Gaussian distribution, recovering the same key role of the Gaussian density for the standard and the fractional Brownian motion.
Evaluating face trustworthiness: a model based approach.
Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N
2008-06-01
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic, strongest for faces at both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.
Learning Action Models: Qualitative Approach
Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite
Directory of Open Access Journals (Sweden)
Dinorah (Dina Martinez Tyson
2011-01-01
Full Text Available The Surgeon General's report, “Culture, Race, and Ethnicity: A Supplement to Mental Health,” points to the need for subgroup specific mental health research that explores the cultural variation and heterogeneity of the Latino population. Guided by cognitive anthropological theories of culture, we utilized ethnographic interviewing techniques to explore cultural models of depression among foreign-born Mexican (n=30), Cuban (n=30), Colombian (n=30), and island-born Puerto Ricans (n=30), who represent the largest Latino groups in Florida. Results indicate that Colombian, Cuban, Mexican, and Puerto Rican immigrants showed strong intragroup consensus in their models of depression causality, symptoms, and treatment. We found more agreement than disagreement among all four groups regarding core descriptions of depression, which was largely unexpected but can potentially be explained by their common immigrant experiences. Findings expand our understanding about Latino subgroup similarities and differences in their conceptualization of depression and can be used to inform the adaptation of culturally relevant interventions in order to better serve Latino immigrant communities.
Lévy Flights and Self-Similar Exploratory Behaviour of Termite Workers: Beyond Model Fitting
Miramontes, Octavio; DeSouza, Og; Paiva, Leticia Ribeiro; Marins, Alessandra; Orozco, Sirio
2014-01-01
Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties –including Lévy flights– in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale. PMID:25353958
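The structure-function analysis the abstract mentions rests on estimating scaling exponents of displacement statistics, for example the exponent alpha in MSD(t) ~ t^alpha (alpha = 1 for ordinary diffusion, alpha = 2 for ballistic motion, intermediate or larger values for anomalous regimes). A minimal sketch of that estimation, on synthetic rather than termite data, might look like:

```python
import math

def msd(track, lag):
    """Mean squared displacement of a 1D track at a given time lag."""
    disp = [(track[i + lag] - track[i]) ** 2 for i in range(len(track) - lag)]
    return sum(disp) / len(disp)

def scaling_exponent(track, max_lag=20):
    """Least-squares slope of log MSD versus log lag, i.e. alpha."""
    xs = [math.log(lag) for lag in range(1, max_lag + 1)]
    ys = [math.log(msd(track, lag)) for lag in range(1, max_lag + 1)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Sanity check on synthetic data: straight-line (ballistic) motion has
# MSD proportional to t^2, so the estimated exponent should be 2.
ballistic = [0.5 * t for t in range(200)]
print(round(scaling_exponent(ballistic), 2))  # → 2.0
```

Real analyses of this kind also consider higher-order structure functions and test the power-law fit against alternatives before claiming Lévy-like behaviour.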
Similarity on neural stem cells and brain tumor stem cells in transgenic brain tumor mouse models
Institute of Scientific and Technical Information of China (English)
Guanqun Qiao; Qingquan Li; Gang Peng; Jun Ma; Hongwei Fan; Yingbin Li
2013-01-01
Although it is believed that glioma is derived from brain tumor stem cells, the source and molecular signal pathways of these cells are still unclear. In this study, we used stable doxycycline-inducible transgenic mouse brain tumor models (c-myc+/SV40Tag+/Tet-on+) to explore the malignant transformation potential of neural stem cells by observing the differences between neural stem cells and brain tumor stem cells in the tumor models. Results showed that chromosome instability occurred in brain tumor stem cells. The numbers of cytolysosomes and autophagosomes in brain tumor stem cells and induced neural stem cells were lower, and the proliferative activity was obviously stronger, than in normal neural stem cells. Normal neural stem cells could differentiate into glial fibrillary acidic protein-positive and microtubule associated protein-2-positive cells, which were also negative for nestin. However, glial fibrillary acidic protein/nestin, microtubule associated protein-2/nestin, and glial fibrillary acidic protein/microtubule associated protein-2 double-positive cells were found in induced neural stem cells and brain tumor stem cells. Results indicate that induced neural stem cells are similar to brain tumor stem cells, and are possibly the source of brain tumor stem cells.
Directory of Open Access Journals (Sweden)
L. Rowland
2014-11-01
Full Text Available Accurately predicting the response of Amazonia to climate change is important for predicting changes across the globe. However, changes in multiple climatic factors simultaneously may result in complex non-linear responses, which are difficult to predict using vegetation models. Using leaf and canopy scale observations, this study evaluated the capability of five vegetation models (CLM3.5, ED2, JULES, SiB3, and SPA) to simulate the responses of canopy and leaf scale productivity to changes in temperature and drought in an Amazonian forest. The models did not agree as to whether gross primary productivity (GPP) was more sensitive to changes in temperature or precipitation. There was greater model–data consistency in the response of net ecosystem exchange to changes in temperature, than in the response to temperature of leaf area index (LAI), net photosynthesis (An) and stomatal conductance (gs). Modelled canopy scale fluxes are calculated by scaling leaf scale fluxes to LAI, and therefore in this study similarities in modelled ecosystem scale responses to drought and temperature were the result of inconsistent leaf scale and LAI responses among models. Across the models, the response of An to temperature was more closely linked to stomatal behaviour than biochemical processes. Consequently all the models predicted that GPP would be higher if tropical forests were 5 °C colder, closer to the model optima for gs. There was however no model consistency in the response of the An–gs relationship when temperature changes and drought were introduced simultaneously. The inconsistencies in the An–gs relationships amongst models were caused by non-linear model responses induced by simultaneous drought and temperature change. To improve the reliability of simulations of the response of Amazonian rainforest to climate change the mechanistic underpinnings of vegetation models need more complete validation to improve accuracy and consistency in the scaling
A monkey model of acetaminophen-induced hepatotoxicity; phenotypic similarity to human.
Tamai, Satoshi; Iguchi, Takuma; Niino, Noriyo; Mikamoto, Kei; Sakurai, Ken; Sayama, Ayako; Shimoda, Hitomi; Takasaki, Wataru; Mori, Kazuhiko
2017-01-01
Species-specific differences in the hepatotoxicity of acetaminophen (APAP) have been shown. To establish a monkey model of APAP-induced hepatotoxicity, which has not been previously reported, APAP at doses up to 2,000 mg/kg was administered orally to fasting male and female cynomolgus monkeys (n = 3-5/group) pretreated intravenously with or without 300 mg/kg of the glutathione biosynthesis inhibitor, L-buthionine-(S,R)-sulfoximine (BSO). In all the animals, APAP at 2,000 mg/kg with BSO but not without BSO induced hepatotoxicity, which was characterized histopathologically by centrilobular necrosis and vacuolation of hepatocytes. Plasma levels of APAP and its reactive metabolite N-acetyl-p-benzoquinone imine (NAPQI) increased 4 to 7 hr after the APAP treatment. The mean Cmax level of APAP at 2,000 mg/kg with BSO was approximately 200 µg/mL, which was comparable to the high-risk cutoff value of the Rumack-Matthew nomogram. Interestingly, plasma alanine aminotransferase (ALT) did not change until 7 hr and increased 24 hr or later after the APAP treatment, indicating that this phenotypic outcome was similar to that in humans. In addition, circulating liver-specific miR-122 and miR-192 levels also increased 24 hr or later compared with ALT, suggesting that circulating miR-122 and miR-192 may serve as potential biomarkers to detect hepatotoxicity in cynomolgus monkeys. These results suggest that the hepatotoxicity induced by APAP in the monkey model shown here was translatable to humans in terms of toxicokinetics and its toxic nature, and this model would be useful to investigate mechanisms of drug-induced liver injury and also potential translational biomarkers in humans.
Learning Action Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning...
Geometrical approach to fluid models
Kuvshinov, B. N.; Schep, T. J.
1997-01-01
Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical
Model based feature fusion approach
Schwering, P.B.W.
2001-01-01
In recent years different sensor data fusion approaches have been analyzed and evaluated in the field of mine detection. In various studies comparisons have been made between different techniques. Although claims can be made for advantages for using certain techniques, until now there has been no si
Nisius, Britta; Vogt, Martin; Bajorath, Jürgen
2009-06-01
The contribution of individual fingerprint bit positions to similarity search performance is systematically evaluated. A method is introduced to determine bit significance on the basis of Kullback-Leibler divergence analysis of bit distributions in active and database compounds. Bit divergence analysis and Bayesian compound screening share a common methodological foundation. Hence, given the significance ranking of all individual bit positions comprising a fingerprint, subsets of bits are evaluated in the context of Bayesian screening, and minimal fingerprint representations are determined that meet or exceed the search performance of unmodified fingerprints. For fingerprints of different design evaluated on many compound activity classes, we consistently find that subsets of fingerprint bit positions are responsible for search performance. In part, these subsets are very small and contain in some cases only a few fingerprint bit positions. Structural or pharmacophore patterns captured by preferred bit positions can often be directly associated with characteristic features of active compounds. In some cases, reduced fingerprint representations clearly exceed the search performance of the original fingerprints. Thus, fingerprint reduction likely represents a promising approach for practical applications.
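The core of the bit-significance method described above, ranking fingerprint bit positions by the Kullback-Leibler divergence between their frequencies in active versus database compounds, can be sketched as follows. The smoothing constant and data layout are assumptions for illustration.

```python
import math

def bit_divergence(active_fps, database_fps):
    """Rank fingerprint bit positions by the Kullback-Leibler divergence
    between per-bit frequencies in active vs. database compounds.

    Fingerprints are lists of 0/1 values of equal length. Returns
    (divergence, bit_index) pairs, most significant bits first.
    """
    n_bits = len(active_fps[0])
    eps = 1e-6  # smoothing so log arguments never hit 0 or 1
    scores = []
    for b in range(n_bits):
        p = sum(fp[b] for fp in active_fps) / len(active_fps)
        q = sum(fp[b] for fp in database_fps) / len(database_fps)
        p = min(max(p, eps), 1 - eps)
        q = min(max(q, eps), 1 - eps)
        # KL divergence of the Bernoulli distribution at bit b
        kl = p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))
        scores.append((kl, b))
    return sorted(scores, reverse=True)

# Toy 2-bit example: bit 0 is always set in actives but rare in the
# database, so it should rank first.
actives = [[1, 0], [1, 1], [1, 0]]
database = [[0, 0], [0, 1], [1, 0], [0, 1]]
ranking = bit_divergence(actives, database)
```

A reduced fingerprint, in the spirit of the paper, would then keep only the top-ranked bit positions and rerun the similarity search with that subset.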
Study on similar model of high pressure water jet impacting coal rock
Liu, Jialiang; Wang, Mengjin; Zhang, Di
2017-08-01
Based on similarity theory and dimensional analysis, the similarity criteria for the coal rock mechanical parameters were deduced. The similar materials were composed mainly of cement, sand, nitrile rubber powder, and polystyrene, controlled through the water-cement ratio, cement-sand ratio, curing time, and additive volume ratio. The ranges of these factors were obtained by carrying out a series of material compression tests. By comparing basic mechanical parameters such as bulk density, compressive strength, Poisson's ratio, and elastic modulus between the coal rock prototype and the similar materials, the optimal production scheme for the coal rock similar materials was finally generated based on orthogonal design tests.
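Dimensional analysis of this kind typically yields scale factors linking prototype and model quantities. As a hedged illustration (using the standard geomechanical result under gravity similitude, not the paper's specific derivation): if the geometric scale is C_L and the density scale is C_rho, then stress-like quantities (strength, elastic modulus) scale as C_rho * C_L, while dimensionless quantities (Poisson's ratio, strain) keep a scale of 1.

```python
def similarity_scales(C_L, C_rho):
    """Derived scale factors for a similar-material geomechanical model.

    Standard dimensional-analysis result under gravity similitude
    (assumed here for illustration): stress and elastic modulus scale
    as C_rho * C_L; dimensionless parameters are unchanged.
    """
    return {
        "stress": C_rho * C_L,
        "elastic_modulus": C_rho * C_L,
        "poisson_ratio": 1.0,
        "strain": 1.0,
    }

# Example: a 1:10 geometric scale with density ratio 0.6 gives a
# strength ratio of 0.06, so a 20 MPa prototype rock corresponds to a
# model material of about 1.2 MPa compressive strength.
scales = similarity_scales(0.1, 0.6)
print(round(scales["stress"] * 20.0, 2))  # → 1.2
```

The mix proportions (cement, sand, rubber powder, polystyrene) are then tuned, e.g. via orthogonal design tests as in the paper, until the model material hits these target values.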
Vogt, Martin; Bajorath, Jürgen
2011-10-24
A statistical approach named the conditional correlated Bernoulli model is introduced for modeling of similarity scores and predicting the potential of fingerprint search calculations to identify active compounds. Fingerprint features are rationalized as dependent Bernoulli variables and conditional distributions of Tanimoto similarity values of database compounds given a reference molecule are assessed. The conditional correlated Bernoulli model is utilized in the context of virtual screening to estimate the position of a compound obtaining a certain similarity value in a database ranking. Through the generation of receiver operating characteristic curves from cumulative distribution functions of conditional similarity values for known active and random database compounds, one can predict how successful a fingerprint search might be. The comparison of curves for different fingerprints makes it possible to identify fingerprints that are most likely to identify new active molecules in a database search given a set of known reference molecules.
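The Tanimoto similarity values being modeled above, and the database ranking they induce, can be sketched in a few lines. The compound names and bit sets below are invented for illustration.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints,
    each given as a set of on-bit positions."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def rank_database(reference, database):
    """Rank database compounds by fingerprint similarity to a
    reference molecule, as in a virtual screening run (sketch)."""
    return sorted(database.items(),
                  key=lambda kv: tanimoto(reference, kv[1]),
                  reverse=True)

# Toy screen: cpd_a shares most on-bits with the reference and should
# top the ranking, cpd_b the fewest.
ref = {1, 2, 3, 4}
db = {"cpd_a": {1, 2, 3, 4, 5}, "cpd_b": {1, 9}, "cpd_c": {2, 3, 4}}
print([name for name, _ in rank_database(ref, db)])  # → ['cpd_a', 'cpd_c', 'cpd_b']
```

The conditional correlated Bernoulli model then asks, for a given reference, how such similarity values are distributed over known actives versus random database compounds, which is what allows the ranking position of an active to be predicted in advance.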
Global energy modeling - A biophysical approach
Energy Technology Data Exchange (ETDEWEB)
Dale, Michael
2010-09-15
This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
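To make the idea of a dynamic EROI function concrete, the sketch below pairs a learning-curve EROI for renewables with a depletion-driven decline for a fossil resource. The functional forms and every parameter value are assumptions for illustration only; they are not taken from the paper.

```python
import math

def eroi_renewable(cumulative_capacity, eroi_0=10.0, learning_gain=0.2):
    """Illustrative EROI with technological learning: each doubling of
    cumulative installed capacity improves EROI by a fixed fraction.
    All parameter values are assumed, not from the paper."""
    doublings = math.log2(max(cumulative_capacity, 1.0))
    return eroi_0 * (1.0 + learning_gain) ** doublings

def eroi_fossil(cumulative_extraction, eroi_0=30.0, decline=0.05):
    """Illustrative counterpart for a depletable resource: EROI falls
    exponentially with cumulative extraction as quality degrades."""
    return eroi_0 * math.exp(-decline * cumulative_extraction)
```

In a biophysical scenario model, functions like these would feed back into how much gross energy production must be diverted to the energy sector itself at each time step.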
A POMDP approach to Affective Dialogue Modeling
Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Keller, E.; Marinaro, M.; Bratanic, M.
2007-01-01
We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach with observations composed of the observed user's
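The POMDP machinery underlying such a dialogue model rests on Bayesian belief updates over hidden (here, affective) user states: b'(s') is proportional to O(o | s', a) multiplied by the sum over s of T(s' | s, a) b(s). The sketch below is a generic belief update with an invented two-state affective example; the states, action, and probabilities are illustrative assumptions, not the paper's model.

```python
def belief_update(belief, action, observation, T, O):
    """One Bayesian POMDP belief update (illustrative sketch).

    belief: {state: probability}; T[a][s][s2] is the transition
    probability, O[a][s2][o] the observation probability.
    """
    states = list(belief)
    new_belief = {}
    for s2 in states:
        predicted = sum(T[action][s][s2] * belief[s] for s in states)
        new_belief[s2] = O[action][s2][observation] * predicted
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

# Hypothetical affective example: the user is "calm" or "frustrated";
# after the system asks a question, it observes an angry reply, so the
# belief should shift toward "frustrated".
T = {"ask": {"calm": {"calm": 0.8, "frustrated": 0.2},
             "frustrated": {"calm": 0.3, "frustrated": 0.7}}}
O = {"ask": {"calm": {"polite": 0.9, "angry": 0.1},
             "frustrated": {"polite": 0.2, "angry": 0.8}}}
b = belief_update({"calm": 0.5, "frustrated": 0.5}, "ask", "angry", T, O)
```

The dialogue policy would then choose the next system action as a function of this updated belief rather than of a single assumed user state.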
The chronic diseases modelling approach
Hoogenveen RT; Hollander AEM de; Genugten MLL van; CCM
1998-01-01
A mathematical model structure is described that can be used to simulate the changes of the Dutch public health state over time. The model is based on the concept of demographic and epidemiologic processes (events) and is mathematically based on the lifetable method. The population is divided over s
Rosas, Marcela; Osorio, Fabiola; Robinson, Matthew J; Davies, Luke C; Dierkes, Nicola; Jones, Simon A; Reis e Sousa, Caetano; Taylor, Philip R
2011-02-01
We have examined the potential to generate bona fide macrophages (MØ) from conditionally immortalised murine bone marrow precursors. MØ can be derived from Hoxb8 conditionally immortalised macrophage precursor cell lines (MØP) using either M-CSF or GM-CSF. When differentiated in GM-CSF (GM-MØP) the resultant cells resemble GM-CSF bone marrow-derived dendritic cells (BMDC) in morphological phenotype, antigen phenotype and functional responses to microbial stimuli. In spite of this high similarity between the two cell types and the ability of GM-MØP to effectively present antigen to a T-cell hybridoma, these cells are comparatively poor at priming the expansion of IFN-γ responses from naïve CD4(+) T cells. The generation of MØP from transgenic or genetically aberrant mice provides an excellent opportunity to study the inflammatory role of GM-MØP, and reduces the need for mouse colonies in many studies. Hence differentiation of conditionally immortalised MØPs in GM-CSF represents a unique in vitro model of inflammatory monocyte-like cells, with important differences from bone marrow-derived dendritic cells, which will facilitate functional studies relating to the many 'sub-phenotypes' of inflammatory monocytes.
A Unified Approach to Modeling and Programming
DEFF Research Database (Denmark)
Madsen, Ole Lehrmann; Møller-Pedersen, Birger
2010-01-01
SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...
Marcelos, Maria Fátima; Nagem, Ronaldo L.
2010-06-01
Our objective is to contribute to the teaching of Classical Darwinian Evolution by means of a study of analogies and metaphors. Throughout the history of knowledge about Evolution and in Science teaching, tree structures have been used as analogs to refer to Evolution, such as by Darwin in the Tree of Life passage contained in On The Origin of Species (1859). We analyze the analogies and metaphors found in the Darwinian text the Tree of Life and propose Comparative Structural Models of Similarities and Differences between the vehicle and target, considering the viability of their use in teaching Sciences. Our foundation is the Theory of Conceptual Metaphor by Lakoff and Johnson (1980) and the Methodology of Teaching with Analogies (MECA) by Nagem et al. (2001). The analogies and metaphors were classified and analyzed and the similarities and differences were highlighted. We found conceptual metaphors in the text. The analogies and metaphors in the Tree of Life are complex and appropriate for didactic use, but require an adequate methodological approach.
Comparative modeling of the human monoamine transporters: similarities in substrate binding.
Koldsø, Heidi; Christiansen, Anja B; Sinning, Steffen; Schiøtt, Birgit
2013-02-20
The amino acid compositions of the substrate binding pockets of the three human monoamine transporters are compared, as is the orientation of the endogenous substrates, serotonin, dopamine, and norepinephrine, bound in these. Through a combination of homology modeling, induced fit dockings, molecular dynamics simulations, and uptake experiments in mutant transporters, we propose a common binding mode for the three substrates. The longitudinal axes of the substrates are similarly oriented within the binding pockets, forming an ionic interaction between the ammonium group and a highly conserved aspartate: Asp98 (serotonin transporter, hSERT), Asp79 (dopamine transporter, hDAT), and Asp75 (norepinephrine transporter, hNET). The 6-position of serotonin and the para-hydroxyl groups of dopamine and norepinephrine were found to face Ala173 in hSERT, Gly153 in hDAT, and Gly149 in hNET. Three rotations of the substrates around the longitudinal axis were identified. In each mode, an aromatic hydroxyl group of the substrates occupied equivalent volumes of the three binding pockets, where small changes in amino acid composition explain the differences in selectivity. Uptake experiments support that the 5-hydroxyl group of serotonin and the meta-hydroxyl groups of norepinephrine and dopamine are placed in the hydrophilic pocket around Ala173, Ser438, and Thr439 in hSERT, corresponding to Gly149, Ser419, and Ser420 in hNET and Gly153, Ser422, and Ala423 in hDAT. Furthermore, hDAT was found to possess an additional hydrophilic pocket around Ser149 to accommodate the para-hydroxyl group. Understanding these subtle differences between the binding site compositions of the three transporters is imperative for understanding the substrate selectivity, which could eventually aid in developing future selective medicines.
Directory of Open Access Journals (Sweden)
Maria Moroni
Full Text Available BACKGROUND: The animal efficacy rule addressing development of drugs for selected disease categories has pointed out the need to develop alternative large animal models. Based on this rule, the pathophysiology of the disease in the animal model must be well characterized and must reflect that in humans. So far, manifestations of the acute radiation syndrome (ARS) have been extensively studied in only two large animal models, the non-human primate (NHP) and the canine. We are evaluating the suitability of the minipig as an additional large animal model for development of radiation countermeasures. We have previously shown that the Göttingen minipig manifests hematopoietic ARS phases and symptoms similar to those observed in canines, NHPs, and humans. PRINCIPAL FINDINGS: We establish here the LD50/30 dose (the radiation dose at which 50% of the animals succumb within 30 days), and show that at this dose the time of nadir and the duration of cytopenia resemble those observed for NHPs and canines, and closely mimic the kinetics of blood cell depletion and recovery in human patients with reversible hematopoietic damage (H3 category, METREPOL approach). No signs of GI damage in terms of diarrhea or shortening of villi were observed at doses up to 1.9 Gy. Platelet counts at days 10 and 14, the number of days to reach critical platelet values, the duration of thrombocytopenia, the neutrophil stress response at 3 hours and count at 14 days, and the CRP-to-platelet ratio were correlated with survival. The ratios between neutrophils, lymphocytes and platelets were significantly correlated with exposure to irradiation at different time intervals. SIGNIFICANCE: As a non-rodent animal model, the minipig offers a useful alternative to NHPs and canines, with attractive features including ARS resembling human ARS, cost, and regulatory acceptability. Use of the minipig may allow accelerated development of radiation countermeasures.
Szekeres models: a covariant approach
Apostolopoulos, Pantelis S
2016-01-01
We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an \emph{average scale length} can be defined \emph{covariantly}, which satisfies a 2d equation of motion driven by the \emph{effective gravitational mass} (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition, the notions of the Apparent and Absolute Apparent Horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used in order to express the Sachs optical equations in a covariant form and analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.
Matrix Model Approach to Cosmology
Chaney, A; Stern, A
2015-01-01
We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...
Institute of Scientific and Technical Information of China (English)
LEE Hyeon-deok; SON Myeong-jo; OH Min-jae; LEE Hyung-woo; KIM Tae-wan
2012-01-01
In early 2000, large domestic shipyards introduced shipbuilding 3D computer-aided design (CAD) to the hull production design process to define manufacturing and assembly information. The production design process accounts for most of the man-hours (M/H) of the entire design process and is closely connected to yard production because designs must take into account the production schedule of the shipyard, the current state of the dock needed to mount the ship's block, and supply information. Therefore, many shipyards are investigating the complete automation of the production design process to reduce the M/H for designers. However, these problems are still currently unresolved, and a clear direction is needed for research on the automatic design base of manufacturing rules, batches reflecting changed building specifications, batch updates of boundary information for hull members, and management of the hull model change history to automate the production design process. In this study, a process was developed to aid production design engineers in designing a new ship's hull block model from that of a similar ship previously built, based on AVEVA Marine. An automation system that uses the similar ship's hull block model is proposed to reduce M/H and human errors by the production design engineer. First, scheme files holding important information were constructed in a database to automatically update hull block model modifications. Second, for batch updates, the database's tables, including building specifications, and the referential integrity of a relational database were compared. In particular, this study focused on reflecting the frequent modification of building specifications and regeneration of boundary information of the adjacent panel due to changes in a specific panel. Third, a rollback function is proposed in which the database (DB) is used to return to the previously designed panels.
A new approach to adaptive data models
Directory of Open Access Journals (Sweden)
Ion LUNGU
2016-12-01
Full Text Available Over the last decade, there has been a substantial increase in the volume and complexity of the data we collect, store and process. We are now aware of the increasing demand for real-time data processing in every continuous business process that evolves within the organization. We witness a shift from a traditional static data approach to a more adaptive model approach. This article aims to extend understanding in the field of data models used in information systems by examining how an adaptive data model approach for managing business processes can help organizations adapt on the fly and build dynamic capabilities to react in a dynamic environment.
Modeling software behavior a craftsman's approach
Jorgensen, Paul C
2009-01-01
A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models, focusing on what the software is rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. Based on the popular and continually evolving course on requirements specification models taught by the auth...
Bruns, Gina L; Carter, Michele M
2015-04-01
Media exposure has been positively correlated with body dissatisfaction. While body image concerns are common, being African American has been found to be a protective factor in the development of body dissatisfaction. Participants viewed ten advertisements showing (1) ethnically-similar thin models; (2) ethnically-different thin models; (3) ethnically-similar plus-sized models; or (4) ethnically-diverse plus-sized models. Following exposure, body image was measured. African American women had less body dissatisfaction than Caucasian women. Ethnically-similar thin-model conditions did not elicit greater body dissatisfaction scores than ethnically-different thin or plus-sized model conditions, nor did the ethnicity of the model affect ratings of body dissatisfaction for women of either race. There were no differences among the African American women exposed to plus-sized versus thin models. Among Caucasian women, exposure to plus-sized models resulted in greater body dissatisfaction than exposure to thin models. Results support existing literature showing that African American women experience less body dissatisfaction than Caucasian women, even following exposure to an ethnically-similar thin model. Additionally, women exposed to plus-sized model conditions experienced greater body dissatisfaction than those shown thin models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Current approaches to gene regulatory network modelling
Directory of Open Access Journals (Sweden)
Brazma Alvis
2007-09-01
Full Text Available Abstract Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
A Model for Comparative Analysis of the Similarity between Android and iOS Operating Systems
Directory of Open Access Journals (Sweden)
Lixandroiu R.
2014-12-01
Full Text Available Due to the recent expansion of mobile devices, in this article we attempt an analysis of two of the most widely used mobile operating systems (OSs). The analysis is based on calculating Jaccard's similarity coefficient. To complete the analysis, we developed a hierarchy of factors for evaluating OSs. The analysis has shown that the two OSs are similar in terms of functionality, but there are a number of factors that, when weighted, make a difference.
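Jaccard's similarity coefficient, on which the comparison above rests, can be sketched in a few lines. The feature names below are hypothetical stand-ins for the weighted factor hierarchy developed in the article:

```python
def jaccard(a, b):
    """Jaccard similarity coefficient |A ∩ B| / |A ∪ B| for two feature sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return len(a & b) / len(a | b)

# Hypothetical feature lists for the two mobile OSs (illustrative only).
android = {"multitasking", "widgets", "nfc", "app_store", "voice_assistant"}
ios = {"multitasking", "app_store", "voice_assistant", "facetime"}

print(round(jaccard(android, ios), 3))  # → 0.5
```

The coefficient ranges from 0 (disjoint feature sets) to 1 (identical), so a value of 0.5 indicates moderate functional overlap.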
Model Oriented Approach for Industrial Software Development
Directory of Open Access Journals (Sweden)
P. D. Drobintsev
2015-01-01
Full Text Available The article considers the specifics of a model oriented approach to software development based on the usage of Model Driven Architecture (MDA), Model Driven Software Development (MDSD) and Model Driven Development (MDD) technologies. Benefits of using this approach in the software development industry are described. The main emphasis is put on system design, automated code generation for large systems, verification, proof of system properties and reduction of bug density. Drawbacks of the approach are also considered. The approach proposed in the article is specific to industrial software systems development. These systems are characterized by different levels of abstraction used in the modeling and code development phases. The approach allows the model to be detailed down to the level of the system code while preserving the verified model semantics, and provides checking of the whole detailed model. Steps for translating abstract data structures (including transactions, signals and their parameters) into data structures used in the detailed system implementation are presented. Also the grammar of a language for specifying rules that transform abstract model data structures into real system detailed data structures is described. The results of applying the proposed method in industrial technology are shown. The article is published in the authors' wording.
Walter, F.; Bruch, H.
2008-01-01
This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship quality ...
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Dr. S. Jayakumar; Geetha, S.
2017-01-01
A mathematical model is an idealization of a real-world phenomenon and never a completely accurate representation. Any model has its limitations; a good one can provide valuable results and conclusions. A mathematical model is a construct designed to study a particular real-world system or behavior of interest. The model allows us to reach mathematical conclusions about the behavior; these conclusions can be interpreted to help a decision maker plan for the future. Most models simp...
Differences in Effects of Zuojin Pills(左金丸)and Its Similar Formulas on Wei Cold Model in Rats
Institute of Scientific and Technical Information of China (English)
赵艳玲; 史文丽; 山丽梅; 王伽伯; 赵海平; 肖小河
2009-01-01
Objective: To explore the effects of Zuojin Pills (左金丸) and its similar formulas on the stomach cold syndrome in a Wei cold model in rats. Methods: The rat Wei cold model was established by intragastric administration of glacial NaOH, and the gastric mucosa injury indices, together with the levels of motilin and gastrin in the stomach, were determined. The preventive and curative effects of Zuojin Pills and its similar formulas on gastric mucosa injury were investigated. Results: Zuojin Pills and its similar formul...
DEFF Research Database (Denmark)
Andersen, Allan T.; Nielsen, Bo Friis
1997-01-01
We present a modelling framework and a fitting method for modelling second order self-similar behaviour with the Markovian arrival process (MAP). The fitting method is based on fitting to the autocorrelation function of counts of a second order self-similar process. It is shown that with this fitting algorithm it is possible to closely match the autocorrelation function of counts for a second order self-similar process over 3-5 time-scales with 8-16 state MAPs with a very simple structure, i.e. a superposition of 3 and 4 interrupted Poisson processes (IPP) respectively and a Poisson process. The fitting...
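The target of such a fitting procedure, the autocorrelation function of counts of an exactly second-order self-similar process with Hurst parameter H, has a standard closed form; the sketch below states that formula, not code from the paper:

```python
def ss_autocorrelation(k, hurst):
    """Autocorrelation at lag k >= 1 of an exactly second-order self-similar
    process: r(k) = 0.5 * ((k+1)^{2H} - 2*k^{2H} + (k-1)^{2H})."""
    h2 = 2.0 * hurst
    return 0.5 * ((k + 1) ** h2 - 2.0 * k ** h2 + (k - 1) ** h2)

# For H = 0.5 the counts are uncorrelated; for H > 0.5 the correlations
# decay slowly (long-range dependence), which is what the MAP must mimic.
print(ss_autocorrelation(1, 0.5))           # → 0.0
print(round(ss_autocorrelation(1, 0.8), 4))  # → 0.5157
```

Matching this target over several time-scales is exactly the "3-5 time-scales" criterion mentioned in the abstract.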
Swindell, William R.; Johnston, Andrew; Carbajal, Steve; Han, Gangwen; Wohn, Christian; Lu, Jun; Xing, Xianying; Nair, Rajan P.; Voorhees, John J.; Elder, James T.; Wang, Xiao-Jing; Sano, Shigetoshi; Prens, Errol P.; DiGiovanni, John; Pittelkow, Mark R.; Ward, Nicole L.; Gudjonsson, Johann E.
2011-01-01
Development of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features of the huma
W.R. Swindell (William R.); A. Johnston (Andrew); S. Carbajal (Steve); G. Han (Gangwen); C.T. Wohn (Christopher); J. Lu (Jun); X. Xing (Xianying); R.P. Nair (Rajan P.); J.J. Voorhees (John); J.T. Elder (James); X.J. Wang (Xian Jiang); S. Sano (Shigetoshi); E.P. Prens (Errol); J. DiGiovanni (John); M.R. Pittelkow (Mark R.); N.L. Ward (Nicole); J.E. Gudjonsson (Johann Eli)
2011-01-01
textabstractDevelopment of a suitable mouse model would facilitate the investigation of pathomechanisms underlying human psoriasis and would also assist in development of therapeutic treatments. However, while many psoriasis mouse models have been proposed, no single model recapitulates all features
Chuk, Tim; Chan, Antoni B; Hsiao, Janet H
2017-05-04
The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...
Institute of Scientific and Technical Information of China (English)
LÜ Wei-cai; XU Shao-quan
2004-01-01
When the similar single-difference methodology (SSDM) is used to solve for the deformation values of the monitoring points, the deformation information series is sometimes unstable. To overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in the plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
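A minimal scalar Kalman filter of the kind that could smooth such a deformation series can be sketched as follows; the random-walk state model, noise variances, and readings are illustrative assumptions, not values from the paper:

```python
def kalman_1d(zs, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.
    zs: noisy measurements; q: process-noise variance; r: measurement-noise
    variance; x0, p0: initial state estimate and its variance."""
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                # predict: uncertainty grows by process noise
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with measurement z
        p = (1 - k) * p
        out.append(x)
    return out

# Hypothetical noisy deformation readings around a true value of 5.0 mm.
est = kalman_1d([5.2, 4.9, 5.1, 5.0, 4.8])
```

After a few steps the gain settles and the filtered series varies much less than the raw measurements, which is the stabilizing effect the abstract describes.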
Modeling diffuse pollution with a distributed approach.
León, L F; Soulis, E D; Kouwen, N; Farquhar, G J
2002-01-01
The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.
Directory of Open Access Journals (Sweden)
Meimei Chen
2016-11-01
Full Text Available In this study, in silico approaches, including multiple QSAR modeling, structural similarity analysis, and molecular docking, were applied to develop QSAR classification models as a fast screening tool for identifying highly potent ABCA1 up-regulators targeting LXRβ, based on a series of new flavonoids. Initially, four modeling approaches, including linear discriminant analysis, support vector machine, radial basis function neural network, and classification and regression trees, were applied to construct different QSAR classification models. The statistical results indicated that these four kinds of QSAR models were powerful tools for screening highly potent ABCA1 up-regulators. Then, a consensus QSAR model was developed by combining the predictions from these four models. To discover new ABCA1 up-regulators at maximum accuracy, the compounds in the ZINC database that fulfilled the requirement of a structural similarity of 0.7 compared to a known potent ABCA1 up-regulator were subjected to the consensus QSAR model, which led to the discovery of 50 compounds. Finally, they were docked into the LXRβ binding site to understand their role in up-regulating ABCA1 expression. The excellent binding modes and docking scores of 10 hit compounds suggested they were highly potent ABCA1 up-regulators targeting LXRβ. Overall, this study provided an effective strategy to discover highly potent ABCA1 up-regulators.
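The consensus step, combining the predictions of several classifiers by majority vote, can be sketched generically. The lambda "models" and descriptor thresholds below are placeholders standing in for the fitted LDA, SVM, RBF-NN, and CART models of the study:

```python
def consensus_predict(models, x):
    """Majority-vote consensus of binary classifiers
    (1 = potent up-regulator, 0 = inactive); ties resolve to 0."""
    votes = sum(m(x) for m in models)
    return 1 if votes > len(models) / 2 else 0

# Hypothetical stand-ins for the four fitted QSAR classifiers; real models
# would map molecular descriptor vectors to class labels.
lda  = lambda x: 1 if x["logP"] > 2.0 else 0
svm  = lambda x: 1 if x["tpsa"] < 90 else 0
rbf  = lambda x: 1 if x["mw"] < 500 else 0
cart = lambda x: 1 if x["logP"] > 1.5 else 0

mol = {"logP": 2.7, "tpsa": 75, "mw": 342}
print(consensus_predict([lda, svm, rbf, cart], mol))  # → 1
```

A compound is flagged only when most models agree, which is why consensus screening tends to trade some recall for higher precision.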
Modular Approach with Rough Decision Models
Directory of Open Access Journals (Sweden)
Ahmed T. Shawky
2012-09-01
Full Text Available Decision models which adopt rough set theory have been used effectively in many real world applications. However, rough decision models suffer from high computational complexity when dealing with datasets of huge size. In this research we propose a new rough decision model that allows making decisions based on a modularity mechanism. According to the proposed approach, large-size datasets can be divided into arbitrary moderate-size datasets, then a group of rough decision models can be built as separate decision modules. The overall model decision is computed as the consensus decision of all decision modules through some aggregation technique. This approach provides a flexible and quick way for extracting decision rules from large-size information tables using rough decision models.
On use of the alpha stable self-similar stochastic process to model aggregated VBR video traffic
Institute of Scientific and Technical Information of China (English)
Huang Tianyun
2006-01-01
The alpha stable self-similar stochastic process has been proved an effective model for highly variable data traffic. A deep insight is provided into some special issues and considerations in using the process to model aggregated VBR video traffic. Different methods to estimate the stability parameter α and the self-similar parameter H are compared. Procedures to generate the linear fractional stable noise (LFSN) and the alpha stable random variables are provided. Model construction and quantitative comparisons with fractional Brownian motion (FBM) and real traffic are also examined. Open problems and future directions are also discussed.
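One standard estimator for the self-similar parameter H mentioned above is the aggregated-variance method. Note this is an illustrative sketch, not the paper's procedure, and for genuinely alpha-stable (infinite-variance) traffic other estimators would be preferred; here it is demonstrated on Gaussian noise, where H should come out near 0.5:

```python
import math
import random

def hurst_aggvar(series, block_sizes=(1, 2, 4, 8, 16)):
    """Estimate the Hurst parameter H by the aggregated-variance method:
    Var(X^(m)) ~ m^(2H-2), so the log-log slope b gives H = 1 + b/2."""
    xs, ys = [], []
    for m in block_sizes:
        n = len(series) // m
        agg = [sum(series[i * m:(i + 1) * m]) / m for i in range(n)]
        mean = sum(agg) / n
        var = sum((a - mean) ** 2 for a in agg) / (n - 1)
        xs.append(math.log(m))
        ys.append(math.log(var))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)  # least-squares slope
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return 1.0 + b / 2.0

# White noise has no long-range dependence, so the estimate should be ~0.5.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4096)]
print(round(hurst_aggvar(noise), 2))
```

For long-range-dependent traffic the aggregated variance decays more slowly than 1/m, pushing the estimate above 0.5.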
Poirrette, A. R.; Artymiuk, P. J.; Grindley, H. M.; Rice, D.W.; Willett, P.
1994-01-01
Using searching techniques based on algorithms derived from graph theory, we have established a similarity between a 3-dimensional cluster of side chains implicated in drug binding in influenza sialidase and side chains involved in isocitrate binding in Escherichia coli isocitrate dehydrogenase. The possible implications of the use of such comparative methods in drug design are discussed.
Modeling approach suitable for energy system
Energy Technology Data Exchange (ETDEWEB)
Goetschel, D. V.
1979-01-01
Recently, increased attention has been placed on optimization problems related to the determination and analysis of operating strategies for energy systems. Presented in this paper is a nonlinear model that can be used in the formulation of certain energy-conversion-system modeling problems. The model lends itself nicely to solution approaches based on nonlinear-programming algorithms and, in particular, to those methods falling into the class of variable metric algorithms for nonlinearly constrained optimization.
Stormwater infiltration trenches: a conceptual modelling approach.
Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare
2009-01-01
In recent years, limitations linked to traditional urban drainage schemes have been pointed out and new approaches are developing, introducing more natural methods for retaining and/or disposing of stormwater. These mitigation measures are generally called Best Management Practices or Sustainable Urban Drainage Systems, and they include practices such as infiltration and storage tanks in order to reduce the peak flow and retain part of the polluting components. The introduction of such practices in urban drainage systems entails an upgrade of existing modelling frameworks in order to evaluate their efficiency in mitigating the impact of urban drainage systems on receiving water bodies. While storage tank modelling approaches are quite well documented in the literature, some gaps remain regarding infiltration facilities, mainly due to the complexity of the physical processes involved. In this study, a simplified conceptual modelling approach for the simulation of infiltration trenches is presented. The model enables assessment of the performance of infiltration trenches. The main goal is to develop a model that can be employed for the assessment of the mitigation efficiency of infiltration trenches in an integrated urban drainage context. Particular care was given to the simulation of infiltration structures considering the performance reduction due to clogging phenomena. The proposed model has been compared with other simplified modelling approaches and with a physically based model adopted as a benchmark. The model performed better compared to other approaches considering both unclogged facilities and the effect of clogging. On the basis of a long-term simulation of six years of rain data, the performance and the effectiveness of an infiltration trench measure are assessed. The study confirmed the important role played by the clogging phenomenon on such infiltration structures.
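A toy conceptual reservoir illustrates the kind of simplification involved, including a crude clogging factor; this is an illustrative sketch under assumed parameters, not the model from the paper:

```python
def trench_sim(inflow, k=0.3, capacity=10.0, clog=1.0):
    """Toy conceptual infiltration trench: a reservoir with linear
    infiltration outflow k*storage, scaled by a clogging factor
    (1 = clean, 0 = fully clogged); excess over capacity spills
    as overflow. Returns per-step infiltration and overflow volumes."""
    storage, infiltrated, overflow = 0.0, [], []
    for q in inflow:
        storage += q
        spill = max(0.0, storage - capacity)  # excess spills immediately
        storage -= spill
        f = clog * k * storage                # infiltration during the step
        storage -= f
        infiltrated.append(f)
        overflow.append(spill)
    return infiltrated, overflow

# Hypothetical inflow series: a clean trench versus a half-clogged one.
clean_infil, clean_spill = trench_sim([4, 4, 4])
clogged_infil, clogged_spill = trench_sim([4, 4, 4], clog=0.5)
```

Even this toy version reproduces the qualitative finding of the study: clogging reduces infiltrated volume and pushes the trench toward overflow sooner.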
Challenges in structural approaches to cell modeling.
Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A
2016-07-31
Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. Copyright © 2016 Elsevier Ltd. All rights reserved.
Otero-Espinar, Victoria; Nieto, Juan J; Mira, Jorge
2013-01-01
An in-depth analytic study of a model of language dynamics is presented: a model which tackles the problem of the coexistence of two languages within a closed community of speakers, taking into account bilingualism and incorporating a parameter to measure the distance between languages. In previous numerical simulations, the model yielded that coexistence might lead to survival of both languages, with monolingual speakers alongside a bilingual community, or to extinction of the weaker tongue, depending on different parameters. In this paper, that study is completed with thorough analytical calculations that settle the results in a robust way, and previous results are refined with some modifications. From the present analysis it is possible to almost completely assay the number and nature of the equilibrium points of the model, which depend on its parameters, as well as to build a phase space based on them. Also, we obtain conclusions on the way the languages evolve with time. Our rigorous considerations also sug...
Gor, G Yu
2009-01-01
The paper presents an analytical description of the growth of a two-component bubble in a binary liquid-gas solution. We obtain an asymptotic self-similar time dependence of the bubble radius and analytical expressions for the non-steady profiles of dissolved gases around the bubble. We show that the necessary condition for the self-similar regime of bubble growth is a constant, steady-state composition of the bubble. The equation for the steady-state composition is obtained. We reveal the dependence of the steady-state composition on the solubility laws of the bubble components. In addition, universal expressions for the steady-state composition, independent of the solubility laws, are obtained for the case of strong supersaturations, which are typical for homogeneous nucleation of a bubble.
Building Water Models, A Different Approach
Izadi, Saeed; Onufriev, Alexey V
2014-01-01
Simplified, classical models of water are an integral part of atomistic molecular simulations, especially in biology and chemistry where hydration effects are critical. Yet, despite several decades of effort, these models are still far from perfect. Presented here is an alternative approach to constructing point charge water models, currently the most commonly used type. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than symmetry. Instead, we optimize the distribution of point charges to best describe the "electrostatics" of the water molecule, which is key to many unusual properties of liquid water. The search for the optimal charge distribution is performed in the 2D parameter space of key lowest multipole moments of the model, to find the best fit to a small set of bulk water properties at room temperature. A virtually exhaustive search is enabled via analytical equations that relate the charge distribution to the multipole moments. The resulting "optimal"...
Institute of Scientific and Technical Information of China (English)
ZHANG Di; ZHANG Min; YE Pei-da
2006-01-01
This article explores the short-range dependence (SRD) and the long-range dependence (LRD) of self-similar traffic generated by the fractal-binomial-noise-driven Poisson process (FBNDP) model, laying emphasis on the former. By simulation, the SRD decaying trends with increasing Hurst value and peak rate are obtained. After a comprehensive analysis of the accuracy of self-similarity intensity, the optimal range of peak rate is determined by taking into account the time cost, the accuracy of self-similarity intensity, and the effect of SRD.
Towards new approaches in phenological modelling
Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas
2014-05-01
Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). The limiting factor for a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop this kind of model, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. The suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
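The classic temperature-sum idea going back to Reaumur can be stated in a few lines; the base temperature and the week of data below are illustrative assumptions, not values from the study:

```python
def growing_degree_days(daily_min, daily_max, base=5.0):
    """Accumulate growing degree-days: the sum over days of
    max(0, (Tmin + Tmax)/2 - base), the classic forcing temperature sum."""
    total = 0.0
    for tmin, tmax in zip(daily_min, daily_max):
        total += max(0.0, (tmin + tmax) / 2.0 - base)
    return total

# One illustrative week of spring temperatures (degrees C).
tmin = [2, 3, 5, 6, 4, 7, 8]
tmax = [10, 12, 15, 14, 11, 16, 18]
print(growing_degree_days(tmin, tmax))  # → 30.5
```

In a semi-mechanistic phenological model, a stage such as flowering is predicted once this sum exceeds a species-specific forcing requirement, after the chilling requirement has been met.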
CREATING PRODUCT MODELS FROM POINT CLOUD OF CIVIL STRUCTURES BASED ON GEOMETRIC SIMILARITY
Directory of Open Access Journals (Sweden)
N. Hidaka
2015-05-01
The existing civil structures must be maintained in order to ensure their expected lifelong serviceability. Careful rehabilitation and maintenance planning plays a significant role in that effort. Recently, construction information modelling (CIM) techniques, such as product models, are increasingly being used to facilitate structure maintenance. Using this methodology, laser scanning systems can provide point cloud data that are used to produce highly accurate and dense representations of civil structures. However, while numerous methods for creating a single surface exist, part decomposition is required in order to create product models consisting of more than one part. This research aims at the development of a surface reconstruction system that utilizes point cloud data efficiently in order to create complete product models. The research proposes applying local shape matching to the input point clouds in order to define a set of representative parts. These representative parts are then polygonized and copied to locations where the same types of parts exist. The results of our experiments show that the proposed method can efficiently create product models using input point cloud data.
Annealed Ising model with site dilution on self-similar structures
Silva, V. S. T.; Andrade, R. F. S.; Salinas, S. R.
2014-11-01
We consider an Ising model on the triangular Apollonian network (AN), with a thermalized distribution of vacant sites. The statistical problem is formulated in a grand canonical ensemble, in terms of the temperature T and a chemical potential μ associated with the concentration of active magnetic sites. We use a well-known transfer-matrix method, with a number of adaptations, to write recursion relations between successive generations of this hierarchical structure. We also investigate the analogous model on the diamond hierarchical lattice (DHL). From the numerical analysis of the recursion relations, we obtain various thermodynamic quantities. In the μ →∞ limit, we reproduce the results for the uniform models: in the AN, the system is magnetically ordered at all temperatures, while in the DHL there is a ferromagnetic-paramagnetic transition at a finite value of T . Magnetic ordering, however, is shown to disappear for sufficiently large negative values of the chemical potential.
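For the uniform (μ → ∞) limit of the diamond hierarchical lattice mentioned above, the bond-coupling recursion and its finite-temperature fixed point can be sketched in a few lines. This is the standard b = 2 DHL recursion for the pure Ising model, not the paper's diluted grand-canonical calculation:

```python
import math

def renormalize(K):
    """One DHL decimation step: two bonds in series multiply tanh-couplings,
    two parallel paths add couplings, giving K' = 2 * atanh(tanh(K)^2)."""
    return 2.0 * math.atanh(math.tanh(K) ** 2)

def critical_coupling(lo=0.1, hi=2.0, iters=200):
    """Bisect g(K) = renormalize(K) - K to locate the unstable fixed point
    K* (the ferromagnetic-paramagnetic transition)."""
    g = lambda K: renormalize(K) - K
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Kc = critical_coupling()
```

Below K* the coupling renormalizes to zero (paramagnet), above it to infinity (ferromagnet), matching the finite-T transition quoted in the abstract.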
Theoretical Model for the Formation of Caveolae and Similar Membrane Invaginations
Sens, Pierre; Turner, Matthew S.
2004-01-01
We study a physical model for the formation of bud-like invaginations on fluid lipid membranes under tension, and apply this model to caveolae formation. We demonstrate that budding can be driven by membrane-bound proteins, provided that they exert asymmetric forces on the membrane that give rise to bending moments. In particular, caveolae formation does not necessarily require forces to be applied by the cytoskeleton. Our theoretical model is able to explain several features observed experimentally in caveolae, where proteins in the caveolin family are known to play a crucial role in the formation of caveolae buds. These include (1) the formation of caveolae buds with sizes in the 100-nm range and (2) the observation that certain N- and C-termini deletion mutants result in vesicles that are an order of magnitude larger. Finally, we discuss the possible origin of the morphological striations that are observed on the surfaces of the caveolae. PMID:15041647
Self-similar transformations of lattice-Ising models at critical temperatures
Feng, You-gang
2012-01-01
We classify geometric blocks that serve as spin carriers into simple blocks and compound blocks by their topological connectivity, define their fractal dimensions and describe the relevant transformations. From the hierarchical property of the transformations and a block-spin scaling law we obtain a relation between the block spin and its carrier's fractal dimension. By mapping we set up a block-spin Gaussian model and obtain a formula connecting the critical point and the minimal fractal dimension of the carrier, which guarantees the uniqueness of a fixed point corresponding to the critical point, turning the complicated calculation of the critical point into the simple one of the minimal fractal dimension. The highly accurate numerical results for the critical points of five conventional lattice-Ising models show our method to be very effective; it may be suitable for all lattice-Ising models. The origin of fluctuations in structure at the critical temperature is discussed. Our method not only explains the problems met in the renor...
Bentley, T William; Harris, H Carl; Ryu, Zoon Ha; Lim, Gui Taek; Sung, Dae Dong; Szajda, Stanley R
2005-10-28
Rate constants and product selectivities (S = ([ester product]/[acid product]) × ([water]/[alcohol solvent])) are reported for solvolyses of chloroacetyl chloride (3) at -10 °C and phenylacetyl chloride (4) at 0 °C in ethanol/water and methanol/water mixtures. Additional kinetic data are reported for solvolyses in acetone/water, 2,2,2-trifluoroethanol (TFE)/water, and TFE/ethanol mixtures. Selectivities and solvent effects for 3, including the kinetic solvent isotope effect (KSIE) of 2.18 for methanol, are similar to those for solvolyses of p-nitrobenzoyl chloride (1, Z = NO2); rate constants in acetone/water are consistent with a third-order mechanism, and rates and products in ethanol/water and methanol/water mixtures can be explained quantitatively by competing third-order mechanisms in which one molecule of solvent (alcohol or water) acts as a nucleophile and another acts as a general base (an addition/elimination reaction channel). Selectivities increase for 3 as water is added to alcohol. Solvent effects on rate constants for solvolyses of 3 are very similar to those of methyl chloroformate, but acetyl chloride shows a lower KSIE and a higher sensitivity to solvent ionizing power, explained by a change to an SN2/SN1 (ionization) reaction channel. Solvolyses of 4 undergo a change from the addition/elimination channel in ethanol to the ionization channel in aqueous ethanol (<80% v/v alcohol). The reasons for the change in reaction channels are discussed in terms of the gas-phase stabilities of acylium ions, calculated using Gaussian 03 (HF/6-31G(d), B3LYP/6-31G(d), and B3LYP/6-311G(d,p) MO theory).
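The selectivity definition above is plain arithmetic; a minimal helper (the concentrations below are made up for illustration, not values from the paper) might look like:

```python
def selectivity(ester, acid, water, alcohol):
    """Product selectivity S = ([ester]/[acid]) * ([water]/[alcohol solvent]).
    S > 1 means the alcohol is a more effective nucleophile than water,
    after correcting for the solvent composition."""
    return (ester / acid) * (water / alcohol)

# hypothetical product and solvent concentrations
S = selectivity(ester=0.3, acid=0.7, water=20.0, alcohol=10.0)
```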
Cohen, A.R.; Vitányi, P.M.B.
2015-01-01
Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or any other large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a similarity on a scale fr
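The NWD of two search terms is computed from aggregate page counts with the standard Cilibrasi-Vitányi formula; a sketch (the counts below are illustrative, not from any real search engine):

```python
import math

def nwd(fx, fy, fxy, N):
    """Normalized web distance:
    NWD(x, y) = (max(log fx, log fy) - log fxy) / (log N - min(log fx, log fy))
    fx, fy: page counts for each term alone; fxy: count for pages containing
    both; N: total number of pages indexed by the search engine."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(N) - min(lx, ly))

# identical terms co-occur on every page they appear on -> distance 0
d_same = nwd(1000, 1000, 1000, 10**10)
# terms that rarely co-occur get a positive distance
d_diff = nwd(1000, 2000, 500, 10**10)
```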
Merging tree algorithm of growing voids in self-similar and CDM models
Russell, Esra
2013-01-01
Observational studies show that voids are prominent features of the large-scale structure of the present-day Universe. Even though their emerging from the primordial density perturbations and evolutionary patterns differ from dark matter haloes, N-body simulations and theoretical models have shown t
Institute of Scientific and Technical Information of China (English)
GAO Yan-fa; ZHONG Ya-ping; LI Jian-min; WANG Su-hua; ZHANG Qing-song
2007-01-01
A subsidence prediction theory for grouting into bed separations was developed. Reducing ground subsidence by grouting was carried out on eight fully-mechanized top-coal caving faces, using continuous multiple-layer grouting to obtain experimental results on subsidence reduction under full mining. A similar-material model that can be dismantled under constant-temperature and constant-humidity conditions was developed. The model was used to simulate the evolution of overburden bed separation under these temperature and humidity constraints, and at the same time to test the hardening process of the similar materials.
Carmichael, Kieran L C; Sellbom, Martin; Liggett, Jacqueline; Smith, Alexander
2016-11-01
The current study examined whether avoidant personality disorder (AvPD) and social anxiety disorder (SAD) should be considered distinct disorder constructs, which is a persistent and controversial issue in the clinical literature. We examined whether relative scores on SAD and AvPD were associated with the same personality profile and severity of impairment. The current research used a cross-sectional design and self-report inventories, including multiple measures of personality, impairment and psychopathology. Results from a mixed sample of 402 university and community participants found that scores on AvPD and SAD were similarly associated with personality traits and impairment indices. Moreover, a latent construct accounting for the shared variance for AvPD and SAD was associated with personality traits and impairment, whereas the residuals representing the uniquenesses of these disorder constructs were not. These findings support the view that AvPD and SAD are similar disorders from a phenotypic personality trait and impairment perspective. These findings are contrary to a prevalent view in the literature, known as severity continuum hypothesis, because the two disorders could not be meaningfully differentiated based on severity of impairment. Copyright © 2016 John Wiley & Sons, Ltd.
Modelling Coagulation Systems: A Stochastic Approach
Ryazanov, V V
2011-01-01
A general stochastic approach to the description of coagulating aerosol systems is developed. As the object of description one can consider arbitrary mesoscopic quantities (the number of aerosol clusters, their sizes, etc.). The birth-and-death formalism for the number of clusters can be regarded as a special case of the generalized storage model. An application of the storage model to the number of monomers in a cluster is discussed.
A Multiple Model Approach to Modeling Based on LPF Algorithm
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on a model-on-demand tactic, a multiple-model approach to modeling nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, the data sets that match the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which together realize accurate modeling of the global system. Compared with other methods, the simulation results show good performance: the estimation is simple, effective and reliable.
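The model-on-demand idea can be sketched in a few lines: given a query point, select the k nearest historical samples and fit a degree-1 local polynomial to them. This is a simplified scalar stand-in for the LPF algorithm described above; the data set and neighbourhood size are made up:

```python
def local_linear_predict(history, x_query, k=5):
    """Fit a local degree-1 polynomial (simple linear regression) to the k
    historical (x, y) samples nearest the current working point, and
    evaluate it at the query point."""
    nearest = sorted(history, key=lambda p: abs(p[0] - x_query))[:k]
    xs = [p[0] for p in nearest]
    ys = [p[1] for p in nearest]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return my + slope * (x_query - mx)

# hypothetical historical input-output data from a nonlinear system y = x^2
data = [(x / 10, (x / 10) ** 2) for x in range(-20, 21)]
y_hat = local_linear_predict(data, 1.05)   # true value is 1.1025
```

As the working point moves, a new neighbourhood is selected and a new local model fitted, which is the "multiple model" aspect of the approach.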
Towards a Multiscale Approach to Cybersecurity Modeling
Energy Technology Data Exchange (ETDEWEB)
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
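The single-scale baseline that the multiscale analog generalizes is the classic all-pairs shortest-path problem, solvable with Floyd-Warshall. A sketch on a toy network (the node roles and edge weights are hypothetical, chosen only to illustrate the attacker-to-target distance metric):

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest paths on an undirected weighted graph,
    the single-scale counterpart of the multiscale metric above."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# toy network: node 0 = attacker's foothold, node 4 = sensitive machine
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 4, 1), (0, 2, 5)]
dist = floyd_warshall(5, edges)
```

In the paper's setting, the multiscale version would compute such distances on a hierarchy of coarsened graphs rather than on the full node set.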
Modeling Land Use Change In A Tropical Environment Using Similar Hydrologic Response Units
Guardiola-Claramonte, M.; Troch, P.
2006-12-01
Montane mainland South East Asia comprises areas of great biological and cultural diversity. Over the last decades the region has overcome an important conversion from traditional agriculture to cash crop agriculture driven by regional and global markets. Our study aims at understanding the hydrological implications of these land use changes at the catchment scale. In 2004, networks of hydro-meteorological stations observing water and energy fluxes were installed in two 70 km2 catchments in Northern Thailand (Chiang Mai Province) and Southern China (Yunnan Province). In addition, a detailed soil surveying campaign was done at the moment of instrument installation. Land use is monitored periodically using satellite data. The Thai catchment is switching from small agricultural fields to large extensions of cash crops. The Chinese catchment is replacing the traditional forest for rubber plantations. A first comparative study based on catchments' geomorphologic characteristics, field observations and rainfall-runoff response revealed the dominant hydrologic processes in the catchments. Land use information is then translated into three different Hydrologic Response Units (HRU): rice paddies, pervious and impervious surfaces. The pervious HRU include different land uses such as different stages of forest development, rubber plantations, and agricultural fields; the impervious ones are urban areas, roads and outcrops. For each HRU a water and energy balance model is developed incorporating field observed hydrologic processes, measured field parameters, and literature-based vegetation and soil parameters to better describe the root zone, surface and subsurface flow characteristics without the need of further calibration. The HRU water and energy balance models are applied to single hillslopes and their integrated hydrologic response are compared for different land covers. Finally, the response of individual hillslopes is routed through the channel network to represent
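A per-HRU water balance of the kind described can be sketched as a single-bucket model. The storage capacity, initial storage, and saturation-excess runoff rule below are illustrative assumptions, not the paper's field-parameterized formulation:

```python
def bucket_hru(rain, pet, capacity=100.0, s0=50.0):
    """Toy single-bucket water balance for one hydrologic response unit.
    rain, pet: time series (mm per step) of rainfall and potential ET.
    Returns the runoff series; actual ET is PET limited by storage, and
    runoff is the saturation excess above the bucket capacity."""
    s, runoff = s0, []
    for p, e in zip(rain, pet):
        s += p                      # add rainfall to storage
        et = min(e, s); s -= et     # actual ET limited by available water
        q = max(0.0, s - capacity)  # saturation-excess runoff
        s -= q
        runoff.append(q)
    return runoff

# two wet timesteps on a hypothetical pervious HRU
q_series = bucket_hru([80.0, 80.0], [10.0, 10.0])
```

In the study's framework, each HRU type (rice paddy, pervious, impervious) would get its own parameterization, and the hillslope responses would then be routed through the channel network.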
Directory of Open Access Journals (Sweden)
Marco A. L. Zuffi
2007-11-01
Sexes in Chelonia display marked differences. Sexual size dimorphism (SSD) is important in evolutionary biology, as different sexual strategies result in species-specific selection. Biometric variation in male and female tortoises of two species is studied here. Eighteen biometric parameters were measured in 75 museum specimens (20 Testudo graeca; 55 T. hermanni). Nine of the 18 parameters in T. hermanni and two of 18 in T. graeca were sexually dimorphic. Multivariate analysis (principal component analysis) highlighted two components, with bridge length dominating the first and anal divergence the second. The bridge length can be used to separate sexes and species. Males of the two species were most different, whereas females of the two species overlapped in body shape measurements. We hypothesise that female similarity could be a by-product of reproductive biology and sexual selection that optimise individual fitness.
Markoff, Sera; Ceccobello, Chiara; Heemskerk, Martin; Cavecchi, Yuri; Polko, Peter; Meier, David
2017-08-01
Jets are ubiquitous and reveal themselves at different scales and redshifts, showing an extreme diversity in energetics, shapes and emission. Indeed jets are found to be characteristic features of black hole systems, such as X-ray binaries (XRBs) and active galactic nuclei (AGN), as well as of young stellar objects (YSOs) and gamma-ray bursts (GRBs). Observations suggest that jets are an energetically important component of the system that hosts them, because the jet power appears to be comparable to the accretion power. Significant evidence has been found of the impact of jets not only in the immediate proximity of the central object, but also on their surrounding environment, where they deposit the energy extracted from the accretion flow. Moreover, the inflow/outflow system produces radiation over the entire electromagnetic spectrum, from radio to X-rays. Therefore it is a compelling problem to be solved and deeply understood. I present a new integration scheme to solve radially self-similar, stationary, axisymmetric relativistic magneto-hydrodynamic (MHD) equations describing collimated, relativistic outflows crossing smoothly all the singular points (the Alfvén point and the modified slow/fast points). For the first time, the integration can be performed all the way from the disk mid-plane to downstream of the modified fast point. I will discuss an ensemble of jet solutions showing diverse jet dynamics (jet Lorentz factor ~ 1-10) and geometric properties (i.e. shock height ~ 10^3-10^7 gravitational radii), which makes our model suitable for application to many different systems where a relativistic jet is launched.
Khishfe, Rola
2013-11-01
The purpose of this study was to (a) investigate the effectiveness of explicit nature of science (NOS) instruction in the context of controversial socioscientific issues and (b) explore whether acquired NOS understandings, explicitly taught in one socioscientific context, transfer into other similar contexts (familiar and unfamiliar). Participants were 10th grade students in two intact sections at one high school. The treatment involved teaching a six-week unit about genetic engineering. For one group (non-NOS group), there was no explicit instruction about NOS. For the other group (NOS group), explicit instruction about three NOS aspects (subjective, empirical, and tentative) was dispersed across the genetic engineering unit. A questionnaire including two open-ended scenarios, in conjunction with semi-structured interviews, was used to assess the change in participants' understandings of NOS and their ability to transfer their acquired understandings into similar contexts. The first scenario involved a familiar context about genetically modified food and the second focused on an unfamiliar context about water fluoridation. Results showed no improvement in NOS understandings of participants in the non-NOS group in relation to the familiar and unfamiliar contexts. On the other hand, there was a general improvement in the NOS understandings of participants in the NOS group in relation to both the familiar and unfamiliar contexts. Implications for the transfer of participants' acquired NOS understandings, based on the distance between the context of learning and that of application, are highlighted and discussed in relation to the classroom learning environment.
Post-16 Biology--Some Model Approaches?
Lock, Roger
1997-01-01
Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)
Directory of Open Access Journals (Sweden)
CHEN Zhanlong
2016-02-01
A method for measuring the shape similarity of complex holed objects is proposed in this paper. The method extracts features including centroid distance, multilevel chord length, bending degree and concavity-convexity of a geometric object to construct complex functions based on multilevel bending degree and radius. The complex functions are capable of describing geometric shape from the whole down to its parts. The similarity between geometric objects can be measured by a shape descriptor based on the fast Fourier transform of the complex functions. Meanwhile, the matching degree of each scene of complex holed polygons can be obtained from scene completeness and a shape similarity model, and the multilevel features allow shape similarity measurement among complex geometric objects. Experiments on geometric objects of different spatial complexity show that the results match human perception and that the method is simple and precise.
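The FFT-based shape descriptor idea can be illustrated with the simplest boundary signature, centroid distance. The paper's complex functions additionally fold in chord length, bending degree and concavity-convexity, which are omitted in this sketch; normalizing the DFT magnitudes by the DC term makes the descriptor invariant to the starting point and to scale:

```python
import cmath, math

def centroid_distance_signature(points):
    """Distance of each boundary point from the shape centroid."""
    cx = sum(x for x, y in points) / len(points)
    cy = sum(y for x, y in points) / len(points)
    return [math.hypot(x - cx, y - cy) for x, y in points]

def fourier_descriptor(points):
    """|DFT| of the centroid-distance signature, normalized by the DC term.
    Shifting the starting point only changes DFT phases, so the magnitude
    spectrum is unchanged."""
    sig = centroid_distance_signature(points)
    n = len(sig)
    mags = [abs(sum(sig[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]
    return [m / mags[0] for m in mags[1:]]

# the same diamond-shaped boundary, sampled from two different start points
shape_a = [(2, 0), (0, 1), (-2, 0), (0, -1)]
shape_b = [(0, 1), (-2, 0), (0, -1), (2, 0)]
d1, d2 = fourier_descriptor(shape_a), fourier_descriptor(shape_b)
```

Two shapes can then be compared by, e.g., the Euclidean distance between their descriptors.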
Directory of Open Access Journals (Sweden)
Hossien Pourghassem
2011-04-01
Relevance feedback approaches are used to improve the performance of content-based image retrieval systems. In this paper, a novel relevance feedback approach based on similarity-measure modification in an X-ray image retrieval system with fuzzy representation using a fuzzy attributed relational graph (FARG) is presented. In this approach, the optimum weight of each feature in the feature vector is calculated using the similarity rate between the query image and the relevant and irrelevant images in the user feedback. The calculated weight is used to tune the fuzzy graph matching algorithm as a modifier parameter in the similarity measure. The standard deviation of the retrieved image features is applied to calculate the optimum weight. The proposed image retrieval system uses a FARG for representation of images, a fuzzy graph matching algorithm as the similarity measure, and a semantic classifier based on a merging scheme for determination of the search space in the image database. To evaluate the relevance feedback approach in the proposed system, a standard X-ray image database consisting of 10000 images in 57 classes is used. The improvement of the evaluation parameters shows the proficiency and efficiency of the proposed system.
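A common way to turn standard deviations over the relevant set into feature weights is to weight each feature inversely to its spread: features that are consistent across the images the user marked relevant get a larger say in the similarity measure. This is a generic relevance-feedback heuristic, not the paper's FARG-specific tuning, and the feature vectors below are made up:

```python
def feedback_weights(relevant_features):
    """Weight each feature dimension by 1/std over the relevant images,
    then normalize the weights to sum to 1. A zero-variance feature falls
    back to weight 1.0 before normalization."""
    n = len(relevant_features)
    dims = len(relevant_features[0])
    weights = []
    for d in range(dims):
        col = [f[d] for f in relevant_features]
        mean = sum(col) / n
        std = (sum((v - mean) ** 2 for v in col) / n) ** 0.5
        weights.append(1.0 / std if std > 0 else 1.0)
    total = sum(weights)
    return [w / total for w in weights]

# hypothetical 2-D feature vectors of three images marked relevant:
# feature 0 is nearly constant (informative), feature 1 is noisy
relevant = [[0.90, 0.1], [0.92, 0.8], [0.91, 0.3]]
w = feedback_weights(relevant)
```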
Decomposition approach to model smart suspension struts
Song, Xubin
2008-10-01
Modeling and simulation studies are the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology for building models for the application of smart struts in vehicle suspension control development. The modeling approach is based on decomposition of the testing data. According to the strut functions, the data are dissected along both control and physical variables. The data sets are then characterized to represent different aspects of the strut working behaviors. Next, different mathematical equations can be built and optimized to best fit the corresponding data sets, respectively. In this way, model optimization is facilitated in comparison with the traditional approach of finding a globally optimal set of model parameters for a complicated nonlinear model from a series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible-fluid (CF) based struts. The model validation shows that this methodology can truly capture the macro-behaviors of these struts.
Self-similar voiding solutions of a single layered model of folding rocks
Dodwell, Timothy; Budd, Christopher; Hunt, Giles
2011-01-01
In this paper we derive an obstacle problem with a free boundary to describe the formation of voids at areas of intense geological folding. An elastic layer is forced by overburden pressure against a V-shaped rigid obstacle. Energy minimization leads to a representation as a nonlinear fourth-order ordinary differential equation, for which we prove there exists a unique solution. Drawing parallels with Kuhn-Tucker theory, virtual work, and ideas of duality, we highlight the physical significance of this differential equation. Finally we show this equation scales to a single parametric group, revealing a scaling law connecting the size of the void with the pressure/stiffness ratio. This paper is seen as the first step towards a full multilayered model with the possibility of voiding.
Heat transfer modeling an inductive approach
Sidebotham, George
2015-01-01
This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
Quigley, A; Williams, D R
2016-08-12
Self-interaction chromatography (SIC) has established itself as an important experimental technique for the measurement of the second osmotic virial coefficient B22. B22 data are critical for understanding a range of protein solution phenomena, particularly aggregation and crystallisation. A key limitation to the more extensive use of SIC is the need to develop a method for immobilising each specific protein of interest onto a chromatographic support. This requirement is both a time- and protein-consuming constraint, which means that SIC cannot be used as a high-throughput method for screening a wide range of proteins and their variants. Here an experimental framework is presented for estimating B22 values using Similar Interaction Chromatography (SimIC). This work uses experimental B23 and B32 data for lysozyme, lactoferrin, catalase and concanavalin A to reliably estimate B22 using arithmetic mean field approximations, and is demonstrated to give good agreement with SIC measurements of B22 for the same proteins. SimIC could form the basis of a rapid protein-variant screening method to assess the developability of protein therapeutic candidates for industrial and academic researchers with respect to aggregation behaviour, by eluting target proteins through a series of well-characterised protein-immobilized reference columns.
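One plausible reading of the "arithmetic mean field approximation" is that the cross virial coefficient is approximated as the mean of the self terms, B23 ≈ (B22 + B33)/2, which inverts to an estimate of B22 from one cross measurement against a reference protein. This is a sketch of that kind of estimate, not necessarily the paper's exact formula; the numbers are hypothetical:

```python
def estimate_b22(b23, b33):
    """Estimate a target protein's self virial coefficient B22 from the
    measured cross coefficient B23 against a reference protein with known
    B33, assuming the arithmetic-mean approximation B23 ~ (B22 + B33) / 2,
    so B22 ~ 2*B23 - B33. Units are whatever B23/B33 are reported in."""
    return 2.0 * b23 - b33

# hypothetical values: cross coefficient -2.0, reference self term -1.0
b22_est = estimate_b22(b23=-2.0, b33=-1.0)
```

Averaging such estimates over several reference columns would reduce the sensitivity to any single reference protein.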
2016-01-01
The aim of this study was to determine how representative wear scars of simulator-tested polyethylene (PE) inserts compare with retrieved PE inserts from total knee replacement (TKR). By means of a nonparametric self-organizing feature map (SOFM), wear scar images of 21 postmortem- and 54 revision-retrieved components were compared with six simulator-tested components that were tested either in displacement or in load control according to ISO protocols. The SOFM network was then trained with the wear scar images of postmortem-retrieved components since those are considered well-functioning at the time of retrieval. Based on this training process, eleven clusters were established, suggesting considerable variability among wear scars despite an uncomplicated loading history inside their hosts. The remaining components (revision-retrieved and simulator-tested) were then assigned to these established clusters. Five of the six simulator components were clustered together, suggesting that the network was able to identify similarities in loading history. However, the simulator-tested components ended up in a cluster at the fringe of the map containing only 10.8% of retrieved components. This may suggest that current ISO testing protocols were not fully representative of this TKR population, and protocols that better resemble patients' gait after TKR containing activities other than walking may be warranted. PMID:27597955
Cardot, J-M; Garcia Arieta, A; Paixao, P; Tasevska, I; Davit, B
2016-07-01
The US-FDA recently posted a draft guideline for industry recommending procedures necessary to obtain a biowaiver for immediate-release oral dosage forms based on the Biopharmaceutics Classification System (BCS). This review compares the present FDA BCS biowaiver approach with the existing European Medicines Agency (EMA) approach, with an emphasis on similarities, difficulties, and shared challenges. Some specifics of the current EMA BCS guideline are compared with those in the recently published draft US-FDA BCS guideline. In particular, similarities and differences in the EMA versus US-FDA approaches to establishing drug solubility, permeability, dissolution, and formulation suitability for BCS biowaiver are critically reviewed. Several case studies are presented to illustrate the (i) challenges of applying for BCS biowaivers for global registration in the face of differences in the EMA and US-FDA BCS biowaiver criteria, as well as (ii) challenges inherent in applying for BCS class I or III designation and common to both jurisdictions.
Lavender, Thomas Michael; Schamp, Brandon S; Lamb, Eric G
2016-01-01
Null models exploring species co-occurrence and trait-based limiting similarity are increasingly used to explore the influence of competition on community assembly; however, assessments of common models have not thoroughly explored the influence of variation in matrix size on error rates, in spite of the fact that studies have explored community matrices that vary considerably in size. To determine how smaller matrices, which are of greatest concern, perform statistically, we generated biologically realistic presence-absence matrices ranging in size from 3-50 species and sites, as well as associated trait matrices. We examined co-occurrence tests using the C-Score statistic and independent swap algorithm. For trait-based limiting similarity null models, we used the mean nearest neighbour trait distance (NN) and the standard deviation of nearest neighbour distances (SDNN) as test statistics, and considered two common randomization algorithms: abundance independent trait shuffling (AITS), and abundance weighted trait shuffling (AWTS). Matrices as small as three × three resulted in acceptable type I error rates (p ) was associated with increased type I error rates, particularly for matrices with fewer than eight species. Type I error rates increased for limiting similarity tests using the AWTS randomization scheme when community matrices contained more than 35 sites; a similar randomization used in null models of phylogenetic dispersion has previously been viewed as robust. Notwithstanding other potential deficiencies related to the use of small matrices to represent communities, the application of both classes of null model should be restricted to matrices with 10 or more species to avoid the possibility of type II errors. Additionally, researchers should restrict the use of the AWTS randomization to matrices with fewer than 35 sites to avoid type I errors when testing for trait-based limiting similarity. The AITS randomization scheme performed better in terms of
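The C-score statistic and independent-swap randomization used in the co-occurrence tests above are standard and easy to sketch. A minimal version on a 0/1 presence-absence matrix (rows = species, columns = sites; the toy matrix is made up):

```python
import random

def c_score(matrix):
    """Mean number of checkerboard units over all species pairs:
    CU_ij = (R_i - S_ij) * (R_j - S_ij), where R is a species' row total
    and S_ij is the number of sites the pair shares."""
    rows = len(matrix)
    totals = [sum(r) for r in matrix]
    cu, pairs = 0, 0
    for i in range(rows):
        for j in range(i + 1, rows):
            s = sum(a and b for a, b in zip(matrix[i], matrix[j]))
            cu += (totals[i] - s) * (totals[j] - s)
            pairs += 1
    return cu / pairs

def independent_swap(matrix, steps=1000, rng=random):
    """Randomize occurrences while preserving row and column totals by
    repeatedly swapping 2x2 checkerboard submatrices."""
    m = [row[:] for row in matrix]
    nr, nc = len(m), len(m[0])
    for _ in range(steps):
        r1, r2 = rng.sample(range(nr), 2)
        c1, c2 = rng.sample(range(nc), 2)
        if (m[r1][c1] == m[r2][c2] and m[r1][c2] == m[r2][c1]
                and m[r1][c1] != m[r1][c2]):
            m[r1][c1], m[r1][c2] = m[r1][c2], m[r1][c1]
            m[r2][c1], m[r2][c2] = m[r2][c2], m[r2][c1]
    return m
```

A null-model test then compares the observed C-score against its distribution over many swap-randomized matrices; the type I error concerns in the abstract are about how that comparison behaves on very small matrices.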
Enriching consumer health vocabulary through mining a social Q&A site: A similarity-based approach.
He, Zhe; Chen, Zhiwei; Oh, Sanghee; Hou, Jinghui; Bian, Jiang
2017-05-01
The widely known vocabulary gap between health consumers and healthcare professionals hinders information seeking and health dialogue of consumers on end-user health applications. The Open Access and Collaborative Consumer Health Vocabulary (OAC CHV), which contains health-related terms used by lay consumers, has been created to bridge this gap. Specifically, the OAC CHV facilitates consumers' health information retrieval by enabling consumer-facing health applications to translate between professional language and consumer-friendly language. To keep up with constantly evolving medical knowledge and language use, new terms need to be identified and added to the OAC CHV. User-generated content on social media, including social question and answer (social Q&A) sites, affords an enormous opportunity for mining consumer health terms. Existing methods of identifying new consumer terms from text typically use ad-hoc lexical-syntactic patterns and human review. Our study extends an existing method by extracting n-grams from a social Q&A textual corpus and representing them with a rich set of contextual and syntactic features. Using K-means clustering, our method, simiTerm, was able to identify terms that are both contextually and syntactically similar to existing OAC CHV terms. We tested our method on social Q&A corpora in two disease domains: diabetes and cancer. Our method outperformed three baseline ranking methods. A post-hoc qualitative evaluation by human experts further validated that our method can effectively identify meaningful new consumer terms on social Q&A.
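The idea of ranking candidate n-grams by their similarity to existing vocabulary terms can be illustrated with a toy sketch. This is a much-simplified stand-in for simiTerm (which uses contextual and syntactic features plus K-means clustering); the character-trigram similarity and all terms below are invented for illustration.

```python
import math
from collections import Counter

def char_ngrams(term, n=3):
    """Character trigram profile with light boundary padding."""
    padded = f" {term} "
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b[k] for k, v in a.items() if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

known = ["blood sugar", "high blood pressure"]   # stand-ins for OAC CHV terms
candidates = ["blood sugar level", "pizza recipe", "blood pressure cuff"]
# Rank each candidate by its best similarity to any known vocabulary term.
ranked = sorted(
    candidates,
    key=lambda c: max(cosine(char_ngrams(c), char_ngrams(k)) for k in known),
    reverse=True,
)
print(ranked)
```

Health-related candidates surface above unrelated ones; in the actual method the feature vectors encode context and syntax rather than surface characters.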
Ylärinne, Janne H; Qu, Chengjuan; Lammi, Mikko J
2017-04-01
Numerous biomaterials are being considered for cartilage tissue engineering, while scaffold-free systems have also been introduced. It is therefore important to know whether scaffolds improve the formation of manufactured neocartilage. This study compares scaffold-free cultures with two scaffold-containing ones. Six million bovine primary chondrocytes were embedded in HyStem™ or HydroMatrix™ scaffolds, or suspended in scaffold-free chondrocyte culture medium, and then loaded into agarose-gel-supported culture well pockets. Neocartilages were grown in hypertonic high-glucose DMEM medium for up to 6 weeks. At the end of the culture periods, the formed tissues were analyzed by histological staining for proteoglycans (PGs) and type II collagen; gene expression of aggrecan, Sox9, procollagen α1(II) and procollagen α2(I) was measured using quantitative RT-PCR; and PG content and structure were analyzed by spectrophotometric and agarose gel electrophoretic methods. Histological staining showed that PGs and type II collagen were abundantly present in both the scaffold-free and the scaffold-containing tissues. The PG content gradually increased over the culture period. However, the mRNA expression levels of the cartilage-specific genes aggrecan, procollagen α1(II) and Sox9 gradually decreased over the culture period, while procollagen α2(I) levels increased. After 6 weeks of cultivation, the PG concentrations in neocartilage tissues manufactured with HyStem™ or HydroMatrix™ scaffolds, and in scaffold-free agarose-gel-supported cell cultures, were similar to native cartilage. No obvious benefit of the HyStem™ or HydroMatrix™ scaffold cultures could be seen for extracellular matrix assembly.
Two Echelon Supply Chain Integrated Inventory Model for Similar Products: A Case Study
Parjane, Manoj Baburao; Dabade, Balaji Marutirao; Gulve, Milind Bhaskar
2017-06-01
The purpose of this paper is to develop a mathematical model for minimizing total cost across echelons in a multi-product supply chain environment. The scenario under consideration is a two-echelon supply chain system with one manufacturer, one retailer and M products. The retailer faces independent Poisson demand for each product. The retailer and the manufacturer are closely coupled, in the sense that information about any depletion in the inventory of a product at the retailer's end is immediately available to the manufacturer. Further, stock-outs are backordered at the retailer's end. Thus the costs incurred at the retailer's end are the holding costs and the backorder costs. The manufacturer has only one processor, which is time-shared among the M products. Production changeover from one product to another entails a fixed setup cost and a fixed setup time. Each unit of a product has a production time. Considering the cost components, and assuming transportation time and cost to be negligible, the objective of the study is to minimize the expected total cost considering both the manufacturer and the retailer. In the process, two aspects are to be defined. Firstly, every time a product is taken up for production, how much of it (the production batch size, q) should be produced? A large value of q favors the manufacturer, while a small value of q suits the retailer. Secondly, for a given batch size q, at what level S of the retailer's inventory (the production queuing point) should a product be taken up for production by the manufacturer? A higher value of S incurs more holding cost, whereas a lower value of S increases the chance of backorder. A tradeoff between the holding and backorder costs must be taken into consideration when choosing an optimal value of S. It may be noted that, due to multiple products and a single processor, a product taken up for production may not get the processor immediately, and may have to wait in a queue. The `S
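The holding/backorder tradeoff behind the choice of S can be illustrated with a toy single-product, order-up-to simulation. This is a simplified stand-in for the paper's multi-product queued-production model (no setup costs, no shared processor); the demand rate, costs and replenishment delay are invented.

```python
import math, random

def poisson(rng, lam):
    """Poisson draw via Knuth's multiplication method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def average_cost(S, lam=3, h=1.0, b=9.0, periods=5000, seed=0):
    """Average holding + backorder cost per period for an order-up-to
    level S with one period of replenishment delay. Negative inventory
    represents backordered demand."""
    rng = random.Random(seed)
    inv, pipeline, cost = S, 0, 0.0
    for _ in range(periods):
        inv += pipeline                   # last period's production arrives
        inv -= poisson(rng, lam)          # demand; shortfall is backordered
        pipeline = S - inv                # produce back up to level S
        cost += h * max(inv, 0) + b * max(-inv, 0)
    return cost / periods

# Sweep S and pick the cheapest level under these (invented) parameters.
best_S = min(range(1, 16), key=average_cost)
print(best_S)
```

With backorders nine times as expensive as holding, the best S sits well above the per-period mean demand, exactly the tradeoff the paper optimizes jointly with the batch size q.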
Institute of Scientific and Technical Information of China (English)
YU Xuexiang; XU Shaoquan; GAO Wei; LU Weicai
2003-01-01
A new similar single-difference mathematical model (SSDM) and its corresponding algorithm are advanced to solve the deformation of a monitoring point directly in a single epoch. The method for building the SSDM is introduced in detail, the main error sources affecting the accuracy of deformation measurement are analyzed briefly, and the basic algorithm and steps for solving the deformation are discussed. In order to validate the correctness and accuracy of the similar single-difference model, a test with five dual-frequency receivers was carried out on a slideway which moved in a plane in Feb. 2001. In the test, five sessions were observed. The numerical results of the test data show that the advanced model is correct.
Machine Learning Approaches for Modeling Spammer Behavior
Islam, Md Saiful; Islam, Md Rafiqul
2010-01-01
Spam is commonly known as unsolicited or unwanted email messages on the Internet that pose a potential threat to Internet security. Users spend a valuable amount of time deleting spam emails. More importantly, ever-increasing spam emails occupy server storage space and consume network bandwidth. Keyword-based spam email filtering strategies will eventually become less successful at modeling spammer behavior, as spammers constantly change their tricks to circumvent these filters. The evasive tactics that spammers use are patterns, and these patterns can be modeled to combat spam. This paper investigates the possibilities of modeling spammer behavioral patterns with well-known classification algorithms such as the Naïve Bayesian classifier (Naïve Bayes), Decision Tree Induction (DTI) and Support Vector Machines (SVMs). Preliminary experimental results demonstrate a promising detection rate of around 92%, a considerable enhancement of performance compared to similar spammer-behavior modeling research.
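Of the three classifiers named above, Naïve Bayes is the simplest to sketch. The toy corpus and tokens below are invented, not the paper's dataset; Laplace smoothing keeps unseen words from zeroing out a class.

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (tokens, label). Returns per-class token counts and doc counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        totals[label] += 1
    return counts, totals

def classify(tokens, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_lp = None, float("-inf")
    for label in counts:
        lp = math.log(totals[label] / sum(totals.values()))   # class prior
        denom = sum(counts[label].values()) + len(vocab)      # Laplace smoothing
        for t in tokens:
            lp += math.log((counts[label][t] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

docs = [
    ("win free money now".split(), "spam"),
    ("free prize claim now".split(), "spam"),
    ("meeting agenda for monday".split(), "ham"),
    ("monday lunch with the team".split(), "ham"),
]
counts, totals = train(docs)
print(classify("free money prize".split(), counts, totals))
```

Log-probabilities are summed rather than multiplying raw probabilities, which avoids numerical underflow on long messages.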
A Spatial Clustering Approach for Stochastic Fracture Network Modelling
Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.
2014-07-01
Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach
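The core fitting idea above, minimizing the summed distances of seismic points from the fracture model by simulated annealing, can be sketched for a single 2D fracture line. This is a toy version only: the paper's scheme handles many fractures with a comprehensive modification scheme, goodness-of-fit and similarity measures, and clustering; the points, cooling schedule and step sizes here are invented.

```python
import math, random

def point_line_dist(p, theta, c):
    """Distance from p to the line through (0, c) with unit direction
    (cos theta, sin theta)."""
    dx, dy = math.cos(theta), math.sin(theta)
    return abs(dy * p[0] - dx * (p[1] - c))

def fit_fracture(points, iters=8000, seed=0):
    """Anneal a single line's angle and intercept to minimize the summed
    point-to-line distances."""
    rng = random.Random(seed)
    cost = lambda t, c: sum(point_line_dist(p, t, c) for p in points)
    theta, c = rng.uniform(0, math.pi), 0.0
    cur = best = cost(theta, c)
    best_params = (theta, c)
    for i in range(iters):
        T = max(1e-9, 0.999 ** i)                 # geometric cooling schedule
        nt, nc = theta + rng.gauss(0, 0.1), c + rng.gauss(0, 0.1)
        new = cost(nt, nc)
        # Accept improvements always, uphill moves with Metropolis probability.
        if new < cur or rng.random() < math.exp(-(new - cur) / T):
            theta, c, cur = nt, nc, new
            if cur < best:
                best, best_params = cur, best_params if cur > best else (theta, c)
                best, best_params = cur, (theta, c)
    return best_params, best

points = [(0, 1), (1, 2), (2, 3), (3, 4)]         # lie on y = x + 1
_, residual = fit_fracture(points)
print(residual)
```

For collinear points the summed distance can be driven close to zero; with real seismic clouds the residual measures how well the proposed fracture explains the events.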
Scientific Theories, Models and the Semantic Approach
Directory of Open Access Journals (Sweden)
Décio Krause
2007-12-01
Full Text Available According to the semantic view, a theory is characterized by a class of models. In this paper, we examine critically some of the assumptions that underlie this approach. First, we recall that models are models of something. Thus we cannot leave completely aside the axiomatization of the theories under consideration, nor can we ignore the metamathematics used to elaborate these models, for changes in the metamathematics often impose restrictions on the resulting models. Second, based on a parallel between van Fraassen’s modal interpretation of quantum mechanics and Skolem’s relativism regarding set-theoretic concepts, we introduce a distinction between relative and absolute concepts in the context of the models of a scientific theory, and we discuss the significance of that distinction. Finally, by focusing on contemporary particle physics, we raise the question: since there is no generally accepted unification of the parts of the standard model (namely, QED and QCD), we have no theory, in the usual sense of the term. This poses a difficulty: if there is no theory, how can we speak of its models? What are the latter models of? We conclude by noting that it is unclear that the semantic view can be applied to contemporary physical theories.
He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian
2013-09-01
The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data-analytical methods for them, and by analysis of chromatographic fingerprints for various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract. The established model consists of five main parameters: (1) the total quantum statistical moment similarity ST, the area of overlap between two normal distribution probability density curves in conversion of the two TQSM parameters; (2) the total variability DT, a confidence limit of standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities within integration of their curve nodical; (3) the total variable probability 1-ST, the standard normal distribution probability within the interval of DT; (4) the total variable probability (1-beta)alpha; and (5) the stable confidence probability beta(1-alpha): the correct probabilities for making positive and negative conclusions under confidence coefficient alpha. With the model, we analyzed the TQSMS similarities of the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data-analytical methods for them, which ranged from 0.3852 to 0.9875, illuminating their different pharmacokinetic behaviors; the TQSMS similarities (ST) of chromatographic fingerprints for various extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract ranged from 0.6842 to 0.9992, showing different constituents
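The central parameter ST, the area of overlap between two normal probability density curves, is straightforward to compute numerically. The sketch below uses trapezoidal integration; the integration limits, grid size and example parameters are arbitrary choices, not from the paper.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def overlap_area(mu1, s1, mu2, s2, lo=-20.0, hi=20.0, n=20000):
    """S_T: area under the pointwise minimum of two normal densities,
    via trapezoidal integration on a fixed grid."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = h if 0 < i < n else h / 2          # trapezoid endpoint weights
        total += w * min(normal_pdf(x, mu1, s1), normal_pdf(x, mu2, s2))
    return total

o_same = overlap_area(0, 1, 0, 1)       # identical curves: overlap = 1
o_apart = overlap_area(0, 1, 3, 1)      # means 3 sigma apart: small overlap
print(round(o_same, 4), round(o_apart, 4))
```

For two unit-variance normals with means 3 apart, the overlap equals 2*Phi(-1.5) ≈ 0.1336, which the numerical integral reproduces.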
Multiscale Model Approach for Magnetization Dynamics Simulations
De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias
2016-01-01
Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample, with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within them, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate spin wave transmission across regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...
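The Landau-Lifshitz-Gilbert dynamics at the heart of such solvers can be illustrated with a single macrospin in a static field. This is a toy sketch, not the paper's multiscale code; the field, damping constant and step count are arbitrary, and units are normalized.

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def llg_step(m, H, gamma=1.0, alpha=0.1, dt=0.01):
    """One explicit-Euler step of the Landau-Lifshitz-Gilbert equation,
        dm/dt = -gamma/(1+alpha^2) * (m x H + alpha * m x (m x H)),
    followed by renormalization to keep |m| = 1."""
    mxH = cross(m, H)
    mxmxH = cross(m, mxH)
    pre = gamma / (1 + alpha ** 2)
    m = [m[i] - dt * pre * (mxH[i] + alpha * mxmxH[i]) for i in range(3)]
    norm = math.sqrt(sum(c * c for c in m))
    return [c / norm for c in m]

m = [1.0, 0.0, 0.0]          # moment initially perpendicular to the field
H = [0.0, 0.0, 1.0]
for _ in range(5000):
    m = llg_step(m, H)
print(m)                      # the moment precesses and relaxes toward H
```

The precession term conserves the angle to the field; only the Gilbert damping term drives the moment toward alignment, at a rate proportional to alpha.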
Scaling and interaction of self-similar modes in models of high-Reynolds number wall turbulence
Sharma, A S; McKeon, B J
2016-01-01
Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations on the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined.
Continuum modeling an approach through practical examples
Muntean, Adrian
2015-01-01
This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced with nonstandard rheological topics like those typically arising in the food industry.
A Multivariate Approach to Functional Neuro Modeling
DEFF Research Database (Denmark)
Mørch, Niels J.S.
1998-01-01
This Ph.D. thesis, A Multivariate Approach to Functional Neuro Modeling, deals with the analysis and modeling of data from functional neuro imaging experiments. A multivariate dataset description is provided which facilitates efficient representation of typical datasets and, more importantly...... and overall conditions governing the functional experiment, via associated micro- and macroscopic variables. The description facilitates an efficient microscopic re-representation, as well as a handle on the link between brain and behavior; the latter is achieved by hypothesizing variations in the micro...... a generalization theoretical framework centered around measures of model generalization error. - Only few, if any, examples of the application of generalization theory to functional neuro modeling currently exist in the literature. - Exemplification of the proposed generalization theoretical framework...
Interfacial Fluid Mechanics A Mathematical Modeling Approach
Ajaev, Vladimir S
2012-01-01
Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; shows readers how to solve modeling problems related to microscale multiphase flows. Interfacial Fluid Me...
Systematic approach to MIS model creation
Directory of Open Access Journals (Sweden)
Macura Perica
2004-01-01
Full Text Available In this paper, by applying the basic principles of the general theory of systems (the systems approach), we formulate a model of a marketing information system. The research was based on the basic characteristics of the systems approach and of the marketing system. The informational basis for managing the marketing system, i.e. the marketing instruments, was presented by listing the information most important for decision making per individual marketing-mix instrument. In the projected model of the marketing information system, information listed in this way forms the basis for establishing databases, i.e. bases of information on product, price, distribution and promotion. The paper gives the basic preconditions for the formulation and functioning of the model. The model is presented by explicating the elements of its structure (environment, databases, operators, information-system analysts, decision makers - managers; i.e. input, process, output, feedback) and the relations between these elements which are necessary for its optimal functioning. In addition, the basic elements for implementing the model in a business system are given, as well as the conditions for its efficient functioning and development.
Schnack, Dalton D.
In Lecture 10, we introduced a non-dimensional parameter called the Lundquist number, denoted by S. This is just one of many non-dimensional parameters that can appear in the formulations of both hydrodynamics and MHD. These generally express the ratio of the time scale associated with some dissipative process to the time scale associated with either wave propagation or transport by flow. These are important because they define regions in parameter space that separate flows with different physical characteristics. All flows that have the same non-dimensional parameters behave in the same way. This property is called similarity scaling.
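Computing such non-dimensional parameters is mechanical once the dimensional quantities are fixed. The sketch below evaluates the Lundquist number S = mu0 * L * v_A / eta (resistive diffusion time over Alfven transit time) and the hydrodynamic Reynolds number; the plasma parameters are invented for illustration, not taken from the lecture.

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability [H/m]

def alfven_speed(B, rho):
    """v_A = B / sqrt(mu0 * rho) [m/s]."""
    return B / math.sqrt(MU0 * rho)

def lundquist(L, B, rho, eta):
    """S = mu0 * L * v_A / eta: ratio of the resistive diffusion time
    to the Alfven transit time over length scale L."""
    return MU0 * L * alfven_speed(B, rho) / eta

def reynolds(v, L, nu):
    """Re = v * L / nu: ratio of inertial to viscous effects."""
    return v * L / nu

# Illustrative (made-up) parameters in SI units:
S = lundquist(L=1.0, B=5.0, rho=1e-7, eta=1e-7)
Re = reynolds(v=10.0, L=0.1, nu=1e-6)
print(f"S = {S:.2e}, Re = {Re:.2e}")
```

Two systems sharing the same S and Re sit at the same point in parameter space and, by the similarity-scaling property described above, exhibit the same flow behavior regardless of their absolute sizes.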
Regularization of turbulence - a comprehensive modeling approach
Geurts, B. J.
2011-12-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulence will be illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent way. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling of turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred is given on the basis of homogeneous isotropic turbulence.
Similarity of samples and trimming
Álvarez-Esteban, Pedro C; Cuesta-Albertos, Juan A; Matrán, Carlos; 10.3150/11-BEJ351
2012-01-01
We say that two probabilities are similar at level $\\alpha$ if they are contaminated versions (up to an $\\alpha$ fraction) of the same common probability. We show how this model is related to minimal distances between sets of trimmed probabilities. Empirical versions turn out to present an overfitting effect in the sense that trimming beyond the similarity level results in trimmed samples that are closer than expected to each other. We show how this can be combined with a bootstrap approach to assess similarity from two data samples.
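The overfitting effect described above is easy to see in a toy computation: trimming contaminated samples makes them look closer than the contamination alone would suggest. The sketch below uses invented data and a simple symmetric tail-trim with a matched-order-statistics distance; the paper's actual procedure optimizes over all possible trimmings with a transport-type metric.

```python
def trimmed(sample, alpha):
    """Drop the alpha/2 most extreme points from each tail (one simple
    trimming scheme among the many the paper's method considers)."""
    s = sorted(sample)
    k = int(len(s) * alpha / 2)
    return s[k:len(s) - k] if k else s

def l1_distance(a, b):
    """Mean absolute difference of matched order statistics
    (an empirical transport-type distance for equal-size samples)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [0, 1, 2, 3, 4, 5, 6, 7, 8, 50]     # same core, one contaminated point
d_raw = l1_distance(trimmed(x, 0.0), trimmed(y, 0.0))
d_trim = l1_distance(trimmed(x, 0.2), trimmed(y, 0.2))
print(d_raw, d_trim)
```

A 20% trim here removes the gap entirely even though only 10% of one sample was contaminated, which is why empirical trimmed distances can understate dissimilarity and a bootstrap calibration is needed.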
Directory of Open Access Journals (Sweden)
Santiago Vilar
Full Text Available Identification of drug-drug interactions (DDIs) is a significant challenge during drug development and clinical practice. DDIs are responsible for many adverse drug effects (ADEs), decreasing patient quality of life and causing higher care expenses. DDIs are not systematically evaluated in pre-clinical or clinical trials, so the U.S. Food and Drug Administration (FDA) relies on post-marketing surveillance to monitor patient safety. However, existing pharmacovigilance algorithms show poor performance for detecting DDIs, exhibiting prohibitively high false positive rates. Alternatively, methods based on chemical structure and pharmacological similarity have shown promise in adverse drug event detection. We hypothesize that the use of chemical biology data in a post hoc analysis of pharmacovigilance results will significantly improve the detection of dangerous interactions. Our model integrates a reference standard of DDIs known to cause arrhythmias with drug similarity data. To compare similarity between drugs we used chemical structure (both 2D and 3D molecular structure), adverse drug side effects, chemogenomic targets, drug indication classes, and known drug-drug interactions. We evaluated the method on external reference standards. Our results showed an enhancement of sensitivity, specificity and precision at different top positions with the use of similarity measures to rank the candidates extracted from pharmacovigilance data. For the top 100 DDI candidates, similarity-based modeling yielded close to a twofold precision enhancement compared to the proportional reporting ratio (PRR). Moreover, the method helps in DDI decision making through the identification of the DDI in the reference standard that generated the candidate.
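The proportional reporting ratio used as the baseline above is a simple disproportionality measure over a 2x2 table of spontaneous reports. A sketch with invented report counts:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for an exposure (here, a drug pair)
    and an event:
      a = reports with the pair and the event,
      b = reports with the pair, without the event,
      c = other reports with the event,
      d = other reports without the event.
    PRR = (a / (a + b)) / (c / (c + d))."""
    return (a / (a + b)) / (c / (c + d))

# Made-up counts: 20 of 100 pair reports mention arrhythmia,
# versus 50 of 1000 reports for all other exposures.
signal = prr(20, 80, 50, 950)
print(round(signal, 2))   # the pair reports the event about 4x as often
```

The paper's contribution is to re-rank such pharmacovigilance candidates using drug-similarity evidence, rather than relying on the PRR alone.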
Ziaimatin, Hasti; Groza, Tudor; Tudorache, Tania; Hunter, Jane
2016-12-01
Collaboration platforms provide a dynamic environment where the content is subject to ongoing evolution through expert contributions. The knowledge embedded in such platforms is not static as it evolves through incremental refinements - or micro-contributions. Such refinements provide vast resources of tacit knowledge and experience. In our previous work, we proposed and evaluated a Semantic and Time-dependent Expertise Profiling (STEP) approach for capturing expertise from micro-contributions. In this paper we extend our investigation to structured micro-contributions that emerge from an ontology engineering environment, such as the one built for developing the International Classification of Diseases (ICD) revision 11. We take advantage of the semantically related nature of these structured micro-contributions to showcase two major aspects: (i) a novel semantic similarity metric, in addition to an approach for creating bottom-up baseline expertise profiles using expertise centroids; and (ii) the application of STEP in this new environment combined with the use of the same semantic similarity measure to both compare STEP against baseline profiles, as well as to investigate the coverage of these baseline profiles by STEP.
A simplified GIS approach to modeling global leaf water isoscapes.
Directory of Open Access Journals (Sweden)
Jason B West
Full Text Available The stable hydrogen (δ2H) and oxygen (δ18O) isotope ratios of organic and inorganic materials record biological and physical processes through the effects of substrate isotopic composition and the fractionations that occur as reactions proceed. At large scales, these processes can exhibit spatial predictability because of the effects of coherent climatic patterns over the Earth's surface. Attempts to model spatial variation in the stable isotope ratios of water have been made for decades. Leaf water has particular importance for some applications, including plant organic materials that record spatial and temporal climate variability and that may be a source of food for migrating animals. It is also an important source of the variability in the isotopic composition of atmospheric gases. Although efforts to model global-scale leaf water isotope ratio spatial variation have been made (especially of δ18O), significant uncertainty remains in models and their execution across spatial domains. We introduce here a Geographic Information System (GIS) approach to the generation of global, spatially explicit isotope landscapes (= isoscapes) of "climate normal" leaf water isotope ratios. We evaluate the approach and the resulting products by comparison with simulation model outputs and point measurements, where obtainable, over the Earth's surface. The isoscapes were generated using biophysical models of isotope fractionation and spatially continuous precipitation isotope and climate layers as input model drivers. Leaf water δ18O isoscapes produced here generally agreed with latitudinal averages from GCM/biophysical model products, as well as mean values from point measurements. These results show global-scale spatial coherence in leaf water isotope ratios, similar to that observed for precipitation, and validate the GIS approach to modeling leaf water isotopes. These results demonstrate that relatively simple models of leaf water enrichment
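Biophysical leaf-water models of the kind described above typically build on a steady-state Craig-Gordon/Dongmann-type enrichment expression. The sketch below shows that functional form; whether the paper uses exactly this formulation is an assumption, and all numeric inputs are invented for illustration.

```python
def leaf_water_enrichment(eps_equil, eps_kinetic, delta_vapor, ea_over_ei):
    """Steady-state isotopic enrichment of leaf water at the evaporating
    site above source water (per mil), Dongmann/Craig-Gordon form:
        D_e = eps+ + eps_k + (D_v - eps_k) * ea/ei
    where eps+ is the equilibrium fractionation, eps_k the kinetic
    fractionation, D_v the vapor enrichment relative to source water,
    and ea/ei the ambient-to-intercellular vapor pressure ratio."""
    return eps_equil + eps_kinetic + (delta_vapor - eps_kinetic) * ea_over_ei

# Illustrative (made-up) 18O values: equilibrium fractionation 9.8 per mil,
# kinetic fractionation 26.5 per mil, vapor 10 per mil depleted vs source,
# and 70% relative humidity normalized to leaf temperature.
enrichment = leaf_water_enrichment(9.8, 26.5, -10.0, 0.7)
print(round(enrichment, 2))
```

Driving such an expression with gridded climate and precipitation-isotope layers is exactly what turns a point model into an isoscape.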
Shih, Kuei-Chung; Lee, Chi-Ching; Tsai, Chi-Neu; Lin, Yu-Shan; Tang, Chuan-Yi
2014-01-01
Human dihydroorotate dehydrogenase (hDHODH) is a class-2 dihydroorotate dehydrogenase. Because it is extensively used by proliferating cells, its inhibition in autoimmune and inflammatory diseases, cancers, and multiple sclerosis is of substantial clinical importance. In this study, we had two aims. The first was to develop an hDHODH pharma-similarity index approach (PhSIA) using integrated molecular dynamics calculations, pharmacophore hypothesis, and comparative molecular similarity index analysis (CoMSIA) contour information techniques. The approach, for the discovery and design of novel inhibitors, was based on 25 diverse known hDHODH inhibitors. Three statistical methods were used to verify the performance of hDHODH PhSIA. Fischer's cross-validation test provided a 98% confidence level and the goodness of hit (GH) test score was 0.61. The q(2), r(2), and predictive r(2) values were 0.55, 0.97, and 0.92, respectively, for a partial least squares validation method. In our approach, each diverse inhibitor structure could easily be aligned with contour information, and common substructures were unnecessary. For our second aim, we used the proposed approach to design 13 novel hDHODH inhibitors using a scaffold-hopping strategy. Chemical features of the approach were divided into two groups, and the Vitas-M Laboratory fragment was used to create de novo inhibitors. This approach provides a useful tool for the discovery and design of potential inhibitors of hDHODH, and does not require docking analysis; thus, our method can assist medicinal chemists in their efforts to identify novel inhibitors.
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
The paper presents a dynamic model, intervened by urban design, for deciding land development intensity; it expresses the inherent interaction mechanism between land units based on the evaluation of land attributes and their similarity relationships. Each land unit is described by several factors reflecting its condition and potential for development, such as land function, accessibility, historical site control, and landscape control. A dynamic reference relationship between land units is then established according to the similarity of these factors: lands with similar conditions tend to have similar development intensities, which expresses the rule of spontaneous urban development. The development intensities of pending lands can therefore be calculated from the confirmed ones. Furthermore, the system can be actively intervened in by adjusting parameters according to urban design or planning intentions, and its reaction offers effective support and reference for reasonable decision-making. The system, with multiple intervention inputs, is not only a credible tool for deriving development intensities but also a platform for activating urban design conceptions. Overall, the system, as a socio-technical tool, integrates the optimization of form, function and environment, and embodies the principles of impartiality, justice and flexibility in deciding land development intensity.
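The rule that lands with similar conditions tend to have similar development intensities can be sketched as a similarity-weighted average over the confirmed land units. The Gaussian similarity kernel, the factor vectors, and the parameter `beta` below are illustrative assumptions, not the paper's actual formulation.

```python
import math

def predict_intensity(pending, confirmed, beta=1.0):
    """Estimate the development intensity of a pending land unit from
    confirmed units, weighting each by similarity of its factor vector.
    pending: factor vector; confirmed: list of (factor_vector, intensity)."""
    w_sum, wi_sum = 0.0, 0.0
    for factors, intensity in confirmed:
        d2 = sum((a - b) ** 2 for a, b in zip(pending, factors))
        w = math.exp(-beta * d2)  # similar factors -> large weight
        w_sum += w
        wi_sum += w * intensity
    return wi_sum / w_sum

# Hypothetical units described by (accessibility, landscape-control) factors,
# with known floor-area-ratio values for the confirmed units
confirmed = [((0.9, 0.2), 4.0), ((0.3, 0.8), 1.5)]
far = predict_intensity((0.8, 0.3), confirmed)  # pending unit resembles the first
```

The pending unit's estimate lands near the intensity of the most similar confirmed unit, which is the qualitative behavior the paper's reference relationship is meant to capture.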
Barabás, György; Meszéna, Géza
2009-05-07
We investigate the transition between limiting similarity and coexistence of a continuum in the competitive Lotka-Volterra model. It is known that there exist exceptional cases in which, contrary to the limiting similarity expectation, all phenotypes coexist along a trait axis. Earlier studies established that the distance between surviving phenotypes is of the order of the niche width 2σ, provided that the carrying capacity curve differs significantly enough from the exceptional one. In this paper we studied the outcome of competition for small perturbations of the exceptional (Gaussian) carrying capacity. We found that the average distance between the surviving phenotypes goes to zero as the perturbation vanishes. The number of coexisting species in equilibrium is proportional to the negative logarithm of the perturbation. Nevertheless, the niche width provides a good order of magnitude for the distance between survivors if the perturbations are larger than 10%. Therefore, we conclude that limiting similarity is a good framework for biological thinking despite the lack of an absolute lower bound on similarity.
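The competitive Lotka-Volterra dynamics summarized above can be sketched numerically: discretize the trait axis, use a Gaussian competition kernel of niche width σ and a perturbed Gaussian carrying capacity, and integrate to equilibrium. The cosine form of the perturbation, the grid, and the survival cutoff below are illustrative choices, not those of the paper.

```python
import math

def surviving_phenotypes(eps, sigma=1.0, n=21, width=3.0,
                         steps=3000, dt=0.05, cut=1e-3):
    """Integrate dn_i/dt = n_i (1 - sum_j a_ij n_j / K_i) on a trait grid
    and count phenotypes whose equilibrium density exceeds `cut`."""
    xs = [-width + 2 * width * i / (n - 1) for i in range(n)]
    # Gaussian carrying capacity with a small cosine perturbation (assumed form)
    K = [math.exp(-x * x / 8.0) * (1.0 + eps * math.cos(3.0 * x)) for x in xs]
    # Gaussian competition kernel of niche width sigma
    a = [[math.exp(-(xi - xj) ** 2 / (2 * sigma ** 2)) for xj in xs]
         for xi in xs]
    dens = [0.01] * n
    for _ in range(steps):
        comp = [sum(a[i][j] * dens[j] for j in range(n)) for i in range(n)]
        dens = [max(dens[i] + dt * dens[i] * (1.0 - comp[i] / K[i]), 0.0)
                for i in range(n)]
    return sum(1 for d in dens if d > cut)

survivors = surviving_phenotypes(eps=0.2)
```

Varying `eps` toward zero in such a sketch is the numerical analogue of the paper's perturbation study, though quantitative agreement is not claimed here.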
Hwang, W-Y Pauchy
2011-01-01
We introduce the "Dirac similarity principle", which states that only those point-like Dirac particles which can interact with the Dirac electron can be observed, such as in the Standard Model. We emphasize that the existing world of the Standard Model is a Dirac world satisfying the Dirac similarity principle, and believe that the immediate extension of the Standard Model will remain so. On the other hand, we have been looking for Higgs particles for the last forty years but something is yet to be found. This leads naturally to the "minimum Higgs hypothesis". We now know firmly that neutrinos have tiny masses, but in the minimal Standard Model there are no natural sources for such tiny masses. If nothing else, this could be taken as the clue to, and the signature of, the existence of the extra heavy $Z^{\prime 0}$, since it requires the extra Higgs field that would help in generating the tiny neutrino masses. Alternatively, we may have missed the right-handed sector for some reason. A simplified version of the left-righ...
Intuitionistic fuzzy similarity measure approach based on orientation
Institute of Scientific and Technical Information of China (English)
王毅; 刘三阳; 程月蒙; 余晓东
2015-01-01
An approach to intuitionistic fuzzy similarity measurement based on orientation is proposed, addressing deficiencies in existing intuitionistic fuzzy similarity measures. First, based on the hypothesis that the support and opposition within the neutral evidence represented by the intuitionistic index are in an equilibrium state, the internal relationships among the three interacting factors that determine the similarity of intuitionistic fuzzy sets are revealed, and a geometric representation of the similarity measure is given. Second, cases that some existing similarity measures cannot express are analyzed, intuitive constraints that an intuitionistic fuzzy similarity must satisfy are defined, and an axiomatic definition of intuitionistic fuzzy similarity measures is put forward. Third, the orienting influence of the intuitionistic index on the evidence is revealed, and an orientation-based intuitionistic fuzzy similarity measure is proposed. Finally, analysis and comparison over a set of numerical examples verify that the proposed approach is correct, reasonable and valid.
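The equilibrium hypothesis described above (the neutral evidence carried by the intuitionistic index splits evenly between support and opposition) can be sketched as a simple similarity measure over pairs of membership/non-membership degrees. This is an illustrative measure consistent with that hypothesis, not the paper's exact formula.

```python
def ifs_similarity(A, B):
    """Similarity of two intuitionistic fuzzy sets given as lists of
    (mu, nu) pairs, with hesitancy pi = 1 - mu - nu split equally
    between support and opposition (the equilibrium hypothesis)."""
    total = 0.0
    for (ma, na), (mb, nb) in zip(A, B):
        pa, pb = 1.0 - ma - na, 1.0 - mb - nb
        sa = ma + pa / 2.0  # effective support of the element in A
        sb = mb + pb / 2.0  # effective support of the element in B
        total += abs(sa - sb)
    return 1.0 - total / len(A)

A = [(0.6, 0.2), (0.5, 0.4)]   # (membership, non-membership) pairs
B = [(0.7, 0.1), (0.4, 0.5)]
s_ab = ifs_similarity(A, B)
s_aa = ifs_similarity(A, A)
```

Any candidate measure should at least satisfy the axioms the paper formalizes: a set is maximally similar to itself, and the measure is symmetric and bounded in [0, 1].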
AN AUTOMATIC APPROACH TO BOX & JENKINS MODELLING
MARCELO KRIEGER
1983-01-01
In spite of the general recognition of the good forecasting ability of ARIMA models in predicting univariate time series, this approach is not widely used because of the lack of automatic, computerized procedures. In this work the problem is discussed and an algorithm is proposed.
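One common building block of such an automatic procedure is order selection by an information criterion. The sketch below selects an autoregressive order by AIC on synthetic data using ordinary least squares; a full Box & Jenkins automation would also handle differencing and moving-average terms, and the data, candidate orders, and AIC form here are illustrative assumptions.

```python
import math
import random

def fit_ar(y, p):
    """Fit an AR(p) model by ordinary least squares (normal equations plus
    Gaussian elimination) and return (coefficients, AIC up to a constant)."""
    n = len(y) - p
    X = [[y[t - 1 - j] for j in range(p)] for t in range(p, len(y))]
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(X[t][i] * y[p + t] for t in range(n)) for i in range(p)]
    for i in range(p):                      # forward elimination
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * p
    for i in range(p - 1, -1, -1):          # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, p))) / A[i][i]
    sse = sum((y[p + t] - sum(coef[j] * X[t][j] for j in range(p))) ** 2
              for t in range(n))
    return coef, n * math.log(sse / n) + 2 * p

random.seed(1)
y = [0.0, 0.0]
for _ in range(300):                        # synthetic AR(2): 0.6, -0.3
    y.append(0.6 * y[-1] - 0.3 * y[-2] + random.gauss(0.0, 1.0))
best_p = min(range(1, 6), key=lambda p: fit_ar(y, p)[1])
```

In practice a library routine (e.g., a statsmodels order search) would replace this hand-rolled fit; the point is only that "automatic" identification reduces to searching candidate orders under a fit-versus-complexity criterion.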
Modeling in transport phenomena a conceptual approach
Tosun, Ismail
2007-01-01
Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented; students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to
Institute of Scientific and Technical Information of China (English)
Ya-Li Li; Wei-Qun Xu; Yong-Hong Yan
2012-01-01
In this paper, we propose a novel co-occurrence-probability-based similarity measure for inducing semantic classes. Clustering with the new similarity measure outperforms the widely used distance based on Kullback-Leibler divergence in precision, recall and F1 evaluation. In our experiments, we induced semantic classes from an unannotated in-domain corpus and then used the induced classes and structures to generate a large in-domain corpus, which was then used for language model adaptation. The character recognition rate was improved from 85.2% to 91%. We apply the new measure to address the lack of domain data by induction followed by generation for a dialogue system.
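As an illustration of comparing a co-occurrence-probability similarity with a KL-divergence distance, the sketch below builds context co-occurrence distributions for words and scores word pairs both ways. The Bhattacharyya-style similarity and the tiny corpus are illustrative assumptions, not the measure or data of the paper.

```python
import math
from collections import Counter

def cooc_profile(word, sentences, window=2):
    """Normalized co-occurrence distribution of `word` over context words."""
    counts = Counter()
    for sent in sentences:
        for i, w in enumerate(sent):
            if w == word:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[sent[j]] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cooc_similarity(p, q):
    """Bhattacharyya-style similarity between co-occurrence distributions."""
    keys = set(p) | set(q)
    return sum(math.sqrt(p.get(k, 0.0) * q.get(k, 0.0)) for k in keys)

def sym_kl(p, q, eps=1e-9):
    """Symmetrized, smoothed KL divergence (a distance, not a similarity)."""
    keys = set(p) | set(q)
    kl = lambda a, b: sum(a.get(k, eps) * math.log(a.get(k, eps) / b.get(k, eps))
                          for k in keys)
    return kl(p, q) + kl(q, p)

sents = [["book", "a", "flight", "to", "boston"],
         ["book", "a", "flight", "to", "denver"],
         ["cancel", "a", "flight", "to", "boston"]]
p_boston = cooc_profile("boston", sents)
p_denver = cooc_profile("denver", sents)
```

Words that fill the same slot ("boston"/"denver") end up with near-identical context distributions, which is exactly the signal a semantic-class induction algorithm clusters on.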
Smoller, Joel
2012-01-01
We prove that the Einstein equations in Standard Schwarzschild Coordinates close to form a system of three ordinary differential equations for a family of spherically symmetric, self-similar expansion waves, and the critical ($k=0$) Friedmann universe associated with the pure radiation phase of the Standard Model of Cosmology (FRW), is embedded as a single point in this family. Removing a scaling law and imposing regularity at the center, we prove that the family reduces to an implicitly defined one parameter family of distinct spacetimes determined by the value of a new {\it acceleration parameter} $a$, such that $a=1$ corresponds to FRW. We prove that all self-similar spacetimes in the family are distinct from the non-critical $k \neq 0$ ...
Modeling for fairness: A Rawlsian approach.
Diekmann, Sven; Zwart, Sjoerd D
2014-06-01
In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls' overlapping consensus. The overlapping design consensus is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement to decisions, as in most other stakeholder approaches; it is also an agreement to their justification, and this justification is consistent with each stakeholder's beliefs. For supporting fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that for reaching an overlapping design consensus, communication about properties of and values related to a model is required.
Sivapalan, Murugesu; Wood, Eric F.; Beven, Keith J.
1993-01-01
One of the shortcomings of the original theory of the geomorphologic unit hydrograph (GUH) is that it assumes that runoff is generated uniformly from the entire catchment area. It is now recognized that in many catchments much of the runoff during storm events is produced on partial areas, which usually form in narrow bands along the stream network. A storm response model that includes runoff generation on partial areas by both Hortonian and Dunne mechanisms was recently developed by the authors. In this paper a methodology for integrating this partial area runoff generation model with the GUH-based runoff routing model is presented; this leads to a generalized GUH. The generalized GUH and the storm response model are then used to estimate physically based flood frequency distributions. In most previous work the initial moisture state of the catchment had been assumed to be constant for all storms. In this paper we relax this assumption and allow the initial moisture conditions to vary between storms. The resulting flood frequency distributions are cast in a scaled dimensionless framework where issues such as catchment scale and similarity can be conveniently addressed. A number of experiments are performed to study the sensitivity of the flood frequency response to some of the 'similarity' parameters identified in this formulation. The results indicate that one of the most important components of the derived flood frequency model relates to the specification of processes within the runoff generation model, specifically the inclusion of both saturation excess and Horton infiltration excess runoff production mechanisms. The dominance of these mechanisms over different return periods of the flood frequency distribution can significantly affect the distributional shape and confidence limits about the distribution. Comparisons with observed flood distributions seem to indicate that such mixed runoff production mechanisms influence flood distribution shape.
Vaillant, Fanny; Lauzier, Benjamin; Ruiz, Matthieu; Shi, Yanfen; Lachance, Dominic; Rivard, Marie-Eve; Bolduc, Virginie; Thorin, Eric; Tardif, Jean-Claude; Des Rosiers, Christine
2016-10-01
While heart rate reduction (HRR) is a target for the management of patients with heart disease, contradictory results were reported using ivabradine, which selectively inhibits the pacemaker If current, vs. β-blockers like metoprolol. This study aimed at testing whether similar HRR with ivabradine vs. metoprolol differentially modulates cardiac energy substrate metabolism, a factor determinant for cardiac function, in a mouse model of dyslipidemia (hApoB(+/+);LDLR(-/-)). Following a longitudinal study design, we used 3- and 6-mo-old mice, untreated or treated for 3 mo with ivabradine or metoprolol. Cardiac function was evaluated in vivo and ex vivo in working hearts perfused with (13)C-labeled substrates to assess substrate fluxes through energy metabolic pathways. Compared with 3-mo-old, 6-mo-old dyslipidemic mice had similar cardiac hemodynamics in vivo but impaired (P ivabradine-treated hearts displayed significantly higher stroke volume values and glycolysis vs. their metoprolol-treated counterparts ex vivo, values for the ivabradine group being often not significantly different from 3-mo-old mice. Further analyses highlighted additional significant cardiac alterations with disease progression, namely in the total tissue level of proteins modified by O-linked N-acetylglucosamine (O-GlcNAc), whose formation is governed by glucose metabolism via the hexosamine biosynthetic pathway, which showed a similar pattern with ivabradine vs. metoprolol treatment. Collectively, our results emphasize the implication of alterations in cardiac glucose metabolism and signaling linked to disease progression in our mouse model. Despite similar HRR, ivabradine, but not metoprolol, preserved cardiac function and glucose metabolism during disease progression.
Pedagogic process modeling: Humanistic-integrative approach
Directory of Open Access Journals (Sweden)
Boritko Nikolaj M.
2007-01-01
Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the subject-features of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage are offered, along with the algorithm for developing an integrative model for it. The suggested conclusions might be of use for further theoretic research, analyses of educational practices and for realistic predicting of pedagogical phenomena.
Nuclear level density: Shell-model approach
Sen'kov, Roman; Zelevinsky, Vladimir
2016-06-01
Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.
Modeling Social Annotation: a Bayesian Approach
Plangprasopchok, Anon
2008-01-01
Collaborative tagging systems, such as del.icio.us, CiteULike, and others, allow users to annotate objects, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations, contributed by thousands of users, can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make best use of socially-generated data, since they do not take into account variations in individual users' perspectives and vocabulary. In a previous work, we introduced a simple probabilistic model that takes interests of individual annotators into account in order to find hidden topics of annotated objects. Unfortunately, our proposed approach had a number of shortcomings, including overfitting, local maxima and the requirement to specify values for some parameters. In this paper we address these shortcomings in two ways. First, we extend the model to a fully Bayesian framework. Second, we describe an infinite ver...
Institute of Scientific and Technical Information of China (English)
Xu Jiankun; Wang Enyuan; Li Zhonghui; Wang Chao
2011-01-01
In order to compensate for the deficiencies of present methods of monitoring plane displacement in similarity model tests, such as inadequate real-time monitoring and excessive manual intervention, an effective monitoring method was proposed in this study. The major steps of the monitoring method are as follows: first, time-series images of the similarity model in the test were obtained by a camera; second, measuring points marked as artificial targets were automatically tracked and recognized from the time-series images; finally, the real-time plane displacement field was calculated from the fixed magnification between object and image under the specific conditions. An application device for the method was then designed and tested. In addition, a sub-pixel location method and a distortion error model were used to improve the measuring accuracy. The results indicate that this method can record the entire test, especially detailed non-uniform deformation and sudden deformation. Compared with traditional methods, this method has a number of advantages, such as greater measurement accuracy and reliability, less manual intervention, higher automation, strong practicality, and much more measurement information.
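The sub-pixel location step mentioned above can be sketched with an intensity-weighted centroid: pixels belonging to a bright target are averaged with their gray values as weights, giving a marker position at sub-pixel precision, and frame-to-frame shifts multiplied by the object-to-image magnification yield displacement. The threshold, synthetic image, and calibration value below are illustrative assumptions.

```python
def subpixel_centroid(img, thresh=0.5):
    """Intensity-weighted centroid (x, y) of pixels above `thresh`.
    img: 2D list of gray values in [0, 1]."""
    sx = sy = s = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v > thresh:
                sx += x * v
                sy += y * v
                s += v
    return sx / s, sy / s

# Synthetic 8x8 frame: a uniform 2x2 target whose true center is (3.5, 2.5),
# i.e., between pixel columns 3-4 and rows 2-3
frame = [[0.0] * 8 for _ in range(8)]
for yy in (2, 3):
    for xx in (3, 4):
        frame[yy][xx] = 0.9
cx, cy = subpixel_centroid(frame)

# Object-space displacement = image shift * magnification (assumed calibration)
magnification = 0.2  # mm of model per pixel, hypothetical value
```

Tracking the same target across time-series frames and differencing the centroids reproduces, in miniature, the displacement-field calculation the method automates.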
Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach
Directory of Open Access Journals (Sweden)
Alistair McNair Senior
2016-01-01
Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition. Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
THE MODEL OF EXTERNSHIP ORGANIZATION FOR FUTURE TEACHERS: QUALIMETRIC APPROACH
Directory of Open Access Journals (Sweden)
Taisiya A. Isaeva
2015-01-01
Full Text Available The aim of the paper is to present the author's model for bachelors, future teachers of vocational training. The model is worked out from the standpoint of the qualimetric approach and provides pedagogical training. Methods. The process is based on a literature analysis of externship organization for students in higher education and includes SWOT-analysis techniques in pedagogical training. The method of group expert evaluation is the main method of pedagogical qualimetry. Structural components of the professional pedagogical competency of students, future teachers, are defined. This allows us to determine a development level and assessment criteria for mastering the programme «Vocational training (branch-wise)». Results. The article interprets the concept of «pedagogical training»; its basic organizational principles during students' practice are stated. The methods of expert group formation are presented: self-assessment and personal data. Scientific novelty. An externship organization model for future teachers is developed. The model is based on pedagogical training, using the qualimetric approach and SWOT-analysis techniques. The proposed criterion-assessment procedures make it possible to determine the development levels of professional and pedagogical competency. Practical significance. The model has been introduced into the pedagogical training of the educational process of Kalashnikov Izhevsk State Technical University, and can be used in other similar educational establishments.
Multicomponent Equilibrium Models for Testing Geothermometry Approaches
Energy Technology Data Exchange (ETDEWEB)
Carl D. Palmer; Robert W. Smith; Travis L. McLing
2013-02-01
Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlation between water temperatures and composition, or on thermodynamic calculations based on a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several “forward” geochemical models using The Geochemist’s Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperature during exploration and early development.
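The multicomponent approach described here can be caricatured in a few lines: given a fluid analysis, compute saturation indices SI = log Q - log K(T) for many minerals over a temperature grid and take the temperature where the indices cluster most tightly around a common value. The van 't Hoff-style log K(T) coefficients and mineral names below are synthetic stand-ins for a real thermodynamic database.

```python
# Hypothetical logK(T) = a + b/T (T in kelvin) for three reservoir minerals
minerals = {"quartz": (1.2, -1500.0),
            "calcite": (-0.5, 800.0),
            "albite": (2.0, -2600.0)}

T_true = 475.0  # synthetic scenario: fluid last equilibrated here
logQ = {m: a + b / T_true for m, (a, b) in minerals.items()}

def si_spread(T):
    """Variance of saturation indices SI_m = logQ_m - logK_m(T):
    small spread means the mineral suite is jointly near equilibrium."""
    si = [logQ[m] - (a + b / T) for m, (a, b) in minerals.items()]
    mean = sum(si) / len(si)
    return sum((s - mean) ** 2 for s in si)

# Grid search: the reservoir temperature estimate minimizes the SI spread
T_est = min(range(300, 701), key=si_spread)
```

Real applications would add the boiling and degassing corrections the abstract discusses before computing log Q, since those processes shift the apparent equilibrium temperature.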
A semiparametric approach to physiological flow models.
Verotta, D; Sheiner, L B; Ebling, W F; Stanski, D R
1989-08-01
By regarding sampled tissues in a physiological model as linear subsystems, the usual advantages of flow models are preserved while mitigating two of their disadvantages: (i) the need for assumptions regarding intratissue kinetics, and (ii) the need to simultaneously fit data from several tissues. To apply the linear systems approach, both arterial blood and (interesting) tissue drug concentrations must be measured. The body is modeled as having an arterial compartment (A) distributing drug to different linear subsystems (tissues), connected in a specific way by blood flow. The response (CA, with dimensions of concentration) of A is measured. Tissues receive input from A (and optionally from other tissues), and send output to the outside or to other parts of the body. The response (CT, total amount of drug in the tissue (T) divided by the volume of T) from the T-th one, for example, of such tissues is also observed. From linear systems theory, CT can be expressed as the convolution of CA with a disposition function, F(t) (with dimensions 1/time). The function F(t) depends on the (unknown) structure of T, but has certain other constant properties: the integral $\int_0^\infty F(t)\,dt$ is the steady-state ratio of CT to CA, and the value F(0) is the clearance rate of drug from A to T divided by the volume of T. A formula for the clearance rate of drug from T to outside T can be derived. To estimate F(t) empirically, and thus mitigate disadvantage (i), we suggest that, first, a nonparametric (or parametric) function be fitted to the CA data, yielding predicted values $\hat{C}_A$, and, second, the convolution integral of $\hat{C}_A$ with F(t) be fitted to the CT data using a deconvolution method. By so doing, each tissue's data are analyzed separately, thus mitigating disadvantage (ii). A method for system simulation is also proposed. The results of applying the approach to simulated data and to real thiopental data are reported.
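The convolution relationship described above, CT(t) = (CA * F)(t), can be sketched with a discrete convolution and a parametric guess for F. The monoexponential form of F, the grid search, and the synthetic curves below are illustrative assumptions; the paper estimates F(t) by deconvolution rather than by assuming its shape.

```python
import math

dt = 0.1
ts = [i * dt for i in range(100)]
CA = [math.exp(-0.5 * t) for t in ts]          # synthetic arterial curve

def convolve(ca, f):
    """Discrete convolution CT(t_i) = dt * sum_j CA(t_j) F(t_i - t_j)."""
    return [dt * sum(ca[j] * f[i - j] for j in range(i + 1))
            for i in range(len(ca))]

# "Observed" tissue curve generated from a known disposition function
F_true = [0.8 * math.exp(-0.3 * t) for t in ts]
CT = convolve(CA, F_true)

# Fit F(t) = F0 * exp(-k t) by least-squares grid search over (F0, k)
best = None
for F0 in [0.6 + 0.05 * i for i in range(9)]:
    for k in [0.1 + 0.05 * j for j in range(9)]:
        F = [F0 * math.exp(-k * t) for t in ts]
        sse = sum((p - c) ** 2 for p, c in zip(convolve(CA, F), CT))
        if best is None or sse < best[0]:
            best = (sse, F0, k)
```

The recovered F(0) relates to the A-to-T clearance rate per tissue volume, and the integral of F gives the steady-state CT/CA ratio, mirroring the two constant properties listed in the abstract.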
Modeling Approaches for Describing Microbial Population Heterogeneity
DEFF Research Database (Denmark)
Lencastre Fernandes, Rita
Although microbial populations are typically described by averaged properties, individual cells present a certain degree of variability. Indeed, initially clonal microbial populations develop into heterogeneous populations, even when growing in a homogeneous environment. A heterogeneous microbial ... population balance models (PBM) to predict distributions of certain population properties including particle size, mass or volume, and molecular weight. Similarly, PBM allow for a mathematical description of distributed cell properties within microbial populations. Cell total protein content distributions (a measure of cell mass) have been ... ethanol and biomass throughout the reactor. This work has proven that the integration of CFD and population balance models, for describing the growth of a microbial population in a spatially heterogeneous reactor, is feasible, and that valuable insight on the interplay between flow and the dynamics ...
Approaches and models of intercultural education
Directory of Open Access Journals (Sweden)
Iván Manuel Sánchez Fontalvo
2013-10-01
Full Text Available Given the need to build an intercultural society, awareness must be assumed in all social spheres, among which the role played by education stands out. A transcendental role, since education must promote spaces that form people with the virtues and capacities that allow them to live together in multicultural and socially diverse (sometimes unequal) contexts in an increasingly globalized and interconnected world, and foster the development of feelings of shared civic belonging toward the neighborhood, city, region and country, giving them concern and critical judgement toward marginalization, poverty, misery and the inequitable distribution of wealth, causes of structural violence, while at the same time wanting to work for the welfare and transformation of these scenarios. Given these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.
Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.
Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J
2016-01-01
Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs), show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
Conway Hughston, Veronica
Since 1996 ABET has mandated that undergraduate engineering degree granting institutions focus on learning outcomes such as professional skills (i.e., solving unstructured problems and working in teams). As a result, engineering curricula were restructured to include team based learning, including team charters. Team charters were diffused into engineering education as one of many instructional activities to meet the ABET accreditation mandates. However, the implementation and execution of team charters in engineering team based classes has been inconsistent and accepted without empirical evidence of the consequences. The purpose of the current study was to investigate team effectiveness, operationalized as team viability, as an outcome of team charter implementation in an undergraduate engineering team based design course. Two research questions were the focus of the study: a) What is the relationship between team charter quality and viability in engineering student teams, and b) What is the relationship among team charter quality, teamwork mental model similarity, and viability in engineering student teams? Thirty-eight intact teams, 23 treatment and 15 comparison, participated in the investigation. Treatment teams attended a team charter lecture and completed a team charter homework assignment. Each team charter was assessed and assigned a quality score. Comparison teams did not join the lecture and were not asked to create a team charter. All teams completed each data collection phase: a) similarity rating pretest; b) similarity posttest; and c) team viability survey. Findings indicate that team viability was higher in teams that attended the lecture and completed the charter assignment. Teams with higher quality team charter scores reported higher levels of team viability than teams with lower quality charter scores. Lastly, no evidence was found to support teamwork mental model similarity as a partial mediator of the effect of team charter quality on team viability.
Systematic approach to verification and validation: High explosive burn models
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory
2012-04-16
Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data are scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form, which is needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code
Kato, Junko; Okada, Kensuke
2011-01-01
Perceiving differences by means of spatial analogies is intrinsic to human cognition. Multi-dimensional scaling (MDS) analysis based on Minkowski geometry has been used primarily on data from sensory similarity judgments, leaving judgments of abstract differences unanalyzed. Indeed, analysts have failed to find appropriate experimental or real-life data in this regard. Our MDS analysis used survey data on political scientists' judgments of the similarities and differences between political positions expressed in terms of distance. Both distance-smoothing and majorization techniques were applied to a three-way dataset of similarity judgments provided by at least seven experts on at least five parties' positions on at least seven policies (i.e., originally yielding 245 dimensions) to substantially reduce the risk of local minima. The analysis found two dimensions, which were sufficient for mapping differences, and the city-block metric fit better than the Euclidean metric in all datasets obtained from 13 countries. Most city-block dimensions were highly correlated with the simplified criterion (i.e., the left-right ideology) for differences that is actually used in real politics. The isometry of the city-block and dominance metrics in two-dimensional space carries further implications. More specifically, individuals may pay attention to two dimensions (if represented in the city-block metric) or focus on a single dimension (if represented in the dominance metric) when judging differences between the same objects. Switching between metrics may be expected to occur during cognitive processing as frequently as the apparent discontinuities and shifts in human attention that underlie changing judgments in real situations. Consequently, the results lend strong support to the validity of geometric models for representing an important social cognition, namely that of political differences, which is deeply rooted in human nature.
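The city-block/dominance isometry invoked above can be illustrated with a short sketch (hypothetical code, not from the paper): in two dimensions, the map (x, y) → (x + y, x − y) turns city-block (L1) distances into dominance (L∞, Chebyshev) distances exactly.

```python
import math

def minkowski(a, b, p):
    """Minkowski distance: p=1 city-block, p=2 Euclidean, p=math.inf dominance."""
    diffs = [abs(x - y) for x, y in zip(a, b)]
    return max(diffs) if p == math.inf else sum(d ** p for d in diffs) ** (1.0 / p)

def rotate45(point):
    """Map (x, y) -> (x + y, x - y): a scaled 45-degree rotation of the plane."""
    x, y = point
    return (x + y, x - y)

# In 2D, the city-block distance between two points equals the dominance
# distance between their rotated images: |dx|+|dy| = max(|dx+dy|, |dx-dy|).
a, b = (1.0, 2.0), (4.0, -1.0)
d_city = minkowski(a, b, 1)
d_dom = minkowski(rotate45(a), rotate45(b), math.inf)
```

This identity is why a two-dimensional city-block representation and a dominance representation of the same judgments cannot be distinguished by fit alone.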
Similar pattern of peripheral neuropathy in mouse models of type 1 diabetes and Alzheimer’s disease
Jolivalt, Corinne G.; Calcutt, Nigel A.; Masliah, Eliezer
2011-01-01
There is an increasing awareness that diabetes has an impact on the CNS and that diabetes is a risk factor for Alzheimer’s disease (AD). Links between AD and diabetes point to impaired insulin signaling as a common mechanism leading to defects in the brain. However, diabetes is predominantly characterized by peripheral, rather than central, neuropathy and, despite the common central mechanisms linking AD and diabetes, little is known about the effect of AD on the PNS. In this study, we compared indices of peripheral neuropathy and investigated insulin signaling in the sciatic nerve of insulin-deficient mice and APP overexpressing transgenic mice. Insulin-deficient and APP transgenic mice displayed similar patterns of peripheral neuropathy, with decreased motor nerve conduction velocity, thermal hypoalgesia and loss of tactile sensitivity. Phosphorylation of the insulin receptor and GSK3β was similarly affected in insulin-deficient and APP transgenic mice despite significantly different blood glucose and plasma insulin levels, and the nerves of both models showed accumulation of Aβ-immunoreactive protein. Although diabetes and AD have different primary etiologies, both diseases share many abnormalities in both the brain and the PNS. Our data point to common deficits in the insulin-signaling pathway in both neurodegenerative diseases and support the idea that AD may cause disorders outside the higher CNS. PMID:22178988
A Nonhydrostatic Model Based On A New Approach
Janjic, Z. I.
Considerable experience with nonhydrostatic models has been accumulated on the scales of convective clouds and storms. However, numerical weather prediction (NWP) deals with motions on a much wider range of temporal and spatial scales. Thus, difficulties that may not be significant on the small scales may become important in NWP applications. With these considerations in mind, a new approach has been proposed and applied in developing nonhydrostatic models intended for NWP applications. Namely, instead of extending the cloud models to synoptic scales, the hydrostatic approximation is relaxed in a hydrostatic NWP model. In this way the model validity is extended to nonhydrostatic motions, and at the same time favorable features of the hydrostatic formulation are preserved. In order to apply this approach, the system of nonhydrostatic equations is split into two parts: (a) the part that corresponds to the hydrostatic system, except for corrections due to vertical acceleration, and (b) the system of equations that allows computation of the corrections appearing in the first system. This procedure does not require any additional approximation. In the model, "isotropic" horizontal finite differencing is employed that conserves a number of basic and derived dynamical and quadratic quantities. The hybrid pressure-sigma vertical coordinate has been chosen as the primary option. The forward-backward scheme is used for horizontally propagating fast waves, and an implicit scheme is used for vertically propagating sound waves. The Adams-Bashforth scheme is applied for the advection of the basic dynamical variables and for the Coriolis terms. In real data runs, the nonhydrostatic dynamics does not require extra computational boundary conditions at the top. The philosophy of the physical package and possible future developments of physical parameterizations are also reviewed. A two-dimensional model based on the described approach successfully reproduced classical
A Bayesian modeling approach for generalized semiparametric structural equation models.
Song, Xin-Yuan; Lu, Zhao-Hua; Cai, Jing-Heng; Ip, Edward Hak-Sing
2013-10-01
In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types: continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.
Quantitative versus qualitative modeling: a complementary approach in ecosystem study.
Bondavalli, C; Favilla, S; Bodini, A
2009-02-01
Natural disturbances or human perturbations act upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitude. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interactions. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a way that can be used in qualitative analysis is described in this paper and takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.
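In quantitative loop analysis, press-perturbation predictions come down to computing the negative inverse of the community matrix: entry (i, j) gives the predicted direction of change of species i under a sustained positive press on species j. A minimal two-species sketch (the prey-predator matrix values here are illustrative, not from the paper):

```python
def press_predictions(a):
    """Negative inverse of a 2x2 community matrix `a`.
    Entry (i, j) = predicted change in species i under a sustained
    positive press on species j (standard loop-analysis result)."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    inverse = [[a[1][1] / det, -a[0][1] / det],
               [-a[1][0] / det, a[0][0] / det]]
    return [[-v for v in row] for row in inverse]

# Hypothetical prey (0) / predator (1) community:
# self-limitation on the diagonal, predation off-diagonal.
community = [[-1.0, -0.5],
             [0.5, -0.2]]
predictions = press_predictions(community)
```

With these values, a press on the prey raises both species, while a press on the predator lowers the prey, reproducing the classic indirect-effect pattern.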
Estimating similarity of XML Schemas using path similarity measure
Directory of Open Access Journals (Sweden)
Veena Trivedi
2012-07-01
Full Text Available In this paper, an attempt has been made to develop an algorithm that estimates the similarity of XML Schemas using multiple similarity measures. To perform the task, the XML Schema element information is represented in the form of strings and four different similarity measure approaches are employed. To further improve the similarity measure, an overall similarity measure is also calculated. The approach used in this paper is distinctive, as it calculates the similarity between two XML Schemas using four approaches and gives an integrated value for the similarity measure.
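The idea of combining several path-based measures into one integrated score can be sketched as follows (a hypothetical illustration: the two measures and the unweighted mean are stand-ins, since the paper's four specific measures are not given here):

```python
def path_tokens(path):
    """Split an XML Schema path like '/schema/person/name' into element tokens."""
    return [t for t in path.split("/") if t]

def token_jaccard(p1, p2):
    """Set overlap of path elements, ignoring order."""
    a, b = set(path_tokens(p1)), set(path_tokens(p2))
    return len(a & b) / len(a | b) if a | b else 1.0

def prefix_ratio(p1, p2):
    """Length of the shared leading path, relative to the longer path."""
    a, b = path_tokens(p1), path_tokens(p2)
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    longest = max(len(a), len(b))
    return n / longest if longest else 1.0

def overall_similarity(p1, p2, measures=(token_jaccard, prefix_ratio)):
    # integrated score: unweighted mean of the individual measures
    return sum(m(p1, p2) for m in measures) / len(measures)
```

Weighting the measures, as an overall similarity measure typically does, would replace the plain mean with a weighted one.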
Clustering by Pattern Similarity
Institute of Scientific and Technical Information of China (English)
Hai-xun Wang; Jian Pei
2008-01-01
The task of clustering is to identify classes of similar objects among a set of objects. The definition of similarity varies from one clustering model to another. However, in most of these models the concept of similarity is often based on such metrics as Manhattan distance, Euclidean distance or other Lp distances. In other words, similar objects must have close values in at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the proposed pCluster model, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. The new similarity concept models a wide range of applications. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, because it is able to capture not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers. In addition to the novel similarity model, this paper also introduces an effective and efficient algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its performance.
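Pattern coherence can be made concrete via the standard pScore of a 2x2 submatrix: two objects are coherent on a column subset if, for every pair of columns, the difference of their value differences stays below a threshold delta (a sketch of the usual pCluster definition; delta is a user-chosen parameter):

```python
def pscore(x, y, c1, c2):
    """pScore of the 2x2 submatrix formed by objects x, y and columns c1, c2:
    how far the two objects deviate from a pure shifting pattern."""
    return abs((x[c1] - x[c2]) - (y[c1] - y[c2]))

def coherent_pair(x, y, cols, delta):
    """True if x and y exhibit a coherent (shifting) pattern on `cols`,
    i.e. every 2x2 submatrix has pScore <= delta."""
    return all(pscore(x, y, a, b) <= delta
               for i, a in enumerate(cols) for b in cols[i + 1:])
```

For example, expression rows [1, 3, 5] and [4, 6, 8] rise in lockstep (every pScore is 0) and form a pCluster pair even though their magnitudes differ by 3.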
A computational toy model for shallow landslides: Molecular dynamics approach
Martelloni, Gianluca; Bagnoli, Franco; Massaro, Emanuele
2013-09-01
The aim of this paper is to propose a 2D computational algorithm for modeling the triggering and propagation of shallow landslides caused by rainfall. We used a molecular dynamics (MD) approach, similar to the discrete element method (DEM), that is suitable for modeling granular material and for observing the trajectory of a single particle, so as to identify its dynamical properties. We consider that the triggering of shallow landslides is caused by the decrease of the static friction along the sliding surface due to water infiltration by rainfall. Hence, triggering is governed by the two following conditions: (a) a threshold speed of the particles and (b) a condition on the static friction, between the particles and the slope surface, based on the Mohr-Coulomb failure criterion. The latter static condition is used in the geotechnical model to estimate the possibility of landslide triggering. The interaction force between particles is modeled, in the absence of experimental data, by means of a potential similar to the Lennard-Jones one. Viscosity is also introduced in the model, and for a large range of values of the model's parameters we observe a characteristic velocity pattern, with acceleration increments, typical of real landslides. The results of the simulations are quite promising: the energy and triggering-time distributions of local avalanches show a power-law distribution, analogous to the observed Gutenberg-Richter and Omori power-law distributions for earthquakes. Finally, it is possible to apply the method of the inverse surface displacement velocity [4] for predicting the failure time.
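The two triggering conditions and the Lennard-Jones-style interaction can be sketched as simple predicates (illustrative only: the parameter values, the standard 12-6 force law, and the conjunction of conditions (a) and (b) are assumptions, not the paper's calibrated model):

```python
import math

def lj_like_force(r, epsilon=1.0, sigma=1.0):
    """Magnitude of a standard 12-6 Lennard-Jones force (positive = repulsive);
    stands in for the paper's LJ-like interparticle potential."""
    return 24.0 * epsilon * (2.0 * (sigma / r) ** 13 - (sigma / r) ** 7) / sigma

def mohr_coulomb_fails(shear_stress, normal_stress, cohesion, phi_deg):
    """Condition (b): static friction on the slope surface is exceeded."""
    return abs(shear_stress) > cohesion + normal_stress * math.tan(math.radians(phi_deg))

def particle_triggers(speed, v_threshold, shear, normal, cohesion, phi_deg):
    """A particle is mobilised when condition (a), the threshold speed,
    and condition (b), Mohr-Coulomb failure, both hold (assumed combination)."""
    return speed > v_threshold and mohr_coulomb_fails(shear, normal, cohesion, phi_deg)
```

The LJ-like force vanishes at the equilibrium spacing r = 2^(1/6)·sigma, which is where neighbouring particles would rest before rainfall degrades the frictional condition.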
Modeling and simulation of self-similar traffic based on FBM model
Institute of Scientific and Technical Information of China (English)
卢颖; 裴承艳; 陈子辰; 康凤举
2011-01-01
Network traffic models are an important basis for network planning and performance evaluation. Conventional models are mostly based on the Poisson model and Markovian queueing models, which exhibit only short-range dependence. As network services have continued to develop, studies have found that actual network traffic exhibits long-range dependence (LRD) over very long time scales, which is a form of self-similarity. In this paper, the RMD algorithm and the Fourier transform method were adopted to model and simulate the FBM model, a self-similar traffic model, generating the required self-similar traffic sequences. The R/S method and the variance-time method were then used to estimate the Hurst parameter of the generated traffic sequences in order to verify their self-similarity. The experiments confirmed the existence of self-similarity in the generated sequences, and the advantages and disadvantages of the RMD algorithm and the Fourier transform method were analyzed.
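The two ingredients described above can be sketched in a few dozen lines: an approximate fractional Brownian motion trace generated by random midpoint displacement (RMD), and a Hurst estimate of its increments by the R/S method (both in standard textbook form; the level count, seed and chunk sizes are illustrative choices, not the paper's settings):

```python
import math
import random

def fbm_rmd(hurst, levels, seed=0):
    """Approximate fBm on 2**levels + 1 points by random midpoint displacement.
    Midpoint noise shrinks as 0.5**(level*hurst), the standard RMD recursion."""
    rng = random.Random(seed)
    n = 2 ** levels
    x = [0.0] * (n + 1)
    x[n] = rng.gauss(0.0, 1.0)
    scale = math.sqrt(1.0 - 2.0 ** (2.0 * hurst - 2.0))
    for level in range(1, levels + 1):
        step = n // 2 ** (level - 1)
        sd = scale * 0.5 ** (level * hurst)
        for start in range(0, n, step):
            mid = start + step // 2
            x[mid] = 0.5 * (x[start] + x[start + step]) + rng.gauss(0.0, sd)
    return x

def rs_hurst(series, min_chunk=8):
    """Estimate the Hurst parameter as the log-log slope of R/S vs. chunk size."""
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        ratios = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            cum, cums = 0.0, []
            for v in chunk:
                cum += v - mean
                cums.append(cum)
            rng_ = max(cums) - min(cums)
            std = (sum((v - mean) ** 2 for v in chunk) / size) ** 0.5
            if std > 0:
                ratios.append(rng_ / std)
        if ratios:
            sizes.append(size)
            rs_vals.append(sum(ratios) / len(ratios))
        size *= 2
    lx = [math.log(s) for s in sizes]
    ly = [math.log(v) for v in rs_vals]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

trace = fbm_rmd(0.8, 10, seed=42)                      # fBm sample path
increments = [trace[i + 1] - trace[i] for i in range(len(trace) - 1)]
h_est = rs_hurst(increments)                           # should sit near 0.8
```

The variance-time check mentioned in the abstract would be analogous: fit the slope of log-variance of aggregated increments against log-aggregation level.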
The semantic similarity ensemble
Directory of Open Access Journals (Sweden)
Andrea Ballatore
2013-12-01
Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus, selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of their members. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
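In its simplest form, the panel-of-experts idea reduces to averaging the member measures' scores. A toy sketch (the two generic token-overlap measures below stand in for real geo-semantic measures; the unweighted mean is one possible composition):

```python
def jaccard(a, b):
    """Token-overlap 'expert' 1: Jaccard coefficient on word sets."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def dice(a, b):
    """Token-overlap 'expert' 2: Dice coefficient on word sets."""
    sa, sb = set(a.split()), set(b.split())
    return 2 * len(sa & sb) / (len(sa) + len(sb))

def ensemble_similarity(measures, a, b):
    # each measure acts as one expert on the panel;
    # the ensemble score is the mean of their judgments
    return sum(m(a, b) for m in measures) / len(measures)
```

Evaluating such an ensemble against human judgments, as the paper does, would then compare the ensemble's ranking of term pairs with the human ranking.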
An integrated approach to permeability modeling using micro-models
Energy Technology Data Exchange (ETDEWEB)
Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)
2008-10-15
An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modelling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modelling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modelling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. The resulting model was in good agreement with the experimental data. 8 refs., 17 figs.
Volpes, L
2015-01-01
We present an application of the stereoscopic self-similar-expansion model (SSSEM) to Solar Terrestrial Relations Observatory (STEREO)/Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI) observations of the 03 April 2010 CME and its associated shock. The aim is to verify whether CME-driven shock parameters can be inferred from the analysis of j-maps. For this purpose we use the SSSEM to derive the CME and the shock kinematics. Arrival times and speeds, inferred assuming either propagation at constant speed or with uniform deceleration, show good agreement with Advanced Composition Explorer (ACE) measurements. The shock standoff distance $[\\Delta]$, the density compression $[\\frac{\\rho_d}{\\rho_u}]$ and the Mach number $[M]$ are calculated combining the results obtained for the CME and shock kinematics with models for the shock location. Their values are extrapolated to $\\textrm{L}_1$ and compared to in-situ data. The in-situ standoff distance is obtained from ACE solar-wind measurements, and t...
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be a...
A Markovian approach for modeling packet traffic with long range dependence
DEFF Research Database (Denmark)
Andersen, Allan T.; Nielsen, Bo Friis
1998-01-01
-state Markov modulated Poisson processes (MMPPs). We illustrate that a superposition of four two-state MMPPs suffices to model second-order self-similar behavior over several time scales. Our modeling approach allows us to fit to additional descriptors while maintaining the second-order behavior...
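A two-state MMPP, the building block referred to above, is straightforward to simulate with competing exponential clocks, and a superposition simply merges the arrival streams (a sketch; the rate values in the usage example are illustrative, not fitted descriptors):

```python
import random

def simulate_mmpp2(lam, switch, t_end, seed=0):
    """Arrival times of a two-state MMPP on [0, t_end).
    lam[i]    = Poisson arrival rate while the modulating chain is in state i,
    switch[i] = rate of leaving state i.
    Uses competing exponentials: the next event is an arrival with
    probability lam[state] / (lam[state] + switch[state])."""
    rng = random.Random(seed)
    t, state, arrivals = 0.0, 0, []
    while True:
        total = lam[state] + switch[state]
        t += rng.expovariate(total)
        if t >= t_end:
            return arrivals
        if rng.random() < lam[state] / total:
            arrivals.append(t)       # Poisson arrival in the current state
        else:
            state = 1 - state        # modulating chain switches state

def superpose(streams):
    """Merge several MMPP arrival streams into one aggregate traffic trace."""
    return sorted(t for s in streams for t in s)

# Four two-state MMPPs whose switching rates are spread over different
# time scales, mimicking second-order self-similar behaviour over a range
# of scales (hypothetical parameter choices).
streams = [simulate_mmpp2((5.0, 0.5), (10.0 ** -k, 10.0 ** -k), 100.0, seed=k)
           for k in range(4)]
trace = superpose(streams)
```

Counting arrivals of `trace` in fixed-width bins would give the counting process whose variance-over-scales behaviour the fitting procedure targets.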
Directory of Open Access Journals (Sweden)
Merler Stefano
2010-06-01
breakdown analysis shows that similar attack rates are obtained for the younger age classes. Conclusions The good agreement between the two modeling approaches is very important for defining the tradeoff between data availability and the information provided by the models. The results we present define the possibility of hybrid models combining the agent-based and the metapopulation approaches according to the available data and computational resources.
EMMD-Prony approach for dynamic validation of simulation models
Institute of Scientific and Technical Information of China (English)
Ruiyang Bai
2015-01-01
Model validation and updating is critical to model credibility growth. In order to assess model credibility quantitatively and locate model error precisely, a new dynamic validation method based on extremum field mean mode decomposition (EMMD) and the Prony method is proposed in this paper. Firstly, complex dynamic responses from models and real systems are processed into stationary components by EMMD. These components always have definite physical meanings, which provide evidence for rough model error location. Secondly, the Prony method is applied to identify the features of each EMMD component. Amplitude similarity, frequency similarity, damping similarity and phase similarity are defined to describe the similarity of dynamic responses. Then quantitative validation metrics are obtained based on the improved entropy weight and energy proportion. Precise model error location is realized based on the physical meanings of these features. The application of this method in aircraft controller design provides evidence of its feasibility and usability.
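For a single damped sinusoid, the Prony identification step reduces to solving a 2x2 linear-prediction system and factoring the characteristic polynomial; the pole then yields the damping and frequency features directly. A noiseless sketch of this generic Prony step (not the paper's full EMMD-Prony pipeline):

```python
import cmath
import math

def prony_order2(x):
    """Recover (decay factor r, angular frequency w) of x[n] = r**n * cos(w*n)
    from four noiseless samples via order-2 linear prediction
    x[n] = -a1*x[n-1] - a2*x[n-2]."""
    # Cramer's rule on the two prediction equations for n = 2, 3.
    det = x[1] * x[1] - x[0] * x[2]
    a1 = (x[0] * x[3] - x[1] * x[2]) / det
    a2 = (x[2] * x[2] - x[1] * x[3]) / det
    # Characteristic polynomial z**2 + a1*z + a2 has the conjugate poles
    # r*exp(+/- i*w); take the root in the upper half-plane.
    root = (-a1 + cmath.sqrt(a1 * a1 - 4.0 * a2)) / 2.0
    return abs(root), abs(cmath.phase(root))

r_true, w_true = 0.9, 0.5
samples = [r_true ** n * math.cos(w_true * n) for n in range(4)]
r_est, w_est = prony_order2(samples)
```

Higher-order Prony fits, as needed for multi-component EMMD outputs, generalize this to a larger least-squares prediction system and a polynomial root-finder.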
Connectivity of channelized reservoirs: a modelling approach
Energy Technology Data Exchange (ETDEWEB)
Larue, David K. [ChevronTexaco, Bakersfield, CA (United States); Hovadik, Joseph [ChevronTexaco, San Ramon, CA (United States)
2006-07-01
Connectivity represents one of the fundamental properties of a reservoir that directly affects recovery. If a portion of the reservoir is not connected to a well, it cannot be drained. Geobody or sandbody connectivity is defined as the percentage of the reservoir that is connected, and reservoir connectivity is defined as the percentage of the reservoir that is connected to wells. Previous studies have mostly considered mathematical, physical and engineering aspects of connectivity. In the current study, the stratigraphy of connectivity is characterized using simple, 3D geostatistical models. Based on these modelling studies, stratigraphic connectivity is good, usually greater than 90%, if the net:gross ratio, or sand fraction, is greater than about 30%. At net:gross values less than 30%, there is a rapid diminishment of connectivity as a function of net:gross. This behaviour between net:gross and connectivity defines a characteristic 'S-curve', in which the connectivity is high for net:gross values above 30%, then diminishes rapidly and approaches 0. Well configuration factors that can influence reservoir connectivity are well density, well orientation (vertical or horizontal; horizontal parallel to channels or perpendicular) and length of completion zones. Reservoir connectivity as a function of net:gross can be improved by several factors: presence of overbank sandy facies, deposition of channels in a channel belt, deposition of channels with high width/thickness ratios, and deposition of channels during variable floodplain aggradation rates. Connectivity can be reduced substantially in two-dimensional reservoirs, in map view or in cross-section, by volume support effects and by stratigraphic heterogeneities. It is well known that in two dimensions, the cascade zone for the 'S-curve' of net:gross plotted against connectivity occurs at about 60% net:gross. Generalizing this knowledge, any time that a reservoir can be regarded as
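The qualitative shape of the net:gross versus connectivity 'S-curve' can be reproduced with a toy percolation-style experiment: fill a 2D sand/shale grid at a given sand fraction and measure the largest connected geobody by flood fill (a hypothetical sketch, unrelated to the authors' geostatistical models; grid size and seed are arbitrary):

```python
import random
from collections import deque

def random_grid(rows, cols, net_gross, seed=0):
    """Binary sand(1)/shale(0) grid with the given expected sand fraction."""
    rng = random.Random(seed)
    return [[1 if rng.random() < net_gross else 0 for _ in range(cols)]
            for _ in range(rows)]

def connected_fraction(grid):
    """Fraction of sand cells in the largest connected geobody
    (4-neighbour connectivity), found by breadth-first flood fill."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    total = sum(v for row in grid for v in row)
    if total == 0:
        return 0.0
    largest = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] and not seen[i][j]:
                seen[i][j] = True
                size, queue = 0, deque([(i, j)])
                while queue:
                    a, b = queue.popleft()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < rows and 0 <= nb < cols
                                and grid[na][nb] and not seen[na][nb]):
                            seen[na][nb] = True
                            queue.append((na, nb))
                largest = max(largest, size)
    return largest / total

# Sand fractions below and above the 2D percolation threshold (~0.59 for
# 4-neighbour site percolation, consistent with the ~60% cascade zone above).
f_low = connected_fraction(random_grid(50, 50, 0.3, seed=1))
f_high = connected_fraction(random_grid(50, 50, 0.9, seed=1))
```

Sweeping `net_gross` from 0 to 1 and plotting `connected_fraction` traces out the S-curve, with the 2D cascade near 60% net:gross as the abstract notes.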
Directory of Open Access Journals (Sweden)
Hiroshi Takasaki
2014-06-01
Full Text Available Questions: In people with neck pain, does Mechanical Diagnosis and Therapy (MDT) reduce pain and disability more than ‘wait and see’? Does MDT reduce pain and disability more than other interventions? Are any differences in effect clinically important? Design: Systematic review of randomised trials with meta-analysis. Participants: People with neck pain. Intervention: MDT. Outcome measures: Pain intensity and disability due to neck pain in the short (< 3 months), intermediate (< 1 year) and long term (≥ 1 year). Results: Five trials were included. Most comparisons demonstrated mean differences in effect that favoured MDT over wait-and-see controls or other interventions, although most were statistically non-significant. For pain, all comparisons had a 95% confidence interval (CI) with lower limits that were less than 20 on a scale of 0 to 100, which suggests that the difference may not be clinically important. For disability, even the upper limits of the 95% CI were below this threshold, confirming that the differences are not clinically important. In all of the trials, some or all of the treating therapists did not have the highest level of MDT training. Conclusion: The additional benefit of MDT compared with the wait-and-see approach or other therapeutic approaches may not be clinically important in terms of pain intensity and is not clinically important in terms of disability. However, these estimates of the effect of MDT may reflect suboptimal training of the treating therapists. Further research could improve the precision of the estimates and assess whether the extent of training in MDT influences its effect. [Takasaki H, May S (2014) Mechanical Diagnosis and Therapy has similar effects on pain and disability as ‘wait and see’ and other approaches in people with neck pain: a systematic review. Journal of Physiotherapy 60: 78–84].
Directory of Open Access Journals (Sweden)
Samantha Sevenhuysen
2014-12-01
Full Text Available Question: What is the efficacy and acceptability of a peer-assisted learning model compared with a traditional model for paired students in physiotherapy clinical education? Design: Prospective, assessor-blinded, randomised crossover trial. Participants: Twenty-four physiotherapy students in the third year of a 4-year undergraduate degree. Intervention: Participants each completed 5 weeks of clinical placement, utilising a peer-assisted learning model (a standardised series of learning activities undertaken by student pairs and educators to facilitate peer interaction using guided strategies) and a traditional model (usual clinical supervision and learning activities led by clinical educators supervising pairs of students). Outcome measures: The primary outcome measure was student performance, rated on the Assessment of Physiotherapy Practice by a blinded assessor, the supervising clinical educator and by the student in self-assessment. Secondary outcome measures were satisfaction with the teaching and learning experience measured via survey, and statistics on services delivered. Results: There were no significant between-group differences in Assessment of Physiotherapy Practice scores as rated by the blinded assessor (p = 0.43), the supervising clinical educator (p = 0.94) or the students (p = 0.99). In peer-assisted learning, clinical educators had an extra 6 minutes/day available for non-student-related quality activities (95% CI 1 to 10) and students received an additional 0.33 entries/day of written feedback from their educator (95% CI 0.06 to 0.61). Clinical educator satisfaction and student satisfaction were higher with the traditional model. Conclusion: The peer-assisted learning model trialled in the present study produced similar student performance outcomes when compared with a traditional approach. Peer-assisted learning provided some benefits to educator workload and student feedback, but both educators and students were more
A model-data based systems approach to process intensification
DEFF Research Database (Denmark)
Gani, Rafiqul
In recent years process intensification (PI) has attracted much interest as a potential means of process improvement to meet the demands, such as, for sustainable production. A variety of intensified equipment are being developed that potentially creates options to meet these demands...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model....... Here, established procedures for computer aided molecular design is adopted since combination of phenomena to form unit operations with desired objectives is, in principle, similar to combining atoms to form molecules with desired properties. The concept of the phenomena-based synthesis/design method...
Boutelier, D. A.; Schrank, C.; Cruden, A. R.
2006-12-01
The selection of appropriate analogue materials is a central consideration in the design of realistic physical models. Hence, information on the rheology of materials and potential materials is essential to evaluate their suitability as rock analogues. Silicone polymers have long been used to model ductile rocks that deform by diffusion or dislocation creep. Temperature and compositional variations that control the effective viscosity and density of rocks in the crust and mantle are simulated in the laboratory by multiple layers of various silicone polymers mixed with granular fillers, plasticines or bouncing putties. Since dislocation creep is a power-law, strain-rate-softening flow mechanism, we have been investigating the rheology of highly filled silicone polymers as suitable new analogue materials with similar deformation behavior. The materials indeed exhibit strain-rate-softening behavior, but with increasing amounts of filler the mixtures also become non-linear. We report the rheological properties of the analogue materials as functions of the filler content. For the linear viscous materials the flow laws are presented (viscosity coefficient and power-law exponent). For non-linear materials, the relative importance of strain and strain-rate softening/hardening has been investigated by performing multiple creep tests that allow mapping of the effective viscosity in stress-strain space. Our study reveals that most of the currently used silicone-based analogue materials have a linear or quasi-linear rheology, i.e., they behave as Newtonian or nearly Newtonian viscous fluids, which makes them more appropriate for simulating natural rocks deforming by diffusion creep.
Energy Technology Data Exchange (ETDEWEB)
Brooking, C. [Univ. of Bath (United Kingdom)]
1996-12-31
Process engineering software is used to simulate the operation of large chemical plants. Such simulations are used for a variety of tasks, including operator training. For the software to be of practical use for this, dynamic simulations need to run in real-time. The models that the simulation is based upon are written in terms of Differential Algebraic Equations (DAEs). In the numerical time-integration of systems of DAEs using an implicit method such as backward Euler, the solution of nonlinear systems is required at each integration point. When solved using Newton's method, this leads to the repeated solution of nonsymmetric sparse linear systems. These systems range in size from 500 to 20,000 variables. A typical integration may require around 3000 timesteps, and if 4 Newton iterates were needed on each time step, then approximately 12,000 linear systems must be solved. The matrices produced by the simulations have a similar sparsity pattern throughout the integration. They are also severely ill-conditioned, and have widely-scattered spectra.
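The scheme described above (backward Euler with a Newton solve at every step) can be sketched as follows. The test problem, step count and iteration count are illustrative, not taken from the plant simulations; a production DAE code would replace the dense solve with a sparse LU factorisation that exploits the constant sparsity pattern:

```python
import numpy as np

def backward_euler_newton(f, jac, y0, t0, t1, steps, newton_iters=4):
    """Integrate y' = f(t, y) with backward Euler, using Newton's method at
    each step. Every Newton iterate solves a linear system with the matrix
    I - h*J, mirroring the repeated solves described above (dense here for
    brevity; a plant simulator would use a sparse factorisation)."""
    h = (t1 - t0) / steps
    y, t = np.array(y0, dtype=float), t0
    I = np.eye(len(y))
    for _ in range(steps):
        t_new, y_new = t + h, y.copy()  # initial guess: previous value
        for _ in range(newton_iters):
            residual = y_new - y - h * f(t_new, y_new)
            A = I - h * jac(t_new, y_new)  # sparsity pattern is constant
            y_new -= np.linalg.solve(A, residual)
        y, t = y_new, t_new
    return y

# Illustrative stiff test problem (not a plant model): y' = -50*(y - cos(t)).
f = lambda t, y: -50.0 * (y - np.cos(t))
jac = lambda t, y: np.array([[-50.0]])
y_end = backward_euler_newton(f, jac, [1.0], 0.0, 1.0, 100)
print(y_end)
```

An explicit method would need a far smaller step for this stiff problem, which is why implicit methods (and hence repeated linear solves) dominate in DAE integration.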
Minemoto, Takashi; Murata, Masashi
2014-08-01
Device modeling of CH3NH3PbI3-xClx perovskite-based solar cells was performed. The perovskite solar cells employ a structure similar to that of inorganic semiconductor solar cells, such as Cu(In,Ga)Se2, and the exciton in the perovskite is of the Wannier type. We therefore applied a one-dimensional device simulator widely used for Cu(In,Ga)Se2 solar cells. A high open-circuit voltage of 1.0 V reported experimentally was successfully reproduced in the simulation, and other solar cell parameters consistent with real devices were also obtained. In addition, the effects of the carrier diffusion length of the absorber, the interface defect densities at the front and back sides, and the optimum thickness of the absorber were analyzed. The results revealed that the experimentally reported diffusion length is long enough for high efficiency, and that the defect density at the front interface is critical for high efficiency. Also, an optimum absorber thickness consistent with the thickness range of real devices was derived.
Modeling healthcare authorization and claim submissions using the openEHR dual-model approach
Directory of Open Access Journals (Sweden)
Freire Sergio M
2011-10-01
Full Text Available Abstract Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing
Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches
Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward
2015-01-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
Energy Technology Data Exchange (ETDEWEB)
Schatzinger, R.A.; Szpakiewicz, M.J.; Jackson, S.R.; Chang, M.M.; Sharma, B.; Tham, M.K.; Cheng, A.M.
1992-04-01
The Reservoir Assessment and Characterization Research Program at NIPER employs an interdisciplinary approach that focuses on the high-priority reservoir class of shoreline barrier deposits to: (1) determine the problems specific to this class of reservoirs by identifying the reservoir heterogeneities that influence the movement and trapping of fluids; and (2) develop methods to effectively characterize this class of reservoirs in order to predict residual oil saturation (ROS) on interwell scales and improve prediction of the flow patterns of injected and produced fluids. Accurate descriptions of the spatial distribution of critical reservoir parameters (e.g., permeability, porosity, pore geometry, mineralogy, and oil saturation) are essential for designing and implementing processes to improve sweep efficiency and thereby increase oil recovery. The methodologies and models developed in this program will, in the near- to mid-term, assist producers in the implementation of effective reservoir management strategies, such as location of infill wells and selection of optimum enhanced oil recovery methods, to maximize oil production from their reservoirs.
Gebrerufael, Eskendr; Hergert, Heiko; Roth, Robert
2016-01-01
We merge two successful ab initio nuclear-structure methods, the no-core shell model (NCSM) and the multi-reference in-medium similarity renormalization group (IM-SRG), to define a new many-body approach for the comprehensive description of ground and excited states of closed- and open-shell nuclei. Building on the key advantages of the two methods, namely the decoupling of excitations at the many-body level in the IM-SRG and the access to arbitrary nuclei, eigenstates, and observables in the NCSM, their combination enables fully converged no-core calculations for an unprecedented range of nuclei and observables at moderate computational cost. We present applications in the carbon and oxygen isotopic chains, where conventional NCSM calculations are still feasible and provide an important benchmark. The efficiency and rapid convergence of the new approach make it ideally suited for ab initio studies of the complete spectroscopy of nuclei up into the medium-mass regime.
Gebrerufael, Eskendr; Vobig, Klaus; Hergert, Heiko; Roth, Robert
2017-04-14
We merge two successful ab initio nuclear-structure methods, the no-core shell model (NCSM) and the multireference in-medium similarity renormalization group (IM-SRG), to define a new many-body approach for the comprehensive description of ground and excited states of closed- and open-shell nuclei. Building on the key advantages of the two methods, namely the decoupling of excitations at the many-body level in the IM-SRG and the access to arbitrary nuclei, eigenstates, and observables in the NCSM, their combination enables fully converged no-core calculations for an unprecedented range of nuclei and observables at moderate computational cost. We present applications in the carbon and oxygen isotopic chains, where conventional NCSM calculations are still feasible and provide an important benchmark. The efficiency and rapid convergence of the new approach make it ideally suited for ab initio studies of the complete spectroscopy of nuclei up into the medium-mass regime.
Zeng, Ming; Li, Zhi-Yong; Ma, Jin; Cao, Ping-Ping; Wang, Heng; Cui, Yong-Hua; Liu, Zheng
2015-06-06
, epidermal growth factor, basic fibroblast growth factor, platelet-derived growth factor, vascular endothelial growth factor, and matrix metalloproteinase 9) was suppressed to a comparable extent in the different phenotypes of CRS by dexamethasone and clarithromycin. Contrary to our expectation, our explant model study discovered that glucocorticoids and macrolides likely exert similar regulatory actions on CRS and that most of their effects do not vary by CRS phenotype.
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation, as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to building and validating computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of these areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
Predicting future glacial lakes in Austria using different modelling approaches
Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus
2017-04-01
Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high-resolution glacier surface data allows prediction of the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps, we apply different ice thickness models using high-resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed with a focus on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
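The core operation, subtracting modelled ice thickness from the glacier surface and locating overdeepenings in the resulting bed, can be sketched in a few lines. The 1-D profile below is synthetic (not survey data); real applications work on 2-D DEMs with hydrological sink-filling:

```python
import numpy as np

# Hypothetical 1-D glacier centreline (metres), not survey data:
# surface elevations and modelled ice thickness along the flowline.
surface   = np.array([3000, 2950, 2900, 2870, 2850, 2830, 2800, 2750, 2700], float)
thickness = np.array([  10,   60,  140,  180,  160,  100,   60,   30,    0], float)

bed = surface - thickness  # modelled glacier-bed topography

# Overdeepenings: bed cells that would hold water once the ice is gone.
# In 1-D, the fill level at each cell is the lower of the highest bed
# elevation to its left and to its right.
left_max  = np.maximum.accumulate(bed)
right_max = np.maximum.accumulate(bed[::-1])[::-1]
lake_depth = np.clip(np.minimum(left_max, right_max) - bed, 0.0, None)

print("bed:       ", bed)
print("lake depth:", lake_depth)  # positive entries mark a potential future lake
```

In this synthetic profile the thickest part of the glacier conceals a closed depression in the bed, which the fill operation identifies as a potential future lake.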
ALREST High Fidelity Modeling Program Approach
2011-05-18
Gases and mixtures of Redlich-Kwong and Peng-Robinson fluids • Assumed-PDF model based on the k-ε-g model in the NASA/LaRC Vulcan code • Level-set model... • Potential attractiveness of liquid hydrocarbon engines for boost applications • Propensity of hydrocarbon engines for combustion instability • Air...
An approach to model validation and model-based prediction -- polyurethane foam case study.
Energy Technology Data Exchange (ETDEWEB)
Dowding, Kevin J.; Rutherford, Brian Milne
2003-07-01
Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical
Grealish, Shane; Diguet, Elsa; Kirkeby, Agnete; Mattsson, Bengt; Heuer, Andreas; Bramoulle, Yann; Van Camp, Nadja; Perrier, Anselme L; Hantraye, Philippe; Björklund, Anders; Parmar, Malin
2014-11-06
Considerable progress has been made in generating fully functional and transplantable dopamine neurons from human embryonic stem cells (hESCs). Before these cells can be used for cell replacement therapy in Parkinson's disease (PD), it is important to verify their functional properties and efficacy in animal models. Here we provide a comprehensive preclinical assessment of hESC-derived midbrain dopamine neurons in a rat model of PD. We show long-term survival and functionality using clinically relevant MRI and PET imaging techniques and demonstrate efficacy in restoration of motor function with a potency comparable to that seen with human fetal dopamine neurons. Furthermore, we show that hESC-derived dopamine neurons can project sufficiently long distances for use in humans, fully regenerate midbrain-to-forebrain projections, and innervate correct target structures. This provides strong preclinical support for clinical translation of hESC-derived dopamine neurons using approaches similar to those established with fetal cells for the treatment of Parkinson's disease. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Grealish, Shane; Diguet, Elsa; Kirkeby, Agnete; Mattsson, Bengt; Heuer, Andreas; Bramoulle, Yann; Van Camp, Nadja; Perrier, Anselme L.; Hantraye, Philippe; Björklund, Anders; Parmar, Malin
2014-01-01
Considerable progress has been made in generating fully functional and transplantable dopamine neurons from human embryonic stem cells (hESCs). Before these cells can be used for cell replacement therapy in Parkinson’s disease (PD), it is important to verify their functional properties and efficacy in animal models. Here we provide a comprehensive preclinical assessment of hESC-derived midbrain dopamine neurons in a rat model of PD. We show long-term survival and functionality using clinically relevant MRI and PET imaging techniques and demonstrate efficacy in restoration of motor function with a potency comparable to that seen with human fetal dopamine neurons. Furthermore, we show that hESC-derived dopamine neurons can project sufficiently long distances for use in humans, fully regenerate midbrain-to-forebrain projections, and innervate correct target structures. This provides strong preclinical support for clinical translation of hESC-derived dopamine neurons using approaches similar to those established with fetal cells for the treatment of Parkinson’s disease. PMID:25517469
LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL
National Research Council Canada - National Science Library
Eser ÖRDEM
2013-01-01
Abstract This study intends to propose Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings...
A model-based multisensor data fusion knowledge management approach
Straub, Jeremy
2014-06-01
A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model, which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data is considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.
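A minimal sketch of the idea, with hypothetical element names and weights rather than the paper's actual algorithms: each observation is mapped to support for or refutation of a model element, evidence is accumulated, and the thesis is then evaluated from the aggregate:

```python
import math

# Minimal sketch with hypothetical element names and weights (not the
# paper's algorithms). The thesis holds if every model element is
# supported on balance by the fused sensor evidence.
evidence = {"element_a": 0.0, "element_b": 0.0}  # accumulated log-odds

def ingest(element, supports, weight=1.0):
    """Map one sensor observation onto the model: support adds weight,
    refutation subtracts it."""
    evidence[element] += weight if supports else -weight

# Simulated sensor stream: (model element, supports?, sensor weight).
for observation in [("element_a", True, 1.0), ("element_a", True, 0.5),
                    ("element_b", True, 2.0), ("element_b", False, 0.5)]:
    ingest(*observation)

def probability(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

thesis_supported = all(probability(v) > 0.5 for v in evidence.values())
print(evidence, "-> thesis supported:", thesis_supported)
```

The same accumulated evidence could also drive data prioritization, e.g. by preferring sensors whose observations most change the model's state.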
Comparison of two novel approaches to model fibre reinforced concrete
Radtke, F.K.F.; Simone, A.; Sluys, L.J.
2009-01-01
We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity f
Modelling the World Wool Market: A Hybrid Approach
2007-01-01
We present a model of the world wool market that merges two modelling traditions: the partial-equilibrium commodity-specific approach and the computable general-equilibrium approach. The model captures the multistage nature of the wool production system, and the heterogeneous nature of raw wool, processed wool and wool garments. It also captures the important wool producing and consuming regions of the world. We illustrate the utility of the model by estimating the effects of tariff barriers o...
An algebraic approach to the Hubbard model
de Leeuw, Marius
2015-01-01
We study the algebraic structure of an integrable Hubbard-Shastry type lattice model associated with the centrally extended su(2|2) superalgebra. This superalgebra underlies Beisert's AdS/CFT worldsheet R-matrix and Shastry's R-matrix. The considered model specializes to the one-dimensional Hubbard model in a certain limit. We demonstrate that Yangian symmetries of the R-matrix specialize to the Yangian symmetry of the Hubbard model found by Korepin and Uglov. Moreover, we show that the Hubbard model Hamiltonian has an algebraic interpretation as the so-called secret symmetry. We also discuss Yangian symmetries of the A and B models introduced by Frolov and Quinn.
Numerical modelling approach for mine backfill
Indian Academy of Sciences (India)
MUHAMMAD ZAKA EMAD
2017-09-01
Numerical modelling is broadly used for assessing complex scenarios in underground mines, including mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used to exploit steeply dipping ore bodies in Canadian hard-rock metal mines. Mine backfill is an important constituent of the mining process. Numerical modelling of mine backfill material needs special attention, as the numerical model must behave realistically and in accordance with the site conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The modelling strategy is studied using a case study mine from the Canadian mining industry. In the end, results of a numerical model parametric study are shown and discussed.
Regularization of turbulence - a comprehensive modeling approach
Geurts, Bernard J.
2011-01-01
Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fl
Measuring equilibrium models: a multivariate approach
Directory of Open Access Journals (Sweden)
Nadji RAHMANIA
2011-04-01
Full Text Available This paper presents a multivariate methodology for obtaining measures of unobserved macroeconomic variables. The procedure used is the multivariate Hodrick-Prescott filter, which depends on smoothing parameters. The choice of these parameters is crucial. Our approach is based on consistent estimators of these parameters, depending only on the observed data.
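In the univariate case, the Hodrick-Prescott trend solves a simple linear system: the first-order condition of the penalised least-squares problem is (I + λDᵀD)τ = y, where D is the second-difference operator. A sketch (λ = 1600 is the conventional quarterly value; the multivariate extension discussed in the abstract is more involved):

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Univariate Hodrick-Prescott trend: minimise
    sum((y - tau)^2) + lam * sum(second differences of tau)^2.
    The first-order condition is the linear system (I + lam*D'D) tau = y."""
    n = len(y)
    D = np.zeros((n - 2, n))  # second-difference operator
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), np.asarray(y, float))

# Sanity check: a linear series has zero second differences, so the
# penalty vanishes and the series is its own trend.
y = np.arange(20, dtype=float)
trend = hp_trend(y, lam=1600.0)
print(np.allclose(trend, y))  # True
```

The key role of λ is visible in the system itself: as λ grows, the penalty term dominates and the trend approaches a straight line, which is why the choice of smoothing parameters is described as crucial.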
A graphical approach to analogue behavioural modelling
Moser, Vincent; Nussbaum, Pascal; Amann, Hans-Peter; Astier, Luc; Pellandini, Fausto
2007-01-01
In order to master the growing complexity of analogue electronic systems, modelling and simulation of analogue hardware at various levels is absolutely necessary. This paper presents an original modelling method based on the graphical description of analogue electronic functional blocks. This method is intended to be automated and integrated into a design framework: specialists create behavioural models of existing functional blocks, that can then be used through high-level selection and spec...
A geometrical approach to structural change modeling
Stijepic, Denis
2013-01-01
We propose a model for studying the dynamics of economic structures. The model is based on qualitative information regarding structural dynamics, in particular, (a) the information on the geometrical properties of trajectories (and their domains) which are studied in structural change theory and (b) the empirical information from stylized facts of structural change. We show that structural change is path-dependent in this model and use this fact to restrict the number of future structural cha...
Consumer preference models: fuzzy theory approach
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
A New Approach for Magneto-Static Hysteresis Behavioral Modeling
DEFF Research Database (Denmark)
Astorino, Antonio; Swaminathan, Madhavan; Antonini, Giulio
2016-01-01
In this paper, a new behavioral modeling approach for magneto-static hysteresis is presented. Many accurate models are currently available, but none of them seems to be able to correctly reproduce all the possible B-H paths with low computational cost. By contrast, the approach proposed...... achieved when comparing the measured and simulated results....
Nucleon Spin Content in a Relativistic Quark Potential Model Approach
Institute of Scientific and Technical Information of China (English)
DONG YuBing; FENG QingGuo
2002-01-01
Based on a relativistic quark model approach with an effective potential U(r) = (a_c/2)(1 + γ^0)r^2, the spin content of the nucleon is investigated. Pseudo-scalar interaction between quarks and Goldstone bosons is employed to calculate the couplings between the Goldstone bosons and the nucleon. Different approaches to dealing with the center-of-mass correction in the relativistic quark potential model approach are discussed.
A simple approach to modeling ductile failure.
Energy Technology Data Exchange (ETDEWEB)
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
An approach for activity-based DEVS model specification
DEFF Research Database (Denmark)
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
2016-01-01
activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...
Advanced language modeling approaches, case study: Expert search
Hiemstra, Djoerd
2008-01-01
This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
Do recommender systems benefit users? a modeling approach
Yeung, Chi Ho
2016-04-01
Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
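The abstract's observation that uninformed recommendations can be equivalent to random draws can be illustrated with a toy simulation (a hypothetical taste model, not the authors' dataset or algorithms): a recommender with no preference information hits a user's taste at roughly the random-draw rate, while a fully informed one always does:

```python
import random

random.seed(1)
n_users, n_items, n_liked = 500, 50, 10
# Hypothetical taste model: each user likes a random 10 of the 50 items.
likes = [set(random.sample(range(n_items), n_liked)) for _ in range(n_users)]

# Recommender with no preference information: it suggests the same fixed
# item to everyone. For a user who always follows it, the hit rate is the
# random-draw probability n_liked / n_items = 0.2 on average.
blind_hits = sum(0 in liked for liked in likes) / n_users

# Fully informed recommender: it suggests an item the user is known to
# like, so the hit rate is 1 by construction (the idealised upper bound).
informed_hits = sum(next(iter(liked)) in liked for liked in likes) / n_users

print(f"uninformed: {blind_hits:.2f}, informed: {informed_hits:.2f}")
```

Real systems sit between these two extremes, which is the regime where the estimated-versus-real accuracy gap discussed above matters.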
Random matrix model approach to chiral symmetry
Verbaarschot, J J M
1996-01-01
We review the application of random matrix theory (RMT) to chiral symmetry in QCD. Starting from the general philosophy of RMT we introduce a chiral random matrix model with the global symmetries of QCD. Exact results are obtained for universal properties of the Dirac spectrum: i) finite volume corrections to the valence quark mass dependence of the chiral condensate, and ii) microscopic fluctuations of Dirac spectra. Comparisons with lattice QCD simulations are made. Most notably, the variance of the number of levels in an interval containing $n$ levels on average is suppressed by a factor $(\log n)/(\pi^2 n)$. An extension of the random matrix model to nonzero temperatures and chemical potential provides us with a schematic model of the chiral phase transition. In particular, this elucidates the nature of the quenched approximation at nonzero chemical potential.
Infectious disease modeling a hybrid system approach
Liu, Xinzhi
2017-01-01
This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
Second Quantization Approach to Stochastic Epidemic Models
Mondaini, Leonardo
2015-01-01
We show how the standard field theoretical language based on creation and annihilation operators may be used for a straightforward derivation of closed master equations describing the population dynamics of multivariate stochastic epidemic models. In order to do that, we introduce an SIR-inspired stochastic model for hepatitis C virus epidemic, from which we obtain the time evolution of the mean number of susceptible, infected, recovered and chronically infected individuals in a population whose total size is allowed to change.
Dispersion modeling approaches for near road
Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion. This paper presents comparisons with the two datasets of the barrier algorithms implemented in two different dispersion models: US EPA’s R-LINE (a research dispersion modelling tool under development by the US EPA’s Office of Research and Development) and CERC’s ADMS model (ADMS-Urban). In R-LINE the physical features reveal
Flipped models in Trinification: A Comprehensive Approach
Rodríguez, Oscar; Ponce, William A; Rojas, Eduardo
2016-01-01
By considering the 3-3-1 and the left-right symmetric models as low energy effective theories of the trinification group, alternative versions of these models are found. The new neutral gauge bosons in the universal 3-3-1 model and its flipped versions are considered, as are the left-right symmetric model and its two flipped variants. For these models, the couplings of the $Z'$ bosons to the standard model fermions are reported. The explicit form of the null space of the vector boson mass matrix for an arbitrary Higgs tensor and gauge group is also presented. In the general framework of the trinification gauge group, and by using the LHC experimental results and EW precision data, limits on the $Z'$ mass and the mixing angle between $Z$ and the new gauge bosons $Z'$ are imposed. The general results call for very small mixing angles, of the order of $10^{-3}$ radians, and $M_{Z'}$ > 2.5 TeV.
Corbari, C.; Mancini, M.; Li, J.; Su, Zhongbo
2015-01-01
This study proposes a new methodology for the calibration of distributed hydrological models at basin scale by constraining an internal model variable using satellite data of land surface temperature. The model algorithm solves the system of energy and mass balances in terms of a representative equi
Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model
Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela
2014-05-01
The aim of current efforts in the European area is the protection of soil organic matter, which is included in all relevant documents related to the protection of soil. Modelling of organic carbon stocks under anticipated climate change, or under different land management, can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied over time periods of several years to centuries and has been tested in long-term experiments across a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge about the carbon pool sizes is essential. Pool size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger scale simulations. Due to this complexity, we searched for ways to simplify and accelerate the process. The paper presents a comparison of two approaches for SOC stock modelling in the same area. The modelling was carried out on the basis of individual land use, management and soil data inputs for each simulation unit separately. We modelled 1617 simulation units on a 1x1 km grid covering the agroclimatic region Žitný ostrov in the southwest of Slovakia. The first approach is the creation of groups of simulation units based on the evaluation of results for simulation units with similar input values. The groups were created after testing and validating the modelling results for individual simulation units against the results obtained with the average input values for the whole group. Tests of the equilibrium model for intervals of 5 t ha-1 of initial SOC stock showed minimal differences compared with the result for the average value of the whole interval. Management input data on plant residues and farmyard manure for modelling carbon turnover were also the same for several simulation units. Combining these groups (intervals of initial
From Cognition To Language: The Modeling Field Theory Approach
2006-10-02
tabula rasa pupil is introduced in the scenario. This procedure is iterated until convergence is achieved. In this case, the payoff (2) plays no...close inferences must be smaller than that granted for the correct inference of the intended meaning (see [9] for a similar approach). Hence the role ...played by noise in this context is similar to the role of the bottleneck transmissions in the ILM framework, since both make advantageous the
Lightweight approach to model traceability in a CASE tool
Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita
2017-07-01
The term "model-driven" is by no means a new buzzword within the system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling discussion around this paradigm and pushes researchers to develop new and more effective approaches to system development. With this increasing complexity, model traceability, and model management as a whole, become indispensable activities of the model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.
Approaching models of nursing from a postmodernist perspective.
Lister, P
1991-02-01
This paper explores some questions about the use of models of nursing. These questions make various assumptions about the nature of models of nursing, in general and in particular. Underlying these assumptions are various philosophical positions which are explored through an introduction to postmodernist approaches in philosophical criticism. To illustrate these approaches, a critique of the Roper et al. model is developed, and more general attitudes towards models of nursing are examined. It is suggested that postmodernism offers a challenge to many of the assumptions implicit in models of nursing, and that a greater awareness of these assumptions should lead to nursing care being better informed where such models are in use.
Manufacturing Excellence Approach to Business Performance Model
Directory of Open Access Journals (Sweden)
Jesus Cruz Alvarez
2015-03-01
Full Text Available Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools for enhancing productivity in organizations. Some research outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and propose a manufacturing excellence approach that links productivity tools to the broader context of business performance.
A Bayesian Model Committee Approach to Forecasting Global Solar Radiation
Lauret, Philippe; Muselli, Marc; David, Mathieu; Diagne, Hadja; Voyant, Cyril
2012-01-01
This paper proposes a rather new modelling approach in the realm of solar radiation forecasting. In this work, two forecasting models, an Autoregressive Moving Average (ARMA) model and a Neural Network (NN) model, are combined to form a model committee. Bayesian inference is used to assign a probability to each model in the committee. Hence, each model's predictions are weighted by its respective probability. The models are fitted to one year of hourly Global Horizontal Irradiance (GHI) measurements. Another year (the test set) is used for making genuine one hour ahead (h+1) out-of-sample forecast comparisons. The proposed approach is benchmarked against the persistence model. First results show an improvement brought by this approach.
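The committee idea can be sketched in a few lines. The snippet below is a generic illustration, not the paper's exact inference scheme: it assumes zero-mean Gaussian forecast errors and a uniform prior over models, so each model's posterior probability reduces to its normalized likelihood on past squared errors. All names (`committee_forecast`, the `arma`/`nn` labels) are hypothetical.

```python
import numpy as np

def committee_forecast(predictions, errors):
    """Combine forecasts from several models with Bayesian-style weights.

    predictions: dict model_name -> forecast for the next hour
    errors: dict model_name -> sequence of past squared forecast errors

    Assuming zero-mean Gaussian errors and a uniform model prior, each
    weight is the normalized likelihood of the model's past errors.
    """
    names = sorted(predictions)
    log_liks = []
    for name in names:
        e = np.asarray(errors[name], dtype=float)
        var = e.mean() + 1e-12          # MLE of the error variance
        ll = -0.5 * len(e) * np.log(2 * np.pi * var) - e.sum() / (2 * var)
        log_liks.append(ll)
    log_liks = np.array(log_liks)
    # Softmax of log-likelihoods -> posterior model probabilities
    w = np.exp(log_liks - log_liks.max())
    w /= w.sum()
    forecast = sum(wi * predictions[n] for wi, n in zip(w, names))
    return forecast, dict(zip(names, w))
```

The committee forecast is then a convex combination of the member forecasts, dominated by whichever model has the smaller recent errors.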
MDA-based approach for UML Models Complete Comparison
Chaouni, Samia Benabdellah; Mouline, Salma
2011-01-01
If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step and particularly in the comparison phase of the integration. This issue has been discussed in several domains and for various models. However, previous approaches have not correctly handled semantic comparison. In the current paper, we provide an MDA-based approach for model comparison which aims at comparing UML models. We develop a hybrid approach which takes into account syntactic, semantic and structural comparison aspects. For this purpose, we use the domain ontology as well as other resources such as dictionaries. We propose a decision support system which permits the user to validate (or not) correspondences extracted in the comparison phase. For implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models to the correspondence model.
A consortium approach to glass furnace modeling.
Energy Technology Data Exchange (ETDEWEB)
Chang, S.-L.; Golchert, B.; Petrick, M.
1999-04-20
Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.
Mixture modeling approach to flow cytometry data.
Boedigheimer, Michael J; Ferbas, John
2008-05-01
Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.
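The multivariate non-gating procedure itself is not spelled out in the abstract; as a generic illustration of the mixture-modeling idea behind such methods, the sketch below fits a two-component 1D Gaussian mixture by EM, replacing a hard operator-drawn gate with soft posterior assignment of each event to a subpopulation. The function name and initialisation are assumptions, not the authors' algorithm.

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Fit a two-component 1D Gaussian mixture by EM.

    Returns (weights, means, variances, responsibilities); the
    responsibilities give each event a posterior probability of
    belonging to either subpopulation, instead of a hard gate.
    """
    x = np.asarray(x, dtype=float)
    # Crude initialisation from the data quartiles
    mu = np.percentile(x, [25, 75]).astype(float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per event
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return pi, mu, var, resp
```

In a real cytometry setting the same EM machinery runs over all collected fluorescence and scatter channels at once, which is what removes the analyst-to-analyst gating variability.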
BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ
National Research Council Canada - National Science Library
Wicaksono, Achmad Arief; Syarief, Rizal; Suparno, Ono
2017-01-01
.... This study aims to identify company's business model using Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development...
Modeling quasi-static poroelastic propagation using an asymptotic approach
Energy Technology Data Exchange (ETDEWEB)
Vasco, D.W.
2007-11-01
solution. Unfortunately, analytic solutions are only available for highly idealized conditions, such as a uniform (Rudnicki, 1986) or one-dimensional (Simon, Zienkiewicz & Paul, 1984; Gajo & Mongiovi, 1995; Wang & Kumpel, 2003) medium. In this paper I derive an asymptotic, semi-analytic solution for coupled deformation and flow. The approach is similar to trajectory- or ray-based methods used to model elastic and electromagnetic wave propagation (Aki & Richards, 1980; Kline & Kay, 1979; Kravtsov & Orlov, 1990; Keller & Lewis, 1995) and, more recently, diffusive propagation (Virieux, Flores-Luna & Gibert, 1994; Vasco, Karasaki & Keers, 2000; Shapiro, Rothert, Rath & Rindschwentner, 2002; Vasco, 2007). The asymptotic solution is valid in the presence of smoothly-varying, heterogeneous flow properties. The situation I am modeling is that of a formation with heterogeneous flow properties and uniform mechanical properties. The boundaries of the layer may vary arbitrarily and can define discontinuities in both flow and mechanical properties. Thus, using the techniques presented here, it is possible to model a stack of irregular layers with differing mechanical properties. Within each layer the hydraulic conductivity and porosity can vary smoothly but with an arbitrarily large magnitude. The advantages of this approach are that it produces explicit, semi-analytic expressions for the arrival time and amplitude of the Biot slow and fast waves, expressions which are valid in a medium with heterogeneous properties. As shown here, the semi-analytic expressions provide insight into the nature of pressure and deformation signals recorded at an observation point. Finally, the technique requires considerably fewer computer resources than does a fully numerical treatment.
Nonperturbative approach to the modified statistical model
Energy Technology Data Exchange (ETDEWEB)
Magdy, M.A.; Bekmezci, A.; Sever, R. [Middle East Technical Univ., Ankara (Turkey)
1993-12-01
The modified form of the statistical model is used without making any perturbation. The mass spectra of the lowest S, P and D levels of the $Q\bar{Q}$ and the non-self-conjugate $Q\bar{q}$ mesons are studied with the Song-Lin potential. The authors' results are in good agreement with the experimental and theoretical findings.
System Behavior Models: A Survey of Approaches
2016-06-01
Mandana Vaziri, and Frank Tip. 2007. “Finding Bugs Efficiently with a SAT Solver.” In European Software Engineering Conference and the ACM SIGSOFT...Van Gorp. 2005. “A Taxonomy of Model Transformation.” Electronic Notes in Theoretical Computer Science 152: 125–142. Miyazawa, Alvaro, and Ana
A moving approach for the Vector Hysteron Model
Energy Technology Data Exchange (ETDEWEB)
Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)
2016-04-01
A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both the scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. Using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, offering a real improvement with respect to the previous approach.
Integration models: multicultural and liberal approaches confronted
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, an upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition, and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.
Optimising GPR modelling: A practical, multi-threaded approach to 3D FDTD numerical modelling
Millington, T. M.; Cassidy, N. J.
2010-09-01
The demand for advanced interpretational tools has led to the development of highly sophisticated, computationally demanding, 3D GPR processing and modelling techniques. Many of these methods solve very large problems with stepwise methods that utilise numerically similar functions within iterative computational loops. Problems of this nature are readily parallelised by splitting the computational domain into smaller, independent chunks for direct use on cluster-style, multi-processor supercomputers. Unfortunately, the implications of running such facilities, as well as the time investment needed to develop the parallel codes, mean that for most researchers the use of these advanced methods is impractical. In this paper, we propose an alternative method of parallelisation which exploits the capabilities of modern multi-core processors (upon which today's desktop PCs are built) by multi-threading the calculation of a problem's individual sub-solutions. To illustrate the approach, we have applied it to an advanced, 3D, finite-difference time-domain (FDTD) GPR modelling tool in which the calculation of the individual vector field components is multi-threaded. To be of practical use, the FDTD scheme must be able to deliver accurate results with short execution times and we, therefore, show that the performance benefits of our approach can deliver runtimes less than half those of the more conventional, serial programming techniques. We evaluate implementations of the technique using different programming languages (e.g., Matlab, Java, C++), which will facilitate the construction of a flexible modelling tool for use in future GPR research. The implementations are compared on a variety of typical hardware platforms, having between one and eight processing cores available, and also a modern Graphical Processing Unit (GPU)-based computer. Our results show that a multi-threaded xyz modelling approach is easy to implement and delivers excellent results when implemented
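The domain-splitting idea behind the multi-threaded update can be sketched in one dimension. This is an illustrative reduction, not the authors' 3D code: each time step, the E- and H-field update loops are chunked over the grid and dispatched to a thread pool, with the pool acting as a barrier between the two half-steps so no thread reads a value another thread is writing. (CPython threads only yield real speedups when the per-chunk work releases the GIL, e.g. large array slices or native kernels, as in the Java/C++ implementations the paper compares.)

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fdtd_1d(n_cells=200, n_steps=100, n_threads=4):
    """1D FDTD (Ez/Hy) update with the spatial loop split across threads.

    The index chunks are disjoint, so the threaded writes never overlap;
    the blocking pool.map calls act as barriers between H and E updates.
    """
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    chunks = np.array_split(np.arange(1, n_cells), n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        for t in range(n_steps):
            def update_h(idx):
                hy[idx - 1] += 0.5 * (ez[idx] - ez[idx - 1])
            list(pool.map(update_h, chunks))   # barrier: H fully updated
            def update_e(idx):
                ez[idx] += 0.5 * (hy[idx] - hy[idx - 1])
            list(pool.map(update_e, chunks))   # barrier: E fully updated
            ez[n_cells // 2] += np.exp(-((t - 30) ** 2) / 100.0)  # soft source
    return ez
```

Because the chunked updates touch disjoint indices and are separated by barriers, the threaded result is bit-for-bit identical to a serial sweep, which is the correctness property any such parallelisation must preserve.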
Directory of Open Access Journals (Sweden)
Lynch John
2006-08-01
Full Text Available Abstract Background There are at least three broad conceptual models for the impact of the social environment on adult disease: the critical period, social mobility, and cumulative life course models. Several studies have shown an association between each of these models and mortality. However, few studies have investigated the importance of the different models within the same setting and none has been performed in samples of the whole population. The purpose of the present study was to study the relation between socioeconomic position (SEP) and mortality using different conceptual models in the whole population of Scania. Methods In the present investigation we use socioeconomic information on all men (N = 48,909) and women (N = 47,688) born between 1945 and 1950, alive on January 1st, 1990, and living in the Region of Scania, in Sweden. Focusing on three specific life periods (i.e., ages 10–15, 30–35 and 40–45), we examined the association between SEP and the 12-year risk of premature cardiovascular mortality and all-cause mortality. Results There was a strong relation between SEP and mortality among those inside the workforce, irrespective of the conceptual model used. There was a clear upward trend in the mortality hazard rate ratios (HRR) with accumulated exposure to manual SEP in both men (p for trend Conclusion There was a strong relation between SEP and cardiovascular and all-cause mortality, irrespective of the conceptual model used. The critical period, social mobility, and cumulative life course models showed the same fit to the data. That is, one model could not be pointed out as "the best" model and even in this large unselected sample it was not possible to adjudicate which theories best describe the links between life course SEP and mortality risk.
ISM Approach to Model Offshore Outsourcing Risks
Directory of Open Access Journals (Sweden)
Sunand Kumar
2014-07-01
Full Text Available In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extensive review of the literature. From this information, an integrated model using interpretive structural modelling (ISM) for risks affecting offshore outsourcing is developed and the structural relationships between these risks are modeled. Further, MICMAC analysis is done to analyze the driving power and dependence of the risks, which helps managers identify and classify important criteria and reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
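The mechanical core of ISM/MICMAC is small enough to sketch. Given a binary influence matrix among risks, one computes the transitive closure (the final reachability matrix) and then the MICMAC quantities: driving power (row sums) and dependence (column sums), which are plotted to classify factors as drivers, dependents, linkages or autonomous. The sketch below is a generic illustration with a hypothetical three-risk example, not the paper's actual matrix.

```python
import numpy as np

def micmac(adjacency):
    """ISM/MICMAC helper: transitive closure plus driving power/dependence.

    adjacency: 0/1 matrix where adjacency[i][j] = 1 means risk i
    influences risk j. Returns the final reachability matrix, driving
    power (row sums) and dependence (column sums).
    """
    r = np.array(adjacency, dtype=bool)
    np.fill_diagonal(r, True)            # each factor reaches itself
    n = len(r)
    for k in range(n):                   # Warshall transitive closure
        r = r | (r[:, k:k + 1] & r[k:k + 1, :])
    r = r.astype(int)
    driving = r.sum(axis=1)
    dependence = r.sum(axis=0)
    return r, driving, dependence
```

In the MICMAC plot, factors with high driving power and low dependence (like the political and cultural risks the paper identifies) sit in the "driver" quadrant.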
Quantum Machine and SR Approach: a Unified Model
Garola, C; Sozzo, S; Garola, Claudio; Pykacz, Jaroslav; Sozzo, Sandro
2005-01-01
The Geneva-Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive supports to general reasoning and as proofs of consistency of general assumptions, and show that Aerts' quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.
Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach
Energy Technology Data Exchange (ETDEWEB)
Liao, James C. [Univ. of California, Los Angeles, CA (United States)
2016-10-01
Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.
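The defining move of ensemble modeling is to sample many kinetic parameter sets that all reproduce the same anchoring steady state, so that the ensemble diverges only under perturbation. The toy sketch below illustrates this for a hypothetical two-step pathway S -> X -> P (all names and rate laws are assumptions for illustration, not the project's models): the Michaelis constant of the second step is sampled freely, and the remaining parameters are back-solved so every model carries the same reference flux at the same intermediate concentration.

```python
import numpy as np

def sample_ensemble(v_ss=1.0, x_ss=2.0, n_models=50, seed=1):
    """Sample rate constants for a toy pathway S -> X -> P such that
    every parameter set reproduces the same anchoring steady state
    (flux v_ss, intermediate concentration x_ss): parameters differ
    across the ensemble, the reference state does not."""
    rng = np.random.default_rng(seed)
    s = 1.0                               # fixed substrate concentration
    models = []
    for _ in range(n_models):
        km2 = rng.uniform(0.1, 10.0)      # sampled saturation behaviour
        vmax2 = v_ss * (km2 + x_ss) / x_ss  # back-solved: v2(x_ss) == v_ss
        k1 = v_ss / s                     # first step carries the same flux
        models.append((k1, vmax2, km2))
    return models

def dxdt(x, k1, vmax2, km2, s=1.0):
    """Mass balance of the intermediate X: production minus consumption."""
    return k1 * s - vmax2 * x / (km2 + x)
```

Integrating the ensemble forward after, say, doubling `vmax2` (an enzyme overexpression) produces a spread of responses; screening those responses against data is what prunes the ensemble toward actionable models.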
A modular approach to numerical human body modeling
Forbes, P.A.; Griotto, G.; Rooij, L. van
2007-01-01
The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body model.
A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING
ANTOULAS, AC; WILLEMS, JC
1993-01-01
The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both re
A market model for stochastic smile: a conditional density approach
Zilber, A.
2005-01-01
The purpose of this paper is to introduce a new approach that makes it possible to construct no-arbitrage market models for implied volatility surfaces (in other words, stochastic smile models). That is to say, the idea presented here allows us to model prices of liquidly traded vanilla options as separate
Directory of Open Access Journals (Sweden)
Abu Bakar Hassan
2017-01-01
Full Text Available Taking into consideration the diverse cultural norms in the Malaysian workplace, the proposed model explores Malaysian culture and identity against a backdrop of the pushes and pulls of ethnic diversity in Malaysia. The model seeks to understand relational norm congruence among the multiethnic groups in Malaysia, which will enable us to identify Malaysian culture and identity. This is in line with recent calls by various interest groups in Malaysia to focus more on model designs that capture the contextual and cultural factors that influence Malaysian culture and identity.
Directory of Open Access Journals (Sweden)
Rauch Ł.
2015-09-01
Full Text Available The coupled finite element multiscale simulations (FE2) require costly numerical procedures in both macro and micro scales. Attempts to improve numerical efficiency are focused mainly on two areas of development, i.e. parallelization/distribution of numerical procedures and simplification of virtual material representation. One of the representatives of both mentioned areas is the idea of the Statistically Similar Representative Volume Element (SSRVE). It aims at the reduction of the number of finite elements in micro scale as well as at parallelization of the calculations in micro scale which can be performed without barriers. The simplification of the computational domain is realized by transformation of sophisticated images of material microstructure into artificially created simple objects being characterized by similar features as their original equivalents. In existing solutions for two-phase steels SSRVE is created on the basis of the analysis of shape coefficients of the hard phase in the real microstructure and searching for a representative simple structure with similar shape coefficients. Optimization techniques were used to solve this task. In the present paper local strains and stresses are added to the cost function in optimization. Various forms of the objective function composed of different elements were investigated and used in the optimization procedure for the creation of the final SSRVE. The results are compared as far as the efficiency of the procedure and uniqueness of the solution are considered. The best objective function composed of shape coefficients, as well as of strains and stresses, was proposed. Examples of SSRVEs determined for the investigated two-phase steel using that objective function are demonstrated in the paper. Each step of SSRVE creation is investigated from a computational efficiency point of view. The proposition of implementation of the whole computational procedure on modern High Performance Computing (HPC)
Tensor renormalization group approach to two-dimensional classical lattice models.
Levin, Michael; Nave, Cody P
2007-09-21
We describe a simple real space renormalization group technique for two-dimensional classical lattice models. The approach is similar in spirit to block spin methods, but at the same time it is fundamentally based on the theory of quantum entanglement. In this sense, the technique can be thought of as a classical analogue of the density matrix renormalization group method. We demonstrate the method - which we call the tensor renormalization group method - by computing the magnetization of the triangular lattice Ising model.
Embedded System Construction: Evaluation of a Model-Driven and Component-Based Development Approach
Bunse, C.; Gross, H.G.; Peper, C. (Claudia)
2008-01-01
Preprint of paper published in: Models in Software Engineering, Lecture Notes in Computer Science 5421, 2009; doi:10.1007/978-3-642-01648-6_8 Model-driven development has become an important engineering paradigm. It is said to have many advantages over traditional approaches, such as reuse or quality improvement, also for embedded systems. Along a similar line of argumentation, component-based software engineering is advocated. In order to investigate these claims, the MARMOT method was appli...
Thermoplasmonics modeling: A Green's function approach
Baffou, Guillaume; Quidant, Romain; Girard, Christian
2010-10-01
We extend the discrete dipole approximation (DDA) and the Green’s dyadic tensor (GDT) methods (previously dedicated to all-optical simulations) to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green’s function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigate arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed, such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution, and heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.
Agribusiness model approach to territorial food development
Directory of Open Access Journals (Sweden)
Murcia Hector Horacio
2011-04-01
Full Text Available
Several research efforts have been coordinated through the academic program of Agricultural Business Management at the University De La Salle (Bogotá D.C.) toward the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered a process that aims to improve the current capacity and potential of the inhabitants of the sector, which concerns not only production levels and the productivity of agricultural items. It takes into account the guidelines of the United Nations “Millennium Development Goals” and considers the concept of sustainable food and agriculture development, including food security and nutrition, in an integrated interdisciplinary context with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and the methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the “new rurality”.
Coupling approaches used in atmospheric entry models
Gritsevich, M. I.
2012-09-01
While a planet orbits the Sun, it is subject to impact by smaller objects, ranging from tiny dust particles and space debris to much larger asteroids and comets. Such collisions have taken place frequently over geological time and played an important role in the evolution of planets and the development of life on the Earth. Though the search for near-Earth objects addresses one of the main points of the Asteroid and Comet Hazard, one should not underestimate the useful information to be gleaned from smaller atmospheric encounters, known as meteors or fireballs. Not only do these events help determine the linkages between meteorites and their parent bodies; due to their relative regularity they provide a good statistical basis for analysis. For successful cases with recovered meteorites, the detailed atmospheric path record is an excellent tool to test and improve existing entry models, assuring the robustness of their implementation. There are many more important scientific questions meteoroids help us to answer, among them: Where do these objects come from; what are their origins, physical properties and chemical composition? What are the shapes and bulk densities of the space objects which fully ablate in an atmosphere and do not reach the planetary surface? Which values are directly measured and which are initially assumed as input to various models? How can both fragmentation and ablation effects be coupled in the model, taking the real size distribution of fragments into account? How can the recovery of recently fallen meteorites be specified and sped up, without letting weathering affect the samples too much? How big is the ratio of the pre-atmospheric projectile to the terminal body in terms of mass/volume? Which exact parameters besides initial mass define this ratio? More generally, how does an entering object affect the Earth's atmosphere and (if applicable) the Earth's surface? How can these impact consequences be predicted based on atmospheric trajectory data? How to describe atmospheric entry
Applied Regression Modeling A Business Approach
Pardoe, Iain
2012-01-01
An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a
Bayesian Approach to Neuro-Rough Models for Modelling HIV
Marwala, Tshilidzi
2007-01-01
This paper proposes a new neuro-rough model for modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Markov Chain Monte Carlo method and the Metropolis criterion. When the model was tested to estimate the risk of HIV infection given the demographic data, it was found to give an accuracy of 62%, as opposed to 58% obtained from a Bayesian formulated rough set model trained using the Markov chain Monte Carlo method and 62% obtained from a Bayesian formulated multi-layered perceptron (MLP) model trained using hybrid Monte Carlo. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model.
Unveiling Music Structure Via PLSA Similarity Fusion
DEFF Research Database (Denmark)
Arenas-García, Jerónimo; Meng, Anders; Petersen, Kaare Brandt
2007-01-01
observed similarities can be satisfactorily explained using the latent semantics. Additionally, this approach significantly simplifies the song retrieval phase, leading to a more practical system implementation. The suitability of the PLSA model for representing music structure is studied in a simplified...
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can...... be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate...... the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is of the order of 10⁶ times shorter than the original highly...
Implicit moral evaluations: A multinomial modeling approach.
Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael
2017-01-01
Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Continuous Molecular Fields Approach Applied to Structure-Activity Modeling
Baskin, Igor I
2013-01-01
The Method of Continuous Molecular Fields is a universal approach to predict various properties of chemical compounds, in which molecules are represented by means of continuous fields (such as electrostatic, steric, electron density functions, etc). The essence of the proposed approach consists in performing statistical analysis of functional molecular data by means of joint application of kernel machine learning methods and special kernels which compare molecules by computing overlap integrals of their molecular fields. This approach is an alternative to traditional methods of building 3D structure-activity and structure-property models based on the use of fixed sets of molecular descriptors. The methodology of the approach is described in this chapter, followed by its application to building regression 3D-QSAR models and conducting virtual screening based on one-class classification models. The main directions of the further development of this approach are outlined at the end of the chapter.
A forward modeling approach for interpreting impeller flow logs.
Parker, Alison H; West, L Jared; Odling, Noelle E; Bown, Richard T
2010-01-01
A rigorous and practical approach for interpretation of impeller flow log data to determine vertical variations in hydraulic conductivity is presented and applied to two well logs from a Chalk aquifer in England. Impeller flow logging involves measuring vertical flow speed in a pumped well and using changes in flow with depth to infer the locations and magnitudes of inflows into the well. However, the measured flow logs are typically noisy, which leads to spurious hydraulic conductivity values where simplistic interpretation approaches are applied. In this study, a new method for interpretation is presented, which first defines a series of physical models for hydraulic conductivity variation with depth and then fits the models to the data, using a regression technique. Some of the models will be rejected as they are physically unrealistic. The best model is then selected from the remaining models using a maximum likelihood approach. This balances model complexity against fit, for example, using Akaike's Information Criterion.
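The interpretation strategy above, fitting a family of candidate models and selecting among them with Akaike's Information Criterion, can be sketched on synthetic data. The snippet is a generic stand-in (polynomial models in place of hydraulic-conductivity profiles, and no physical-plausibility rejection step); all names and values are assumptions.

```python
import numpy as np

def aic(n, rss, k):
    """Akaike Information Criterion for a Gaussian error model (up to constants)."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
depth = np.linspace(0.0, 3.0, 30)
# Synthetic "flow log": a linear trend plus noise stands in for measured flow speed
flow = 1.0 + 2.0 * depth + rng.normal(0.0, 0.1, depth.size)

scores = {}
for degree in range(4):                      # candidate models of growing complexity
    coef = np.polyfit(depth, flow, degree)   # least-squares fit
    rss = float(np.sum((np.polyval(coef, depth) - flow) ** 2))
    scores[degree] = aic(depth.size, rss, degree + 1)

best = min(scores, key=scores.get)           # balances goodness of fit against complexity
```

The constant model (degree 0) is heavily penalized by its residual sum of squares, while higher-degree models gain little fit for their extra parameters, which is exactly the trade-off AIC formalizes.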
An Adaptive Approach to Schema Classification for Data Warehouse Modeling
Institute of Scientific and Technical Information of China (English)
Hong-Ding Wang; Yun-Hai Tong; Shao-Hua Tan; Shi-Wei Tang; Dong-Qing Yang; Guo-Hui Sun
2007-01-01
Data warehouse (DW) modeling is a complicated task, involving both knowledge of business processes and familiarity with the structure and behavior of operational information systems. Existing DW modeling techniques suffer from the following major drawbacks: the data-driven approach requires high levels of expertise and neglects the requirements of end users, while the demand-driven approach lacks enterprise-wide vision and disregards existing models of the underlying operational systems. In order to make up for those shortcomings, a method of classification of schema elements for DW modeling is proposed in this paper. We first put forward the vector space models for subjects and schema elements, then present an adaptive approach with self-tuning theory to construct context vectors of subjects, and finally classify the source schema elements into different subjects of the DW automatically. Benefiting from the result of the schema element classification, designers can model and construct a DW more easily.
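The final classification step, assigning each schema element to the subject whose context vector it most resembles, can be sketched with cosine similarity. All vectors and names below are hypothetical; the paper's self-tuning construction of the context vectors is not reproduced.

```python
import math

def cosine(u, v):
    """Cosine similarity of two term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical term weights over a tiny vocabulary:
# ["customer", "order", "product", "warehouse"]
subject_vectors = {
    "sales":     [0.9, 0.8, 0.3, 0.0],
    "inventory": [0.0, 0.2, 0.7, 0.9],
}
schema_elements = {
    "tbl_customer_orders": [1.0, 1.0, 0.2, 0.0],
    "tbl_stock_levels":    [0.0, 0.1, 0.8, 1.0],
}

# Assign each source schema element to its most similar subject
assignment = {
    name: max(subject_vectors, key=lambda s: cosine(vec, subject_vectors[s]))
    for name, vec in schema_elements.items()
}
```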
A Networks Approach to Modeling Enzymatic Reactions.
Imhof, P
2016-01-01
Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex, chemical, and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes.
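A transition network of this kind can be searched for the energetically most favorable pathway. The sketch below assumes edge weights are activation barriers and takes "most favorable" to mean minimizing the highest single barrier along the path (the rate-limiting step); the toy network and its energies are invented for illustration.

```python
import heapq

# Hypothetical transition network: nodes are intermediate states, edge weights
# are activation barriers (e.g. kcal/mol) of the chemical step connecting them.
edges = {
    "reactant": [("I1", 12.0), ("I2", 18.0)],
    "I1":       [("I3", 15.0), ("product", 22.0)],
    "I2":       [("product", 8.0)],
    "I3":       [("product", 6.0)],
    "product":  [],
}

def best_pathway(graph, start, goal):
    """Path whose highest single barrier (rate-limiting step) is minimal,
    found with a minimax variant of Dijkstra's algorithm."""
    heap = [(0.0, start, [start])]
    seen = {}
    while heap:
        worst, node, path = heapq.heappop(heap)
        if node == goal:
            return worst, path
        if seen.get(node, float("inf")) <= worst:
            continue                      # already reached with a lower bottleneck
        seen[node] = worst
        for nxt, barrier in graph[node]:
            heapq.heappush(heap, (max(worst, barrier), nxt, path + [nxt]))
    return None

worst, path = best_pathway(edges, "reactant", "product")
```

Here the direct route over I1 is rejected because of its 22-unit final step; the graph structure makes such comparisons a simple search problem, which is the point of the transition-network representation.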
Genetic Algorithm Approaches to Prebiotic Chemistry Modeling
Lohn, Jason; Colombano, Silvano
1997-01-01
We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can then be analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.
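A minimal version of this kind of search might evolve a bitstring selecting a subset of candidate reactions so that a simple steady-state property matches a target. This is a hedged stand-in for the authors' artificial chemistry: the rates, the fitness function, and the GA settings below are all illustrative assumptions.

```python
import random

random.seed(1)
N_RX = 12
prod  = [random.uniform(0.1, 1.0) for _ in range(N_RX)]   # production rate of reaction i
decay = [random.uniform(0.1, 1.0) for _ in range(N_RX)]   # decay rate of reaction i
TARGET = 2.0                                              # desired steady-state level

def steady_state(mask):
    # For dx/dt = sum(p_i) - sum(d_i) * x over the selected reactions, x* = P / D
    p = sum(prod[i]  for i in range(N_RX) if mask[i])
    d = sum(decay[i] for i in range(N_RX) if mask[i])
    return p / d if d > 0 else 0.0

def fitness(mask):
    return -abs(steady_state(mask) - TARGET)

def evolve(pop_size=60, generations=80, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_RX)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            mom = max(random.sample(pop, 3), key=fitness)  # tournament selection
            dad = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, N_RX)                # one-point crossover
            child = mom[:cut] + dad[cut:]
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
```

The structure (encode a reaction set as a genome, score it by its dynamical behavior, evolve) mirrors the abstract's technique, even though the fitness here is a toy steady state rather than a simulated protocell.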
Hamiltonian approach to hybrid plasma models
Tronci, Cesare
2010-01-01
The Hamiltonian structures of several hybrid kinetic-fluid models are identified explicitly, upon considering collisionless Vlasov dynamics for the hot particles interacting with a bulk fluid. After presenting different pressure-coupling schemes for an ordinary fluid interacting with a hot gas, the paper extends the treatment to account for a fluid plasma interacting with an energetic ion species. Both current-coupling and pressure-coupling MHD schemes are treated extensively. In particular, pressure-coupling schemes are shown to require a transport-like term in the Vlasov kinetic equation, in order for the Hamiltonian structure to be preserved. The last part of the paper is devoted to studying the more general case of an energetic ion species interacting with a neutralizing electron background (hybrid Hall-MHD). Circulation laws and Casimir functionals are presented explicitly in each case.
Modeling of phase equilibria with CPA using the homomorph approach
DEFF Research Database (Denmark)
Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios
2011-01-01
For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...
A Constructive Neural-Network Approach to Modeling Psychological Development
Shultz, Thomas R.
2012-01-01
This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…
Modular Modelling and Simulation Approach - Applied to Refrigeration Systems
DEFF Research Database (Denmark)
Sørensen, Kresten Kjær; Stoustrup, Jakob
2008-01-01
This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...
Pattern-based approach for logical traffic isolation forensic modelling
CSIR Research Space (South Africa)
Dlamini, I
2009-08-01
Full Text Available The use of design patterns usually changes the approach of software design and makes software development relatively easy. This paper extends work on a forensic model for Logical Traffic Isolation (LTI) based on Differentiated Services (Diff...
A semantic-web approach for modeling computing infrastructures
M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat
2013-01-01
This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology independent descriptions of computing infrastructures, including the physical resources as w
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
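The contrast drawn here can be illustrated on a toy dose-response model of DCS risk: maximum likelihood yields a single point estimate, while a grid-approximated posterior yields a credible interval, a direct probability statement about the parameter. The logistic model and all values below are assumptions, not the authors' decompression model.

```python
import numpy as np

rng = np.random.default_rng(42)
dose = np.linspace(0.0, 4.0, 200)                 # exposure severity (arbitrary units)
true_b = 1.5
p_true = 1.0 / (1.0 + np.exp(-(dose * true_b - 3.0)))   # logistic dose-response
hits = rng.random(dose.size) < p_true             # simulated binary DCS outcomes

b_grid = np.linspace(0.1, 4.0, 400)
log_lik = np.empty_like(b_grid)
for i, b in enumerate(b_grid):
    p = np.clip(1.0 / (1.0 + np.exp(-(dose * b - 3.0))), 1e-12, 1 - 1e-12)
    log_lik[i] = np.sum(np.where(hits, np.log(p), np.log(1 - p)))

b_mle = b_grid[np.argmax(log_lik)]                # maximum likelihood: one point

post = np.exp(log_lik - log_lik.max())            # flat prior => posterior ∝ likelihood
post /= post.sum()
cdf = np.cumsum(post)
lo = b_grid[np.searchsorted(cdf, 0.025)]          # 95% credible interval: a statement
hi = b_grid[np.searchsorted(cdf, 0.975)]          # about where the parameter lies
```

With a sharply peaked likelihood the two answers agree; the Bayesian interval becomes the more informative summary precisely when the likelihood is broad or multi-peaked, as the abstract notes.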
Directory of Open Access Journals (Sweden)
Tobias Hacker
2012-04-01
Full Text Available The integral boundary layer system (IBL with spatially periodic coefficients arises as a long wave approximation for the flow of a viscous incompressible fluid down a wavy inclined plane. The Nusselt-like stationary solution of the IBL is linearly at best marginally stable; i.e., it has essential spectrum at least up to the imaginary axis. Nevertheless, in this stable case we show that localized perturbations of the ground state decay in a self-similar way. The proof uses the renormalization group method in Bloch variables and the fact that in the stable case the Burgers equation is the amplitude equation for long waves of small amplitude in the IBL. It is the first time that such a proof is given for a quasilinear PDE with spatially periodic coefficients.
Large mass self-similar solutions of the parabolic-parabolic Keller-Segel model of chemotaxis.
Biler, Piotr; Corrias, Lucilla; Dolbeault, Jean
2011-07-01
In two space dimensions, the parabolic-parabolic Keller-Segel system shares many properties with the parabolic-elliptic Keller-Segel system. In particular, solutions globally exist in both cases as long as their mass is less than a critical threshold M(c). However, this threshold is not as clear in the parabolic-parabolic case as it is in the parabolic-elliptic case, in which solutions with mass above M(c) always blow up. Here we study forward self-similar solutions of the parabolic-parabolic Keller-Segel system and prove that, in some cases, such solutions globally exist even if their total mass is above M(c), which is forbidden in the parabolic-elliptic case.
Modelling road accidents: An approach using structural time series
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
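The local level component at the heart of such structural time series models can be sketched with a scalar Kalman filter, whose log-likelihood then feeds an AIC comparison. This is a generic illustration on synthetic data, not the paper's road-accident model; the seasonal component is omitted.

```python
import numpy as np

def local_level_loglik(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter log-likelihood for the local level model
    y_t = mu_t + eps_t,  mu_{t+1} = mu_t + eta_t  (near-diffuse initialization)."""
    a, p, ll = a0, p0, 0.0
    for t, obs in enumerate(y):
        f = p + sigma_eps2                   # one-step prediction error variance
        v = obs - a                          # one-step prediction error
        if t > 0:                            # skip the diffuse first step
            ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                            # Kalman gain
        a = a + k * v                        # filtered level
        p = p * (1 - k) + sigma_eta2         # predicted state variance
    return ll

rng = np.random.default_rng(3)
mu = np.cumsum(rng.normal(0, 0.5, 120))      # random-walk level (sigma_eta^2 = 0.25)
y = mu + rng.normal(0, 1.0, 120)             # noisy observations (sigma_eps^2 = 1.0)

# Crude grid search over the two variances; AIC = 2k - 2*loglik with k = 2
grid = [0.25, 0.5, 1.0, 2.0]
best = max(((se, sn, local_level_loglik(y, se, sn)) for se in grid for sn in grid),
           key=lambda t: t[2])
aic = 2 * 2 - 2 * best[2]
```

Adding a seasonal state and stepwise component selection, as in the paper, extends this same filter-and-score pattern to richer model specifications.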
Functional state modelling approach validation for yeast and bacteria cultivations
Roeva, Olympia; Pencheva, Tania
2014-01-01
In this paper, the functional state modelling approach is validated for modelling of the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely primary product synthesis state, mixed oxidative state and secondary product synthesis state. Parameter identification procedures for different local models are performed using genetic algorithms. The simulation results show high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This fact is a strong structure model verification of the functional state modelling theory not only for a set of yeast cultivations, but also for bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778
An optimization approach to kinetic model reduction for combustion chemistry
Lebiedz, Dirk
2013-01-01
Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. As most model reduction methods, it identifies points on a slow invariant manifold based on time scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...
Molecular Modeling Approach to Cardiovascular Disease Targeting
Directory of Open Access Journals (Sweden)
Chandra Sekhar Akula,
2010-05-01
Full Text Available Cardiovascular disease, including stroke, is the leading cause of illness and death in India. A number of studies have shown that inflammation of blood vessels is one of the major factors that increase the incidence of heart disease, including arteriosclerosis (clogging of the arteries), stroke and myocardial infarction (heart attack). Studies have associated obesity and other components of the metabolic syndrome, which are cardiovascular risk factors, with low-grade inflammation. Furthermore, some findings suggest that drugs commonly prescribed to lower cholesterol also reduce this inflammation, suggesting an additional beneficial effect of the statins. The recent development of angiotensin II (AngII) receptor antagonists has made it possible to significantly improve the tolerability profile of this group of drugs while maintaining high clinical efficacy. ACE2 is expressed predominantly in the endothelium and in renal tubular epithelium, and it thus may be an important new cardiovascular target. In the present study we modeled the structure of ACE and designed an inhibitor using ARGUS lab, and the drug molecule was validated on the basis of its QSAR properties and with Cache for this protein through CADD.
Virtuous organization: A structural equation modeling approach
Directory of Open Access Journals (Sweden)
Majid Zamahani
2013-02-01
Full Text Available For years, the idea of virtue was out of favor among researchers: virtues were traditionally considered culture-specific and relativistic, and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have recently been taken seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resources, structure and processes, care for community and the virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees at Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results reveal that all five variables have positive and significant impacts on the virtuous organization. Among the five variables, organizational culture has the largest direct impact (0.80) and human resources have the largest total impact (0.844) on the virtuous organization.
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T
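The book's model-comparison logic, testing a compact model against an augmented one through the proportional reduction in error (PRE) and an F statistic, can be sketched as follows on synthetic data; the variable names and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 40
x = rng.normal(size=n)
y = 3.0 + 0.9 * x + rng.normal(0, 1.0, n)     # true model includes the predictor

# Compact model C:   y = b0           (1 parameter)
sse_c = float(np.sum((y - y.mean()) ** 2))

# Augmented model A: y = b0 + b1*x   (2 parameters)
b1, b0 = np.polyfit(x, y, 1)
sse_a = float(np.sum((y - (b0 + b1 * x)) ** 2))

# Proportional reduction in error from adding the predictor
pre = (sse_c - sse_a) / sse_c

# F statistic with (pa - pc, n - pa) degrees of freedom, here (1, n - 2)
f_stat = (pre / 1) / ((1 - pre) / (n - 2))
```

Every classical test (t-test, ANOVA, regression) can be cast as such a compact-versus-augmented comparison, which is what gives the approach its unified framework.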
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equations based Bayesian approach and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences.
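The naive approach discussed here can be illustrated on the immigration-death model: simulate stochastic realizations with Gillespie's algorithm, then fit the deterministic ODE mean by least squares. The grid-search "optimizer" and all rate values are illustrative assumptions, not the paper's setup.

```python
import math
import random

def gillespie_immigration_death(k1, k2, x0, t_end, rng):
    """Exact SSA sample path for  0 -> X (rate k1),  X -> 0 (rate k2 * x)."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a1, a2 = k1, k2 * x
        t += rng.expovariate(a1 + a2)        # time to next event
        if t >= t_end:
            break
        x += 1 if rng.random() < a1 / (a1 + a2) else -1
        path.append((t, x))
    return path

def mean_ode(k1, k2, x0, t):
    # Deterministic mean: dx/dt = k1 - k2*x  =>  x(t) = k1/k2 + (x0 - k1/k2) e^{-k2 t}
    return k1 / k2 + (x0 - k1 / k2) * math.exp(-k2 * t)

rng = random.Random(7)
k1_true, k2_true = 10.0, 0.5
times = [0.5 * i for i in range(1, 21)]

# Average many stochastic realizations to build "data" for the deterministic fit
n_runs = 200
data = [0.0] * len(times)
for _ in range(n_runs):
    path = gillespie_immigration_death(k1_true, k2_true, 0, times[-1] + 1.0, rng)
    for i, tq in enumerate(times):
        # state at the last event at or before tq
        x = max((s for s in path if s[0] <= tq), key=lambda s: s[0])[1]
        data[i] += x / n_runs

# Naive least squares over a parameter grid (stand-in for a real optimizer)
best = min(((k1, k2, sum((mean_ode(k1, k2, 0, t) - d) ** 2
                         for t, d in zip(times, data)))
            for k1 in [6, 8, 10, 12] for k2 in [0.25, 0.5, 1.0]),
           key=lambda c: c[2])
```

For this linear model the ODE mean is exact, so the naive fit recovers the rates well; the paper's point is to quantify when such deterministic fitting remains adequate for genuinely nonlinear stochastic systems.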
Modelling and Generating Ajax Applications: A Model-Driven Approach
Gharavi, V.; Mesbah, A.; Van Deursen, A.
2008-01-01
Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction betw
A novel approach to modeling and diagnosing the cardiovascular system
Energy Technology Data Exchange (ETDEWEB)
Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)
1995-07-01
A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
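The residual-based diagnosis idea above, comparing an individual's measured variables with the model's predictions and flagging large differences, can be sketched with a deliberately simple stand-in predictor. The paper uses a recurrent neural network; the exponential-smoothing predictor, threshold, and data below are invented for illustration only.

```python
# Sketch of residual-based diagnosis: an exponential-smoothing
# predictor stands in for the individual's learned model.
def diagnose(signal, alpha=0.5, threshold=10.0):
    """Return indices where |measured - modeled| exceeds the threshold."""
    anomalies = []
    modeled = signal[0]
    for i, measured in enumerate(signal[1:], start=1):
        if abs(measured - modeled) > threshold:
            anomalies.append(i)
            # Anomalous samples are excluded from the model update,
            # so one spike is not mistaken for a new baseline.
        else:
            modeled = alpha * measured + (1 - alpha) * modeled
    return anomalies

# Synthetic heart-rate trace with one injected abnormal beat.
hr = [70, 71, 70, 72, 71, 120, 70, 71, 69, 70]
print(diagnose(hr))  # → [5]
```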
Lappa, Marcello
2003-09-21
The fluid-dynamic environment within typical growth reactors, as well as the interaction of such flow with the intrinsic kinetics of the growth process, are investigated within the framework of the new fields of protein crystal and tissue engineering. The paper uses available data to introduce a set of novel growth models. The surface conditions are coupled to the exchange mass flux at the specimen/culture-medium interface and lead to the introduction of a group of differential equations for the nutrient concentration around the sample and for the evolution of the construct mass displacement. These models take into account the sensitivity of the construct/liquid interface to the level of supersaturation in the case of macromolecular crystal growth and to the "direct" effect of the fluid-dynamic shear stress in the case of biological tissue growth. They are then used to show how the proposed surface kinetic laws can predict (through sophisticated numerical simulations) many of the known characteristics of protein crystals and biological tissues produced using well-known and widely used reactors. This procedure provides validation of the models and associated numerical method and at the same time gives insights into the mechanisms of the phenomena. The onset of morphological instabilities is discussed and investigated in detail. The interplay between the increasing size of the sample and the structure of the convective field established inside the reactor is analysed. It is shown that this interaction is essential in determining the time evolution of the specimen shape. Analogies between growing macromolecular crystals and growing biological tissues are pointed out in terms of behaviours and cause-and-effect relationships. These aspects lead to a common source (in terms of original mathematical models, ideas and results) made available for the scientific community under the optimistic idea that the contacts established between the "two fields of engineering" will develop into an...
Baron, Thierry; Bencsik, Anna; Biacabe, Anne-Gaëlle; Morignat, Eric; Bessen, Richard A
2007-12-01
Transmissible mink encephalopathy (TME) is a foodborne transmissible spongiform encephalopathy (TSE) of ranch-raised mink; infection with a ruminant TSE has been proposed as the cause, but the precise origin of TME is unknown. To compare the phenotypes of each TSE, a bovine-passaged TME isolate and 3 distinct natural bovine spongiform encephalopathy (BSE) agents (typical BSE, H-type BSE, and L-type BSE) were inoculated into an ovine transgenic mouse line (TgOvPrP4). Transgenic mice were susceptible to infection with bovine-passaged TME, typical BSE, and L-type BSE but not to H-type BSE. Based on survival periods, brain lesion profiles, disease-associated prion protein brain distribution, and biochemical properties of protease-resistant prion protein, typical BSE had a distinct phenotype in ovine transgenic mice compared to L-type BSE and bovine TME. The similar phenotypic properties of L-type BSE and bovine TME in TgOvPrP4 mice suggest that L-type BSE is a much more likely candidate for the origin of TME than is typical BSE.
Mathematical models for therapeutic approaches to control HIV disease transmission
Roy, Priti Kumar
2015-01-01
The book discusses different therapeutic approaches based on different mathematical models to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to an optimal control strategy with perfect drug adherence, and examines the same issue from different angles using various mathematical models and computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models on HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling, and helps them to explore a hypothetical method by examining its consequences in the form of a mathematical model and by making scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...
Asteroid modeling for testing spacecraft approach and landing.
Martin, Iain; Parkes, Steve; Dunstan, Martin; Rowell, Nick
2014-01-01
Spacecraft exploration of asteroids presents autonomous-navigation challenges that can be aided by virtual models to test and develop guidance and hazard-avoidance systems. Researchers have extended and applied graphics techniques to create high-resolution asteroid models to simulate cameras and other spacecraft sensors approaching and descending toward asteroids. A scalable model structure with evenly spaced vertices simplifies terrain modeling, avoids distortion at the poles, and enables triangle-strip definition for efficient rendering. To create the base asteroid models, this approach uses two-phase Poisson faulting and Perlin noise. It creates realistic asteroid surfaces by adding both crater models adapted from lunar terrain simulation and multiresolution boulders. The researchers evaluated the virtual asteroids by comparing them with real asteroid images, examining the slope distributions, and applying a surface-relative feature-tracking algorithm to the models.
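A one-dimensional analogue of the fault-displacement idea used for the base asteroid models can be sketched as follows. This is a hypothetical simplification: the actual models apply two-phase Poisson faulting on a spherical vertex mesh, and the vertex count, fault count, and step size here are invented.

```python
import random

def fault_profile(n_vertices=64, n_faults=200, step=1.0, seed=1):
    """1-D analogue of fault-displacement terrain: each fault picks a
    random arc of a closed ring of vertices and raises it while the
    remainder is lowered, keeping the mean height at exactly zero."""
    random.seed(seed)
    heights = [0.0] * n_vertices
    for _ in range(n_faults):
        start = random.randrange(n_vertices)
        length = random.randrange(1, n_vertices)
        raised = {(start + k) % n_vertices for k in range(length)}
        for v in range(n_vertices):
            if v in raised:
                heights[v] += step * (1 - length / n_vertices)
            else:
                heights[v] -= step * (length / n_vertices)
    return heights

h = fault_profile()
print(f"mean height = {sum(h) / len(h):.6f}")  # zero by construction
```

Per fault, the raised arc gains length*(1 - length/n) in total while the rest loses (n - length)*(length/n), so every fault is mean-preserving; accumulated faults produce the rough, zero-mean relief onto which crater and boulder models would be layered.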
A model-driven approach to information security compliance
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the base for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
Heuristic approaches to models and modeling in systems biology
MacLeod, Miles
2016-01-01
Prediction and control sufficient for reliable medical and other interventions are prominent aims of modeling in systems biology. The short-term attainment of these goals has played a strong role in projecting the importance and value of the field. In this paper I identify the standard models must m
A Model Management Approach for Co-Simulation Model Evaluation
Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno
2011-01-01
Simulating formal models is a common means for validating the correctness of the system design and reducing the time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software
A New Detection Approach Based on the Maximum Entropy Model
Institute of Scientific and Technical Information of China (English)
DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua
2006-01-01
The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic(ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on support vector machine(SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
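The minimal-entropy partitioning step mentioned above (used for attribute discretization) can be sketched as a search for the cut point that minimizes the size-weighted entropy of the two resulting intervals. The data below are invented for illustration; a real intrusion-detection pipeline would apply this recursively to KDD CUP attribute columns.

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    ent = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        ent -= p * math.log2(p)
    return ent

def best_cut(values, labels):
    """Cut point minimizing the size-weighted entropy of both sides."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (float("inf"), None)
    for i in range(1, n):
        left = [c for _, c in pairs[:i]]
        right = [c for _, c in pairs[i:]]
        w_ent = (len(left) * entropy(left)
                 + len(right) * entropy(right)) / n
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        best = min(best, (w_ent, cut))
    return best[1]

# Toy attribute: small values are "normal", large ones "attack".
values = [1, 2, 3, 4, 9, 10, 11, 12]
labels = ["normal"] * 4 + ["attack"] * 4
print(best_cut(values, labels))  # → 6.5
```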
LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL
Directory of Open Access Journals (Sweden)
Eser ÖRDEM
2013-06-01
Full Text Available Abstract This study intends to propose the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language so that this model can be used in classroom settings. This model was created by the researcher as a result of studies carried out in applied linguistics (Hill, 2000) and memory (Murphy, 2004). Since one of the main problems of foreign language learners is retrieving what they have learnt, Lewis (1998) and Wray (2008) assume that the lexical approach is an alternative explanation to solve this problem. Unlike the grammar translation method, this approach supports the idea that language is not composed of general grammar but of strings of words and word combinations. In addition, the lexical approach posits that each word has its own grammatical properties, and therefore each dictionary is a potential grammar book. Foreign language learners can learn to use collocations, a basic principle of the Lexical Approach, and thus increase their level of retention. The concept of the retrieval clue (Murphy, 2004) is considered the main element in this collocational study model because the main purpose of this model is to boost fluency and help learners gain native-like accuracy while producing the target language. Keywords: Foreign language teaching, lexical approach, collocations, retrieval clue
A Model-Driven Approach for Telecommunications Network Services Definition
Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.
The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure the support for collaborative work and for checking properties on models.
Spousal similarity in coping and depressive symptoms over 10 years.
Holahan, Charles J; Moos, Rudolf H; Moerkbak, Marie L; Cronkite, Ruth C; Holahan, Carole K; Kenney, Brent A
2007-12-01
Following a baseline sample of 184 married couples over 10 years, the present study develops a broadened conceptualization of linkages in spouses' functioning by examining similarity in coping as well as in depressive symptoms. Consistent with hypotheses, results demonstrated (a) similarity in depressive symptoms within couples across 10 years, (b) similarity in coping within couples over 10 years, and (c) the role of coping similarity in strengthening depressive similarity between spouses. Spousal similarity in coping was evident for a composite measure of percent approach coping as well as for component measures of approach and avoidance coping. The role of coping similarity in strengthening depressive symptom similarity was observed for percent approach coping and for avoidance coping. These findings support social contextual models of psychological adjustment that emphasize the importance of dynamic interdependencies between individuals in close relationships.
Bleys, Dries; Soenens, Bart; Boone, Liesbet; Claes, Stephan; Vliegen, Nicole; Luyten, Patrick
2016-06-01
Research investigating the development of adolescent self-criticism has typically focused on the role of either parental self-criticism or parenting. This study used an actor-partner interdependence model to examine an integrated theoretical model in which achievement-oriented psychological control has an intervening role in the relation between parental and adolescent self-criticism. Additionally, the relative contribution of both parents and the moderating role of adolescent gender were examined. Participants were 284 adolescents (M = 14 years, range = 12-16 years) and their parents (M = 46 years, range = 32-63 years). Results showed that only maternal self-criticism was directly related to adolescent self-criticism. However, both parents' achievement-oriented psychological control had an intervening role in the relation between parent and adolescent self-criticism in both boys and girls. Moreover, one parent's achievement-oriented psychological control was not predicted by the self-criticism of the other parent.
Institute of Scientific and Technical Information of China (English)
Yukio NAMBA; Junichi IIJIMA
2003-01-01
Enterprise systems must have the structure to adapt to changes in the business environment. When rebuilding an enterprise system to meet extended operational boundaries, the concept of IT city planning is applicable and effective. The aim of this paper is to describe the architectural approach from the integrated information infrastructure (In3) standpoint and to propose applying the "City Planning" concept to rebuilding "inter-application spaghetti" enterprise systems. This is mainly because the portion of infrastructure has increased with the change of information systems from centralized systems to distributed and open systems. As enterprise systems have involved heterogeneity or architectural black boxes, an integration framework (meta-architecture) may be required as a discipline based on heterogeneity that can provide a comprehensive view of the enterprise systems. This paper proposes the "EII Meta-model" as the integration framework that can optimize the overall enterprise systems from the IT city planning point of view. The EII Meta-model consists of an "Integrated Information Infrastructure Map (In3-Map)", a "Service Framework" and an "IT Scenario". It would be applicable and effective for the viable enterprise, because it has the mechanism to adapt to change. Finally, we illustrate a case of an information system in an online securities company and demonstrate the applicability and effectiveness of the EII Meta-model in meeting their business goals.
Shoshan, Michal S; Dekel, Noa; Goch, Wojciech; Shalev, Deborah E; Danieli, Tsafi; Lebendiker, Mario; Bal, Wojciech; Tshuva, Edit Y
2016-06-01
The effect of position II in the binding sequence of copper metallochaperones, which varies between Thr and His, was investigated through structural analysis and affinity and oxidation kinetic studies of model peptides. A first Cys-Cu(I)-Cys model, obtained for the His peptide at acidic and neutral pH, correlated with higher affinity and more rapid oxidation of its complex; in contrast, the Thr peptide with the Cys-Cu(I)-Met coordination under neutral conditions demonstrated weaker and pH dependent binding. Studies with human antioxidant protein 1 (Atox1) and three of its mutants where S residues were replaced with Ala suggested that (a) the binding affinity is influenced more by the binding sequence than by the protein fold, (b) pH may play a role in binding reactivity, and (c) mutating the Met impacted the affinity and oxidation rate more drastically than did mutating one of the Cys, supporting its important role in protein function. Position II thus plays a dominant role in metal binding and transport.
Child human model development: a hybrid validation approach
Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.
2008-01-01
The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a
Modeling Alaska boreal forests with a controlled trend surface approach
Mo Zhou; Jingjing Liang
2012-01-01
A Controlled Trend Surface approach was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...
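The large-scale spatial trend component can be illustrated with a first-order trend surface, an ordinary least-squares plane z = a + b*x + c*y fitted to point observations. The sketch below is a generic illustration with invented coefficients, not the paper's controlled formulation.

```python
def fit_trend_surface(points):
    """Fit z = a + b*x + c*y by solving the 3x3 normal equations
    with Gaussian elimination (ordinary least squares)."""
    # Accumulate X^T X and X^T z for design rows (1, x, y).
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y, z in points:
        row = (1.0, x, y)
        for i in range(3):
            rhs[i] += row[i] * z
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    coef = [0.0] * 3
    for i in (2, 1, 0):
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return coef  # [a, b, c]

# Exact plane z = 1 + 2x + 3y sampled on a 5x5 grid.
pts = [(x, y, 1 + 2 * x + 3 * y) for x in range(5) for y in range(5)]
a, b, c = fit_trend_surface(pts)
print(round(a, 6), round(b, 6), round(c, 6))
```

A controlled variant would constrain or regularize these trend coefficients while modeling the nonspatial effects separately; higher-order surfaces simply add polynomial terms to the design row.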
Teaching Service Modelling to a Mixed Class: An Integrated Approach
Deng, Jeremiah D.; Purvis, Martin K.
2015-01-01
Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of mode