Various forms of indexing HDMR for modelling multivariate classification problems
Energy Technology Data Exchange (ETDEWEB)
Aksu, Çağrı [Bahçeşehir University, Information Technologies Master Program, Beşiktaş, 34349 İstanbul (Turkey); Tunga, M. Alper [Bahçeşehir University, Software Engineering Department, Beşiktaş, 34349 İstanbul (Turkey)
2014-12-10
The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy, partitioning the given multivariate data set into less-variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems with real-world data. In most such problems we do not know all possible class values in the domain, that is, the data structure is non-orthogonal, whereas Plain HDMR requires an orthogonal data structure in the problem to be modelled. The main idea of this work is therefore to offer various forms of Indexing HDMR that successfully model these real-life classification problems. To test the different forms, several well-known multivariate classification problems from the UCI Machine Learning Repository were used, and the observed accuracies lie between 80% and 95%, which is very satisfactory.
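The Plain HDMR partitioning that the abstract contrasts with can be sketched in a few lines. The following is an illustrative first-order decomposition of a function tabulated on a full factorial (orthogonal) grid, not the authors' Indexing HDMR code; all names are ours:

```python
import numpy as np

def hdmr_first_order(grid_values):
    """First-order HDMR decomposition of values sampled on a full
    factorial grid.  grid_values: N-dimensional array, axis i
    indexing variable x_i.  Returns the constant term f0 and the
    univariate component functions f_i tabulated at the grid points."""
    f0 = grid_values.mean()                      # constant component
    comps = []
    for axis in range(grid_values.ndim):
        other = tuple(a for a in range(grid_values.ndim) if a != axis)
        comps.append(grid_values.mean(axis=other) - f0)   # f_i(x_i)
    return f0, comps

def hdmr_eval(f0, comps, idx):
    """Evaluate the truncated HDMR expansion at grid index tuple idx."""
    return f0 + sum(c[i] for c, i in zip(comps, idx))
```

For a purely additive function the first-order expansion reproduces the data exactly; classification data, as the abstract notes, rarely fills such an orthogonal grid, which is what motivates the indexing variants.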
High dimensional model representation method for fuzzy structural dynamics
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
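The α-cut step the abstract mentions can be illustrated independently of any finite element model. The sketch below cuts triangular fuzzy parameters at a level α and bounds a (cheap surrogate) response by the vertex method; it assumes a monotone response over each cut interval and is not the authors' implementation:

```python
import itertools
import numpy as np

def alpha_cut_triangular(a, m, b, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b)
    at membership level alpha in [0, 1]."""
    return a + alpha * (m - a), b - alpha * (b - m)

def propagate_vertex(surrogate, fuzzy_params, alpha):
    """Vertex-method bounds of surrogate(x) when each parameter is a
    triangular fuzzy number cut at level alpha (valid for responses
    monotone in each parameter over the cut intervals)."""
    intervals = [alpha_cut_triangular(*p, alpha) for p in fuzzy_params]
    vals = [surrogate(np.array(v)) for v in itertools.product(*intervals)]
    return min(vals), max(vals)
```

Sweeping α from 0 to 1 and collecting the bounds reconstructs the fuzzy membership function of the output; in the paper's setting the surrogate role is played by the HDMR expansion rather than the full finite element solver.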
ANOVA-HDMR structure of the higher order nodal diffusion solution
International Nuclear Information System (INIS)
Bokov, P. M.; Prinsloo, R. H.; Tomasevic, D. I.
2013-01-01
Nodal diffusion methods still represent a standard in global reactor calculations, but employ some ad-hoc approximations (such as the quadratic leakage approximation) which limit their accuracy in cases where reference quality solutions are sought. In this work we solve the nodal diffusion equations utilizing the so-called higher-order nodal methods to generate reference quality solutions and to decompose the obtained solutions via a technique known as High Dimensional Model Representation (HDMR). This representation and associated decomposition of the solution provides a new formulation of the transverse leakage term. The HDMR structure is investigated via the technique of Analysis of Variance (ANOVA), which indicates why the existing class of transversely-integrated nodal methods prove to be so successful. Furthermore, the analysis leads to a potential solution method for generating reference quality solutions at a much reduced calculational cost, by applying the ANOVA technique to the full higher order solution. (authors)
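The ANOVA inspection of an HDMR structure described above can be sketched numerically: compute the first-order variance share of each variable and see how much of the total variance low-order terms explain. This is an illustrative calculation on gridded data, not the authors' nodal-diffusion code:

```python
import numpy as np

def anova_variance_shares(grid_values):
    """First-order ANOVA variance shares of a function tabulated on a
    full factorial grid with uniform weights.  Returns one share per
    input variable; comparing their sum with the total variance shows
    how much the higher-order HDMR terms contribute."""
    f0 = grid_values.mean()
    shares = []
    for axis in range(grid_values.ndim):
        other = tuple(a for a in range(grid_values.ndim) if a != axis)
        # variance of the zero-mean univariate component f_i(x_i)
        shares.append((grid_values.mean(axis=other) - f0).var())
    return shares
```

For an additive function the first-order shares sum exactly to the total variance, the situation in which transversely-integrated (one-dimensional) treatments are most accurate.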
Efficient stochastic EMC/EMI analysis using HDMR-generated surrogate models
Yücel, Abdulkadir C.
2011-08-01
Stochastic methods have been used extensively to quantify effects due to uncertainty in system parameters (e.g. material, geometrical, and electrical constants) and/or excitation on observables pertinent to electromagnetic compatibility and interference (EMC/EMI) analysis (e.g. voltages across mission-critical circuit elements) [1]. In recent years, stochastic collocation (SC) methods, especially those leveraging generalized polynomial chaos (gPC) expansions, have received significant attention [2, 3]. SC-gPC methods probe surrogate models (i.e. compact polynomial input-output representations) to statistically characterize observables. They are nonintrusive, that is, they use existing deterministic simulators, and often cost only a fraction of direct Monte Carlo (MC) methods. Unfortunately, SC-gPC-generated surrogate models often lack accuracy (i) when the number of uncertain/random system variables is large and/or (ii) when the observables exhibit rapid variations. © 2011 IEEE.
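In the simplest one-dimensional setting, the SC-gPC idea the abstract summarizes amounts to fitting an orthogonal-polynomial surrogate at collocation points. The sketch below uses Legendre polynomials for a variable uniform on [-1, 1]; it is a minimal illustration under our own naming, not the authors' solver:

```python
import numpy as np

def pc_surrogate_1d(model, degree, n_colloc=None):
    """Least-squares Legendre polynomial-chaos surrogate of model(x)
    for x uniform on [-1, 1], the simplest SC-gPC setting.
    Returns a callable surrogate."""
    n = n_colloc or 2 * (degree + 1)
    # Gauss-Legendre nodes as collocation points
    x, _ = np.polynomial.legendre.leggauss(n)
    y = np.array([model(xi) for xi in x])
    coef = np.polynomial.legendre.legfit(x, y, degree)
    return lambda x_new: np.polynomial.legendre.legval(x_new, coef)
```

Once built, the surrogate is sampled instead of the expensive deterministic simulator; the accuracy limitations noted in the abstract appear when many such variables interact or the response varies rapidly.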
Yucel, Abdulkadir C.
2015-05-05
An efficient method for statistically characterizing multiconductor transmission line (MTL) networks subject to a large number of manufacturing uncertainties is presented. The proposed method achieves its efficiency by leveraging a high-dimensional model representation (HDMR) technique that approximates observables (quantities of interest in MTL networks, such as voltages/currents on mission-critical circuits) in terms of iteratively constructed component functions of only the most significant random variables (parameters that characterize the uncertainties in MTL networks, such as conductor locations and widths, and lumped element values). The efficiency of the proposed scheme is further increased using a multielement probabilistic collocation (ME-PC) method to compute the component functions of the HDMR. The ME-PC method makes use of generalized polynomial chaos (gPC) expansions to approximate the component functions, where the expansion coefficients are expressed in terms of integrals of the observable over the random domain. These integrals are numerically evaluated and the observable values at the quadrature/collocation points are computed using a fast deterministic simulator. The proposed method is capable of producing accurate statistical information pertinent to an observable that is rapidly varying across a high-dimensional random domain at a computational cost that is significantly lower than that of gPC or Monte Carlo methods. The applicability, efficiency, and accuracy of the method are demonstrated via statistical characterization of frequency-domain voltages in parallel wire, interconnect, and antenna corporate feed networks.
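The "most significant random variables" selection that drives the iterative HDMR construction can be sketched as a univariate variance screen around a nominal point (a cut-HDMR style test). This is an illustrative screening step with hypothetical names, not the paper's ME-PC machinery:

```python
import numpy as np

def significant_variables(model, nominal, deltas, n=9, tol=1e-3):
    """Rank random variables by the variance of their univariate
    cut-HDMR component about a nominal point; keep those whose share
    of the summed variance exceeds tol.  Returns (kept_indices,
    variances)."""
    variances = []
    for i, d in enumerate(deltas):
        xs = np.linspace(nominal[i] - d, nominal[i] + d, n)
        ys = []
        for xv in xs:
            x = np.array(nominal, dtype=float)
            x[i] = xv                  # vary one coordinate at a time
            ys.append(model(x))
        variances.append(np.var(ys))
    total = sum(variances) or 1.0
    kept = [i for i, v in enumerate(variances) if v / total > tol]
    return kept, variances
```

Only the retained variables would then receive higher-order component functions, which is where the cost savings over full gPC or Monte Carlo come from.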
Boundary representation modelling techniques
2006-01-01
Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as giving a background for modern research.
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has become a very popular research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more crucial, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally a time-consuming process, especially for large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns, as well as the redundant features, need to be known in advance. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Textural as well as spectral information was used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using gray level co-occurrence matrices with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR) giving the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input
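The gray level co-occurrence matrix (GLCM) step behind the Haralick features mentioned above is easy to sketch. The following is a minimal pure-NumPy illustration of one displacement and one Haralick feature (contrast), not the study's feature-extraction pipeline:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix of an integer image (values in
    0..levels-1) for pixel displacement (dx, dy), normalised to a
    joint probability table."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast feature: sum over (i, j) of (i - j)^2 p(i, j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()
```

Repeating this for several window sizes and directions yields the feature vectors whose usefulness the HDMR-based selection then ranks.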
Standard model of knowledge representation
Yin, Wensheng
2016-09-01
Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic networks, computer programming languages, databases, mathematical models, graphics languages, natural language, etc. To establish the intrinsic link between the various knowledge representation methods, a unified knowledge representation model is necessary. According to ontology, system theory, and control theory, a standard model of knowledge representation that reflects the change of the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict the traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms. It can also express process knowledge and, at the same time, has a strong ability to solve problems. In addition, the standard model of knowledge representation provides a way to solve problems of imprecise and inconsistent knowledge.
Digital models for architectonical representation
Directory of Open Access Journals (Sweden)
Stefano Brusaporci
2011-12-01
Full Text Available Digital instruments and technologies enrich the opportunities of architectonical representation and communication. Computer graphics is organized according to the two phases of visualization and construction, that is, modeling and rendering, a dichotomy that structures software technologies. Visualization modalities give different kinds of representations of the same 3D model, and instruments produce a separation between drawing and image creation. Reverse modeling can be related to a synthesis process, while 'direct modeling' follows an analytic procedure. The difference between interactive and non-interactive applications is connected to the possibilities offered by informatics instruments, and relates to both modeling and rendering. At the same time, the word 'model' describes different phenomena (i.e. files): the mathematical model of the building and of the scene, the raster representation and the post-processing model. All these correlated models constitute the architectonical interpretative model, that is, a simulation of reality made through the model to improve knowledge.
Guideline Knowledge Representation Model (GLIKREM)
Czech Academy of Sciences Publication Activity Database
Buchtela, David; Peleška, Jan; Veselý, Arnošt; Zvárová, Jana; Zvolský, Miroslav
2008-01-01
Roč. 4, č. 1 (2008), s. 17-23 ISSN 1801-5603 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : knowledge representation * GLIF model * guidelines Subject RIV: IN - Informatics, Computer Science http://www.ejbi.org/articles/200812/34/1.html
Preon representations and composite models
International Nuclear Information System (INIS)
Kang, Kyungsik
1982-01-01
This is a brief report on the preon models which are investigated by In-Gyu Koh, A. N. Schellekens and myself, based on complex, anomaly-free and asymptotically free representations of SU(3) to SU(8), SO(4N+2) and E6 with no more than two different preons. A complete list of the representations that are complex, anomaly-free and asymptotically free has been given by E. Eichten, I.-G. Koh and myself. The assumptions made about the ground state composites and the role of Fermi statistics in determining the metaflavor wave functions are discussed in some detail. We explain the method of decomposition of tensor products with definite permutation properties which has been developed for this purpose by I.-G. Koh, A. N. Schellekens and myself. An example based on an anomaly-free representation of the confining metacolor group SU(5) is discussed.
Gomez, Luis J; Yücel, Abdulkadir C; Hernandez-Garcia, Luis; Taylor, Stephan F; Michielssen, Eric
2015-01-01
A computational framework for uncertainty quantification in transcranial magnetic stimulation (TMS) is presented. The framework leverages high-dimensional model representations (HDMRs), which approximate observables (i.e., quantities of interest such as electric (E) fields induced inside targeted cortical regions) via series of iteratively constructed component functions involving only the most significant random variables (i.e., parameters that characterize the uncertainty in a TMS setup such as the position and orientation of TMS coils, as well as the size, shape, and conductivity of the head tissue). The component functions of HDMR expansions are approximated via a multielement probabilistic collocation (ME-PC) method. While approximating each component function, a quasi-static finite-difference simulator is used to compute observables at integration/collocation points dictated by the ME-PC method. The proposed framework requires far fewer simulations than traditional Monte Carlo methods for providing highly accurate statistical information (e.g., the mean and standard deviation) about the observables. The efficiency and accuracy of the proposed framework are demonstrated via its application to the statistical characterization of E-fields generated by TMS inside cortical regions of an MRI-derived realistic head model. Numerical results show that while uncertainties in tissue conductivities have negligible effects on TMS operation, variations in coil position/orientation and brain size significantly affect the induced E-fields. Our numerical results have several implications for the use of TMS during depression therapy: 1) uncertainty in the coil position and orientation may reduce the response rates of patients; 2) practitioners should favor targets on the crest of a gyrus to obtain maximal stimulation; and 3) an increasing scalp-to-cortex distance reduces the magnitude of E-fields on the surface and inside the cortex.
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
A Distributional Representation Model For Collaborative Filtering
Junlin, Zhang; Heng, Cai; Tongwen, Huang; Huiping, Xue
2015-01-01
In this paper, we propose a very concise deep learning approach for collaborative filtering that jointly models distributional representations for users and items. The proposed framework obtains better performance than current state-of-the-art algorithms, which makes the distributional representation model a promising direction for further research in collaborative filtering.
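The core idea of jointly learned user and item representations can be sketched with plain matrix factorization trained by SGD. This is a deliberately minimal stand-in for the paper's deep model, with all names and hyperparameters our own:

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=4, lr=0.05, epochs=500, seed=0):
    """Minimal matrix-factorisation sketch: learn k-dimensional
    distributional representations U (users) and V (items) by SGD on
    squared error over (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]          # prediction error
            U[u] += lr * err * V[i]        # gradient steps on both
            V[i] += lr * err * U[u]        # representations jointly
    return U, V
```

A predicted preference is then the inner product of the two learned vectors, the same role the distributional representations play in the paper's richer architecture.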
Improving Representational Competence with Concrete Models
Stieff, Mike; Scopelitis, Stephanie; Lira, Matthew E.; DeSutter, Dane
2016-01-01
Representational competence is a primary contributor to student learning in science, technology, engineering, and math (STEM) disciplines and an optimal target for instruction at all educational levels. We describe the design and implementation of a learning activity that uses concrete models to improve students' representational competence and…
General regression and representation model for classification.
Directory of Open Access Journals (Sweden)
Jianjun Qian
Full Text Available Recently, regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance the classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
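The CRC-style regularized coding step that GRR generalizes can be sketched as ridge-regularized coding over a dictionary of training samples, with classification by class-wise reconstruction residual. This sketch uses a plain identity Tikhonov matrix, not GRR's learned prior, and all names are ours:

```python
import numpy as np

def ridge_code(D, y, lam=0.1):
    """Regularised coding of test sample y over dictionary D
    (columns = training samples): solve (D^T D + lam I) a = D^T y."""
    G = D.T @ D + lam * np.eye(D.shape[1])
    return np.linalg.solve(G, D.T @ y)

def classify(D, labels, y, lam=0.1):
    """Assign y to the class whose training columns best
    reconstruct it from its share of the coding vector."""
    a = ridge_code(D, y, lam)
    best, best_r = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        r = np.linalg.norm(y - D[:, mask] @ a[mask])   # class residual
        if r < best_r:
            best, best_r = c, r
    return best
```

GRR replaces the identity regularizer with a learned generalized Tikhonov matrix and iteratively reweights pixels, but the residual-based decision rule is the same.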
International Nuclear Information System (INIS)
Davis, J. E.; Eddy, M. J.; Sutton, T. M.; Altomari, T. J.
2007-01-01
Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces - a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation. (authors)
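The half-space representation targeted by the conversion above composes solids from intersections and unions of half-space membership tests. A minimal illustrative sketch (our own names, not the paper's algorithm, which additionally converts B-spline boundaries):

```python
import numpy as np

def halfspace(n, d):
    """Half-space {x : n . x <= d} as a membership predicate."""
    n = np.asarray(n, dtype=float)
    return lambda x: float(np.dot(n, x)) <= d

def intersect(*regions):
    return lambda x: all(r(x) for r in regions)

def union(*regions):
    return lambda x: any(r(x) for r in regions)

# Unit cube as the intersection of six half-spaces, the kind of
# primitive a Monte Carlo geometry module expects
cube = intersect(
    halfspace([1, 0, 0], 1), halfspace([-1, 0, 0], 0),
    halfspace([0, 1, 0], 1), halfspace([0, -1, 0], 0),
    halfspace([0, 0, 1], 1), halfspace([0, 0, -1], 0),
)
```

Point-membership queries like this are the basic operation Monte Carlo particle-tracking codes perform against such a geometry.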
Internal Representational Models of Attachment Relationships.
Crittenden, Patricia M.
This paper outlines several properties of internal representational models (IRMs) and offers terminology that may help to differentiate the models. Properties of IRMs include focus, memory systems, content, cognitive function, "metastructure," quality of attachment, behavioral strategies, and attitude toward attachment. An IRM focuses on…
Approximate Dynamic Programming Based on High Dimensional Model Representation
Czech Academy of Sciences Publication Activity Database
Pištěk, Miroslav
2013-01-01
Roč. 49, č. 5 (2013), s. 720-737 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GAP102/11/0437 Institutional support: RVO:67985556 Keywords : approximate dynamic programming * Bellman equation * approximate HDMR minimization * trust region problem Subject RIV: BC - Control Systems Theory Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/pistek-0399560.pdf
Towards New Mappings between Emotion Representation Models
Directory of Open Access Journals (Sweden)
Agnieszka Landowska
2018-02-01
Full Text Available There are several models for representing emotions in affect-aware applications, and available emotion recognition solutions provide results using diverse emotion models. As multimodal fusion is beneficial in terms of both accuracy and reliability of emotion recognition, one of the challenges is mapping between the models of affect representation. This paper addresses this issue by: proposing a procedure to elaborate new mappings, recommending a set of metrics for evaluation of the mapping accuracy, and delivering new mapping matrices for estimating the dimensions of a Pleasure-Arousal-Dominance model from Ekman’s six basic emotions. The results are based on an analysis using three datasets that were constructed based on affect-annotated lexicons. The new mappings were obtained with linear regression learning methods. The proposed mappings showed better results on the datasets in comparison with the state-of-the-art matrix. The procedure, as well as the proposed metrics, might be used, not only in evaluation of the mappings between representation models, but also in comparison of emotion recognition and annotation results. Moreover, the datasets are published along with the paper and new mappings might be created and evaluated using the proposed methods. The study results might be interesting for both researchers and developers, who aim to extend their software solutions with affect recognition techniques.
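The mapping-elaboration procedure described above, fitting a linear map from six basic-emotion intensities to Pleasure-Arousal-Dominance dimensions, can be sketched with ordinary least squares. This is an illustrative reconstruction under our own naming, not the authors' published matrices:

```python
import numpy as np

def fit_mapping(E, pad):
    """Least-squares affine mapping from basic-emotion intensity
    vectors (rows of E, e.g. Ekman's six) to PAD dimensions:
    pad ~= [E, 1] @ W.  Returns the (n_emotions+1, 3) matrix W."""
    A = np.hstack([E, np.ones((E.shape[0], 1))])   # append bias column
    W, *_ = np.linalg.lstsq(A, pad, rcond=None)
    return W

def apply_mapping(W, e):
    """Map one emotion-intensity vector e to PAD coordinates."""
    return np.append(e, 1.0) @ W
```

Evaluating such a fitted W on held-out affect-annotated lexicon entries, with metrics like those the paper recommends, is how competing mapping matrices would be compared.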
Knowledge Representation Using Multilevel Flow Model in Expert System
International Nuclear Information System (INIS)
Wang, Wenlin; Yang, Ming
2015-01-01
There are a great many methods available for knowledge representation, including frames, causal models, and many others. This paper presents a novel method called the Multilevel Flow Model (MFM), which is used for knowledge representation in the G2 expert system. Knowledge representation plays a vital role in constructing knowledge bases. Moreover, it also has an impact on the building of generic fault models as well as knowledge bases. The MFM is particularly useful for describing system knowledge concisely as a domain map in an expert system when domain experts are not available.
Knowledge Representation Using Multilevel Flow Model in Expert System
Energy Technology Data Exchange (ETDEWEB)
Wang, Wenlin; Yang, Ming [Harbin Engineering University, Harbin (China)
2015-05-15
There are a great many methods available for knowledge representation, including frames, causal models, and many others. This paper presents a novel method called the Multilevel Flow Model (MFM), which is used for knowledge representation in the G2 expert system. Knowledge representation plays a vital role in constructing knowledge bases. Moreover, it also has an impact on the building of generic fault models as well as knowledge bases. The MFM is particularly useful for describing system knowledge concisely as a domain map in an expert system when domain experts are not available.
Generalized Enhanced Multivariance Product Representation for Data Partitioning: Constancy Level
International Nuclear Information System (INIS)
Tunga, M. Alper; Demiralp, Metin
2011-01-01
The Enhanced Multivariance Product Representation (EMPR) method is used to represent multivariate functions in terms of less-variate structures. The EMPR method extends the HDMR expansion by inserting additional support functions to increase the quality of the approximants obtained for dominantly or purely multiplicative analytical structures. This work aims to develop a generalized form of the EMPR method for use in multivariate data partitioning approaches. For this purpose, the Generalized HDMR philosophy is taken into consideration to construct the details of the Generalized EMPR at constancy level as an introductory step, and encouraging results are obtained in data partitioning problems using the new method. In addition, to examine this performance, a number of numerical implementations with concluding remarks are given at the end of this paper.
National Research Council Canada - National Science Library
Little, Daniel
2006-01-01
...). The reason this is so is due to hierarchies that we take for granted. By hierarchies I mean that there is a layer of representation of us as individuals, as military professional, as members of a military unit and as citizens of an entire nation...
Do Knowledge-Component Models Need to Incorporate Representational Competencies?
Rau, Martina Angela
2017-01-01
Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…
2006-09-01
Slide excerpts from the MIT Beer Game supply chain simulation (Professor Sterman, Sloan School of Management, MIT; sources: http://beergame.mit.edu/ and http://web.mit.edu/jsterman/www/SDG/beergame.html), concluding with the question "What is the Significance of Representation".
Two problems from the theory of semiotic control models. I. Representations of semiotic models
Energy Technology Data Exchange (ETDEWEB)
Osipov, G S
1981-11-01
Two problems from the theory of semiotic control models are stated, in particular the representation of models and their semantic analysis. The algebraic representation of semiotic models, coverings of representations, and their reduction and equivalence are discussed. The interrelations between functional and structural characteristics of semiotic models are investigated. 20 references.
A Knowledge-Based Representation Scheme for Environmental Science Models
Keller, Richard M.; Dungan, Jennifer L.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
One of the primary methods available for studying environmental phenomena is the construction and analysis of computational models. We have been studying how artificial intelligence techniques can be applied to assist in the development and use of environmental science models within the context of NASA-sponsored activities. We have identified several high-utility areas as potential targets for research and development: model development; data visualization, analysis, and interpretation; model publishing and reuse, training and education; and framing, posing, and answering questions. Central to progress on any of the above areas is a representation for environmental models that contains a great deal more information than is present in a traditional software implementation. In particular, a traditional software implementation is devoid of any semantic information that connects the code with the environmental context that forms the background for the modeling activity. Before we can build AI systems to assist in model development and usage, we must develop a representation for environmental models that adequately describes a model's semantics and explicitly represents the relationship between the code and the modeling task at hand. We have developed one such representation in conjunction with our work on the SIGMA (Scientists' Intelligent Graphical Modeling Assistant) environment. The key feature of the representation is that it provides a semantic grounding for the symbols in a set of modeling equations by linking those symbols to an explicit representation of the underlying environmental scenario.
BIM-enabled Conceptual Modelling and Representation of Building Circulation
Lee, Jin Kook; Kim, Mi Jeong
2014-01-01
This paper describes how a building information modelling (BIM)-based approach for building circulation enables us to change the process of building design in terms of its computational representation and processes, focusing on the conceptual modelling and representation of circulation within buildings. BIM has been designed for use by several BIM authoring tools, in particular with the widely known interoperable industry foundation classes (IFCs), which follow an object-oriented data modelli...
Converting biomolecular modelling data based on an XML representation.
Sun, Yudong; McKeever, Steve
2008-08-25
Biomolecular modelling has provided computational simulation based methods for investigating biological processes from quantum chemical to cellular levels. Modelling such microscopic processes requires an atomic description of a biological system and is conducted in fine timesteps. Consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. The integration of diverse biomolecular models needs to convert molecular data between the different data representations of different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.
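The round-trip conversion through an XML intermediate that the abstract describes can be sketched with the standard library. The tag names below are illustrative only, not the actual BioSimML schema:

```python
import xml.etree.ElementTree as ET

def atoms_to_xml(atoms):
    """Serialise a list of (element, x, y, z) atom records into a
    minimal BioSimML-like XML document (illustrative tag names)."""
    root = ET.Element("molecule")
    for name, x, y, z in atoms:
        ET.SubElement(root, "atom",
                      element=name, x=str(x), y=str(y), z=str(z))
    return ET.tostring(root, encoding="unicode")

def xml_to_atoms(xml_text):
    """Inverse conversion back to the tuple representation."""
    root = ET.fromstring(xml_text)
    return [(a.get("element"),
             float(a.get("x")), float(a.get("y")), float(a.get("z")))
            for a in root.findall("atom")]
```

A neutral intermediate like this is what lets a molecular dynamics format and a QM/MM format exchange data without a bespoke pairwise converter.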
Cognition and procedure representational requirements for predictive human performance models
Corker, K.
1992-01-01
Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods.
On push-forward representations in the standard gyrokinetic model
International Nuclear Information System (INIS)
Miyato, N.; Yagi, M.; Scott, B. D.
2015-01-01
Two representations of fluid moments in terms of a gyro-center distribution function and gyro-center coordinates, which are called push-forward representations, are compared in the standard electrostatic gyrokinetic model. In the representation conventionally used to derive the gyrokinetic Poisson equation, the pull-back transformation of the gyro-center distribution function contains effects of the gyro-center transformation and therefore electrostatic potential fluctuations, which are described by the Poisson brackets between the distribution function and scalar functions generating the gyro-center transformation. Usually, only the lowest order solution of the generating function at first order is considered to explicitly derive the gyrokinetic Poisson equation. This is true in explicitly deriving representations of scalar fluid moments with polarization terms. One also recovers the particle diamagnetic flux at this order because it is associated with the guiding-center transformation. However, higher-order solutions are needed to derive finite Larmor radius terms of particle flux including the polarization drift flux from the conventional representation. On the other hand, the lowest order solution is sufficient for the other representation, in which the gyro-center transformation part is combined with the guiding-center one and the pull-back transformation of the distribution function does not appear.
On push-forward representations in the standard gyrokinetic model
Energy Technology Data Exchange (ETDEWEB)
Miyato, N., E-mail: miyato.naoaki@jaea.go.jp; Yagi, M. [Japan Atomic Energy Agency, 2-116 Omotedate, Obuchi, Rokkasho, Aomori 039-3212 (Japan); Scott, B. D. [Max-Planck-Institut für Plasmaphysik, D-85748 Garching (Germany)
2015-01-15
Two representations of fluid moments in terms of a gyro-center distribution function and gyro-center coordinates, which are called push-forward representations, are compared in the standard electrostatic gyrokinetic model. In the representation conventionally used to derive the gyrokinetic Poisson equation, the pull-back transformation of the gyro-center distribution function contains effects of the gyro-center transformation and therefore electrostatic potential fluctuations, which are described by the Poisson brackets between the distribution function and scalar functions generating the gyro-center transformation. Usually, only the lowest order solution of the generating function at first order is considered to explicitly derive the gyrokinetic Poisson equation. This is true in explicitly deriving representations of scalar fluid moments with polarization terms. One also recovers the particle diamagnetic flux at this order because it is associated with the guiding-center transformation. However, higher-order solutions are needed to derive finite Larmor radius terms of particle flux including the polarization drift flux from the conventional representation. On the other hand, the lowest order solution is sufficient for the other representation, in which the gyro-center transformation part is combined with the guiding-center one and the pull-back transformation of the distribution function does not appear.
A Description Logic Based Knowledge Representation Model for Concept Understanding
DEFF Research Database (Denmark)
Badie, Farshad
2017-01-01
This research employs Description Logics in order to focus on logical description and analysis of the phenomenon of ‘concept understanding’. The article will deal with a formal-semantic model for figuring out the underlying logical assumptions of ‘concept understanding’ in knowledge representation systems. In other words, it attempts to describe a theoretical model for concept understanding and to reflect the phenomenon of ‘concept understanding’ in terminological knowledge representation systems. Finally, it will design an ontology that schemes the structure of concept understanding based…
A Fuzzy Knowledge Representation Model for Student Performance Assessment
DEFF Research Database (Denmark)
Badie, Farshad
Knowledge representation models based on Fuzzy Description Logics (DLs) can provide a foundation for reasoning in intelligent learning environments. While basic DLs are suitable for expressing crisp concepts and binary relationships, Fuzzy DLs are capable of processing degrees of truth/completeness about vague or imprecise information. This paper tackles the issue of representing fuzzy classes using OWL2 in a dataset describing Performance Assessment Results of Students (PARS).
A Model of Representational Spaces in Human Cortex.
Guntupalli, J Swaroop; Hanke, Michael; Halchenko, Yaroslav O; Connolly, Andrew C; Ramadge, Peter J; Haxby, James V
2016-06-01
Current models of the functional architecture of human cortex emphasize areas that capture coarse-scale features of cortical topography but provide no account for population responses that encode information in fine-scale patterns of activity. Here, we present a linear model of shared representational spaces in human cortex that captures fine-scale distinctions among population responses with response-tuning basis functions that are common across brains and models cortical patterns of neural responses with individual-specific topographic basis functions. We derive a common model space for the whole cortex using a new algorithm, searchlight hyperalignment, and complex, dynamic stimuli that provide a broad sampling of visual, auditory, and social percepts. The model aligns representations across brains in occipital, temporal, parietal, and prefrontal cortices, as shown by between-subject multivariate pattern classification and intersubject correlation of representational geometry, indicating that structural principles for shared neural representations apply across widely divergent domains of information. The model provides a rigorous account for individual variability of well-known coarse-scale topographies, such as retinotopy and category selectivity, and goes further to account for fine-scale patterns that are multiplexed with coarse-scale topographies and carry finer distinctions. © The Author 2016. Published by Oxford University Press.
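The core alignment step behind hyperalignment can be sketched as an orthogonal Procrustes fit between two response matrices. Everything below is synthetic and illustrative (the searchlight aggregation, real fMRI timecourses, and whole-cortex machinery of the paper are omitted); it only shows the elementary rotation-recovery step:

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: rotation R minimizing ||X @ R - Y||_F.
    The orthogonal polar factor of X.T @ Y gives the optimal rotation."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))                 # brain 1: stimuli x features
true_R = np.linalg.qr(rng.standard_normal((20, 20)))[0]
Y = X @ true_R                                     # brain 2: rotated responses
R = procrustes_align(X, Y)
print(np.allclose(X @ R, Y))                       # the rotation is recovered
```

With real data X @ R only approximates Y; here Y is constructed exactly from a known rotation so the recovery can be checked.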
Crystal structure representations for machine learning models of formation energies
Energy Technology Data Exchange (ETDEWEB)
Faber, Felix [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Lindmaa, Alexander [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden; von Lilienfeld, O. Anatole [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Argonne Leadership Computing Facility, Argonne National Laboratory, 9700 S. Cass Avenue Lemont Illinois 60439; Armiento, Rickard [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden
2015-04-20
We introduce and evaluate a set of feature vector representations of crystal structures for machine learning (ML) models of formation energies of solids. ML models of atomization energies of organic molecules have been successful using a Coulomb matrix representation of the molecule. We consider three ways to generalize such representations to periodic systems: (i) a matrix where each element is related to the Ewald sum of the electrostatic interaction between two different atoms in the unit cell repeated over the lattice; (ii) an extended Coulomb-like matrix that takes into account a number of neighboring unit cells; and (iii) an ansatz that mimics the periodicity and the basic features of the elements in the Ewald sum matrix using a sine function of the crystal coordinates of the atoms. The representations are compared for a Laplacian kernel with Manhattan norm, trained to reproduce formation energies using a dataset of 3938 crystal structures obtained from the Materials Project. For training sets consisting of 3000 crystals, the generalization error in predicting formation energies of new structures corresponds to (i) 0.49, (ii) 0.64, and (iii) 0.37 eV/atom for the respective representations.
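A minimal sketch of ansatz (iii) and the Laplacian/Manhattan kernel follows. The diagonal convention 0.5·Z^2.4 is borrowed from the molecular Coulomb matrix, and the off-diagonal form is only a plausible reading of "sine function of the crystal coordinates"; treat the details as assumptions, not the paper's exact definitions:

```python
import numpy as np

def sine_matrix(Z, frac, B):
    """Coulomb-matrix-like descriptor that is periodic in the fractional
    coordinates: shifting any coordinate by a full lattice vector leaves
    the sine term unchanged."""
    n = len(Z)
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0.5 * Z[i] ** 2.4    # Coulomb-matrix diagonal
            else:
                # periodic "distance" via the sine of the fractional separation
                d = np.linalg.norm(B.T @ np.sin(np.pi * (frac[i] - frac[j])))
                M[i, j] = Z[i] * Z[j] / d
    return M

def laplacian_kernel(x, y, sigma=1.0):
    """Laplacian kernel with Manhattan (L1) norm, as used for the ML model."""
    return np.exp(-np.abs(x - y).sum() / sigma)

# toy two-atom cubic cell with made-up numbers
Z = np.array([1.0, 8.0])
frac = np.array([[0.0, 0.0, 0.0], [0.25, 0.25, 0.25]])
B = 4.0 * np.eye(3)                            # lattice vectors (rows)
M = sine_matrix(Z, frac, B)
```

Flattened (or eigenvalue-sorted) matrices of this kind would then feed a kernel ridge regression on formation energies.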
From the Osterwalder canvas to an alternative business model representation
Verrue, Johan
2015-01-01
The Osterwalder business model canvas (BMC) is used by many entrepreneurs, managers, consultants and business schools. In our research we have investigated whether the canvas is a valid instrument for gaining an in-depth, accurate insight into business models. Therefore we have performed initial multiple case study research which concluded that the canvas does not generate valid business model (BM) representations. In our second multiple case study, we have constructed an alternative BM frame...
Optical model representation of coupled channel effects
International Nuclear Information System (INIS)
Wall, N.S.; Cowley, A.A.; Johnson, R.C.; Kobas, A.M.
1977-01-01
A modification to the usual 6-parameter Woods-Saxon parameterization of the optical model for the scattering of composite particles is proposed. This additional real term reflects the effect of coupling other channels to the elastic scattering. The analyses favor a repulsive interaction for this term, especially for alpha particles. It is found that the repulsive term when combined with a Woods-Saxon term yields potentials with central values and volume integrals similar to those found by uncoupled elastic scattering calculations. These values are V(r = 0) ≈ 125 MeV and J/4A ≈ 300 MeV·fm³.
Thinking Egyptian: Active Models for Understanding Spatial Representation.
Schiferl, Ellen
This paper highlights how introductory textbooks on Egyptian art inhibit understanding by reinforcing student preconceptions, and demonstrates another approach to discussing space with a classroom exercise and software. The alternative approach, an active model for spatial representation, introduced here was developed by adapting classroom…
Multiscale geometric modeling of macromolecules II: Lagrangian representation
Feng, Xin; Xia, Kelin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei
2013-01-01
Geometric modeling of biomolecules plays an essential role in the conceptualization of biomolecular structure, function, dynamics and transport. Qualitatively, geometric modeling offers a basis for molecular visualization, which is crucial for the understanding of molecular structure and interactions. Quantitatively, geometric modeling bridges the gap between molecular information, such as that from X-ray, NMR and cryo-EM, and theoretical/mathematical models, such as molecular dynamics, the Poisson-Boltzmann equation and the Nernst-Planck equation. In this work, we present a family of variational multiscale geometric models for macromolecular systems. Our models are able to combine multiresolution geometric modeling with multiscale electrostatic modeling in a unified variational framework. We discuss a suite of techniques for molecular surface generation, molecular surface meshing, molecular volumetric meshing, and the estimation of Hadwiger’s functionals. Emphasis is given to the multiresolution representations of biomolecules and the associated multiscale electrostatic analyses as well as multiresolution curvature characterizations. The resulting fine resolution representations of a biomolecular system enable the detailed analysis of solvent-solute interaction and ion channel dynamics, while our coarse resolution representations highlight the compatibility of protein-ligand bindings and possibility of protein-protein interactions. PMID:23813599
Converting Biomolecular Modelling Data Based on an XML Representation
Directory of Open Access Journals (Sweden)
Sun Yudong
2008-06-01
Biomolecular modelling has provided computational simulation based methods for investigating biological processes from quantum chemical to cellular levels. Modelling such microscopic processes requires an atomic description of a biological system and proceeds in fine timesteps. Consequently the simulations are extremely computationally demanding. To tackle this limitation, different biomolecular models have to be integrated in order to achieve high-performance simulations. The integration of diverse biomolecular models needs to convert molecular data between different data representations of different models. This data conversion is often non-trivial, requires extensive human input and is inevitably error prone. In this paper we present an automated data conversion method for biomolecular simulations between molecular dynamics and quantum mechanics/molecular mechanics models. Our approach is developed around an XML data representation called BioSimML (Biomolecular Simulation Markup Language). BioSimML provides a domain specific data representation for biomolecular modelling which can efficiently support data interoperability between different biomolecular simulation models and data formats.
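The conversion pattern described, parsing one model's XML into a neutral structure and re-emitting it for another model, can be sketched with the standard library. The element and attribute names below are hypothetical stand-ins; the real BioSimML schema is not reproduced here:

```python
import xml.etree.ElementTree as ET

# Hypothetical MD-style molecule record (made-up schema).
md_xml = """<molecule name="water">
  <atom id="1" element="O" x="0.000" y="0.000" z="0.000"/>
  <atom id="2" element="H" x="0.757" y="0.586" z="0.000"/>
  <atom id="3" element="H" x="-0.757" y="0.586" z="0.000"/>
</molecule>"""

def md_to_intermediate(xml_text):
    """Parse the MD-style record into a neutral, format-agnostic dict."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "atoms": [
            {"element": a.get("element"),
             "xyz": tuple(float(a.get(k)) for k in ("x", "y", "z"))}
            for a in root.findall("atom")
        ],
    }

def intermediate_to_qmmm(mol):
    """Emit a (hypothetical) QM/MM-style XML record from the neutral dict."""
    root = ET.Element("system", name=mol["name"])
    for atom in mol["atoms"]:
        ET.SubElement(root, "site", element=atom["element"],
                      coords=" ".join(f"{c:.3f}" for c in atom["xyz"]))
    return ET.tostring(root, encoding="unicode")

mol = md_to_intermediate(md_xml)
print(intermediate_to_qmmm(mol))
```

Routing every conversion through the neutral intermediate is what keeps the number of converters linear in the number of formats rather than quadratic.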
From scenarios to domain models: processes and representations
Haddock, Gail; Harbison, Karan
1994-03-01
The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.
BIM-Enabled Conceptual Modelling and Representation of Building Circulation
Directory of Open Access Journals (Sweden)
Jin Kook Lee
2014-08-01
This paper describes how a building information modelling (BIM)-based approach for building circulation enables us to change the process of building design in terms of its computational representation and processes, focusing on the conceptual modelling and representation of circulation within buildings. BIM has been designed for use by several BIM authoring tools, in particular with the widely known interoperable industry foundation classes (IFCs), which follow an object-oriented data modelling methodology. Advances in BIM authoring tools, using space objects and their relations defined in the IFC schema, have made it possible to model, visualize and analyse circulation within buildings prior to their construction. Agent-based circulation has long been an interdisciplinary topic of research across several areas, including design computing, computer science, architectural morphology, human behaviour and environmental psychology. Such conventional approaches to building circulation are centred on navigational knowledge about built environments, and represent specific circulation paths and regulations. This paper, however, places emphasis on the use of ‘space objects’ in BIM-enabled design processes rather than on circulation agents, the latter of which are not defined in the IFC schemas. By introducing and reviewing some associated research and projects, this paper also surveys how such a circulation representation is applicable to the analysis of building circulation-related rules.
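Circulation over space objects reduces, at its simplest, to path search on a space-adjacency graph. The sketch below uses plain dicts as stand-ins for IfcSpace entities and their access relationships (room names and adjacencies are invented; a real workflow would extract them from the IFC model with a BIM toolkit):

```python
from collections import deque

# Hypothetical space objects and door adjacencies, standing in for
# IfcSpace entities and their access relationships in a BIM model.
adjacency = {
    "Lobby": ["Corridor"],
    "Corridor": ["Lobby", "Office A", "Office B", "Stair"],
    "Office A": ["Corridor"],
    "Office B": ["Corridor"],
    "Stair": ["Corridor"],
}

def circulation_path(start, goal):
    """Breadth-first search: shortest room-to-room circulation route."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(circulation_path("Lobby", "Office B"))  # → ['Lobby', 'Corridor', 'Office B']
```

Circulation rules (e.g. maximum egress distance) can then be checked as predicates over such paths.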
Sparse representation based image interpolation with nonlocal autoregressive modeling.
Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming
2013-04-01
Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.
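The sparse-coding building block underneath such models can be sketched with generic orthogonal matching pursuit. This is not the paper's NARM pipeline (the nonlocal data-fidelity term and patch machinery are omitted); the dictionary and signal below are synthetic:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily build a k-sparse code of y
    over dictionary D (columns = unit-norm atoms)."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)        # normalize atoms
x_true = np.zeros(64)
x_true[[3, 17]] = [1.5, -2.0]         # a 2-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, k=2)
```

In an interpolation setting, y would be an observed low-resolution patch and D a learned patch dictionary; the recovered code reconstructs the missing pixels.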
A Representation for Gaining Insight into Clinical Decision Models
Jimison, Holly B.
1988-01-01
For many medical domains uncertainty and patient preferences are important components of decision making. Decision theory is useful as a representation for such medical models in computer decision aids, but the methodology has typically had poor performance in the areas of explanation and user interface. The additional representation of probabilities and utilities as random variables serves to provide a framework for graphical and text insight into complicated decision models. The approach allows for efficient customization of a generic model that describes the general patient population of interest to a patient-specific model. Monte Carlo simulation is used to calculate the expected value of information and sensitivity for each model variable, thus providing a metric for deciding what to emphasize in the graphics and text summary. The computer-generated explanation includes variables that are sensitive with respect to the decision or that deviate significantly from what is typically observed. These techniques serve to keep the assessment and explanation of the patient's decision model concise, allowing the user to focus on the most important aspects for that patient.
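The idea of treating a probability as a random variable and using Monte Carlo draws to gauge how sensitive the decision is can be shown in a toy two-option model. The utilities and the Beta prior below are invented for illustration, not taken from any clinical model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-option decision (treat vs. wait) with made-up utilities.
# The success probability is a random variable (Beta), not a point estimate.
p = rng.beta(8, 4, size=10_000)        # uncertain P(success | treat)
u_treat = 0.9 * p + 0.3 * (1 - p)      # expected utility of treating, per draw
u_wait = 0.65                          # utility of waiting (fixed here)

baseline = "treat" if u_treat.mean() > u_wait else "wait"
# Sensitivity: how often does sampling the uncertain parameter flip the choice?
flip_rate = float(np.mean((u_treat > u_wait) != (baseline == "treat")))
print(baseline, flip_rate)
```

A high flip rate flags the variable as decision-sensitive, so it would be emphasized in the generated explanation.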
Improved dust representation in the Community Atmosphere Model
Albani, S.; Mahowald, N. M.; Perry, A. T.; Scanza, R. A.; Zender, C. S.; Heavens, N. G.; Maggi, V.; Kok, J. F.; Otto-Bliesner, B. L.
2014-09-01
Aerosol-climate interactions constitute one of the major sources of uncertainty in assessing changes in aerosol forcing in the Anthropocene as well as understanding glacial-interglacial cycles. Here we focus on improving the representation of mineral dust in the Community Atmosphere Model and assessing the impacts of the improvements in terms of direct effects on the radiative balance of the atmosphere. We simulated the dust cycle using different parameterization sets for dust emission, size distribution, and optical properties. Comparing the results of these simulations with observations of concentration, deposition, and aerosol optical depth allows us to refine the representation of the dust cycle and its climate impacts. We propose a tuning method for dust parameterizations to allow the dust module to work across the wide variety of parameter settings which can be used within the Community Atmosphere Model. Our results include a better representation of the dust cycle, most notably for the improved size distribution. The estimated net top-of-atmosphere direct dust radiative forcing is -0.23 ± 0.14 W/m² for present day and -0.32 ± 0.20 W/m² at the Last Glacial Maximum. From our study and sensitivity tests, we also derive some general relevant findings, supporting the concept that the magnitude of the modeled dust cycle is sensitive to the observational data sets and size distribution chosen to constrain the model as well as the meteorological forcing data, even within the same modeling framework, and that the direct radiative forcing of dust is strongly sensitive to the optical properties and size distribution used.
Time representation in reinforcement learning models of the basal ganglia
Directory of Open Access Journals (Sweden)
Samuel Joseph Gershman
2014-01-01
Reinforcement learning models have been influential in understanding many aspects of basal ganglia function, from reward prediction to action selection. Time plays an important role in these models, but there is still no theoretical consensus about what kind of time representation is used by the basal ganglia. We review several theoretical accounts and their supporting evidence. We then discuss the relationship between reinforcement learning models and the timing mechanisms that have been attributed to the basal ganglia. We hypothesize that a single computational system may underlie both reinforcement learning and interval timing, the perception of duration in the range of seconds to hours. This hypothesis, which extends earlier models by incorporating a time-sensitive action selection mechanism, may have important implications for understanding disorders like Parkinson's disease in which both decision making and timing are impaired.
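One of the candidate time codes discussed in this literature, the tapped delay line ("complete serial compound"), can be sketched with a toy TD(0) model; the trial length, reward time, and learning constants below are arbitrary:

```python
import numpy as np

# Tapped-delay-line time code: every timestep after cue onset gets its
# own one-hot state, so the value function has one weight per timestep.
T = 10                  # timesteps per trial
reward_time = 7         # reward delivered 7 steps after the cue
gamma, alpha = 0.98, 0.1
w = np.zeros(T)         # one value weight per time state

for _ in range(500):    # trials
    for t in range(T):
        r = 1.0 if t == reward_time else 0.0
        v_next = w[t + 1] if t + 1 < T else 0.0
        delta = r + gamma * v_next - w[t]   # TD error, the dopamine-like signal
        w[t] += alpha * delta

# The learned value ramps up toward the expected reward and drops after it.
print(np.round(w, 3))
```

After learning, the value approximates the discounted reward gamma**(reward_time - t) before the reward and zero after it, which is why the choice of time representation shapes the predicted dopamine dynamics.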
Efficient explicit formulation for practical fuzzy structural analysis
Indian Academy of Sciences (India)
This paper presents a practical approach based on High Dimensional Model Representation (HDMR) for analysing the response of structures with fuzzy parameters. The proposed methodology involves integrated finite element modelling, HDMR based response surface generation, and explicit fuzzy analysis procedures.
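The HDMR response-surface step can be sketched as first-order cut-HDMR: sample the model along one-dimensional lines through a reference point and interpolate the resulting component functions. The toy response function below stands in for a finite element solve:

```python
import numpy as np

def f(x):
    """Toy structural response standing in for a finite element solve."""
    return x[0] ** 2 + 2.0 * x[1] + 0.5 * x[0] * x[1]

def cut_hdmr_first_order(func, x0, grids):
    """First-order cut-HDMR: func(x) ≈ f0 + sum_i f_i(x_i), with each
    component function sampled along a line through the cut point x0.
    Function calls grow linearly with dimension, not exponentially."""
    f0 = func(x0)
    tables = []
    for i, grid in enumerate(grids):
        vals = []
        for xi in grid:
            x = np.array(x0, dtype=float)
            x[i] = xi
            vals.append(func(x) - f0)       # f_i(x_i) = f(x_i, x0 elsewhere) - f0
        tables.append((np.asarray(grid, float), np.asarray(vals)))

    def surrogate(x):
        return f0 + sum(np.interp(x[i], g, v) for i, (g, v) in enumerate(tables))

    return surrogate

x0 = np.zeros(2)
s = cut_hdmr_first_order(f, x0, [np.linspace(-1, 1, 21)] * 2)
# Additive terms are captured exactly; the 0.5*x0*x1 cross term is the
# first-order truncation error.
print(s(np.array([0.5, 0.5])), f(np.array([0.5, 0.5])))
```

In the fuzzy setting, the cheap surrogate is then evaluated at each α-cut interval optimization instead of rerunning the finite element model.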
Representation of Northern Hemisphere winter storm tracks in climate models
Energy Technology Data Exchange (ETDEWEB)
Greeves, C.Z.; Pope, V.D.; Stratton, R.A.; Martin, G.M. [Met Office Hadley Centre for Climate Prediction and Research, Exeter (United Kingdom)
2007-06-15
Northern Hemisphere winter storm tracks are a key element of the winter weather and climate at mid-latitudes. Before projections of climate change are made for these regions, it is necessary to be sure that climate models are able to reproduce the main features of observed storm tracks. The simulated storm tracks are assessed for a variety of Hadley Centre models and are shown to be well modelled on the whole. The atmosphere-only model with the semi-Lagrangian dynamical core produces generally more realistic storm tracks than the model with the Eulerian dynamical core, provided the horizontal resolution is high enough. The two models respond in different ways to changes in horizontal resolution: the model with the semi-Lagrangian dynamical core has much reduced frequency and strength of cyclonic features at lower resolution due to reduced transient eddy kinetic energy. The model with Eulerian dynamical core displays much smaller changes in frequency and strength of features with changes in horizontal resolution, but the location of the storm tracks as well as secondary development are sensitive to resolution. Coupling the atmosphere-only model (with semi-Lagrangian dynamical core) to an ocean model seems to affect the storm tracks largely via errors in the tropical representation. For instance a cold SST bias in the Pacific and a lack of ENSO variability lead to large changes in the Pacific storm track. Extratropical SST biases appear to have a more localised effect on the storm tracks. (orig.)
XML for data representation and model specification in neuroscience.
Crook, Sharon M; Howell, Fred W
2007-01-01
EXtensible Markup Language (XML) technology provides an ideal representation for the complex structure of models and neuroscience data, as it is an open file format and provides a language-independent method for storing arbitrarily complex structured information. XML is composed of text and tags that explicitly describe the structure and semantics of the content of the document. In this chapter, we describe some of the common uses of XML in neuroscience, with case studies in representing neuroscience data and defining model descriptions based on examples from NeuroML. The specific methods that we discuss include (1) reading and writing XML from applications, (2) exporting XML from databases, (3) using XML standards to represent neuronal morphology data, (4) using XML to represent experimental metadata, and (5) creating new XML specifications for models.
Representations of the Virasoro algebra from lattice models
International Nuclear Information System (INIS)
Koo, W.M.; Saleur, H.
1994-01-01
We investigate in detail how the Virasoro algebra appears in the scaling limit of the simplest lattice models of XXZ or RSOS type. Our approach is straightforward but to our knowledge had never been tried so far. We simply formulate a conjecture for the lattice stress-energy tensor motivated by the exact derivation of lattice global Ward identities. We then check that the proper algebraic relations are obeyed in the scaling limit. The latter is under reasonable control thanks to the Bethe-ansatz solution. The results, which are mostly numerical for technical reasons, are remarkably precise. They are also corroborated by exact pieces of information from various sources, in particular Temperley-Lieb algebra representation theory. Most features of the Virasoro algebra (like central term, null vectors, metric properties, etc.) can thus be observed using the lattice models. This seems of general interest for lattice field theory, and also more specifically for finding relations between conformal invariance and lattice integrability, since a basis for the irreducible representations of the Virasoro algebra should now follow (at least in principle) from Bethe-ansatz computations. ((orig.))
Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification
Directory of Open Access Journals (Sweden)
Yidong Tang
2016-01-01
The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scenes and assumes that the number of classes is given. Considering the small target with complex background, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways, which are, respectively, by background dictionary and by union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. It enhances the discrimination and hence improves the performance.
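The binary-hypothesis idea, comparing how well a background-only dictionary versus a background-plus-target union dictionary reconstructs a pixel, can be sketched with plain least squares standing in for the paper's kernel OMP coding. Dictionaries, the pixel, and the 0.1 decision margin are all synthetic assumptions:

```python
import numpy as np

def residual(D, y):
    """Norm of y's residual after a least-squares fit on dictionary D."""
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.linalg.norm(y - D @ coef)

rng = np.random.default_rng(7)
background = rng.standard_normal((50, 5))    # atoms from the dual window
targets = rng.standard_normal((50, 3))       # known target signatures
union = np.hstack([background, targets])     # background + target dictionary

# A pixel dominated by target signatures plus a little noise (synthetic).
y = targets @ np.array([0.7, 0.2, 0.1]) + 0.01 * rng.standard_normal(50)

r_bg = residual(background, y)               # H0: background explains y
r_union = residual(union, y)                 # H1: union explains y
label = "target" if r_bg - r_union > 0.1 * r_bg else "background"
print(label)
```

A background pixel would be reconstructed almost equally well by both dictionaries, so the residual gap, not the raw residual, carries the discriminative information.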
Evaluation, Use, and Refinement of Knowledge Representations through Acquisition Modeling
Pearl, Lisa
2017-01-01
Generative approaches to language have long recognized the natural link between theories of knowledge representation and theories of knowledge acquisition. The basic idea is that the knowledge representations provided by Universal Grammar enable children to acquire language as reliably as they do because these representations highlight the…
Metric versus observable operator representation, higher spin models
Fring, Andreas; Frith, Thomas
2018-02-01
We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into the procedure on how to employ the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately we argue here that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equation for a Hermitian and non-Hermitian spin 1/2, 1 and 3/2 model with time-independent and time-dependent metric, respectively. In all models the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduced to simple linear differential equations and an equation that can be converted into the non-linear Ermakov-Pinney equation.
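For reference, the two relations the abstract appeals to can be written compactly. The notation below (Dyson map η, metric ρ = η†η) is standard in this literature but is assumed here rather than quoted from the paper:

```latex
% time-dependent Dyson relation linking the Hermitian h and non-Hermitian H
h(t) = \eta(t)\, H(t)\, \eta^{-1}(t) + \mathrm{i}\hbar\, \partial_t \eta(t)\, \eta^{-1}(t)

% time-dependent quasi-Hermiticity relation for the metric \rho = \eta^{\dagger}\eta
H^{\dagger}(t)\, \rho(t) - \rho(t)\, H(t) = \mathrm{i}\hbar\, \partial_t \rho(t)

% Ermakov-Pinney equation arising after the decoupling described above
\ddot{\sigma}(t) + \omega^{2}(t)\, \sigma(t) = \frac{\lambda}{\sigma^{3}(t)}
```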
The Calogero model - anyonic representation, fermionic extension and supersymmetry
Energy Technology Data Exchange (ETDEWEB)
Brink, L [Inst. of Theoretical Physics, Goeteborg (Sweden); Hansson, T H [Inst. of Theoretical Physics, Univ. of Stockholm (Sweden); Konstein, S [Dept. of Theoretical Physics, P.N. Lebedev Inst., Moscow (Russian Federation); Vasiliev, M A [Dept. of Theoretical Physics, P.N. Lebedev Inst., Moscow (Russian Federation)
1993-07-26
We discuss several applications and extensions of our previous operator solution of the N-body quantum-mechanical Calogero problem, i.e. N particles in one dimension subject to a two-body interaction of the form (1/2) Σ_{i,j} [(x_i - x_j)² + g/(x_i - x_j)²]. Using a complex representation of the deformed Heisenberg algebra underlying the Calogero model, we explicitly establish the equivalence between this system and anyons in the lowest Landau level. A construction based on supersymmetry is used to extend our operator method to include fermions, and we obtain an explicit solution of the supersymmetric Calogero model constructed by Freedman and Mende. We also show how the dynamical OSp(2; 2) supersymmetry is realized by bilinears of modified creation and annihilation operators, and how to construct a supersymmetric extension of the deformed Heisenberg algebra. (orig.)
Knowledge representation to support reasoning based on multiple models
Gillam, April; Seidel, Jorge P.; Parker, Alice C.
1990-01-01
Model Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development. Thus the models are not static. They evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/data base growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. Second, information is made explicit: this includes the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. Third, the data and knowledge are separated from the inferencing and equation-solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.
Representation of an open repository in groundwater flow models
International Nuclear Information System (INIS)
Painter, Scott; Sun, Alexander
2005-08-01
The effect of repository tunnels on groundwater flow has been identified as a potential issue for the nuclear waste repository being considered by SKB for a fractured granite formation in Sweden. In particular, the following pre-closure and post-closure processes have been identified as being important: inflows into open tunnels as functions of estimated grouting efficiencies, drawdown of the water table in the vicinity of the repository, upconing of saline water, 'turnover' of surface water in the upper bedrock, and resaturation of backfilled tunnels following repository closure. The representation of repository tunnels within groundwater models is addressed in this report. The primary focus is on far-field flow that is modeled with a continuum porous medium approximation. Of particular interest are the consequences of the tunnel representation on the transient response of the groundwater system to repository operations and repository closure, as well as modeling issues such as how the water-table free surface and the coupling to near-surface hydrogeology should be handled. The overall objectives are to understand the consequences of current representations and to identify appropriate approximations for representing open tunnels in future groundwater modeling studies. The following conclusions can be drawn from the results of the simulations: 1. Two-phase flow may be induced in the vicinity of repository tunnels during repository pre-closure operations, but the formation of a two-phase flow region will not significantly affect far-field flow or inflows into tunnels. 2. The water table will be drawn down to the repository horizon and tunnel inflows will reach a steady-state value within about 5 years. 3. Steady-state inflows at the repository edge are estimated to be about 250 m³/year per meter of tunnel. Inflows will be greater during the transient de-watering period and less for tunnel locations closer to the repository center. 4. Significant amounts of water
A Neuronal Network Model for Pitch Selectivity and Representation.
Huang, Chengcheng; Rinzel, John
2016-01-01
Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding its cellular mechanism is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing-fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder-phase, alternating-phase and random-phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated rippled noise, and account for their multiple pitch perceptions.
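The final readout step, estimating pitch from first-order interspike intervals, can be sketched in a few lines. The spike train below is synthetic (jittered periodic firing standing in for the slope-detector output); it is not the model itself, and all parameters are invented.

```python
# Toy pitch readout from first-order interspike intervals (ISIs) of a
# synthetic, jittered periodic spike train.
import random
import statistics

random.seed(1)
PERIOD = 1.0 / 200.0                                  # a 200 Hz pitch
spikes, t = [], 0.0
for _ in range(100):
    t += PERIOD + random.gauss(0.0, 0.02 * PERIOD)    # 2% timing jitter
    spikes.append(t)

isis = [b - a for a, b in zip(spikes, spikes[1:])]    # first-order ISIs
pitch_hz = 1.0 / statistics.median(isis)              # pitch estimate
```

The median ISI is robust to the jitter, so the estimate lands close to the underlying 200 Hz periodicity.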
Chaos game representation (CGR)-walk model for DNA sequences
International Nuclear Information System (INIS)
Jie, Gao; Zhen-Yuan, Xu
2009-01-01
Chaos game representation (CGR) is an iterative mapping technique that processes sequences of units, such as nucleotides in a DNA sequence or amino acids in a protein, in order to determine the coordinates of their positions in a continuous space. This distribution of positions has two features: it is unique, and the source sequence can be recovered from the coordinates, so that the distance between positions may serve as a measure of similarity between the corresponding sequences. A CGR-walk model is proposed based on CGR coordinates for DNA sequences. The CGR coordinates are converted into a time series, and a long-memory ARFIMA (p, d, q) model, where ARFIMA stands for autoregressive fractionally integrated moving average, is introduced into the DNA sequence analysis. This model is applied to simulating real CGR-walk sequence data of ten genomic sequences. Remarkably long-range correlations are uncovered in the data, and the results from these models are reasonably well fitted by the ARFIMA (p, d, q) model. (cross-disciplinary physics and related areas of science and technology)
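The CGR map itself is simple enough to sketch (the ARFIMA analysis of the paper is not reproduced here). Each nucleotide owns a corner of the unit square, and every new point is the midpoint between the previous point and the corner of the current base, so the final point encodes the sequence:

```python
# Minimal CGR sketch: forward mapping plus the inverse read-off that
# demonstrates the uniqueness/recoverability property.

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_coords(sequence, start=(0.5, 0.5)):
    """Return the CGR point after each base of a DNA sequence."""
    x, y = start
    points = []
    for base in sequence:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        points.append((x, y))
    return points

def recover(point, n):
    """Invert the map: read the last n bases back off a CGR point.
    The point always lies in the quadrant of its most recent base."""
    x, y = point
    bases = []
    for _ in range(n):
        base = ("C" if y >= 0.5 else "A") if x < 0.5 else ("G" if y >= 0.5 else "T")
        cx, cy = CORNERS[base]
        bases.append(base)
        x, y = 2.0 * x - cx, 2.0 * y - cy
    return "".join(reversed(bases))

pts = cgr_coords("ACGT")
```

Here `recover(pts[-1], 4)` returns `"ACGT"`, illustrating that the source sequence can be read back from the coordinates, as the abstract states.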
The Linked Dual Representation model of vocal perception and production
Directory of Open Access Journals (Sweden)
Sean Hutchins
2013-11-01
The voice is one of the most important media for communication, yet there is a wide range of abilities in both the perception and production of the voice. In this article, we review this range of abilities, focusing on pitch accuracy as a particularly informative case, and look at the factors underlying these abilities. Several classes of models have been posited describing the relationship between vocal perception and production, and we review the evidence for and against each class of model. We look at how the voice differs from other musical instruments and review evidence about both the association and the dissociation between vocal perception and production abilities. Finally, we introduce the Linked Dual Representation model, a new approach which can account for the broad patterns in prior findings, including trends in the data which might seem to be countervailing. We discuss how this model interacts with higher-order cognition and examine its predictions about several aspects of vocal perception and production.
2016-01-05
Computer-aided transformation of PDE models: languages, representations, and a calculus of operations
A domain-specific embedded language called ibvp was developed to model initial boundary value problems.
2009-09-01
Action Group 19: Representation of Human Behavior. The surviving fragments reference Lanchester, F. W. (1916), Aircraft in Warfare: The Dawn of the Fourth Arm; kinetic operations and non-kinetic warfare; an industry perspective presented by Mr. Mike Greenley, CAE Inc.; and the shift from tactical-conventional warfare toward world-wide "irregular warfare" and "small wars" that drives present and future needs.
Grms or graphical representation of model spaces. Vol. I Basics
International Nuclear Information System (INIS)
Duch, W.
1986-01-01
This book presents a novel approach to the many-body problem in quantum chemistry, nuclear shell theory and solid-state theory. Many-particle model spaces are visualized using graphs, each path of a graph labeling a single basis function or a subspace of functions. Spaces of very high dimension are represented by small graphs. Model spaces have structure that is reflected in the architecture of the corresponding graphs, which in turn is reflected in the structure of the matrices corresponding to operators acting in these spaces. Insight into this structure leads to the formulation of very efficient computer algorithms. Calculation of matrix elements is reduced to comparison of paths in a graph, without ever looking at the functions themselves. Using only very rudimentary mathematical tools, graphical rules of matrix-element calculation in abelian cases are derived; in particular, the segmentation rules obtained in the unitary group approach are rederived. The graphs are solutions of Diophantine equations of the type appearing in different branches of applied mathematics. Graphical representation of model spaces should find as many applications as have been found for diagrammatic methods in perturbation theory
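The "paths label basis functions" idea can be illustrated with a toy example (this is only in the spirit of the book, not the Grms formalism itself): the space of k-electron occupations of n orbitals is labeled by monotone paths through a small two-parameter graph, and its dimension follows from dynamic programming on that graph, without ever enumerating the basis functions.

```python
# Count paths through the (orbital index, occupation number) graph.
# Each orbital contributes an "occupied" or "empty" arc, so the number of
# paths reaching occupation k after n orbitals is the binomial C(n, k).

def model_space_dimension(n, k):
    """Dimension of the k-in-n occupation space via path counting."""
    counts = [1] + [0] * k          # counts[j]: paths reaching node (i, j)
    for _ in range(n):              # advance one orbital at a time
        for j in range(k, 0, -1):   # update in place, highest j first
            counts[j] += counts[j - 1]
    return counts[k]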
Multiscale geometric modeling of macromolecules I: Cartesian representation
Xia, Kelin; Feng, Xin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei
2014-01-01
This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace-Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the
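The curvature descriptors named above can be computed directly from the two principal curvatures. The sketch below follows Koenderink-style definitions of shape index and curvedness; sign conventions for the shape index vary in the literature, so the signs here are an assumption rather than the paper's exact convention.

```python
# Surface descriptors from principal curvatures k1 >= k2.
import math

def shape_descriptors(k1, k2):
    """Return (Gaussian K, mean H, shape index s, curvedness c)."""
    K = k1 * k2                                   # Gaussian curvature
    H = 0.5 * (k1 + k2)                           # mean curvature
    c = math.sqrt(0.5 * (k1 * k1 + k2 * k2))      # curvedness
    if k1 == k2:                                  # umbilic point: limit value
        s = -math.copysign(1.0, k1) if k1 != 0.0 else 0.0
    else:
        s = (2.0 / math.pi) * math.atan((k2 + k1) / (k2 - k1))
    return K, H, s, c
```

A symmetric saddle (k1 = 1, k2 = -1) gives shape index 0, a cylinder (k1 = 1, k2 = 0) gives magnitude 1/2, and umbilic points give magnitude 1, so the index separates the qualitative surface types while curvedness carries the scale.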
Integer Representations towards Efficient Counting in the Bit Probe Model
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Greve, Mark; Pandey, Vineet
2011-01-01
We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both in the worst case and in the average case. A counter is space-optimal if it represents any number in the range [0,...,2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst case. To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly less than n bits. For redundant counters, where we only need to represent numbers in the range [0,...,L] for some integer L, we define the efficiency...
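The read/write trade-off in the bit probe model can be illustrated with the classic binary-reflected Gray code (this is not the paper's counter, which reads at most n − 1 bits and writes at most 3): incrementing a Gray-coded counter always writes exactly one bit, while decoding the stored value still requires reading all n bits.

```python
# Binary-reflected Gray code: one-bit writes per increment.

def to_gray(v):
    return v ^ (v >> 1)

def from_gray(g):
    v = 0
    while g:                  # XOR-prefix decode: reads every bit position
        v ^= g
        g >>= 1
    return v

def bits_written(a, b, n=8):
    """Number of bit positions (mod 2^n) rewritten by an update a -> b."""
    return bin((a ^ b) & ((1 << n) - 1)).count("1")
```

Successive Gray codewords differ in exactly one bit, whereas an ordinary binary counter can rewrite all n bits in the worst case (e.g. 0111...1 to 1000...0).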
The representation of knowledge within model-based control systems
International Nuclear Information System (INIS)
Weygand, D.P.; Koul, R.
1987-01-01
Representation of knowledge in artificially intelligent systems is discussed. Types of knowledge that might need to be represented in AI systems are listed, and include knowledge about objects, knowledge about events, knowledge about how to do things, and knowledge about what human beings know (meta-knowledge). The use of knowledge in AI systems is discussed in terms of acquiring and retrieving knowledge and reasoning about known facts. Different kinds of reasoning or representation are then described, with some examples given. These include formal reasoning or logical representation, which is related to mathematical logic; production systems, which are based on the idea of condition-action pairs (productions); procedural reasoning, which uses pre-formed plans to solve problems; frames, which provide a structure for representing knowledge in an organized manner; and direct analogical representations, which represent knowledge in a manner that permits some observations to be made without deduction.
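The production-system paradigm listed above can be sketched as a tiny forward-chaining engine. The rules and facts below are invented for illustration and are not from the report.

```python
# Toy production system: condition-action pairs fired by forward chaining.

RULES = [
    ({"reactor_hot", "coolant_low"}, "raise_alarm"),
    ({"raise_alarm"}, "notify_operator"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, action in rules:
            if conditions <= facts and action not in facts:
                facts.add(action)
                changed = True
    return facts
```

Chained firing is what distinguishes the paradigm: asserting the two sensor facts derives the alarm, which in turn derives the notification.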
Deep supervised, but not unsupervised, models may explain IT cortical representation.
Directory of Open Access Journals (Sweden)
Seyed-Mahdi Khaligh-Razavi
2014-11-01
Inferior temporal (IT) cortex in human and nonhuman primates serves visual object recognition. Computational object-vision models, although continually improving, do not yet reach human performance. It is unclear to what extent the internal representations of computational models can explain the IT representation. Here we investigate a wide range of computational model representations (37 in total), testing their categorization performance and their ability to account for the IT representational geometry. The models include well-known neuroscientific object-recognition models (e.g., HMAX, VisNet) along with several models from computer vision (e.g., SIFT, GIST, self-similarity features) and a deep convolutional neural network. We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better performing models also more strongly resembled IT in terms of their within-category representational dissimilarities. Representational geometries were significantly correlated between IT and many of the models. However, the categorical clustering observed in IT was largely unexplained by the unsupervised models. The deep convolutional network, which was trained by supervision with over a million category-labeled images, reached the highest categorization performance and also best explained IT, although it did not fully explain the IT data. Combining the features of this model with appropriate weights and adding linear combinations that maximize the margin between animate and inanimate objects and between faces and other objects yielded a representation that fully explained our IT data. Overall, our results suggest that explaining
The representation of knowledge within model-based control systems
International Nuclear Information System (INIS)
Weygand, D.P.; Koul, R.
1987-01-01
The ability to represent knowledge is often considered essential to build systems with reasoning capabilities. In computer science, a good solution often depends on a good representation. The first step in development of most computer applications is selection of a representation for the input, output, and intermediate results that the program will operate upon. For applications in artificial intelligence, this initial choice of representation is especially important. This is because the possible representational paradigms are diverse and the forcing criteria for the choice are usually not clear in the beginning. Yet, the consequences of an inadequate choice can be devastating in the later state of a project if it is discovered that critical information cannot be encoded within the chosen representational paradigm. Problems arise when designing representational systems to support any kind of Knowledge-Base System, that is a computer system that uses knowledge to perform some task. The general case of knowledge-based systems can be thought of as reasoning agents applying knowledge to achieve goals. Artificial Intelligence (AI) research involves building computer systems to perform tasks of perception and reasoning, as well as storage and retrieval of data. The problem of automatically perceiving large patterns in data is a perceptual task that begins to be important for many expert systems applications. Most of AI research assumes that what needs to be represented is known a priori; an AI researcher's job is just figuring out how to encode the information in the system's data structure and procedures. 10 refs
Directory of Open Access Journals (Sweden)
Daryl McPadden
2017-11-01
Representation use is a critical skill for learning, problem solving, and communicating in science, especially in physics, where multiple representations often scaffold the understanding of a phenomenon. University Modeling Instruction, which is an active-learning, research-based introductory physics curriculum centered on students' use of scientific models, has made representation use a primary learning goal, with explicit class time devoted to introducing and coordinating representations as part of the model building process. However, because of the semester break, the second-semester course, Modeling Instruction-Electricity and Magnetism (MI-EM), contains a mixture of students who are returning from the Modeling Instruction-mechanics course (to whom we refer as “returning students”) and students who are new to Modeling Instruction with the MI-EM course (to whom we refer as “new students”). In this study, we analyze the impact of MI-EM on students' representation choices across the introductory physics content for these different groups of students by examining both which individual representations students choose and their average number of representations on a modified card-sort survey with a variety of mechanics and EM questions. Using Wilcoxon signed-rank tests, Wilcoxon-Mann-Whitney tests, Cliff's delta effect sizes, and box plots, we compare students' representation choices from pre- to postsemester, between new and returning students, and between mechanics and EM content. We find that there is a significant difference between returning and new students' representation choices, which serves as a baseline comparison between Modeling Instruction and traditional lecture-based physics classes. We also find that returning students maintain a high representation use across the MI-EM semester, while new students see significant growth in their representation use regardless of content.
Stull, Andrew T.; Hegarty, Mary
2016-01-01
This study investigated the development of representational competence among organic chemistry students by using 3D (concrete and virtual) models as aids for teaching students to translate between multiple 2D diagrams. In 2 experiments, students translated between different diagrams of molecules and received verbal feedback in 1 of the following 3…
Improving the spatial representation of basin hydrology and flow processes in the SWAT model
Rathjens, Hendrik
2014-01-01
This dissertation aims at improving the spatial representation of basin hydrology and flow processes in the SWAT model. It provides the methodological basis for spatially differentiated modelling with the SWAT model.
Model validation and calibration based on component functions of model output
International Nuclear Information System (INIS)
Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei
2015-01-01
The target of this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representation (HDMR) of independent input variables, conditional expectations are component functions of model output, and these conditional expectations reflect partial information of the model output. Therefore, the model validation of conditional expectations quantifies the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in cases of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration are applied at a single site and at multiple sites. • Validation and calibration show superiority over existing methods
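For independent inputs, the first-order HDMR component of x_i is the conditional expectation g_i(a) = E[Y | x_i = a] − f0, with f0 = E[Y]. The Monte Carlo sketch below estimates these quantities for an invented toy model with U(0,1) inputs; it does not reproduce the paper's area-metric validation or calibration procedure.

```python
# Monte Carlo estimate of the zeroth-order term f0 and the first-order
# HDMR component of x1 for a toy model with independent U(0,1) inputs.
import random

random.seed(0)

def model(x1, x2, x3):
    # toy computational model, an assumption for this sketch
    return x1 + 2.0 * x2 ** 2 + x1 * x3

N = 200_000
f0 = sum(model(random.random(), random.random(), random.random())
         for _ in range(N)) / N                  # zeroth-order term E[Y]

def g1(a, n=200_000):
    """First-order component of x1: E[Y | x1 = a] - f0, by fixing x1 = a."""
    total = sum(model(a, random.random(), random.random()) for _ in range(n))
    return total / n - f0
```

Analytically, f0 = 17/12 and g1(a) = 1.5a − 0.75 for this toy model, which the estimates approach as the sample size grows.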
An, Gary
2009-01-01
The sheer volume of biomedical research threatens to overwhelm the capacity of individuals to effectively process this information. Adding to this challenge is the multiscale nature of both biological systems and the research community as a whole. Given this volume and rate of generation of biomedical information, the research community must develop methods for robust representation of knowledge in order for individuals, and the community as a whole, to "know what they know." Despite increasing emphasis on "data-driven" research, the fact remains that researchers guide their research using intuitively constructed conceptual models derived from knowledge extracted from publications, knowledge that is generally qualitatively expressed using natural language. Agent-based modeling (ABM) is a computational modeling method that is suited to translating the knowledge expressed in biomedical texts into dynamic representations of the conceptual models generated by researchers. The hierarchical object-class orientation of ABM maps well to biomedical ontological structures, facilitating the translation of ontologies into instantiated models. Furthermore, ABM is suited to producing the nonintuitive behaviors that often "break" conceptual models. Verification in this context is focused at determining the plausibility of a particular conceptual model, and qualitative knowledge representation is often sufficient for this goal. Thus, utilized in this fashion, ABM can provide a powerful adjunct to other computational methods within the research process, as well as providing a metamodeling framework to enhance the evolution of biomedical ontologies.
Kitaev honeycomb model. Majorana fermion representation and disorder
International Nuclear Information System (INIS)
Zschocke, Fabian
2016-01-01
Majorana representation we are able to formulate the problem in a way that can be analyzed using Wilson's numerical renormalization group. The numerics reveal an impurity entropy which can be explained by localized Majorana fermions. Through the representation of the Kitaev model in terms of quasi-particles, an elegant description of a complex, strongly correlated system is possible. The results of this thesis indicate that these Majorana fermions acquire a relevant physical meaning. If one can localize them, for example with the help of magnetic impurities, a direct experimental observation would be feasible.
McPadden, Daryl; Brewe, Eric
2017-01-01
Representation use is a critical skill for learning, problem solving, and communicating in science, especially in physics where multiple representations often scaffold the understanding of a phenomenon. University Modeling Instruction, which is an active-learning, research-based introductory physics curriculum centered on students' use of…
Interactive Shape Modeling using a Skeleton-Mesh Co-Representation
DEFF Research Database (Denmark)
Bærentzen, Jacob Andreas; Abdrashitov, Rinat; Singh, Karan
2014-01-01
We introduce the Polar-Annular Mesh representation (PAM). A PAM is a mesh-skeleton co-representation designed for the modeling of 3D organic, articulated shapes. A PAM represents a manifold mesh as a partition of polar (triangle fans) and annular (rings of quads) regions. The skeletal topology of...... a PAM to a quad-only mesh. We further present a PAM-based multi-touch sculpting application in order to demonstrate its utility as a shape representation for the interactive modeling of organic, articulated figures as well as for editing and posing of pre-existing models....
Improving the Representation of Soluble Iron in Climate Models
Energy Technology Data Exchange (ETDEWEB)
Perez Garcia-Pando, Carlos [Columbia Univ., New York, NY (United States)
2016-03-13
attached to aggregates of other minerals. This is another challenge that has been tackled by the project. The project has produced a major step forward in our understanding of the key processes needed to predict the mineral composition of dust aerosols by connecting theory, modeling and observations. The project has produced novel semi-empirical and theoretical methods to estimate the emitted size distribution and mineral composition of dust aerosols. These methods account for soil aggregates that are potentially emitted from the original undisturbed soil but are destroyed during wet sieving. The methods construct the emitted size distribution of individual minerals building upon brittle fragmentation theory, reconstructions of wet-sieved soil mineral size distributions, and/or characteristic mineral size distributions estimated from observations at times of high concentration. Based on an unprecedented evaluation against a new global compilation of observations produced with project support, we showed that the new methods remedy some key deficiencies of the previous state of the art. This includes the correct representation of Fe-bearing phyllosilicates at silt sizes, where they are abundant according to observations. In addition, the quartz fraction of silt particles is in better agreement with measured values. We also represent an additional class of iron oxide aerosol that is a small impurity embedded within other minerals, allowing it to travel farther than in its pure crystalline state. We assume that these impurities are least frequent in soils rich in iron oxides (as a result of the assumed effect of weathering, which creates pure iron oxide crystals). The mineral composition of dust is also important to other interactions with climate - through shortwave absorption and radiative forcing, nucleation of cloud droplets and ice crystals, and the heterogeneous formation of sulfates and nitrates - and to its impacts upon human health. Despite the
International Nuclear Information System (INIS)
Khrennikov, Andrei
2003-01-01
The contextual approach to the Kolmogorov probability model makes it possible to represent this conventional model as a quantum structure, i.e., by using complex amplitudes of probabilities (or, in the abstract approach, in a Hilbert space). Classical (Kolmogorovian) random variables are represented by, in general, noncommutative operators in the Hilbert space. The existence of such a contextual representation of the Kolmogorovian model looks very surprising in view of the orthodox quantum tradition. However, our model can peacefully coexist with various 'no-go' theorems (e.g., those of von Neumann, Kochen and Specker, and Bell).
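The core of such a contextual representation can be illustrated numerically. The sketch below is a hedged simplification, not Khrennikov's exact construction: a contextual probability p that deviates from the classical total-probability formula p = p1·q1 + p2·q2 is encoded by a complex amplitude whose squared modulus reproduces p, with the deviation absorbed into an interference phase θ.

```python
import numpy as np

def amplitude(p1q1, p2q2, p):
    """Return a complex amplitude psi with |psi|**2 == p.

    p1q1, p2q2: the two terms of the classical total-probability formula;
    p: the observed contextual probability. The deviation from the classical
    formula fixes the phase theta via the interference term 2*sqrt(p1q1*p2q2)*cos(theta).
    """
    interference = p - (p1q1 + p2q2)  # deviation from the classical formula
    cos_theta = interference / (2 * np.sqrt(p1q1 * p2q2))
    assert abs(cos_theta) <= 1, "these probabilities admit no such representation"
    theta = np.arccos(cos_theta)
    return np.sqrt(p1q1) + np.exp(1j * theta) * np.sqrt(p2q2)

# Example: the classical formula would give 0.5; the observed contextual p is 0.6.
psi = amplitude(0.25, 0.25, 0.6)
print(abs(psi) ** 2)  # ~0.6: Born's rule recovers the contextual probability
```

The representation exists only when the trigonometric constraint |cos θ| ≤ 1 holds, which is one reason not every data set admits such an embedding.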
Directory of Open Access Journals (Sweden)
Tomas eVeloz
2015-11-01
Full Text Available Quantum models of concept combinations have been successful in representing various experimental situations that cannot be accommodated by traditional models based on classical probability or fuzzy set theory. In many cases, the focus has been on producing a representation that fits experimental results in order to validate quantum models. However, these representations are not always consistent with cognitive modeling principles. Moreover, some important issues related to the representation of concepts, such as the dimensionality of the realization space, the uniqueness of solutions, and the compatibility of measurements, have been overlooked. In this paper, we provide a dimensional analysis of the realization space for the two-sector Fock space model for conjunction of concepts, focusing on the first and second sectors separately. We then introduce various representations of concepts that arise from the use of unitary operators in the realization space. In these concrete representations, a pair of concepts and their combination are modeled by a single conceptual state and by a collection of exemplar-dependent operators; therefore, they are consistent with cognitive modeling principles. This framework not only provides a uniform approach to model an entire data set but, because all measurement operators are expressed in the same basis, also allows us to address the question of compatibility of measurements. In particular, we present evidence that it may be possible to predict non-commutative effects from partial measurements of conceptual combinations.
Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran
2017-08-01
Physics is a subject closely related to students' daily experience. Before studying it formally in class, students therefore already have a visualization and prior knowledge of natural phenomena, and can broaden this knowledge themselves. The learning process in class should aim to detect, process, construct, and use students' mental models, so that these models agree with and build toward the correct concepts. A previous study held in MAN 1 Muna reports that teachers did not pay attention to students' mental models during the learning process; as a consequence, instruction had not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving based learning model with a multiple representations approach. The study uses a pre-experimental, one-group pretest-posttest design, conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection used a problem-solving test on the kinetic theory of gases and interviews to assess students' MMA. The result classifies students' MMA into three categories: High Mental Modelling Ability (H-MMA), Medium Mental Modelling Ability (M-MMA), and Low Mental Modelling Ability (L-MMA, 0 ≤ x ≤ 3 score). The results show that a problem-solving based learning model with a multiple representations approach can be an alternative approach for improving students' MMA.
Energy Technology Data Exchange (ETDEWEB)
Bryan, Frank [Univ. of Connecticut, Storrs, CT (United States); Dennis, John [Univ. of Connecticut, Storrs, CT (United States); MacCready, Parker [Univ. of Connecticut, Storrs, CT (United States); Whitney, Michael M. [Univ. of Connecticut, Storrs, CT (United States)
2016-09-30
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation.
Villeneuve, Jérôme; Cadoz, Claude; Castagné, Nicolas
2015-01-01
The motivation of this paper is to highlight the importance of visual representations for artists when modeling and simulating mass-interaction physical networks in the context of sound synthesis and musical composition. GENESIS is a musician-oriented software environment for sound synthesis and musical composition. However, despite this orientation, a substantial amount of effort has been put into building a rich variety of tools based on static or dynamic visual representations of models an...
Cheng, Hong
2015-01-01
This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse represen
Energy Technology Data Exchange (ETDEWEB)
Bryan, Frank [Univ. of Washington, Seattle, WA (United States); Dennis, John [Univ. of Washington, Seattle, WA (United States); MacCready, Parker [Univ. of Washington, Seattle, WA (United States); Whitney, Michael [Univ. of Washington, Seattle, WA (United States)
2016-10-20
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation.
Modeling urban landscape: New paradigms and challenges in territorial representation
Directory of Open Access Journals (Sweden)
Sheyla Aguilar de Santana
2013-05-01
Full Text Available This paper aims to give a brief background on the production of urban space considering the social functions of the city, the needs of contemporary urban reforms and the need for tools that assist in decision making. This state of the art about the production space justifies the current studies on the development of geoprocessing tools, techniques and methodologies that attempt the needs of creating interpretive portraits of urban landscapes to facilitate dialogue between urban technical, administrators and community. In this sense, it is presented how GIS has been working within the context of urban planning and appointed the new challenges and paradigms of territorial representation.
Decision Support Tool for Deep Energy Efficiency Retrofits in DoD Installations
2014-01-01
representations (HDMR). Chemical Engineering Science, 57, 4445–4460. 2. Sobol', I., 2001. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation, 55, 271–280. 3. Sobol', I. and Kucherenko, S., 2009. Derivative based...
A roadmap for improving the representation of photosynthesis in Earth system models.
Rogers, Alistair; Medlyn, Belinda E; Dukes, Jeffrey S; Bonan, Gordon; von Caemmerer, Susanne; Dietze, Michael C; Kattge, Jens; Leakey, Andrew D B; Mercado, Lina M; Niinemets, Ülo; Prentice, I Colin; Serbin, Shawn P; Sitch, Stephen; Way, Danielle A; Zaehle, Sönke
2017-01-01
Accurate representation of photosynthesis in terrestrial biosphere models (TBMs) is essential for robust projections of global change. However, current representations vary markedly between TBMs, contributing uncertainty to projections of global carbon fluxes. Here we compared the representation of photosynthesis in seven TBMs by examining leaf- and canopy-level responses of photosynthetic CO2 assimilation (A) to key environmental variables: light, temperature, CO2 concentration, vapor pressure deficit and soil water content. We identified research areas where limited process knowledge prevents inclusion of physiological phenomena in current TBMs, and research areas where data are urgently needed for model parameterization or evaluation. We provide a roadmap for new science needed to improve the representation of photosynthesis in the next generation of terrestrial biosphere and Earth system models. No claim to original US Government works. New Phytologist © 2016 New Phytologist Trust.
A knowledge representation meta-model for rule-based modelling of signalling networks
Directory of Open Access Journals (Sweden)
Adrien Basso-Blandin
2016-03-01
Full Text Available The study of cellular signalling pathways and their deregulation in disease states, such as cancer, is a large and extremely complex task. Indeed, these systems involve many parts and processes but are studied piecewise and their literatures and data are consequently fragmented, distributed and sometimes—at least apparently—inconsistent. This makes it extremely difficult to build significant explanatory models with the result that effects in these systems that are brought about by many interacting factors are poorly understood. The rule-based approach to modelling has shown some promise for the representation of the highly combinatorial systems typically found in signalling where many of the proteins are composed of multiple binding domains, capable of simultaneous interactions, and/or peptide motifs controlled by post-translational modifications. However, the rule-based approach requires highly detailed information about the precise conditions for each and every interaction which is rarely available from any one single source. Rather, these conditions must be painstakingly inferred and curated, by hand, from information contained in many papers—each of which contains only part of the story. In this paper, we introduce a graph-based meta-model, attuned to the representation of cellular signalling networks, which aims to ease this massive cognitive burden on the rule-based curation process. This meta-model is a generalization of that used by Kappa and BNGL which allows for the flexible representation of knowledge at various levels of granularity. In particular, it allows us to deal with information which has either too little, or too much, detail with respect to the strict rule-based meta-model. Our approach provides a basis for the gradual aggregation of fragmented biological knowledge extracted from the literature into an instance of the meta-model from which we can define an automated translation into executable Kappa programs.
Veloz, Tomas; Desjardins, Sylvie
2015-01-01
Quantum models of concept combinations have been successful in representing various experimental situations that cannot be accommodated by traditional models based on classical probability or fuzzy set theory. In many cases, the focus has been on producing a representation that fits experimental results in order to validate quantum models. However, these representations are not always consistent with cognitive modeling principles. Moreover, some important issues related to the representation of concepts, such as the dimensionality of the realization space, the uniqueness of solutions, and the compatibility of measurements, have been overlooked. In this paper, we provide a dimensional analysis of the realization space for the two-sector Fock space model for conjunction of concepts, focusing on the first and second sectors separately. We then introduce various representations of concepts that arise from the use of unitary operators in the realization space. In these concrete representations, a pair of concepts and their combination are modeled by a single conceptual state and by a collection of exemplar-dependent operators; therefore, they are consistent with cognitive modeling principles. This framework not only provides a uniform approach to model an entire data set but, because all measurement operators are expressed in the same basis, also allows us to address the question of compatibility of measurements. In particular, we present evidence that it may be possible to predict non-commutative effects from partial measurements of conceptual combinations.
Energy Technology Data Exchange (ETDEWEB)
Belgiorno, Francesco [Politecnico di Milano, Dipartimento di Matematica, Milano (Italy); INdAM-GNFM, Milano (Italy); Cacciatori, Sergio L. [Universita dell' Insubria, Department of Science and High Technology, Como (Italy); INFN sezione di Milano, Milano (Italy); Dalla Piazza, Francesco [Universita ' ' La Sapienza' ' , Dipartimento di Matematica, Roma (Italy); Doronzo, Michele [Universita dell' Insubria, Department of Science and High Technology, Como (Italy)
2016-06-15
We investigate the quantisation, in the Heisenberg representation, of a model which represents a simplification of the Hopfield model for dielectric media, where the electromagnetic field is replaced by a scalar field φ and the role of the polarisation field is played by a further scalar field ψ. The model, which is quadratic in the fields, is still characterised by a non-trivial physical content, as the physical particles correspond to the polaritons of the standard Hopfield model of condensed matter physics. Causality is also taken into account, and the standard interaction representation is discussed. (orig.)
Scoring predictive models using a reduced representation of proteins: model and energy definition.
Fogolari, Federico; Pieri, Lidia; Dovier, Agostino; Bortolussi, Luca; Giugliarelli, Gilberto; Corazza, Alessandra; Esposito, Gennaro; Viglino, Paolo
2007-03-23
Reduced representations of proteins have been playing a key role in the study of protein folding. Many such models are available, with different representation detail. Although the usefulness of many of these models for structural bioinformatics applications has been demonstrated in recent years, there are few intermediate-resolution models endowed with an energy model capable, for instance, of detecting native or native-like structures among decoy sets. The aim of the present work is to provide a discrete empirical potential for a reduced protein model termed here PC2CA, because it employs a PseudoCovalent structure with only 2 Centers of interaction per Amino acid, suitable for protein model quality assessment. All protein structures in the top500H set have been converted into reduced form. The distributions of pseudobonds, pseudoangles, pseudodihedrals and distances between centers of interaction have been converted into potentials of mean force. A suitable reference distribution has been defined for non-bonded interactions which takes into account excluded-volume effects and finite protein size. The correlation between adjacent main-chain pseudodihedrals has been converted into an additional energetic term which is able to account for cooperative effects in secondary structure elements. Local energy surface exploration is performed in order to increase the robustness of the energy function. The model and the energy definition proposed have been tested on all the multiple-decoy sets in the Decoys'R'us database. The energetic model is able to recognize, for almost all sets, native-like structures (RMSD less than 2.0 Å). These results, and those obtained in the blind CASP7 quality assessment experiment, suggest that the model compares well with scoring potentials of finer granularity and could be useful for fast exploration of conformational space. Parameters are available at the url: http://www.dstb.uniud.it/~ffogolari/download/.
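The step of converting observed geometric distributions into potentials of mean force is, in essence, Boltzmann inversion. The sketch below illustrates it under invented data: the histograms and the reference distribution are hypothetical, whereas the paper's actual reference accounts for excluded volume and finite protein size.

```python
import numpy as np

# Boltzmann inversion: turn an observed distribution P_obs(r), relative to a
# reference P_ref(r), into a knowledge-based potential of mean force, bin by bin.
kT = 0.593  # kcal/mol at ~298 K

def pmf(observed_counts, reference_counts):
    """E(r) = -kT * ln( P_obs(r) / P_ref(r) )."""
    p_obs = observed_counts / observed_counts.sum()
    p_ref = reference_counts / reference_counts.sum()
    return -kT * np.log(p_obs / p_ref)

obs = np.array([2.0, 30.0, 120.0, 80.0, 40.0])  # hypothetical distance histogram
ref = np.array([10.0, 40.0, 90.0, 90.0, 42.0])  # hypothetical reference histogram
energies = pmf(obs, ref)
print(energies)  # negative where observed > reference (favourable), positive otherwise
```

Bins where the observed density exceeds the reference receive negative (favourable) energies, which is how native-like contact geometries end up scoring better than decoy geometries.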
A Gloss Composition and Context Clustering Based Distributed Word Sense Representation Model
Directory of Open Access Journals (Sweden)
Tao Chen
2015-08-01
Full Text Available In recent years, there has been increasing interest in learning distributed representations of word senses. Traditional context-clustering based models usually require careful tuning of model parameters and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are then used by a context-clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on half of the metrics in the word similarity task and on 6 out of 13 subtasks in the analogical reasoning task, and give the best overall accuracy in the word sense effect classification task, which shows the effectiveness of the proposed distributed representation learning model.
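The context-clustering step common to this family of models can be sketched as follows. This is a minimal, hedged illustration rather than the paper's algorithm: each occurrence of an ambiguous word is represented by a context vector, assigned to the nearest sense embedding by cosine similarity, and each sense embedding is then refreshed as the mean of its assigned contexts. The gloss-derived initial vectors are random stand-ins here.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_senses(context_vecs, sense_vecs):
    """Return, for each context, the index of the most similar sense (cosine)."""
    c = context_vecs / np.linalg.norm(context_vecs, axis=1, keepdims=True)
    s = sense_vecs / np.linalg.norm(sense_vecs, axis=1, keepdims=True)
    return (c @ s.T).argmax(axis=1)

def update_senses(context_vecs, labels, k):
    """Recompute each sense embedding as the mean of its assigned contexts."""
    return np.stack([context_vecs[labels == i].mean(axis=0) for i in range(k)])

senses = rng.normal(size=(2, 50))      # stand-in for gloss-initialized sense vectors
contexts = rng.normal(size=(100, 50))  # one vector per occurrence of the word
labels = assign_senses(contexts, senses)
senses = update_senses(contexts, labels, 2)
```

Initializing the sense vectors from glosses, as the paper proposes, gives the clustering a semantically meaningful starting point, which is what helps infrequent senses that pure context clustering handles poorly.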
Using Structured Knowledge Representation for Context-Sensitive Probabilistic Modeling
National Research Council Canada - National Science Library
Sakhanenko, Nikita A; Luger, George F
2008-01-01
We propose a context-sensitive probabilistic modeling system (COSMOS) that reasons about a complex, dynamic environment through a series of applications of smaller, knowledge-focused models representing contextually relevant information...
A knowledge representation of local pandemic influenza planning models.
Islam, Runa; Brandeau, Margaret L; Das, Amar K
2007-10-11
Planning for pandemic flu outbreak at the small-government level can be aided through the use of mathematical policy models. Formulating and analyzing policy models, however, can be a time- and expertise-expensive process. We believe that a knowledge-based system for facilitating the instantiation of locale- and problem-specific policy models can reduce some of these costs. In this work, we present the ontology we have developed for pandemic influenza policy models.
Potts Model with Invisible Colors : Random-Cluster Representation and Pirogov–Sinai Analysis
Enter, Aernout C.D. van; Iacobelli, Giulio; Taati, Siamak
We study a recently introduced variant of the ferromagnetic Potts model consisting of a ferromagnetic interaction among q “visible” colors along with the presence of r non-interacting “invisible” colors. We introduce a random-cluster representation for the model, for which we prove the existence of
Delice, Ali; Kertil, Mahmut
2015-01-01
This article reports the results of a study that investigated pre-service mathematics teachers' modelling processes in terms of representational fluency in a modelling activity related to a cassette player. A qualitative approach was used in the data collection process. Students' individual and group written responses to the mathematical modelling…
Influence of input matrix representation on topic modelling performance
CSIR Research Space (South Africa)
De Waal, A
2010-11-01
Full Text Available Topic models explain a collection of documents with a small set of distributions over terms. These distributions over terms define the topics. Topic models ignore the structure of documents and use a bag-of-words approach which relies solely...
Challenges in land model representation of heat transfer in snow and frozen soils
Musselman, K. N.; Clark, M. P.; Nijssen, B.; Arnold, J.
2017-12-01
Accurate model simulations of soil thermal and moisture states are critical for realistic estimates of exchanges of energy, water, and biogeochemical fluxes at the land-atmosphere interface. In cold regions, seasonal snow cover and organic soils form insulating barriers, modifying the heat and moisture exchange that would otherwise occur between mineral soils and the atmosphere. The thermal properties of these media are highly dynamic functions of mass, water and ice content. Land surface models vary in their representation of snow and soil processes, and thus in their treatment of insulation and heat exchange. For some models, recent development efforts have improved the representation of heat transfer in cold regions, such as multi-layer snow treatment and the inclusion of soil freezing and organic soil properties, yet model deficiencies remain prevalent. We evaluate models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) experiment for proficiency in simulating heat transfer from the soil through the snowpack to the atmosphere. Using soil observations from cold region sites and a controlled experiment with the Structure for Unifying Multiple Modeling Alternatives (SUMMA), we explore the impact of snow and soil model decisions and parameter values on heat transfer model skill. Specifically, we use SUMMA to mimic the spread of behaviors exhibited by the models that participated in PLUMBER. The experiment allows us to isolate relationships between model skill and process representation. The results aim to improve understanding of existing model challenges and to identify potential advances for cold region models.
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining-useful-life prediction as a stochastic process, and how this relates to uncertainty representation and management and to the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining-useful-life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics informs critical decisions.
Modified GMDH-NN algorithm and its application for global sensitivity analysis
Song, Shufang; Wang, Lu
2017-11-01
Global sensitivity analysis (GSA) is a very useful tool to evaluate the influence of input variables over their whole distribution range. Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
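For orientation, the quantity every such method targets can be estimated directly. The sketch below is not the GMDH-HDMR method of the paper but the standard Saltelli-style Monte Carlo estimator of first-order Sobol' indices, applied to the Ishigami function, a common GSA benchmark with known analytic indices (S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0).

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami benchmark: nonlinear, non-monotonic, with an x1-x3 interaction."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))  # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))  # total output variance V

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # resample only column i
    fABi = ishigami(ABi)
    # Saltelli (2010) estimator of the first-order index S_i = V_i / V
    S.append(np.mean(fB * (fABi - fA)) / var)

print(np.round(S, 2))  # close to the analytic values [0.314, 0.442, 0.0]
```

HDMR-based approaches (and the paper's GMDH-HDMR) aim at the same indices but replace this brute-force sampling with a cheap metamodel whose component functions yield the variance decomposition analytically.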
A Neuronal Network Model for Pitch Selectivity and Representation
Huang, Chengcheng; Rinzel, John
2016-01-01
Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among c...
Representation and Incorporation of Close Others' Responses: The RICOR Model of Social Influence.
Smith, Eliot R; Mackie, Diane M
2015-08-03
We propose a new model of social influence, which can occur spontaneously and in the absence of typically assumed motives. We assume that perceivers routinely construct representations of other people's experiences and responses (beliefs, attitudes, emotions, and behaviors), when observing others' responses or simulating the responses of unobserved others. Like representations made accessible by priming, these representations may then influence the process that generates perceivers' own responses, without intention or awareness, especially when there is a strong social connection to the other. We describe evidence for the basic properties and important moderators of this process, which distinguish it from other mechanisms such as informational, normative, or social identity influence. The model offers new perspectives on the role of others' values in producing cultural differences, the persistence and power of stereotypes, the adaptive reasons for being influenced by others' responses, and the impact of others' views about the self. © 2015 by the Society for Personality and Social Psychology, Inc.
Sensitivity experiments to mountain representations in spectral models
Directory of Open Access Journals (Sweden)
U. Schlese
2000-06-01
Full Text Available This paper describes a set of sensitivity experiments with several formulations of orography. Three sets are considered: a "Standard" orography consisting of an envelope orography produced originally for the ECMWF model, a "Navy" orography derived directly from the US Navy data, and a "Scripps" orography based on the data set originally compiled several years ago at Scripps. The last two are mean orographies which do not use the envelope enhancement. A new filtering technique for handling the problem of Gibbs oscillations in spectral models has been used to produce the "Navy" and "Scripps" orographies, resulting in smoother fields than the "Standard" orography. The sensitivity experiments show that orography is still an important factor in controlling model performance, even in this class of models that use a semi-Lagrangian formulation for water vapour, which in principle should be less sensitive to Gibbs oscillations than the Eulerian formulation. The largest impact can be seen in the stationary waves (the asymmetric part of the geopotential at 500 mb), where the differences in total height and spatial pattern generate up to 60 m differences, and in the surface fields, where the Gibbs-removal procedure is successful in alleviating the appearance of unrealistic oscillations over the ocean. These results indicate that Gibbs oscillations also need to be treated in this class of models. The best overall result is obtained using the "Navy" data set, which achieves a good compromise between the amplitude of the stationary waves and the smoothness of the surface fields.
Ontology and modeling patterns for state-based behavior representation
Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.;
2015-01-01
This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.
MODELING OF DYNAMIC SYSTEMS WITH MODULATION BY MEANS OF KRONECKER VECTOR-MATRIX REPRESENTATION
Directory of Open Access Journals (Sweden)
A. S. Vasilyev
2015-09-01
Full Text Available The paper deals with modeling of dynamic systems with modulation using the state-space method. This method, the basis of modern control theory, builds on the vector-matrix formalism of linear algebra and helps to solve various problems of technical control of continuous and discrete nature, invariant with respect to the dimension of their "input-output" objects. It has, however, largely bypassed the wide class of control systems whose hardware environment modulates signals. This paper partially addresses that deficiency by proposing Kronecker vector-matrix representations for the system representation of processes with signal modulation. The main result is a vector-matrix representation of processes with modulation that is formally indistinguishable from that of continuous systems. It is found that these representations can be used effectively in the study of systems with modulation, and the obtained model representations are well adapted to the state-space method. Techniques for computing the eigenvalues of Kronecker matrix sums, which form the matrix basis of model representations of processes described by Kronecker vector products, make it possible to apply modal methods to the dynamics of systems with modulation. It is shown that applying controllability analysis for the eigenvalues of general matrices to Kronecker structures allows the eigenvalue spectrum to be successfully divided into directed and non-directed components. The findings, including design procedures for models of dynamic processes with modulation based on the features of Kronecker vector and matrix structures, invariant with respect to the dimension of input-output relations, are applicable to the development of alternating-current servo drives.
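The modal property exploited here is a standard fact about Kronecker structures: the eigenvalues of the Kronecker sum A ⊕ B = A ⊗ I + I ⊗ B are exactly the pairwise sums λᵢ(A) + μⱼ(B), so the spectrum of a system assembled from Kronecker factors can be computed from the factor matrices alone. A quick NumPy check (the matrices are arbitrary random examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(2, 2))

# Kronecker sum: A kron I_2 + I_3 kron B  (a 6x6 matrix)
kron_sum = np.kron(A, np.eye(2)) + np.kron(np.eye(3), B)

direct = np.linalg.eigvals(kron_sum)
pairwise = (np.linalg.eigvals(A)[:, None] + np.linalg.eigvals(B)[None, :]).ravel()

# The two spectra coincide up to ordering.
print(np.allclose(np.sort_complex(direct), np.sort_complex(pairwise)))  # True
```

This is what lets the spectrum of the full modulated system be split into components attributable to the individual Kronecker factors.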
Modeling a space-variant cortical representation for apparent motion.
Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash
2013-08-06
Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
Shape prior modeling using sparse representation and online dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N
2012-01-01
The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. In theory, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances the repository contains, the lower the run-time efficiency of SSC. Therefore, a compact and informative shape dictionary is preferable to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming, and sometimes infeasible, to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes arrive, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
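A hedged sketch of the block-coordinate descent dictionary update in the spirit of online dictionary learning (not the authors' implementation; the data, sizes, and the `update_dictionary` helper are illustrative). With the sparse codes held fixed, each dictionary atom is refreshed in turn from accumulated sufficient statistics, so new training shapes only update A and B rather than triggering a full rebuild:

```python
import numpy as np

# Block-coordinate descent over dictionary atoms, given accumulated
# statistics A = alpha @ alpha.T and B = X @ alpha.T (Mairal-style update).
def update_dictionary(D, A, B, n_iter=10):
    n_atoms = D.shape[1]
    for _ in range(n_iter):
        for j in range(n_atoms):
            if A[j, j] < 1e-12:
                continue  # atom unused by any code; skip
            # Minimize the quadratic objective in atom j, others fixed
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)  # project to ||d_j|| <= 1
    return D

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 50))              # training shapes as columns
D = rng.normal(size=(8, 4))
D /= np.linalg.norm(D, axis=0)            # unit-norm initial atoms
alpha = rng.normal(size=(4, 50)) * (rng.random((4, 50)) < 0.3)  # sparse codes
A, B = alpha @ alpha.T, X @ alpha.T       # online-updatable statistics

err0 = np.linalg.norm(X - D @ alpha)
D = update_dictionary(D, A, B)
assert np.linalg.norm(X - D @ alpha) <= err0 + 1e-9  # objective non-increasing
```

When a new batch of shapes arrives, one would add its contribution to A and B and call `update_dictionary` again, which is the scaling advantage the abstract describes.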
Hattori, Masasi
2016-12-01
This paper presents a new theory of syllogistic reasoning. The proposed model assumes there are probabilistic representations of given signature situations. Instead of conducting an exhaustive search, the model constructs an individual-based "logical" mental representation that expresses the most probable state of affairs, and derives a necessary conclusion that is not inconsistent with the model using heuristics based on informativeness. The model is a unification of previous influential models. Its descriptive validity has been evaluated against existing empirical data and two new experiments, and by qualitative analyses based on previous empirical findings, all of which supported the theory. The model's behavior is also consistent with findings in other areas, including working memory capacity. The results indicate that people assume the probabilities of all target events mentioned in a syllogism to be almost equal, which suggests links between syllogistic reasoning and other areas of cognition. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
The Bogolubov Representation of the Polaron Model and Its Completely Integrable RPA-Approximation
International Nuclear Information System (INIS)
Bogolubov, Nikolai N. Jr.; Prykarpatsky, Yarema A.; Ghazaryan, Anna A.
2009-12-01
The polaron model in an ionic crystal is studied in the N. Bogolubov representation using a special RPA approximation. A new exactly solvable approximate polaron model is derived and described in detail. Its free energy at finite temperature is calculated analytically. The polaron free energy in a constant magnetic field at finite temperature is also discussed. Based on the structure of the N. Bogolubov unitarily transformed polaron Hamiltonian, a very important new result is stated: the full polaron model is exactly solvable. (author)
Solano, Javier; Duarte, José; Vargas, Erwin; Cabrera, Jhon; Jácome, Andrés; Botero, Mónica; Rey, Juan
2016-10-01
This paper addresses the Energetic Macroscopic Representation (EMR), modelling and control of photovoltaic panel (PVP) generation systems for simulation purposes. The PVP model accounts for variations in irradiance and temperature. A maximum power point tracking (MPPT) algorithm is used to control the power converter. A novel EMR is proposed to capture the dynamic model of the PVP under varying irradiance and temperature. The EMR is evaluated through simulations of a PVP generation system.
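The paper does not specify which MPPT algorithm it uses; a common choice is perturb-and-observe, sketched below on a toy power-voltage curve (the curve and all parameters are hypothetical, for illustration only):

```python
# Perturb-and-observe MPPT: nudge the operating voltage, keep the
# direction if power increased, reverse it otherwise.
def panel_power(v):
    # Toy concave P-V curve peaking at v = 17 V (illustrative only)
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def perturb_and_observe(v0=10.0, step=0.5, n=200):
    v, p = v0, panel_power(v0)
    direction = 1.0
    for _ in range(n):
        v_new = v + direction * step
        p_new = panel_power(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe()
assert abs(v_mpp - 17.0) <= 1.0    # settles near the maximum power point
```

Note the characteristic behaviour: the tracker never settles exactly, but oscillates around the maximum within one perturbation step, which is why step-size choice matters in practice.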
Modeling biological tissue growth: discrete to continuum representations.
Hywood, Jack D; Hackett-Jones, Emily J; Landman, Kerry A
2013-09-01
There is much interest in building deterministic continuum models from discrete agent-based models governed by local stochastic rules, where an agent represents a biological cell. In developmental biology, cells are able to move and undergo cell division on and within growing tissues. A growing tissue is itself made up of cells which undergo cell division, thereby providing a significant transport mechanism for other cells within it. We develop a discrete agent-based model where domain agents represent tissue cells. Each agent has the ability to undergo a proliferation event whereby an additional domain agent is incorporated into the lattice. If a probability distribution describes the waiting times between proliferation events for an individual agent, then the total length of the domain is a random variable. The average behavior of these stochastically proliferating agents defining the growing lattice is determined in terms of a Fokker-Planck equation with advection and diffusion terms. The diffusion term differs from the one obtained by Landman and Binder [J. Theor. Biol. 259, 541 (2009)] when the rate of growth of the domain is specified, but the choice of agents is random. This discrepancy is reconciled by determining a discrete-time master equation for this process and an associated asymmetric nonexclusion random walk, together with consideration of synchronous and asynchronous updating schemes. All theoretical results are confirmed with numerical simulations. This study furthers our understanding of the relationship between agent-based rules, their implementation, and their associated partial differential equations. Since tissue growth is a significant cellular transport mechanism during embryonic growth, it is important to use the correct partial differential equation description when combining with other cellular functions.
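A hedged illustration of the stochastic domain growth being averaged (not the authors' code; rates and sizes are arbitrary): when each of the L(t) domain agents proliferates independently at rate r with exponential waiting times, the domain length is a random variable whose mean grows as L0·exp(rt):

```python
import random, math

# Pure birth (Yule) process: each agent proliferates at rate r, so the
# total event rate is r*L and inter-event times are exponential.
def grow(L0, r, t_end, rng):
    L, t = L0, 0.0
    while True:
        t += rng.expovariate(r * L)   # Gillespie step: time to next event
        if t > t_end:
            return L
        L += 1                        # one proliferation: domain lengthens

rng = random.Random(1)
L0, r, t_end = 20, 1.0, 1.0
mean_L = sum(grow(L0, r, t_end, rng) for _ in range(2000)) / 2000

# Sample mean tracks the exponential growth of the expected domain length
expected = L0 * math.exp(r * t_end)
assert abs(mean_L - expected) < 0.1 * expected
```

Individual realisations fluctuate substantially around this mean, which is the variability that the Fokker-Planck description with advection and diffusion terms captures.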
On a Modeling of Online User Behavior Using Function Representation
Directory of Open Access Journals (Sweden)
Pavel Pesout
2012-01-01
Full Text Available Understanding online users' system requirements has become crucial for online service providers. The existence of many users and services leads to differing user needs. The objective of this work is to explore algorithms for optimizing providers' supply by proposing a new way to represent user requirements as continuous functions of time. We address the problems of predicting system requirements and of reducing model complexity by creating typical user behavior profiles.
Generic process model structures: towards a standard notation for abstract representations
CSIR Research Space (South Africa)
Van Der Merwe, A
2007-10-01
Full Text Available in the case of objects, or repositories in the case of process models. The creation of the MIT Process Handbook was a step in this direction. However, although the authors used object-oriented concepts in the abstract representations, they did not rigorously...
CSIR Research Space (South Africa)
Garland, Rebecca M
2016-11-01
Full Text Available Aerosol particles can have large impacts on air quality and on the climate system. Regional climate models for Africa have not been well-tested and validated for their representation and simulation of aerosol particles. This study aimed to validate...
Representations of the Poincare group, position operator and the bi-local model
International Nuclear Information System (INIS)
Sohkawa, Tohru
1978-01-01
We propose two types of representations of the Poincare group which provide general frameworks for the introduction of internal degrees of freedom of a particle. The bi-local model recently proposed by Takabayasi is constructed within these frameworks. New covariant and non-covariant position operators are introduced and discussed. (author)
A Semiotic Model of Destination Representations Applied to Cultural and Heritage Tourism Marketing
DEFF Research Database (Denmark)
Pennington, Jody; Thomsen, Robert Chr.
2010-01-01
, and symbolic qualities, each of which destination marketers should consider in choosing representations because of the influence those qualities exert on reception. It is argued that the semiotic model can help marketers make informed decisions about the relevance and probable impact of the iconicity...
Minimal representations of supersymmetry and 1D N-extended σ-models
International Nuclear Information System (INIS)
Toppan, Francesco
2008-01-01
We discuss the minimal representations of the 1D N-extended supersymmetry algebra (the Z2-graded symmetry algebra of supersymmetric quantum mechanics) linearly realized on a finite number of fields depending on a real parameter t, the time. Knowledge of these representations allows one to construct one-dimensional sigma-models with extended off-shell supersymmetries without using superfields. (author)
Using Bar Representations as a Model for Connecting Concepts of Rational Number.
Middleton, James A.; van den Heuvel-Panhuizen, Marja; Shew, Julia A.
1998-01-01
Examines bar models as graphical representations of rational numbers and presents related real life problems. Concludes that, through pairing the fraction bars with ratio tables and other ways of teaching numbers, numeric strategies become connected with visual strategies that allow students with diverse ways of thinking to share their…
Model of geophysical fields representation in problems of complex correlation-extreme navigation
Directory of Open Access Journals (Sweden)
Volodymyr KHARCHENKO
2015-09-01
Full Text Available A model of the optimal representation of spatial data for the task of complex correlation-extreme navigation is developed based on the criterion of minimum deviation of the correlation functions of the original and resulting fields. Calculations are presented for the one-dimensional case using an approximation of the correlation function by a Fourier series. It is shown that, given different geophysical map data fields, they can be represented by a single template with optimal sampling without distorting the form of the correlation functions.
Can representational trajectory reveal the nature of an internal model of gravity?
De Sá Teixeira, Nuno; Hecht, Heiko
2014-05-01
The memory for the vanishing location of a horizontally moving target is usually displaced forward in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, this downward displacement has been shown to increase with time (representational trajectory). However, the degree to which different kinematic events change the temporal profile of these displacements remains to be determined. The present article attempts to fill this gap. In the first experiment, we replicate the finding that representational momentum for downward-moving targets is bigger than for upward motions, showing, moreover, that it increases rapidly during the first 300 ms, stabilizing afterward. This temporal profile, but not the increased error for descending targets, is shown to be disrupted when eye movements are not allowed. In the second experiment, we show that the downward drift with time emerges even for static targets. Finally, in the third experiment, we report an increased error for upward-moving targets, as compared with downward movements, when the display is compatible with a downward ego-motion by including vection cues. Thus, the errors in the direction of gravity are compatible with the perceived event and do not merely reflect a retinotopic bias. Overall, these results provide further evidence for an internal model of gravity in the visual representational system.
Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M
2017-08-01
Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures, and these models have been widely used to predict gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we show here that a large gap may still remain between the existing model representations and actual kerogen structures, calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for the six most widely used kerogen structure models. The computed spectra were then systematically compared to FTIR absorption spectra collected for kerogen samples isolated from the Mancos, Woodford and Marcellus formations, representing a wide range of kerogen origins and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model, as well as for future model development. This approach may eventually help develop comprehensive infrared (IR) fingerprints for tracing kerogen evolution.
Representation of the contextual statistical model by hyperbolic amplitudes
International Nuclear Information System (INIS)
Khrennikov, Andrei
2005-01-01
We continue the development of a so-called contextual statistical model (here context has the meaning of a complex of physical conditions). It is shown that, besides contexts producing the conventional trigonometric cos-interference, there exist contexts producing hyperbolic cosh-interference. Starting with the corresponding interference formula of total probability, we represent such contexts by hyperbolic probabilistic amplitudes or, in the abstract formalism, by normalized vectors of a hyperbolic analogue of Hilbert space. A hyperbolic analogue of Born's rule is obtained. Incompatible observables are represented by noncommutative operators. This paper can be considered a first step towards hyperbolic quantum probability. We also discuss possibilities for experimental verification of hyperbolic quantum mechanics: in the physics of elementary particles and string theory, as well as in experiments with nonphysical systems, e.g., in psychology, cognitive science, and economics.
Improving the representation of soluble iron in climate models
Energy Technology Data Exchange (ETDEWEB)
Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)
2016-11-29
Funding from this grant supported Rachel Scanza, Yan Zhang and partially Samuel Albani. Substantial progress has been made on the inclusion of mineralogy, showing the quality of the simulations and the impact on radiation in CAM4 and CAM5 (Scanza et al., 2015). In addition, the elemental distribution has been evaluated (work partially supported by this grant) (Zhang et al., 2015), showing that, using spatial distributions of mineralogy, improved representations of Fe, Ca and Al are possible compared to the limited available data. A new intermediate-complexity soluble iron scheme was implemented in the Bulk Aerosol Model (BAM), completed as part of Rachel Scanza's PhD thesis. Currently Rachel is writing at least two first-author papers describing the general methods and comparison to observations (Scanza et al., in prep.), as well as papers describing the sensitivity to preindustrial conditions and interannual variability. This work led to the lead PI being asked to write a commentary in Nature (Mahowald, 2013) and two review papers (Mahowald et al., 2014; Mahowald et al., submitted), and contributed to related papers (Albani et al., 2016; Albani et al., 2014; Albani et al., 2015).
A knowledge representation model for the optimisation of electricity generation mixes
International Nuclear Information System (INIS)
Chee Tahir, Aidid; Bañares-Alcántara, René
2012-01-01
Highlights: ► Prototype energy model which uses semantic representation (ontologies). ► Model accepts both quantitative and qualitative energy policy goals. ► Uses logic inference to formulate equations for linear optimisation. ► Proposes electricity generation mix based on energy policy goals. -- Abstract: Energy models such as MARKAL, MESSAGE and DNE-21 are optimisation tools which aid in the formulation of energy policies. The strength of these models lies in their solid theoretical foundations, built on rigorous mathematical equations designed to process numerical (quantitative) data related to economics and the environment. Nevertheless, a complete consideration of energy policy issues also requires the consideration of the political and social aspects of energy, which are often associated with non-numerical (qualitative) information. To enable the evaluation of these aspects in a computer model, we hypothesise that a different approach to energy model optimisation design is required. A prototype energy model that is based on a semantic representation using ontologies and is integrated with engineering models implemented in Java has been developed. The model provides both quantitative and qualitative evaluation capabilities through the use of logical inference. The semantic representation of energy policy goals is used (i) to translate a set of energy policy goals into a set of logic queries which is then used to determine the preferred electricity generation mix and (ii) to assist in the formulation of a set of equations which is then solved in order to obtain a proposed electricity generation mix. Scenario case studies have been developed and tested on the prototype energy model to determine its capabilities. Knowledge queries were made on the semantic representation to determine an electricity generation mix which fulfilled a set of energy policy goals (e.g. CO2 emissions reduction, water conservation, energy supply
The quantum Rabi model and Lie algebra representations of sl2
International Nuclear Information System (INIS)
Wakayama, Masato; Yamasaki, Taishi
2014-01-01
The aim of the present paper is to understand the spectral problem of the quantum Rabi model in terms of Lie algebra representations of sl2(R). We define a second-order element of the universal enveloping algebra U(sl2) of sl2(R) which, through the image of a principal series representation of sl2(R), provides a picture equivalent to the quantum Rabi model drawn by confluent Heun differential equations. By this description, in particular, we give a representation-theoretic interpretation of the degenerate part of the spectrum (i.e., Judd's eigenstates) of the Rabi Hamiltonian due to Kuś in 1985, which is part of the exceptional spectrum parameterized by integers. We also discuss the non-degenerate part of the exceptional spectrum of the model, in addition to the Judd eigenstates, from the viewpoint of infinite-dimensional irreducible submodules (or subquotients) of the non-unitary principal series, such as holomorphic discrete series representations of sl2(R). (paper)
Continuous versus discontinuous albedo representations in a simple diffusive climate model
Simmons, P. A.; Griffel, D. H.
1988-07-01
A one-dimensional, annually and zonally averaged energy-balance model with diffusive meridional heat transport and ice-albedo feedback is considered. This type of model is found to be very sensitive to the form of albedo used. The solutions for a discontinuous step-function albedo are compared to those for a more realistic, smoothly varying albedo. The smooth albedo gives a closer fit to present conditions, but the discontinuous form gives a better representation of climates in earlier epochs.
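A hedged zero-dimensional sketch of the albedo sensitivity the abstract describes (not the paper's 1D diffusive model; all coefficients are illustrative): local energy balance Q(1 − a(T)) = A + BT solved by fixed-point iteration, with a step albedo versus a tanh-smoothed one. The step form admits distinct warm and cold equilibria depending on the starting state:

```python
import math

# Energy balance Q*(1 - albedo(T)) = A + B*T; parameters are toy values.
Q, A, B, T_ice = 340.0, 200.0, 2.0, -10.0   # T_ice: ice-edge temperature (C)

def albedo_step(T):
    return 0.6 if T < T_ice else 0.3         # discontinuous ice/no-ice albedo

def albedo_smooth(T, width=5.0):
    return 0.45 - 0.15 * math.tanh((T - T_ice) / width)  # smoothed transition

def equilibrium(albedo, T=0.0):
    for _ in range(500):                      # fixed-point iteration
        T = (Q * (1.0 - albedo(T)) - A) / B
    return T

T_warm = equilibrium(albedo_step, T=0.0)      # ice-free branch
T_cold = equilibrium(albedo_step, T=-30.0)    # ice-covered branch
T_smooth = equilibrium(albedo_smooth, T=0.0)

assert T_warm > T_ice > T_cold                # step albedo: two equilibria
assert abs(T_smooth - T_warm) < 0.1           # smooth form agrees on warm branch
```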
Explicit state representation and the ATLAS event data model: theory and practice
International Nuclear Information System (INIS)
Nowak, M; Snyder, S; Cranmer, K; Malon, D; Gemmeren, P v; Schaffer, A; Binet, S
2008-01-01
In anticipation of data taking, ATLAS has undertaken a program of work to develop an explicit state representation of the experiment's complex transient event data model. This effort has provided both an opportunity to consider explicitly the structure, organization, and content of the ATLAS persistent event store before writing tens of petabytes of data (replacing simple streaming, which uses the persistent store as a core dump of transient memory), and a locus for support of event data model evolution, including significant refactoring, beyond the automatic schema evolution capabilities of underlying persistence technologies. ATLAS has encountered the need for such non-trivial schema evolution on several occasions already. This paper describes the state representation strategy (transient/persistent separation) and its implementation, including both the payoffs that ATLAS has seen (significant and sometimes surprising space and performance improvements, the extra layer notwithstanding, and extremely general schema evolution support) and the costs (relatively pervasive additional infrastructure development and maintenance). The paper further discusses how these costs are mitigated, and how ATLAS is able to implement this strategy without losing the ability to take advantage of the (improving!) automatic schema evolution capabilities of underlying technology layers when appropriate. Implications of state representations for direct ROOT browsability, and current strategies for associating physics analysis views with such state representations, are also described.
Energy Technology Data Exchange (ETDEWEB)
Bonnet, G [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires
1961-07-01
When studying the behaviour of a magnetic resonance transducer formed by the association of an electrical network and a set of nuclear spins, it is possible to derive an analytically equivalent representation by means of an entirely electrical model, valid for transients as well as for the steady state. A detailed study of the validity conditions justifies its use in most cases. Also proposed is a linearity criterion for Bloch's equations in the transient state that is simply an extension of the well-known non-saturation condition in the steady state. (author)
Cherkasskaya, Eugenia; Rosario, Margaret
2017-11-01
The etiology of low female sexual desire, the most prevalent sexual complaint in women, is multi-determined, implicating biological and psychological factors, including women's early parent-child relationships and bodily self-representations. The current study evaluated a model that hypothesized that sexual body self-representations (sexual subjectivity, self-objectification, genital self-image) explain (i.e., mediate) the relation between internalized working models of parent-child relationships (attachment, separation-individuation, parental identification) and sexual desire in heterosexual women. We recruited 614 young, heterosexual women (M = 25.5 years, SD = 4.63) through social media. The women completed an online survey. Structural equation modeling was used. The hypotheses were supported in that the relation between internalized working models of parent-child relationships (attachment and separation-individuation) and sexual desire was mediated by sexual body self-representations (sexual body esteem, self-objectification, genital self-image). However, parental identification was not related significantly to sexual body self-representations or sexual desire in the model. Current findings demonstrated that understanding female sexual desire necessitates considering women's internalized working models of early parent-child relationships and their experiences of their bodies in a sexual context. Treatment of low or absent desire in women would benefit from modalities that emphasize early parent-child relationships as well as interventions that foster mind-body integration.
Prather, Edward
2018-01-01
Astronomy education researchers in the Department of Astronomy at the University of Arizona have been investigating a new framework for getting students to engage in discussions about fundamental astronomy topics. This framework is also intended to provide students with explicit feedback on the correctness and coherency of their mental models of these topics. It builds upon our prior efforts to create productive Pedagogical Discipline Representations (PDRs). Students are asked to work collaboratively to generate their own representations (drawings, graphs, data tables, etc.) that reflect important characteristics of astrophysical scenarios presented in class. We have found these representation tasks offer tremendous insight into the broad range of ideas and knowledge students possess after instruction that includes both traditional lecture and active learning strategies. In particular, we find that some of our students are able to correctly answer challenging multiple-choice questions on a topic yet struggle to accurately create representations of that same topic themselves. Our work illustrates that some of our students are not developing a robust level of discipline fluency with many core ideas in astronomy, even after engaging with active learning strategies.
International Nuclear Information System (INIS)
Thuburn, J.; Woollings, T.J.
2005-01-01
Accurate representation of different kinds of wave motion is essential for numerical models of the atmosphere, but is sensitive to details of the discretization. In this paper, numerical dispersion relations are computed for different vertical discretizations of the compressible Euler equations and compared with the analytical dispersion relation. A height coordinate, an isentropic coordinate, and a terrain-following mass-based coordinate are considered, and, for each of these, different choices of prognostic variables and grid staggerings are considered. The discretizations are categorized according to whether their dispersion relations are optimal, are near optimal, have a single zero-frequency computational mode, or are problematic in other ways. Some general understanding of the factors that affect the numerical dispersion properties is obtained: heuristic arguments concerning the normal mode structures, and the amount of averaging and coarse differencing in the finite difference scheme, are shown to be useful guides to which configurations will be optimal; the number of degrees of freedom in the discretization is shown to be an accurate guide to the existence of computational modes; there is only minor sensitivity to whether the equations for thermodynamic variables are discretized in advective form or flux form; and an accurate representation of acoustic modes is found to be a prerequisite for accurate representation of inertia-gravity modes, which, in turn, is found to be a prerequisite for accurate representation of Rossby modes
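A hedged 1D analogue of the dispersion comparison the abstract describes (not the paper's compressible 3D analysis; the setting is the first-order wave system u_t = −c p_x, p_t = −c u_x). An unstaggered centred difference yields ω = (c/Δx)|sin(kΔx)|, which folds back to zero frequency at the Nyquist wavenumber (a computational mode), while a staggered grid yields the monotone ω = (2c/Δx)|sin(kΔx/2)|:

```python
import math

c, dx = 1.0, 1.0  # wave speed and grid spacing (illustrative values)

def omega_unstaggered(k):
    # Centred difference over 2*dx on a collocated grid
    return (c / dx) * abs(math.sin(k * dx))

def omega_staggered(k):
    # Centred difference over dx between staggered u and p points
    return (2.0 * c / dx) * abs(math.sin(k * dx / 2.0))

k_nyquist = math.pi / dx
assert omega_unstaggered(k_nyquist) < 1e-12        # spurious zero-frequency mode
assert omega_staggered(k_nyquist) == 2.0 * c / dx  # shortest wave still propagates
# Both schemes recover the analytic omega = c*k for well-resolved waves:
assert abs(omega_staggered(0.01) - c * 0.01) < 1e-5
```

This is the 1D version of the heuristic in the abstract: averaging and coarse differencing flatten the numerical dispersion curve, and counting degrees of freedom predicts the existence of the zero-frequency computational mode.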
International Nuclear Information System (INIS)
Niccoli, G.
2009-12-01
In an earlier paper (G. Niccoli and J. Teschner, 2009), the spectrum (eigenvalues and eigenstates) of a lattice regularization of the Sine-Gordon model was completely characterized in terms of polynomial solutions, with certain properties, of the Baxter equation. This characterization for cyclic representations was derived by use of the Separation of Variables (SOV) method of Sklyanin and by direct construction of the Baxter Q-operator family. Here, we reconstruct the Baxter Q-operator and the same characterization of the spectrum by using only the SOV method. This analysis allows us to deduce the main features required for extending this kind of spectrum characterization to cyclic representations of other integrable quantum models. (orig.)
On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models
Jan, A.; Painter, S. L.; Coon, E. T.
2017-12-01
Increased availability of high-resolution digital elevation data is enabling process-based hydrologic modeling at finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and their implications for flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model parameterized by small-scale spatial heterogeneities for use in hyper-resolution models with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter-scale simulations (which serve as a benchmark) and against hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method to efficiently scale up to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the
Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W
2015-01-01
Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics represented in the original mechanistic model, and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
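A Volterra functional power series maps an input history to an output through convolution kernels of increasing order. A minimal second-order discrete version can be sketched as follows; the kernels here are toy values, not the identified kernels of the glutamatergic synapse model.

```python
import numpy as np

def volterra_response(x, k0, k1, k2):
    """Second-order discrete Volterra series output for input signal x.

    k0: zeroth-order (constant) term; k1: first-order kernel, shape (M,);
    k2: second-order kernel, shape (M, M). M is the memory length.
    """
    M = len(k1)
    y = np.full(len(x), k0, dtype=float)
    for t in range(len(x)):
        # window of past inputs x[t], x[t-1], ..., x[t-M+1]
        w = np.zeros(M)
        n = min(M, t + 1)
        w[:n] = x[t::-1][:n]
        y[t] += k1 @ w + w @ k2 @ w   # linear plus second-order contribution
    return y
```

An impulse input exposes the kernels directly: for `x = [1, 0, 0, ...]` the response at lag `t` is `k0 + k1[t] + k2[t, t]`.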
IRF models associated with representations of the Lie superalgebras gl(m|n) and sl(m|n)
International Nuclear Information System (INIS)
Deguchi, T.; Fujii, A.
1991-01-01
This paper presents two families of exactly solvable interaction round a face (IRF) models associated with representations of the Lie superalgebras gl(m|n) and sl(m|n). These IRF models are generalizations of integrable spin chains with bosons and fermions. The authors present fusion models associated with higher representations of gl(m|n) and sl(m|n), and introduce restricted IRF models for both gl(m|n) and sl(m|n).
Evaluating and improving the representation of heteroscedastic errors in hydrological models
McInerney, D. J.; Thyer, M. A.; Kavetski, D.; Kuczera, G. A.
2013-12-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic predictions. In particular, residual errors of hydrological models are often heteroscedastic, with large errors associated with high rainfall and runoff events. Recent studies have shown that using a weighted least squares (WLS) approach - where the magnitude of the residuals is assumed to be linearly proportional to the magnitude of the flow - captures some of this heteroscedasticity. In this study we explore a range of Bayesian approaches for improving the representation of heteroscedasticity in residual errors. We compare several improved formulations of the WLS approach, the well-known Box-Cox transformation and the more recent log-sinh transformation. Our results confirm that these approaches are able to stabilize the residual error variance, and that it is possible to improve the representation of heteroscedasticity compared with the linear WLS approach. We also find generally good performance of the Box-Cox and log-sinh transformations, although as indicated in earlier publications, the Box-Cox transform sometimes produces unrealistically large prediction limits. Our work explores the trade-offs between these different uncertainty characterization approaches, investigates how their performance varies across diverse catchments and models, and recommends practical approaches suitable for large-scale applications.
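For concreteness, the two transformations compared above can be written as short functions; residual errors are then modelled on the transformed scale, where their variance is approximately constant. Parameter values in the usage are illustrative only.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform: (y^lam - 1)/lam for lam != 0, log(y) for lam = 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def log_sinh(y, a, b):
    """Log-sinh transform z = log(sinh(a + b*y)) / b. For large flows it
    behaves nearly linearly, which limits the growth of prediction limits."""
    return np.log(np.sinh(a + b * np.asarray(y, dtype=float))) / b
```

For large `y`, `sinh(a + b*y) ≈ exp(a + b*y)/2`, so `log_sinh(y, a, b) ≈ y + (a - log 2)/b`, i.e. the transform asymptotes to a shifted identity rather than compressing indefinitely like a low-lambda Box-Cox.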
Discrete series representations for sl(2|1), Meixner polynomials and oscillator models
International Nuclear Information System (INIS)
Jafarov, E I; Van der Jeugt, J
2012-01-01
We explore a model for a one-dimensional quantum oscillator based on the Lie superalgebra sl(2|1). For this purpose, a class of discrete series representations of sl(2|1) is constructed, each representation characterized by a real number β > 0. In this model, the position and momentum operators of the oscillator are odd elements of sl(2|1) and their expressions involve an arbitrary parameter γ. In each representation, the spectrum of the Hamiltonian is the same as that of a canonical oscillator. The spectrum of the position operator can be continuous or discrete and infinite, depending on the value of γ. We determine the position wavefunctions in both the continuous and the discrete case and discuss their properties. In the discrete case, these wavefunctions are given in terms of Meixner polynomials. From the embedding osp(1|2) ⊂ sl(2|1), it can be seen why the case γ = 1 corresponds to a paraboson oscillator. Consequently, taking the values (β, γ) = (1/2, 1) in the sl(2|1) model yields a canonical oscillator. (paper)
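The Meixner polynomials underlying the discrete wavefunctions can be evaluated with their standard three-term recurrence. The sketch below assumes the conventional normalization M_n(x; β, c) = ₂F₁(−n, −x; β; 1 − 1/c); the relation of (β, c) to the model's parameters is not spelled out here.

```python
def meixner(n, x, beta, c):
    """Meixner polynomial M_n(x; beta, c), evaluated with the standard
    three-term recurrence
        c(k+beta) M_{k+1} = [k + (k+beta)c + (c-1)x] M_k - k M_{k-1},
    starting from M_0 = 1 and M_1 = 1 + (1 - 1/c) x / beta."""
    m_prev = 1.0                                    # M_0
    m = 1.0 + (1.0 - 1.0 / c) * x / beta            # M_1
    if n == 0:
        return m_prev
    for k in range(1, n):
        m_prev, m = m, ((k + (k + beta) * c + (c - 1.0) * x) * m
                        - k * m_prev) / (c * (k + beta))
    return m
```

For example, with β = 2, c = 1/2 and x = 3 the recurrence gives M_1 = −1/2 and M_2 = −1, which matches direct evaluation of the terminating hypergeometric sum.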
A neural network model of semantic memory linking feature-based object representation and words.
Cuppini, C; Magosso, E; Ursino, M
2009-06-01
Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and between the lexical area and the feature areas, are trained via a time-dependent Hebbian rule, during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In future work, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits).
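The association between feature areas and the lexical area can be caricatured by a rate-based Hebbian sketch; this deliberately ignores the oscillators and gamma-band synchronization of the full model, and all sizes and the learning rate are illustrative.

```python
import numpy as np

# toy "feature areas": each object is a binary vector of sensory-motor features
objects = np.array([[1., 1., 1., 0., 0., 0.],   # object 0
                    [0., 0., 1., 1., 1., 1.]])  # object 1 (shares feature 2)
n_words, n_features = objects.shape

# Hebbian training: co-activation of a lexical unit and a feature unit
# strengthens their synapse (a static caricature of the time-dependent rule)
gamma = 0.5                                      # illustrative learning rate
W = np.zeros((n_words, n_features))
for w in range(n_words):
    lexical = np.eye(n_words)[w]                 # one-hot word unit
    W += gamma * np.outer(lexical, objects[w])

# retrieval from incomplete sensory input: features 4 and 5 are missing
partial = np.array([0., 0., 1., 1., 0., 0.])
word_activation = W @ partial
recalled = int(np.argmax(word_activation))       # object 1 still wins
```

The shared feature (index 2) partially activates both words, which is the mechanism behind the semantic links among words representing objects with shared features.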
Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2018-05-01
A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the pion's valence-quark case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model-building technique and, in addition, allows the ambiguities related to the covariant extension, grounded in the Double Distribution (DD) representation, to be constrained by requiring that a soft-pion theorem be properly observed.
Crittenden, Patricia M; Newman, Louise
2010-07-01
This study compared aspects of the functioning of mothers with borderline personality disorder (BPD) to those of mothers without psychiatric disorder using two different conceptualizations of attachment theory. The Adult Attachment Interviews (AAIs) of 32 mothers were classified using both the Main and Goldwyn method (M&G) and the Dynamic-Maturational Model method (DMM). We found that mothers with BPD recalled more danger, reported more negative effects of danger, and gave evidence of more unresolved psychological trauma tied to danger than other mothers. We also found that the DMM classifications discriminated between the two groups of mothers better than the M&G classifications. Using the DMM method, the AAIs of BPD mothers were more complex, extreme, and had more indicators of rapid shifts in arousal than those of other mothers. Representations drawn from the AAI, using either classificatory method, did not match the representations of the mother's child drawn from the Working Model of the Child Interview; mothers with very anxious DMM classifications were paired with secure-balanced child representations. We propose that the DMM offers greater clinical utility, conceptual coherence, empirical validity, and coder reliability than the M&G.
Path integral representation of Lorentzian spinfoam model, asymptotics and simplicial geometries
International Nuclear Information System (INIS)
Han, Muxin; Krajewski, Thomas
2014-01-01
A new path integral representation of the Lorentzian Engle–Pereira–Rovelli–Livine spinfoam model is derived by employing the theory of unitary representations of SL(2,C). The path integral representation is taken as a starting point of semiclassical analysis. The relation between the spinfoam model and classical simplicial geometry is studied via the large-spin asymptotic expansion of the spinfoam amplitude with all spins uniformly large. More precisely, in the large-spin regime, there is an equivalence between the spinfoam critical configuration (with a certain nondegeneracy assumption) and a classical Lorentzian simplicial geometry. Such an equivalence relation allows us to classify the spinfoam critical configurations by their geometrical interpretations, via two types of solution-generating maps. The equivalence between spinfoam critical configurations and simplicial geometry also allows us to define the notion of globally oriented and time-oriented spinfoam critical configurations. It is shown that only at globally oriented and time-oriented spinfoam critical configurations does the leading-order contribution of the spinfoam large-spin asymptotics give precisely an exponential of the Lorentzian Regge action of General Relativity. At all other (unphysical) critical configurations, the spinfoam large-spin asymptotics modifies the Regge action at the leading-order approximation. (paper)
Zieliński, Tomasz G.
2017-11-01
The paper proposes and investigates computationally efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial periodic arrangements of straight fibres are examined as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide predictions similar to those of a volume element, which is in general much more geometrically accurate for typical fibrous materials. The multiscale modelling made it possible to determine the effective speeds and damping of acoustic waves propagating in such media, which brings up a discussion on the correlation between the speed, penetration range, and attenuation of sound waves. Original experiments on manufactured copper-wire samples are presented and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. The comparison suggested microstructure modifications leading to representations with non-uniformly distributed fibres.
Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame
Lee, Myoungkyu; Oliver, Todd; Moser, Robert
2017-11-01
Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the model inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by DARPA-EQUiPS(Enabling Quantification of Uncertainty in Physical Systems) program.
Taher, M.; Hamidah, I.; Suwarma, I. R.
2017-09-01
This paper outlines the results of an experimental study on the effects of a multi-representation approach to learning Archimedes' Law on students' mental model improvement. The multi-representation techniques implemented in the study were verbal, pictorial, mathematical, and graphical representations. Students' mental models were classified into three levels, i.e. scientific, synthetic, and initial, based on the students' level of understanding. The study employed a pre-experimental methodology, using a one-group pretest-posttest design. The subjects of the study were 32 eleventh-grade students in a public senior high school in Riau Province. The research instrument included a mental model test on the hydrostatic pressure concept, in the form of an essay test judged by experts. The findings showed a positive change in students' mental models, indicating that the multi-representation approach was effective in improving students' mental models.
An Ontology for Musical Phonographic Records: Contributing with a Representation Model
de Oliveira Albuquerque, Marcelo; Siqueira, Sean Wolfgand M.; de Saldanha da G. Lanzelotte, Rosana; Braz, Maria Helena L. B.
Music is a complex domain with some interesting specificities that make it difficult to model. If different types of music are considered, then the difficulties are even bigger. This paper presents some of the characteristics that make music such a hard domain to model and proposes an ontology for representing musical phonographic records. This ontology provides a global representation that can be used to support systems interoperability and data integration, which helps disseminate music worldwide, contributing to culture in the knowledge society.
Digital representations of the real world how to capture, model, and render visual reality
Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian
2015-01-01
Create Genuine Visual Realism in Computer Graphics. Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline. Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and Applications. The book covers sensors fo
Worthy, Darrell A; Pang, Bo; Byrne, Kaileigh A
2013-01-01
Models of human behavior in the Iowa Gambling Task (IGT) have played a pivotal role in accounting for behavioral differences during decision-making. One critical difference between models that have been used to account for behavior in the IGT is the inclusion or exclusion of the assumption that participants tend to persevere, or stay with the same option over consecutive trials. Models that allow for this assumption include win-stay-lose-shift (WSLS) models and reinforcement learning (RL) models that include a decay learning rule where expected values for each option decay as they are chosen less often. One shortcoming of RL models that have included decay rules is that the tendency to persevere by sticking with the same option has been conflated with the tendency to select the option with the highest expected value because a single term is used to represent both of these tendencies. In the current work we isolate the tendencies to perseverate and to select the option with the highest expected value by including them as separate terms in a Value-Plus-Perseveration (VPP) RL model. Overall the VPP model provides a better fit to data from a large group of participants than models that include a single term to account for both perseveration and the representation of expected value. Simulations of each model show that the VPP model's simulated choices most closely resemble the decision-making behavior of human subjects. In addition, we also find that parameter estimates of loss aversion are more strongly correlated with performance when perseverative tendencies and expected value representations are decomposed as separate terms within the model. The results suggest that the tendency to persevere and the tendency to select the option that leads to the best net payoff are central components of decision-making behavior in the IGT. Future work should use this model to better examine decision-making behavior.
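A simplified sketch of the Value-Plus-Perseveration idea follows: delta-rule value learning, a separately decaying perseveration trace, and a softmax choice over the weighted blend of the two. Parameter names and the exact update form are illustrative rather than the paper's notation.

```python
import numpy as np

def vpp_choice_probs(ev, pers, w, theta):
    """Softmax over the blended value V = w*EV + (1-w)*Perseveration."""
    v = w * np.asarray(ev) + (1.0 - w) * np.asarray(pers)
    e = np.exp(theta * (v - v.max()))            # stable softmax
    return e / e.sum()

def vpp_update(ev, pers, choice, payoff, lr, decay, eps_gain, eps_loss):
    """One trial of a simplified Value-Plus-Perseveration update:
    the chosen option's expected value moves toward the payoff, while
    the perseveration trace decays everywhere and is bumped only for
    the chosen option (up after a gain, down after a loss)."""
    ev = list(ev)
    pers = [decay * p for p in pers]             # perseveration decays each trial
    ev[choice] += lr * (payoff - ev[choice])     # delta-rule value learning
    pers[choice] += eps_gain if payoff >= 0 else eps_loss
    return ev, pers
```

Because expected value and perseveration are separate terms, a participant who sticks with a losing option can be fit with a high perseveration weight instead of a distorted value estimate.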
On the representability problem and the physical meaning of coarse-grained models
Energy Technology Data Exchange (ETDEWEB)
Wagner, Jacob W.; Dama, James F.; Durumeric, Aleksander E. P.; Voth, Gregory A., E-mail: gavoth@uchicago.edu [Department of Chemistry, James Franck Institute, Institute for Biophysical Dynamics, and Computation Institute, The University of Chicago, Chicago, Illinois 60637 (United States)
2016-07-28
In coarse-grained (CG) models where certain fine-grained (FG, i.e., atomistic resolution) observables are not directly represented, one can nonetheless identify indirect CG observables that capture the FG observable's dependence on CG coordinates. Often, in these cases it appears that a CG observable can be defined by analogy to an all-atom or FG observable, but the similarity is misleading and significantly undermines the interpretation of both bottom-up and top-down CG models. Such problems emerge especially clearly in the framework of systematic bottom-up CG modeling, where a direct and transparent correspondence between FG and CG variables establishes precise conditions for consistency between CG observables and underlying FG models. Here we present and investigate these representability challenges and illustrate them via the bottom-up conceptual framework for several simple analytically tractable polymer models. The examples place special focus on the observables of configurational internal energy, entropy, and pressure, which have been at the root of controversy in the CG literature, and also discuss observables that would seem to be entirely missing in the CG representation but can nonetheless be correlated with CG behavior. Though we investigate these problems in the framework of systematic coarse-graining, the lessons also apply to top-down CG modeling, with crucial implications for simulation at constant pressure and surface tension and for the interpretation of structural and thermodynamic correlations in comparison to experiment.
Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.
Li, Haoxiang; Hua, Gang
2018-04-01
Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Faces in the Wild (LFW) dataset, the YouTube video face database, and the CMU MultiPIE dataset.
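The max-likelihood descriptor selection that builds the PEP representation can be sketched as follows, assuming a pre-trained spherical GMM; the descriptor dimensions and parameter values below are toy illustrations, not a trained face model.

```python
import numpy as np

def pep_representation(descriptors, means, sigmas, weights):
    """Build a PEP-style representation: for each spherical Gaussian part,
    keep the single location-augmented descriptor with the highest
    component log-likelihood, and concatenate the winners in part order.

    descriptors: (n_desc, d) rows of [appearance features..., x, y];
    means/sigmas/weights: pre-trained spherical GMM parameters.
    """
    X = np.asarray(descriptors, dtype=float)
    d = X.shape[1]
    logp = []
    for mu, s, w in zip(means, sigmas, weights):
        # spherical Gaussian log-density plus mixture weight
        ll = (np.log(w) - 0.5 * d * np.log(2 * np.pi * s ** 2)
              - ((X - np.asarray(mu)) ** 2).sum(axis=1) / (2 * s ** 2))
        logp.append(ll)
    best = np.argmax(np.stack(logp), axis=1)    # best descriptor per part
    return np.concatenate([X[i] for i in best])
```

Because each part always contributes exactly one descriptor slot, representations of two different images are aligned part-by-part, which is what makes them comparable under pose variation.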
Anderson, Andrew James; Binder, Jeffrey R; Fernandino, Leonardo; Humphries, Colin J; Conant, Lisa L; Aguilar, Mario; Wang, Xixi; Doko, Donias; Raizada, Rajeev D S
2017-09-01
We introduce an approach that predicts neural representations of word meanings contained in sentences then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple-regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
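The decompose-then-recombine idea can be illustrated with a toy multiple-regression sketch: estimate per-attribute activation maps from word-level patterns, synthesize patterns for new words from their attribute vectors, and average those into a sentence-level prediction. All dimensions and data here are synthetic stand-ins for the attribute ratings and fMRI patterns.

```python
import numpy as np

rng = np.random.default_rng(1)

# hidden ground truth: 5 semantic attributes map linearly to 8 "voxels"
A_true = rng.normal(size=(5, 8))

# "observed" word-level activation patterns for a 30-word training vocabulary
word_attrs = rng.random((30, 5))                       # attribute scores per word
word_patterns = word_attrs @ A_true + 0.01 * rng.normal(size=(30, 8))

# multiple regression: estimate one activation map per semantic attribute
A_hat, *_ = np.linalg.lstsq(word_attrs, word_patterns, rcond=None)

# synthesize patterns for unseen words from their attribute vectors,
# then average the word patterns to predict the sentence-level pattern
new_words = rng.random((3, 5))                         # a 3-word "sentence"
sentence_pred = (new_words @ A_hat).mean(axis=0)
```

The key point mirrored here is that nothing sentence-specific is fitted: once the attribute-to-voxel maps are estimated, any new sentence is predicted by superposing its component words.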
Energy Technology Data Exchange (ETDEWEB)
Bryan, Frank [National Center for Atmospheric Research, Boulder, CO (United States); Dennis, John [National Center for Atmospheric Research, Boulder, CO (United States); MacCready, Parker [Univ. of Washington, Seattle, WA (United States); Whitney, Michael [Univ. of Connecticut
2015-11-20
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability, (used in objective 1) within POP, the ocean component of CESM.
Directory of Open Access Journals (Sweden)
Andrej Ficko
2015-03-01
Underuse of nonindustrial private forests in developed countries has been interpreted mostly as a consequence of the prevailing noncommodity objectives of their owners. Recent empirical studies have indicated a correlation between the harvesting behavior of forest owners and a specific conceptualization of appropriate forest management described as "nonintervention" or "hands-off" management. We aimed to fill the large gap in knowledge of social representations of forest management in Europe, and are the first to elicit forest owner representations in Europe with such rigor. We conducted 3099 telephone interviews with randomly selected forest owners in Slovenia, asking them whether they thought they managed their forest efficiently, what the possible reasons for underuse were, and what they understood by forest management. Building on social representations theory and applying a series of structural equation models, we tested the existence of three latent constructs of forest management and estimated whether and how much these constructs correlated with the perception of resource efficiency. Forest owners conceptualized forest management as a mixture of maintenance and ecosystem-centered and economics-centered management. None of the representations had a strong association with the perception of resource efficiency, nor could any be considered a factor preventing forest owners from cutting more. The underuse of wood resources was mostly due to biophysical constraints in the environment and not a deep-seated philosophical objection to harvesting. The difference between our findings and other empirical studies is explained primarily by historical differences in forestland ownership in different parts of Europe and the United States, the rising number of nonresidential owners, alternative lifestyles, and environmental protectionism, but also by our high methodological rigor in testing the relationships between the constructs.
Camporese, M.; Bertoldi, G.; Bortoli, E.; Wohlfahrt, G.
2017-12-01
Integrated hydrologic surface-subsurface models (IHSSMs) are increasingly used as prediction tools to simultaneously solve for states and fluxes in and between multiple terrestrial compartments (e.g., snow cover, surface water, groundwater), in an attempt to tackle environmental problems in a holistic approach. Two such models, CATHY and GEOtop, are used in this study to investigate their capabilities to reproduce hydrological processes in alpine grasslands. The two models differ significantly in the complexity of the representation of the surface energy balance and the solution of Richards' equation for water flow in the variably saturated subsurface. The main goal of this research is to show how these differences in process representation can lead to different predictions of hydrologic states and fluxes, in the simulation of an experimental site located in the Venosta Valley (South Tyrol, Italy). Here, a large set of relevant hydrological data (e.g., evapotranspiration, soil moisture) has been collected, with ground and remote sensing observations. The area of interest is part of a Long-Term Ecological Research (LTER) site, a steep, heterogeneous mountain slope, where the predominant land use types are meadow, pasture, and forest. The comparison between data and model predictions, as well as between simulations with the two IHSSMs, contributes to advancing our understanding of the trade-offs between different complexities in the models' process representation, model accuracy, and the ability to explain observed hydrological dynamics in alpine environments.
Development of a Method for Enhanced Fan Representation in Gas Turbine Modeling
Directory of Open Access Journals (Sweden)
Georgios Doulgeris
2011-01-01
A challenge for future civil aviation propulsion systems is expected to be integration with the airframe, as a result of increasing bypass ratios or above-wing installations for noise mitigation. The resulting highly distorted inlet flows to the engine create a clear demand for advanced gas turbine performance prediction models. Since the dawn of the jet engine, several models have been proposed, and the present work adds a model that combines two well-established compressor performance methods in order to create a quasi-three-dimensional representation of the fan of a modern turbofan. A streamline curvature model is coupled to a parallel compressor method, covering the radial and circumferential directions, respectively. Model testing has shown close agreement with experimental data, making it a good candidate for assessing the loss of surge margin on a high bypass ratio turbofan semiembedded on the upper surface of a broad-wing airframe.
Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation
Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong
2017-05-01
Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges of implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; and, based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived. A feature of the proposed approach is that it does not require the inversion operation that usually hampers nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
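A Hammerstein-Wiener model is a static input nonlinearity followed by a linear dynamic block and a static output nonlinearity. A minimal discrete-time simulation can be sketched as below; the coefficients and nonlinearities are illustrative, not an identified engine model.

```python
import numpy as np

def hammerstein_wiener(u, f_in, b, a, f_out):
    """Simulate a discrete-time Hammerstein-Wiener model:
    w = f_in(u)  (static input nonlinearity),
    a[0]*v[k] = sum_i b[i]*w[k-i] - sum_{j>=1} a[j]*v[k-j]  (linear IIR block),
    y = f_out(v)  (static output nonlinearity)."""
    w = np.array([f_in(x) for x in u], dtype=float)
    v = np.zeros_like(w)
    for k in range(len(w)):
        acc = sum(b[i] * w[k - i] for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[j] * v[k - j] for j in range(1, len(a)) if k - j >= 0)
        v[k] = acc / a[0]
    return np.array([f_out(x) for x in v])
```

The appeal for control design is that all the nonlinearity sits in the two memoryless maps at the ends, so the dynamic part stays linear; this structure is what lets the cited control law avoid an explicit model inversion.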
Model's sparse representation based on reduced mixed GMsFE basis methods
Energy Technology Data Exchange (ETDEWEB)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn [Institute of Mathematics, Hunan University, Changsha 410082 (China); Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn [College of Mathematics and Econometrics, Hunan University, Changsha 410082 (China)
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach to solve the flow problem on a coarse grid and obtain a velocity field with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. To overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed from the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than that of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in
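The greedy selection of optimal samples mentioned in the abstract can be illustrated with a generic sketch: repeatedly pick the snapshot worst represented by the current basis and orthonormalize its residual. The snapshots and tolerance here are toy values, not GMsFE basis functions:

```python
def greedy_reduced_basis(snapshots, tol=1e-8, max_basis=3):
    """Greedy construction of a reduced basis from parameter-dependent snapshots.
    Each snapshot is a list of floats; returned basis vectors are orthonormal."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def residual(v, basis):
        # Component of v orthogonal to span(basis) (Gram-Schmidt step)
        r = list(v)
        for q in basis:
            c = dot(r, q)
            r = [ri - c * qi for ri, qi in zip(r, q)]
        return r

    basis = []
    while len(basis) < max_basis:
        # Pick the snapshot worst represented by the current basis
        resids = [residual(s, basis) for s in snapshots]
        norms = [dot(r, r) ** 0.5 for r in resids]
        k = max(range(len(norms)), key=norms.__getitem__)
        if norms[k] < tol:
            break  # all snapshots are captured to within tolerance
        basis.append([ri / norms[k] for ri in resids[k]])
    return basis

# Three snapshots spanning a 2-dimensional subspace -> 2 basis vectors
snaps = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 2.0, 0.0]]
basis = greedy_reduced_basis(snaps)
```

The online stage then works in the span of `basis`, which is the sense in which the reduced model is independent of the random parameters used to generate the snapshots.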
Zheng, Jianqiu; Doskey, Paul V
2015-02-17
An enzyme-explicit denitrification model with representations for pre- and de novo synthesized enzymes was developed to improve predictions of nitrous oxide (N2O) accumulations in soil and emissions from the surface. The metabolic model of denitrification is based on dual-substrate utilization and Monod growth kinetics. Enzyme synthesis/activation was incorporated into each sequential reduction step of denitrification to regulate dynamics of the denitrifier population and the active enzyme pool, which controlled the rate function. Parameterizations were developed from observations of the dynamics of N2O production and reduction in soil incubation experiments. The model successfully reproduced the dynamics of N2O and N2 accumulation in the incubations and revealed an important regulatory effect of denitrification enzyme kinetics on the accumulation of denitrification products. Pre-synthesized denitrification enzymes contributed 20, 13, 43, and 62% of N2O that accumulated in 48 h incubations of soil collected from depths of 0-5, 5-10, 10-15, and 15-25 cm, respectively. An enzyme activity function (E) was defined to estimate the relative concentration of active enzymes and variation in response to environmental conditions. The value of E allows for activities of pre-synthesized denitrification enzymes to be differentiated from de novo synthesized enzymes. Incorporating explicit representations of denitrification enzyme kinetics into biogeochemical models is a promising approach for accurately simulating dynamics of the production and reduction of N2O in soils.
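The dual-substrate Monod kinetics underlying the metabolic model can be sketched as a rate function scaled by an enzyme activity term E; all constants here are illustrative, not the fitted values from the incubation experiments:

```python
def denitrification_rate(mu_max, c_don, c_nox, K_c=0.05, K_n=0.02, E=1.0):
    """Dual-substrate Monod rate for one reduction step of denitrification,
    scaled by the relative active-enzyme concentration E (0..1).
    c_don: electron donor (organic C); c_nox: N-oxide electron acceptor.
    K_c, K_n: half-saturation constants (illustrative values)."""
    return E * mu_max * (c_don / (K_c + c_don)) * (c_nox / (K_n + c_nox))

# Rate near mu_max when both substrates are abundant...
r_full = denitrification_rate(mu_max=1.0, c_don=1.0, c_nox=1.0)
# ...and proportionally reduced when only half the enzyme pool is active
r_half = denitrification_rate(mu_max=1.0, c_don=1.0, c_nox=1.0, E=0.5)
```

Making E explicit is what lets the model separate the contribution of pre-synthesized enzymes (E > 0 at time zero) from de novo synthesis, which enters as growth of E during the incubation.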
Improving Conceptual Understanding and Representation Skills Through Excel-Based Modeling
Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.
2018-02-01
The National Research Council framework for science education and the Next Generation Science Standards have created a need for additional research and development of curricula that are both technologically model-based and include engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental design study to test the effectiveness of a model-based curriculum focused on the concepts of natural selection and population ecology that makes use of Excel modeling tools (Modeling Instruction in Biology with Excel, MBI-E). The curriculum revolves around the bio-engineering practice of controlling an invasive species. The study takes place in the Midwest within ten high schools teaching a regular-level introductory biology class. A post-test was designed that targeted a number of common misconceptions in both concept areas as well as representational usage. The post-test results demonstrate that the MBI-E students significantly outperformed the traditional classes on both natural selection and population ecology concepts, thus overcoming a number of misconceptions. In addition, students in the implementing classes made use of multiple representations more often and demonstrated greater fascination for science.
Alharbi, Basma Mohammed
2017-02-07
Location-Based Social Networks (LBSNs) capture individuals' whereabouts for a large portion of the population. To utilize this data for user (location)-similarity based tasks, one must map the raw data into a low-dimensional uniform feature space. However, due to the nature of LBSNs, many users have sparse and incomplete check-ins. In this work, we propose to overcome this issue by leveraging the network of friends when learning the new feature space. We first analyze the impact of friends on individuals' mobility, and show that individuals' trajectories are correlated with those of their friends and friends of friends (2-hop friends) in an online setting. Based on our observation, we propose a mixed-membership model that infers global mobility patterns from users' check-ins and their network of friends, without impairing the model's complexity. Our proposed model infers global patterns and learns new representations for both users and locations simultaneously. We evaluate the inferred patterns and compare the quality of the new user representation against baseline methods on a social link prediction problem.
Energy Technology Data Exchange (ETDEWEB)
Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)
2006-10-01
Evidence theory provides an alternative to probability theory for representing epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor "epistemic" indicates uncertainty arising from a lack of knowledge about the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, propagating an evidence theory representation of uncertainty through a model is more computationally demanding than propagating a probabilistic representation, and this difficulty constitutes a serious obstacle to the use of evidence theory for representing uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for representing epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
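A minimal sketch of sampling-based propagation in this spirit, assuming the input uncertainty is specified as interval focal elements with basic probability assignments; estimating each element's image extremes by naive sampling is a stand-in for the paper's more sophisticated strategy:

```python
import random

def belief_plausibility(model, focal_elements, threshold, n_samples=200, seed=0):
    """Sampling-based estimate of Bel/Pl that model(x) <= threshold, given
    focal elements as (lo, hi, mass) intervals. A focal element counts toward
    belief if its entire (sampled) image satisfies the event, and toward
    plausibility if any sampled point does."""
    rng = random.Random(seed)
    bel = pl = 0.0
    for lo, hi, mass in focal_elements:
        ys = [model(rng.uniform(lo, hi)) for _ in range(n_samples)]
        if max(ys) <= threshold:   # every sampled image point satisfies the event
            bel += mass
        if min(ys) <= threshold:   # at least one sampled point satisfies it
            pl += mass
    return bel, pl

# Toy model y = x**2 with three focal elements for the uncertain input x
focal = [(0.0, 1.0, 0.5), (0.5, 2.0, 0.3), (1.5, 3.0, 0.2)]
bel, pl = belief_plausibility(lambda x: x * x, focal, threshold=1.0)
```

The gap between Bel and Pl is exactly the less restrictive uncertainty specification the abstract refers to: a single probability would have to commit to one number inside that interval.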
Model-based object classification using unification grammars and abstract representations
Liburdy, Kathleen A.; Schalkoff, Robert J.
1993-04-01
The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.
Standard representation and unified stability analysis for dynamic artificial neural network models.
Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D
2018-02-01
An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures, followed by block-diagram-based transformations that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, including rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account, such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows the reduced conservatism obtained by the conditions. Copyright © 2017. Published by Elsevier Ltd.
Two-Layer Variable Infiltration Capacity Land Surface Representation for General Circulation Models
Xu, L.
1994-01-01
A simple two-layer variable infiltration capacity (VIC-2L) land surface model suitable for incorporation in general circulation models (GCMs) is described. The model consists of a two-layer characterization of the soil within a GCM grid cell, and uses an aerodynamic representation of latent and sensible heat fluxes at the land surface. The effects of GCM spatial subgrid variability of soil moisture and a hydrologically realistic runoff mechanism are represented in the soil layers. The model was tested using long-term hydrologic and climatological data for Kings Creek, Kansas to estimate and validate the hydrological parameters. Surface flux data from three First International Satellite Land Surface Climatology Project Field Experiment (FIFE) intensive field campaigns in the summer and fall of 1987 in central Kansas, and from the Anglo-Brazilian Amazonian Climate Observation Study (ABRACOS) in Brazil, were used to validate the model-simulated surface energy fluxes and surface temperature.
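The variable infiltration capacity idea can be sketched with the classic capacity curve A(i) = 1 - (1 - i/i_m)**b for saturation-excess runoff. The parameter values are hypothetical, and this single-step water balance omits baseflow, drainage, and evapotranspiration:

```python
def vic_runoff(precip, w0, w_max, b=0.3):
    """Saturation-excess runoff from the variable infiltration capacity curve
    A(i) = 1 - (1 - i/i_m)**b, the assumed subgrid distribution of point
    infiltration capacities. Illustrative one-timestep sketch.
    precip: rainfall depth; w0: initial mean soil moisture; w_max: capacity."""
    i_m = w_max * (1.0 + b)                  # maximum point infiltration capacity
    # Point capacity already filled, recovered from the mean soil moisture w0
    i_0 = i_m * (1.0 - (1.0 - w0 / w_max) ** (1.0 / (1.0 + b)))
    if i_0 + precip >= i_m:                  # the whole cell saturates
        return precip - (w_max - w0)
    # Only part of the cell saturates: integrate over the capacity curve
    return precip - (w_max - w0) + w_max * (1.0 - (i_0 + precip) / i_m) ** (1.0 + b)

r = vic_runoff(precip=10.0, w0=50.0, w_max=100.0)
```

Because the curve makes some fraction of the cell saturated even when the mean soil moisture is below capacity, runoff can occur before the grid cell as a whole is full, which is the subgrid-variability effect the abstract highlights.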
A Conceptual Model for the Representation of Landforms Using Ontology Design Patterns
Guilbert, Eric; Moulin, Bernard; Cortés Murcia, Andrés
2016-06-01
A landform is an area of a terrain with its own recognisable shape. Its definition is often qualitative and inherently vague. Hence landforms are difficult to formalise in view of their extraction from a DTM. This paper presents a two-level framework for the representation of landforms. The objective is to provide a structure where landforms can be conceptually designed according to a common model which can be implemented. It follows the principle that landforms are not defined by geometrical characteristics but by salient features perceived by people. Hence, these salient features define a skeleton around which the landform is built. The first level of our model defines general concepts forming a landform prototype while the second level provides a model for the translation of these concepts and landform extraction on a DTM. The model is still under construction and preliminary results together with current developments are also presented.
The Coulomb gas representation of critical RSOS models on the sphere and the torus
International Nuclear Information System (INIS)
Foda, O.; Nienhuis, B.
1989-01-01
We derive the Coulomb gas formulation of the c<1 discrete unitary series, on the sphere and the torus, starting from the corresponding regime-III RSOS models on a square lattice with appropriate topology. We clarify the origin of the background charge, the screening charges, and the choice of operator representations in a correlation function. In the scaling limit, we obtain a bosonic action coupled to the background curvature in addition to topological terms that vanish on the Riemann sphere. Its Virasoro algebra has the central charge expected on the basis of comparing conformal dimensions. As an application, we derive general expressions for the correlation functions on the torus. (orig.)
The Coulomb gas representation of critical RSOS models on the sphere and the torus
Energy Technology Data Exchange (ETDEWEB)
Foda, O. (Rijksuniversiteit Utrecht (Netherlands). Inst. voor Theoretische Fysica); Nienhuis, B. (Rijksuniversiteit Leiden (Netherlands). Inst. Lorentz voor Theoretische Natuurkunde)
1989-10-02
We derive the Coulomb gas formulation of the c<1 discrete unitary series, on the sphere and the torus, starting from the corresponding regime-III RSOS models on a square lattice with appropriate topology. We clarify the origin of the background charge, the screening charges, and the choice of operator representations in a correlation function. In the scaling limit, we obtain a bosonic action coupled to the background curvature in addition to topological terms that vanish on the Riemann sphere. Its Virasoro algebra has the central charge expected on the basis of comparing conformal dimensions. As an application, we derive general expressions for the correlation functions on the torus. (orig.).
Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan
2016-07-01
Accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks, and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
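A sketch of the randomness-feature extraction step, assuming simple illustrative features (byte entropy, mean byte value, consecutive-repeat rate); the paper's actual feature set and its l1-regularized logistic regression / FGMM stages are not reproduced here:

```python
import math
from collections import Counter

def randomness_features(data: bytes):
    """Extract simple randomness features from a byte stream, of the kind that
    could be fed to an l1-regularized logistic regression (which would then
    drive a sparse selection among many such features). Feature choice is
    illustrative: normalized byte entropy, mean byte value, repeat rate."""
    counts = Counter(data)
    n = len(data)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    mean = sum(data) / n
    repeats = sum(1 for a, b in zip(data, data[1:]) if a == b) / max(n - 1, 1)
    return [entropy / 8.0, mean / 255.0, repeats]

f_rand = randomness_features(bytes(range(256)))      # high-entropy stream
f_text = randomness_features(b"aaaaaaaabbbbbbbb")    # low-entropy stream
```

Encrypted (and compressed) payloads look nearly uniform under such statistics, which is why randomness features separate them well from plaintext protocols.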
Alharbi, Basma Mohammed; Zhang, Xiangliang
2017-01-01
Location-Based Social Networks (LBSNs) capture individuals' whereabouts for a large portion of the population. To utilize this data for user (location)-similarity based tasks, one must map the raw data into a low-dimensional uniform feature space. However, due to the nature of LBSNs, many users have sparse and incomplete check-ins. In this work, we propose to overcome this issue by leveraging the network of friends when learning the new feature space. We first analyze the impact of friends on individuals' mobility, and show that individuals' trajectories are correlated with those of their friends and friends of friends (2-hop friends) in an online setting. Based on our observation, we propose a mixed-membership model that infers global mobility patterns from users' check-ins and their network of friends, without impairing the model's complexity. Our proposed model infers global patterns and learns new representations for both users and locations simultaneously. We evaluate the inferred patterns and compare the quality of the new user representation against baseline methods on a social link prediction problem.
Structure-reactivity modeling using mixture-based representation of chemical reactions.
Polishchuk, Pavel; Madzhidov, Timur; Gimadiev, Timur; Bodrov, Andrey; Nugmanov, Ramil; Varnek, Alexandre
2017-09-01
We describe a novel approach of representing a reaction as a combination of two mixtures: a mixture of reactants and a mixture of products. In turn, each mixture can be encoded using an earlier reported approach involving simplex descriptors (SiRMS). The feature vector representing these two mixtures results from either concatenating product and reactant descriptors or taking the difference between descriptors of products and reactants. This reaction representation does not need an explicit labeling of the reaction center. A rigorous "product-out" cross-validation (CV) strategy has been suggested. Unlike the naïve "reaction-out" CV approach based on a random selection of items, the proposed one provides a more realistic estimate of prediction accuracy for reactions resulting in novel products. The new methodology has been applied to model rate constants of E2 reactions. It has been demonstrated that use of the fragment control domain applicability approach significantly increases the prediction accuracy of the models. The models obtained with the new "mixture" approach performed better than those requiring either explicit (Condensed Graph of Reaction) or implicit (reaction fingerprints) reaction center labeling.
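The "concat" and "diff" mixture encodings can be sketched with descriptor dictionaries standing in for SiRMS simplex descriptors; the bond-count keys below are hypothetical:

```python
def reaction_vector(reactant_descs, product_descs, mode="diff"):
    """Build a reaction feature vector from descriptor dicts of the reactant
    and product mixtures. mode='concat' joins the two vectors; mode='diff'
    takes product descriptors minus reactant descriptors. No reaction-center
    labeling is needed: the change is implicit in the descriptor difference."""
    keys = sorted(set(reactant_descs) | set(product_descs))
    r = [reactant_descs.get(k, 0) for k in keys]
    p = [product_descs.get(k, 0) for k in keys]
    if mode == "concat":
        return r + p
    return [pi - ri for ri, pi in zip(r, p)]

# E2-style toy example: a C-Br and a C-H count drop, a C=C bond appears
reactants = {"C-Br": 1, "C-H": 5, "C-C": 1}
products = {"C-H": 4, "C-C": 1, "C=C": 1}
vec = reaction_vector(reactants, products)
```

In the "diff" encoding, fragments untouched by the reaction cancel to zero, so the vector concentrates on the transformation itself, which is the property that substitutes for explicit reaction-center labeling.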
Black, R. X.
2017-12-01
We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.
Hülse, Dominik; Arndt, Sandra; Ridgwell, Andy; Wilson, Jamie
2016-04-01
The ocean-sediment system, as the biggest carbon reservoir in the Earth's carbon cycle, plays a crucial role in regulating atmospheric carbon dioxide concentrations and climate. Therefore, it is essential to constrain the importance of marine carbon cycle feedbacks on global warming and ocean acidification. Arguably, the most important single component of the ocean's carbon cycle is the so-called "biological carbon pump". It transports carbon that is fixed in the light-flooded surface layer of the ocean to the deep ocean and the surface sediment, where it is degraded/dissolved or finally buried in the deep sediments. Over the past decade, progress has been made in understanding different factors that control the efficiency of the biological carbon pump and their feedbacks on the global carbon cycle and climate (i.e. ballasting = ocean acidification feedback; temperature-dependent organic matter degradation = global warming feedback; organic matter sulphurisation = anoxia/euxinia feedback). Nevertheless, many uncertainties concerning the interplay of these processes and/or their relative significance remain. In addition, current Earth System Models tend to employ empirical and static parameterisations of the biological pump. As these parametric representations are derived from a limited set of present-day observations, their ability to represent carbon cycle feedbacks under changing climate conditions is limited. The aim of my research is to combine past carbon cycling information with a spatially resolved global biogeochemical model to constrain the functioning of the biological pump and to base its mathematical representation on a more mechanistic approach. Here, I will discuss important aspects that control the efficiency of the ocean's biological carbon pump, review how these processes of first order importance are mathematically represented in existing Earth system Models of Intermediate Complexity (EMICs) and distinguish different approaches to approximate
Toward a Unified Representation of Atmospheric Convection in Variable-Resolution Climate Models
Energy Technology Data Exchange (ETDEWEB)
Walko, Robert [Univ. of Miami, Coral Gables, FL (United States)
2016-11-07
The purpose of this project was to improve the representation of convection in atmospheric weather and climate models that employ computational grids with spatially-variable resolution. Specifically, our work targeted models whose grids are fine enough over selected regions that convection is resolved explicitly, while over other regions the grid is coarser and convection is represented as a subgrid-scale process. The working criterion for a successful scheme for representing convection over this range of grid resolution was that identical convective environments must produce very similar convective responses (i.e., the same precipitation amount, rate, and timing, and the same modification of the atmospheric profile) regardless of grid scale. The need for such a convective scheme has increased in recent years as more global weather and climate models have adopted variable resolution meshes that are often extended into the range of resolving convection in selected locations.
Modeling and representation of a computer-aided conceptual design system
Energy Technology Data Exchange (ETDEWEB)
Li, Bing; Zhang, Ju Fan [Harbin Institute of Technology, Shenzhen (China); Chen, Yuan [Shandong Univ. at Weihai, Weihai (China); Hu, Ying [The Chinese Univ. of Hong Kong, Hong Kong (China)
2012-11-15
A novel hierarchical function-action-behavior-mechanism (FABM) modeling framework is proposed to conduct intelligent mapping from the overall function to the principle solution, according to the requirements of customers. Based on the hierarchical modeling framework, an object-oriented representation method is developed to express the inheritance and the interconnecting characteristics between any two objects. In addition, rules of expansion and modification in demand behavior are proposed to solve the combinational explosion problem, and combinational rules in the mechanism behavior are developed to extend the innovation of the principle solution. A case study on the pan mechanism design for a cooking robot is presented to demonstrate the implementation of intelligent reasoning based on the FABM model.
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are a key means of conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established using cloud generators. With the forward cloud generator, facial expression images can be regenerated in any number to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, conclusions and remarks are given.
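The forward normal cloud generator, the standard algorithm for regenerating samples from a cloud model's three numerical characteristics (expectation Ex, entropy En, hyper-entropy He), can be sketched as follows; the parameter values are illustrative, not features fitted from the facial expression data:

```python
import math
import random

def forward_cloud(Ex, En, He, n, seed=0):
    """Forward normal cloud generator: draw n cloud drops (x, membership)
    from the characteristics (Ex, En, He). Each drop uses a per-drop entropy
    en_i ~ N(En, He^2), so He controls the 'fuzziness of the fuzziness'."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_i = rng.gauss(En, He)            # per-drop entropy
        x = rng.gauss(Ex, abs(en_i))        # drop position ~ N(Ex, en_i^2)
        mu = math.exp(-(x - Ex) ** 2 / (2 * en_i ** 2)) if en_i else 1.0
        drops.append((x, mu))
    return drops

drops = forward_cloud(Ex=0.5, En=0.1, He=0.01, n=500)
```

With He = 0 the drops collapse onto a single Gaussian membership curve; a positive He scatters them around it, which is how the cloud model captures the uncertainty of the feature itself rather than just its spread.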
Hongyang, Yu; Zhengang, Lu; Xi, Yang
2017-05-01
The Modular Multilevel Converter (MMC) is increasingly used in high voltage DC transmission systems and high power motor drive systems, and is a major topology for high power AC-DC conversion. Because of the large module count, the complex control algorithm, and the high power application background, the MMC model used for simulation should reproduce the details of MMC operation as accurately as possible for dynamic testing of the MMC controller. So far, however, there is no simple simulation MMC model that can reproduce the switching dynamic process. In this paper, a curve-embedded full-bridge MMC modeling method with detailed representation of IGBT characteristics is proposed. The method is based on switching-curve lookup and simple circuit calculation, and is simple to implement. Simulation comparison tests under Matlab/Simulink show the proposed method to be correct.
The Schroedinger representation for φ4 theory and the O(N) σ-model
International Nuclear Information System (INIS)
Pachos, J.
1996-01-01
In this work we apply the field-theoretical Schrödinger representation to the massive φ4 theory and the O(N) σ-model in 1+1 dimensions. The Schrödinger equation for the φ4 theory is reviewed and then solved classically and semiclassically to obtain the vacuum functional as an expansion in local functionals. These results are compared with equivalent ones derived from the path integral formulation to prove their agreement with the conventional field theoretical methods. For the O(N) σ-model we construct the functional Laplacian, which is the principal ingredient of the corresponding Schrödinger equation. This result is used to construct the generalised Virasoro operators for this model and to study their algebra. (Author)
Modeling and representation of a computer-aided conceptual design system
International Nuclear Information System (INIS)
Li, Bing; Zhang, Ju Fan; Chen, Yuan; Hu, Ying
2012-01-01
A novel hierarchical function-action-behavior-mechanism (FABM) modeling framework is proposed to conduct intelligent mapping from the overall function to the principle solution, according to the requirements of customers. Based on the hierarchical modeling framework, an object-oriented representation method is developed to express the inheritance and the interconnecting characteristics between any two objects. In addition, rules of expansion and modification in demand behavior are proposed to solve the combinational explosion problem, and combinational rules in the mechanism behavior are developed to extend the innovation of the principle solution. A case study on the pan mechanism design for a cooking robot is presented to demonstrate the implementation of intelligent reasoning based on the FABM model.
Surveying, Modeling and 3D Representation of a Wreck for Diving Purposes: Cargo Ship "Vera"
Ktistis, A.; Tokmakidis, P.; Papadimitriou, K.
2017-02-01
This paper presents the results from an underwater recording of the stern part of a contemporary cargo-ship wreck. The aim of this survey was to create 3D representations of this wreck, mainly for recreational diving purposes. The key points of this paper are: a) the implementation of the underwater recording at a diving site; b) the reconstruction of a 3D model from data captured by recreational divers; and c) the development of a set of products to be used by the general public for ex situ presentation or in situ navigation. The idea behind this project is to define a simple and low cost procedure for the surveying, modeling and 3D representation of a diving site. The perspective of our team is to repeat the proposed methodology for the documentation and promotion of other diving sites with cultural features, as well as to train recreational divers in underwater surveying procedures towards public awareness and community engagement in maritime heritage.
Bisby, James A; King, John A; Brewin, Chris R; Burgess, Neil; Curran, H Valerie
2010-08-01
A dual representation model of intrusive memory proposes that personally experienced events give rise to two types of representation: an image-based, egocentric representation based on sensory-perceptual features; and a more abstract, allocentric representation that incorporates spatiotemporal context. The model proposes that intrusions reflect involuntary reactivation of egocentric representations in the absence of a corresponding allocentric representation. We tested the model by investigating the effect of alcohol on intrusive memories and, concurrently, on egocentric and allocentric spatial memory. With a double-blind independent group design participants were administered alcohol (.4 or .8 g/kg) or placebo. A virtual environment was used to present objects and test recognition memory from the same viewpoint as presentation (tapping egocentric memory) or a shifted viewpoint (tapping allocentric memory). Participants were also exposed to a trauma video and required to detail intrusive memories for 7 days, after which explicit memory was assessed. There was a selective impairment of shifted-view recognition after the low dose of alcohol, whereas the high dose induced a global impairment in same-view and shifted-view conditions. Alcohol showed a dose-dependent inverted "U"-shaped effect on intrusions, with only the low dose increasing the number of intrusions, replicating previous work. When same-view recognition was intact, decrements in shifted-view recognition were associated with increases in intrusions. The differential effect of alcohol on intrusive memories and on same/shifted-view recognition support a dual representation model in which intrusions might reflect an imbalance between two types of memory representation. These findings highlight important clinical implications, given alcohol's involvement in real-life trauma. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Experience-driven formation of parts-based representations in a model of layered visual memory
Directory of Open Access Journals (Sweden)
Jenia Jitsev
2009-09-01
Growing neuropsychological and neurophysiological evidence suggests that the visual cortex uses parts-based representations to encode, store and retrieve relevant objects. In such a scheme, objects are represented as a set of spatially distributed local features, or parts, arranged in stereotypical fashion. To encode the local appearance and to represent the relations between the constituent parts, there has to be an appropriate memory structure formed by previous experience with visual objects. Here, we propose a model of how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both running on top of fast activity dynamics with winner-take-all character modulated by an oscillatory rhythm. These neural mechanisms lay the basis for cooperation and competition between the distributed units and their synaptic connections. Choosing human face recognition as a test task, we show that, under the condition of open-ended, unsupervised incremental learning, the system is able to form memory traces for individual faces in a parts-based fashion. On a lower memory layer the synaptic structure is developed to represent local facial features and their interrelations, while the identities of different persons are captured explicitly on a higher layer. An additional property of the resulting representations is the sparseness of both the activity during recall and the synaptic patterns comprising the memory traces.
Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto
2015-04-01
The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits
Karvounis, E C; Exarchos, T P; Fotiou, E; Sakellarios, A I; Iliopoulou, D; Koutsouris, D; Fotiadis, D I
2013-01-01
With an ever increasing number of biological models available on the internet, a standardized modelling framework is required to allow information to be accessed and visualized. In this paper we propose a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of geometry, blood flow, plaque progression and stent modelling, exported by any cardiovascular disease modelling software. ART-ML has been developed and tested using ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in 3D representations. All the above described procedures integrate disparate data formats, protocols and tools. ART-ML, expanding ARTool, provides a representation that supports interoperability of the individual resources, creating a standard unified model for the description of data and, consequently, a machine-independent format for their exchange and representation. More specifically, the ARTool platform incorporates efficient algorithms which are able to perform blood flow simulations and atherosclerotic plaque evolution modelling. Integration of data layers between different modules within ARTool is based upon the interchange of information included in the ART-ML model repository. ART-ML provides a markup representation that enables the representation and management of embedded models within the cardiovascular disease modelling platform, and the storage and interchange of well-defined information. The corresponding ART-ML model incorporates all relevant information regarding geometry, blood flow, plaque progression and stent modelling procedures. All created models are stored in a model repository database which is accessible to the research community using efficient web interfaces, enabling the interoperability of any cardiovascular disease modelling software
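The abstract specifies only that ART-ML is XML-based; the fragment below is therefore purely hypothetical (element and attribute names are invented, not the real ART-ML schema) and merely illustrates how such a machine-independent model description can be stored and queried with Python's standard library.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment: tag and attribute names are invented for
# illustration and are NOT the actual ART-ML schema.
doc = """
<artery id="carotid-01">
  <geometry format="mesh3d" nodes="15200" elements="74100"/>
  <bloodFlow solver="navier-stokes" viscosityModel="newtonian"/>
  <plaqueProgression model="lipid-transport" timestepDays="30"/>
</artery>
"""

root = ET.fromstring(doc)
print(root.get("id"))                        # identifier of the artery model
print(root.find("bloodFlow").get("solver"))  # solver declared for blood flow
print([child.tag for child in root])         # the embedded sub-models
```

The point is only structural: each sub-model (geometry, flow, plaque) is an addressable element, so different modules of a platform can exchange exactly the pieces they need.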
Representation of the radiative strength functions in the practical model of cascade gamma decay
International Nuclear Information System (INIS)
Vu, D.C.; Sukhovoj, A.M.; Mitsyna, L.V.; Zeinalov, Sh.; Jovancevic, N.; Knezevic, D.; Krmar, M.; Dragic, A.
2016-01-01
The practical model of the cascade gamma decay of a neutron resonance, developed in Dubna, allows one to obtain, from the fitted intensities of two-step cascades, parameters both of the level density and of the partial widths of emission of nuclear reaction products. In the presented variant of the model, the share of phenomenological representations is minimized. Analysis of new results confirms the previous finding that the dynamics of interaction between Fermi- and Bose-nuclear states depends on the shape of the nucleus. It also follows from the ratios of densities of vibrational and quasi-particle levels that this interaction exists at least up to the neutron binding energy and probably differs for nuclei with different nucleon parities.
Method of transition from 3D model to its ontological representation in aircraft design process
Govorkov, A. S.; Zhilyaev, A. S.; Fokin, I. V.
2018-05-01
This paper proposes a method of transition from a 3D model to its ontological representation and describes its usage in the aircraft design process. The problems of design for manufacturability and design automation are also discussed. The introduced method aims to ease data exchange between important aircraft design phases, namely engineering and design control. The method is also intended to increase design speed and 3D model customizability. This requires careful selection of the complex systems (CAD/CAM/CAE/PDM) that provide the basis for integrating design with the technological preparation of production, and that allow the characteristics of products and of their manufacturing processes to be taken into account more fully. Solving this problem is important, as investment in automation defines a company's competitiveness in the years ahead.
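The transition algorithm itself is not detailed in the abstract; as a hedged sketch, one common way to move from a 3D model to an ontological representation is to flatten the model's feature tree into subject-predicate-object triples, the basic unit of an ontology. All names below are illustrative assumptions, not the authors' method.

```python
# Hypothetical feature tree of a 3D part; names are illustrative only.
model = {
    "name": "bracket",
    "features": [
        {"type": "hole", "diameter_mm": 6.0, "process": "drilling"},
        {"type": "pocket", "depth_mm": 4.0, "process": "milling"},
    ],
}

def to_triples(part):
    """Flatten a feature tree into (subject, predicate, object) triples."""
    triples = [(part["name"], "is_a", "Part")]
    for i, feat in enumerate(part["features"]):
        fid = f'{part["name"]}/feature{i}'
        triples.append((part["name"], "has_feature", fid))
        triples.append((fid, "is_a", feat["type"]))
        triples.append((fid, "made_by", feat["process"]))
    return triples

for t in to_triples(model):
    print(t)
```

Once geometry is expressed as triples like these, manufacturability queries ("which features require drilling?") become simple graph lookups rather than CAD-kernel operations.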
Spectrum recovery method based on sparse representation for segmented multi-Gaussian model
Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan
2016-09-01
Hyperspectral images (HSIs) provide excellent feature discriminability, supplying diagnostic characteristics with high spectral resolution. However, various degradations can negatively affect the spectral information, including water absorption and band-continuous noise. On the other hand, the huge data volume and strong redundancy among spectra create an intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. Reconstructing the spectral diagnostic characteristics is therefore of irreplaceable significance for subsequent applications of HSIs. This paper introduces a spectrum restoration method for HSIs that makes use of a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to represent the asymmetric spectral absorption and reflection characteristics, and its rationality and sparsity are discussed. Applying compressed sensing (CS) theory, we build a sparse representation of the SMGM. The degraded and compressed HSIs can then be reconstructed using the uninjured or key bands. Finally, we apply the low-rank matrix recovery (LRMR) algorithm as post-processing to restore the spatial details. The proposed method was tested on spectral data captured on the ground under artificial water-absorption conditions and on an AVIRIS HSI data set. The experimental results, in terms of qualitative and quantitative assessments, demonstrate its effectiveness in recovering spectral information from both degradation and lossy compression. The spectral diagnostic characteristics and the spatial geometric features are well preserved.
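The exact SMGM formulation is not given in the abstract; as a minimal illustration of the underlying idea (modelling a reflectance spectrum as a sum of Gaussian features), the sketch below fits two Gaussians to a synthetic spectrum by nonlinear least squares with SciPy. Segmentation and the sparse/CS machinery of the paper are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian absorption/reflection features."""
    return (a1 * np.exp(-(x - m1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(x - m2) ** 2 / (2 * s2 ** 2)))

# Synthetic "spectrum": two features plus mild noise (wavelengths in nm).
rng = np.random.default_rng(1)
wavelengths = np.linspace(400.0, 2400.0, 200)
true_params = (0.8, 900.0, 80.0, 0.5, 1900.0, 120.0)
spectrum = two_gaussians(wavelengths, *true_params) + rng.normal(0, 0.01, 200)

# Fit, starting from a rough initial guess of the feature locations.
p0 = (1.0, 1000.0, 100.0, 1.0, 2000.0, 100.0)
popt, _ = curve_fit(two_gaussians, wavelengths, spectrum, p0=p0)
residual = np.sqrt(np.mean((two_gaussians(wavelengths, *popt) - spectrum) ** 2))
print("rms residual:", round(float(residual), 3))
```

The compact parameter vector (six numbers for 200 bands) is what makes such a model attractive for reconstruction from a few key bands.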
On process model representation and AlF{sub 3} dynamics of aluminium electrolysis cells
Energy Technology Data Exchange (ETDEWEB)
Drengstig, Tormod
1997-12-31
This thesis develops a formal, graphically based process representation scheme for modelling complex, non-standard unit processes. The scheme is based on topological and phenomenological decompositions. The topological decomposition is the modularization of processes into modules representing volumes and boundaries, whereas the phenomenological decomposition focuses on the physical phenomena and characteristics inside these topological modules. This defines legal and illegal connections between components at all levels and facilitates a full implementation of the methodology in a computer-aided modelling tool that can interpret graphical symbols and guide modellers towards a consistent mathematical model of the process. The thesis also presents new results on the excess AlF{sub 3} and bath temperature dynamics of an aluminium electrolysis cell. A dynamic model of such a cell is developed and validated against known behaviour and real process data. There are dynamics that the model does not capture, and this is discussed further. It is hypothesized that long-term prediction of bath temperature and excess AlF{sub 3} is impossible with a current-efficiency model that considers only bath composition and temperature. A control strategy for excess AlF{sub 3} and bath temperature is proposed, based on an almost constant AlF{sub 3} input close to average consumption and on energy manipulations to compensate for disturbances. 96 refs., 135 figs., 22 tabs.
Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).
Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S
2017-01-01
This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms, which are widely used for constructing predictive models that take the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability to represent complex input-output relationships without predefining a set of basis functions, and to predict a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set from the manufacturing domain.
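The PMML markup itself cannot be reconstructed from the abstract, but the GPR predictions it encodes (posterior mean plus the confidence information mentioned above) follow the standard equations. A self-contained numpy sketch with an RBF kernel:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel matrix between point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gpr_predict(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
mean, var = gpr_predict(x, y, np.array([1.5]))
print(float(mean[0]), float(var[0]))  # mean near sin(1.5), variance small
```

The predictive variance is exactly the quantity a PMML consumer would use to report confidence bounds around the point estimate.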
CONSTRUCTION OF AGGREGATE NATIONAL ECONOMIC MODEL WITH DETAILED REPRESENTATION OF THE FOREST COMPLEX
Directory of Open Access Journals (Sweden)
Blam Yu. Sh.
2014-09-01
Full Text Available The autonomy of industrial forecasts is often exacerbated by the lack of a direct connection with economic forecasts at the macro level. On the other hand, it is desirable to simulate an industrial strategy in a fairly high degree of isolation, so that it does not depend at every moment on the description of other activities or levels of the hierarchy. To study the effects of national economic relations on the development of the industrial complex, we propose to use a spatial model of the national economy that describes the modalities of the researched industries in more detail. Quantitative parameters, obtained using the basic Interregional Cross-sectoral Optimization Model (OMMM) with respect to the external conditions of the industrial complex's development, are used to form an aggregated model with a detailed representation and insignificant loss of information. Thus, the described model is intended to harmonize national economic decisions with forecasts obtained from industry models in real terms. The conversion procedure is based on the properties of «mutual» problems and information from the basic OMMM. The final result is a production-transport cost model within a «traditional» industrial structure.
Enhancing the representation of subgrid land surface characteristics in land surface models
Directory of Open Access Journals (Sweden)
Y. Ke
2013-09-01
Full Text Available Land surface heterogeneity has long been recognized as important to represent in land surface models. In most existing land surface models, the spatial variability of surface cover is represented as a subgrid composition of multiple surface cover types, although subgrid topography also exerts major controls on surface processes. In this study, we developed a new subgrid classification method (SGC) that accounts for variability of both topography and vegetation cover. Each model grid cell was represented with a variable number of elevation classes, and each elevation class was further described by a variable number of vegetation types, optimized for each model grid given a predetermined total number of land response units (LRUs). The subgrid structure of the Community Land Model (CLM) was used to illustrate the newly developed method. Although the new method increases the computational burden of model simulation compared to the CLM subgrid vegetation representation, it greatly reduces the variation of elevation within each subgrid class and is able to explain at least 80% of the total subgrid variability in plant functional types (PFTs). The new method was also evaluated against two other subgrid methods (SGC1 and SGC2) that assign fixed numbers of elevation and vegetation classes to each model grid (SGC1: M elevation bands-N PFTs; SGC2: N PFTs-M elevation bands). Implemented at five model resolutions (0.1°, 0.25°, 0.5°, 1.0° and 2.0°) with three maximum allowed total numbers of LRUs (NLRU of 24, 18 and 12) over North America (NA), the new method yielded a more computationally efficient subgrid representation than SGC1 and SGC2, particularly at coarser model resolutions and moderate computational intensity (NLRU = 18). It also explained the most PFT and elevation variability, distributed more homogeneously in space. The SGC method will be implemented in CLM over the NA continent to assess its impacts on
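The SGC optimization is described only at a high level; the bookkeeping it rests on (partitioning a grid cell's fine pixels into elevation bands, then vegetation types within each band, and tracking the fractional area of each land response unit) can be sketched as follows, with synthetic data standing in for real topography and land cover.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic fine-resolution pixels inside one coarse model grid cell.
elevation = rng.uniform(0.0, 3000.0, size=1000)  # metres
pft = rng.integers(0, 4, size=1000)              # plant functional type id

def subgrid_classes(elev, veg, n_bands=3):
    """Fractional area of each (elevation band, PFT) land response unit."""
    edges = np.quantile(elev, np.linspace(0.0, 1.0, n_bands + 1))
    band = np.clip(np.searchsorted(edges, elev, side="right") - 1,
                   0, n_bands - 1)
    units = {}
    for b, v in zip(band, veg):
        key = (int(b), int(v))
        units[key] = units.get(key, 0) + 1
    return {k: c / len(elev) for k, c in units.items()}

fractions = subgrid_classes(elevation, pft)
print(len(fractions), "land response units; area fractions sum to",
      round(sum(fractions.values()), 6))
```

The paper's contribution is choosing how many bands and PFT classes each grid cell gets under a fixed LRU budget; this sketch fixes them for clarity.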
Prospects for improving the representation of coastal and shelf seas in global ocean models
Holt, Jason; Hyder, Patrick; Ashworth, Mike; Harle, James; Hewitt, Helene T.; Liu, Hedong; New, Adrian L.; Pickles, Stephen; Porter, Andrew; Popova, Ekaterina; Icarus Allen, J.; Siddorn, John; Wood, Richard
2017-02-01
Accurately representing coastal and shelf seas in global ocean models represents one of the grand challenges of Earth system science. They are regions of immense societal importance through the goods and services they provide, the hazards they pose, and their role in global-scale processes and cycles, e.g. carbon fluxes and dense water formation. However, they are poorly represented in the current generation of global ocean models. In this contribution, we aim to briefly characterise the problem, and then to identify the important physical processes, and their scales, needed to address this issue in the context of the options available to resolve these scales globally and the evolving computational landscape. We find barotropic and topographic scales are well resolved by the current state-of-the-art model resolutions, e.g. nominal 1/12°, and still reasonably well resolved at 1/4°; here, the focus is on process representation. We identify tides, vertical coordinates, river inflows and mixing schemes as four areas where modelling approaches can readily be transferred from regional to global modelling with substantial benefit. In terms of finer-scale processes, we find that a 1/12° global model resolves the first baroclinic Rossby radius for only ~8% of regions. We then demonstrate the benefit of improved resolution and process representation using 1/12° global- and basin-scale northern North Atlantic Nucleus for European Modelling of the Ocean (NEMO) simulations; the latter include tides and a k-ε vertical mixing scheme. These are compared with global stratification observations and 19 models from CMIP5. In terms of correlation and basin-wide rms error, the high-resolution models outperform all these CMIP5 models. The model with tides shows improved seasonal cycles compared to the high-resolution model without tides. The benefits of resolution are particularly apparent in eastern boundary upwelling zones. To explore the balance between the size of a globally refined model and that of
Energy Technology Data Exchange (ETDEWEB)
Karahan, Aydin, E-mail: karahan@mit.ed [Center for Advanced Nuclear Energy Systems, Nuclear Science and Engineering Department, Massachusetts Institute of Technology, 77 Massachusetts Avenue, 24-204, Cambridge, MA 02139 (United States); Kazimi, Mujid S. [Center for Advanced Nuclear Energy Systems, Nuclear Science and Engineering Department, Massachusetts Institute of Technology, 77 Massachusetts Avenue, 24-204, Cambridge, MA 02139 (United States)
2011-02-15
Research highlights: The importance of more physics-based modeling approaches to the fuel behavior problem is emphasized. Demonstrations are given of modeling metallic and oxide fuel dimensional changes and fission gas behavior with more physics-based and semi-empirical approaches. The importance of appropriately modeling fuel-clad chemical interaction for metallic fuel, and its implications during short- and long-term transients in sodium fast reactor applications, are discussed. - Abstract: This work emphasizes the relevance of representing the appropriate mechanisms for understanding the actual physical behavior of the fuel pin under irradiation. Replacing fully empirical, simplified treatments with more rigorous semi-empirical models that include the important pieces of physics would open the path to capturing more accurately the sensitivity to various parameters such as operating conditions, geometry and composition, and would enhance the uncertainty quantification process. Steady-state and transient fuel behavior demonstration examples and their implications are given for sodium fast reactor metallic fuels using FEAST-METAL. The importance of appropriate modeling of fuel-clad mechanical interaction and fuel-clad chemical interaction for metallic fuels is emphasized. Furthermore, validation efforts for oxide fuel pellet swelling behavior at high-temperature, high-burnup LWR conditions and a comparison with the FRAPCON-EP and FRAPCON-3.4 codes are given. The value of discriminating among the oxide fuel swelling modes, instead of applying a single linear rate, is pointed out. Future directions in fuel performance modeling are addressed.
On the τ(2)-model in the chiral Potts model and cyclic representation of the quantum group Uq(sl2)
International Nuclear Information System (INIS)
Roan Shishyr
2009-01-01
We identify the precise relationship between the five-parameter τ(2)-family in the N-state chiral Potts model and XXZ chains with U_q(sl_2)-cyclic representation. By studying the Yang-Baxter relation of the six-vertex model, we discover a one-parameter family of L-operators in terms of the quantum group U_q(sl_2). When N is odd, the N-state τ(2)-model can be regarded as the XXZ chain of U_q(sl_2) cyclic representations with q^N = 1. The symmetry algebra of the τ(2)-model is described by the quantum affine algebra U_q(ŝl_2) via the canonical representation. In general, for an arbitrary N, we show that the XXZ chain with a U_q(sl_2)-cyclic representation for q^(2N) = 1 is equivalent to two copies of the same N-state τ(2)-model. (fast track communication)
International Nuclear Information System (INIS)
Deenen, J.; Quesne, C.
1985-01-01
Both non-Hermitian Dyson and Hermitian Holstein-Primakoff representations of the Sp(2d,R) algebra are obtained when the latter is restricted to a positive discrete series irreducible representation ⟨λ_1+n/2, …, λ_d+n/2⟩. For such purposes, some results for boson representations, recently deduced from a study of the Sp(2d,R) partially coherent states, are combined with some standard techniques of boson expansion theories. The introduction of Usui operators enables the establishment of useful relations between the various boson representations. Two Dyson representations of the Sp(2d,R) algebra are obtained in compact form in terms of ν = d(d+1)/2 pairs of boson creation and annihilation operators, and of an extra U(d) spin, characterized by the irreducible representation [λ_1 … λ_d]. In contrast to what happens when λ_1 = … = λ_d = λ, it is shown that the Holstein-Primakoff representation of the Sp(2d,R) algebra cannot be written in such a compact form for a generic irreducible representation. Explicit expansions are, however, obtained by extending the Marumori, Yamamura, and Tokunaga method of boson expansion theories. The Holstein-Primakoff representation is then used to prove that, when restricted to the Sp(2d,R) irreducible representation ⟨λ_1+n/2, …, λ_d+n/2⟩, the dn-dimensional harmonic oscillator Hamiltonian has a U(ν) × SU(d) symmetry group
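As a concrete, much simpler illustration of a Holstein-Primakoff boson representation (for su(2) rather than Sp(2d,R)), the spin operators can be built from a single boson on a truncated Fock space and their commutation relations verified numerically:

```python
import numpy as np

S = 1.0                       # spin; a Fock space of dimension 2S+1 suffices
dim = int(2 * S + 1)

# Boson annihilation operator a on the truncated Fock space {|0>, ..., |2S>}.
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
n = a.conj().T @ a            # number operator a†a

# Holstein-Primakoff map: Sz = S - n, S+ = sqrt(2S - n) a, S- = (S+)†.
sqrt_factor = np.diag(np.sqrt(np.clip(2 * S - np.diag(n), 0, None)))
Sp = sqrt_factor @ a
Sm = Sp.conj().T
Sz = S * np.eye(dim) - n

# The su(2) commutation relations hold exactly on this (2S+1)-dim space.
assert np.allclose(Sp @ Sm - Sm @ Sp, 2 * Sz)
assert np.allclose(Sz @ Sp - Sp @ Sz, Sp)
print("su(2) commutation relations verified")
```

The same pattern (operator square roots of boson number operators) is what makes the general Sp(2d,R) Holstein-Primakoff form non-compact, as the abstract notes.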
Lake Representations in Global Climate Models: An End-User Perspective
Rood, R. B.; Briley, L.; Steiner, A.; Wells, K.
2017-12-01
The weather and climate in the Great Lakes region of the United States and Canada are strongly influenced by the lakes. Within global climate models, lakes are incorporated in many ways. If one is interested in quantitative climate information for the Great Lakes, then it is a first-principle requirement that end-users of climate model simulation data, whether scientists or practitioners, know if and how lakes are incorporated into models. We pose the basic question: how are lakes represented in CMIP models? Despite significant efforts by the climate community to document and publish basic information about climate models, it is unclear how to answer this question. With significant knowledge of the practice of the field, a reasonable starting point is the ES-DOC Comparator (https://compare.es-doc.org/). Once at this interface to model information, the end-user is faced with the need for more knowledge about the practice and culture of the discipline. For example, lakes are often categorized as a type of land, a counterintuitive concept. In some models, though, lakes are specified in ocean models. There is little evidence and little confidence that the information obtained through this process is complete or accurate; in fact, it is verifiably not accurate. This experience, then, motivates identifying and finding either human experts or technical documentation for each model. The conclusion from this exercise is that it can take months or longer to provide a defensible answer to if and how lakes are represented in climate models. Our experience with this question is not unique. This talk documents our experience, explores the barriers we have identified, and suggests strategies for reducing those barriers.
Directory of Open Access Journals (Sweden)
Akihiro eEguchi
2015-08-01
Full Text Available Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognise the whole object.
Strange statistics, braid group representations and multipoint functions in the N-component model
International Nuclear Information System (INIS)
Lee, H.C.; Ge, M.L.; Couture, M.; Wu, Y.S.
1989-01-01
The statistics of fields in low dimensions is studied from the point of view of the braid group B_n of n strings. Explicit representations M_R for the N-component model, N = 2 to 5, are derived by solving the Yang-Baxter-like braid group relations for the statistical matrix R, which describes the transformation of the bilinear product of two N-component fields under the transposition of coordinates. When R² ≠ 1 the statistics is neither Bose-Einstein nor Fermi-Dirac; it is strange. It is shown that for each N, the (N+1)-parameter family of solutions obtained is the most general one under a given set of constraints including charge conservation. Extended Nth-order (N > 2) Alexander-Conway relations for link polynomials are derived. They depend nonhomogeneously on only one of the N+1 parameters. The N = 3 and 4 relations agree with those previously derived
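The N-component matrices themselves are not reproduced in the abstract, but the braid-group relation they must satisfy can be checked numerically. The sketch below uses the standard two-component braiding matrix associated with the six-vertex model as a stand-in for the paper's solutions, and verifies both the braid relation b_i b_{i+1} b_i = b_{i+1} b_i b_{i+1} and the R² ≠ 1 ("strange statistics") property.

```python
import numpy as np

q = 1.7  # generic deformation parameter

# Braiding matrix on C^2 (x) C^2 in the basis {00, 01, 10, 11}; this is the
# well-known two-component (six-vertex/Hecke) solution, used here as a
# stand-in for the N-component matrices derived in the paper.
R = np.array([[q,       0.0, 0.0, 0.0],
              [0.0, q - 1/q, 1.0, 0.0],
              [0.0,     1.0, 0.0, 0.0],
              [0.0,     0.0, 0.0,   q]])

I = np.eye(2)
R12 = np.kron(R, I)   # acts on strands 1 and 2 of three strands
R23 = np.kron(I, R)   # acts on strands 2 and 3

# Braid relation (Yang-Baxter equation in braid form).
assert np.allclose(R12 @ R23 @ R12, R23 @ R12 @ R23)

# R^2 != 1: transposing twice is not the identity, so the statistics is
# neither Bose-Einstein nor Fermi-Dirac.
assert not np.allclose(R @ R, np.eye(4))
print("braid relation verified; statistics is strange")
```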
DEFF Research Database (Denmark)
Mullins, Michael
Contemporary communicational and informational processes contribute to the shaping of our physical environment by having a powerful influence on the process of design. Applications of virtual reality (VR) are transforming the way architecture is conceived and produced by introducing dynamic… elements into the process of design. Through its immersive properties, virtual reality allows access to a spatial experience of a computer model very different from both screen-based simulations and traditional forms of architectural representation. The dissertation focuses on processes of the current… representation? How is virtual reality used in public participation and how do virtual environments affect participatory decision making? How does VR thus affect the physical world of the built environment? Given the practical collaborative possibilities of immersive technology, how can they best be implemented…
A Better Representation of European Croplands into a Global Biosphere Model
Gervois, S.; de Noblet, N.; Viovy, N.; Ciais, P.; Brisson, N.; Seguin, B.
2002-12-01
Croplands cover a quarter of Europe's surface (about a hundred million hectares); their impact on carbon and water fluxes must therefore be estimated. Global biosphere models such as ORCHIDEE (http://www.ipsl.jussieu.fr/~ssipsl/) were conceived to simulate natural ecosystems only, so croplands are often described as grasslands. Cropland productivity depends not only on climate and soil conditions but also on irrigation, fertiliser impact, date of sowing... In addition, crop species are usually selected genetically to shorten and accelerate their growth. Agronomic models such as STICS (Brisson et al. 1998) give a more realistic picture of croplands, as they are especially designed to account for this human forcing. On the other hand, they can be used at the local scale only. First we evaluate the ability of the two models to reproduce the seasonal behaviour of the leaf area index (LAI), the aerial biomass, and the exchanges of water vapour and CO2 with the atmosphere. For that we compare the model outputs with measurements performed at five sites that are representative of the most common European crops (wheat, corn, soybean). As expected, the agronomic model STICS behaves better than the generic model ORCHIDEE in representing the seasonal cycle of the above variables. In order to get a realistic representation of cropland areas at the regional scale, we decided to couple ORCHIDEE with STICS. We first present the main steps of the coupling procedure. The principle consists in forcing ORCHIDEE with five more realistic outputs of STICS: LAI, date of harvest, nitrogen stress, root profile, and vegetation height. On the other hand, ORCHIDEE computes its own carbon and water balance. The allocation scheme was also modified in ORCHIDEE in order to conserve the coherence between LAI and leaf biomass, and we added a harvest module to ORCHIDEE. The coupled model was validated against carbon and water fluxes observed at two fields (wheat and corn) in the US. We also
Dal Gesso, S.; Van der Dussen, J.J.; Siebesma, A.P.; De Roode, S.R.; Boutle, I.A.; Kamae, Y.; Roehrig, R.; Vial, J.
2015-01-01
Six Single-Column Model (SCM) versions of climate models are evaluated on the basis of their representation of the dependence of the stratocumulus-topped boundary layer regime on the free tropospheric thermodynamic conditions. The study includes two idealized experiments corresponding to the
Al-Balushi, Sulaiman M.; Al-Hajri, Sheikha H.
2014-01-01
The purpose of the current study is to explore the impact of associating animations with concrete models on eleventh-grade students' comprehension of different visual representations in organic chemistry. The study used a post-test control group quasi-experimental design. The experimental group (N = 28) used concrete models, submicroscopic…
Directory of Open Access Journals (Sweden)
Su Yang
Full Text Available Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of the time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the method has good scalability and is applicable to large-scale networks.
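The paper's precise estimator is not given in the abstract; a minimal sketch of the underlying idea (predicting one sensor's flow as a sparse linear combination of many sensors' lagged readings, so that the relevant "spatial context" is selected automatically) can be written as l1-regularized least squares solved by iterative soft-thresholding (ISTA). All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic lagged readings from 50 sensors; only 3 of them actually
# influence the target sensor (the sparse "spatial context").
n_samples, n_sensors = 200, 50
X = rng.normal(size=(n_samples, n_sensors))
true_w = np.zeros(n_sensors)
true_w[[4, 17, 33]] = [1.5, -2.0, 1.0]
y = X @ true_w + 0.01 * rng.normal(size=n_samples)

def ista(X, y, lam=5.0, n_iter=500):
    """Minimize 0.5*||Xw - y||^2 + lam*||w||_1 by soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ w - y)          # gradient of the quadratic term
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w

w = ista(X, y)
support = sorted(np.where(np.abs(w) > 0.1)[0].tolist())
print("selected sensors:", support)
```

The recovered support identifies which sensors matter for this target, with no network topology supplied, mirroring the scalability claim in the abstract.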
Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie
2015-01-01
An AgMIP framework for improved agricultural representation in integrated assessment models
Ruane, Alex C.; Rosenzweig, Cynthia; Asseng, Senthold; Boote, Kenneth J.; Elliott, Joshua; Ewert, Frank; Jones, James W.; Martre, Pierre; McDermid, Sonali P.; Müller, Christoph; Snyder, Abigail; Thorburn, Peter J.
2017-12-01
Integrated assessment models (IAMs) hold great potential to assess how future agricultural systems will be shaped by socioeconomic development, technological innovation, and changing climate conditions. By coupling with climate and crop model emulators, IAMs have the potential to resolve important agricultural feedback loops and identify unintended consequences of socioeconomic development for agricultural systems. Here we propose a framework to develop robust representation of agricultural system responses within IAMs, linking downstream applications with model development and the coordinated evaluation of key climate responses from local to global scales. We survey the strengths and weaknesses of protocol-based assessments linked to the Agricultural Model Intercomparison and Improvement Project (AgMIP), each utilizing multiple sites and models to evaluate crop response to core climate changes including shifts in carbon dioxide concentration, temperature, and water availability, with some studies further exploring how climate responses are affected by nitrogen levels and adaptation in farm systems. Site-based studies with carefully calibrated models encompass the largest number of activities; however, they are limited in their ability to capture the full range of global agricultural system diversity. Representative site networks provide more targeted response information than broadly-sampled networks, with limitations stemming from difficulties in covering the diversity of farming systems. Global gridded crop models provide comprehensive coverage, although with large challenges for calibration and quality control of inputs. Diversity in climate responses underscores that crop model emulators must distinguish between regions and farming systems while recognizing model uncertainty. Finally, to bridge the gap between bottom-up and top-down approaches we recommend the deployment of a hybrid climate response system employing a representative network of sites to bias
Ghimire, B.; Riley, W. J.; Koven, C.
2013-12-01
Nitrogen is the most important nutrient limiting plant carbon assimilation and growth, and is required for production of photosynthetic enzymes, growth and maintenance respiration, and maintaining cell structure. The forecasted rise in plant-available nitrogen through atmospheric nitrogen deposition and the release of locked soil nitrogen by permafrost thaw in high-latitude ecosystems is likely to result in an increase in plant productivity. However, a mechanistic representation of plant nitrogen dynamics is lacking in earth system models. Most earth system models ignore the dynamic nature of plant nutrient uptake and allocation, and further lack tight coupling of below- and above-ground processes. In these models, an increase in nitrogen uptake does not translate to a corresponding increase in photosynthesis parameters, such as maximum Rubisco capacity and electron transfer rate. We present an improved modeling framework implemented in the Community Land Model version 4.5 (CLM4.5) for dynamic plant nutrient uptake and allocation to different plant parts, including leaf enzymes. This modeling framework relies on imposing a more realistic, flexible carbon-to-nitrogen stoichiometric ratio for different plant parts. The model mechanistically responds to plant nitrogen uptake and leaf allocation through changes in photosynthesis parameters. We produce global simulations and examine the impacts of the improved nitrogen cycling. The improved model is evaluated against multiple observations, including the TRY database of global plant traits, nitrogen fertilization observations, and 15N tracer studies. Global simulations with this new version of CLM4.5 showed better agreement with the observations than the default CLM4.5-CN model, and captured the underlying mechanisms associated with the plant nitrogen cycle.
The polygonal model: A simple representation of biomolecules as a tool for teaching metabolism.
Bonafe, Carlos Francisco Sampaio; Bispo, Jose Ailton Conceição; de Jesus, Marcelo Bispo
2018-01-01
Metabolism involves numerous reactions and organic compounds that the student must master to understand adequately the processes involved. Part of biochemical learning should include some knowledge of the structure of biomolecules, although the acquisition of such knowledge can be time-consuming and may require significant effort from the student. In this report, we describe the "polygonal model" as a new means of graphically representing biomolecules. This model is based on the use of geometric figures such as open triangles, squares, and circles to represent hydroxyl, carbonyl, and carboxyl groups, respectively. The usefulness of the polygonal model was assessed by undergraduate students in a classroom activity that consisted of "transforming" molecules from Fischer models to polygonal models and vice versa. The survey was applied to 135 undergraduate Biology and Nursing students. Students found the model easy to use, and we noted that it allowed identification of students' misconceptions in basic concepts of organic chemistry, such as stereochemistry and organic groups, which could then be corrected. The students considered the polygonal model easier and faster for representing molecules than Fischer representations, without loss of information. These findings indicate that the polygonal model can facilitate the teaching of metabolism when the structures of biomolecules are discussed. Overall, the polygonal model promoted contact with chemical structures, e.g. through drawing activities, and encouraged student-student dialog, thereby facilitating biochemical learning. © 2017 The International Union of Biochemistry and Molecular Biology, 46(1):66-75, 2018.
Energy Technology Data Exchange (ETDEWEB)
Niyogi, Devdutta S. [Purdue
2013-06-07
The CLASIC experiment was conducted over the US southern great plains (SGP) in June 2007 with the objective of developing an enhanced understanding of cumulus convection, particularly as it relates to land surface conditions. The project was designed to assist with improving the representation of land-atmosphere convection initiation, which is important for global and regional models. The study helped address one of the critical documented deficiencies in the models central to the ARM objectives for cumulus convection initiation, particularly under summertime conditions. The project was guided by a scientific question building on the CLASIC theme questions: What is the effect of improved land surface representation on the ability of coupled models to simulate cumulus and convection initiation? The focus was on the US Southern Great Plains region. Since the CLASIC period was anomalously wet, the strategy was to use other periods and domains to develop a comparative assessment for the CLASIC data period, and to understand the mechanisms by which the anomalously wet conditions affected tropical systems and convection over land. The data periods include the IHOP 2002 field experiment, which covered roughly the same domain as CLASIC in the SGP, and some of the DOE-funded Ameriflux datasets.
Rajamani, Sripriya; Chen, Elizabeth S; Lindemann, Elizabeth; Aldekhyyel, Ranyah; Wang, Yan; Melton, Genevieve B
2018-02-01
Reports by the National Academy of Medicine and leading public health organizations advocate including occupational information as part of an individual's social context. Given recent National Academy of Medicine recommendations on occupation-related data in the electronic health record, there is a critical need for improved representation. The National Institute for Occupational Safety and Health has developed an Occupational Data for Health (ODH) model, currently in draft format. This study aimed to validate the ODH model by mapping occupation-related elements from resources representing recommendations, standards, public health reports and surveys, and research measures, along with preliminary evaluation of associated value sets. All 247 occupation-related items across 20 resources mapped to the ODH model. Recommended value sets had high variability across the evaluated resources. This study demonstrates the ODH model's value, the multifaceted nature of occupation information, and the critical need for occupation value sets to support clinical care, population health, and research. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.
Three-moment representation of rain in a cloud microphysics model
Paukert, M.; Fan, J.; Rasch, P. J.; Morrison, H.; Milbrandt, J.; Khain, A.; Shpund, J.
2017-12-01
Two-moment microphysics schemes have been commonly used for cloud simulation in models across different scales, from large-eddy simulations to global climate models. These schemes have yielded valuable insights into cloud and precipitation processes; however, the size distributions are limited to two degrees of freedom, and thus the shape parameter is typically fixed or diagnosed. We have developed a three-moment approach for the rain category in order to provide an additional degree of freedom to the size distribution and thereby improve the cloud microphysics representation for more accurate weather and climate simulations. The approach is applied to the Predicted Particle Properties (P3) scheme. In addition to the rain number and mass mixing ratios predicted in the two-moment P3, we now include a prognostic equation for the sixth moment of the size distribution (radar reflectivity), thus allowing the shape parameter to evolve freely. We employ the spectral bin microphysics (SBM) model to formulate the three-moment process rates in P3 for drop collisions and breakup. We first test the three-moment scheme with a maritime stratocumulus case from the VOCALS field campaign, and compare the model results with respect to cloud and precipitation properties from the new P3 scheme, the original two-moment P3 scheme, the SBM, and in-situ aircraft measurements. The improved simulation results from the new P3 scheme will be discussed and physically explained.
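The extra degree of freedom from a third prognostic moment can be illustrated with the idealized gamma size distribution N(D) = N0 D^mu exp(-lambda D) commonly assumed for rain: the ratio M3^2/(M0*M6) of the mass, number, and reflectivity moments depends on the shape parameter mu alone, so tracking the sixth moment lets mu be recovered rather than fixed. The numerical sketch below assumes this standard gamma form; the actual P3 implementation is considerably more involved.

```python
import math

def moment_ratio(mu):
    """G = M3^2 / (M0 * M6) for a gamma PSD N(D) = N0 * D^mu * exp(-lam*D).
    Using M_k = N0 * Gamma(mu+k+1) / lam**(mu+k+1), both N0 and lam cancel,
    so G depends on the shape parameter mu alone."""
    return math.gamma(mu + 4.0) ** 2 / (math.gamma(mu + 1.0) * math.gamma(mu + 7.0))

def shape_from_moments(m0, m3, m6, lo=0.0, hi=20.0):
    """Recover mu by bisection; moment_ratio is monotone increasing in mu."""
    g = m3 ** 2 / (m0 * m6)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if moment_ratio(mid) < g:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Forward check: synthesize the three moments for mu = 2.5, then invert.
mu_true, lam, n0 = 2.5, 3.0, 1e3
m = {k: n0 * math.gamma(mu_true + k + 1) / lam ** (mu_true + k + 1)
     for k in (0, 3, 6)}
mu_est = shape_from_moments(m[0], m[3], m[6])
print(round(mu_est, 3))  # 2.5
```

With only two prognostic moments (M0 and M3), this ratio is unavailable and mu must be fixed or diagnosed, which is exactly the limitation the three-moment scheme removes.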
International Nuclear Information System (INIS)
Mercado, Lina M.; Huntingford, Chris; Gash, John H.C.; Cox, Peter M.; Jogireddy, Venkata
2007-01-01
The Joint UK Land Environment Simulator (JULES), the land surface scheme of the Hadley Centre General Circulation Models (GCMs), which is based on the Met Office Surface Exchange Scheme (MOSES), has been improved to contain an explicit description of light interception at different canopy levels, which consequently leads to a multilayer approach to scaling from leaf- to canopy-level photosynthesis. We test the improved JULES model at a site in the Amazonian rainforest by comparing against measurements of vertical profiles of radiation through the canopy, eddy covariance measurements of carbon and energy fluxes, and also measurements of carbon isotopic fractionation from top-canopy leaves. Overall, the new light interception formulation improves modelled photosynthetic carbon uptake compared to the standard big-leaf approach used in the original JULES formulation. Additional model improvement was not significant when incorporating more realistic vertical variation of photosynthetic capacity. Even with the improved representation of radiation interception, JULES simulations of net carbon uptake underestimate eddy covariance measurements by 14%. This discrepancy can be removed either by increasing the photosynthetic capacity throughout the canopy or by explicitly including light inhibition of leaf respiration. Along with published evidence of such inhibition of leaf respiration, our study suggests this effect should be considered for inclusion in other GCMs.
Zelenyak, Andreea-Manuela; Schorer, Nora; Sause, Markus G R
2018-02-01
This paper presents a method for embedding realistic defect geometries of a fiber reinforced material in a finite element modeling environment in order to simulate active ultrasonic inspection. When ultrasonic inspection is used experimentally to investigate the presence of defects in composite materials, the microscopic defect geometry may cause signal characteristics that are difficult to interpret. Hence, modeling of this interaction is key to improving our understanding and way of interpreting the acquired ultrasonic signals. To model the true interaction of the ultrasonic wave field with defect structures such as pores, cracks or delaminations, a realistic three-dimensional geometry reconstruction is required. We present a 3D-image-based reconstruction process which converts computed tomography data into adequate surface representations ready to be embedded for processing with finite element methods. Subsequent modeling using these geometries uses a multi-scale and multi-physics simulation approach which results in quantitative A-scan ultrasonic signals that can be directly compared with experimental signals. Therefore, besides the properties of the composite material, a full transducer implementation, piezoelectric conversion and simultaneous modeling of the attached circuit are applied. Comparison between simulated and experimental signals shows very good agreement in electrical voltage amplitude and signal arrival time and thus validates the proposed modeling approach. Simulating ultrasound wave propagation in a medium with a realistic geometry clearly shows a difference in how the disturbance of the waves takes place and finally allows more realistic modeling of A-scans. Copyright © 2017 Elsevier B.V. All rights reserved.
Updating representation of land surface-atmosphere feedbacks in airborne campaign modeling analysis
Huang, M.; Carmichael, G. R.; Crawford, J. H.; Chan, S.; Xu, X.; Fisher, J. A.
2017-12-01
An updated modeling system to support airborne field campaigns is being built at NASA Ames Pleiades, with a focus on adjusting the representation of land surface-atmosphere feedbacks. The main updates, referring to previous experiences with ARCTAS-CARB and CalNex in the western US to study air pollution inflows, include: 1) migrating the WRF (Weather Research and Forecasting) coupled land surface model from Noah to improved/more complex models, especially Noah-MP and Rapid Update Cycle; 2) enabling the WRF land initialization with suitably spun-up land model output; 3) incorporating satellite land cover, vegetation dynamics, and soil moisture data (i.e., assimilating Soil Moisture Active Passive data using the ensemble Kalman filter approach) into WRF. Examples are given comparing the model fields with available aircraft observations during spring-summer 2016 field campaigns that took place on the eastern side of continents (KORUS-AQ in South Korea and ACT-America in the eastern US), the air pollution export regions. Under fair-weather and stormy conditions, air pollution vertical distributions and column amounts, as well as the impact from the land surface, are compared. These help identify challenges and opportunities for LEO/GEO satellite remote sensing and modeling of air quality in the northern hemisphere. Finally, we briefly show applications of this system to simulating Australian conditions, which would explore the needs for further development of the observing system in the southern hemisphere and inform the Clean Air and Urban Landscapes (https://www.nespurban.edu.au) modelers.
Improving rainfall representation for large-scale hydrological modelling of tropical mountain basins
Zulkafli, Zed; Buytaert, Wouter; Onof, Christian; Lavado, Waldo; Guyot, Jean-Loup
2013-04-01
Errors in the forcing data are sometimes overlooked in hydrological studies even when they could be the most important source of uncertainty. The latter particularly holds true in tropical countries with short historical records of rainfall monitoring and remote areas with sparse rain gauge networks. In such instances, alternative data such as remotely sensed precipitation from the TRMM (Tropical Rainfall Measuring Mission) satellite have been used. These provide a good spatial representation of rainfall processes but have been established in the literature to contain volumetric biases that may impair the results of hydrological modelling or, worse, are compensated during model calibration. In this study, we analysed precipitation time series from the TMPA (TRMM Multi-satellite Precipitation Analysis, version 6) against measurements from over 300 gauges in the Andes and Amazon regions of Peru and Ecuador. We found moderately good monthly correlation between the pixel and gauge pairs but a severe underestimation of rainfall amounts and wet days. The discrepancy between the time series pairs is particularly visible over the east side of the Andes and may be attributed to localized and orographically driven high-intensity rainfall, which the satellite product may have limited skill in capturing due to technical and scale issues. This consequently results in a low bias in the simulated streamflow volumes further downstream. With the recently released TMPA version 7, the biases are reduced. This work further explores several approaches to merge the two sources of rainfall measurements, each with a different spatial and temporal support, with the objective of improving the representation of rainfall in hydrological simulations. The methods used are (1) mean bias correction and (2) data assimilation using Kalman filter Bayesian updating. The results are evaluated by means of (1) a comparison of runoff ratios (the ratio of the total runoff and the total precipitation over an
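The first of the two merging approaches, mean bias correction, is simple enough to sketch: scale the satellite series so that its long-term total matches the gauges, preserving the satellite's temporal pattern while removing the volumetric underestimation. The monthly values below are hypothetical, purely for illustration, not from the study.

```python
import numpy as np

# Hypothetical monthly rainfall (mm): a TMPA pixel and a co-located gauge.
satellite = np.array([120.0, 80.0, 60.0, 150.0, 90.0, 40.0])
gauge     = np.array([180.0, 110.0, 95.0, 210.0, 140.0, 70.0])

# Multiplicative mean bias factor: total gauge rainfall / total satellite rainfall.
bias_factor = gauge.sum() / satellite.sum()
corrected = satellite * bias_factor

# The corrected series keeps the satellite's month-to-month pattern but
# matches the gauge's long-term volume, addressing the underestimation
# described in the abstract.
print(round(bias_factor, 3), round(float(corrected.sum()), 1))
```

The Kalman filter approach mentioned as method (2) goes further by weighting the two sources according to their error variances at each time step, rather than applying a single scaling factor.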
Fock model and Segal-Bargmann transform for minimal representations of Hermitian Lie groups
DEFF Research Database (Denmark)
Hilgert, Joachim; Kobayashi, Toshiyuki; Möllers, Jan
2012-01-01
For any Hermitian Lie group G of tube type we construct a Fock model of its minimal representation. The Fock space is defined on the minimal nilpotent K_C-orbit X in p_C and the L^2-inner product involves a K-Bessel function as density. Here K is a maximal compact subgroup of G, and g_C=k_C+p_C is a complexified Cartan decomposition. In this realization the space of k-finite vectors consists of holomorphic polynomials on X. The reproducing kernel of the Fock space is calculated explicitly in terms of an I-Bessel function. We further find an explicit formula of a generalized Segal-Bargmann transform which intertwines the Schroedinger and Fock models. Its kernel involves the same I-Bessel function. Using the Segal-Bargmann transform we also determine the integral kernel of the unitary inversion operator in the Schroedinger model, which is given by a J-Bessel function.
Strong coupling and quasispinor representations of the SU(3) rotor model
International Nuclear Information System (INIS)
Rowe, D.J.; De Guise, H.
1992-01-01
We define a coupling scheme, in close parallel to the coupling scheme of Elliott and Wilsdon, in which nucleonic intrinsic spins are strongly coupled to SU(3) spatial wave functions. The scheme is proposed for shell-model calculations in strongly deformed nuclei and for semimicroscopic analyses of rotations in odd-mass nuclei and other nuclei for which the spin-orbit interaction is believed to play an important role. The coupling scheme extends the domain of utility of the SU(3) model, and the symplectic model, to heavy nuclei and odd-mass nuclei. It is based on the observation that the low angular-momentum states of an SU(3) irrep have properties that mimic those of a corresponding irrep of the rotor algebra. Thus, we show that strongly coupled spin-SU(3) bands behave like strongly coupled rotor bands with properties that approach those of irreducible representations of the rigid-rotor algebra in the limit of large SU(3) quantum numbers. Moreover, we determine that the low angular-momentum states of a strongly coupled band of states of half-odd integer angular momentum behave to a high degree of accuracy as if they belonged to an SU(3) irrep. These are the quasispinor SU(3) irreps referred to in the title. (orig.)
Kuipers, G.; van der Laan, E.; Arfini, E.A.G.
2017-01-01
This article presents a comparative content analysis of gender representation in fashion magazines in Italy and the Netherlands. Updating Goffman’s classic study of Gender Advertisements, we study the intersections of gender, professional role, country and time in media representation. Thus, we
Pairing FLUXNET sites to validate model representations of land-use/land-cover change
Chen, Liang; Dirmeyer, Paul A.; Guo, Zhichang; Schultz, Natalie M.
2018-01-01
Land surface energy and water fluxes play an important role in land-atmosphere interactions, especially for the climatic feedback effects driven by land-use/land-cover change (LULCC). These have long been documented in model-based studies, but the performance of land surface models in representing LULCC-induced responses has not been well investigated. In this study, measurements from proximate paired (open versus forest) flux tower sites are used to represent observed deforestation-induced changes in surface fluxes, which are compared with simulations from the Community Land Model (CLM) and the Noah Multi-Parameterization (Noah-MP) land model. Point-scale simulations suggest the CLM can represent the observed diurnal and seasonal changes in net radiation (Rnet) and ground heat flux (G), but difficulties remain in the energy partitioning between latent (LE) and sensible (H) heat flux. The CLM does not capture the observed decrease in daytime LE, and overestimates the increase in H during summer. These deficiencies are mainly associated with the model's greater biases over forest land-cover types and the parameterization of soil evaporation. Global gridded simulations with the CLM show uncertainties in the estimation of LE and H at the grid level for regional and global simulations. Noah-MP exhibits a similar ability to simulate the surface flux changes, but with larger biases in H, G, and Rnet change during late winter and early spring, which are related to a deficiency in estimating albedo. Differences in meteorological conditions between paired sites are not a factor in these results. Attention needs to be devoted to improving the representation of surface heat flux processes in land models to increase confidence in LULCC simulations.
Karvounis, E C; Tsakanikas, V D; Fotiou, E; Fotiadis, D I
2010-01-01
The paper proposes a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and reuse of models of blood flow, mass transport and plaque formation exported by ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in easy-to-handle 3D representations. The platform incorporates efficient algorithms which are able to perform blood flow simulation. In addition, atherosclerotic plaque development is estimated taking into account morphological, flow and genetic factors. ART-ML provides an XML format that enables the representation and management of embedded models within the ARTool platform and the storage and interchange of well-defined information. This approach facilitates model creation, model exchange, model reuse and result evaluation.
Collins, Tom; Tillmann, Barbara; Barrett, Frederick S; Delbé, Charles; Janata, Petr
2014-01-01
Listeners' expectations for melodies and harmonies in tonal music are perhaps the most studied aspect of music cognition. Long debated has been whether faster response times (RTs) to more strongly primed events (in a music theoretic sense) are driven by sensory or cognitive mechanisms, such as repetition of sensory information or activation of cognitive schemata that reflect learned tonal knowledge, respectively. We analyzed over 300 stimuli from 7 priming experiments comprising a broad range of musical material, using a model that transforms raw audio signals through a series of plausible physiological and psychological representations spanning a sensory-cognitive continuum. We show that RTs are modeled, in part, by information in periodicity pitch distributions, chroma vectors, and activations of tonal space--a representation on a toroidal surface of the major/minor key relationships in Western tonal music. We show that in tonal space, melodies are grouped by their tonal rather than timbral properties, whereas the reverse is true for the periodicity pitch representation. While tonal space variables explained more of the variation in RTs than did periodicity pitch variables, suggesting a greater contribution of cognitive influences to tonal expectation, a stepwise selection model contained variables from both representations and successfully explained the pattern of RTs across stimulus categories in 4 of the 7 experiments. The addition of closure--a cognitive representation of a specific syntactic relationship--succeeded in explaining results from all 7 experiments. We conclude that multiple representational stages along a sensory-cognitive continuum combine to shape tonal expectations in music. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
A model of adaptive decision-making from representation of information environment by quantum fields
Bagarello, F.; Haven, E.; Khrennikov, A.
2017-10-01
We present a mathematical model of decision-making (DM) by agents acting in a complex and uncertain environment (combining a huge variety of economic, financial, behavioural and geopolitical factors). To describe the interaction of agents with it, we apply the formalism of quantum field theory (QFT). The quantum fields are of a purely informational nature. The QFT model can be treated as a far relative of expected utility theory, where the role of utility is played by adaptivity to an environment (bath). However, this sort of utility-adaptivity cannot be represented simply as a numerical function. The operator representation in Hilbert space is used, and adaptivity is described as in quantum dynamics. We are especially interested in the stabilization of solutions for sufficiently large times. The outputs of this stabilization process, probabilities for possible choices, are treated in the framework of classical DM. To connect classical and quantum DM, we appeal to Quantum Bayesianism. We demonstrate the quantum-like interference effect in DM, which is exhibited as a violation of the formula of total probability, and hence of the classical Bayesian inference scheme. This article is part of the themed issue 'Second quantum revolution: foundational questions'.
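The violation of the formula of total probability that the abstract describes can be illustrated, in a much simpler setting than the authors' QFT model, by the standard quantum-like two-path calculation: summing amplitudes before squaring introduces an interference term absent from the classical decomposition. The amplitudes here are arbitrary toy values.

```python
import numpy as np

# Two-step decision: first an "attitude" A with outcomes {0, 1}, then the
# final choice B. Classically, P(B) = sum_i P(A_i) * P(B | A_i). With
# amplitudes, an interference (cross) term appears, so the law of total
# probability fails.
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)            # amplitudes over A outcomes
amp_b_given_a = np.array([0.6 + 0.0j, -0.8 + 0.0j])  # transition amplitudes to B

# Classical prediction from the squared amplitudes:
p_a = np.abs(psi) ** 2                    # [0.5, 0.5]
p_b_given_a = np.abs(amp_b_given_a) ** 2  # [0.36, 0.64]
p_b_classical = float(np.sum(p_a * p_b_given_a))  # 0.5*0.36 + 0.5*0.64 = 0.5

# Quantum prediction: add amplitudes first, then square.
p_b_quantum = float(np.abs(np.sum(psi * amp_b_given_a)) ** 2)
# |(0.6 - 0.8)/sqrt(2)|^2 = 0.02

print(round(p_b_classical, 4), round(p_b_quantum, 4))  # 0.5 0.02
```

The gap between the two values (0.48 here) is the interference term; in the quantum-like DM literature it is exactly this term that breaks the classical Bayesian decomposition.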
Najeebullah Khan; Adnan Hussein; Zahid Awan; Bakhtiar Khan
2012-01-01
This study measured the impacts of six independent variables (political rights, election system type, political quota, literacy rate, labor force participation and GDP per capita at current prices in US dollars) on the dependent variable (percentage of women's representation in the national legislature) using multiple linear regression models. As a first step, we developed and tested the model with out-of-sample data for Pakistan. For model construction and validation, ten years of data from the year 1999 a...
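A regression of this shape, six predictors plus an intercept fit by ordinary least squares, can be sketched as follows. The data are synthetic stand-ins, not the study's country-level figures, and the coefficient values are invented for illustration.

```python
import numpy as np

# Hypothetical data for 10 countries: six standardized predictors (political
# rights, election system type, quota, literacy, labor force participation,
# GDP per capita) and women's share of national-legislature seats (%).
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(10, 6))
beta_true = np.array([5.0, -2.0, 8.0, 3.0, 1.0, 4.0])  # invented effects
y = 10.0 + X @ beta_true + rng.normal(0.0, 0.5, 10)

# Fit share = b0 + b1*x1 + ... + b6*x6 by ordinary least squares.
A = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ coef

print(coef.shape)  # (7,): intercept plus six slopes
```

Out-of-sample validation, as the abstract describes for Pakistan, would simply apply the fitted `coef` to a held-out country's predictor row and compare the prediction with the observed percentage.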
Yakubova, Gulnoza; Hughes, Elizabeth M.; Shinaberry, Megan
2016-01-01
The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the…
Energy Technology Data Exchange (ETDEWEB)
Klevers, Denis [Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); Taylor, Washington [Center for Theoretical Physics, Department of Physics, Massachusetts Institute of Technology,77 Massachusetts Avenue Cambridge, MA 02139 (United States)
2016-06-29
We give an explicit construction of a class of F-theory models with matter in the three-index symmetric (4) representation of SU(2). This matter is realized at codimension two loci in the F-theory base where the divisor carrying the gauge group is singular; the associated Weierstrass model does not have the form associated with a generic SU(2) Tate model. For 6D theories, the matter is localized at a triple point singularity of arithmetic genus g=3 in the curve supporting the SU(2) group. This is the first explicit realization of matter in F-theory in a representation corresponding to a genus contribution greater than one. The construction is realized by “unHiggsing” a model with a U(1) gauge factor under which there is matter with charge q=3. The resulting SU(2) models can be further unHiggsed to realize non-Abelian G_2×SU(2) models with more conventional matter content or SU(2)^3 models with trifundamental matter. The U(1) models used as the basis for this construction do not seem to have a Weierstrass realization in the general form found by Morrison-Park, suggesting that a generalization of that form may be needed to incorporate models with arbitrary matter representations and gauge groups localized on singular divisors.
Jorgensen, Palle E T
1987-01-01
Historically, operator theory and representation theory both originated with the advent of quantum mechanics. The interplay between the subjects has been and still is active in a variety of areas.This volume focuses on representations of the universal enveloping algebra, covariant representations in general, and infinite-dimensional Lie algebras in particular. It also provides new applications of recent results on integrability of finite-dimensional Lie algebras. As a central theme, it is shown that a number of recent developments in operator algebras may be handled in a particularly e
Yen, Y. N.; Weng, K. H.; Huang, H. Y.
2013-07-01
After over 30 years of practise and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. To document and digitize these unique heritages in their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study for digitization practise. The application of metadata representation and 3D modelling are the two key issues to discuss. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp, at this stage, could only be used to model basic visual representations, and is ineffective in documenting additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. Results of the research recommend SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.
Directory of Open Access Journals (Sweden)
Lars Marcus
2018-04-01
Full Text Available The world is witnessing unprecedented urbanization, bringing extreme challenges to contemporary practices in urban planning and design. This calls for improved urban models that can generate new knowledge and enhance practical skill. Importantly, any urban model embodies a conception of the relation between humans and the physical environment. In urban modeling this is typically conceived of as a relation between human subjects and an environmental object, thereby reproducing a humans-environment dichotomy. Alternative modeling traditions, such as space syntax, which originates in architecture rather than geography, have tried to overcome this dichotomy. Central in this effort is the development of new representations of urban space, such as, in the case of space syntax, the axial map. This form of representation aims to integrate both human behavior and the physical environment into one and the same description. Interestingly, models based on these representations have proved to capture pedestrian movement better than regular models. Pedestrian movement, as well as other kinds of human flows in urban space, is essential for urban modeling, since such flows are increasingly understood as the driver in urban processes. Critical for a full understanding of space syntax modeling is the ontology of its representations, such as the axial map. Space syntax theory here often refers to James Gibson's “Theory of affordances,” where the concept of affordances, in a manner similar to axial maps, aims to bridge the subject-object dichotomy by constituting neither physical properties of the environment nor human behavior, but rather what emerges in the meeting between the two. By extension, the axial map can be interpreted as a representation of how the physical form of the environment affords human accessibility and visibility in urban space. This paper presents a close examination of the form of representations developed in space syntax
Do convection-permitting models improve the representation of the impact of LUC?
Vanden Broucke, Sam; Van Lipzig, Nicole
2017-10-01
In this study we assess the added value of convection-permitting scale (CPS) simulations in studies using regional climate models to quantify the bio-geophysical climate impact of land-use change (LUC). To accomplish this, a comprehensive model evaluation methodology is applied to both non-CPS and CPS simulations. The main characteristics of the evaluation methodology are (1) the use of paired eddy-covariance site observations (forest vs open land) and (2) a simultaneous evaluation of all surface energy budget components. Results show that although generally satisfactory, non-CPS simulations fall short of completely reproducing the observed LUC signal because of three key biases. CPS simulations succeed at significantly reducing two of these biases, namely, those in daytime shortwave radiation and daytime sensible heat flux. CPS also slightly reduces a third bias in nighttime incoming longwave radiation. The daytime improvements can be attributed partially to the switch from parameterized to explicit convection, the associated improvement in the simulation of afternoon convective clouds, and the resulting surface energy budget and atmospheric feedbacks. Also responsible for the improvements during daytime is a better representation of surface heterogeneity and thus surface roughness. Meanwhile, the modest nighttime longwave improvement can be attributed to increased vertical atmospheric resolution. However, the model still fails to reproduce the magnitude of the observed nighttime longwave difference. One possible explanation for this persistent bias is the nighttime radiative effect of biogenic volatile organic compound emissions over the forest site. A correlation between estimated emission rates and the observed nighttime longwave difference, as well as the persistence of the longwave bias, provides support for this hypothesis. However, more research is needed to conclusively determine whether the effect indeed exists.
Mackay, D. Scott; Band, Lawrence E.
1998-04-01
This paper presents a new method for extracting flow directions, contributing (upslope) areas, and nested catchments from digital elevation models in lake-dominated areas. Existing tools for acquiring descriptive variables of the topography, such as surface flow directions and contributing areas, were developed for moderate to steep topography. These tools are typically difficult to apply in gentle topography owing to limitations in explicitly handling lakes and other flat areas. This paper addresses the problem of accurately representing general topographic features by first identifying distinguishing features, such as lakes, in gentle topography areas and then using these features to guide the search for topographic flow directions and catchment marking. Lakes are explicitly represented in the topology of a watershed for use in water routing. Nonlake flat features help guide the search for topographic flow directions in areas of low signal to noise. This combined feature-based and grid-based search for topographic features yields improved contributing areas and watershed boundaries where there are lakes and other flat areas. Lakes are easily classified from remotely sensed imagery, which makes automated representation of lakes as subsystems within a watershed system tractable with widely available data sets.
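The grid-based part of such a search is, at its core, a steepest-descent (D8) pass over the elevation grid; the cells it cannot resolve (pits and flats, returned as -1 below) are exactly where the feature-based guidance described above takes over. A minimal illustrative sketch, not the authors' algorithm:

```python
import numpy as np

# Eight neighbour offsets (row, col) for the D8 scheme.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_direction(dem):
    """Return, for each interior cell, the index (into OFFSETS) of the
    steepest-descent neighbour, or -1 where no neighbour is lower (a pit
    or flat that would need feature-based resolution)."""
    rows, cols = dem.shape
    direction = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best_drop, best_k = 0.0, -1
            for k, (dr, dc) in enumerate(OFFSETS):
                dist = (dr * dr + dc * dc) ** 0.5   # diagonal neighbours are farther
                drop = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if drop > best_drop:
                    best_drop, best_k = drop, k
            direction[r, c] = best_k
    return direction

# Tiny DEM with a central depression (a "lake" cell) surrounded by slopes.
dem = np.array([[5.0, 5.0, 5.0, 5.0],
                [5.0, 4.0, 3.0, 5.0],
                [5.0, 3.5, 2.0, 5.0],
                [5.0, 5.0, 5.0, 5.0]])
d = d8_flow_direction(dem)
```

The depression at (2, 2) is flagged -1, which in the paper's approach would be handled by the lake/flat feature layer rather than by local slope.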
Arevalo, John; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper presents a novel method for basal-cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale and rotation from an image collection. These learned features also reveal the visual properties associated with cancerous and healthy tissues, and improve carcinoma detection results by 7% with respect to traditional autoencoders and 6% with respect to standard DCT representations, obtaining on average 92% in terms of F-score and 93% balanced accuracy.
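The BOF pipeline itself is simple to sketch: patches are assigned to their nearest codebook entry and the image becomes a normalized histogram of assignments. Here the codebook is random for illustration; in the paper it would come from the learned T-RICA features:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))   # 8 "visual words", 16-dim patch descriptors

def bof_histogram(patches, codebook):
    """Bag-of-features: assign each patch to its nearest codeword (squared
    Euclidean distance), then return the normalized word histogram."""
    d2 = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

patches = rng.normal(size=(100, 16))  # stand-in for patches extracted from one image
h = bof_histogram(patches, codebook)
```

The resulting histogram `h` is the fixed-length image representation fed to a classifier, regardless of how many patches the image yields.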
National Research Council Canada - National Science Library
Ngodock, Hans E; Smith, Scott R; Jacobs, Gregg A
2007-01-01
... (LCE) in the Gulf of Mexico. It was reported that the representer method was more accurate than its ensemble counterparts, yet it had difficulties fitting the data in the last month of the 4-month assimilation window...
Regional Densification of a Global VTEC Model Based on B-Spline Representations
Erdogan, Eren; Schmidt, Michael; Dettmering, Denise; Goss, Andreas; Seitz, Florian; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Mrotzek, Niclas
2017-04-01
The project OPTIMAP is a joint initiative of the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University of Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal of the project is the development of an operational tool for ionospheric mapping and prediction (OPTIMAP). Two key features of the project are the combination of different satellite observation techniques (GNSS, satellite altimetry, radio occultations and DORIS) and a regional densification as a remedy against problems encountered with the inhomogeneous data distribution. Since the data from space-geoscientific missions which can be used for modeling ionospheric parameters, such as the Vertical Total Electron Content (VTEC) or the electron density, are distributed rather unevenly over the globe at different altitudes, appropriate modeling approaches have to be developed to handle this inhomogeneity. Our approach is based on a two-level strategy. To be more specific, in the first level we compute a global VTEC model with a moderate regional and spectral resolution, which is complemented in the second level by a regional model in a densification area. The latter is a region characterized by a dense data distribution, allowing a VTEC product of high spatial and spectral resolution. Additionally, the global representation serves as a background model for the regional one to avoid edge effects at the boundaries of the densification area. The presented approach, based on a global and a regional model part, i.e. the consideration of a regional densification, is called the Two-Level VTEC Model (TLVM). The global VTEC model part is based on a series expansion in terms of polynomial B-splines in latitude direction and trigonometric B-splines in longitude direction. The additional regional model part is set up by a series expansion in terms of polynomial B-splines for
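A 1-D polynomial B-spline series of the kind used for the latitude direction can be sketched with the Cox-de Boor recursion; knot positions and coefficients below are illustrative, not those of the TLVM:

```python
import numpy as np

def bspline_basis(j, degree, knots, x):
    """Cox-de Boor recursion for the j-th B-spline of the given degree."""
    if degree == 0:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    left = 0.0
    if knots[j + degree] > knots[j]:
        left = ((x - knots[j]) / (knots[j + degree] - knots[j])
                * bspline_basis(j, degree - 1, knots, x))
    right = 0.0
    if knots[j + degree + 1] > knots[j + 1]:
        right = ((knots[j + degree + 1] - x) / (knots[j + degree + 1] - knots[j + 1])
                 * bspline_basis(j + 1, degree - 1, knots, x))
    return left + right

def vtec_series(x, coeffs, degree, knots):
    """Evaluate a 1-D B-spline series sum_j c_j * B_j(x)."""
    return sum(c * bspline_basis(j, degree, knots, x)
               for j, c in enumerate(coeffs))

knots = np.linspace(-90.0, 90.0, 12)   # uniform knots over latitude (deg)
degree = 2
ncoef = len(knots) - degree - 1        # 9 quadratic basis functions
coeffs = np.ones(ncoef)
```

With all coefficients equal to one, the series evaluates to one on the inner knot interval (partition of unity), a quick sanity check on the basis implementation.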
Directory of Open Access Journals (Sweden)
Stefan Huber
2014-04-01
Full Text Available Decimal fractions comply with the base-10 notational system of natural Arabic numbers. Nevertheless, recent research suggested that decimal fractions may be represented differently than natural numbers, because two number processing effects (i.e., semantic interference and compatibility effects) differed in size between decimal fractions and natural numbers. In the present study, we examined whether these differences indeed indicate that decimal fractions are represented differently from natural numbers. Therefore, we provided an alternative explanation for the semantic congruity effect, namely a string length congruity effect. Moreover, we suggest that the smaller compatibility effect for decimal fractions compared to natural numbers was driven by differences in processing strategy (sequential vs. parallel). To evaluate this claim, we manipulated the tenth and hundredth digits in a magnitude comparison task with participants' eye movements recorded, while the unit digits remained identical. In addition, we evaluated whether our empirical findings could be simulated by an extended version of our computational model originally developed to simulate magnitude comparisons of two-digit natural numbers. In the eye-tracking study, we found evidence that participants processed decimal fractions more sequentially than natural numbers because of the identical leading digit. Importantly, our model was able to account for the smaller compatibility effect found for decimal fractions. Moreover, string length congruity was an alternative account for the prolonged reaction times for incongruent decimal pairs. Consequently, we suggest that representations of natural numbers and decimal fractions do not differ.
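The sequential-processing idea can be illustrated with a toy digit-by-digit comparison: with identical leading digits, more comparison steps are needed before the decision-relevant digit is reached. This step counter is only a stand-in for processing time, not the authors' computational model:

```python
def sequential_compare(a, b):
    """Compare two decimal strings digit by digit, left to right, and
    return (the larger string, number of digit comparisons needed)."""
    steps = 0
    for da, db in zip(a, b):
        steps += 1
        if da != db:
            return (a if da > db else b), steps
    return None, steps  # strings are equal

# Identical unit digit ("0"), differing tenth digit: decision after 3 steps.
larger, steps = sequential_compare("0.27", "0.53")
```

Pairs that also share the tenth digit (e.g. "0.53" vs "0.57") need one more step, mirroring the prolonged inspection times a sequential strategy predicts.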
Matto, Holly
2005-01-01
A bio-behavioral approach to drug addiction treatment is outlined. The presented treatment model uses dual representation theory as a guiding framework for understanding the bio-behavioral processes activated during the application of expressive therapeutic methods. Specifically, the treatment model explains how visual processing techniques can supplement traditional relapse prevention therapy protocols, to help clients better manage cravings and control triggers in hard-to-treat populations such as chronic substance-dependent persons.
4D Floodplain representation in hydrologic flood forecasting using the WRF-Hydro modeling framework
Gangodagamage, C.; Li, Z.; Adams, T.; Ito, T.; Maitaria, K.; Islam, M.; Dhondia, J.
2015-12-01
Floods claim more lives and damage more property than any other category of natural disaster in the Continental U.S. A system that can demarcate local flood boundaries dynamically could help flood-prone communities prepare for and even prevent catastrophic flood events. Lateral distances from the centerline of the river to the right and left floodplains for the water levels coming out of the models at each grid location have not been properly integrated with the national hydrography dataset (NHDPlus). The NHDPlus dataset represents the stream network with feature classes such as rivers, tributaries, canals, lakes, ponds, dams, coastlines, and stream gages. The NHDPlus dataset consists of approximately 2.7 million river reaches defining how surface water drains to the ocean. These river reaches have upstream and downstream nodes and basic parameters such as flow direction, drainage area, reach slope, etc. We modified an existing algorithm (Gangodagamage et al., 2007, 2011) to provide the lateral distance from the centerline of the river to the right and left floodplains for the flows simulated by models. Previous work produced floodplain boundaries for static river stages (i.e. a 3D metric: distance along the main stem, flow depth, lateral distance from the river centerline). Our new approach introduces the floodplain boundary for variable water levels with the fourth dimension, time. We use modeled flows from WRF-Hydro and demarcate the right and left lateral boundaries of inundation dynamically. This approach dynamically integrates with high-resolution models (e.g., hourly and ~1 km spatial resolution) made possible by recent advancements in computational power, together with ground-based measurements (e.g., Fluxnet), lateral inundation vectors (direction and spatial extent) derived from multi-temporal remote sensing data (e.g., LiDAR, WorldView-2, Landsat, ASTER, MODIS), and improved representations of the physical processes through multi-parameterizations. Our
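The core demarcation step, finding how far water at a given level extends left and right of the channel centerline along a cross-section, can be sketched as a simple outward walk over a transect. The transect below is invented; the real system operates on NHDPlus reaches with WRF-Hydro flows:

```python
import numpy as np

def lateral_extent(distance, elevation, water_level, center_idx):
    """Walk outward from the channel center and return the lateral
    distances (left, right) covered before the terrain rises above the
    water level. Assumes hydraulic connectivity along the transect."""
    left = right = 0.0
    i = center_idx
    while i > 0 and elevation[i - 1] <= water_level:
        i -= 1
        left = distance[center_idx] - distance[i]
    i = center_idx
    while i < len(elevation) - 1 and elevation[i + 1] <= water_level:
        i += 1
        right = distance[i] - distance[center_idx]
    return left, right

# Hypothetical valley cross-section: channel thalweg at index 3.
distance = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # m
elevation = np.array([5.0, 3.0, 1.0, 0.0, 1.0, 2.0, 6.0])        # m
left, right = lateral_extent(distance, elevation, 2.0, 3)
```

Repeating this for each modeled water level at each time step yields the time-varying (4D) left/right inundation boundaries described above.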
International Nuclear Information System (INIS)
Horrein, L.; Bouscayrol, A.; Cheng, Y.; El Fassi, M.
2015-01-01
Highlights: • Internal Combustion Engine (ICE) dynamical and static models. • Organization of the ICE model using Energetic Macroscopic Representation. • Description of the distribution of the chemical, thermal and mechanical power. • Implementation of the ICE model in a global vehicle model. - Abstract: In the simulation of new vehicles, the Internal Combustion Engine (ICE) is generally modeled by a static map. This model yields the mechanical power and the fuel consumption. But some studies require the heat energy from the ICE to be considered (i.e. waste heat recovery, thermal regulation of the cabin). A dynamical multi-physical model of a diesel engine is developed to consider its heat energy. This model is organized using Energetic Macroscopic Representation (EMR) in order to be interconnected with other models of vehicle subsystems. An experimental validation is provided. Moreover, a multi-physical quasi-static model is also derived. According to different modeling aims, a comparison of the dynamical and the quasi-static models is discussed for the simulation of a thermal vehicle. These multi-physical models, with different simulation time requirements, provide a good basis for studying the effects of thermal energy on vehicle behavior, including the possibilities of waste heat recovery.
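The energy bookkeeping behind such a quasi-static model can be sketched as a power balance: fuel power (fuel rate times lower heating value) splits into mechanical output and heat available to the thermal subsystems. The operating point and the diesel LHV of ~42.5 MJ/kg below are illustrative assumptions:

```python
import numpy as np

def ice_powers(torque, speed_rpm, fuel_rate_kg_s, lhv=42.5e6):
    """Split fuel power into mechanical output and residual heat for one
    quasi-static operating point (SI units; LHV in J/kg)."""
    p_mech = torque * speed_rpm * 2.0 * np.pi / 60.0   # W, from torque x angular speed
    p_fuel = fuel_rate_kg_s * lhv                      # W, chemical power input
    p_heat = p_fuel - p_mech                           # W, rejected as heat
    return p_mech, p_fuel, p_heat

# Hypothetical map point: 200 N·m at 2000 rpm burning 2.5 g/s of fuel.
p_mech, p_fuel, p_heat = ice_powers(torque=200.0, speed_rpm=2000.0,
                                    fuel_rate_kg_s=0.0025)
```

In a full EMR organization, each of these power flows would be an exchange variable between the engine block and the mechanical and thermal subsystem models.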
Is coronene better described by Clar's aromatic π-sextet model or by the AdNDP representation?
Kumar, Anand; Duran, Miquel; Solà, Miquel
2017-07-05
The bonding patterns in coronene are complicated and controversial as denoted by the lack of consensus of how its electronic structure should be described. Among the different proposed descriptions, the two most representative are those generated by Clar's aromatic π-sextet and adaptative natural density partitioning (AdNDP) models. Quantum-chemical calculations at the density functional theory level are performed to evaluate the model that gives a better representation of coronene. To this end, we analyse the molecular structure of coronene, we estimate the aromaticity of its inner and outer rings using various local aromaticity descriptors, and we assess its chemical reactivity from the study of the Diels-Alder reaction with cyclopentadiene. Results obtained are compared with those computed for naphthalene and phenanthrene. Our conclusion is that Clar's π-sextet model provides the representation of coronene that better describes the physicochemical behavior of this molecule. © 2017 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Wada, Kenichi; Sano, Fuminori; Oshima, Kanji; Akimoto, Keigo
2013-01-01
Nuclear power secures affordable carbon-free energy supply, but entails various risks and constraints, such as safety concerns, protest campaigns against waste disposal, and proliferation. Given the nature of these characteristics of nuclear power generation, there is a wide range of variation in the representation of nuclear power technologies across models. In this paper, we explore the variance of the model representation of nuclear power generation and its implications for climate change mitigation assessment, based on the EMF27 study. The most common result is that under efforts to mitigate climate change more nuclear energy use is needed. We find, however, that perspectives on the contribution of nuclear energy to global energy needs vary tremendously among the modeling teams. This diversity mainly comes from differences in the level of detail with which nuclear energy technologies are characterized and the broad range of nuclear contributions in the long-term scenarios of global energy use. (author)
Neznamov, V. P.
2011-01-01
The Standard Model with massive fermions is formulated in the isotopic Foldy-Wouthuysen representation. The SU(2)×U(1) invariance of the theory in this representation is independent of whether fermions possess mass or not, and, consequently, it is not necessary to introduce interactions between Higgs bosons and fermions. The study discusses a possible relation between spontaneous breaking of parity in the isotopic Foldy-Wouthuysen representation and the composition of elementary particles of "d...
Richardson, A. D.; NACP Interim Site Synthesis Participants
2010-12-01
Phenology represents a critical intersection point between organisms and their growth environment. It is for this reason that phenology is a sensitive and robust integrator of the biological impacts of year-to-year climate variability and longer-term climate change on natural systems. However, it is perhaps equally important that phenology, by controlling the seasonal activity of vegetation on the land surface, plays a fundamental role in regulating ecosystem processes, competitive interactions, and feedbacks to the climate system. Unfortunately, the phenological sub-models implemented in most state-of-the-art ecosystem models and land surface schemes are overly simplified. We quantified model errors in the representation of the seasonal cycles of leaf area index (LAI), gross ecosystem photosynthesis (GEP), and net ecosystem exchange of CO2. Our analysis was based on site-level model runs (14 different models) submitted to the North American Carbon Program (NACP) Interim Synthesis, and long-term measurements from 10 forested (5 evergreen conifer, 5 deciduous broadleaf) sites within the AmeriFlux and Fluxnet-Canada networks. Model predictions of the seasonality of LAI and GEP were unacceptable, particularly in spring, and especially for deciduous forests. This is despite an historical emphasis on deciduous forest phenology, and the perception that controls on spring phenology are better understood than autumn phenology. Errors of up to 25 days in predicting “spring onset” transition dates were common, and errors of up to 50 days were observed. For deciduous sites, virtually every model was biased towards spring onset being too early, and autumn senescence being too late. Thus, models predicted growing seasons that were far too long for deciduous forests. For most models, errors in the seasonal representation of deciduous forest LAI were highly correlated with errors in the seasonality of both GEP and NEE, indicating the importance of getting the underlying
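One simple way to quantify a transition-date error of the kind reported here is to define spring onset as the first day a seasonal flux crosses a fixed fraction of its annual maximum, then difference the observed and modeled dates. The synthetic curve and the 20-day shift below are purely illustrative:

```python
import numpy as np

def spring_onset(doy, flux, fraction=0.5):
    """Return the first day-of-year at which the flux reaches the given
    fraction of its annual maximum (a simple threshold-crossing onset)."""
    threshold = fraction * np.nanmax(flux)
    above = np.nonzero(flux >= threshold)[0]
    return int(doy[above[0]]) if above.size else None

doy = np.arange(1, 366)
# Toy seasonal GEP cycle: zero in winter, peaking in mid-summer.
gep = np.clip(np.sin((doy - 80) / 365.0 * 2 * np.pi), 0, None)

onset_obs = spring_onset(doy, gep)
onset_model = spring_onset(doy, np.roll(gep, -20))  # model leafs out ~20 d early
bias_days = onset_model - onset_obs                 # negative = too-early onset
```

A negative bias of this kind, repeated across sites, is exactly the too-early spring onset pattern described for the deciduous forest simulations.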
Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1
Langenbrunner, B.; Neelin, J. D.
2017-09-01
Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
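The basic Pareto notion used here, that a point is on the front if no other candidate is at least as good in every objective and strictly better in at least one, can be written as a brute-force non-dominated filter. This is only the selection step; the study pairs it with an evolutionary search over cut-HDMR metamodels:

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated points, assuming every objective
    is to be minimized (e.g. RMSE of precipitation and skin temperature)."""
    front = []
    for i, p in enumerate(scores):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(i)
    return front

scores = np.array([[1.0, 4.0],   # on the front
                   [2.0, 2.0],   # on the front
                   [4.0, 1.0],   # on the front
                   [3.0, 3.0],   # dominated by (2, 2)
                   [2.0, 2.0]])  # duplicate of (2, 2), still non-dominated
front = pareto_front(scores)
```

A control run landing off this front, as reported above, means some candidate improves at least one objective without worsening any other.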
Directory of Open Access Journals (Sweden)
X. Liu
2012-05-01
Full Text Available A modal aerosol module (MAM) has been developed for the Community Atmosphere Model version 5 (CAM5), the atmospheric component of the Community Earth System Model version 1 (CESM1). MAM is capable of simulating the aerosol size distribution and both internal and external mixing between aerosol components, treating numerous complicated aerosol processes and aerosol physical, chemical and optical properties in a physically-based manner. Two MAM versions were developed: a more complete version with seven lognormal modes (MAM7), and a version with three lognormal modes (MAM3) for the purpose of long-term (decades to centuries) simulations. In this paper a description and evaluation of the aerosol module and its two representations are provided. Sensitivity of the aerosol lifecycle to simplifications in the representation of aerosol is discussed.
Simulated sulfate and secondary organic aerosol (SOA) mass concentrations are remarkably similar between MAM3 and MAM7. Differences in primary organic matter (POM) and black carbon (BC) concentrations between MAM3 and MAM7 are also small (mostly within 10%). The mineral dust global burden differs by 10% and the sea salt burden by 30–40% between MAM3 and MAM7, mainly due to the different size ranges for dust and sea salt modes and the different standard deviations of the log-normal size distribution for sea salt modes between MAM3 and MAM7. The model is able to qualitatively capture the observed geographical and temporal variations of aerosol mass and number concentrations, size distributions, and aerosol optical properties. However, there are noticeable biases; e.g., simulated BC concentrations are significantly lower than measurements in the Arctic. There is a low bias in modeled aerosol optical depth on the global scale, especially in the developing countries. These biases in aerosol simulations clearly indicate the need for improvements of aerosol processes (e.g., emission fluxes of anthropogenic aerosols and
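A modal representation like MAM's describes the size distribution as a sum of lognormal modes; evaluating and integrating such a sum is straightforward. The mode parameters below are illustrative round numbers, not the values used in CAM5:

```python
import numpy as np

def dNdlnD(D, modes):
    """Number size distribution dN/dlnD as a sum of lognormal modes.
    modes: list of (total number, median diameter, geometric std dev)."""
    total = np.zeros_like(D, dtype=float)
    for N, Dg, sigma in modes:
        total += (N / (np.sqrt(2.0 * np.pi) * np.log(sigma))
                  * np.exp(-0.5 * (np.log(D / Dg) / np.log(sigma)) ** 2))
    return total

# Three illustrative modes (number per cm^3, diameters in micrometers),
# loosely Aitken-, accumulation-, and coarse-like.
modes = [(1000.0, 0.03, 1.6),
         (300.0, 0.15, 1.8),
         (10.0, 2.0, 1.8)]
D = np.logspace(-2, 1, 400)          # 0.01 to 10 um, uniform in log-space
n = dNdlnD(D, modes)

# Integrating dN/dlnD over lnD approximately recovers the total number.
dlnD = np.log(D[1] / D[0])
N_total = float(n.sum() * dlnD)
```

The small shortfall of `N_total` relative to the nominal 1310 per cm^3 comes from mode tails falling outside the sampled size range, the same edge effect that makes mode size-range choices matter for dust and sea salt burdens above.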
Spero, Tanya L.; Otte, Martin J.; Bowden, Jared H.; Nolte, Christopher G.
2014-10-01
Spectral nudging—a scale-selective interior constraint technique—is commonly used in regional climate models to maintain consistency with large-scale forcing while permitting mesoscale features to develop in the downscaled simulations. Several studies have demonstrated that spectral nudging improves the representation of regional climate in reanalysis-forced simulations compared with not using nudging in the interior of the domain. However, in the Weather Research and Forecasting (WRF) model, spectral nudging tends to produce degraded precipitation simulations when compared to analysis nudging—an interior constraint technique that is scale indiscriminate but also operates on moisture fields which until now could not be altered directly by spectral nudging. Since analysis nudging is less desirable for regional climate modeling because it dampens fine-scale variability, changes are proposed to the spectral nudging methodology to capitalize on differences between the nudging techniques and aim to improve the representation of clouds, radiation, and precipitation without compromising other fields. These changes include adding spectral nudging toward moisture, limiting nudging to below the tropopause, and increasing the nudging time scale for potential temperature, all of which collectively improve the representation of mean and extreme precipitation, 2 m temperature, clouds, and radiation, as demonstrated using a model-simulated 20 year historical period. Such improvements to WRF may increase the fidelity of regional climate data used to assess the potential impacts of climate change on human health and the environment and aid in climate change mitigation and adaptation studies.
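The scale selectivity that distinguishes spectral nudging from analysis nudging can be illustrated in one dimension: only Fourier modes at or below a cutoff wavenumber are relaxed toward the driving field, leaving fine-scale structure untouched. This is a conceptual sketch, not WRF's implementation:

```python
import numpy as np

def spectral_nudge(field, target, cutoff, strength=0.5):
    """Relax only the large-scale (k <= cutoff) Fourier modes of `field`
    toward those of `target`; fine scales pass through unchanged."""
    f_hat, t_hat = np.fft.rfft(field), np.fft.rfft(target)
    mask = np.arange(f_hat.size) <= cutoff
    f_hat[mask] += strength * (t_hat[mask] - f_hat[mask])
    return np.fft.irfft(f_hat, n=field.size)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
target = np.sin(x)                               # large-scale driving analysis
field = 0.2 * np.sin(x) + 0.3 * np.sin(20 * x)   # drifted large scale + fine scale

nudged = spectral_nudge(field, target, cutoff=4)
```

After one application the wavenumber-1 amplitude moves partway toward the target while the wavenumber-20 structure is untouched, which is the behavior that lets mesoscale features develop in the downscaled simulation.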
Representation of Nucleation Mode Microphysics in a Global Aerosol Model with Sectional Microphysics
Lee, Y. H.; Pierce, J. R.; Adams, P. J.
2013-01-01
We evaluate how the representation of nucleation mode (roughly 1 nm < Dp < 10 nm) microphysics impacts aerosol number predictions in the TwO-Moment Aerosol Sectional (TOMAS) aerosol microphysics model running with the GISS GCM II-prime by varying its lowest diameter boundary: 1 nm, 3 nm, and 10 nm. The model with the 1 nm boundary simulates the nucleation mode particles with fully resolved microphysical processes, while the model with the 10 nm and 3 nm boundaries uses a nucleation mode dynamics parameterization to account for the growth of nucleated particles to 10 nm and 3 nm, respectively. We also investigate the impact of the time step for aerosol microphysical processes (a 10 min versus a 1 h time step) on aerosol number predictions in the TOMAS models with explicit dynamics for the nucleation mode particles (i.e., 3 nm and 1 nm boundary). The model with the explicit microphysics (i.e., 1 nm boundary) with the 10 min time step is used as a numerical benchmark simulation to estimate biases caused by varying the lower size cutoff and the time step. Different representations of the nucleation mode have a significant effect on the formation rate of particles larger than 10 nm from nucleated particles (J10) and the burdens and lifetimes of ultrafine-mode (10 nm ≤ Dp ≤ 70 nm) particles but have less impact on the burdens and lifetimes of CCN-sized particles. The models using parameterized microphysics (i.e., 10 nm and 3 nm boundaries) result in higher J10 and shorter coagulation lifetimes of ultrafine-mode particles than the model with explicit dynamics (i.e., 1 nm boundary). The spatial distributions of CN10 (Dp ≥ 10 nm) and CCN(0.2%) (i.e., CCN concentrations at 0.2% supersaturation) are moderately affected, especially CN10 predictions above 700 hPa where nucleation contributes most strongly to CN10 concentrations. The lowermost-layer CN10 is substantially improved with the 3 nm boundary (compared to 10 nm) in most areas. The overprediction in CN10 with the 3 nm and 10 nm boundaries can be explained by
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2018-06-01
Text categorization has been used extensively in recent years to classify plain-text clinical reports. This study employs text categorization techniques for the classification of open narrative forensic autopsy reports. One of the key steps in text classification is document representation, in which a clinical report is transformed into a format that is suitable for classification. The traditional document representation technique for text categorization is the bag-of-words (BoW) technique. In this study, the traditional BoW technique is shown to be ineffective in classifying forensic autopsy reports because it merely extracts frequent, but not necessarily discriminative, features from clinical reports. Moreover, this technique fails to capture word inversion, as well as word-level synonymy and polysemy, when classifying autopsy reports. Hence, the BoW technique suffers from low accuracy and low robustness unless it is improved with contextual and application-specific information. To overcome the aforementioned limitations of the BoW technique, this research aims to develop an effective conceptual graph-based document representation (CGDR) technique to classify 1500 forensic autopsy reports from four (4) manners of death (MoD) and sixteen (16) causes of death (CoD). Term-based and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) based conceptual features were extracted and represented through graphs. These features were then used to train a two-level text classifier. The first level classifier was responsible for predicting MoD, and the second level classifier was responsible for predicting CoD using the proposed conceptual graph-based document representation technique. To demonstrate the significance of the proposed technique, its results were compared with those of six (6) state-of-the-art document representation techniques. Lastly, this study compared the effects of one-level classification and two-level classification on the experimental results.
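The two-level scheme can be sketched independently of the feature choice: a first classifier predicts MoD, then a second classifier restricted to training reports with that MoD predicts CoD. The toy BoW features, vocabulary, labels, and nearest-example rule below are all hypothetical stand-ins for the paper's conceptual-graph features:

```python
import numpy as np

VOCAB = ["blunt", "trauma", "ligature", "asphyxia", "overdose", "toxicity"]

def bow(text):
    """Plain bag-of-words count vector over the toy vocabulary."""
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

# Tiny labeled corpus: (report text, MoD, CoD).
train = [
    ("blunt trauma to head", "accident", "blunt trauma"),
    ("ligature asphyxia noted", "suicide", "asphyxia"),
    ("drug overdose toxicity", "accident", "toxicity"),
]

def nearest_label(x, examples, label_idx):
    """Label of the training example whose BoW vector is closest to x."""
    dists = [np.linalg.norm(x - bow(t)) for t, _, _ in examples]
    return examples[int(np.argmin(dists))][label_idx]

def classify(text):
    x = bow(text)
    mod = nearest_label(x, train, 1)               # level 1: predict MoD
    subset = [ex for ex in train if ex[1] == mod]  # level 2: restrict to that MoD
    cod = nearest_label(x, subset, 2)
    return mod, cod

mod, cod = classify("severe blunt trauma")
```

Restricting the second level to one manner of death shrinks the CoD label space per decision, which is the structural advantage the two-level comparison above evaluates.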
Representation of the West African Monsoon System in the aerosol-climate model ECHAM6-HAM2
Stanelle, Tanja; Lohmann, Ulrike; Bey, Isabelle
2017-04-01
The West African Monsoon (WAM) is a major component of the global monsoon system. The temperature contrast between the Saharan land surface in the north and the sea surface temperature in the south dominates the WAM formation. The West African region receives most of its precipitation during the monsoon season between the end of June and September. The existence of the monsoon is therefore of major social and economic importance. We discuss the ability of the climate model ECHAM6 as well as the coupled aerosol-climate model ECHAM6-HAM2 to simulate the major features of the WAM system. The north-south temperature gradient is reproduced by both model versions, but both fail to reproduce the precipitation amount south of 10° N. A special focus is on the representation of the nocturnal low level jet (NLLJ) and the corresponding enhancement of low level clouds (LLC) at the Guinea Coast, which are a crucial factor for the regional energy budget. Most global climate models have difficulties representing these features. The pure climate model ECHAM6 is able to simulate the existence of the NLLJ and LLC, but does not represent the pronounced diurnal cycle. Overall, the representation of LLC is worse in the coupled model. We discuss the model behaviors on the basis of output temperature and humidity tendencies and try to identify potential processes responsible for the model deficiencies.
Dual lattice representations for O(N) and CP(N−1) models with a chemical potential
Directory of Open Access Journals (Sweden)
Falk Bruckmann
2015-10-01
Full Text Available We derive dual representations for O(N) and CP(N−1) models on the lattice. In terms of the dual variables the partition sums have only real and positive contributions also at finite chemical potential. Thus the complex action problem of the conventional formulation is overcome and using the dual variables Monte Carlo simulations are possible at arbitrary chemical potential.
Directory of Open Access Journals (Sweden)
A. Mahmud
2013-07-01
regions and Malaysian Borneo (Southeast Asia) during certain months of the year, and under-predicted at most sites in Asia; relative to those regions, the model performed better for sites in North America. Overall, with the inclusion of additional SOA precursors (MZ4-C2, namely isoprene), MOZART-4 showed consistently better skill (normalized mean bias, NMB, of −11% vs. −26%) in predicting total OA levels and spatial distributions of SOA as compared with unmodified MOZART-4. Treatment of SOA formation by these known precursors (isoprene, propene and lumped alkenes) may be particularly important when MOZART-4 output is used to generate boundary conditions for regional air quality simulations that require more accurate representation of SOA concentrations and distributions.
Energy Technology Data Exchange (ETDEWEB)
Lee, Jung Woon; Park, Young Tack [Soongsil University, Seoul (Korea, Republic of)
1996-07-01
The main objective of this project is the modelling of a human operator in the main control room of a nuclear power plant. For this purpose, we carried out research on knowledge representation and inference methods based on Rasmussen's decision ladder structure, and we have developed SACOM (Simulation Analyzer with a Cognitive Operator Model) using the G2 shell on Sun workstations. SACOM consists of an Operator Model, an Interaction Analyzer, and a Situation Generator. The cognitive model aims to build a more detailed model of human operators in an effective way; SACOM is designed to model the knowledge-based behavior of human operators more easily. The following are the main research topics carried out this year. First, in order to model the knowledge-based behavior of human operators, more detailed scenarios were constructed, and knowledge representation and inference methods were developed to support these scenarios. Second, meta-knowledge structures were studied to support the four types of diagnoses performed by human operators; this work includes a study on meta and scheduler knowledge structures for generate-and-test, topographic, decision tree and case-based approaches. Third, domain knowledge structures were improved to support meta knowledge; in particular, domain knowledge structures were developed to model the topographic diagnosis model. Fourth, a more applicable interaction analyzer and situation generator were designed and implemented; the new version is implemented in G2 on Sun workstations. 35 refs., 49 figs. (author)
Rau, Martina A.; Scheines, Richard
2012-01-01
Although learning from multiple representations has been shown to be effective in a variety of domains, little is known about the mechanisms by which it occurs. We analyzed log data on error-rate, hint-use, and time-spent obtained from two experiments with a Cognitive Tutor for fractions. The goal of the experiments was to compare learning from…
Museus, Samuel D.; Jayakumar, Uma M.; Robinson, Thomas
2012-01-01
The failure of many 2-year college students to persist and complete a post-secondary credential or degree remains a problem of paramount importance to higher education policymakers and practitioners. While racial representation--or the extent to which a student's racial group is represented on their respective campus--might be one factor that…
Noncontractible hyperloops in gauge models with Higgs fields in the fundamental representation
Burzlaff, Jürgen
1984-11-01
We study finite-energy configurations in SO(N) gauge theories with Higgs fields in the fundamental representation. For all winding numbers, noncontractible hyperloops are constructed. The corresponding energy density is spherically symmetric, and the configuration with maximal energy on each hyperloop can be determined. Noncontractible hyperloops with an arbitrary winding number for SU(2) gauge theory are also given.
Noncontractible hyperloops in gauge models with Higgs fields in the fundamental representation
International Nuclear Information System (INIS)
Burzlaff, J.
1984-01-01
We study finite-energy configurations in SO(N) gauge theories with Higgs fields in the fundamental representation. For all winding numbers, noncontractible hyperloops are constructed. The corresponding energy density is spherically symmetric, and the configuration with maximal energy on each hyperloop can be determined. Noncontractible hyperloops with an arbitrary winding number for SU(2) gauge theory are also given. (orig.)
Noncontractible hyperloops in gauge models with Higgs fields in the fundamental representation
Energy Technology Data Exchange (ETDEWEB)
Burzlaff, J. (Dublin Inst. for Advanced Studies (Ireland). School of Theoretical Physics)
1984-11-01
We study finite-energy configurations in SO(N) gauge theories with Higgs fields in the fundamental representation. For all winding numbers, noncontractible hyperloops are constructed. The corresponding energy density is spherically symmetric, and the configuration with maximal energy on each hyperloop can be determined. Noncontractible hyperloops with an arbitrary winding number for SU(2) gauge theory are also given.
Munez, David; Orrantia, Josetxu; Rosales, Javier
2013-01-01
This study explored the effectiveness of external representations presented together with compare word problems, and whether such effectiveness was moderated by working memory. Participants were 49 secondary school students. Each participant solved 48 problems presented in 4 presentation types that included 2 difficulty treatments (number of steps…
Klevers, Denis
2016-01-01
We give an explicit construction of a class of F-theory models with matter in the three-index symmetric (4) representation of SU(2). This matter is realized at codimension two loci in the F-theory base where the divisor carrying the gauge group is singular; the associated Weierstrass model does not have the form associated with a generic SU(2) Tate model. For 6D theories, the matter is localized at a triple point singularity of arithmetic genus g=3 in the curve supporting the SU(2) group. This is the first explicit realization of matter in F-theory in a representation corresponding to a genus contribution greater than one. The construction is realized by "unHiggsing" a model with a U(1) gauge factor under which there is matter with charge q=3. The resulting SU(2) models can be further unHiggsed to realize non-Abelian G_2 × SU(2) models with more conventional matter content or SU(2)^3 models with trifundamental matter. The U(1) models used as the basis for this construction do not seem to have a Weierstrass real...
Directory of Open Access Journals (Sweden)
Ashley M. Matheny
2017-02-01
Full Text Available Land surface models and dynamic global vegetation models typically represent vegetation through coarse plant functional type groupings based on leaf form, phenology, and bioclimatic limits. Although these groupings were both feasible and functional for early model generations, in light of the pace at which our knowledge of functional ecology, ecosystem demographics, and vegetation-climate feedbacks has advanced and the ever-growing demand for enhanced model performance, these groupings have become antiquated and are identified as a key source of model uncertainty. The newest wave of model development is centered on shifting the vegetation paradigm away from plant functional types (PFTs) and towards flexible trait-based representations. These models seek to improve errors in ecosystem fluxes that result from information loss due to over-aggregation of dissimilar species into the same functional class. We advocate the importance of the inclusion of plant hydraulic trait representation within the new paradigm through a framework of the whole-plant hydraulic strategy. Plant hydraulic strategy is known to play a critical role in the regulation of stomatal conductance and thus transpiration and latent heat flux. It is typical that coexisting plants employ opposing hydraulic strategies, and therefore have disparate patterns of water acquisition and use. Hydraulic traits are determinants of drought resilience, response to disturbance, and other demographic processes. The addition of plant hydraulic properties in models may improve not only the simulation of carbon and water fluxes but also that of vegetation population distributions.
Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G
2013-05-01
The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data.
DEFF Research Database (Denmark)
Wulf-Andersen, Trine Østergaard
2012-01-01
, and dialogue, of situated participants. The article includes a lengthy example of a poetic representation of one participant’s story, and the author comments on the potentials of ‘doing’ poetic representations as an example of writing in ways that challenges what sometimes goes unasked in participative social...
Affine histories in quantum gravity: introduction and the representation for a cosmological model
International Nuclear Information System (INIS)
Kessari, Smaragda
2007-01-01
It is shown how consistent histories quantum cosmology can be realized through Isham's histories projection operator consistent histories scheme. This is done by using an affine algebra instead of a canonical one and also by using cocycle representations. A regularization scheme allows us to find a history Hamiltonian which exists as a proper self-adjoint operator. The role of a cocycle choice is also discussed.
Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus
2017-02-01
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies
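As a concrete illustration of the fixed-RSA comparison described in this abstract, the following sketch computes correlation-distance representational dissimilarity matrices (RDMs) for a model feature space and a brain area, then compares their upper triangles. The random placeholder data, array sizes, and the use of Pearson rather than Spearman correlation for the RDM comparison are all illustrative assumptions, not details of the original study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: responses of a model layer and of a brain area
# (e.g. fMRI voxels) to the same set of images.
n_images, n_model_units, n_voxels = 20, 50, 30
model_features = rng.standard_normal((n_images, n_model_units))
brain_responses = rng.standard_normal((n_images, n_voxels))

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of images."""
    return 1.0 - np.corrcoef(responses)

def fixed_rsa_score(model_rdm, brain_rdm):
    """Compare two RDMs on their upper triangles (fixed RSA: the model
    feature space is tested as is, with no fitted mixing weights)."""
    iu = np.triu_indices_from(model_rdm, k=1)
    return np.corrcoef(model_rdm[iu], brain_rdm[iu])[0, 1]

score = fixed_rsa_score(rdm(model_features), rdm(brain_responses))
print(f"fixed-RSA model-brain RDM correlation: {score:.3f}")
```

Mixed RSA, by contrast, would first fit one weight per model feature and voxel on a training set before building the model RDM.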
Energy Technology Data Exchange (ETDEWEB)
Tchamen, G.W.; Gaucher, J. [Hydro-Quebec Production, Montreal, PQ (Canada). Direction Barrage et Environnement, Unite Barrages et Hydraulique
2010-08-15
Owners and operators of high capacity dams in Quebec have a legal obligation to conduct dam break analysis for each of their dams in order to ensure public safety. This paper described traditional hydraulic methodologies and models used to perform dam break analyses. In particular, it examined the influence of the reservoir drawdown submodel on the numerical results of a dam break analysis. Numerical techniques from the field of fluid mechanics and aerodynamics have provided the basis for developing effective hydrodynamic codes that reduce the level of uncertainties associated with dam-break analysis. A static representation that considers the storage curve was compared with a dynamic representation based on Saint-Venant equations and the real bathymetry of the reservoir. The comparison was based on breach of reservoir, maximum water level, flooded area, and wave arrival time in the valley downstream. The study showed that the greatest difference in attained water level was in the vicinity of the dam, and the difference decreased as the distance from the reservoir increased. The analysis showed that the static representation overestimated the maximum depth and inundated area by as much as 20 percent. This overestimation can be reduced by 30 to 40 percent by using dynamic representation. A dynamic model based on a synthetic trapezoidal reconstruction of the storage curve was used, given the lack of bathymetric data for the reservoir. It was concluded that this model can significantly reduce the uncertainty associated with the static model. 7 refs., 9 tabs., 7 figs.
International Nuclear Information System (INIS)
Belich, H.; Cuba, G.; Paunov, R.
1997-12-01
Affine Toda theories based on simple Lie algebras G are known to possess soliton solutions. Toda solitons were found by Olive, Turok and Underwood within the group-theoretical approach to the integrable field equations. Single solitons are created by exponentials of special elements of the underlying affine Lie algebra which diagonalize the adjoint action of the principal Heisenberg subalgebra. When G is simply laced and level-one representations are considered, the generators of the affine Lie algebra are expressed in terms of the principal Heisenberg oscillators. This representation is known as the vertex operator construction; it plays a crucial role in string theory as well as in conformal field theory. Alternatively, solitons can be generated from the vacuum by dressing transformations. The problem of relating the dressing symmetry to the vertex operator representation of the tau functions for the sine-Gordon model was previously considered by Babelon and Bernard. In the present paper, we extend this relation to an arbitrary A_n^(1) Toda field theory. (author)
National Aeronautics and Space Administration — This article presented a discussion on uncertainty representation and management for model-based prognostics methodologies based on the Bayesian tracking framework...
Born, Jannis; Galeazzi, Juan M; Stringer, Simon M
2017-01-01
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning
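The continuous transformation (CT) learning mechanism described in this abstract can be caricatured with a toy sketch: a purely Hebbian winner-take-all layer exposed to gradually shifting, overlapping input patterns, with no memory trace in the learning rule. The network sizes, learning rate, and bar-shaped stimulus below are illustrative assumptions and are not taken from the VisNet model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs, n_outputs = 100, 10
eta = 0.1
W = rng.random((n_outputs, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalized weight rows

def shifted_pattern(position, width=20):
    """A localized bar of activity that overlaps with its neighbours,
    standing in for successive retinal images of a hand-object pair."""
    x = np.zeros(n_inputs)
    x[position:position + width] = 1.0
    return x

# Present the stimulus at gradually shifting positions. Because
# consecutive patterns overlap spatially, the neuron that wins for one
# position tends to win for the next, and the Hebbian update binds the
# whole transformation sequence onto that neuron.
for pos in range(0, 60, 2):
    x = shifted_pattern(pos)
    y = W @ x
    winner = int(np.argmax(y))               # winner-take-all competition
    W[winner] += eta * y[winner] * x         # purely Hebbian, no trace
    W[winner] /= np.linalg.norm(W[winner])   # weight normalization

# After training, the winners at the two ends of the shift range will
# often coincide, i.e. the response is invariant across the shift.
first = int(np.argmax(W @ shifted_pattern(0)))
last = int(np.argmax(W @ shifted_pattern(58)))
print("winners at ends of shift range:", first, last)
```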
Born, Jannis; Stringer, Simon M.
2017-01-01
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning
Directory of Open Access Journals (Sweden)
Jannis Born
Full Text Available A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior
Functional representations for quantized fields
International Nuclear Information System (INIS)
Jackiw, R.
1988-01-01
This paper provides information on representing transformations in quantum theory. Bosonic quantum field theories: Schrödinger picture; representing transformations in bosonic quantum field theory; two-dimensional conformal transformations (Schrödinger picture representation, Fock space representation, inequivalent Schrödinger picture representations); discussion of self-dual and other models; field theory in de Sitter space. Fermionic quantum field theories: Schrödinger picture; Schrödinger picture representation for two-dimensional conformal transformations; Fock space dynamics in the Schrödinger picture; Fock space evaluation of anomalous current and conformal commutators.
Energy Technology Data Exchange (ETDEWEB)
Kollias, Pavlos [McGill Univ., Montreal, QC (Canada)]
2016-09-06
This is the final report for award DE-SC0007096, "Advancing Clouds Lifecycle Representation in Numerical Models Using Innovative Analysis Methods that Bridge ARM Observations and Models Over a Breadth of Scales" (PI: Pavlos Kollias). The report outlines the main findings of the research conducted under this award in the area of cloud research, from the cloud scale (10-100 m) to the mesoscale (20-50 km).
Luong, Thang
2018-01-22
A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (tens to hundreds of kilometers) to simulate organized convection, principally mesoscale convective systems. With the use of convective parameterization, high precipitation biases in model simulations are typically observed over the peaks of mountain ranges. To address this issue, the Kain-Fritsch (KF) cumulus parameterization scheme has been modified with new diagnostic equations to compute the updraft velocity, the convective available potential energy closure assumption, and the convective trigger function. The scheme has been adapted for use in the Weather Research and Forecasting (WRF) model. A numerical weather prediction-type simulation is conducted for the North American Monsoon Experiment Intensive Observing Period 2, and a regional climate simulation is performed by dynamical downscaling. In both of these applications, there are notable improvements in the WRF model-simulated precipitation due to the better representation of organized, propagating convection. The use of the modified KF scheme for atmospheric model simulations may provide a more computationally economical alternative for improving the representation of organized convection, as compared to convective-permitting simulations at the kilometer scale or a super-parameterization approach.
Yakubova, Gulnoza; Hughes, Elizabeth M; Shinaberry, Megan
2016-07-01
The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the effectiveness of the intervention on the acquisition and maintenance of addition, subtraction, and number comparison skills for four elementary school students with ASD. Findings supported the effectiveness of the intervention in improving skill acquisition and maintenance at a 3-week follow-up. Implications for practice and future research are discussed.
SURVEYING, MODELING AND 3D REPRESENTATION OF A WRECK FOR DIVING PURPOSES: CARGO SHIP “VERA”
Ktistis, A.; Tokmakidis, P.; Papadimitriou, K.
2017-01-01
This paper presents the results from an underwater recording of the stern part of a contemporary cargo-ship wreck. The aim of this survey was to create 3D representations of the wreck, mainly for recreational diving purposes. The key points of this paper are: a) the implementation of the underwater recording at a diving site; b) the reconstruction of a 3D model from data that have been captured by recreational divers; and c) the development of a set of products to be used by the general publi...
An, Gary C; Faeder, James R
2009-01-01
Intracellular signaling/synthetic pathways are being characterized ever more extensively. However, while these pathways can be displayed in static diagrams, in reality they exist with a degree of dynamic complexity that is responsible for heterogeneous cellular behavior. Multiple parallel pathways exist and interact concurrently, limiting the ability to integrate the various identified mechanisms into a cohesive whole. Computational methods have been suggested as a means of concatenating this knowledge to aid in the understanding of overall system dynamics. Since the eventual goal of biomedical research is the identification and development of therapeutic modalities, computational representation must have sufficient detail to facilitate this 'engineering' process. Adding to the challenge, this type of representation must occur in a perpetual state of incomplete knowledge. We present a modeling approach to address this challenge that is both detailed and qualitative. This approach is termed 'dynamic knowledge representation,' and is intended to be an integrated component of the iterative cycle of scientific discovery. BioNetGen (BNG), a software platform for modeling intracellular signaling pathways, was used to model the toll-like receptor 4 (TLR-4) signal transduction cascade. The informational basis of the model was a series of reference papers on modulation of TLR-4 signaling, and some specific primary research papers to aid in the characterization of specific mechanistic steps in the pathway. This model was detailed with respect to the components of the pathway represented, but qualitative with respect to the specific reaction coefficients utilized to execute the reactions. Responsiveness to simulated lipopolysaccharide (LPS) administration was measured by tumor necrosis factor (TNF) production. Simulation runs included evaluation of initial dose-dependent response to LPS administration at 10, 100, 1000 and 10,000, and a subsequent examination of
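The "detailed but qualitative" character of dynamic knowledge representation can be illustrated with a toy dose-response sketch. This is not BioNetGen: the two-equation cascade and every rate constant below are invented for illustration, and only the qualitative dose-dependence matters, not the numbers.

```python
# Toy caricature of the TLR-4 -> TNF cascade: an activated-signal pool
# driven by LPS dose, feeding TNF production. All parameters are
# illustrative assumptions, not measured values.

def simulate_tnf(lps_dose, t_end=50.0, dt=0.01):
    """Euler integration of a qualitative activation -> production cascade."""
    active, tnf = 0.0, 0.0          # activated TLR-4 signal, TNF level
    k_act, k_deact = 0.01, 0.2      # activation / deactivation rates
    k_prod, k_decay = 0.5, 0.1      # TNF production / decay rates
    t = 0.0
    while t < t_end:
        # Saturating activation by LPS (receptor-occupancy-like term).
        d_active = k_act * lps_dose / (1.0 + 0.001 * lps_dose) - k_deact * active
        d_tnf = k_prod * active - k_decay * tnf
        active += dt * d_active
        tnf += dt * d_tnf
        t += dt
    return tnf

# Dose sweep mirroring the simulated LPS doses in the abstract.
for dose in (10, 100, 1000, 10000):
    print(dose, round(simulate_tnf(dose), 2))
```

The saturating activation term gives a monotone but sub-linear dose response, the kind of qualitative behavior such models are checked against.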
Directory of Open Access Journals (Sweden)
Florian Lesaint
2014-02-01
Full Text Available Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in
Lesaint, Florian; Sigaud, Olivier; Flagel, Shelly B.; Robinson, Terry E.; Khamassi, Mehdi
2014-01-01
Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variation in the form of Pavlovian conditioned responses (CRs) and associated dopamine activity, have questioned the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself – a lever – more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol. We suggest that further investigation of factored representations in
Moore, Gaye; Hepworth, Graham; Weiland, Tracey; Manias, Elizabeth; Gerdtz, Marie Frances; Kelaher, Margaret; Dunt, David
2012-02-01
To prospectively evaluate the accuracy of a predictive model to identify homeless people at risk of re-presentation to an emergency department. A prospective cohort analysis utilised one month of data from a Principal Referral Hospital in Melbourne, Australia. All visits involving people classified as homeless were included, excluding those who died. Homelessness was defined as living on the streets, in crisis accommodation, in boarding houses or residing in unstable housing. Rates of re-presentation, defined as the total number of visits to the same emergency department within 28 days of discharge from hospital, were measured. Performance of the risk screening tool was assessed by calculating sensitivity, specificity, positive and negative predictive values and likelihood ratios. Over the study period (April 1, 2009 to April 30, 2009), 3298 presentations from 2888 individuals were recorded. The homeless population accounted for 10% (n=327) of all visits and 7% (n=211) of all patients. A total of 90 (43%) homeless people re-presented to the emergency department. The predictive model included nine variables and achieved 98% (CI, 0.92-0.99) sensitivity and 66% (CI, 0.57-0.74) specificity. The positive predictive value was 68% and the negative predictive value was 98%. The positive likelihood ratio was 2.9 (CI, 2.2-3.7) and the negative likelihood ratio was 0.03 (CI, 0.01-0.13). The high emergency department re-presentation rate for people who were homeless identifies unresolved psychosocial health needs. The emergency department remains a vital access point for homeless people, particularly after hours. The risk screening tool is key to identifying medical and social aspects of a homeless patient's presentation to assist early identification and referral. Copyright © 2012 College of Emergency Nursing Australasia Ltd. Published by Elsevier Ltd. All rights reserved.
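The screening-tool performance measures reported above all derive from a standard 2x2 confusion matrix. The sketch below computes them; the counts are illustrative values chosen to roughly reproduce the reported figures, not the study's actual data:

```python
# Standard diagnostic-accuracy measures from a 2x2 confusion matrix.
# Counts are illustrative, not the study data.

def screening_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)          # sensitivity (true positive rate)
    spec = tn / (tn + fp)          # specificity (true negative rate)
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    lr_pos = sens / (1 - spec)     # positive likelihood ratio
    lr_neg = (1 - sens) / spec     # negative likelihood ratio
    return sens, spec, ppv, npv, lr_pos, lr_neg

sens, spec, ppv, npv, lr_pos, lr_neg = screening_metrics(
    tp=88, fp=42, fn=2, tn=79)
```

With these hypothetical counts the sensitivity is ~98%, specificity ~65%, and the likelihood ratios fall near the reported 2.9 and 0.03.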
Directory of Open Access Journals (Sweden)
Perzyński Konrad
2015-06-01
Full Text Available The developed numerical model of a local nanoindentation test, based on the digital material representation (DMR) concept, has been presented within the paper. First, an efficient algorithm describing the pulsed laser deposition (PLD) process was proposed to realistically recreate the specific morphology of a nanolayered material in an explicit manner. The nanolayered Ti/TiN composite was selected for the investigation. Details of the developed cellular automata model of the PLD process were presented and discussed. Then, the Ti/TiN DMR was incorporated into the finite element software and a numerical model of the nanoindentation test was established. Finally, examples of obtained results presenting the capabilities of the proposed approach were highlighted.
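The idea of a cellular-automata deposition model building a layered morphology can be sketched very simply: particles of alternating materials land on a substrate and stick on top of the current column. This is a toy illustration of the general concept, not the authors' PLD algorithm:

```python
import random

# Toy cellular-automata-style deposition: pulses of alternating
# materials (Ti, TiN) deposit particles onto random columns of a 1-D
# substrate, building a layered digital material representation.
# Entirely illustrative; not the paper's algorithm or parameters.

random.seed(1)
WIDTH, PULSES, PER_PULSE = 20, 6, 40
grid = [[] for _ in range(WIDTH)]            # each column is a stack

for pulse in range(PULSES):
    material = "Ti" if pulse % 2 == 0 else "TiN"   # alternating layers
    for _ in range(PER_PULSE):
        col = random.randrange(WIDTH)
        grid[col].append(material)           # particle sticks on top

total = sum(len(col) for col in grid)        # all particles deposited
```

Columns end up as stacks of alternating material labels, a crude analogue of the nanolayered Ti/TiN morphology the paper recreates explicitly.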
Directory of Open Access Journals (Sweden)
Carlo Bianchini
2016-06-01
Full Text Available It is established that in the design and construction of new buildings, BIM is a fundamental reference, especially when standardization is the typical character of the project. As Architecture, with the management of the entire building process, requires standardization for greater economy, thanks to BIM tools the building process seems to have actually moved to a 2.0 phase; on the contrary, when BIM is applied to historical buildings it still proves inadequate. In this framework, this paper will not discuss the differences between CAD and BIM or the undoubted potential of BIM software from a technical or operational standpoint; we focus instead on the implications of BIM for the Representation disciplines and on the issues connected with its application to the existing built stock, and especially to historic buildings.
Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai
2016-02-19
In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver's EEG signal. Second, wavelet de-noising and down-sampling algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is first applied to the PSD to estimate the driver's vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model.
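The FFT-based PSD feature-extraction step described above can be sketched as follows. The sampling rate, window choice, and the alpha band (commonly used in vigilance studies) are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

# Hedged sketch of FFT-based power spectral density (PSD) band-power
# extraction on a windowed EEG segment; parameters are illustrative.

def band_power(signal, fs, band):
    n = len(signal)
    win = np.hanning(n)
    spectrum = np.fft.rfft(signal * win)
    psd = (np.abs(spectrum) ** 2) / (fs * np.sum(win ** 2))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df            # integrated band power

fs = 256                                   # Hz, hypothetical rate
t = np.arange(0, 2, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t)           # synthetic 10 Hz alpha rhythm
alpha = band_power(eeg, fs, (8, 13))
beta = band_power(eeg, fs, (13, 30))
# The 10 Hz tone puts nearly all power into the alpha band.
```

Band-power ratios of this kind are a common input for the subsequent vigilance classifier.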
SURVEYING, MODELING AND 3D REPRESENTATION OF A WRECK FOR DIVING PURPOSES: CARGO SHIP “VERA”
Directory of Open Access Journals (Sweden)
A. Ktistis
2017-02-01
Full Text Available This paper presents the results from an underwater recording of the stern part of a contemporary cargo-ship wreck. The aim of this survey was to create 3D representations of this wreck mainly for recreational diving purposes. The key points of this paper are: (a) the implementation of the underwater recording at a diving site; (b) the reconstruction of a 3D model from data that have been captured by recreational divers; and (c) the development of a set of products to be used by the general public for the ex situ presentation or for the in situ navigation. The idea behind this project is to define a simple and low-cost procedure for the surveying, modeling and 3D representation of a diving site. The perspective of our team is to repeat the proposed methodology for the documentation and the promotion of other diving sites with cultural features, as well as to train recreational divers in underwater surveying procedures towards public awareness and community engagement in the maritime heritage.
Berardi, D.; Gomez-Casanovas, N.; Hudiburg, T. W.
2017-12-01
Improving the certainty of ecosystem models is essential to ensuring their legitimacy, value, and ability to inform management and policy decisions. With more than a century of research exploring the variables controlling soil respiration, a high level of uncertainty remains in the ability of ecosystem models to accurately estimate respiration under changing climatic conditions. Refining model estimates of soil carbon fluxes is a high priority for climate change scientists to determine whether soils will be carbon sources or sinks in the future. We found that DayCent underestimates heterotrophic respiration by several orders of magnitude for our temperate mixed conifer forest site. While traditional ecosystem models simulate decomposition through first-order kinetics, recent research has found that including microbial mechanisms explains 20 percent more spatial heterogeneity. We modified the DayCent heterotrophic respiration model to include a more mechanistic representation of microbial dynamics and compared the new model with continuous and survey observations from our experimental forest site in the Northern Rockies ecoregion. We also calibrated the model's sensitivity to soil moisture and temperature against our experimental data. We expect to improve the accuracy of the model by 20-30 percent. By using a more representative and calibrated model of soil carbon dynamics, we can better predict feedbacks between climate and soil carbon pools.
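The contrast between first-order decomposition kinetics and a microbially explicit scheme can be illustrated with two toy rate functions. The Michaelis-Menten form below is one common microbial formulation; all parameter values are arbitrary and not taken from DayCent:

```python
# Illustrative contrast between the two heterotrophic-respiration
# formulations discussed above. Parameters are arbitrary placeholders.

def first_order_resp(C, k=0.02):
    """Classical first-order kinetics: flux proportional to pool size C."""
    return k * C

def microbial_resp(C, B, vmax=0.5, km=200.0):
    """Michaelis-Menten microbial scheme: flux limited by microbial
    biomass B and saturating in substrate C."""
    return vmax * B * C / (km + C)

C, B = 1000.0, 10.0
r1 = first_order_resp(C)     # depends only on the carbon pool
r2 = microbial_resp(C, B)    # saturates in C, scales with biomass
```

The key behavioural difference: the microbial flux responds to biomass dynamics and saturates at high substrate, while the first-order flux grows without bound with pool size.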
Alexander, P. M.; LeGrande, A. N.; Fischer, E.; Tedesco, M.; Kelley, M.; Schmidt, G. A.; Fettweis, X.
2017-12-01
Towards achieving coupled simulations between the NASA Goddard Institute for Space Studies (GISS) ModelE2 general circulation model (GCM) and ice sheet models (ISMs), improvements have been made to the representation of the ice sheet surface in ModelE2. These include a sub-grid-scale elevation class scheme, a multi-layer snow model, a time-variable surface albedo scheme, and adjustments to parameterization of sublimation/evaporation. These changes improve the spatial resolution and physical representation of the ice sheet surface such that the surface is represented at a level of detail closer to that of Regional Climate Models (RCMs). We assess the impact of these changes on simulated Greenland Ice Sheet (GrIS) surface mass balance (SMB). We also compare ModelE2 simulations in which winds have been nudged to match the European Center for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis with simulations from the Modèle Atmosphérique Régionale (MAR) RCM forced by the same reanalysis. Adding surface elevation classes results in a much higher spatial resolution representation of the surface necessary for coupling with ISMs, but has a negligible impact on overall SMB. Implementing a variable surface albedo scheme increases melt by 100%, bringing it closer to melt simulated by MAR. Adjustments made to the representation of topography-influenced surface roughness length in ModelE2 reduce a positive bias in evaporation relative to MAR. We also examine the impact of changes to the GrIS surface on regional atmospheric and oceanic climate in coupled ocean-atmosphere simulations with ModelE2, finding a general warming of the Arctic due to a warmer GrIS, and a cooler North Atlantic in scenarios with doubled atmospheric CO2 relative to pre-industrial levels. The substantial influence of changes to the GrIS surface on the oceans and atmosphere highlight the importance of including these processes in the GCM, in view of potential feedbacks between the ice sheet
Schiffler, Ralf
2014-01-01
This book is intended to serve as a textbook for a course in Representation Theory of Algebras at the beginning graduate level. The text has two parts. In Part I, the theory is studied in an elementary way using quivers and their representations. This is a very hands-on approach and requires only basic knowledge of linear algebra. The main tool for describing the representation theory of a finite-dimensional algebra is its Auslander-Reiten quiver, and the text introduces these quivers as early as possible. Part II then uses the language of algebras and modules to build on the material developed before. The equivalence of the two approaches is proved in the text. The last chapter gives a proof of Gabriel’s Theorem. The language of category theory is developed along the way as needed.
Frank, Laurence Emmanuelle
2006-01-01
Feature Network Models (FNM) are graphical structures that represent proximity data in a discrete space with the use of features. A statistical inference theory is introduced, based on the additivity properties of networks and the linear regression framework. Considering features as predictor
Polynomial representations of GLn
Green, James A; Erdmann, Karin
2007-01-01
The first half of this book contains the text of the first edition of LNM volume 830, Polynomial Representations of GLn. This classic account of matrix representations, the Schur algebra, the modular representations of GLn, and connections with symmetric groups, has been the basis of much research in representation theory. The second half is an Appendix, and can be read independently of the first. It is an account of the Littelmann path model for the case gln. In this case, Littelmann's 'paths' become 'words', and so the Appendix works with the combinatorics on words. This leads to the representation theory of the 'Littelmann algebra', which is a close analogue of the Schur algebra. The treatment is self-contained; in particular complete proofs are given of classical theorems of Schensted and Knuth.
DEFF Research Database (Denmark)
Photography not only represents space. Space is produced photographically. Since its inception in the 19th century, photography has brought to light a vast array of represented subjects. Always situated in some spatial order, photographic representations have been operatively underpinned by social … to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological … possibilities, and genre distinctions. Presenting several distinct ways of producing space photographically, this book opens a new and important field of inquiry for photography research.
Karpilovsky, G
1994-01-01
This third volume can be roughly divided into two parts. The first part is devoted to the investigation of various properties of projective characters. Special attention is drawn to spin representations and their character tables and to various correspondences for projective characters. Among other topics, projective Schur index and projective representations of abelian groups are covered. The last topic is investigated by introducing a symplectic geometry on finite abelian groups. The second part is devoted to Clifford theory for graded algebras and its application to the corresponding theory
DEFF Research Database (Denmark)
Rasmussen, Majken Kirkegaard; Petersen, Marianne Graves
2011-01-01
Stereotypic presumptions about gender affect the design process, both in relation to how users are understood and how products are designed. As a way to decrease the influence of stereotypic presumptions in the design process, we propose not to disregard the aspect of gender in the design process …, as the perspective brings valuable insights on different approaches to technology, but instead to view gender through a value lens. Contributing to this perspective, we have developed Value Representations as a design-oriented instrument for staging a reflective dialogue with users. Value Representations …
Dewi, N. R.; Arini, F. Y.
2018-03-01
The main purpose of this research is to develop and produce a Calculus textbook model supported with GeoGebra. The book was designed to enhance students' mathematical problem solving and mathematical representation. There were three stages in this research, i.e. define, design, and develop. The textbook consisted of 6 chapters, each containing an introduction and core materials, including examples and exercises. The textbook development phase began with the early design of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2. The data were analyzed with descriptive statistics. The analysis showed that the Calculus textbook model supported with GeoGebra is valid and fulfills the criteria of practicality.
Vanuytrecht, Eline; Thorburn, Peter J
2017-05-01
Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended, using simple and semi-complex representations of the processes involved. Yet, there is no standard approach to, and often poor documentation of, these developments. This study used a bottom-up approach (starting with the APSIM framework as a case study) to evaluate modelled responses in a consortium of commonly used crop models and illuminate whether variation in responses reflects true uncertainty in our understanding or arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that are causal for intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.
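The simple CO2-response representations the study surveys are often multipliers applied to quantities like stomatal conductance. The sketch below shows one plausible log-linear form; the functional shape and coefficient are placeholders, not any particular model's calibrated response:

```python
import math

# Illustrative CO2-response multiplier of the kind crop models apply to
# stomatal conductance; the coefficient is a placeholder, not a value
# from APSIM or any other model discussed above.

def conductance_factor(co2_ppm, ref_ppm=350.0, beta=0.6):
    """Multiplier on stomatal conductance relative to a reference [CO2];
    declines log-linearly as [CO2] rises."""
    return max(0.0, 1.0 - beta * math.log(co2_ppm / ref_ppm))

f = conductance_factor(700.0)   # reduction at [CO2] doubling
```

With beta=0.6 this gives roughly a 40% conductance reduction at doubling, inside the 35-90% spread the study reports across models; varying beta reproduces that intermodel diversity.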
Energy Technology Data Exchange (ETDEWEB)
Mishra, Umakant; Drewniak, Beth; Jastrow, Julie D.; Matamala, Roser M.; Vitharana, U. W. A.
2017-08-01
Soil properties such as soil organic carbon (SOC) stocks and active-layer thickness are used in earth system models (ESMs) to predict anthropogenic and climatic impacts on soil carbon dynamics, future changes in atmospheric greenhouse gas concentrations, and associated climate changes in the permafrost regions. Accurate representation of the spatial and vertical distribution of these soil properties in ESMs is a prerequisite for reducing existing uncertainty in predicting carbon-climate feedbacks. We compared the spatial representation of SOC stocks and active-layer thicknesses predicted by the Coupled Model Intercomparison Project Phase 5 (CMIP5) ESMs with those predicted from geospatial predictions, based on observation data for the state of Alaska, USA. For the geospatial modeling, we used soil profile observations (585 for SOC stocks and 153 for active-layer thickness) and environmental variables (climate, topography, land cover, and surficial geology types) and generated fine-resolution (50-m spatial resolution) predictions of SOC stocks (to 1-m depth) and active-layer thickness across Alaska. We found a large inter-quartile range (2.5-5.5 m) in predicted active-layer thickness of CMIP5 modeled results and a small inter-quartile range (11.5-22 kg m-2) in predicted SOC stocks. The spatial coefficients of variability of active-layer thickness and SOC stocks were lower in CMIP5 predictions compared to our geospatial estimates when gridded at similar spatial resolutions (24.7% compared to 30% and 29% compared to 38%, respectively). However, prediction errors, when calculated for independent validation sites, were several times larger in ESM predictions compared to geospatial predictions. Primary factors leading to observed differences were (1) lack of spatial heterogeneity in ESM predictions, (2) differences in assumptions concerning environmental controls, and (3) the absence of pedogenic processes in ESM model structures. Our results suggest that efforts to incorporate
Djurfeldt, Mikael
2012-07-01
The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
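The flavour of the algebra, building complex connection patterns from elementary sets via operators, can be mimicked with ordinary set operations. This toy sketch illustrates the idea only; it is not Djurfeldt's actual CSA API:

```python
# Toy connection-set-algebra-style sketch: connection sets are sets of
# (source, target) index pairs, combined with set operators. Mimics the
# flavour of CSA, not the published Python implementation.

def all_to_all(n_pre, n_post):
    """Elementary set: full connectivity."""
    return {(i, j) for i in range(n_pre) for j in range(n_post)}

def one_to_one(n):
    """Elementary set: diagonal connectivity."""
    return {(i, i) for i in range(n)}

# Operators (union, intersection, difference) compose patterns,
# e.g. full connectivity minus self-connections (no autapses).
full = all_to_all(4, 4)
no_self = full - one_to_one(4)
```

In the real algebra such sets can also be infinite and parameterized (e.g. carrying weights or delays), and implementations can enumerate them lazily rather than materializing every pair.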
He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing
2014-12-01
Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.
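A 2-D Gabor filter of the kind whose parameters (wavelength, orientation, envelope width) a PSO/BPSO search would tune can be sketched as follows. The parameter values below are placeholders, not the optimized filters the paper derives per dataset:

```python
import numpy as np

# Sketch of a real-valued 2-D Gabor kernel: a Gaussian envelope
# modulating an oriented cosine carrier. Parameters are placeholders,
# not PSO-optimized values from the paper.

def gabor_kernel(size, wavelength, theta, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

k = gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0)
# A filter bank varies wavelength and theta; iris features are the
# responses of the normalized iris image to each filter.
```

Optimizing wavelength, theta, and sigma per dataset, as the PSO/BPSO scheme does, lets the filter bank match the texture statistics of each iris sensor rather than relying on one fixed basis.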
Brown, Steven; Bilitza, Dieter; Yiǧit, Erdal
2018-06-01
A new monthly ionospheric index, IGNS, is presented to improve the representation of the solar cycle variation of the ionospheric F2 peak plasma frequency, foF2. IGNS is calculated using a methodology similar to the construction of the "global effective sunspot number", IG, given by Liu et al. (1983), but selects ionosonde observations by hemisphere. We incorporated the updated index into the International Reference Ionosphere (IRI) model and compared the foF2 model predictions with global ionospheric observations. We also investigated the influence of the underlying foF2 model on the IG index. IRI has two options for foF2 specification, the CCIR-66 and URSI-88 foF2 models. For the first time, we have calculated IG using URSI-88 and assessed the impact on model predictions. Through a retrospective model-data comparison, results show that the inclusion of the new monthly IGNS index in place of the current 12-month smoothed IG index reduces the foF2 model prediction errors by nearly a factor of two. These results apply to both day-time and nighttime predictions. This is due to an overall improved prediction of foF2 seasonal and solar cycle variations in the different hemispheres.
Hydrological processes and model representation: impact of soft data on calibration
J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda
2015-01-01
Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...
Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi
2012-03-01
We aim at using a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Different from conventional computer-aided diagnosis (CAD) pulmonary emphysema classification methods, in this paper, firstly, the dictionary of textons is learned by applying sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the dictionary are used to construct histograms for texture representation. Finally, classification is performed by using a nearest neighbor classifier with a histogram dissimilarity measure as distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is higher than that of the state-of-the-art method based on basic rotation-invariant local binary pattern histograms and the texture classification method based on texton learning by k-means, which performs almost the best among other approaches in the literature.
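The final classification step, nearest-neighbour matching of texture histograms under a histogram dissimilarity measure, can be sketched as below. Chi-squared distance is used here as one common choice; the paper's exact measure may differ, and the histograms are synthetic:

```python
import numpy as np

# Nearest-neighbour texture classification with a histogram
# dissimilarity measure (chi-squared here; an assumption, not
# necessarily the paper's measure). Histograms are synthetic.

def chi2_dist(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nn_classify(query, train_hists, train_labels):
    dists = [chi2_dist(query, h) for h in train_hists]
    return train_labels[int(np.argmin(dists))]

train = [np.array([0.7, 0.2, 0.1]),     # e.g. "normal" texton histogram
         np.array([0.1, 0.2, 0.7])]     # e.g. "emphysema" histogram
labels = ["normal", "emphysema"]
pred = nn_classify(np.array([0.6, 0.3, 0.1]), train, labels)
```

In the paper the histograms are built from SR coefficients over the learned texton dictionary; the classifier itself stays this simple.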
Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi
2013-03-01
In this paper, we present a texture classification method based on textons learned via sparse representation (SR) with new feature histogram maps for the classification of emphysema. First, an overcomplete dictionary of textons is learned via KSVD learning on image patches of every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas to speed up the dictionary learning process. Second, 3D joint-SR coefficients and intensity histograms of the test images are used for characterizing regions of interest (ROIs), instead of conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with a histogram dissimilarity measure as distance. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs consisting of mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.
International Nuclear Information System (INIS)
Gruber, R.; Degtyarev, L.M.; Kuper, A.; Martynov, A.A.; Medvedev, S.Yu.; Shafranov, V.D.
1996-01-01
Equations for the three-dimensional equilibrium of a plasma are formulated in the poloidal representation. The magnetic field is expressed in terms of the poloidal magnetic flux Ψ and the poloidal electric current F. As a result, three-dimensional equilibrium configurations are analyzed with the help of a set of equations including the elliptical equation for the poloidal flux, the magnetic differential equation for the parallel current, and the equations for the basis vector field b. To overcome the difficulties associated with singularities that can arise in solving the magnetic differential equation at rational toroidal magnetic surfaces, small regulating corrections are introduced into the proposed set of equations. In this case, second-order differential terms with a small parameter appear in the magnetic differential equations. As a result, these equations take the form of elliptical equations. Three versions of regulating corrections are proposed. The equations obtained can be used to develop numerical codes for calculating three-dimensional equilibrium plasma configurations with an island structure.
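For orientation, in the familiar axisymmetric limit the poloidal representation referred to above reduces to the standard Grad-Shafranov form of the field (a well-known textbook relation, not a formula quoted from this paper; the three-dimensional case generalizes the toroidal direction ∇φ to the basis vector field b):

```latex
% Axisymmetric limit: magnetic field through the poloidal flux \Psi
% and the poloidal current function F.
\mathbf{B} = \nabla\varphi \times \nabla\Psi + F\,\nabla\varphi
```

Here Ψ labels the magnetic surfaces and F = F(Ψ) in equilibrium, so that ∇·B = 0 holds identically for both terms.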
Tracking and recognition face in videos with incremental local sparse representation model
Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang
2013-10-01
This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed by employing local sparse appearance and a covariance pooling method. In the subsequent face recognition stage, with the employment of a novel template update strategy that combines incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In the case of the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently achieves a high recognition rate.
Directory of Open Access Journals (Sweden)
Kamil A. Grajski
2016-07-01
Mechanisms underlying the emergence and plasticity of representational discontinuities in the mammalian primary somatosensory cortical representation of the hand are investigated in a computational model. The model consists of an input lattice organized as a three-digit hand forward-connected to a lattice of cortical columns, each of which contains a paired excitatory and inhibitory cell. Excitatory and inhibitory synaptic plasticity of feedforward and lateral connection weights is implemented as a simple covariance rule and competitive normalization. Receptive field properties are computed independently for excitatory and inhibitory cells and compared within and across columns. Within digit representational zones, intracolumnar excitatory and inhibitory receptive field extents are concentric, single-digit, small, and unimodal. Exclusively in representational boundary-adjacent zones, intracolumnar excitatory and inhibitory receptive field properties diverge: excitatory cell receptive fields are single-digit, small, and unimodal; and the paired inhibitory cell receptive fields are bimodal, double-digit, and large. In simulated syndactyly (webbed fingers), boundary-adjacent intracolumnar receptive field properties reorganize to the within-representation type; divergent properties are reacquired following syndactyly release. This study generates testable hypotheses for assessment of cortical laminar-dependent receptive field properties and plasticity within and between cortical representational zones. For computational studies, the present results suggest that concurrent excitatory and inhibitory plasticity may underlie novel emergent properties.
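The combination of a covariance rule with competitive normalization can be sketched in a few lines. This is a generic illustration, not the paper's implementation: the learning rate, lattice sizes, and the use of the within-pattern mean as a stand-in for the time-averaged activity are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 16, 8
W = rng.random((n_out, n_in))
W /= W.sum(axis=1, keepdims=True)   # competitive normalization: unit total weight per cell

def covariance_update(W, pre, post, lr=0.05):
    """Covariance rule: the weight change tracks the co-fluctuation of pre-
    and post-synaptic activity about their means (here approximated by the
    within-pattern mean for brevity); weights are then clipped non-negative
    and renormalized so each cell's total synaptic weight is conserved,
    which implements the competition."""
    dW = lr * np.outer(post - post.mean(), pre - pre.mean())
    W = np.clip(W + dW, 0.0, None)
    return W / W.sum(axis=1, keepdims=True)

for _ in range(100):
    pre = rng.random(n_in)          # random input pattern (stand-in for a tactile stimulus)
    post = W @ pre
    W = covariance_update(W, pre, post)

print(W.sum(axis=1))  # each row still sums to 1 after normalization
```

The normalization step is what makes plasticity competitive: a synapse can only strengthen at the expense of the cell's other synapses.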
International Nuclear Information System (INIS)
Gao, Xiankun; Cui, Yan; Hu, Jianjun; Xu, Guangyin; Yu, Yongchang
2016-01-01
Highlights: • Lambert W-function based exact representation (LBER) is presented for the double diode model (DDM). • The fitness difference between LBER and DDM is verified using reported parameter values. • The proposed LBER can better represent the I–V and P–V characteristics of solar cells. • The parameter extraction difference between LBER and DDM is validated with two algorithms. • The parameter values extracted from LBER are more accurate than those from DDM. - Abstract: Accurate modeling and parameter extraction of solar cells play an important role in the simulation and optimization of PV systems. This paper presents a Lambert W-function based exact representation (LBER) for the traditional double diode model (DDM) of solar cells, and then compares their fitness and parameter extraction performance. Unlike existing works, the proposed LBER is rigorously derived from DDM, and in LBER the coefficients of the Lambert W-function are not extra parameters to be extracted or arbitrary scalars but the vectors of terminal voltage and current of solar cells. The fitness difference between LBER and DDM is objectively validated using the reported parameter values and experimental I–V data of a solar cell and four solar modules from different technologies. The comparison results indicate that, under the same parameter values, the proposed LBER can better represent the I–V and P–V characteristics of solar cells and provide a closer representation to the actual maximum power points of all module types. Two different algorithms are used to compare the parameter extraction performance of LBER and DDM. One is our restart-based bound-constrained Nelder-Mead (rbcNM) algorithm implemented in Matlab, and the other is the reported Rcr-IJADE algorithm executed in Visual Studio. The comparison results reveal that the parameter values extracted from LBER using the two algorithms are always more accurate and robust than those from DDM, despite being more time consuming. As an improved version of DDM, the
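The flavour of a Lambert W-function based exact representation can be shown on the *single* diode model, whose classical explicit solution (the Jain-Kapoor form) is well known; the paper's LBER is the analogous exact form for the double diode model and is specific to the paper. All cell parameter values below are hypothetical.

```python
import numpy as np

def lambert_w(x, iters=40):
    """Principal branch of the Lambert W function for x >= 0
    (small Newton iteration, to avoid a SciPy dependency)."""
    w = np.log1p(x)                      # good initial guess for small x
    for _ in range(iters):
        ew = np.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

def single_diode_current(V, Iph, I0, Rs, Rsh, n, Vt=0.02585):
    """Explicit terminal current I(V) of the single diode model via
    Lambert W: I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1) - (V+I*Rs)/Rsh
    solved exactly for I."""
    a = n * Vt
    theta = (Rs * I0 * Rsh) / (a * (Rs + Rsh)) * np.exp(
        Rsh * (Rs * Iph + Rs * I0 + V) / (a * (Rs + Rsh)))
    return (Rsh * (Iph + I0) - V) / (Rs + Rsh) - (a / Rs) * lambert_w(theta)

# Hypothetical cell parameters; verify that the explicit form satisfies
# the implicit single diode equation.
V, Iph, I0, Rs, Rsh, n = 0.5, 8.0, 1e-9, 0.01, 50.0, 1.3
I = single_diode_current(V, Iph, I0, Rs, Rsh, n)
residual = Iph - I0 * (np.exp((V + I * Rs) / (n * 0.02585)) - 1) \
           - (V + I * Rs) / Rsh - I
print(abs(residual))
```

Because the representation is exact rather than an approximation, the residual of the implicit equation is at the level of floating-point round-off.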
Kajtar, Jules B.; Santoso, Agus; McGregor, Shayne; England, Matthew H.; Baillie, Zak
2018-02-01
The strengthening of the Pacific trade winds in recent decades has been unmatched in the observational record stretching back to the early twentieth century. This wind strengthening has been connected with numerous climate-related phenomena, including accelerated sea-level rise in the western Pacific, alterations to Indo-Pacific ocean currents, increased ocean heat uptake, and a slow-down in the rate of global-mean surface warming. Here we show that models in the Coupled Model Intercomparison Project phase 5 underestimate the observed range of decadal trends in the Pacific trade winds, despite capturing the range in decadal sea surface temperature (SST) variability. Analysis of observational data suggests that tropical Atlantic SST contributes considerably to the Pacific trade wind trends, whereas the Atlantic feedback in coupled models is muted. Atmosphere-only simulations forced by observed SST are capable of recovering the time-variation and the magnitude of the trade wind trends. Hence, we explore whether it is the biases in the mean or in the anomalous SST patterns that are responsible for the under-representation in fully coupled models. Over interannual time-scales, we find that model biases in the patterns of Atlantic SST anomalies are the strongest source of error in the precipitation and atmospheric circulation response. In contrast, on decadal time-scales, the magnitude of the model biases in Atlantic mean SST are directly linked with the trade wind variability response.
Haverd, Vanessa; Cuntz, Matthias; Nieradzik, Lars P.; Harman, Ian N.
2016-09-01
CABLE is a global land surface model, which has been used extensively in offline and coupled simulations. While CABLE performs well in comparison with other land surface models, results are impacted by decoupling of transpiration and photosynthesis fluxes under drying soil conditions, often leading to implausibly high water use efficiencies. Here, we present a solution to this problem, ensuring that modelled transpiration is always consistent with modelled photosynthesis, while introducing a parsimonious single-parameter drought response function which is coupled to root water uptake. We further improve CABLE's simulation of coupled soil-canopy processes by introducing an alternative hydrology model with a physically accurate representation of coupled energy and water fluxes at the soil-air interface, including a more realistic formulation of transfer under atmospherically stable conditions within the canopy and in the presence of leaf litter. The effects of these model developments are assessed using data from 18 stations from the global eddy covariance FLUXNET database, selected to span a large climatic range. Marked improvements are demonstrated, with root mean squared errors for monthly latent heat fluxes and water use efficiencies being reduced by 40 %. Results highlight the important roles of deep soil moisture in mediating drought response and litter in dampening soil evaporation.
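A "parsimonious single-parameter drought response function" is typically a soil-moisture stress factor that scales root water uptake. The sketch below uses a generic power-law form between wilting point and field capacity; the exact functional form and parameter used in CABLE are specific to the paper, so everything here (names, values, the exponent q as the single parameter) is an illustrative assumption.

```python
import numpy as np

def drought_response(theta, theta_w, theta_f, q=1.0):
    """Relative root water uptake as a function of volumetric soil moisture
    theta, ramping from 0 at wilting point theta_w to 1 at field capacity
    theta_f. The exponent q is the single shape parameter controlling how
    sharply uptake (and hence transpiration) shuts down as the soil dries."""
    s = np.clip((theta - theta_w) / (theta_f - theta_w), 0.0, 1.0)
    return s ** q

theta = np.linspace(0.05, 0.35, 7)
print(drought_response(theta, theta_w=0.1, theta_f=0.3, q=2.0))
```

Coupling such a factor to both stomatal conductance and root water uptake is what keeps modelled transpiration consistent with modelled photosynthesis under drying soil.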
Monteghetti, Florian; Matignon, Denis; Piot, Estelle; Pascal, Lucas
2016-09-01
A methodology to design broadband time-domain impedance boundary conditions (TDIBCs) from the analysis of acoustical models is presented. The derived TDIBCs are recast exclusively as first-order differential equations, well-suited for high-order numerical simulations. Broadband approximations are yielded from an elementary linear least squares optimization that is, for most models, independent of the absorbing material geometry. This methodology relies on a mathematical technique referred to as the oscillatory-diffusive (or poles and cuts) representation, and is applied to a wide range of acoustical models, drawn from duct acoustics and outdoor sound propagation, which covers perforates, semi-infinite ground layers, as well as cavities filled with a porous medium. It is shown that each of these impedance models leads to a different TDIBC. Comparison with existing numerical models, such as multi-pole or extended Helmholtz resonator, provides insights into their suitability. Additionally, the broadly-applicable fractional polynomial impedance models are analyzed using fractional calculus.
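The core numerical step described above, approximating an impedance with a branch cut by a few discrete real poles whose weights come from an elementary linear least squares fit, can be sketched as follows. The target impedance z(s) = sqrt((s + c)/s) is a stand-in for a semi-infinite ground layer model; the pole placement and frequency band are assumptions, not values from the paper.

```python
import numpy as np

# Ground-layer-like impedance with a branch cut on (-c, 0):
# z(s) = sqrt((s + c)/s). Its oscillatory-diffusive representation is
# z(s) = 1 + a sum/integral of weights over poles inside the cut; here we
# approximate the cut with a few discrete real poles and fit the weights
# by linear least squares on the imaginary axis.
c = 100.0
omega = np.logspace(0, 4, 400)
s = 1j * omega
z_target = np.sqrt((s + c) / s)

poles = -np.logspace(-2, 2, 12)           # real poles spread over the cut
A = np.column_stack([np.ones_like(s), 1.0 / (s[:, None] - poles[None, :])])
mu, *_ = np.linalg.lstsq(A, z_target, rcond=None)

rel_err = np.linalg.norm(A @ mu - z_target) / np.linalg.norm(z_target)
print(f"relative fit error: {rel_err:.2e}")
```

Each fitted pole p_k then contributes one auxiliary first-order ODE state (phi_k' = p_k*phi_k + input), which is exactly the first-order differential form the TDIBCs are recast into.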
Lacot, Emilie; Vautier, Stéphane; Kőhler, Stefan; Pariente, Jérémie; Martin, Chris B; Puel, Michèle; Lotterie, Jean-Albert; Barbeau, Emmanuel J
2017-09-01
Although it is known that medial temporal lobe (MTL) structures support declarative memory, the fact these structures have different architectonics and circuitry suggests they may also play different functional roles. Selective lesions of MTL structures offer an opportunity to understand these roles. We report, in this study, on JMG, a patient who presents highly unusual lesions that completely affected all MTL structures except for the right hippocampus and parts of neighbouring medial parahippocampal cortex. We first demonstrate that JMG shows preserved recall for visual material on 5 experimental tasks. This finding suggests that his right hippocampus is functional, even though it appears largely disconnected from most of its MTL afferents. In contrast, JMG performed very poorly, as compared to control subjects, on 7 tasks of visual recognition memory for single items. Although he sometimes performed above chance, neither familiarity nor recollection appeared fully preserved. These results indicate that extrahippocampal structures, damaged bilaterally in JMG, perform critical operations for item recognition; and that the hippocampus cannot take over that role, including recollection, when these structures are largely damaged. Finally, in a set of 3 recognition memory tasks with scenes as stimuli, JMG performed at the level of control participants and obtained normal indices of familiarity and recollection. Overall, our findings suggest that the right hippocampus and remnants of parahippocampal cortex can support recognition memory for scenes in the absence of preserved item-recognition memory. The patterns of dissociations, which we report in the present study, provide support for a representational account of the functional organization of MTL structures. Copyright © 2017 Elsevier Ltd. All rights reserved.
An object-oriented forest landscape model and its representation of tree species
Hong S. He; David J. Mladenoff; Joel Boeder
1999-01-01
LANDIS is a forest landscape model that simulates the interaction of large landscape processes and forest successional dynamics at the tree species level. We discuss how object-oriented design (OOD) approaches such as modularity, abstraction and encapsulation are integrated into the design of LANDIS. We show that using OOD approaches, model decisions...
An Analysis of the Educational Value of Low-Fidelity Anatomy Models as External Representations
Chan, Lap Ki; Cheng, Maurice M. W.
2011-01-01
Although high-fidelity digital models of human anatomy based on actual cross-sectional images of the human body have been developed, reports on the use of physical models in anatomy teaching continue to appear. This article aims to examine the common features shared by these physical models and analyze their educational value based on the…
Tangible Models and Haptic Representations Aid Learning of Molecular Biology Concepts
Johannes, Kristen; Powers, Jacklyn; Couper, Lisa; Silberglitt, Matt; Davenport, Jodi
2016-01-01
Can novel 3D models help students develop a deeper understanding of core concepts in molecular biology? We adapted 3D molecular models, developed by scientists, for use in high school science classrooms. The models accurately represent the structural and functional properties of complex DNA and virus molecules, and provide visual and haptic…
Directory of Open Access Journals (Sweden)
Chao Yang
2018-03-01
An accurate and comprehensive representation of an observation task is a prerequisite in disaster monitoring for reliable sensor observation planning. However, extant disaster event or task information models do not fully satisfy the observation requirements for the accurate and efficient planning of remote-sensing satellite sensors. By considering the modeling requirements for a disaster observation task, we propose an observation task chain (OTChain) representation model that includes four basic OTChain segments and an eight-tuple observation task metadata description structure. A prototype system, namely OTChainManager, is implemented to provide functions for modeling, managing, querying, and visualizing observation tasks. In the case of flood water monitoring, we use a flood remote-sensing satellite sensor observation task for the experiment. The results show that the proposed OTChain representation model can be used to model flood disaster observation tasks and their processes. By querying and visualizing the flood observation task instances in the Jinsha River Basin, the proposed model can effectively express observation task processes, represent personalized observation constraints, and plan global remote-sensing satellite sensor observations. Compared with typical observation task information models or engines, the proposed OTChain representation model satisfies the information demands of the OTChain and its processes and supports the development of long time-series sensor observation schemes.
A Model for the representation of Speech Signals in Normal and Impaired Ears
DEFF Research Database (Denmark)
Christiansen, Thomas Ulrich
2004-01-01
A model of the human auditory periphery, ranging from the outer ear to the auditory nerve, was developed. The model consists of the following components: outer ear transfer function, middle ear transfer function, basilar membrane velocity, inner hair cell receptor potential, inner hair cell probability of neurotransmitter release, and auditory nerve fibre refractoriness. The model builds on previously published models; however, parameters for basilar membrane velocity and inner hair cell probability of neurotransmitter release were successfully fitted to psychophysical and physiological data. Impaired hearing was modelled as a combination of outer and inner hair cell loss. The percentage of dead inner hair cells was calculated based on a new computational method relating auditory nerve fibre thresholds to behavioural thresholds. Finally, a model of the entire auditory nerve fibre population...
Representation of tropical deep convection in atmospheric models – Part 2: Tracer transport
Directory of Open Access Journals (Sweden)
C. R. Hoyle
2011-08-01
The tropical transport processes of 14 different models or model versions were compared within the framework of the SCOUT-O3 (Stratospheric-Climate Links with Emphasis on the Upper Troposphere and Lower Stratosphere) project. The tested models range from the regional to the global scale, and include numerical weather prediction (NWP), chemical transport, and chemistry-climate models. Idealised tracers were used in order to prevent the models' chemistry schemes from influencing the results substantially, so that the effects of modelled transport could be isolated. We find large differences in the vertical transport of very short-lived tracers (with a lifetime of 6 h) within the tropical troposphere. Peak convective outflow altitudes range from around 300 hPa to almost 100 hPa among the different models, and the upper tropospheric tracer mixing ratios differ by up to an order of magnitude. The timing of convective events is found to differ between the models, even among those which source their forcing data from the same NWP model (ECMWF). The differences are less pronounced for longer-lived tracers; however, they could have implications for modelling the halogen burden of the lowermost stratosphere through transport of species such as bromoform, or of short-lived hydrocarbons, into the lowermost stratosphere. The modelled tracer profiles are strongly influenced by the convective transport parameterisations, and different boundary layer mixing parameterisations also have a large impact on the modelled tracer profiles. Preferential locations for rapid transport from the surface into the upper troposphere are similar in all models, and are mostly concentrated over the western Pacific, the Maritime Continent and the Indian Ocean. In contrast, the models do not indicate that upward transport is highest over western Africa.
Uniqueness of the Fock representation of the Gowdy S1 x S2 and S3 models
International Nuclear Information System (INIS)
Cortez, Jeronimo; Marugan, Guillermo A Mena; Velhinho, Jose M
2008-01-01
After a suitable gauge fixing, the local gravitational degrees of freedom of the Gowdy S1 x S2 and S3 cosmologies are encoded in an axisymmetric field on the sphere S2. Recently, it has been shown that a standard field parametrization of these reduced models admits no Fock quantization with a unitary dynamics. This lack of unitarity is surpassed by a convenient redefinition of the field and the choice of an adequate complex structure. The result is a Fock quantization where both the dynamics and the SO(3)-symmetries of the field equations are unitarily implemented. The present work proves that this Fock representation is in fact unique inasmuch as, up to equivalence, there exists no other possible choice of SO(3)-invariant complex structure leading to a unitary implementation of the time evolution.
Modeling with a Conceptual Representation: Is It Necessary? Does It Work?
Directory of Open Access Journals (Sweden)
Rebecca C. Jordan
2017-04-01
In response to recent educational imperatives in the United States, modeling and systems thinking have been identified as critical for science learning. In this paper, we investigate models in the classroom from two important perspectives: (1) the teacher perspective, to understand how teachers perceive and use models in the classroom, and (2) the student perspective, to understand how students use model-based reasoning to represent their understanding in a classroom setting. Qualitative data collected from 19 teachers who attended a professional development workshop in the northeastern United States indicate that while teachers see the value in teaching students to think with models (i.e., during inquiry practices), they tend to use models mostly as communication tools in the classroom. Quantitative data collected on the modeling practices of 42 middle school students, who worked collaboratively in small groups (4–5 students) using a computer modeling program, indicate that students tended to engage in more mechanistic and function-related thinking over time as they reasoned about a complex system. Furthermore, students had a typified trajectory of first adding and then paring down ideas in their models. Implications for science education are discussed.
Directory of Open Access Journals (Sweden)
Juan Manuel Galeazzi
2015-12-01
Neurons that respond to visual targets in a hand-centred frame of reference have been found within various areas of the primate brain. We investigate how hand-centred visual representations may develop in a neural network model of the primate visual system called VisNet, when the model is trained on images of the hand seen against natural visual scenes. The simulations show how such neurons may develop through a biologically plausible process of unsupervised competitive learning and self-organisation. In an advance on our previous work, the visual scenes consisted of multiple targets presented simultaneously with respect to the hand. Three experiments are presented. First, VisNet was trained with computerised images consisting of a realistic image of a hand and a variety of natural objects, presented against different textured backgrounds during training. The network was then tested with just one textured object near the hand in order to verify whether the output cells were capable of building hand-centred representations with a single localised receptive field. We explain the underlying principles of the statistical decoupling that allows the output cells of the network to develop single localised receptive fields even when the network is trained with multiple objects. In a second simulation we examined how some of the cells with hand-centred receptive fields decreased their shape selectivity and started responding to a localised region of hand-centred space as the number of objects presented in overlapping locations during training increased. Lastly, we explored the same learning principles by training the network with natural visual scenes collected by volunteers. These results provide an important step in showing how single, localised, hand-centred receptive fields could emerge under more ecologically realistic visual training conditions.
Parameterizing Bayesian network Representations of Social-Behavioral Models by Expert Elicitation
Energy Technology Data Exchange (ETDEWEB)
Walsh, Stephen J.; Dalton, Angela C.; Whitney, Paul D.; White, Amanda M.
2010-05-23
Bayesian networks provide a general framework with which to model many natural phenomena. The mathematical nature of Bayesian networks enables a plethora of model validation and calibration techniques: e.g. parameter estimation, goodness-of-fit tests, and diagnostic checking of the model assumptions. However, they are not free of shortcomings. Parameter estimation from relevant extant data is a common approach to calibrating the model parameters. In practice it is not uncommon to find oneself lacking adequate data to reliably estimate all model parameters. In this paper we present the early development of a novel application of conjoint analysis as a method for eliciting and modeling expert opinions, and using the results in a methodology for calibrating the parameters of a Bayesian network.
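Once expert-elicited numbers are in hand, they populate the network's conditional probability tables, and inference proceeds as usual. A minimal sketch with a two-parent binary network; all probabilities below are hypothetical placeholders for elicited values, and the conjoint-analysis elicitation step itself is not reproduced here.

```python
import itertools

# Two binary parents (A, B) -> child C. Expert elicitation (e.g. via
# conjoint-style questions) supplies the priors for A and B and the
# conditional table P(C=1 | A, B). All numbers are hypothetical.
p_A = 0.3
p_B = 0.6
p_C = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.5, (1, 1): 0.9}

def joint(a, b, c):
    """Joint probability P(A=a, B=b, C=c) under the network factorization."""
    pa = p_A if a else 1 - p_A
    pb = p_B if b else 1 - p_B
    pc = p_C[(a, b)] if c else 1 - p_C[(a, b)]
    return pa * pb * pc

# Posterior P(A=1 | C=1) by enumeration over the hidden variable B.
num = sum(joint(1, b, 1) for b in (0, 1))
den = sum(joint(a, b, 1) for a, b in itertools.product((0, 1), repeat=2))
print(num / den)
```

Diagnostic checks of the elicited model (e.g. whether such posteriors match expert intuition on held-out scenarios) are one way to validate the calibration.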
On the representability of complete genomes by multiple competing finite-context (Markov) models.
Directory of Open Access Journals (Sweden)
Armando J Pinho
A finite-context (Markov) model of order k yields the probability distribution of the next symbol in a sequence of symbols, given the recent past up to depth k. Markov modeling has long been applied to DNA sequences, for example to find gene-coding regions. With the first studies came the discovery that DNA sequences are non-stationary: distinct regions require distinct model orders. Since then, Markov and hidden Markov models have been extensively used to describe the gene structure of prokaryotes and eukaryotes. However, to our knowledge, a comprehensive study of the potential of Markov models to describe complete genomes is still lacking. We address this gap in this paper. Our approach relies on (i) multiple competing Markov models of different orders; (ii) careful programming techniques that allow orders as large as sixteen; (iii) adequate inverted repeat handling; and (iv) probability estimates suited to the wide range of context depths used. To measure how well a model fits the data at a particular position in the sequence we use the negative logarithm of the probability estimate at that position. This measure yields information profiles of the sequence, which are of independent interest. The average over the entire sequence, which amounts to the average number of bits per base needed to describe the sequence, is used as a global performance measure. Our main conclusion is that, from the probabilistic or information-theoretic point of view and according to this performance measure, multiple competing Markov models explain entire genomes almost as well as, or even better than, state-of-the-art DNA compression methods, such as XM, which rely on very different statistical models. This is surprising, because Markov models are local (short-range), in contrast with the statistical models underlying other methods, which exploit the extensive data repetitions in DNA sequences and therefore have a non-local character.
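The information profile described above, the per-position negative log probability under a finite-context model, is easy to sketch for a single model of one order; the paper combines several competing orders and uses more careful probability estimators, so the Laplace estimator and on-line counting below are simplifying assumptions.

```python
import math
from collections import defaultdict

def information_profile(seq, k=2, alphabet="ACGT"):
    """Per-symbol information content (bits) under an order-k finite-context
    model whose counts are accumulated on-line as the sequence is scanned.
    Contexts shorter than k occur only at the start of the sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    profile = []
    for i in range(len(seq)):
        ctx = seq[max(0, i - k):i]
        c = counts[ctx]
        total = sum(c.values())
        p = (c[seq[i]] + 1) / (total + len(alphabet))  # Laplace estimator
        profile.append(-math.log2(p))                  # bits for this base
        c[seq[i]] += 1
    return profile

seq = "ACGTACGTACGTACGTACGT"
prof = information_profile(seq, k=2)
print(sum(prof) / len(prof))  # average bits per base
```

On this highly repetitive toy sequence the average drops below the 2 bits/base of a uniform model, which is exactly the global performance measure the paper uses on whole genomes.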
Umakanth, U.; Kesarkar, Amit P.; Attada, Raju; Vijaya Bhaskar Rao, S.
2015-01-01
combinations of Grell (G) and Emanuel (E) cumulus schemes, namely RegCM-EG, RegCM-EE and RegCM-GE, have been used. The model is initialized on 1 January 2000 for a 13-year continuous simulation at a spatial resolution of 50 km. The models reasonably simulate
Representations of Chemical Bonding Models in School Textbooks--Help or Hindrance for Understanding?
Bergqvist, Anna; Drechsler, Michal; De Jong, Onno; Rundgren, Shu-Nu Chang
2013-01-01
Models play an important and central role in science as well as in science education. Chemical bonding is one of the most important topics in upper secondary school chemistry, and this topic is dominated by the use of models. In the past decade, research has shown that chemical bonding is a topic that students find difficult, and therefore, a wide…
Tuning of methods for offset free MPC based on ARX model representations
DEFF Research Database (Denmark)
Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay
2010-01-01
In this paper we investigate model predictive control (MPC) based on ARX models. ARX models can be identified from data using convex optimization techniques and are linear in the system parameters. Compared to other model parameterizations, this feature is an advantage in embedded applications for robust and automatic system identification. Standard MPC is not able to reject a sustained, unmeasured, non-zero-mean disturbance and will therefore not provide offset-free tracking. Offset-free tracking can be guaranteed for this type of disturbance if Δ variables are used or if the state space is extended with a disturbance model state. The relation between the base case and the two extended methods is illustrated, which provides good understanding and a platform for discussing tuning for good closed-loop performance.
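Linearity in the parameters is what makes ARX identification a convex (in fact, ordinary least squares) problem. A minimal sketch on a simulated first-order system; the system coefficients, noise level, and data length are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# True ARX(1,1) system: y[t] = a*y[t-1] + b*u[t-1] + e[t]
a_true, b_true = 0.8, 0.5
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.01 * rng.standard_normal()

# Stack regressors: because the model is linear in (a, b), identification
# reduces to a least-squares problem with a unique global optimum.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # estimates of (a, b)
```

Offset-free MPC then builds on such a model either by working in Δu/Δy variables or by augmenting the state with an estimated disturbance, as the abstract describes.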
Simakov, Nikolay A.
2010-01-01
A soft repulsion (SR) model of short-range interactions between mobile ions and protein atoms is introduced in the framework of a continuum representation of the protein and solvent. The Poisson-Nernst-Planck (PNP) theory of ion transport through biological channels is modified to incorporate this soft-wall protein model. Two sets of SR parameters are introduced: the first is parameterized for all essential amino acid residues using all-atom molecular dynamics simulations; the second is a truncated Lennard-Jones potential. We have further designed an energy-based algorithm for the determination of the ion-accessible volume, which is appropriate for a particular system discretization. The effects of these models of short-range interaction were tested by computing current-voltage characteristics of the α-hemolysin channel. The introduced SR potentials significantly improve prediction of channel selectivity. In addition, we studied the effect of the choice of some space-dependent diffusion coefficient distributions on the predicted current-voltage properties. We conclude that the diffusion coefficient distributions largely affect total currents and have little effect on rectification, selectivity or reversal potential. The PNP-SR algorithm is implemented in a new efficient parallel Poisson, Poisson-Boltzmann and PNP equation solver, also incorporated in the graphical molecular modeling package HARLEM. PMID:21028776
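The second SR parameter set, a truncated Lennard-Jones potential, has a standard purely repulsive form (truncated and shifted at the potential minimum, so that only the soft wall remains). The sketch below shows that generic form; the per-residue parameters fitted in the paper are not reproduced, and sigma/eps values here are placeholders.

```python
import numpy as np

def truncated_lj(r, sigma, eps, r_cut=None):
    """Truncated, shifted Lennard-Jones potential used as a soft repulsive
    wall: the LJ curve is shifted up by its well depth and cut at its
    minimum, leaving a smooth, purely repulsive short-range interaction
    between a mobile ion and a protein atom."""
    if r_cut is None:
        r_cut = 2 ** (1 / 6) * sigma          # location of the LJ minimum
    r = np.asarray(r, dtype=float)
    lj = lambda x: 4 * eps * ((sigma / x) ** 12 - (sigma / x) ** 6)
    return np.where(r < r_cut, lj(r) - lj(r_cut), 0.0)

r = np.linspace(0.8, 2.0, 5)                  # distances in units of sigma
print(truncated_lj(r, sigma=1.0, eps=0.5))
```

Because the potential is finite for r > 0, it gives a "soft wall" rather than a hard exclusion boundary, which is the point of the SR model within the continuum PNP framework.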
Directory of Open Access Journals (Sweden)
Martin Gugat
2012-05-01
Compressible squeeze film damping is a phenomenon of great importance for micromachines. For example, for the optimal design of an electrostatically actuated micro-cantilever mass sensor that operates in air, it is essential to have a model for the system behavior that can be evaluated efficiently. An analytical model that is based upon a solution of the linearized Reynolds equation has been given by R.B. Darling. In this paper we explain how some infinite sums that appear in Darling’s model can be evaluated analytically. As an example of applications of these closed form representations, we compute an approximation for the critical frequency where the spring component of the reaction force on the microplate, due to the motion through the air, is equal to a certain given multiple of the damping component. We also show how some double series that appear in the model can be reduced to a single infinite series that can be approximated efficiently.
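The flavour of replacing an infinite sum by a closed form can be shown with a classical identity of the same family as those arising in such series solutions (the specific sums in Darling's model are not reproduced here): sum over n >= 1 of 1/(n^2 + a^2) equals (pi*a*coth(pi*a) - 1)/(2*a^2).

```python
import math

def series_numeric(a, terms=200000):
    """Brute-force partial sum of sum_{n>=1} 1/(n^2 + a^2)."""
    return sum(1.0 / (n * n + a * a) for n in range(1, terms + 1))

def series_closed_form(a):
    """Closed form of the same sum: (pi*a*coth(pi*a) - 1) / (2*a^2),
    a classical identity of the kind used to collapse infinite sums
    in series solutions of the linearized Reynolds equation."""
    return (math.pi * a / math.tanh(math.pi * a) - 1.0) / (2.0 * a * a)

a = 1.7
print(series_closed_form(a), series_numeric(a))
```

The closed form is exact and costs one evaluation, whereas the slowly converging tail of the direct sum (on the order of 1/N after N terms) makes brute-force truncation expensive; this is precisely the efficiency gain the abstract refers to.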
Energy Technology Data Exchange (ETDEWEB)
Ehleringer, James [Univ. of Utah, Salt Lake City, UT (United States). Dept. of Biology; Randerson, James [Univ. of California, Irvine, CA (United States); Lai, Chun-Ta [San Diego State Univ., CA (United States)
2016-02-16
The objective of the proposed research was to collect data and develop models to improve our understanding of the role of drought and fire impacts on the terrestrial carbon cycle in the western US, including impacts associated with urban systems on regional carbon cycles. Using the data we collected and a synthesis of other measurements, we developed new ways (a) to evaluate the representation of drought stress and fire emissions in the Community Land Model, (b) to model net ecosystem exchange by combining ground-level atmospheric observations with boundary layer theory, (c) to model upstream impacts of fire and fossil fuel emissions on atmospheric carbon dioxide observations, and (d) to model carbon dioxide observations within urban systems and at the urban-wildland interfaces of forest ecosystems.
Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
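The distinction drawn above, between the distribution that characterizes our *estimate* of remaining useful life and the (unknown, fixed) true remaining useful life, can be illustrated by propagating a filter's state uncertainty to time-of-failure. The linear damage growth model, the Gaussian state estimate, and all numbers below are hypothetical; this is a generic Monte Carlo sketch, not the article's method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Kalman-style state estimate of a damage variable: mean and standard
# deviation for the current damage level and its growth rate per cycle.
d_mean, d_std = 0.6, 0.05        # current damage (failure at d = 1.0)
r_mean, r_std = 0.02, 0.004      # damage growth per cycle

# Propagate the *state estimate's* uncertainty to time-of-failure by
# sampling. The resulting distribution characterizes our estimate of RUL,
# not the true RUL, which is a fixed but unknown quantity.
n = 10000
d = rng.normal(d_mean, d_std, n)
r = np.clip(rng.normal(r_mean, r_std, n), 1e-6, None)  # guard against r <= 0
rul = np.clip((1.0 - d) / r, 0.0, None)                # cycles to d = 1.0

print(np.percentile(rul, [5, 50, 95]))  # credible interval on estimated RUL
```

Decision-making then operates on percentiles of this estimated-RUL distribution (e.g. scheduling maintenance before the 5th percentile), which is where the interpretation distinction matters in practice.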
Wang, Q. J.; Robertson, D. E.; Haines, C. L.
2009-02-01
Irrigation is important to many agricultural businesses but also has implications for catchment health. A considerable body of knowledge exists on how irrigation management affects farm business and catchment health. However, this knowledge is fragmentary; is available in many forms such as qualitative and quantitative; is dispersed in scientific literature, technical reports, and the minds of individuals; and is of varying degrees of certainty. Bayesian networks allow the integration of dispersed knowledge into quantitative systems models. This study describes the development, validation, and application of a Bayesian network model of farm irrigation in the Shepparton Irrigation Region of northern Victoria, Australia. In this first paper we describe the process used to integrate a range of sources of knowledge to develop a model of farm irrigation. We describe the principal model components and summarize the reaction to the model and its development process by local stakeholders. Subsequent papers in this series describe model validation and the application of the model to assess the regional impact of historical and future management intervention.
Kantzas, Euripides; Quegan, Shaun
2015-04-01
Fire constitutes a violent and unpredictable pathway of carbon from the terrestrial biosphere into the atmosphere. Despite fire emissions being in many biomes of similar magnitude to that of Net Ecosystem Exchange, even the most complex Dynamic Vegetation Models (DVMs) embedded in IPCC General Circulation Models poorly represent fire behavior and dynamics, a fact which still remains understated. As DVMs operate on a deterministic, grid cell-by-grid cell basis they are unable to describe a host of important fire characteristics such as its propagation, magnitude of area burned and stochastic nature. Here we address these issues by describing a model-independent methodology which assimilates Earth Observation (EO) data by employing image analysis techniques and algorithms to offer a realistic fire disturbance regime in a DVM. This novel approach, with minimum model restructuring, manages to retain the Fire Return Interval produced by the model whilst assigning pragmatic characteristics to its fire outputs thus allowing realistic simulations of fire-related processes such as carbon injection into the atmosphere and permafrost degradation. We focus our simulations in the Arctic and specifically Canada and Russia and we offer a snippet of how this approach permits models to engage in post-fire dynamics hitherto absent from any other model regardless of complexity.
Analysis of the industrial sector representation in the Fossil2 energy-economic model
International Nuclear Information System (INIS)
Wise, M.A.; Woodruff, M.G.; Ashton, W.B.
1992-08-01
The Fossil2 energy-economic model is used by the US Department of Energy (DOE) for a variety of energy and environmental policy analyses. A number of improvements to the model are under way or are being considered. This report was prepared by the Pacific Northwest Laboratory (PNL) to provide a clearer understanding of the current industrial sector module of Fossil2 and to explore strategies for improving it. The report includes a detailed description of the structure and decision logic of the industrial sector module, along with results from several simulation exercises to demonstrate the behavior of the module in different policy scenarios and under different values of key model parameters. The cases were run with the Fossil2 model at PNL using the National Energy Strategy Actions Case of 1991 as the point of departure. The report also includes a discussion of suggested industrial sector module improvements. These improvements include changes in the way the current model is used; on- and off-line adjustments to some of the model's parameters; and significant changes to include more detail on the industrial processes, technologies, and regions of the country being modeled. The potential benefits and costs of these changes are also discussed.
A representation theory for a class of vector autoregressive models for fractional processes
DEFF Research Database (Denmark)
Johansen, Søren
2008-01-01
Based on an idea of Granger (1986), we analyze a new vector autoregressive model defined from the fractional lag operator 1-(1-L)^{d}. We first derive conditions in terms of the coefficients for the model to generate processes which are fractional of order zero. We then show that if there is a unit root, the model generates a fractional process X(t) of order d, d>0, for which there are vectors β so that β'X(t) is fractional of order d-b, 0...
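The model class this abstract describes appears to be the fractionally cointegrated VAR. One common way of writing it, sketched here in a form consistent with the abstract's operator (the exact lag indexing in the paper may differ), is:

```latex
\Delta^{d} X_t \;=\; \alpha \beta' \, L_b \, \Delta^{d-b} X_t
  \;+\; \sum_{i=1}^{k} \Gamma_i \, L_b^{i} \, \Delta^{d} X_t \;+\; \varepsilon_t,
\qquad L_b \;=\; 1-(1-L)^{b},
```

where the property that β'X(t) is fractional of order d-b is the cofractional relation referred to in the truncated final sentence.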
Personality and Cultural Modeling for Agent-Based Representation of a Terrorist Cell, Phase 1
National Research Council Canada - National Science Library
Hogan, C. M; Van Houten, Robert A; La, Nini
2003-01-01
This report describes the research into the use of personality, cultural and socio-political modeling in order to provide a robust asymmetric opponent for Military Operation in Urban Terrain training...
The Aggregate Representation of Terrestrial Land Covers Within Global Climate Models (GCM)
Shuttleworth, W. James; Sorooshian, Soroosh
1996-01-01
This project had four initial objectives: (1) to create a realistic coupled surface-atmosphere model to investigate the aggregate description of heterogeneous surfaces; (2) to develop a simple heuristic model of surface-atmosphere interactions; (3) using the above models, to test aggregation rules for a variety of realistic cover and meteorological conditions; and (4) to reconcile biosphere-atmosphere transfer scheme (BATS) land covers with those that can be recognized from space. Our progress in meeting these objectives can be summarized as follows. Objective 1: The first objective was achieved in the first year of the project by coupling the Biosphere-Atmosphere Transfer Scheme (BATS) with a proven two-dimensional model of the atmospheric boundary layer. The resulting model, BATS-ABL, is described in detail in a Masters thesis and reported in a paper in the Journal of Hydrology. Objective 2: The potential value of the heuristic model was re-evaluated early in the project and a decision was made to focus subsequent research around modeling studies with the BATS-ABL model. The value of using such coupled surface-atmosphere models in this research area was further confirmed by the success of the Tucson Aggregation Workshop. Objective 3: There was excellent progress in using the BATS-ABL model to test aggregation rules for a variety of realistic covers. The foci of attention have been the site of the First International Satellite Land Surface Climatology Project Field Experiment (FIFE) in Kansas and one of the study sites of the Anglo-Brazilian Amazonian Climate Observational Study (ABRACOS) near the city of Manaus, Amazonas, Brazil. These two sites were selected because of the ready availability of relevant field data to validate and initiate the BATS-ABL model. The results of these tests are given in a Masters thesis, and reported in two papers. Objective 4: Progress far exceeded original expectations not only in reconciling BATS land covers with those that can be
Walker, R. L., II; Knepley, M.; Aminzadeh, F.
2017-12-01
We seek to use the tools provided by the Portable, Extensible Toolkit for Scientific Computation (PETSc) to represent a multiphysics problem in a form that decouples the element definition from the fully coupled equation through the use of pointwise functions that imitate the strong form of the governing equation. This allows individual physical processes to be expressed as independent kernels that may then be coupled with the existing finite element framework, PyLith, and capitalizes upon the flexibility of the solver, data management, and time-stepping algorithms offered by PETSc. To demonstrate a characteristic example of coupled geophysical simulation devised in this manner, we present a model of a synthetic poroelastic environment, with and without the consideration of inertial effects, with the fluid initially represented as a single phase. Matrix displacement and fluid pressure serve as the desired unknowns, with the option for various model parameters to be represented as dependent variables of the central unknowns. While independent of PyLith, this model also serves to showcase the adaptability of physics kernels for synthetic forward modeling. In addition, we seek to expand the base case to demonstrate the impact of modeling the fluid as a single compressible phase versus a single incompressible phase. As a goal, we also seek to include multiphase fluid modeling, as well as capillary effects.
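The design idea of pointwise kernels can be sketched independently of PETSc: each physical process contributes a residual term evaluated at a quadrature point, and a generic loop assembles them without knowing which terms are active. The kernel names, field values, and coefficients below are illustrative, not the PETSc or PyLith API.

```python
# Pointwise-kernel sketch: each process is an independent function of the
# local field values; assembly is agnostic to which kernels are registered.
def assemble_residual(points, kernels):
    """Sum kernel contributions at each quadrature point."""
    return [sum(k(pt) for k in kernels) for pt in points]

# Hypothetical field values at two quadrature points.
pts = [{"dp_dt": 1.0, "grad_p": 2.0}, {"dp_dt": 0.5, "grad_p": -1.0}]

kernels = [
    lambda pt: 1e-3 * pt["dp_dt"],    # storage term: compressibility * dp/dt
    lambda pt: -1e-2 * pt["grad_p"],  # Darcy-like term: -mobility * grad p (sign schematic)
]

res = assemble_residual(pts, kernels)
```

Swapping the fluid model (compressible vs. incompressible) then amounts to replacing one kernel in the list rather than rewriting the coupled equation.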
Wienhöfer, J.; Zehe, E.
2012-04-01
Rapid lateral flow processes via preferential flow paths are widely accepted to play a key role in rainfall-runoff response in temperate humid headwater catchments. A quantitative description of these processes, however, is still a major challenge in hydrological research, not least because detailed information about the architecture of subsurface flow paths is often impossible to obtain at a natural site without disturbing the system. Our study combines physically based modelling and field observations with the objective of better understanding how flow network configurations influence the hydrological response of hillslopes. The system under investigation is a forested hillslope with a small perennial spring at the study area Heumöser, a headwater catchment of the Dornbirnerach in Vorarlberg, Austria. In-situ point measurements of field-saturated hydraulic conductivity and dye staining experiments at the plot scale revealed that shrinkage cracks and biogenic macropores function as preferential flow paths in the fine-textured soils of the study area, and these preferential flow structures were active in fast subsurface transport of artificial tracers at the hillslope scale. For modelling of water and solute transport, we followed the approach of implementing preferential flow paths as spatially explicit structures of high hydraulic conductivity and low retention within the 2D process-based model CATFLOW. Many potential configurations of the flow path network were generated as realisations of a stochastic process informed by macropore characteristics derived from the plot-scale observations. Together with different realisations of soil hydraulic parameters, this approach results in a Monte Carlo study. The model setups were used for short-term simulation of a sprinkling and tracer experiment, and the results were evaluated against measured discharges and tracer breakthrough curves. Although both criteria were taken for model evaluation, still several model setups
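The stochastic generation of flow-path realisations can be sketched as follows: superimpose randomly placed high-conductivity structures on a low-conductivity matrix, and draw many such realisations for the Monte Carlo ensemble. The grid dimensions, conductivities, and macropore statistics below are hypothetical stand-ins for the plot-scale observations, not CATFLOW inputs.

```python
import random

def generate_macropore_field(nx, nz, n_macropores, k_matrix, k_macro, seed=None):
    """One realisation of a 2-D conductivity field: vertical macropores of
    random depth superimposed on a homogeneous low-K soil matrix."""
    rng = random.Random(seed)
    field = [[k_matrix for _ in range(nx)] for _ in range(nz)]
    for _ in range(n_macropores):
        col = rng.randrange(nx)          # lateral position
        depth = rng.randint(1, nz)       # length drawn from (hypothetical) plot-scale statistics
        for row in range(depth):
            field[row][col] = k_macro
    return field

# Monte Carlo ensemble: many equally plausible flow-network configurations.
ensemble = [generate_macropore_field(50, 20, 30, 1e-7, 1e-3, seed=s)
            for s in range(100)]
```

Each ensemble member would then be run through the process model and scored against measured discharge and tracer breakthrough.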
Directory of Open Access Journals (Sweden)
Nick eBouskill
2012-10-01
Full Text Available Trait-based microbial models show clear promise as tools to represent the diversity and activity of microorganisms across ecosystem gradients. These models parameterize specific traits that determine the relative fitness of an 'organism' in a given environment, and represent the complexity of biological systems across temporal and spatial scales. In this study we introduce a microbial community trait-based modeling framework (MicroTrait) focused on nitrification (MicroTrait-N) that represents ammonia-oxidizing bacteria (AOB), ammonia-oxidizing archaea (AOA) and nitrite-oxidizing bacteria (NOB) using traits related to enzyme kinetics and physiological properties. We used this model to predict nitrifier diversity, ammonia (NH3) oxidation rates and nitrous oxide (N2O) production across pH, temperature and substrate gradients. Predicted nitrifier diversity was predominantly determined by temperature and substrate availability; the latter was strongly influenced by pH. The model predicted that transient N2O production rates are maximized by a decoupling of the AOB and NOB communities, resulting in an accumulation of nitrite and its detoxification to N2O by AOB. However, cumulative N2O production (over six-month simulations) is maximized in a system where the relationship between AOB and NOB is maintained. When the reactions uncouple, the AOB become unstable and biomass declines rapidly, resulting in decreased NH3 oxidation and N2O production. We evaluated this model against site-level chemical datasets from the interior of Alaska and accurately simulated NH3 oxidation rates and the relative ratio of AOA:AOB biomass. The predicted community structure and activity indicate (a) that parameterization of a small number of traits may be sufficient to broadly characterize nitrifying community structure and (b) that changing decadal trends in climate and edaphic conditions could impact nitrification rates in ways that are not captured by extant biogeochemical models.
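The AOB/NOB coupling can be sketched as a minimal two-step reaction model: AOB oxidize NH3 to nitrite, NOB oxidize nitrite to nitrate, and each guild grows on the substrate it oxidizes. The Monod-type rate form is a generic trait-based stand-in; the Vmax, Km, and yield values are illustrative, not MicroTrait-N parameters.

```python
def step(nh3, no2, b_aob, b_nob, dt=0.01):
    """One explicit-Euler step of a minimal two-guild nitrification model.
    Rate constants are hypothetical illustrations."""
    r1 = 2.0 * nh3 / (0.5 + nh3) * b_aob   # NH3 -> NO2 by AOB (Monod-type)
    r2 = 1.5 * no2 / (0.3 + no2) * b_nob   # NO2 -> NO3 by NOB (Monod-type)
    nh3 -= r1 * dt
    no2 += (r1 - r2) * dt                  # nitrite accumulates when guilds decouple
    b_aob += 0.1 * r1 * dt                 # growth proportional to substrate oxidized
    b_nob += 0.1 * r2 * dt
    return nh3, no2, b_aob, b_nob

state = (10.0, 0.0, 0.1, 0.1)              # initial NH3, NO2, AOB, NOB
for _ in range(1000):
    state = step(*state)
```

In such a sketch, suppressing the NOB rate (e.g. setting b_nob low) makes nitrite pool up, mimicking the decoupled regime the abstract associates with transient N2O pulses.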
Energy Technology Data Exchange (ETDEWEB)
Hall, Alex [University of California, Los Angeles, CA (United States). Joint Institute for Regional Earth System Science and Engineering
2013-07-24
Stratocumulus and shallow cumulus clouds in subtropical oceanic regions (e.g., the Southeast Pacific) cover thousands of square kilometers and play a key role in regulating global climate (e.g., Klein and Hartmann, 1993). Numerical modeling is an essential tool for studying these clouds in regional and global systems, but the current generation of climate and weather models has difficulties in representing them in a realistic way (e.g., Siebesma et al., 2004; Stevens et al., 2007; Teixeira et al., 2011). While numerical models resolve the large-scale flow, subgrid-scale parameterizations are needed to estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation), which have significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. To represent the contribution of these fine-scale processes to the resolved scale, climate models use various parameterizations, which largely govern simulated low-cloud dynamics and are therefore the major sources of error or approximation in their representation. In this project, we aim to 1) improve our understanding of the physical processes in thermal circulation and cloud formation, 2) examine the performance and sensitivity of various parameterizations in a regional weather model (the Weather Research and Forecasting model; WRF), and 3) develop, implement, and evaluate an advanced boundary layer parameterization in the regional model to better represent stratocumulus, shallow cumulus, and their transition. The project therefore comprises three corresponding studies. We find that the mean diurnal cycle is sensitive to the model domain in ways that reveal different contributions originating from the Southeast Pacific land masses. The experiments suggest that diurnal variations in circulations and thermal structures over this region are influenced by convection over the Peruvian sector of the Andes cordillera, while
Directory of Open Access Journals (Sweden)
Christopher D. Taylor
2018-01-01
Full Text Available The computational modeling of corrosion inhibitors at the level of molecular interactions has been pursued for decades, and recent developments are allowing increasingly realistic models to be developed for inhibitor–inhibitor, inhibitor–solvent and inhibitor–metal interactions. At the same time, there remains a need for simplistic models to be used for the purpose of screening molecules for proposed inhibitor performance. Herein, we apply a reductionist model for metal surfaces consisting of a metal cation with hydroxide ligands and use quantum chemical modeling to approximate the free energy of adsorption for several imidazoline class candidate corrosion inhibitors. The approximation is made using the binding energy and the partition coefficient. As in some previous work, we consider different methods for incorporating solvent and reference systems for the partition coefficient. We compare the findings from this short study with some previous theoretical work on similar systems. The binding energies for the inhibitors to the metal hydroxide clusters are found to be intermediate to the binding energies calculated in other work for bare metal vs. metal oxide surfaces. The method is applied to copper, iron, aluminum and nickel metal systems.
Guo, Yang; Lin, Wenfang; Yu, Shuyang; Ji, Yang
2018-01-01
Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs) and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges have significant influences on traditional fault diagnostic models: one is that extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too much on expert knowledge; the other is that imbalance pervasively exists between faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis for CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-layer outer LSTMs, with an under-sampling policy and a weighted cost-sensitive loss function. Experiments are conducted on the PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods. PMID:29621131
Directory of Open Access Journals (Sweden)
Zhenyu Wu
2018-04-01
Full Text Available Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs) and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges have significant influences on traditional fault diagnostic models: one is that extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too much on expert knowledge; the other is that imbalance pervasively exists between faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis for CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-layer outer LSTMs, with an under-sampling policy and a weighted cost-sensitive loss function. Experiments are conducted on the PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods.
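The two imbalance-handling ingredients named in the abstract, under-sampling and a weighted cost-sensitive loss, can be sketched independently of the network architecture. The class weights and the binary-cross-entropy form below are illustrative choices, not the paper's exact loss.

```python
import math
import random

def undersample(samples, labels, seed=0):
    """Randomly drop majority-class (normal, label 0) samples until the
    classes are balanced with the faulty class (label 1)."""
    rng = random.Random(seed)
    faulty = [i for i, y in enumerate(labels) if y == 1]
    normal = [i for i, y in enumerate(labels) if y == 0]
    keep = faulty + rng.sample(normal, len(faulty))
    return [samples[i] for i in keep], [labels[i] for i in keep]

def weighted_bce(p, y, w_fault=5.0, w_normal=1.0):
    """Cost-sensitive binary cross-entropy: a missed fault (y=1) is
    penalized more heavily than a false alarm. Weights are illustrative."""
    w = w_fault if y == 1 else w_normal
    return -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
```

In training, the sampler would rebalance each mini-batch while the weighted loss further biases the gradient toward the rare faulty class.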
Cheng, Lei; Li, Yizeng; Grosh, Karl
2013-08-15
An approximate boundary condition is developed in this paper to model fluid shear viscosity at the boundaries of a coupled fluid-structure system. The effect of shear viscosity is approximated by a correction term to the inviscid boundary condition, written in terms of second-order in-plane derivatives of pressure. Both thin and thick viscous boundary layer approximations are formulated; the latter subsumes the former. These approximations are used to develop a variational formulation, upon which a viscous finite element method (FEM) model is based, requiring only minor modifications to the boundary integral contributions of an existing inviscid FEM model. Since this FEM formulation has only one degree of freedom for pressure, it holds a great computational advantage over the conventional viscous FEM formulation, which requires discretization of the full set of linearized Navier-Stokes equations. The results from the thick viscous boundary layer approximation are found to be in good agreement with the prediction from a Navier-Stokes model. When applicable, the thin viscous boundary layer approximation also gives accurate results with computational simplicity compared to the thick boundary layer formulation. Direct comparisons of simulation results using the boundary layer approximations and a full, linearized Navier-Stokes model are made and used to evaluate the accuracy of the approximate technique. Guidelines are given for the parameter ranges over which the thick and thin boundary layer approximations can be accurately applied to a fluid-structure interaction problem.
Dunne, J. P.; John, J. G.; Stock, C. A.
2013-12-01
The world's major Eastern Boundary Currents (EBC), such as the California Current Large Marine Ecosystem (CCLME), are critically important areas for global fisheries. Computational limitations have divided past EBC modeling into two types: high-resolution regional approaches that resolve the strong meso-scale structures involved, and coarse global approaches that represent the large-scale context for EBCs but only crudely resolve the largest scales of their manifestation. These latter global studies have illustrated the complex mechanisms involved in the climate change and acidification response in these regions, with the CCLME response dominated not by local adjustments but by large-scale reorganization of ocean circulation through remote forcing of water-mass supply pathways. While qualitatively illustrating the limitations of regional high-resolution studies in long-term projection, these studies lack the ability to robustly quantify change because of the inability of these models to represent the baseline meso-scale structures of EBCs. In the present work, we compare current-generation coarse-resolution (one degree) and prototype next-generation high-resolution (1/10 degree) Earth System Models (ESMs) from NOAA's Geophysical Fluid Dynamics Laboratory in representing the four major EBCs. We review the long-known temperature biases that the coarse models suffer in being unable to represent the timing and intensity of upwelling-favorable winds, along with their lack of representation of the observed high chlorophyll and biological productivity resulting from this upwelling. In promising contrast, we show that the high-resolution prototype is capable of representing not only the overall meso-scale structure in physical and biogeochemical fields, but also the appropriate offshore extent of temperature anomalies and other EBC characteristics. Results for chlorophyll were mixed; while high-resolution chlorophyll in EBCs was strongly enhanced over the coarse resolution
De Brún, Aoife; McCarthy, Mary; McKenzie, Kenneth; McGloin, Aileen
2015-01-01
This study examined the Irish media discourse on obesity by employing the Common Sense Model of Illness Representations. A media sample of 368 transcripts was compiled from newspaper articles (n = 346), radio discussions (n = 5), and online news articles (n = 17) on overweight and obesity from the years 2005, 2007, and 2009. Using the Common Sense Model and framing theory to guide the investigation, a thematic analysis was conducted on the media sample. Analysis revealed that the behavioral dimensions of diet and activity levels were the most commonly cited causes of and interventions in obesity. The advertising industry was blamed for obesity, and there were calls for increased government action to tackle the issue. Physical illness and psychological consequences of obesity were prevalent in the sample, and analysis revealed that the economy, regardless of its state, was blamed for obesity. These results are discussed in terms of expectations of audience understandings of the issue and the implications of these dominant portrayals and framings on public support for interventions. The article also outlines the value of a qualitative analytical framework that combines the Common Sense Model and framing theory in the investigation of illness narratives.
Mondeel, Thierry D G A; Crémazy, Frédéric; Barberis, Matteo
2018-02-01
Multi-scale modeling of biological systems requires integration of various information about genes and proteins that are connected together in networks. Spatial, temporal and functional information is available; however, it is still a challenge to retrieve and explore this knowledge in an integrated, quick and user-friendly manner. We present GEMMER (GEnome-wide tool for Multi-scale Modelling data Extraction and Representation), a web-based data-integration tool that facilitates high quality visualization of physical, regulatory and genetic interactions between proteins/genes in Saccharomyces cerevisiae. GEMMER creates network visualizations that integrate information on function, temporal expression, localization and abundance from various existing databases. GEMMER supports modeling efforts by effortlessly gathering this information and providing convenient export options for images and their underlying data. GEMMER is freely available at http://gemmer.barberislab.com. Source code, written in Python, JavaScript library D3js, PHP and JSON, is freely available at https://github.com/barberislab/GEMMER. M.Barberis@uva.nl. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.
Representation of Solar Capacity Value in the ReEDS Capacity Expansion Model
Energy Technology Data Exchange (ETDEWEB)
Sigrin, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sullivan, P. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ibanez, E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2014-03-01
An important issue for electricity system operators is the estimation of renewables' capacity contributions to reliably meeting system demand, or their capacity value. While the capacity value of thermal generation can be estimated easily, assessment of wind and solar requires a more nuanced approach due to the resource variability. Reliability-based methods, particularly assessment of the Effective Load-Carrying Capacity (ELCC), are considered to be the most robust and widely accepted techniques for addressing this resource variability. This report compares estimates of solar PV capacity value by the Regional Energy Deployment System (ReEDS) capacity expansion model against two sources. The first comparison is against values published by utilities or other entities for known electrical systems at existing solar penetration levels. The second comparison is against a time-series ELCC simulation tool for high renewable penetration scenarios in the Western Interconnection. Results from the ReEDS model are found to compare well in both comparisons, despite being resolved at a super-hourly temporal resolution. Two results are relevant for other capacity-based models that use a super-hourly resolution to model solar capacity value. First, solar capacity value should not be parameterized as a static value, but must decay with increasing penetration. This is because -- for an afternoon-peaking system -- as solar penetration increases, the system's peak net load shifts to later in the day -- when solar output is lower. Second, long-term planning models should determine system adequacy requirements in each time period in order to approximate loss-of-load-probability (LOLP) calculations. Within the ReEDS model we resolve these issues by using a capacity value estimate that varies by time-slice. Within each time period the net load and the shadow price on ReEDS's planning reserve constraint signal the relative importance of additional firm capacity.
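The penetration-dependent decay can be demonstrated with a simple proxy: take capacity value as the mean per-unit solar output during the top net-load hours, i.e. the time slices that drive the planning reserve constraint. The load and solar shapes below are synthetic illustrations, and this proxy is a sketch, not the ReEDS or ELCC calculation.

```python
def capacity_value(load, solar_shape, solar_cap, top_n=3):
    """Proxy capacity value: mean solar output (per unit capacity)
    during the top-n net-load hours."""
    net = [l - solar_cap * s for l, s in zip(load, solar_shape)]
    peak = sorted(range(len(net)), key=net.__getitem__, reverse=True)[:top_n]
    return sum(solar_shape[h] for h in peak) / top_n

# Synthetic afternoon-peaking load (peak at hour 17) and midday solar shape.
load = [40 + max(0, 30 - 5 * abs(h - 17)) for h in range(24)]
solar = [max(0.0, 1 - abs(h - 12) / 6) for h in range(24)]

cv_low = capacity_value(load, solar, solar_cap=0)    # low penetration
cv_high = capacity_value(load, solar, solar_cap=40)  # high penetration
```

As solar capacity grows, the net-load peak shifts into the evening where solar output is small, so cv_high comes out below cv_low, which is the decay the report describes.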
Umakanth, U.
2015-11-07
The aim of the study is to evaluate the performance of the regional climate model (RegCM) version 4.4 over the south Asian CORDEX domain in simulating the seasonal mean and monsoon intraseasonal oscillations (MISOs) during the Indian summer monsoon. Three combinations of the Grell (G) and Emanuel (E) cumulus schemes, namely RegCM-EG, RegCM-EE and RegCM-GE, have been used. The model is initialized on 1 January 2000 for a 13-year continuous simulation at a spatial resolution of 50 km. The models reasonably simulate the seasonal mean low-level wind pattern, though they differ in simulating the mean precipitation pattern. All models produce a dry bias in precipitation over the Indian land region, except RegCM-EG, where a relatively low dry bias is observed. On the seasonal scale, the performance of RegCM-EG is closest to observation, though it fails at intraseasonal time scales. In the wavenumber-frequency spectrum, the observed peak in zonal wind (850 hPa) at the 40–50 day scale is captured by all models with a slight change in amplitude; however, the 40–50 day peak in precipitation is completely absent in RegCM-EG. The space-time characteristics of MISOs are better captured by RegCM-EE than by RegCM-GE, although it fails to show the eastward propagation of convection across the Maritime Continent. Except for RegCM-EE, all models completely underestimate the moisture advection from the equatorial Indian Ocean onto the Indian land region during the life cycle of MISOs. The characteristics of MISOs are studied for strong (SM) and weak (WM) monsoon years and the differences in model performance are analyzed. The wavelet spectrum of rainfall over central India indicates that SM years are dominated by high-frequency oscillations (period <20 days), whereas somewhat longer periods (>30 days), alongside the dominant short periods (<20 days), are observed during WM years. During SM, RegCM-EE is dominated by high-frequency oscillations (period <20 days), whereas in WM, RegCM-EE is dominated by periods >20
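The wavenumber-frequency diagnostic used above is, in essence, a 2-D Fourier transform of a (time, longitude) anomaly field, with eastward- and westward-propagating power landing in different quadrants. The sketch below is the generic space-time spectral analysis on a synthetic eastward wave, not the authors' exact processing chain.

```python
import numpy as np

def wavenumber_frequency_spectrum(field):
    """Space-time power spectrum of an (ntime, nlon) anomaly field.
    Frequency varies along axis 0, zonal wavenumber along axis 1."""
    anomaly = field - field.mean(axis=0)      # remove the time mean at each longitude
    coeffs = np.fft.fft2(anomaly) / anomaly.size
    return np.abs(coeffs) ** 2

# Synthetic eastward-propagating wave: zonal wavenumber 1, period 45 steps.
t = np.arange(360)[:, None]
x = np.arange(144)[None, :]
field = np.cos(2 * np.pi * (x / 144 - t / 45))
power = wavenumber_frequency_spectrum(field)
```

A 40–50 day MISO signal would show up as a power concentration at low zonal wavenumbers and the corresponding frequency band of such a spectrum.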
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
Hoffman, Ross N.
2001-01-01
We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.
Miguel, Isabel; Valentim, Joaquim Pires; Carugati, Felice
2013-01-01
Within the theoretical framework of social representations theory, a substantial body of literature has advocated and shown that, as interpretative systems and forms of knowledge concurring in the construction of a social reality, social representations are guides for action, influencing behaviours and social relations. Based on this assumption,…
Bengtson, Barbara J.
2013-01-01
Understanding the linear relationship of numbers is essential for doing practical and abstract mathematics throughout education and everyday life. There is evidence that number line activities increase learners' number sense, improving the linearity of mental number line representations (Siegler & Ramani, 2009). Mental representations of…
Singh, Michael V.
2018-01-01
In recent years mentorship has become a popular 'solution' for struggling boys of color and has led to the recruitment of more male of color teachers. While not arguing against the merits of mentorship, this article critiques what the author deems 'corrective representations.' Corrective representations are the imagined embodiment of proper and…
Energy Technology Data Exchange (ETDEWEB)
Campos, Tarcisio P.R.; Andrade, Joao Paulo Lopes de; Costa, Igor Temponi; Teixeira, Cleuza H. [Minas Gerais Univ., Belo Horizonte, MG (Brazil). Programa de Pos-graduacao em Ciencias e Tecnicas Nucleares]. E-mail: campos@nuclear.ufmg
2005-07-01
Animal models have been used in experimentation with ionizing radiation. The evaluation of the energy absorbed per unit tissue mass in vivo, transported by nuclear particles, is a task to be performed before experimentation. Stochastic or deterministic methodology can be applied; however, the dosimetric protocols applied in radiotherapy centers cannot be applied directly, due to the small geometry and chemical composition of the animal, which are distinct from those of humans. The present article addresses a method in development that will predict the dose distribution in the rabbit thorax based on the solution of the transport phenomena in a voxel model. The model will be applied to simulate a seed-implant experiment on a rabbit. Herein, the construction of a three-dimensional voxel model, anthropomorphic and anthropometric with respect to the rabbit, is presented. The model is assembled from a set of computed tomography images of the rabbit. The computational phantom of the thorax starts with the digitization of the CT images, tissue definition, and color-image representation of each tissue and organ. The chemical composition and mass density of each tissue are evaluated from similar data presented in ICRU Report 44. To process the images, an in-house code, SISCODES, was used. The in vivo experiment that will be simulated is also described: an implant of five 1.6x2 mm seeds performed in a rabbit's liver. The perspective of this work is the application of the model in dosimetric studies predicting the dose distribution around the seeds implanted in in vivo experiments. (author)
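The voxel-phantom construction step, mapping each CT voxel to a tissue with an assigned composition and density, can be sketched as a simple Hounsfield-unit lookup. The thresholds, densities, and the sample CT slice below are illustrative; a real phantom would take tissue compositions and densities from ICRU Report 44, as the abstract notes.

```python
# Illustrative HU thresholds and mass densities (g/cm3); not ICRU-44 values.
TISSUES = [
    (-1000, -400, "lung", 0.26),
    (-400, 100, "soft tissue", 1.06),
    (100, 3000, "bone", 1.92),
]

def label_voxel(hu):
    """Map a CT number (Hounsfield units) to a (tissue, density) pair."""
    for lo, hi, name, rho in TISSUES:
        if lo <= hu < hi:
            return name, rho
    return "air", 0.0012

ct_slice = [          # one hypothetical CT slice, in Hounsfield units
    [-1005, -700, -50],
    [300, 20, -950],
]
phantom = [[label_voxel(hu) for hu in row] for row in ct_slice]
```

Stacking such labelled slices yields the 3-D voxel model over which the transport calculation is then solved.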
Mathematical representation of bolted-joint stiffness: A new suggested model
Energy Technology Data Exchange (ETDEWEB)
Haidar, Nawras; Obeed, Salwan; Jawad, Mohamed [College of Engineering, University of Babylon, Babel (Iraq)
2011-11-15
Joint member stiffness in a bolted connection directly influences the safety of a design with regard to both static and fatigue loading, as well as the prevention of separation in the connection. This work provides a new, simple model for computing the member stiffness in bolted connections for both fully and partially developed stress-envelope fields. The new model is built using a stress-distribution polynomial of third order. Finite element analysis (FEA) is performed for several joint geometries, and the results are used to estimate the analytical envelope angle in the proposed model that gives the best convergence between the compared results. An experimental effort is made to validate the accuracy of the suggested model. When the analytical results are compared with the FEA results and the experimental data, the maximum absolute percentage errors are found to be 2.69 and 14.69, respectively. A good agreement is also obtained when the analytical results are compared with other researchers' results.
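For context, the classic cone-frustum member-stiffness formula (the standard baseline that such models are usually compared against in machine-design texts) can be sketched as follows; this is not the third-order polynomial model proposed in the paper, and the parameter values in the test are illustrative:

```python
import math

def frustum_member_stiffness(E, d, t, alpha_deg=30.0, D=None):
    """Classic cone-frustum member stiffness for one frustum.

    E: Young's modulus (Pa), d: bolt diameter (m), t: frustum thickness (m),
    alpha_deg: pressure-envelope half angle (commonly taken as 30 degrees),
    D: washer-face diameter (defaults to the usual 1.5*d).
    Returns stiffness in N/m.
    """
    if D is None:
        D = 1.5 * d
    ta = math.tan(math.radians(alpha_deg))
    num = (2.0 * t * ta + D - d) * (D + d)
    den = (2.0 * t * ta + D + d) * (D - d)
    return math.pi * E * d * ta / math.log(num / den)
```

Since E enters only as a multiplier, stiffness scales linearly with the modulus, which is a quick sanity check on any implementation.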
Relating the Spherical representation of vocational interests to the HEXACO personality model
Holtrop, D.J.; Born, M.Ph.; de Vries, R.E.
2015-01-01
The present study extends previous research on interests-personality relations by comparing recent models of vocational interests (using the Personal Globe Inventory; PGI, Tracey, 2002) and personality (using the HEXACO-PI-R; Ashton, Lee, & de Vries, 2014) with each other. First, the structure of
Improving Conceptual Understanding and Representation Skills through Excel-Based Modeling
Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.
2018-01-01
The National Research Council framework for science education and the Next Generation Science Standards have created a need for additional research and development of curricula that are both technologically model-based and include engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental…
J. Keith Gilless; Jeremy S. Fried
1998-01-01
A fire behavior module was developed for the California Fire Economics Simulator version 2 (CFES2), a stochastic simulation model of initial attack on wildland fire used by the California Department of Forestry and Fire Protection. Fire rate of spread (ROS) and fire dispatch level (FDL) for simulated fires "occurring" on the same day are determined by making...
Representation of a common 3-pool compartment model for N turnover of ruminants
International Nuclear Information System (INIS)
Ulbrich, M.
1989-01-01
On the basis of an existing 3-pool compartment model for the N turnover of lactating ruminants, a method was elaborated for determining N turnover in non-lactating ruminants by measuring the ¹⁵N abundance in the NPN pool, without experimental measurement of the ¹⁵N abundance in the amino acid pool
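The label-transfer idea behind such compartment models can be sketched numerically; the pool sizes, fluxes, and Euler scheme below are illustrative assumptions, not the paper's calibration:

```python
def simulate_15n(Q, F, L0, dt=0.01, steps=5000):
    """Euler integration of a 15N label through an n-pool compartment model.

    Q[i]: (constant) N mass of pool i; F[i][j]: steady N flux from pool i
    to pool j; L0[i]: initial 15N excess in pool i. Label leaves a pool in
    proportion to that pool's enrichment e_i = L_i / Q_i.
    Returns the final label vector. All numbers are hypothetical.
    """
    L = list(L0)
    n = len(Q)
    for _ in range(steps):
        e = [L[i] / Q[i] for i in range(n)]  # enrichment of each pool
        dL = [0.0] * n
        for i in range(n):
            for j in range(n):
                if i != j:
                    x = F[i][j] * e[i] * dt  # label carried by flux i -> j
                    dL[j] += x
                    dL[i] -= x
        L = [L[i] + dL[i] for i in range(n)]
    return L
```

Because every transfer adds to one pool exactly what it removes from another, total label is conserved in a closed system, which makes the scheme easy to verify.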
Arens, A. Katrin; Morin, Alexandre J. S.
2017-01-01
This study illustrates an integrative psychometric framework to investigate two sources of construct-relevant multidimensionality in answers to the Self-Perception Profile for Children (SPPC). Using a sample of 2,353 German students attending Grades 3 to 6, we contrasted: (a) first-order versus hierarchical and bifactor models to investigate…
Exactly renormalizable model in quantum field theory. II. The physical-particle representation
Ruijgrok, Th.W.
1958-01-01
For the simplified model of quantum field theory discussed in a previous paper it is shown how the physical particles can be properly described by means of the so-called asymptotically stationary (a.s.) states. It is possible by formulating the theory in terms of these a.s. states to express it
Critical Source Area Delineation: The representation of hydrology in effective erosion modeling.
Fowler, A.; Boll, J.; Brooks, E. S.; Boylan, R. D.
2017-12-01
Despite decades of conservation and millions of conservation dollars, nonpoint source sediment loading associated with agricultural disturbance continues to be a significant problem in many parts of the world. Local and national conservation organizations are interested in targeting critical source areas for control strategy implementation. Currently, conservation practices are selected and located based on Revised Universal Soil Loss Equation (RUSLE) hillslope erosion modeling, and the Natural Resources Conservation Service will soon be transitioning to the Water Erosion Prediction Project (WEPP) model for the same purpose. We present an assessment of critical source areas targeted with RUSLE, WEPP, and a regionally validated hydrology model, the Soil Moisture Routing (SMR) model, to compare the location of critical areas for sediment loading and the effectiveness of control strategies. The three models are compared for the Palouse dryland cropping region of the inland northwest, with uncalibrated analyses of the Kamiache watershed using publicly available soils, land-use and long-term simulated climate data. Critical source areas were mapped, and the side-by-side comparison exposes the differences in the location and timing of runoff and erosion predictions. RUSLE results appear most sensitive to slope-driven processes associated with infiltration excess. SMR captured saturation-excess-driven runoff events located at the toe-slope position, while WEPP was able to capture both infiltration-excess and saturation-excess processes depending on soil type and management. A methodology is presented for down-scaling basin-level screening to the hillslope management scale for local control strategies. Information on the location of runoff and erosion, driven by the runoff mechanism, is critical for effective treatment and conservation.
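The RUSLE estimate referred to above is a simple product of empirical factors; a minimal sketch (the factor values in the test are illustrative, not calibrated for the Kamiache watershed):

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE average annual soil loss: A = R * K * LS * C * P.

    R: rainfall-runoff erosivity, K: soil erodibility,
    LS: slope length/steepness factor, C: cover-management factor,
    P: support-practice factor. Units follow the chosen factor tables.
    """
    return R * K * LS * C * P
```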
Sinusoidal Representation of Acoustic Signals
Honda, Masaaki
Sinusoidal representation of acoustic signals has been an important tool in speech and music processing, for tasks such as signal analysis, synthesis, and time-scale or pitch modification. It is applicable to arbitrary signals, which is an important advantage over other signal representations such as physical modeling of acoustic signals. In sinusoidal representation, acoustic signals are composed as sums of sinusoids (sine waves) with different amplitudes, frequencies and phases, based on the time-dependent short-time Fourier transform (STFT). This article describes the principles of acoustic signal analysis/synthesis based on a sinusoidal representation, with a focus on sine waves with rapidly varying frequency.
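A toy version of the analysis/synthesis idea, with fixed-frequency partials only (the rapidly varying frequencies the article focuses on require the STFT-based tracking it describes):

```python
import cmath
import math

def synth(partials, n, fs):
    """Sum-of-sinusoids synthesis.

    partials: list of (amplitude, frequency_hz, phase_rad) triples;
    n: number of samples; fs: sample rate in Hz.
    """
    return [sum(a * math.cos(2.0 * math.pi * f * t / fs + p)
                for a, f, p in partials)
            for t in range(n)]

def dominant_bin(x):
    """Return the DFT bin below N/2 with the largest magnitude
    (naive O(N^2) DFT, adequate for a short illustrative frame)."""
    n = len(x)
    mags = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]
    return max(range(n // 2), key=mags.__getitem__)
```

With fs = n, bin k corresponds directly to k Hz, so a synthesized 8 Hz partial should dominate bin 8 of the analysis.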
Room acoustics modeling using a point-cloud representation of the room geometry
DEFF Research Database (Denmark)
Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte
2013-01-01
Room acoustics modeling is usually based on the room geometry being parametrically described prior to a sound transmission calculation. This is a highly room-specific task and rather time-consuming if a complex geometry is to be described. Here, a run-time generic method for an arbitrary room...... geometry acquisition is presented. The method exploits the depth sensor of the Kinect device, which provides point-based information about a scanned room interior. After post-processing of the Kinect output data, a 3D point-cloud model of the room is obtained. Sound transmission between two selected points...... level of user immersion by a real-time acoustical simulation of dynamic scenes....
Dewaele, Hélène; Munier, Simon; Albergel, Clément; Planque, Carole; Laanaia, Nabil; Carrer, Dominique; Calvet, Jean-Christophe
2017-09-01
Soil maximum available water content (MaxAWC) is a key parameter in land surface models (LSMs). However, being difficult to measure, this parameter is usually uncertain. This study assesses the feasibility of using a 15-year (1999-2013) time series of satellite-derived low-resolution observations of leaf area index (LAI) to estimate MaxAWC for rainfed croplands over France. LAI interannual variability is simulated using the CO2-responsive version of the Interactions between Soil, Biosphere and Atmosphere (ISBA) LSM for various values of MaxAWC. The optimal value is then selected using (1) a simple inverse modelling technique, comparing simulated and observed LAI, and (2) a more complex method consisting of integrating observed LAI in ISBA through a land data assimilation system (LDAS) and minimising LAI analysis increments. The evaluation of the MaxAWC estimates from both methods is done using simulated annual maximum above-ground biomass (Bag) and straw cereal grain yield (GY) values from the Agreste French agricultural statistics portal, for 45 administrative units presenting a high proportion of straw cereals. Significant correlations between simulated and observed Bag and GY are found for up to 36 and 53 % of the administrative units for the inverse modelling and LDAS tuning methods, respectively. It is found that the LDAS tuning experiment gives more realistic values of MaxAWC and maximum Bag than the inverse modelling experiment. Using undisaggregated LAI observations leads to an underestimation of MaxAWC and maximum Bag in both experiments. Median annual maximum values of disaggregated LAI observations are found to correlate very well with MaxAWC.
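Method (1), the simple inverse-modelling step, amounts to a one-parameter search; a generic sketch in which `simulate` is a hypothetical stand-in for an ISBA run over candidate MaxAWC values:

```python
import math

def select_max_awc(candidates, observed_lai, simulate):
    """Pick the MaxAWC candidate whose simulated LAI series best matches
    the observed series under an RMSE criterion.

    `simulate(max_awc)` must return a LAI series of the same length as
    `observed_lai`; here it stands in for a full LSM run.
    """
    def rmse(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
    return min(candidates, key=lambda w: rmse(simulate(w), observed_lai))
```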
A lattice-model representation of continuous-time random walks
International Nuclear Information System (INIS)
Campos, Daniel; Mendez, Vicenc
2008-01-01
We report some ideas for constructing lattice models (LMs) as a discrete approach to the reaction-dispersal (RD) or reaction-random-walk (RRW) models. The analysis of a rather general class of Markovian and non-Markovian processes, from the point of view of their wavefront solutions, lets us show that in some regimes their macroscopic dynamics (front speed) turns out to be different from that predicted by classical reaction-diffusion equations, which are often used as a mean-field approximation to the problem. Hence, the convenience of a more general framework, such as that given by continuous-time random walks (CTRW), is claimed. Here we use LMs as a numerical approach in order to support that idea, while in previous works our discussion was restricted to analytical models. For the two specific cases studied here, we derive and analyze the mean-field expressions for our LMs. As a result, we are able to provide some links between the numerical and analytical approaches studied.
A lattice-model representation of continuous-time random walks
Energy Technology Data Exchange (ETDEWEB)
Campos, Daniel [School of Mathematics, Department of Applied Mathematics, University of Manchester, Manchester M60 1QD (United Kingdom); Mendez, Vicenc [Grup de Fisica Estadistica, Departament de Fisica, Universitat Autonoma de Barcelona, 08193 Bellaterra (Barcelona) (Spain)], E-mail: daniel.campos@uab.es, E-mail: vicenc.mendez@uab.es
2008-02-29
We report some ideas for constructing lattice models (LMs) as a discrete approach to the reaction-dispersal (RD) or reaction-random-walk (RRW) models. The analysis of a rather general class of Markovian and non-Markovian processes, from the point of view of their wavefront solutions, lets us show that in some regimes their macroscopic dynamics (front speed) turns out to be different from that predicted by classical reaction-diffusion equations, which are often used as a mean-field approximation to the problem. Hence, the convenience of a more general framework, such as that given by continuous-time random walks (CTRW), is claimed. Here we use LMs as a numerical approach in order to support that idea, while in previous works our discussion was restricted to analytical models. For the two specific cases studied here, we derive and analyze the mean-field expressions for our LMs. As a result, we are able to provide some links between the numerical and analytical approaches studied.
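For the Markovian case mentioned in the abstract, a CTRW on a 1D lattice with exponential waiting times can be simulated directly; this is only an illustrative sketch of the walk itself, not the authors' reaction-dispersal lattice models:

```python
import random

def ctrw_positions(n_walkers, t_max, rate=1.0, seed=1):
    """Continuous-time random walk on a 1D lattice.

    Waiting times are exponential with the given rate (the Markovian
    case); each jump moves one lattice site left or right with equal
    probability. Returns the final positions of all walkers at t_max.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n_walkers):
        t, x = 0.0, 0
        while True:
            t += rng.expovariate(rate)  # draw the next waiting time
            if t > t_max:
                break
            x += rng.choice((-1, 1))    # unbiased unit jump
        out.append(x)
    return out
```

For exponential waiting times the mean-squared displacement grows linearly in time (MSD ≈ rate · t), recovering ordinary diffusion, which the anomalous, non-Markovian regimes discussed in the paper deviate from.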
Energy Technology Data Exchange (ETDEWEB)
Elsworth, Derek [Pennsylvania State Univ., State College, PA (United States); Izadi, Ghazal [Pennsylvania State Univ., State College, PA (United States); Gan, Quan [Pennsylvania State Univ., State College, PA (United States); Fang, Yi [Pennsylvania State Univ., State College, PA (United States); Taron, Josh [US Geological Survey, Menlo Park, CA (United States); Sonnenthal, Eric [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-07-28
This work has investigated the roles of effective stress induced by changes in fluid pressure, temperature and chemistry in contributing to the evolution of permeability and induced seismicity in geothermal reservoirs. This work has developed continuum models [1] to represent the progress of seismicity during both stimulation [2] and production [3]. These methods have been used to resolve anomalous observations of induced seismicity at the Newberry Volcano demonstration project [4] through the application of modeling and experimentation. Later work then focused on the occurrence of late-stage seismicity induced by thermal stresses [5], including the codifying of the timing and severity of such responses [6]. Furthermore, mechanistic linkages between observed seismicity and the evolution of permeability have been developed using data from the Newberry project [7] and benchmarked against field injection experiments. Finally, discontinuum models [8] incorporating the roles of discrete fracture networks have been applied to represent stimulation and then thermal recovery for new arrangements of geothermal wells incorporating the development of flow manifolds [9], in order to increase thermal output and longevity in EGS systems.
Dynamic Floodplain representation in hydrologic flood forecasting using WRF-Hydro modeling framework
Gangodagamage, C.; Li, Z.; Maitaria, K.; Islam, M.; Ito, T.; Dhondia, J.
2016-12-01
Floods claim more lives and damage more property than any other category of natural disaster in the continental United States. A system that can demarcate local flood boundaries dynamically could help flood-prone communities prepare for, and even prevent, catastrophic flood events. Lateral distances from the centerline of the river to the right and left floodplains, for the water levels coming out of the models at each grid location, have not been properly integrated with the national hydrography dataset (NHDPlus). The NHDPlus dataset represents the stream network with feature classes such as rivers, tributaries, canals, lakes, ponds, dams, coastlines, and stream gages. It consists of approximately 2.7 million river reaches defining how surface water drains to the ocean. These river reaches have upstream and downstream nodes and basic parameters such as flow direction, drainage area, and reach slope. We modified an existing algorithm (Gangodagamage et al., 2007) to provide the lateral distance from the centerline of the river to the right and left floodplains for the flows simulated by models. Previous work produced floodplain boundaries for static river stages (i.e., a 3D metric: distance along the main stem, flow depth, lateral distance from the river centerline). Our new approach introduces the floodplain boundary for variable water levels at each reach with a fourth dimension, time. We use modeled flows from WRF-Hydro and demarcate the right and left lateral boundaries of inundation dynamically by appropriately mapping discharges into hydraulically corrected stages. Backwater effects from the mainstem on tributaries are considered, and proper corrections are applied for the tributary inundations. We obtained river stages by optimizing reach-level channel parameters using a newly developed streamflow routing algorithm. Non-uniform inundations are mapped at each NHDPlus reach (upstream and downstream nodes) and spatial interpolation is carried out on a
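The discharge-to-stage mapping step can be sketched with a Manning-type rating for a wide rectangular reach (hydraulic radius approximated by depth), inverted by bisection. The channel parameters here are hypothetical stand-ins for the optimized reach-level parameters described in the abstract:

```python
def stage_from_discharge(Q, B, n, S, h_max=50.0, tol=1e-6):
    """Invert Manning's equation for a wide rectangular reach to map a
    modeled discharge Q (m^3/s) to a stage h (m).

    B: channel width (m), n: Manning roughness, S: reach slope.
    Uses R ~ h (wide-channel approximation), so Q(h) is monotone in h
    and bisection converges.
    """
    def q(h):
        return (1.0 / n) * B * h * h ** (2.0 / 3.0) * S ** 0.5
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if q(mid) < Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A round trip through the forward rating recovers the original stage, which is a convenient correctness check.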
On Input Vector Representation for the SVR model of Reactor Core Loading Pattern Critical Parameters
International Nuclear Information System (INIS)
Trontl, K.; Pevec, D.; Smuc, T.
2008-01-01
Determination and optimization of the reactor core loading pattern is an important factor in nuclear power plant operation. The goal is to minimize the amount of enriched uranium (fresh fuel) and burnable absorbers placed in the core, while maintaining nuclear power plant operational and safety characteristics. The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an optimization algorithm and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. Recently, we proposed a new method for fast loading pattern evaluation based on a general robust regression model relying on state-of-the-art research in the field of machine learning. We employed the Support Vector Regression (SVR) technique. SVR is a supervised learning method in which model parameters are automatically determined by solving a quadratic optimization problem. The preliminary tests revealed good potential for applying the SVR method to fast and accurate reactor core loading pattern evaluation. However, some aspects of model development are still unresolved. The main objective of the work reported in this paper was to conduct additional tests and analyses required for full clarification of the SVR applicability for loading pattern evaluation. We focused our attention on the parameters defining the input vector, primarily its structure and complexity, and on the parameters defining the kernel functions. All the tests were conducted on the NPP Krsko reactor core, using the MCRAC code for the calculation of reactor core loading pattern critical parameters. The tested input vector structures did not influence the accuracy of the models, suggesting that the initially tested input vector, consisting of the number of IFBAs and the k-inf at the beginning of the cycle, is adequate. The influence of kernel function specific parameters (σ for RBF kernel
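The RBF kernel and its width parameter σ mentioned above can be illustrated with a small kernel regression. Kernel ridge regression is used here as a compact stand-in for SVR: both predict via a weighted sum of kernel evaluations at training points, but they minimize different losses, and this sketch has nothing to do with the MCRAC data:

```python
import math

def rbf(x, y, sigma):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def kernel_ridge_fit(X, y, sigma, lam=1e-6):
    """Solve (K + lam*I) alpha = y by Gaussian elimination with
    partial pivoting; K is the RBF Gram matrix of the training set."""
    n = len(X)
    A = [[rbf(X[i], X[j], sigma) + (lam if i == j else 0.0)
          for j in range(n)] + [y[i]] for i in range(n)]
    for c in range(n):                      # forward elimination
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    alpha = [0.0] * n
    for c in range(n - 1, -1, -1):          # back substitution
        s = sum(A[c][k] * alpha[k] for k in range(c + 1, n))
        alpha[c] = (A[c][n] - s) / A[c][c]
    return alpha

def kernel_predict(X, alpha, sigma, x):
    """Prediction: weighted sum of kernels centered at training points."""
    return sum(a * rbf(xi, x, sigma) for a, xi in zip(alpha, X))
```

Varying σ controls how far each training point's influence extends, which is exactly the kernel-parameter sensitivity the abstract investigates.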
Pike, Richard J.
2002-01-01
Terrain modeling, the practice of ground-surface quantification, is an amalgam of Earth science, mathematics, engineering, and computer science. The discipline is known variously as geomorphometry (or simply morphometry), terrain analysis, and quantitative geomorphology. It continues to grow through myriad applications to hydrology, geohazards mapping, tectonics, sea-floor and planetary exploration, and other fields. Dating nominally to the co-founders of academic geography, Alexander von Humboldt (1808, 1817) and Carl Ritter (1826, 1828), the field was revolutionized late in the 20th Century by the computer manipulation of spatial arrays of terrain heights, or digital elevation models (DEMs), which can quantify and portray ground-surface form over large areas (Maune, 2001). Morphometric procedures are implemented routinely by commercial geographic information systems (GIS) as well as specialized software (Harvey and Eash, 1996; Köthe and others, 1996; ESRI, 1997; Drzewiecki et al., 1999; Dikau and Saurer, 1999; Djokic and Maidment, 2000; Wilson and Gallant, 2000; Breuer, 2001; Guth, 2001; Eastman, 2002). The new Earth Surface edition of the Journal of Geophysical Research, specializing in surficial processes, is the latest of many publication venues for terrain modeling. This is the fourth update of a bibliography and introduction to terrain modeling (Pike, 1993, 1995, 1996, 1999) designed to collect the diverse, scattered literature on surface measurement as a resource for the research community. The use of DEMs in science and technology continues to accelerate and diversify (Pike, 2000a). New work appears so frequently that a sampling must suffice to represent the vast literature. This report adds 1636 entries to the 4374 in the four earlier publications. Forty-eight additional entries correct dead Internet links and other errors found in the prior listings. Chronicling the history of terrain modeling, many entries in this report predate the 1999 supplement
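A minimal example of the kind of DEM-based morphometric operator the bibliography covers, computing slope by central differences on a gridded elevation model:

```python
import math

def slope_deg(dem, i, j, cell):
    """Slope in degrees at interior cell (i, j) of a gridded DEM.

    dem: 2D list of elevations (rows = y, columns = x);
    cell: grid spacing in the same units as elevation.
    Uses central differences, so (i, j) must not be on the grid edge.
    """
    dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2.0 * cell)
    dzdy = (dem[i + 1][j] - dem[i - 1][j]) / (2.0 * cell)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))
```

On an inclined plane the result matches the analytic slope exactly, which makes the operator easy to validate before applying it to real DEMs.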
Energy Technology Data Exchange (ETDEWEB)
Ibrahim, B.; Karambiri, H. [Institut International d' Ingenierie de l' Eau et de l' Environnement (2iE), Ouagadougou 01 (Burkina Faso); Polcher, J. [Laboratoire de Meteorologie Dynamique du CNRS, Institut Pierre Simon Laplace, Paris Cedex 05 (France); Rockel, B. [Helmholtz-Zentrum Geesthacht Institute of Coastal Research/Group Regional Atmospheric Modeling, Geesthacht (Germany)
2012-09-15
The West African monsoon is one of the most challenging climate components to model. Five regional climate models (RCMs) were run over the West African region with two lateral boundary conditions: ERA-Interim re-analysis and simulations from two general circulation models (GCMs). Two sets of daily rainfall data were generated from these boundary conditions. These simulated rainfall data are analyzed here in comparison to daily rainfall data collected over a network of ten synoptic stations in Burkina Faso from 1990 to 2004. The analyses are based on a description of the rainy season through a number of its characteristics. It was found that the two sets of rainfall data produced with the two driving data sets present significant biases. The RCMs generally produce too-frequent low rainfall values (between 0.1 and 5 mm/day) and too-high extreme rainfalls (more than twice the observed values). The high frequency of low rainfall events in the RCMs induces shorter dry spells at rainfall thresholds of 0.1-1 mm/day. Altogether, there are large disagreements between the models on the simulated season duration and the annual rainfall amounts, but most striking are their differences in representing the distribution of rainfall intensity. It is remarkable that these conclusions are valid whether the RCMs are driven by the re-analysis or by the GCMs. For none of the analyzed rainy-season characteristics can a significant improvement in representation be found when the RCM is forced by the re-analysis, indicating that these deficiencies are intrinsic to the models. (orig.)
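One of the rainy-season characteristics compared above, dry-spell length at a given rainfall threshold, is straightforward to compute from a daily series; a sketch:

```python
def dry_spells(rain, threshold=1.0):
    """Lengths of consecutive dry days in a daily rainfall series.

    rain: daily totals (mm/day); a day is dry when rain < threshold.
    Returns the list of dry-spell lengths in chronological order.
    """
    spells, run = [], 0
    for r in rain:
        if r < threshold:
            run += 1          # extend the current dry spell
        elif run:
            spells.append(run)  # a wet day ends the spell
            run = 0
    if run:
        spells.append(run)      # spell reaching the end of the record
    return spells
```

Because RCMs that drizzle too often break long dry spells with spurious light-rain days, comparing such spell distributions at thresholds of 0.1-1 mm/day exposes exactly the bias the abstract describes.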
Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland
2011-01-01
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
Directory of Open Access Journals (Sweden)
James Matthew Tromans
Full Text Available Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
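The SOM building block mentioned in these abstracts can be sketched minimally. This 1-D map over scalar inputs is only illustrative and is far simpler than the four-layer VisNet hierarchy; the learning-rate and neighborhood schedules are common defaults, not the paper's settings:

```python
import math
import random

def train_som(samples, n_units, epochs=200, seed=0):
    """Train a minimal 1-D self-organising map on scalar inputs.

    Each input pulls its best-matching unit (BMU) and, with a Gaussian
    neighborhood weight, the BMU's topological neighbors. Learning rate
    and neighborhood radius decay linearly over the epochs.
    Returns the learned unit weights.
    """
    rng = random.Random(seed)
    data = list(samples)
    w = [rng.random() for _ in range(n_units)]
    for e in range(epochs):
        lr = 0.5 * (1.0 - e / epochs)
        radius = max(1.0, (n_units / 2.0) * (1.0 - e / epochs))
        rng.shuffle(data)
        for x in data:
            b = min(range(n_units), key=lambda i: abs(w[i] - x))  # BMU
            for i in range(n_units):
                h = math.exp(-((i - b) ** 2) / (2.0 * radius ** 2))
                w[i] += lr * h * (x - w[i])
    return w
```

Trained on two well-separated input clusters, the map develops units specialized for each cluster, the same clustering property the paper uses to explain the emergence of separate identity and expression representations.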
Haase, S.; Matthes, K. B.
2017-12-01
Changes in stratospheric ozone can trigger tropospheric circulation changes. In the Southern Hemisphere (SH), the observed shift of the Southern Annular Mode was attributed to the observed trend in lower stratospheric ozone. In the Northern Hemisphere (NH), a recent study showed that extremely low stratospheric ozone conditions during spring produce robust anomalies in the troposphere (zonal wind, temperature and precipitation). This could only be reproduced in a coupled chemistry climate model, indicating that chemical-dynamical feedbacks are also important in the NH. To further investigate the importance of interactive chemistry for surface climate, we conducted a set of experiments using NCAR's Community Earth System Model (CESM1) with the Whole Atmosphere Community Climate Model (WACCM) as the atmosphere component. WACCM contains a fully interactive stratospheric chemistry module in its standard configuration. It also allows for an alternative configuration, referred to as SC-WACCM, in which the chemistry (O3, NO, O, O2, CO2 and chemical and shortwave heating rates) is specified as a 2D field in the radiation code. A comparison of the interactive vs. the specified chemistry version enables us to evaluate the relative importance of interactive chemistry by systematically inhibiting the feedbacks between chemistry and dynamics. To diminish the effect of temporal interpolation when prescribing ozone, we use daily resolved zonal mean ozone fields for the specified chemistry run. Here, we investigate the differences in stratosphere-troposphere coupling between the interactive and specified chemistry simulations for the mainly chemically driven SH as well as for the mainly dynamically driven NH. We will especially consider years that are characterized by extremely low stratospheric ozone on the one hand and by large dynamical disturbances, i.e. Sudden Stratospheric Warmings, on the other hand.
Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing
Jeffery, C.; Reisner, J.
2005-12-01
Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process in which ice crystals grow by vapor deposition at the expense of super-cooled droplets is expected to be inhomogeneous in nature--some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]--and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on Damköhler number (Da)---the ratio of turbulent to evaporative time-scales. (3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as
Younesi, Erfan; Malhotra, Ashutosh; Gündel, Michaela; Scordis, Phil; Kodamullil, Alpha Tom; Page, Matt; Müller, Bernd; Springstubbe, Stephan; Wüllner, Ullrich; Scheller, Dieter; Hofmann-Apitius, Martin
2015-09-22
Despite the unprecedented and increasing amount of data, relatively little progress has been made in molecular characterization of mechanisms underlying Parkinson's disease. In the area of Parkinson's research, there is a pressing need to integrate various pieces of information into a meaningful context of presumed disease mechanism(s). Disease ontologies provide a novel means for organizing, integrating, and standardizing the knowledge domains specific to disease in a compact, formalized and computer-readable form and serve as a reference for knowledge exchange or systems modeling of disease mechanism. The Parkinson's disease ontology was built according to the life cycle of ontology building. Structural, functional, and expert evaluation of the ontology was performed to ensure the quality and usability of the ontology. A novelty metric has been introduced to measure the gain of new knowledge using the ontology. Finally, a cause-and-effect model was built around PINK1 and two gene expression studies from the Gene Expression Omnibus database were re-annotated to demonstrate the usability of the ontology. The Parkinson's disease ontology with a subclass-based taxonomic hierarchy covers the broad spectrum of major biomedical concepts from molecular to clinical features of the disease, and also reflects different views on disease features held by molecular biologists, clinicians and drug developers. The current version of the ontology contains 632 concepts, which are organized under nine views. The structural evaluation showed the balanced dispersion of concept classes throughout the ontology. The functional evaluation demonstrated that the ontology-driven literature search could gain novel knowledge not present in the reference Parkinson's knowledge map. The ontology was able to answer specific questions related to Parkinson's when evaluated by experts. Finally, the added value of the Parkinson's disease ontology is demonstrated by ontology-driven modeling of PINK1
Knowledge Representation: A Brief Review.
Vickery, B. C.
1986-01-01
Reviews different structures and techniques of knowledge representation: structure of database records and files, data structures in computer programming, syntactic and semantic structure of natural language, knowledge representation in artificial intelligence, and models of human memory. A prototype expert system that makes use of some of these…
SEP-induced activity and its thermographic cortical representation in a murine model.
Hoffmann, Klaus-Peter; Ruff, Roman; Kirsch, Matthias
2013-06-01
This article is a methodical report on the generation of reproducible changes in brain activity in a murine model. Somatosensory evoked potentials (SEPs) are used to generate synchronized cortical activity. After electrical stimulation of the mice's forelimbs, the potentials were recorded directly from the cortex with a flexible thin-film polyimide electrode structure. Each registration included a simultaneous recording from both hemispheres and was repeated four times to reproduce and compare the results. The SEPs in the murine model were shown to generate a very stable signal. The latency of the second positive wave (P2 wave) ranged between 16 and 19 ms, and the N1-P2 amplitude ranged between 39 and 48 µV. In addition, the temperature distribution of the cortex was acquired using infrared thermography. Surface cortical temperature changed during electrical stimulation without a clear hemispheric correlation. These initial results could be a step toward a better understanding of the different synchronized cortical activities and of basic methods for evaluating the various mathematical algorithms used to detect them.
Cai, X.; Riley, W. J.; Zhu, Q.
2017-12-01
Deforestation causes a series of changes to the climate, water, and nutrient cycles. Employing a state-of-the-art earth system model—ACME (Accelerated Climate Modeling for Energy), we comprehensively investigate the impacts of deforestation on these processes. We first assess the performance of the ACME Land Model (ALM) in simulating runoff, evapotranspiration, albedo, and plant productivity at 42 FLUXNET sites. The single column mode of ACME is then used to examine climate effects (temperature cooling/warming) and responses of runoff, evapotranspiration, and nutrient fluxes to deforestation. This approach separates local effects of deforestation from global circulation effects. To better understand the deforestation effects in a global context, we use the coupled (atmosphere, land, and slab ocean) mode of ACME to demonstrate the impacts of deforestation on global climate, water, and nutrient fluxes. Preliminary results showed that the land component of ACME has advantages in simulating these processes and that local deforestation has potentially large impacts on runoff and atmospheric processes.
Hirose, Y; Sasaki, Y; Kinoshita, A
2001-01-01
We have previously reported the access control mechanism and audit strategy of the "patient-doctor relation and clinical situation at the point-of-care" model with a multi-axial access control matrix (ACM). This mechanism overcomes the deficit of the ACM with respect to data accessibility but does not resolve the representation of staff affiliations and/or plural memberships in the complex real world. Care groups within a department and inter-departmental clinical teams play a significant clinical role but also spend a great amount of time and money in the hospital. The impact of human resource assignment and the cost of such stakeholders on hospital management is therefore huge, so they should be accurately treated in the hospital information system. However, the multi-axial ACM has problems representing staff groups through static parameters such as department/license, because staff belong to a group rather temporarily and/or a medical staff member may belong to plural groups. As a solution, we have designed and implemented a "cascading staff-group authoring" method with the "relation and situation" model and the multi-axial ACM. In this mechanism, (i) a system administrator certifies a "group chief certifying person" according to the request and authorization of the department director, (ii) the "group chief certifying person" certifies "group chief(s)", and (iii) the "group chief" recruits members from the medical staff and at the same time decides the profit distribution policy of the group. This enables medical staff to access the EMR according to the role played, whether as a department staff member or as a group member. This solution has worked successfully over the past few years. It provides end-users with a flexible, on-demand staff-group authoring environment using a simple human-interfaced tool, without security breaches and without system administration cost. In addition, profit and cost distribution is clarified among departments and
Representation of radiative strength functions within a practical model of cascade gamma decay
Energy Technology Data Exchange (ETDEWEB)
Vu, D. C., E-mail: vuconghnue@gmail.com; Sukhovoj, A. M., E-mail: suchovoj@nf.jinr.ru; Mitsyna, L. V., E-mail: mitsyna@nf.jinr.ru; Zeinalov, Sh., E-mail: zeinal@nf.jinr.ru [Joint Institute for Nuclear Research (Russian Federation); Jovancevic, N., E-mail: nikola.jovancevic@df.uns.ac.rs; Knezevic, D., E-mail: david.knezevic@df.uns.ac.rs; Krmar, M., E-mail: krmar@df.uns.ac.rs [University of Novi Sad, Department of Physics, Faculty of Sciences (Serbia); Dragic, A., E-mail: dragic@ipb.ac.rs [Institute of Physics Belgrade (Serbia)
2017-03-15
A practical model developed at the Joint Institute for Nuclear Research (JINR, Dubna) to describe the cascade gamma decay of neutron resonances makes it possible to determine simultaneously, from an approximation of the intensities of two-step cascades, the parameters of nuclear level densities and the partial widths with respect to the emission of nuclear-reaction products. The number of phenomenological ideas used is minimized in the model version considered in the present study. An analysis of new results confirms what was obtained earlier for the dependence of the dynamics of the interaction of fermion and boson nuclear states on the nuclear shape. From the ratio of the level densities for excitations of the vibrational and quasiparticle types, it also follows that this interaction manifests itself in the region around the neutron binding energy and is probably different in nuclei that have different parities of nucleons.
International Nuclear Information System (INIS)
Smith, Curtis L.; Prescott, Steven; Kvarfordt, Kellie; Sampath, Ram; Larson, Katie
2015-01-01
Early in 2013, researchers at the Idaho National Laboratory outlined a technical framework to support the implementation of state-of-the-art probabilistic risk assessment to predict the safety performance of advanced small modular reactors. From that vision of the advanced framework for risk analysis, specific tasks have been underway to implement the framework. This report discusses the current development of several tasks related to the framework implementation, including a discussion of a 3D physics engine that represents the motion of objects (including collision and debris modeling), cloud-based analysis tools such as a Bayesian-inference engine, and scenario simulations. These tasks were performed during 2015 as part of the technical work associated with the Advanced Reactor Technologies Program.
Model representation of the ambient electron density distribution in the middle atmosphere
Ramanamurty, Y. V.
1989-01-01
While the rocket-borne Langmuir probe experiments of the University of Illinois at midlatitude revealed the existence of a permanent D-region turning point (DTP), similar measurements over the Thumba equatorial station did not clearly bring out this daytime feature. Moreover, the calibration constant (the ratio of electron density to the current drawn by the Langmuir probe) increased with height (in the 70 to 100 km region) in the midlatitude observations, whereas the recent measurements over Thumba showed a decrease up to about 90 km followed by an increase above 90 km. Secondly, there is the problem of reconciling the station-oriented observations from the COSPAR family with the ground-based radio propagation measurements from the URSI family. Thirdly, new information from the Winter in Northern Europe (WINE) campaign and from the USSR is available, calling for its incorporation into any global model such as the IRI. The results of investigating these aspects are presented.
Effects of the Representation of Convection on the Modelling of Hurricane Tomas (2010)
Directory of Open Access Journals (Sweden)
Irene Marras
2017-01-01
The cumulus parameterization is widely recognised as a crucial factor in tropical meteorology: this paper intends to shed further light on the effects of convection parameterization on numerical predictions of tropical cyclones in the "grey zone" (10–1 km grid spacing). Ten experiments are devised by combining five different convection treatments over the innermost, 5 km grid spacing, domain and two different global circulation model datasets (IFS and ERA-Interim). All ten experiments are then analysed and compared to observations provided by the National Hurricane Center's best track record and multisatellite rainfall measurements. Results clearly point to the superiority of employing no convective parameterization at the 5 km scale over any of the schemes provided by WRF in reproducing the case study of Hurricane Tomas, which hit the Lesser Antilles and Greater Antilles in late October and early November 2010.
Directory of Open Access Journals (Sweden)
Gabriel Recchia
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations in which units are mapped to other units arbitrarily (with no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
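The two binding operators compared in this abstract are easy to sketch. The following is a minimal, illustrative Python comparison on toy random vectors (not the authors' experimental setup): circular convolution binds two vectors into a trace from which one associate can be approximately decoded by circular correlation, while a random permutation yields a vector dissimilar to the original yet exactly invertible.

```python
import math
import random

def circular_convolution(a, b):
    """Bind two vectors: c[k] = sum_j a[j] * b[(k - j) mod n]."""
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

def circular_correlation(c, a):
    """Approximate inverse of convolution: recovers b from c = a (*) b."""
    n = len(c)
    return [sum(a[j] * c[(k + j) % n] for j in range(n)) for k in range(n)]

def permutation_bind(v, perm):
    """Bind by reordering components with one fixed random permutation."""
    return [v[p] for p in perm]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) *
                  math.sqrt(sum(x * x for x in v)))

random.seed(0)
n = 512

def rand_vec():
    # Components ~ N(0, 1/n), as in holographic reduced representations.
    return [random.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)]

a, b = rand_vec(), rand_vec()

# Convolution binding: decode b from the trace given the cue a.
trace = circular_convolution(a, b)
b_hat = circular_correlation(trace, a)
sim_conv = cosine(b_hat, b)   # high: b approximately recoverable (with noise)

# Permutation binding: the permuted vector is dissimilar to the original...
perm = list(range(n))
random.shuffle(perm)
pb = permutation_bind(b, perm)
sim_perm = cosine(pb, b)      # near 0: bound form does not resemble b

# ...but is exactly recoverable by inverting the permutation.
inv = [0] * n
for i, p in enumerate(perm):
    inv[p] = i
b_rec = permutation_bind(pb, inv)
```

The exact-inverse property of permutations (versus the noisy decoding of convolution) is one ingredient of the storage-capacity difference the abstract reports.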
Directory of Open Access Journals (Sweden)
A. A. Shevtsov
2015-01-01
Spray drying of solutions and suspensions is among the most common methods of producing a wide range of powdered products in the chemical, food and pharmaceutical industries. For the drying of heat-sensitive materials, which fully applies to the distillery stillage filtrate, a continuous-flow type of contact between the drying agent and the solution droplets is examined. A two-phase computational fluid dynamics simulation in a stationary state was used to study the drying of the distillery stillage filtrate in a pilot spray dryer under the following assumptions: the components form an ideal mixture, whose properties are calculated directly from the properties of the components and their proportions; the droplets are spherical; and the density and specific heat of the solution and the diffusion coefficient of vapours in the gas phase remain unchanged. The software package ANSYS CFX was used to solve the heat-exchange equations between the drying agent and the drops by the finite volume method. The coupling between the two phases was established by the Navier-Stokes equations, with the continuous phase described by the k-ε turbulence model and the droplets of the distillery stillage filtrate as the dispersed phase. The results obtained showed that the "drop-wall" interaction causes a significant change in the velocity, temperature and humidity of both the drying agent and the product particles. The behaviour of the particles during spraying, collision with the walls and deposition of the finished product made it possible to determine the dependence of the physical parameters of the drying process on the geometric dimensions of the dryer. Comparison of the simulation results with experimental data showed satisfactory agreement: within 10% for the powder temperature, 12% for its humidity, and 13% for the temperature of the spent drying agent at the dryer outlet. The possibility of using the model in the design of spray dryers, and control of the drying process
Webster, Clare; Rutter, Nick; Jonas, Tobias
2017-09-01
A comprehensive analysis of canopy surface temperatures was conducted around a small and a large gap at a forested alpine site in the Swiss Alps during the 2015 and 2016 snowmelt seasons (March-April). Canopy surface temperatures within the small gap were within 2-3°C of the measured reference air temperature. Vertical and horizontal variations in canopy surface temperature were greatest around the large gap, reaching up to 18°C above the measured reference air temperature during clear-sky days. Nighttime canopy surface temperatures around the study site were up to 3°C cooler than the reference air temperature. These measurements were used to develop a simple parameterization correcting reference air temperature for elevated canopy surface temperatures during (1) nighttime conditions (subcanopy shortwave radiation is 0 W m-2) and (2) periods of increased subcanopy shortwave radiation (>400 W m-2) representing penetration of shortwave radiation through the canopy. Subcanopy shortwave and longwave radiation collected at a single point in the subcanopy over a 24 h clear-sky period was used to calculate a nighttime bulk offset of 3°C for scenario 1 and to develop a multiple linear regression model for scenario 2, using reference air temperature and subcanopy shortwave radiation to predict canopy surface temperature with a root-mean-square error (RMSE) of 0.7°C. Outside of these two scenarios, reference air temperature was used to predict subcanopy incoming longwave radiation. Modeling at 20 radiometer locations throughout two snowmelt seasons using these parameterizations reduced the mean bias and RMSE to below 10 W m-2 at all locations.
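The two-scenario correction described in this abstract can be sketched as a small piecewise function. The 3°C nighttime offset is taken from the text; the regression coefficients `a0`, `a1` and `b` for the high-radiation scenario are placeholders, since the fitted values are not reproduced here.

```python
def canopy_surface_temp(t_air_c, sw_subcanopy_wm2,
                        night_offset_c=3.0, a0=2.0, a1=1.0, b=0.01):
    """Estimate canopy surface temperature (deg C) from reference air
    temperature and subcanopy shortwave radiation, following the two
    scenarios in the abstract.  a0, a1, b are hypothetical regression
    coefficients, not the paper's fitted values."""
    if sw_subcanopy_wm2 == 0.0:
        # Scenario 1 (nighttime): bulk offset, canopy ~3 degC cooler than air.
        return t_air_c - night_offset_c
    if sw_subcanopy_wm2 > 400.0:
        # Scenario 2 (strong shortwave penetration): multiple linear
        # regression on air temperature and subcanopy shortwave radiation.
        return a0 + a1 * t_air_c + b * sw_subcanopy_wm2
    # Outside both scenarios, reference air temperature is used directly.
    return t_air_c
```

For example, at 5°C air temperature the nighttime estimate is 2°C, while daytime conditions with weak subcanopy radiation fall back to the air temperature itself.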
Zhang, Y.; Novick, K. A.; Song, C.; Zhang, Q.; Hwang, T.
2017-12-01
Drought and heat waves are expected to increase in both frequency and amplitude, representing a major disturbance to global carbon and water cycles under future climate change. However, how these climate anomalies translate into physiological drought, or ecosystem moisture stress, is still not clear, especially under the co-limitation of soil moisture supply and atmospheric demand for water. In this study, we characterized ecosystem-level moisture stress in a deciduous forest in the southeastern United States using the Coupled Carbon and Water (CCW) model and in-situ eddy covariance measurements. Physiologically, vapor pressure deficit (VPD), as an indicator of atmospheric water demand, largely controls the openness of leaf stomata and regulates atmospheric carbon and water exchanges during periods of hydrological stress. Here, we tested three forms of VPD-related moisture scalars, i.e. exponential (K2), hyperbolic (K3), and logarithmic (K4), to quantify the sensitivity of light-use efficiency to VPD along different soil moisture conditions. The sensitivity indicators, the K values, were calibrated within the framework of CCW using Monte Carlo simulations on the hourly scale, on which VPD and soil water content (SWC) are largely decoupled and the full carbon and water exchange information is retained. We found that the three K values show similar performance in predicting ecosystem-level photosynthesis and transpiration after calibration. However, all K values show consistent gradient changes along SWC, indicating that this deciduous forest is less responsive to VPD as soil moisture decreases, a phenomenon of isohydricity in which plants tend to close stomata to keep the leaf water potential constant and reduce the risk of hydraulic failure. Our study suggests that accounting for such isohydric information, or the spectrum of moisture stress along different soil moisture conditions, in models can significantly improve our ability to predict ecosystem responses to future
Todd, Martin; Cavazos, Carolina; Wang, Yi
2013-04-01
The Saharan atmospheric boundary layer (SABL) during summer is one of the deepest on Earth and is crucial in controlling the vertical redistribution and long-range transport of dust in the Sahara. The SABL is typically made up of an actively growing convective layer driven by strong sensible heating at the surface, with a deep, near-neutrally stratified Saharan residual layer (SRL) above it, which is mostly well mixed in humidity and temperature and reaches a height of ˜5-6 km. These two layers are usually separated by a weak (≤1 K) temperature inversion. Model representation of the SABL structure and evolution is important for accurate weather/climate and aerosol prediction. In this work, we evaluate the performance of the Weather Research and Forecasting (WRF) model in representing key multi-scale processes in the SABL during summer 2011, including the depiction of the diurnal cycle. For this purpose, a sensitivity analysis is performed to examine the performance of seven PBL schemes (YSU, MYJ, QNSE, MYNN, ACM, Boulac and MRF) and two land-surface model schemes (Noah and RUC). In addition, the sensitivity to the choice of lateral boundary conditions (ERA-Interim and NCEP) and land use classification maps (USGS and MODIS-based) is tested. Model outputs were confronted with upper-air and surface observations from the Fennec super-site at Bordj Moktar and an automatic weather station (AWS) in southern Algeria. Vertical profiles of wind speed, potential temperature and water vapour mixing ratio were examined to diagnose differences in PBL heights and model efficacy in reproducing the diurnal cycle of the SABL. We find that the structure of the modelled SABL is most sensitive to the choice of land surface model and lateral boundary conditions and relatively insensitive to the PBL scheme. Overall the model represents well the diurnal cycle in the structure of the SABL. Consistent model biases include (i) a moist (1-2 g kg-1) and slightly cool (~1 K) bias in the daytime convective boundary layer (ii
Directory of Open Access Journals (Sweden)
P. Mathiot
2017-07-01
Ice-shelf–ocean interactions are a major source of freshwater on the Antarctic continental shelf and have a strong impact on ocean properties, ocean circulation and sea ice. However, climate models based on the ocean–sea ice model NEMO (Nucleus for European Modelling of the Ocean) currently do not include these interactions in any detail. The capability of explicitly simulating the circulation beneath ice shelves is introduced in the non-linear free surface model NEMO. Its implementation into the NEMO framework and its assessment in an idealised and a realistic circum-Antarctic configuration are described in this study. Compared with the current prescription of ice shelf melting (i.e. at the surface), the inclusion of open sub-ice-shelf cavities leads to a decrease in sea ice thickness along the coast, a weakening of the ocean stratification on the shelf, a decrease in the salinity of high-salinity shelf water on the Ross and Weddell sea shelves, and an increase in the strength of the gyres that circulate within the over-deepened basins on the West Antarctic continental shelf. Mimicking the overturning circulation under the ice shelves by introducing a prescribed meltwater flux over the depth range of the ice shelf base, rather than at the surface, is also assessed. It yields improvements in the simulated ocean properties and circulation over the Antarctic continental shelf similar to those from the explicit ice shelf cavity representation. With the ice shelf cavities opened, the widely used three-equation ice shelf melting formulation, which enables an interactive computation of melting, is tested. Comparison with observational estimates of ice shelf melting indicates realistic results for most ice shelves. However, melting rates for the Amery, Getz and George VI ice shelves are considerably overestimated.
Towards a better representation of the solar cycle in general circulation models
Directory of Open Access Journals (Sweden)
K. M. Nissen
2007-10-01
We introduce the improved Freie Universität Berlin (FUB) high-resolution radiation scheme FUBRad and compare it to the 4-band standard ECHAM5 SW radiation scheme of Fouquart and Bonnel (FB). Both schemes are validated against the detailed radiative transfer model libRadtran. FUBRad produces realistic heating rate variations during the solar cycle. The SW heating rate response with the FB scheme is about 20 times smaller than with FUBRad and cannot produce the observed temperature signal. A reduction of the spectral resolution to 6 bands for solar irradiance and ozone absorption cross sections leads to a degradation (reduction) of the solar SW heating rate signal by about 20%.
The simulated temperature response agrees qualitatively well with observations in the summer upper stratosphere and mesosphere, where irradiance variations dominate the signal.
Comparison of the total short-wave heating rates under solar minimum conditions shows good agreement between FUBRad, FB and libRadtran up to the middle mesosphere (60–70 km), indicating that both parameterizations are well suited for climate integrations that do not take solar variability into account.
The FUBRad scheme has been implemented as a sub-submodel of the Modular Earth Submodel System (MESSy).
Directory of Open Access Journals (Sweden)
Ming Li
2012-01-01
Quality function deployment (QFD) is a customer-driven approach to product design and development. A QFD analysis process includes a series of subprocesses, such as the determination of the importance of customer requirements (CRs), the correlations among engineering characteristics (ECs), and the relationships between CRs and ECs. Usually more than one group of decision makers is involved in these subprocesses. In most decision-making problems, they provide their evaluation information in linguistic form. Moreover, because of differences in knowledge, background, and discrimination ability, decision makers may express their linguistic preferences with multi-granularity linguistic information. Therefore, an effective approach to deal with multi-granularity linguistic information in the QFD analysis process is highly needed. In this study, the QFD methodology is extended with the 2-tuple linguistic representation model under a multi-granularity linguistic environment. The extended QFD methodology can cope with multi-granularity linguistic evaluation information and avoid the loss of information. The applicability of the proposed approach is demonstrated with a numerical example.
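The core of the 2-tuple linguistic representation model referenced in this abstract (due to Herrera and Martínez) can be sketched briefly: a numeric value β on the label scale [0, g] is represented as a pair (s_i, α), where s_i is the closest linguistic label and α ∈ [-0.5, 0.5) is a symbolic translation, so aggregation is done numerically without rounding away information. The 5-term label set below is an illustrative assumption, not taken from the paper.

```python
def to_tuple(beta):
    """Delta: map beta in [0, g] to a 2-tuple (label_index, alpha),
    with alpha in [-0.5, 0.5)."""
    i = int(round(beta))
    return i, beta - i

def to_beta(pair):
    """Delta^{-1}: map a 2-tuple back to its numeric value."""
    i, alpha = pair
    return i + alpha

def aggregate(tuples):
    """Arithmetic mean of 2-tuples without loss of information."""
    beta = sum(to_beta(t) for t in tuples) / len(tuples)
    return to_tuple(beta)

# Hypothetical 5-term scale s0..s4 ("very low" .. "very high").
labels = ["very low", "low", "medium", "high", "very high"]

# Three decision makers rate a customer requirement as s2, s3, s3.
result = aggregate([(2, 0.0), (3, 0.0), (3, 0.0)])
# Mean beta = 8/3, i.e. the label "high" with a symbolic translation of -1/3,
# rather than a plain rounded "high" that discards the deviation.
```

Aggregating the raw β values and only then mapping back to a labeled pair is exactly what avoids the information loss of rounding each assessment to its nearest label first.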
Directory of Open Access Journals (Sweden)
Mei-Shiang Chang
2013-01-01
The facility layout problem is a typical combinatorial optimization problem. In this research, a slicing tree representation and a quadratically constrained program model are combined with harmony search to develop a heuristic method for solving the unequal-area block layout problem. Because of the characteristics of the slicing tree structure, we propose a regional structure of harmony memory to memorize facility layout solutions and two kinds of harmony improvisation to enhance the global search ability of the proposed heuristic method. The proposed harmony-search-based heuristic is tested on 10 well-known unequal-area facility layout problems from the literature. The results are compared with the previously best-known solutions obtained by genetic algorithm, tabu search, and ant system as well as exact methods. For problems O7, O9, vC10Ra, M11*, and Nug12, new best solutions are found. For the other problems, the proposed approach finds solutions very similar to the previous best-known solutions.
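Harmony search itself is a simple population-based metaheuristic whose core loop is easy to sketch. The toy below minimises a continuous 2-D sphere function rather than the paper's slicing-tree layout encoding or quadratically constrained model, which are beyond a short example; the parameter names follow common harmony search usage and all values are illustrative.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=3000, seed=1):
    """Minimise f over box bounds with basic harmony search.
    hms: harmony memory size, hmcr: memory considering rate,
    par: pitch adjusting rate, bw: pitch bandwidth."""
    rng = random.Random(seed)
    # Initialise the harmony memory with random solutions.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                # Improvise from memory, optionally pitch-adjusted.
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:
                    v += rng.uniform(-1.0, 1.0) * bw
            else:
                # Otherwise take a fresh random value.
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        s = f(new)
        # Replace the worst harmony if the new one is better.
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy usage: minimise the 2-D sphere function (optimum 0 at the origin).
x, fx = harmony_search(lambda v: v[0] ** 2 + v[1] ** 2,
                       [(-5.0, 5.0), (-5.0, 5.0)])
```

The paper's "regional" harmony memory and dual improvisation operators replace this single flat memory and improvisation step, but the accept/replace-worst skeleton is the same.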
Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye
2017-10-01
Segmentation of the arterial wall boundaries from intravascular ultrasound (IVUS) images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side-branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions.
Liu, Yi-Chin; Fan, Jiwen; Zhang, Guang J.; Xu, Kuan-Man; Ghan, Steven J.
2015-04-01
Following Part I, in which 3-D cloud-resolving model (CRM) simulations of a squall line and mesoscale convective complex in the midlatitude continental and the tropical regions are conducted and evaluated, we examine the scale dependence of eddy transport of water vapor, evaluate different eddy transport formulations, and improve the representation of convective transport across all scales by proposing a new formulation that more accurately represents the CRM-calculated eddy flux. CRM results show that there are strong grid-spacing dependencies of updraft and downdraft fractions regardless of altitudes, cloud life stage, and geographical location. As for the eddy transport of water vapor, updraft eddy flux is a major contributor to total eddy flux in the lower and middle troposphere. However, downdraft eddy transport can be as large as updraft eddy transport in the lower atmosphere especially at the mature stage of midlatitude continental convection. We show that the single-updraft approach significantly underestimates updraft eddy transport of water vapor because it fails to account for the large internal variability of updrafts, while a single downdraft represents the downdraft eddy transport of water vapor well. We find that using as few as three updrafts can account for the internal variability of updrafts well. Based on the evaluation with the CRM simulated data, we recommend a simplified eddy transport formulation that considers three updrafts and one downdraft. Such formulation is similar to the conventional one but much more accurately represents CRM-simulated eddy flux across all grid scales.
Storelvmo, Trude; Sagoo, Navjit; Tan, Ivy
2016-04-01
Despite the growing effort to improve cloud microphysical schemes in GCMs, little of this effort has focused on improving the ability of GCMs to accurately simulate phase partitioning in mixed-phase clouds. Getting the relative proportion of liquid droplets and ice crystals in clouds right in GCMs is critical for the representation of cloud radiative forcings and cloud-climate feedbacks. Here, we first present satellite observations of cloud phase obtained by NASA's CALIOP instrument, and report on robust statistical relationships between cloud phase and several aerosol species that have been demonstrated to act as ice nuclei (IN) in laboratory studies. We then report on results from model intercomparison projects that reveal that GCMs generally underestimate the amount of supercooled liquid in clouds. For a selected GCM (NCAR's CAM5), we thereafter show that the underestimate can be attributed to two main factors: i) the presence of IN in the mixed-phase temperature range, and ii) the Wegener-Bergeron-Findeisen process, which converts liquid to ice once ice crystals have formed. Finally, we show that adjusting these two processes such that the GCM's cloud phase agrees with the observed phase has a substantial impact on the simulated radiative forcing due to IN perturbations, as well as on the cloud-climate feedbacks and ultimately the climate sensitivity simulated by the GCM.
Directory of Open Access Journals (Sweden)
María de Lourdes Ortíz Boza
2007-08-01
The purpose of this paper is to analyze the stereotypes used in the 1998-2000 family planning campaign produced by the National Council of Population (CONAPO) and broadcast by Mexican television in all its modalities, open and pay-per-view. This campaign is one of the last that the state, through CONAPO, has placed in the mass communication media, specifically television. The campaign was designed especially for this medium and was transmitted from 1998 to 2004, when it was re-broadcast as part of the celebration of 30 years of reproductive health campaigns of the Mexican state. Since this campaign, practically no others have been broadcast on television. Another thing that makes it an interesting object of study is the fact that, for the first time, the masculine stereotype is included as a decisive part of family planning. This audiovisual material constitutes a good source of information of its kind for analyzing messages produced by the state, and urban and rural stereotypes are included in it as well. The model adopted was that of social representations, from Serge Moscovici, together with some elements of content analysis techniques. All 22 television messages that make up the 1998-2000 campaign were analyzed: 11 directed at urban zones and 11 at rural zones. In both, the number of times men and women take part and the way they take part, alone or as a couple, were quantified, and a positive or negative value was assigned to the stereotypes present in the messages, taking as the criterion for this assignment the extent to which (through the textual or visual discourse) masculine and feminine participation in the decision to plan the family is, or is not, fostered in an equitable way. Ten charts were elaborated in which this exercise is carried out for each of the themes addressed in the messages of the campaign, and from the results obtained inferences were drawn from two
Paired structures in knowledge representation
DEFF Research Database (Denmark)
Montero, J.; Bustince, H.; Franco de los Ríos, Camilo
2016-01-01
In this position paper we propose a consistent and unifying view to all those basic knowledge representation models that are based on the existence of two somehow opposite fuzzy concepts. A number of these basic models can be found in fuzzy logic and multi-valued logic literature. Here...... of the relationships between several existing knowledge representation formalisms, providing a basis from which more expressive models can be later developed....
Mohamed-Salah, Boukhechem; Alain, Dumon
2016-01-01
This study aims to assess whether the handling of concrete ball-and-stick molecular models promotes translation between diagrammatic representations and a concrete model (or vice versa) and the coordination of the different types of structural representations of a given molecular structure. Forty-one Algerian undergraduate students were requested…
DEFF Research Database (Denmark)
Tattini, Jacopo; Ramea, Kalai; Gargiulo, Maurizio
2018-01-01
and mathematical expressions required to develop the approach. This study develops MoCho-TIMES in the standalone transportation sector of TIMES-DK, the integrated energy system model for Denmark. The model is tested for the Business as Usual scenario and for four alternative scenarios that imply diverse......This study presents MoCho-TIMES, an original methodology for incorporating modal choice into energy-economy-environment-engineering (E4) system models. MoCho-TIMES addresses the scarce ability of E4 models to realistically depict behaviour in transport and allows for modal shift towards transit...
Sakti, Apurba; Gallagher, Kevin G.; Sepulveda, Nestor; Uckun, Canan; Vergara, Claudio; de Sisternes, Fernando J.; Dees, Dennis W.; Botterud, Audun
2017-02-01
We develop three novel enhanced mixed-integer linear representations of the power limit of the battery and its efficiency as a function of the charge and discharge power and the state of charge of the battery, which can be directly implemented in large-scale power systems models and solved with commercial optimization solvers. Using these battery representations, we conduct a techno-economic analysis of the performance of a 10 MWh lithium-ion battery system, testing the effect of a 5-min vs. a 60-min price signal on profits using real-time prices from a selected node in the MISO electricity market. Results show that models of lithium-ion batteries in which the power limits and efficiency are held constant overestimate profits by 10% compared to those obtained from an enhanced representation that more closely matches the real behavior of the battery. When the battery system is exposed to a 5-min price signal, the energy arbitrage profitability improves by 60% compared to that from hourly price exposure. These results indicate that a more accurate representation of Li-ion batteries, as well as of the market rules that govern the frequency of electricity prices, can play a major role in the estimation of the value of battery technologies for power grid applications.
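The effect described above can be reproduced in miniature. The sketch below (not the authors' MILP formulation; all parameter values and the simple threshold-arbitrage policy are hypothetical) compares an idealized battery with constant power limit and efficiency against an "enhanced" representation whose charge/discharge efficiency falls with power and whose power limit tapers near full and empty state of charge; the constant model systematically overstates arbitrage profit:

```python
import numpy as np

E_CAP, P_RATED, DT = 10.0, 5.0, 1.0          # MWh, MW, h (illustrative)
prices = np.tile([10.0, 50.0], 12)           # alternating cheap/expensive hours ($/MWh)

def run(prices, soc, enhanced):
    profit = 0.0
    for price in prices:
        if price < 30.0:                     # charge in cheap hours
            p_lim = P_RATED * min(1.0, (1.0 - soc / E_CAP) / 0.2) if enhanced else P_RATED
            p = min(P_RATED, p_lim)
            eta = 0.98 - 0.04 * p / P_RATED if enhanced else 0.95
            soc += p * eta * DT              # losses reduce stored energy
            profit -= p * DT * price
        else:                                # discharge in expensive hours
            p_lim = P_RATED * min(1.0, (soc / E_CAP) / 0.2) if enhanced else P_RATED
            p = min(P_RATED, p_lim, soc / DT)
            eta = 0.98 - 0.04 * p / P_RATED if enhanced else 0.95
            soc -= p * DT
            profit += p * eta * DT * price   # losses reduce delivered energy
    return profit

profit_const = run(prices, E_CAP / 2, enhanced=False)
profit_enh = run(prices, E_CAP / 2, enhanced=True)
```

Running the same dispatch policy under both representations shows the constant-parameter model earning more on paper, which is the direction of bias the abstract reports.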
The representation of low-level clouds during the West African monsoon in weather and climate models
Kniffka, Anke; Hannak, Lisa; Knippertz, Peter; Fink, Andreas
2016-04-01
The West African monsoon is one of the most important large-scale circulation features in the tropics, and the associated seasonal rainfalls are crucial to rain-fed agriculture and water resources for hundreds of millions of people. However, numerical weather and climate models still struggle to realistically represent salient features of the monsoon across a wide range of scales. Recently it has been shown that substantial errors in radiation and clouds exist in the southern parts of West Africa (8°W-8°E, 5-10°N) during summer. This area is characterised by strong low-level jets associated with the formation of extensive ultra-low stratus clouds. Often persisting long after sunrise, these clouds have a substantial impact on the radiation budget at the surface and thus on the diurnal evolution of the planetary boundary layer (PBL). Here we present some first results from a detailed analysis of the representation of these clouds and the associated PBL features across a range of weather and climate models. Recent climate model simulations for the period 1991-2010, run in the framework of the Year of Tropical Convection (YOTC), offer a great opportunity for this analysis. The models are those used for the latest Assessment Report of the Intergovernmental Panel on Climate Change, but for YOTC the model output has a much better temporal resolution, allowing the diurnal cycle to be resolved, and includes diabatic terms, allowing a much better assessment of the physical reasons for errors in low-level temperature, moisture and thus cloudiness. These more statistical climate model analyses are complemented by experiments using ICON (Icosahedral non-hydrostatic general circulation model), the new numerical weather prediction model of the German Weather Service and the Max Planck Institute for Meteorology. ICON allows testing sensitivities to model resolution and numerical schemes. These model simulations are validated against (re-)analysis data, satellite observations (e.g. CM SAF cloud and
Omrani, H.; Drobinski, P.; Dubos, T.
2009-09-01
In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model simulation. The limited-area model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In the two sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
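The trajectory-divergence measurement described above can be sketched with the standard Benettin two-trajectory procedure. The example below (an illustration, not the quasi-geostrophic code) applies it to the chaotic logistic map x → 4x(1-x), whose largest Lyapunov exponent is known analytically to be ln 2; for the model in the abstract, `step` would be one model time step acting on the state vector:

```python
import numpy as np

def step(x):
    """One iteration of the chaotic logistic map (stand-in for a model step)."""
    return 4.0 * x * (1.0 - x)

def largest_lyapunov(x0=0.3, d0=1e-9, n_steps=200000, transient=1000):
    x = x0
    for _ in range(transient):               # discard the initial transient
        x = step(x)
    y = x + d0                               # nearby perturbed trajectory
    acc = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = max(abs(y - x), 1e-300)          # guard against exact collapse
        acc += np.log(d / d0)                # accumulate the stretching rate
        y = x + d0 * (y - x) / d             # renormalise the separation
    return acc / n_steps

lam = largest_lyapunov()                     # converges toward ln 2 ~ 0.693
```

The renormalisation step keeps the separation infinitesimal, so the accumulated log-stretching converges to the largest Lyapunov exponent rather than saturating at the attractor size.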
Energy Technology Data Exchange (ETDEWEB)
Morrison, Hugh [National Center for Atmospheric Research, Boulder, CO (United States)
2012-11-12
This is the first meeting of the whole new GEWEX (Global Energy and Water Cycle Experiment) Atmospheric System Study (GASS) project that has been formed from the merger of the GEWEX Cloud System Study (GCSS) Project and the GEWEX Atmospheric Boundary Layer Studies (GABLS). As such, this meeting will play a major role in energizing GEWEX work in the area of atmospheric parameterizations of clouds, convection, stable boundary layers, and aerosol-cloud interactions for the numerical models used for weather and climate projections at both global and regional scales. The representation of these processes in models is crucial to GEWEX goals of improved prediction of the energy and water cycles at both weather and climate timescales. This proposal seeks funds to be used to cover incidental and travel expenses for U.S.-based graduate students and early career scientists (i.e., within 5 years of receiving their highest degree). We anticipate using DOE funding to support 5-10 people. We will advertise the availability of these funds by providing a box to check for interested participants on the online workshop registration form. We will also send a note to our participants' mailing lists reminding them that the funds are available and asking senior scientists to encourage their more junior colleagues to participate. All meeting participants are encouraged to submit abstracts for oral or poster presentations. The science organizing committee (see below) will base funding decisions on the relevance and quality of these abstracts, with preference given to under-represented populations (especially women and minorities) and to early career scientists being actively mentored at the meeting (e.g. students or postdocs attending the meeting with their adviser).
Storelvmo, T.
2015-12-01
Substantial improvements have been made to the cloud microphysical schemes used in the latest generation of global climate models (GCMs); however, an outstanding weakness of these schemes lies in the arbitrariness of their tuning parameters. Despite the growing effort in improving the cloud microphysical schemes in GCMs, most of this effort has not focused on improving the ability of GCMs to accurately simulate phase partitioning in mixed-phase clouds. Getting the relative proportion of liquid droplets and ice crystals in clouds right in GCMs is critical for the representation of cloud radiative forcings and cloud-climate feedbacks. Here, we first present satellite observations of cloud phase obtained by NASA's CALIOP instrument, and report on robust statistical relationships between cloud phase and several aerosol species that have been demonstrated to act as ice nuclei (IN) in laboratory studies. We then report on results from model intercomparison projects that reveal that GCMs generally underestimate the amount of supercooled liquid in clouds. For a selected GCM (NCAR's CAM5), we thereafter show that the underestimate can be attributed to two main factors: (i) the presence of IN in the mixed-phase temperature range, and (ii) the Wegener-Bergeron-Findeisen process, which converts liquid to ice once ice crystals have formed. Finally, we show that adjusting these two processes such that the GCM's cloud phase agrees with observations has a substantial impact on the simulated radiative forcing due to IN perturbations, as well as on the cloud-climate feedbacks and ultimately the climate sensitivity simulated by the GCM.
Throop, David R.
1992-01-01
The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
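The requirement that the component specification enumerate its fault modes can be made concrete with a toy sketch (all names and behaviors here are hypothetical, not from the paper): a component model carries an explicit catalogue of faults, any of which can be instantiated and simulated, and only the faults listed in the catalogue can ever be diagnosed.

```python
class Valve:
    """Toy component model whose specification enumerates its fault modes."""
    FAULT_MODES = ("nominal", "stuck_closed", "stuck_open")

    def __init__(self, fault="nominal"):
        assert fault in self.FAULT_MODES     # only declared faults can be instantiated
        self.fault = fault

    def flow(self, commanded_open, supply_pressure):
        if self.fault == "stuck_closed":
            return 0.0                       # fault overrides the command
        if self.fault == "stuck_open":
            return supply_pressure           # passes full flow regardless
        return supply_pressure if commanded_open else 0.0

nominal = Valve().flow(commanded_open=True, supply_pressure=3.5)
faulted = Valve(fault="stuck_closed").flow(commanded_open=True, supply_pressure=3.5)
```

Comparing the sensed flow of the nominal and faulted runs is exactly the kind of model-based discrepancy that MBR uses to localise the instantiated fault.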
Lohmar, Johannes; Bambach, Markus; Karhausen, Kai F.
2013-01-01
Integrated computational materials engineering is a state-of-the-art method for developing new materials and optimizing complete process chains. In the simulation of a process chain, material models play a central role as they capture the response of the material to external process conditions. While much effort is put into their development and improvement, less attention is paid to their implementation, which is problematic because the representation of microstructure in the model has a decisive influence on modeling accuracy and calculation speed. The aim of this article is to analyze the influence of different microstructure representation concepts on the prediction of flow stress and microstructure evolution when using the same set of material equations. Scalar, tree-based and cluster-based concepts are compared for a multi-stage rolling process of an AA5182 alloy. It was found that the implementation influences the predicted flow stress and grain size, in particular in the regime of coupled hardening and softening.
Computer representation of molecular surfaces
International Nuclear Information System (INIS)
Max, N.L.
1981-01-01
This review article surveys recent work on computer representation of molecular surfaces. Several different algorithms are discussed for producing vector or raster drawings of space-filling models formed as the union of spheres. Other smoother surfaces are also considered
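The union-of-spheres idea behind space-filling models can be sketched numerically: sample quasi-uniform points on each atom sphere and keep only those not buried inside any other sphere. In the two-unit-sphere test below (centres one radius apart), each sphere loses a 60-degree cap, so the exposed fraction of each sphere is (1 - cos 60°)/2 removed, i.e. 0.75 remains; this is an illustrative sketch, not one of the surveyed algorithms.

```python
import numpy as np

def fibonacci_sphere(n):
    """Quasi-uniform points on the unit sphere via the golden-angle spiral."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z**2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def exposed_points(centers, radii, n=4000):
    """Surface points of the union of spheres, grouped per sphere."""
    out = []
    for j, (c, rad) in enumerate(zip(centers, radii)):
        pts = c + rad * fibonacci_sphere(n)
        keep = np.ones(n, dtype=bool)
        for k, (c2, r2) in enumerate(zip(centers, radii)):
            if k != j:                        # drop points buried in another sphere
                keep &= np.linalg.norm(pts - c2, axis=1) >= r2
        out.append(pts[keep])
    return out

centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
surf = exposed_points(centers, [1.0, 1.0])
frac = len(surf[0]) / 4000.0                  # exposed fraction of sphere A, ~0.75
```

The retained point sets are exactly the kind of dot-surface data that raster or vector renderers of space-filling models consume.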
DeSouza-Machado, Sergio; Larrabee Strow, L.; Tangborn, Andrew; Huang, Xianglei; Chen, Xiuhong; Liu, Xu; Wu, Wan; Yang, Qiguang
2018-01-01
One-dimensional variational retrievals of temperature and moisture fields from hyperspectral infrared (IR) satellite sounders use cloud-cleared radiances (CCRs) as their observation. These derived observations allow the use of clear-sky-only radiative transfer in the inversion for geophysical variables but at reduced spatial resolution compared to the native sounder observations. Cloud clearing can introduce various errors, although scenes with large errors can be identified and ignored. Information content studies show that, when using multilayer cloud liquid and ice profiles in infrared hyperspectral radiative transfer codes, there are typically only 2-4 degrees of freedom (DOFs) of cloud signal. This implies a simplified cloud representation is sufficient for some applications which need accurate radiative transfer. Here we describe a single-footprint retrieval approach for clear and cloudy conditions, which uses the thermodynamic and cloud fields from numerical weather prediction (NWP) models as a first guess, together with a simple cloud-representation model coupled to a fast scattering radiative transfer algorithm (RTA). The NWP model thermodynamic and cloud profiles are first co-located to the observations, after which the N-level cloud profiles are converted to two slab clouds (TwoSlab; typically one for ice and one for water clouds). From these, one run of our fast cloud-representation model allows an improvement of the a priori cloud state by comparing the observed and model-simulated radiances in the thermal window channels. The retrieval yield is over 90 %, while the degrees of freedom correlate with the observed window channel brightness temperature (BT) which itself depends on the cloud optical depth. The cloud-representation and scattering package is benchmarked against radiances computed using a maximum random overlap (RMO) cloud scheme. All-sky infrared radiances measured by NASA's Atmospheric Infrared Sounder (AIRS) and NWP thermodynamic and cloud
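The N-level-to-TwoSlab reduction can be sketched as follows. This is our simplified reading of the idea, not the paper's code: the 253 K phase threshold, the path-weighted representative pressure, and all profile numbers below are assumptions chosen for illustration; the essential invariant is that the two slabs conserve the column's total cloud water path.

```python
import numpy as np

def two_slab(p_hpa, temp_k, wp_gm2, t_ice=253.0):
    """Collapse an N-level cloud water profile into an ice slab and a liquid slab."""
    slabs = {}
    for name, mask in (("ice", temp_k < t_ice), ("liquid", temp_k >= t_ice)):
        path = wp_gm2[mask].sum()            # total water path in this slab
        slabs[name] = {
            "water_path": path,
            # path-weighted mean pressure as the slab's representative level
            "pressure": float(np.sum(p_hpa[mask] * wp_gm2[mask]) / path) if path > 0 else None,
        }
    return slabs

p = np.array([300.0, 400.0, 500.0, 700.0, 850.0])    # level pressures (hPa)
T = np.array([230.0, 245.0, 258.0, 270.0, 280.0])    # level temperatures (K)
wp = np.array([5.0, 10.0, 20.0, 40.0, 15.0])         # per-level water path (g/m2)
slabs = two_slab(p, T, wp)
```

Reducing the profile to two slabs matches the 2-4 DOFs of cloud signal noted in the abstract: a fast scattering RTA then only needs a path and a level per phase.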
Directory of Open Access Journals (Sweden)
S. DeSouza-Machado
2018-01-01
Full Text Available One-dimensional variational retrievals of temperature and moisture fields from hyperspectral infrared (IR) satellite sounders use cloud-cleared radiances (CCRs) as their observation. These derived observations allow the use of clear-sky-only radiative transfer in the inversion for geophysical variables but at reduced spatial resolution compared to the native sounder observations. Cloud clearing can introduce various errors, although scenes with large errors can be identified and ignored. Information content studies show that, when using multilayer cloud liquid and ice profiles in infrared hyperspectral radiative transfer codes, there are typically only 2–4 degrees of freedom (DOFs) of cloud signal. This implies a simplified cloud representation is sufficient for some applications which need accurate radiative transfer. Here we describe a single-footprint retrieval approach for clear and cloudy conditions, which uses the thermodynamic and cloud fields from numerical weather prediction (NWP) models as a first guess, together with a simple cloud-representation model coupled to a fast scattering radiative transfer algorithm (RTA). The NWP model thermodynamic and cloud profiles are first co-located to the observations, after which the N-level cloud profiles are converted to two slab clouds (TwoSlab; typically one for ice and one for water clouds). From these, one run of our fast cloud-representation model allows an improvement of the a priori cloud state by comparing the observed and model-simulated radiances in the thermal window channels. The retrieval yield is over 90 %, while the degrees of freedom correlate with the observed window channel brightness temperature (BT), which itself depends on the cloud optical depth. The cloud-representation and scattering package is benchmarked against radiances computed using a maximum random overlap (RMO) cloud scheme. All-sky infrared radiances measured by NASA's Atmospheric Infrared Sounder (AIRS) and NWP
Attention and Representational Momentum
Hayes, Amy; Freyd, Jennifer J
1995-01-01
Representational momentum, the tendency for memory to be distorted in the direction of an implied transformation, suggests that dynamics are an intrinsic part of perceptual representations. We examined the effect of attention on dynamic representation by testing for representational momentum under conditions of distraction. Forward memory shifts increase when attention is divided. Attention may be involved in halting but not in maintaining dynamic representations.
A generalized wavelet extrema representation
Energy Technology Data Exchange (ETDEWEB)
Lu, Jian; Lades, M.
1995-10-01
The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, and peaks and valleys at multiple scales. Such a representation is shown to be stable: the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms, preventing artifacts in the reconstructed signal.
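The edges-at-multiple-scales reading can be sketched numerically. The toy below (an illustration of the general idea, not the paper's transform: the derivative-of-Gaussian kernel, the dyadic scale set, and the extremum threshold are our assumptions) convolves a signal with a derivative-of-Gaussian wavelet at dyadic scales and keeps the local extrema of each response; for a step signal the finest-scale extremum lands on the edge:

```python
import numpy as np

def dog_kernel(scale, half_width=4):
    """Derivative-of-Gaussian wavelet sampled over +/- half_width * scale."""
    t = np.arange(-half_width * scale, half_width * scale + 1, dtype=float)
    return -t * np.exp(-t**2 / (2.0 * scale**2))

def wavelet_extrema(x, scales=(1, 2, 4, 8)):
    """Return {scale: indices of local extrema of the wavelet response}."""
    extrema = {}
    for s in scales:
        mag = np.abs(np.convolve(x, dog_kernel(s), mode="same"))
        extrema[s] = [i for i in range(1, len(x) - 1)
                      if mag[i] > mag[i - 1] and mag[i] >= mag[i + 1] and mag[i] > 1e-6]
    return extrema

x = np.zeros(256)
x[100:] = 1.0                                 # a single step edge at index 100
ext = wavelet_extrema(x)
```

Across scales, these extrema chains are the "edges, peaks and valleys" from which the stable reconstruction described in the abstract proceeds.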
Multiple representations in physics education
Duit, Reinders; Fischer, Hans E
2017-01-01
This volume is important because despite various external representations, such as analogies, metaphors, and visualizations being commonly used by physics teachers, educators and researchers, the notion of using the pedagogical functions of multiple representations to support teaching and learning is still a gap in physics education. The research presented in the three sections of the book is introduced by descriptions of various psychological theories that are applied in different ways for designing physics teaching and learning in classroom settings. The following chapters of the book illustrate teaching and learning with respect to applying specific physics multiple representations in different levels of the education system and in different physics topics using analogies and models, different modes, and in reasoning and representational competence. When multiple representations are used in physics for teaching, the expectation is that they should be successful. To ensure this is the case, the implementati...
Impossibility Theorem in Proportional Representation Problem
International Nuclear Information System (INIS)
Karpov, Alexander
2010-01-01
The study examines the general axiomatics of Balinski and Young and analyzes existing proportional representation methods using this approach. The second part of the paper provides a new axiomatics based on rational choice models. The new system of axioms is applied to study known proportional representation systems. It is shown that there is no proportional representation method satisfying a minimal set of the axioms (monotonicity and neutrality).
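A concrete instance of the kind of axiom failure behind such impossibility results is house-monotonicity violation by Hamilton's largest-remainder method (the classic "Alabama paradox"). The sketch below, our illustrative example rather than one from the paper, shows that with votes (6, 6, 2), growing the house from 10 to 11 seats reduces party C's allocation from 2 seats to 1:

```python
def hamilton(votes, seats):
    """Largest-remainder (Hamilton) apportionment."""
    total = sum(votes)
    quotas = [v * seats / total for v in votes]
    alloc = [int(q) for q in quotas]          # each party gets its quota's floor
    # hand out the remaining seats by largest fractional remainder
    order = sorted(range(len(votes)), key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in order[: seats - sum(alloc)]:
        alloc[i] += 1
    return alloc

votes = [6, 6, 2]
h10 = hamilton(votes, 10)   # -> [4, 4, 2]
h11 = hamilton(votes, 11)   # -> [5, 5, 1]: party C loses a seat as the house grows
```

Methods that avoid this particular paradox (divisor methods) violate other axioms instead, which is the trade-off the impossibility result formalizes.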
Knowledge Representation and Ontologies
Grimm, Stephan
Knowledge representation and reasoning aims at designing computer systems that reason about a machine-interpretable representation of the world. Knowledge-based systems have a computational model of some domain of interest in which symbols serve as surrogates for real world domain artefacts, such as physical objects, events, relationships, etc. [1]. The domain of interest can cover any part of the real world or any hypothetical system about which one desires to represent knowledge for com-putational purposes. A knowledge-based system maintains a knowledge base, which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Applications can base their decisions on answers to domain-relevant questions posed to a knowledge base.
Modified GMDH-NN algorithm and its application for global sensitivity analysis
International Nuclear Information System (INIS)
Song, Shufang; Wang, Lu
2017-01-01
Global sensitivity analysis (GSA) is a very useful tool to evaluate the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used among variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that the modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper aims at combining it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages. - Highlights: • The GMDH-NN is improved to construct an explicit polynomial model of optimal complexity by self-organization. • The paper aims at combining the improved GMDH-NN with HDMR expansions and using it to compute Sobol' indices directly. • The method can be applied to uniform, normal and exponential distributions by using suitable orthogonal polynomials. • Engineering examples, e.g., electronic circuit models, can be solved by the presented method.
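For reference, the quantity being computed can be sketched with a plain Monte Carlo baseline (not the GMDH-HDMR algorithm itself): the Saltelli pick-and-freeze estimator of first-order Sobol' indices, checked on the Ishigami test function, whose indices are known analytically (S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0 for a = 7, b = 0.1).

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    """Standard Ishigami GSA test function on [-pi, pi]^3."""
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2 + b * X[:, 2]**4 * np.sin(X[:, 0])

rng = np.random.default_rng(0)
N, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))       # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
variance = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # freeze all columns except i
    S.append(np.mean(fB * (ishigami(ABi) - fA)) / variance)
```

A metamodel-based route such as GMDH-HDMR aims at the same indices with far fewer model evaluations than this brute-force estimator needs.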
Directory of Open Access Journals (Sweden)
M. V. Serzhantova
2016-05-01
Full Text Available Subject of Research. We analyze the problems of applying the apparatus of finite Markov chains to simulate a human operator's activity in a quasi-static functional environment. It is shown that the stochastic nature of the functional environment is generated by the interval character of the human operator's properties. Method. The problem is solved in the class of regular (recurrent) finite Markov chains with three states of the human operator: favorable, median and unfavorable combinations of the values of the parameters of the mathematical model of the human operator in a quasi-static functional environment. The finite Markov chain is designed taking into account the factors of human operator tiredness and the interval character of the parameters of the model representation of his properties. The apparatus is based on a mathematical approximation of the standard curve of human operator activity performance during a work shift. The standard curve of human operator activity performance is based on extensive research experience of the functional activity of the human operator, with the help of work-day photographs, timing of his actions and ergonomic generalizations. Main Results. The apparatus of regular finite Markov chains made it possible to correctly evaluate human operator activity performance in a quasi-static functional environment, using the main information component of these chains, the vector of final probabilities. In addition, we managed to build an algorithmic basis for estimating the stationary time (time study) for the transit of the human operator from an arbitrary initial functional state into the state corresponding to the vector of final probabilities, based on the analysis of the eigenvalue spectrum of the matrix of transition probabilities for a regular (recurrent) finite Markov chain. Practical Relevance. The obtained theoretical results are confirmed by illustrative examples, which
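The final-probabilities and eigenvalue-spectrum reasoning above can be sketched as follows (a minimal illustration with a hypothetical transition matrix for the favorable / median / unfavorable states, not the paper's fitted values): the final-probability vector is the left eigenvector of P for eigenvalue 1, and the time to reach it is governed by the second-largest eigenvalue modulus.

```python
import numpy as np

# Hypothetical 3-state regular chain: favorable, median, unfavorable.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

pi = np.full(3, 1.0 / 3.0)
for _ in range(500):                         # power iteration on the row vector
    pi = pi @ P                              # converges to the final probabilities

eigvals = np.linalg.eigvals(P)
lam2 = sorted(np.abs(eigvals))[-2]           # second-largest eigenvalue modulus
relaxation_steps = 1.0 / (1.0 - lam2)        # rough time-to-stationarity scale
```

The gap 1 - |λ₂| is exactly what the abstract's algorithmic basis exploits when estimating the stationary (time-study) transit time from an arbitrary initial state.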
Energy Technology Data Exchange (ETDEWEB)
Mitchell, David L. [Desert Research Institute, Reno, NV (United States)
2013-09-05
It is well known that cirrus clouds play a major role in regulating the earth's climate, but the details of how this works are just beginning to be understood. This project targeted the main property of cirrus clouds that influences climate processes: the ice fall speed. That is, this project improves the representation of the mass-weighted ice particle fall velocity, V_m, in climate models used to predict future climate on global and regional scales. Prior to 2007, the dominant sizes of ice particles in cirrus clouds were poorly understood, making it virtually impossible to predict how cirrus clouds interact with sunlight and thermal radiation. Due to several studies investigating the performance of optical probes used to measure the ice particle size distribution (PSD), as well as the remote sensing results from our last ARM project, it is now well established that the anomalously high concentrations of small ice crystals often reported prior to 2007 were measurement artifacts. Advances in the design and data processing of optical probes have greatly reduced these ice artifacts, which resulted from the shattering of ice particles on the probe tips and/or inlet tube, and PSD measurements from one of these improved probes (the 2-dimensional Stereo or 2D-S probe) are utilized in this project to parameterize V_m for climate models. Our original plan in the proposal was to parameterize the ice PSD (in terms of temperature and ice water content) and ice particle mass and projected area (in terms of mass- and area-dimensional power laws, or m-D/A-D expressions), since these are the microphysical properties that determine V_m, and then proceed to calculate V_m from these parameterized properties. But the 2D-S probe directly measures ice particle projected area and indirectly estimates ice particle mass for each size bin. It soon became apparent that the original plan would introduce more uncertainty in the V_m calculations.
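The V_m definition behind this parameterization can be written out explicitly. For an exponential PSD N(D) = N0·exp(-λD) with power-law mass m(D) = a·D^b and fall speed v(D) = α·D^β, the mass-weighted fall speed has the closed form V_m = α·Γ(b+β+1) / (λ^β·Γ(b+1)); the sketch below verifies this against direct numerical integration using illustrative coefficients (the a, b, α, β, λ values are hypothetical, not the project's fitted ones).

```python
import numpy as np
from math import gamma

def trapezoid(y, x):
    """Plain trapezoidal rule (avoids NumPy-version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

a, b = 0.05, 2.1            # m-D power law (illustrative values)
alpha, beta = 100.0, 0.66   # v-D power law (illustrative values)
lam, N0 = 200.0, 1.0e6      # PSD slope (1/m) and intercept (illustrative)

D = np.linspace(1e-7, 0.1, 200_000)                       # particle size grid (m)
psd = N0 * np.exp(-lam * D)
num = trapezoid(a * D**b * alpha * D**beta * psd, D)      # mass-flux integrand
den = trapezoid(a * D**b * psd, D)                        # mass integrand
vm_numeric = num / den
vm_analytic = alpha * gamma(b + beta + 1.0) / (lam**beta * gamma(b + 1.0))
```

Note that N0 and a cancel in the ratio, which is why V_m depends only on the PSD slope and the power-law exponents: exactly the quantities the probe measurements constrain.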
Andrew D. Richardson; Ryan S. Anderson; M. Altaf Arain; Alan G. Barr; Gil Bohrer; Guangsheng Chen; Jing M. Chen; Philippe Ciais; Kenneth J. David; Ankur R. Desai; Michael C. Dietze; Danilo Dragoni; Steven R. Garrity; Christopher M. Gough; Robert Grant; David Hollinger; Hank A. Margolis; Harry McCaughey; Mirco Migliavacca; Russel K. Monson; J. William Munger; Benjamin Poulter; Brett M. Raczka; Daniel M. Ricciuto; Alok K. Sahoo; Kevin Schaefer; Hanqin Tian; Rodrigo Vargas; Hans Verbeeck; Jingfeng Xiao; Yongkang. Xue
2012-01-01
Phenology, by controlling the seasonal activity of vegetation on the land surface, plays a fundamental role in regulating photosynthesis and other ecosystem processes, as well as competitive interactions and feedbacks to the climate system. We conducted an analysis to evaluate the representation of phenology, and the associated seasonality of ecosystem-scale CO
Louis, Linda
2013-01-01
This article reports on the most recent phase of an ongoing research program that examines the artistic graphic representational behavior and paintings of children between the ages of four and seven. The goal of this research program is to articulate a contemporary account of artistic growth and to illuminate how young children's changing…
International Nuclear Information System (INIS)
Anderson, Gregory W.; Blazek, Tomas
2005-01-01
E6 is an attractive group for unification model building. However, the complexity of a rank-6 group makes it nontrivial to write down the structure of higher-dimensional operators in an E6 theory in terms of the states labeled by quantum numbers of the standard model gauge group. In this paper, we show the results of our computation of the Clebsch-Gordan coefficients for the products of the 27 with irreducible representations of higher dimensionality: 78, 351, 351′, 351-bar, and 351′-bar. Application of these results to E6 model building involving higher-dimensional operators is straightforward.
Directory of Open Access Journals (Sweden)
L. Santos
2018-04-01
Full Text Available In many conceptual rainfall–runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically with a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall–runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, could be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall–runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2018-04-01
In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically with a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, could be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
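The Nash-cascade substitution with an implicit integration step can be sketched as follows. This is a minimal illustration, not the GR4J code: the cascade routes inflow through n linear reservoirs dS_i/dt = q_{i-1} - S_i/k, and the backward (implicit) Euler update is unconditionally stable and exactly mass-conservative (the k, n, dt values are arbitrary).

```python
import numpy as np

def nash_cascade(inflow, k=2.0, n=3, dt=1.0):
    """Route an inflow series through n linear reservoirs with implicit Euler."""
    S = np.zeros(n)                          # reservoir storages
    outflow = []
    for q in inflow:
        for i in range(n):
            # implicit Euler for dS/dt = q_in - S/k:  S_new = S + dt*(q_in - S_new/k)
            S[i] = (S[i] + dt * q) / (1.0 + dt / k)
            q = S[i] / k                     # outflow feeds the next reservoir
        outflow.append(q)
    return np.array(outflow), S

inflow = np.zeros(60)
inflow[0] = 5.0                              # a single pulse of effective rainfall
q_out, storage = nash_cascade(inflow)        # delayed, attenuated hydrograph
```

Because each reservoir's storage change exactly equals inflow minus recorded outflow per step, the budget telescopes along the chain: total inflow equals total outflow plus final storage, which is the time-step-consistent behaviour the abstract attributes to the implicit scheme.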
Energy Technology Data Exchange (ETDEWEB)
Liu, Xiaohong; Easter, Richard C.; Ghan, Steven J.; Zaveri, Rahul A.; Rasch, Philip J.; Shi, Xiangjun; Lamarque, J.-F.; Gettelman, A.; Morrison, H.; Vitt, Francis; Conley, Andrew; Park, S.; Neale, Richard; Hannay, Cecile; Ekman, A. M.; Hess, Peter; Mahowald, N.; Collins, William D.; Iacono, Michael J.; Bretherton, Christopher S.; Flanner, M. G.; Mitchell, David
2012-05-21
A modal aerosol module (MAM) has been developed for the Community Atmosphere Model version 5 (CAM5), the atmospheric component of the Community Earth System Model version 1 (CESM1). MAM is capable of simulating the aerosol size distribution and both internal and external mixing between aerosol components, treating numerous complicated aerosol processes and aerosol physical, chemical and optical properties in a physically based manner. Two MAM versions were developed: a more complete version with seven lognormal modes (MAM7), and a version with three lognormal modes (MAM3) for the purpose of long-term (decades to centuries) simulations. Major approximations in MAM3 include assuming immediate mixing of primary organic matter (POM) and black carbon (BC) with other aerosol components, merging of the MAM7 fine dust and fine sea salt modes into the accumulation mode, merging of the MAM7 coarse dust and coarse sea salt modes into the single coarse mode, and neglecting the explicit treatment of ammonia and ammonium cycles. Simulated sulfate and secondary organic aerosol (SOA) mass concentrations are remarkably similar between MAM3 and MAM7 as most (~90%) of these aerosol species are in the accumulation mode. Differences of POM and BC concentrations between MAM3 and MAM7 are also small (mostly within 10%) because of the assumed hygroscopic nature of POM, so that freshly emitted POM and BC are wet-removed before mixing internally with soluble aerosol species. Sensitivity tests with the POM assumed to be hydrophobic and with a slower aging process increase the POM and BC concentrations, especially at high latitudes (by several times). The mineral dust global burden differs by 10% and the sea salt burden by 30-40% between MAM3 and MAM7, mainly due to the different size ranges for dust and sea salt modes and different standard deviations of the log-normal size distribution for sea salt modes between MAM3 and MAM7. The model is able to qualitatively capture the observed geographical and
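The bookkeeping underlying a modal scheme can be sketched compactly: each mode is a lognormal number distribution with geometric mean diameter Dg and geometric standard deviation σg, and its k-th diameter moment has the closed form M_k = N·Dg^k·exp(k²·ln²σg / 2), so bulk quantities such as volume follow analytically from the mode parameters. The mode values below are illustrative, not the CAM5/MAM settings.

```python
import numpy as np

def moment(N, Dg, sigma_g, k):
    """k-th moment of a lognormal mode (closed form)."""
    return N * Dg**k * np.exp(0.5 * k**2 * np.log(sigma_g)**2)

def moment_numeric(N, Dg, sigma_g, k, npts=20001):
    """Same moment by direct integration of the number distribution in ln D."""
    ln_sg, ln_dg = np.log(sigma_g), np.log(Dg)
    lnD = np.linspace(ln_dg - 12 * ln_sg, ln_dg + 12 * ln_sg, npts)
    n = N / (np.sqrt(2 * np.pi) * ln_sg) * np.exp(-(lnD - ln_dg)**2 / (2 * ln_sg**2))
    y = n * np.exp(k * lnD)                  # D**k expressed in ln-D coordinates
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lnD)))

modes = [                                    # (N [1/m3], Dg [m], sigma_g), illustrative
    (1.0e9, 0.02e-6, 1.6),                   # Aitken-like mode
    (5.0e8, 0.11e-6, 1.8),                   # accumulation-like mode
    (2.0e6, 2.0e-6, 1.8),                    # coarse-like mode
]
vol = sum(np.pi / 6.0 * moment(N, Dg, sg, 3) for N, Dg, sg in modes)
```

Merging two modes in such a scheme amounts to summing their moments and refitting N, Dg and σg, which is why mode mergers (as in MAM3) mainly perturb species whose mass sits near the merged size ranges.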
Ghimire, B.; Riley, W. J.; Koven, C. D.; Randerson, J. T.; Mu, M.; Kattge, J.; Rogers, A.; Reich, P. B.
2014-12-01
In many ecosystems, nitrogen is the most limiting nutrient for plant growth and productivity. However, mechanistic representation of nitrogen uptake linked to root traits, and of functional nitrogen allocation among the different leaf enzymes involved in respiration and photosynthesis, is currently lacking in Earth System models. The linkage between nitrogen availability and plant productivity is simplistically represented by potential photosynthesis rates, which are subsequently downregulated depending on nitrogen supply and other nitrogen consumers in the model (e.g., nitrification). This type of potential photosynthesis rate calculation is problematic for several reasons. Firstly, plants do not photosynthesize at potential rates and then downregulate. Secondly, there is considerable subjectivity about the meaning of potential photosynthesis rates. Thirdly, it is not well understood how to model these potential photosynthesis rates in a changing climate. In addition to model structural issues in representing photosynthesis rates, the role of plant roots in nutrient acquisition has been largely ignored in Earth System models. For example, in CLM4.5, nitrogen uptake is linked to leaf-level processes (e.g., primary productivity) rather than root-scale processes involved in nitrogen uptake. We present a new plant model for CLM with an improved mechanistic representation of plant nitrogen uptake based on root-scale Michaelis-Menten kinetics, and stronger linkages between leaf nitrogen and plant productivity, inferred from relationships observed in global databases of plant traits (including the TRY database and several individual studies). We also incorporate an improved representation of plant leaf nitrogen allocation, especially in tropical regions where CLM4.5 simulations significantly over-predict plant growth and productivity. We evaluate our improved global model simulations using the International Land Model Benchmarking (ILAMB) framework. We conclude that
Naturalising Representational Content
Shea, Nicholas
2014-01-01
This paper sets out a view about the explanatory role of representational content and advocates one approach to naturalising content – to giving a naturalistic account of what makes an entity a representation and in virtue of what it has the content it does. It argues for pluralism about the metaphysics of content and suggests that a good strategy is to ask the content question with respect to a variety of predictively successful information processing models in experimental psychology and cognitive neuroscience; and hence that data from psychology and cognitive neuroscience should play a greater role in theorising about the nature of content. Finally, the contours of the view are illustrated by drawing out and defending a surprising consequence: that individuation of vehicles of content is partly externalist. PMID:24563661
Zampieri, Matteo
2012-02-01
Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in land-vegetation-atmosphere interactions and ecosystem dynamics. In regional-scale climate applications, land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the effect of losing streams on groundwater. Through the analysis of observed soil moisture data from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS, we provide evidence that regional-scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (i.e., the Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM and show that it reproduces the soil moisture and surface temperature spatial variabilities that relate to the river distribution at the regional scale. The CLM with this new parameterization is used to evaluate impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.
Zahner, William; Dent, Nick
2014-01-01
Sometimes a student's unexpected solution turns a routine classroom task into a real problem, one that the teacher cannot resolve right away. Although not knowing the answer can be uncomfortable for a teacher, these moments of uncertainty are also an opportunity to model authentic problem solving. This article describes such a moment in Zahner's…
Noël, Marie-Pascale; Rousselle, Laurence
2011-01-01
Studies on developmental dyscalculia (DD) have tried to identify a basic numerical deficit that could account for this specific learning disability. The first proposition was that the number magnitude representation of these children was impaired. However, Rousselle and Noël (2007) brought data showing that this was not the case but rather that these children were impaired when processing the magnitude of symbolic numbers only. Since then, incongruent results have been published. In this paper, we will propose a developmental perspective on this issue. We will argue that the first deficit shown in DD regards the building of an exact representation of numerical value, thanks to the learning of symbolic numbers, and that the reduced acuity of the approximate number magnitude system appears only later and is secondary to the first deficit.
Factorizations and physical representations
International Nuclear Information System (INIS)
Revzen, M; Khanna, F C; Mann, A; Zak, J
2006-01-01
A Hilbert space in M dimensions is shown explicitly to accommodate representations that reflect the decomposition of M into prime numbers. Representations that exhibit the factorization of M into two relatively prime numbers are analysed: the kq representation (Zak J 1970 Phys. Today 23 51) and related representations termed q_1q_2 representations (together with their conjugates), as well as a representation that exhibits the complete factorization of M. In this latter representation each quantum number varies in a subspace that is associated with one of the prime numbers that make up M.
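The factorized labelling of an M-dimensional space by relatively prime factors rests on the Chinese Remainder Theorem. A small sketch of the underlying index bijection (an illustration of the arithmetic only, not of the quantum representations themselves):

```python
from math import gcd

def crt_split(n, m1, m2):
    """Map an index n in Z_{m1*m2} to its residue pair (n mod m1, n mod m2)."""
    assert gcd(m1, m2) == 1, "factors must be relatively prime"
    return (n % m1, n % m2)

def crt_join(r1, r2, m1, m2):
    """Recover n from its residues via the Chinese Remainder Theorem."""
    # pow(m2, -1, m1) is the modular inverse of m2 modulo m1 (Python 3.8+)
    n = r2 + m2 * ((r1 - r2) * pow(m2, -1, m1) % m1)
    return n % (m1 * m2)

# Example: M = 15 = 3 * 5; the map is a bijection Z_15 -> Z_3 x Z_5
m1, m2 = 3, 5
pairs = [crt_split(n, m1, m2) for n in range(m1 * m2)]
assert len(set(pairs)) == m1 * m2          # every index gets a distinct pair
assert all(crt_join(*crt_split(n, m1, m2), m1, m2) == n for n in range(m1 * m2))
```

Each quantum number varying independently in its own residue subspace corresponds here to the two residues varying independently over Z_3 and Z_5.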
Giard, M H; Lavikahen, J; Reinikainen, K; Perrin, F; Bertrand, O; Pernier, J; Näätänen, R
1995-01-01
The present study analyzed the neural correlates of acoustic stimulus representation in echoic sensory memory. The neural traces of auditory sensory memory were indirectly studied by using the mismatch negativity (MMN), an event-related potential component elicited by a change in a repetitive sound. The MMN is assumed to reflect change detection in a comparison process between the sensory input from a deviant stimulus and the neural representation of repetitive stimuli in echoic memory. The scalp topographies of the MMNs elicited by pure tones deviating from standard tones by either frequency, intensity, or duration varied according to the type of stimulus deviance, indicating that the MMNs for different attributes originate, at least in part, from distinct neural populations in the auditory cortex. This result was supported by dipole-model analysis. If the MMN generator process occurs where the stimulus information is stored, these findings strongly suggest that the frequency, intensity, and duration of acoustic stimuli have a separate neural representation in sensory memory.
Bracco, Annalisa; Choi, Jun; Kurian, Jaison; Chang, Ping
2018-02-01
A set of nine regional ocean model simulations at various horizontal (from 1 to 9 km) and vertical (from 25 to 150 layers) resolutions with different vertical mixing parameterizations is carried out to examine the transport and mixing of a passive tracer released near the ocean bottom over the continental slope in the northern Gulf of Mexico. The release location is in proximity to the Deepwater Horizon oil well that ruptured in April 2010. Horizontal and diapycnal diffusivities are calculated and their dependence on the model set-up and on the representation of mesoscale and submesoscale circulations is discussed. Horizontal and vertical resolutions play a comparable role in determining the modeled horizontal diffusivities. Vertical resolution is key to a proper representation of passive tracer propagation and - in the case of the Gulf of Mexico - contributes to both confining the tracer along the continental slope and limiting its vertical spreading. The choice of the tracer advection scheme is also important, with positive definiteness in the tracer concentration being achieved at the price of spurious mixing across density surfaces. In all cases, however, the diapycnal mixing coefficient derived from the model simulations overestimates the observed value, indicating an area where model improvement is needed.
Gans, B.; Peng, Z.; Carrasco, N.; Gauyacq, D.; Lebonnois, S.; Pernot, P.
2013-03-01
A new wavelength-dependent model for CH4 photolysis branching ratios is proposed, based on the values measured recently by Gans et al. (Gans, B. et al. [2011]. Phys. Chem. Chem. Phys. 13, 8140-8152). We quantify the impact of this representation on the predictions of a photochemical model of Titan’s atmosphere, on their precision, and compare to earlier representations. Although the observed effects on the mole fraction of the species are small (never larger than 50%), it is possible to draw some recommendations for further studies: (i) the Ly-α branching ratios of Wang et al. (Wang, J.H. et al. [2000]. J. Chem. Phys. 113, 4146-4152) used in recent models overestimate the CH2:CH3 ratio, a factor to which a lot of species are sensitive; (ii) the description of out-of-Ly-α branching ratios by the “100% CH3” scenario has to be avoided, as it can bias significantly the mole fractions of some important species (C3H8); and (iii) complementary experimental data in the 130-140 nm range would be useful to constrain the models in the Ly-α deprived 500-700 km altitude range.
International Nuclear Information System (INIS)
Govil, Karan; Gunaydin, Murat
2013-01-01
Quantization of the geometric quasiconformal realizations of noncompact groups and supergroups leads directly to their minimal unitary representations (minreps). Using quasiconformal methods, massless unitary supermultiplets of the superconformal groups SU(2,2|N) and OSp(8⁎|2n) in four and six dimensions were constructed as minreps, along with their U(1) and SU(2) deformations, respectively. In this paper we extend these results to SU(2) deformations of the minrep of the N=4 superconformal algebra D(2,1;λ) in one dimension. We find that SU(2) deformations can be achieved using n pairs of bosons and m pairs of fermions simultaneously. The generators of deformed minimal representations of D(2,1;λ) commute with the generators of a dual superalgebra OSp(2n⁎|2m) realized in terms of these bosons and fermions. We show that there exists a precise mapping between symmetry generators of N=4 superconformal models in harmonic superspace studied recently and minimal unitary supermultiplets of D(2,1;λ) deformed by a pair of bosons. This can be understood as a particular case of a general mapping between the spectra of quantum mechanical quaternionic Kähler sigma models with eight supersymmetries and minreps of their isometry groups that descends from the precise mapping established between the 4d, N=2 sigma models coupled to supergravity and minreps of their isometry groups.
Chan, Randolph C H; Mak, Winnie W S
2016-12-30
The present study applied the common sense model to understand the underlying mechanism of how cognitive and emotional representations of mental illness among people in recovery from mental illness would impact their endorsement of self-stigma, and how that would, in turn, affect clinical and personal recovery. A cross-sectional survey was administered to 376 people in recovery. Participants were recruited from seven public specialty outpatient clinics and substance abuse assessment clinics across various districts in Hong Kong. They were asked to report their perception towards their mental illness, self-stigma, symptom severity, and personal recovery. The results of structural equation modeling partially supported the hypothesized mediation model, indicating that controllability, consequences, and emotional concern of mental illness, but not cause, timeline, and identity, were associated with self-stigma, which was subsequently negatively associated with clinical and personal recovery. The present study demonstrated the mediating role of self-stigma in the relationship between individuals' illness representations towards their mental illness and their recovery. Illness management programs aimed at addressing maladaptive mental illness-related beliefs and emotions are recommended. Implications for developing self-directed and empowering mental health services are discussed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
S. Metzger
2012-06-01
Water activity is a key factor in aerosol thermodynamics and hygroscopic growth. We introduce a new representation of water activity (a_{w}), which is empirically related to the solute molality (μ_{s}) through a single solute-specific constant, ν_{i}. Our approach is widely applicable, considers the Kelvin effect and covers ideal solutions at high relative humidity (RH), including cloud condensation nuclei (CCN) activation. It also encompasses concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). The constant ν_{i} can thus be used to parameterize the aerosol hygroscopic growth over a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. In contrast to other a_{w} representations, our ν_{i} factor corrects the solute molality both linearly and in exponent form x · a^{x}. We present four representations of our basic a_{w} parameterization at different levels of complexity for different a_{w} ranges, e.g. up to 0.95, 0.98 or 1. ν_{i} is constant over the selected a_{w} range, and in its most comprehensive form, the parameterization describes the entire a_{w} range (0–1). In this work we focus on single solute solutions. ν_{i} can be pre-determined with a root-finding method from our water activity representation using an a_{w}−μ_{s} data pair, e.g. at solute saturation using RHD and solubility measurements. Our a_{w} and supersaturation (Köhler theory) results compare well with the thermodynamic reference model E-AIM for the key compounds NaCl and (NH_{4})_{2}SO_{4} relevant for CCN modeling and calibration studies. Envisaged applications include regional and global atmospheric chemistry and
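The root-finding step for ν_{i} from a single a_{w}−μ_{s} data pair can be sketched as follows; the functional form used here, a_w = 1/(1 + ν·μ_s), is a deliberately simplified stand-in for illustration, not the parameterization of the paper:

```python
def aw_model(mu_s, nu):
    """Illustrative single-parameter water-activity form (an assumption,
    not the published parameterization): a_w = 1 / (1 + nu * mu_s)."""
    return 1.0 / (1.0 + nu * mu_s)

def fit_nu(mu_s, aw_obs, lo=1e-6, hi=10.0, tol=1e-12):
    """Bisection root-finding for nu from one (a_w, mu_s) data pair.
    aw_model decreases monotonically in nu, so bisection is safe."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if aw_model(mu_s, mid) > aw_obs:
            lo = mid   # modeled a_w too high -> need a larger nu
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Synthetic check: recover nu = 0.5 from a data pair generated with it,
# e.g. molality at deliquescence (hypothetical value)
nu_true, mu = 0.5, 6.0
aw = aw_model(mu, nu_true)
assert abs(fit_nu(mu, aw) - nu_true) < 1e-6
```

In practice the data pair would come from RHD and solubility measurements, as the abstract notes, with the paper's own a_w representation in place of `aw_model`.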
Turetken, Oktay; Rompen, Tessa; Vanderfeesten, Irene; Dikici, Ahmet; van Moll, Jan; La Rosa, M.; Loos, P.; Pastor, O.
2016-01-01
Many factors influence the creation of understandable business process models for an appropriate audience. Understandability of process models becomes critical particularly when a process is complex and its model is large in structure. Using modularization to represent such models hierarchically
Rumelhart, David E.; Norman, Donald A.
This paper reviews work on the representation of knowledge from within psychology and artificial intelligence. The work covers the nature of representation, the distinction between the represented world and the representing world, and significant issues concerned with propositional, analogical, and superpositional representations. Specific topics…
Chen, J.; Wu, Y.
2012-01-01
This paper presents a study of the integration of the Soil and Water Assessment Tool (SWAT) model and the TOPographic MODEL (TOPMODEL) features for enhancing the physical representation of hydrologic processes. In SWAT, four hydrologic processes, which are surface runoff, baseflow, groundwater re-evaporation and deep aquifer percolation, are modeled by using a group of empirical equations. The empirical equations usually constrain the simulation capability of relevant processes. To replace these equations and to model the influences of topography and water table variation on streamflow generation, the TOPMODEL features are integrated into SWAT, and a new model, the so-called SWAT-TOP, is developed. In the new model, the process of deep aquifer percolation is removed, the concept of groundwater re-evaporation is refined, and the processes of surface runoff and baseflow are remodeled. Consequently, three parameters in SWAT are discarded, and two new parameters to reflect the TOPMODEL features are introduced. SWAT-TOP and SWAT are applied to the East River basin in South China, and the results reveal that, compared with SWAT, the new model can provide a more reasonable simulation of the hydrologic processes of surface runoff, groundwater re-evaporation, and baseflow. This study evidences that an established hydrologic model can be further improved by integrating the features of another model, which is a possible way to enhance our understanding of the workings of catchments.
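Two of the TOPMODEL features referred to above are the topographic wetness index and the exponential-store baseflow law. A compact sketch of these standard TOPMODEL relations (not the SWAT-TOP source code):

```python
import math

def topographic_index(upslope_area, slope_rad):
    """TOPMODEL topographic wetness index ln(a / tan(beta)), where a is
    the specific upslope contributing area and beta the local slope."""
    return math.log(upslope_area / math.tan(slope_rad))

def baseflow(q0, deficit, m):
    """TOPMODEL exponential-store baseflow: Q = Q0 * exp(-S / m),
    with S the catchment-average saturation deficit."""
    return q0 * math.exp(-deficit / m)

# Wetter cells: large contributing area and gentle slope -> higher index
flat_cell  = topographic_index(upslope_area=5000.0, slope_rad=0.02)
steep_cell = topographic_index(upslope_area=100.0,  slope_rad=0.30)
assert flat_cell > steep_cell

# Baseflow decays as the saturation deficit S grows (units illustrative)
assert baseflow(1.0, deficit=0.0, m=0.03) == 1.0
assert baseflow(1.0, deficit=0.06, m=0.03) < 0.2
```

The index makes the influence of topography on saturation explicit, which is exactly what the empirical SWAT groundwater equations lack.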
International Nuclear Information System (INIS)
Han, Jing-Cheng; Huang, Guohe; Huang, Yuefei; Zhang, Hua; Li, Zhong; Chen, Qiuwen
2015-01-01
Lack of hydrologic process representation at the short time-scale would lead to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by the overland flow generation, which is closely related to the rainfall characteristics at a sub-time step. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed of China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. However, rainfall thresholds were so crucial for reflecting the scaling effect of rainfall intensity that optimal levels of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although slight differences of overland flow simulations between SLURP and SLURP-TGR were derived, SLURP-TGR was found to help improve the simulation of peak flows, and would improve the overall modeling efficiency through adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and would be expected to have a potential for wide applications. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A
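The chance-constrained idea, treating sub-daily rainfall intensity as a random variable and enforcing the overland-flow constraint at a significance level, can be sketched as below. The exponential intensity distribution and the excess-fraction rule are illustrative assumptions, not the SLURP-TGR formulation:

```python
import math

def intensity_quantile(mean_intensity, alpha):
    """(1 - alpha) quantile of an exponential intensity distribution
    (the distribution choice is an illustrative assumption)."""
    return -mean_intensity * math.log(alpha)

def overland_flow(daily_rain, mean_intensity, infil_capacity, alpha=0.05):
    """Chance-constrained sketch: overland flow is generated only if the
    (1 - alpha) quantile of sub-daily intensity exceeds the soil's
    infiltration capacity; otherwise all rainfall infiltrates."""
    q = intensity_quantile(mean_intensity, alpha)
    if q <= infil_capacity:
        return 0.0
    # Fraction of daily rainfall exceeding capacity at the design quantile
    return daily_rain * (q - infil_capacity) / q

# Light rain infiltrates fully; intense rain produces overland flow
assert overland_flow(8.0, mean_intensity=1.0, infil_capacity=5.0) == 0.0
assert overland_flow(30.0, mean_intensity=4.0, infil_capacity=5.0) > 0.0
```

Raising `alpha` (e.g. from 0.01 to 0.05) lowers the design quantile and so reduces simulated overland flow, which mirrors the sensitivity to the significance level reported in the abstract.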
Energy Technology Data Exchange (ETDEWEB)
Han, Jing-Cheng [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Huang, Guohe, E-mail: huang@iseis.org [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Huang, Yuefei [State Key Laboratory of Hydroscience & Engineering, Department of Hydraulic Engineering, Tsinghua University, Beijing 100084 (China); Zhang, Hua [College of Science and Engineering, Texas A& M University — Corpus Christi, Corpus Christi, TX 78412-5797 (United States); Li, Zhong [Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada); Chen, Qiuwen [Center for Eco-Environmental Research, Nanjing Hydraulics Research Institute, Nanjing 210029 (China)
2015-08-15
Evolved Representation and Computational Creativity
Directory of Open Access Journals (Sweden)
Ashraf Fouad Hafez Ismail
2001-01-01
Advances in science and technology have influenced designing activity in architecture throughout its history. Observing the fundamental changes to architectural designing due to the substantial influences of the advent of the computing era, we now witness our design environment gradually changing from conventional pencil and paper to digital multi-media. Although designing is considered to be a unique human activity, there has always been a great dependency on design aid tools. One of the greatest aids to architectural design, amongst the many conventional and widely accepted computational tools, is the computer-aided object modeling and rendering tool, commonly known as a CAD package. But even though conventional modeling tools have provided designers with fast and precise object handling capabilities that were not available in the pencil-and-paper age, they normally show weaknesses and limitations in covering the whole design process. In any kind of design activity, the design worked on has to be represented in some way. For a human designer, designs are for example represented using models, drawings, or verbal descriptions. If a computer is used for design work, designs are usually represented by groups of pixels (paintbrush programs), lines and shapes (general-purpose CAD programs) or higher-level objects like ‘walls’ and ‘rooms’ (purpose-specific CAD programs). A human designer usually has a large number of representations available, and can use the representation most suitable for what he or she is working on. Humans can also introduce new representations and thereby represent objects that are not part of the world they experience with their sensory organs, for example vector representations of four- and five-dimensional objects. In design computing on the other hand, the representation or representations used have to be explicitly defined. Many different representations have been suggested, often optimized for specific design domains
Representation theory of lattice current algebras
International Nuclear Information System (INIS)
Alekseev, A.Yu.; Eidgenoessische Technische Hochschule, Zurich; Faddeev, L.D.; Froehlich, L.D.; Schomerus, V.; Kyoto Univ.
1996-04-01
Lattice current algebras were introduced as a regularization of the left- and right-moving degrees of freedom in the WZNW model. They provide examples of lattice theories with a local quantum symmetry U_q(G). Their representation theory is studied in detail. In particular, we construct all irreducible representations along with a lattice analogue of the fusion product for representations of the lattice current algebra. It is shown that for an arbitrary number of lattice sites, the representation categories of the lattice current algebras agree with their continuum counterparts. (orig.)
Thinking together with material representations
DEFF Research Database (Denmark)
Stege Bjørndahl, Johanne; Fusaroli, Riccardo; Østergaard, Svend
2014-01-01
How do material representations such as models, diagrams and drawings come to shape and aid collective, epistemic processes? This study investigated how groups of participants spontaneously recruited material objects (in this case LEGO blocks) to support collective creative processes in the context of an experiment. Qualitative micro-analyses of the group interactions motivate a taxonomy of different roles that the material representations play in the joint epistemic processes: illustration, elaboration and exploration. Firstly, the LEGO blocks were used to illustrate already well-formed ideas in support … top-down and bottom-up cognitive processes and division of cognitive labor.
International Nuclear Information System (INIS)
Kitamura, Y.; Ikeda, M.; Mizoguchi, R.; Yoshikawa, S.
1999-03-01
This report discusses a representation scheme for device failures anticipated in nuclear power plants, to describe the related knowledge in computer software. The ability to cope with a wide range of physical events is desired of plant operators and maintenance staff, but it is impractical to give them experience covering all possible events in the education/training curriculum. However, if their knowledge of plant design and of generally known physical principles is reinforced, their ability to identify causes and to respond appropriately to inexperienced events is expected to be enhanced by combining basic engineering and physical knowledge. Most of the anomalies anticipated in nuclear power plants are initiated as an incipient failure in some auxiliary equipment, initially affecting only the relevant subsystem and hidden from the central control room, and are then propagated to deviate process parameters in the main subsystems observed from the control room. Incipient failures in auxiliary subsystems, such as chemical degrading of an axis holder caused by a blockage of the lubricant supply line through increased friction and subsequent extra heating, are typically local and irreversible consequences. On the other hand, deviation propagations in main systems, such as an outlet temperature rise caused by increased pump rotation friction through a decreased coolant flow rate, are typically global and reversible consequences. This paper describes the development of a methodology to represent a category of knowledge that supports operators' and maintenance staff's efforts in understanding local and irreversible failure consequences. (author)
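The propagation of an incipient auxiliary failure to observable main-system deviations can be represented as a causal graph and searched backwards for root causes. A toy sketch, with hypothetical event names loosely based on the example in the abstract:

```python
from collections import deque

# Hypothetical cause -> effect edges (illustrative, not the report's scheme):
# lubricant blockage -> friction -> heating; friction -> flow drop -> temp rise
edges = {
    "lubricant_line_blockage": ["axis_friction_increase"],
    "axis_friction_increase": ["axis_holder_heating", "pump_rotation_drop"],
    "pump_rotation_drop": ["coolant_flow_decrease"],
    "coolant_flow_decrease": ["outlet_temperature_rise"],
}

def possible_root_causes(observed, edges):
    """Walk the causal graph backwards from an observed deviation to the
    incipient (source) failures that could explain it."""
    parents = {}
    for cause, effects in edges.items():
        for e in effects:
            parents.setdefault(e, []).append(cause)
    roots, queue, seen = set(), deque([observed]), {observed}
    while queue:
        node = queue.popleft()
        if node not in parents:        # no upstream cause -> incipient failure
            roots.add(node)
        for p in parents.get(node, []):
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return roots

# A temperature rise seen in the control room traces back to the hidden
# local failure in the auxiliary lubricant line
assert possible_root_causes("outlet_temperature_rise", edges) == {"lubricant_line_blockage"}
```

This mirrors the report's distinction: the observable deviation is global and reversible, while the root node found by the backward search is the local, irreversible incipient failure.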
Energy Technology Data Exchange (ETDEWEB)
Mihailovic, D.T.; Pielke, R.A.; Rajkovic, B.; Lee, T.J.; Jeftic, M. (Novi Sad Univ. (Yugoslavia); Colorado State Univ., Fort Collins (United States); Belgrade Univ. (Yugoslavia))
1993-06-01
In the parameterization of land surface processes, attention must be devoted to surface evaporation, one of the main processes in the air-land energy exchange. One of the most widely used approaches is the resistance representation, which requires the calculation of aerodynamic resistances. These resistances are calculated using K theory for different morphologies of plant communities; then, the performance of the evaporation schemes within the alpha, beta, and combined approaches, which parameterize evaporation from bare and partly plant-covered soil surfaces, is discussed. Additionally, a new alpha scheme is proposed based on an assumed power dependence of alpha on the volumetric soil moisture content and its saturated value. Finally, the performance of the considered and proposed schemes is tested with time integrations using real data. The first data set was for 4 June 1982, and the second for 3 June 1981, at the experimental site in Rimski Sancevi, Yugoslavia, on chernozem soil, representative of a bare and a partly plant-covered surface, respectively. 63 refs.
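An alpha scheme of the kind proposed, with a power dependence of alpha on volumetric soil moisture relative to its saturated value, can be sketched as follows (the exponent value is an illustrative choice, not the one from the paper):

```python
def alpha_soil(theta, theta_sat, p=2.0):
    """Power-law alpha factor: relative evaporation efficiency at the soil
    surface as a function of volumetric soil moisture theta and its
    saturated value theta_sat. The exponent p is an illustrative choice."""
    return (theta / theta_sat) ** p

def bare_soil_evaporation(e_potential, theta, theta_sat, p=2.0):
    """Alpha-scheme sketch: actual evaporation as alpha times potential."""
    return alpha_soil(theta, theta_sat, p) * e_potential

# Saturated soil evaporates at the potential rate; drier soil is damped
assert bare_soil_evaporation(4.0, theta=0.4, theta_sat=0.4) == 4.0
assert bare_soil_evaporation(4.0, theta=0.2, theta_sat=0.4) == 1.0  # (0.5)^2 * 4
```

The exponent controls how sharply evaporation shuts down as the soil dries, which is what distinguishes competing alpha formulations in practice.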
Yang, Chihae; Tarkhov, Aleksey; Marusczyk, Jörg; Bienfait, Bruno; Gasteiger, Johann; Kleinoeder, Thomas; Magdziarz, Tomasz; Sacher, Oliver; Schwab, Christof H; Schwoebel, Johannes; Terfloth, Lothar; Arvidson, Kirk; Richard, Ann; Worth, Andrew; Rathman, James
2015-03-23
Chemotypes are a new approach for representing molecules, chemical substructures and patterns, reaction rules, and reactions. Chemotypes are capable of integrating types of information beyond what is possible using current representation methods (e.g., SMARTS patterns) or reaction transformations (e.g., SMIRKS, reaction SMILES). Chemotypes are expressed in the XML-based Chemical Subgraphs and Reactions Markup Language (CSRML), and can be encoded not only with connectivity and topology but also with properties of atoms, bonds, electronic systems, or molecules. CSRML has been developed in parallel with a public set of chemotypes, i.e., the ToxPrint chemotypes, which are designed to provide excellent coverage of environmental, regulatory, and commercial-use chemical space, as well as to represent chemical patterns and properties especially relevant to various toxicity concerns. A software application, ChemoTyper has also been developed and made publicly available in order to enable chemotype searching and fingerprinting against a target structure set. The public ChemoTyper houses the ToxPrint chemotype CSRML dictionary, as well as reference implementation so that the query specifications may be adopted by other chemical structure knowledge systems. The full specifications of the XML-based CSRML standard used to express chemotypes are publicly available to facilitate and encourage the exchange of structural knowledge.
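The chemotype workflow of a dictionary of patterns fingerprinted against target structures can be caricatured as below; the XML tags are hypothetical (not actual CSRML syntax), and substring matching on SMILES merely stands in for the real subgraph matching a cheminformatics toolkit would perform:

```python
import xml.etree.ElementTree as ET

# A minimal CSRML-like fragment (hypothetical tags and attributes, for
# illustration only -- consult the actual CSRML specification for syntax)
csrml = """
<ChemotypeSet>
  <Chemotype id="nitro_group" pattern="[N+](=O)[O-]"/>
  <Chemotype id="carboxylic_acid" pattern="C(=O)O"/>
</ChemotypeSet>
"""

def load_chemotypes(xml_text):
    """Parse the pattern dictionary into {chemotype id: pattern}."""
    root = ET.fromstring(xml_text)
    return {c.get("id"): c.get("pattern") for c in root.iter("Chemotype")}

def fingerprint(smiles, chemotypes):
    """Toy fingerprint: naive substring matching stands in for the real
    subgraph matching (which needs a toolkit such as RDKit)."""
    return {cid: pat in smiles for cid, pat in chemotypes.items()}

cts = load_chemotypes(csrml)
fp = fingerprint("CC(=O)O", cts)          # acetic acid
assert fp["carboxylic_acid"] is True
assert fp["nitro_group"] is False
```

The point of the real system is that each dictionary entry can also carry atom, bond, electronic-system, or molecular properties, which is precisely what plain SMARTS patterns cannot express.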
International Nuclear Information System (INIS)
Arponen, J.S.; Bishop, R.F.
1991-01-01
The configuration-interaction method (CIM), normal coupled-cluster method (NCCM), and extended coupled-cluster method (ECCM) form a rather natural hierarchy of formulations of increasing sophistication for describing interacting systems of quantum-mechanical particles or fields. They are denoted generically as independent-cluster (IC) parameterizations in view of the way in which they incorporate the many-body correlations via sets of amplitudes that describe the various correlated clusters within the interacting system as mutually independent entities. They differ primarily in the way in which they incorporate the exact locality and separability properties. Each method is shown to provide, in principle, an exact mapping of the original quantum-mechanical problem into a corresponding classical Hamiltonian mechanics in terms of a set of multiconfigurational canonical field amplitudes. In perturbation-theoretic terms the IC methods incorporate infinite classes of diagrams at each order of approximation. The diagrams differ in their connectivity or linkedness properties. The structure of the ECCM in particular makes it capable of describing such phenomena as phase transitions, spontaneous symmetry breaking, and topological states. The authors address such fundamentally important questions as the existence and convergence properties of the three IC parameterizations by formulating the holomorphic representation of each one for the class of single-mode bosonic field theories which includes the anharmonic oscillators.
Learning Document Semantic Representation with Hybrid Deep Belief Network
Directory of Open Access Journals (Sweden)
Yan Yan
2015-01-01
it is also an effective way to remove noise from the different document representation types; the DBN can deepen the extraction of abstract features from the document, making the model learn a sufficient semantic representation. At the same time, we explore different input strategies for distributed semantic representation. Experimental results show that our model using word embeddings instead of single words has better performance.
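The "word embeddings instead of single words" input strategy can be illustrated with a minimal sketch (not the authors' hybrid DBN): a document is represented as the mean of its word vectors rather than as discrete word indices. The vocabulary, embedding dimension, and random embedding table below are illustrative assumptions.

```python
import numpy as np

# Toy embedding table; a real model would use learned vectors.
# Assumptions: 4-dimensional embeddings, 5-word vocabulary.
vocab = {"deep": 0, "belief": 1, "network": 2, "semantic": 3, "model": 4}
embeddings = np.random.default_rng(0).normal(size=(len(vocab), 4))

def doc_vector(tokens):
    """Distributed document representation: mean of word embeddings."""
    idx = [vocab[t] for t in tokens if t in vocab]
    return embeddings[idx].mean(axis=0)

v = doc_vector(["deep", "belief", "network"])
print(v.shape)  # (4,)
```

Averaging discards word order but yields a dense, fixed-length input suitable for a downstream network, in contrast to sparse single-word (one-hot) inputs.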
DEFF Research Database (Denmark)
Heussen, Kai; Kullmann, Daniel
2010-01-01
Engineering is the art of making complicated things work. There are few things an engineer can’t do. Explaining his work to a computer may be one of them. This paper introduces Functional Modeling with Multilevel Flow Models as an information modeling approach that explicitly relates the functions...
Understanding representations in design
DEFF Research Database (Denmark)
Bødker, Susanne
1998-01-01
Representing computer applications and their use is an important aspect of design. In various ways, designers need to externalize design proposals and present them to other designers, users, or managers. This article deals with understanding design representations and the work they do in design....... The article is based on a series of theoretical concepts coming out of studies of scientific and other work practices and on practical experiences from design of computer applications. The article presents alternatives to the ideas that design representations are mappings of present or future work situations...... and computer applications. It suggests that representations are primarily containers of ideas and that representation is situated at the same time as representations are crossing boundaries between various design and use activities. As such, representations should be carriers of their own contexts regarding...
Luo, Xiangyu; Li, Hong-Yi; Leung, L. Ruby; Tesfa, Teklu K.; Getirana, Augusto; Papa, Fabrice; Hess, Laura L.
2017-03-01
In the Amazon Basin, floodplain inundation is a key component of surface water dynamics and plays an important role in water, energy and carbon cycles. The Model for Scale Adaptive River Transport (MOSART) was extended with a macroscale inundation scheme for representing floodplain inundation. The extended model, named MOSART-Inundation, was used to simulate surface hydrology of the entire Amazon Basin. Previous hydrologic modeling studies in the Amazon Basin identified and addressed a few challenges in simulating surface hydrology of this basin, including uncertainties of floodplain topography and channel geometry, and the representation of river flow in reaches with mild slopes. This study further addressed four aspects of these challenges. First, the spatial variability of vegetation-caused biases embedded in the HydroSHEDS digital elevation model (DEM) data was explicitly addressed. A vegetation height map of about 1 km resolution and a land cover dataset of about 90 m resolution were used in a DEM correction procedure that resulted in an average elevation reduction of 13.2 m for the entire basin and led to evident changes in the floodplain topography. Second, basin-wide empirical formulae for channel cross-sectional dimensions were refined for various subregions to improve the representation of spatial variability in channel geometry. Third, the channel Manning roughness coefficient was allowed to vary with the channel depth, as the effect of riverbed resistance on river flow generally declines with increasing river size. Lastly, backwater effects were accounted for to better represent river flow in mild-slope reaches. The model was evaluated against in situ streamflow records and remotely sensed Envisat altimetry data and Global Inundation Extent from Multi-Satellites (GIEMS) inundation data. In a sensitivity study, seven simulations were compared to evaluate the impacts of the five modeling aspects addressed in this study. The comparisons showed that
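The depth-dependent Manning roughness idea can be sketched with Manning's equation for a wide channel. The exponential decay form and the coefficient values below are illustrative assumptions, not the relation actually used in MOSART-Inundation; they only show how riverbed resistance can be made to decline with increasing flow depth.

```python
import math

def manning_velocity(depth_m, slope, n):
    """Manning's equation for a wide channel (hydraulic radius ~ depth)."""
    return depth_m ** (2.0 / 3.0) * math.sqrt(slope) / n

def depth_dependent_n(depth_m, n_shallow=0.05, n_deep=0.03, e_fold_m=2.0):
    """Illustrative roughness decaying from n_shallow toward n_deep with
    depth (assumed exponential form, not the paper's actual relation)."""
    return n_deep + (n_shallow - n_deep) * math.exp(-depth_m / e_fold_m)

for d in (0.5, 2.0, 8.0):
    vel = manning_velocity(d, slope=1e-4, n=depth_dependent_n(d))
    print(f"depth {d:4.1f} m  n = {depth_dependent_n(d):.3f}  v = {vel:.2f} m/s")
```

With a fixed slope, the decreasing roughness amplifies the usual increase of velocity with depth, mimicking the weaker relative influence of bed resistance in large rivers.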
Directory of Open Access Journals (Sweden)
David Lo Buglio
2012-12-01
Full Text Available With the arrival of digital technologies in the field of architectural documentation, many tools and methods for data acquisition have been developed considerably. However, these developments are primarily used for recording the colorimetric and dimensional properties of the objects processed. The actors of the disciplines concerned by 3D digitization of architectural heritage are faced with a large number of data, leaving the survey far from its cognitive dimension. In this context, it seems necessary to provide innovative solutions to increase the informational value of the representations produced by strengthening the relations between the "multiplicity" of data and the "intelligibility" of the theoretical model. To address this methodological gap, this article offers an approach to the creation of representation systems that articulate the digital instance with the geometric/semantic model.
International Nuclear Information System (INIS)
Sillman, S.; Logan, J.A.; Wofsy, S.C.
1990-01-01
A new approach to modeling regional air chemistry is presented for application to industrialized regions such as the continental US. Rural chemistry and transport are simulated using a coarse grid, while chemistry and transport in urban and power plant plumes are represented by detailed subgrid models. Emissions from urban and power plant sources are processed in generalized plumes where chemistry and dilution proceed for 8-12 hours before mixing with air in a large resolution element. A realistic fraction of pollutants reacts under high-NOx conditions, and NOx is removed significantly before dispersal. Results from this model are compared with results from grid models that do not distinguish plumes and with observational data defining regional ozone distributions. Grid models with coarse resolution are found to artificially disperse NOx over rural areas, therefore overestimating rural levels of both NOx and O3. Regional net ozone production is too high in coarse grid models, because production of O3 is more efficient per molecule of NOx in the low-concentration regime of rural areas than in heavily polluted plumes from major emission sources. Ozone levels simulated by this model are shown to agree with observations in urban plumes and in rural regions. The model accurately reproduces the average regional and peak ozone concentrations observed during a 4-day ozone episode. Computational costs for the model are reduced 25- to 100-fold compared to fine-mesh models
Rothenberg, Daniel; Avramov, Alexander; Wang, Chien
2018-06-01
Interactions between aerosol particles and clouds contribute a great deal of uncertainty to the scientific community's understanding of anthropogenic climate forcing. Aerosol particles serve as the nucleation sites for cloud droplets, establishing a direct linkage between anthropogenic particulate emissions and clouds in the climate system. To resolve this linkage, the community has developed parameterizations of aerosol activation which can be used in global climate models to interactively predict cloud droplet number concentrations (CDNCs). However, different activation schemes can exhibit different sensitivities to aerosol perturbations in different meteorological or pollution regimes. To assess the impact these different sensitivities have on climate forcing, we have coupled three different core activation schemes and variants with CESM-MARC, the two-Moment, Multi-Modal, Mixing-state-resolving Aerosol model for Research of Climate (MARC) coupled with the National Center for Atmospheric Research's (NCAR) Community Earth System Model (CESM; version 1.2). Although the model produces a reasonable present-day CDNC climatology when compared with observations regardless of the scheme used, ΔCDNCs between the present and preindustrial era regionally increase by over 100% in zonal mean when using the most sensitive parameterization. These differences in activation sensitivity may lead to a different evolution of the model meteorology, and ultimately to a spread of over 0.8 W m-2 in the global average shortwave aerosol indirect effect (AIE) diagnosed from the model, a range as large as the inter-model spread from the AeroCom intercomparison. Model-derived AIE strongly scales with the simulated preindustrial CDNC burden, and those models with the greatest preindustrial CDNC tend to have the smallest AIE, regardless of their ΔCDNC. This suggests that present-day evaluations of aerosol-climate models may not provide useful constraints on the magnitude of the AIE, which
Koneshov, V. N.; Nepoklonov, V. B.
2018-05-01
The development of studies on estimating the accuracy of the Earth's modern global gravity models in terms of the spherical harmonics of the geopotential in the problematic regions of the world is discussed. A comparative analysis of the results of reconstructing quasi-geoid heights and gravity anomalies from the different models is carried out for two polar regions selected within a radius of 1000 km from the North and South poles. The analysis covers nine recently developed models, including six high-resolution models and three lower-order models, among them the Russian GAOP2012 model. It is shown that the modern models determine the quasi-geoid heights and gravity anomalies in the polar regions with errors ranging from 5–10 cm to a few tens of centimeters and from 3–5 mGal to a few tens of milligals, respectively, depending on the resolution. The accuracy of the models in the Arctic is several times higher than in the Antarctic. This is associated with the peculiarities of gravity anomalies in each particular region and with the fact that the polar part of the Antarctic has been comparatively less explored by gravity methods than the polar Arctic.
Lange, K.; Geppert, M.; Saka-Helmhout, A.U.; Becker-Ritterspach, F.
2015-01-01
In recent years, the notion of business models has gained momentum in management research. Scholars have discussed several barriers to changing business models in established firms. However, the national institutions of market economies have not yet been discussed as barriers, even though they can
Ellis, J.L.; Dijkstra, J.; Bannink, A.; Kebreab, E.; Archibeque, S.; Benchaar, C.; Beauchemin, K.; Nkrumah, D.J.; France, J.
2014-01-01
The purpose of this study was to evaluate prediction of methane emissions from finishing beef cattle using an extant mechanistic model with pH-independent or pH-dependent VFA stoichiometries, a recent stoichiometry adjustment for the use of monensin, and adaptation of the underlying model structure,
Schnotz, Wolfgang; Kurschner, Christian
2008-01-01
This article investigates whether different formats of visualizing information result in different mental models constructed in learning from pictures, whether the different mental models lead to different patterns of performance in subsequently presented tasks, and how these visualization effects can be modified by further external…
Energy Technology Data Exchange (ETDEWEB)
Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kramer, Klaas Jan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-07-01
Adoption of efficient end-use technologies is one of the key measures for reducing greenhouse gas (GHG) emissions. How to effectively analyze and manage the costs associated with GHG reductions becomes extremely important for the industry and policy makers around the world. Energy-climate (EC) models are often used for analyzing the costs of reducing GHG emissions for various emission-reduction measures, because an accurate estimation of these costs is critical for identifying and choosing optimal emission reduction measures, and for developing related policy options to accelerate market adoption and technology implementation. However, the accuracy of assessing GHG-emission reduction costs while taking into account the adoption of energy efficiency technologies will depend on how well these end-use technologies are represented in integrated assessment models (IAMs) and other energy-climate models. In this report, we first conduct a brief review of different representations of end-use technologies (mitigation measures) in various energy-climate models, followed by the problem statement and a description of the basic concepts of quantifying the cost of conserved energy, including integrating no-regrets options.
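The cost of conserved energy mentioned above is conventionally computed as the annualized investment divided by the annual energy savings; a minimal sketch, in which the retrofit cost, savings, discount rate, and lifetime are all hypothetical numbers chosen for illustration:

```python
def capital_recovery_factor(d, n):
    """Annualizes an up-front investment over n years at discount rate d."""
    return d / (1.0 - (1.0 + d) ** -n)

def cost_of_conserved_energy(investment, annual_savings_kwh, d=0.05, n=15):
    """Cost of conserved energy ($/kWh): annualized investment divided by
    annual energy saved. Standard definition; inputs are illustrative."""
    return investment * capital_recovery_factor(d, n) / annual_savings_kwh

# Hypothetical efficiency retrofit: $10,000 up front, saving 25,000 kWh/yr.
cce = cost_of_conserved_energy(10_000, 25_000)
print(f"CCE = ${cce:.3f}/kWh")
```

A measure is cost-effective in this framing when its CCE falls below the avoided energy price, which is how EC models rank emission-reduction measures.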
Energy Technology Data Exchange (ETDEWEB)
Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)
2015-02-26
This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.
Directory of Open Access Journals (Sweden)
Yan Li
2016-02-01
Full Text Available Sparse climatic observations represent a major challenge for hydrological modeling of mountain catchments, with implications for decision-making in water resources management. Employing elevation bands in the Soil and Water Assessment Tool-Sequential Uncertainty Fitting (SWAT2012-SUFI2) model enabled representation of precipitation and temperature variation with altitude in the Daning river catchment (Three Gorges Reservoir Region, China), where meteorological inputs are limited in spatial extent and are derived from observations at relatively low-lying locations. Inclusion of elevation bands produced better model performance for 1987–1993, with the Nash–Sutcliffe efficiency (NSE) increasing by at least 0.11 prior to calibration. During calibration, prediction uncertainty was greatly reduced. With similar R-factors from the earlier calibration iterations, a further 11% of observations were included within the 95% prediction uncertainty (95PPU) compared to the model without elevation bands. For behavioral simulations defined in SWAT calibration using an NSE threshold of 0.3, an additional 3.9% of observations were within the 95PPU, while the uncertainty was reduced by 7.6% in the model with elevation bands. The calibrated model with elevation bands reproduced observed river discharges, with the performance in the calibration period changing to “very good” from “poor” without elevation bands. The output uncertainty of the calibrated model with elevation bands was satisfactory, having 85% of flow observations included within the 95PPU. These results clearly demonstrate the requirement to account for orographic effects on precipitation and temperature in hydrological models of mountainous catchments.
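The Nash–Sutcliffe efficiency used above as the performance metric compares the model's squared error against the variance of the observations; a minimal sketch with synthetic discharge values (illustrative only, not data from the study):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash–Sutcliffe efficiency: 1 - SSE / variance of observations.
    1 is a perfect fit; 0 means no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Synthetic daily discharges (m^3/s), for illustration only.
obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0])
sim = np.array([11.0, 16.0, 27.0, 24.0, 17.0])
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")  # NSE = 0.918
```

An NSE threshold (0.3 in the study) is then used to separate "behavioral" parameter sets from rejected ones during SUFI-2 calibration.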
International Nuclear Information System (INIS)
Bottoni, M.; Lyczkowski, R.; Ahuja, S.
1995-01-01
Numerical simulation of subcooled boiling in one-dimensional geometry with the Homogeneous Equilibrium Model (HEM) may yield difficulties related to the very low sonic velocity associated with the HEM. These difficulties do not arise with subcritical flow. Possible solutions of the problem include introducing a relaxation of the vapor production rate. Three-dimensional simulations of subcooled boiling in bundle geometry typical of fast reactors can be performed by using two systems of conservation equations, one for the HEM and the other for a Separated Phases Model (SPM), with a smooth transition between the two models
Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.
2018-04-01
underestimated. On the other hand, many G2 models are able to represent most of large-scale circulation over Indo-Pacific region associated with El Niño and hence provide more realistic ENSO-ISM teleconnections. Therefore, this study advocates the importance of representation/simulation of large-scale circulation patterns during El Niño years in coupled models in order to capture El Niño-monsoon teleconnections well.
Topography exerts critical controls on many hydrologic, geomorphologic, and environmental biophysical processes. Unfortunately many watershed modeling systems use topography only to define basin boundaries and stream channels and do not explicitly account for the topographic controls on processes su...
Spectral nudging – a scale-selective interior constraint technique – is commonly used in regional climate models to maintain consistency with large-scale forcing while permitting mesoscale features to develop in the downscaled simulations. Several studies have demonst...
DEFF Research Database (Denmark)
Willett, Wesley; Jansen, Yvonne; Dragicevic, Pierre
2017-01-01
We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles...
Energy Technology Data Exchange (ETDEWEB)
Tselioudis, George [Columbia Univ., New York, NY (United States)
2016-03-04
From its location on the subtropics-midlatitude boundary, the Azores is influenced by both the subtropical high pressure and the midlatitude baroclinic storm regimes, and therefore experiences a wide range of cloud structures, from fair-weather scenes to stratocumulus sheets to deep convective systems. This project combined three types of data sets to study cloud variability in the Azores: a satellite analysis of cloud regimes, a reanalysis characterization of storminess, and a 19-month field campaign that occurred on Graciosa Island. Combined analysis of the three data sets provides a detailed picture of cloud variability and the respective dynamic influences, with emphasis on low clouds that constitute a major uncertainty source in climate model simulations. The satellite cloud regime analysis shows that the Azores cloud distribution is similar to the mean global distribution and can therefore be used to evaluate cloud simulation in global models. Regime analysis of low clouds shows that stratocumulus decks occur under the influence of the Azores high-pressure system, while shallow cumulus clouds are sustained by cold-air outbreaks, as revealed by their preference for post-frontal environments and northwesterly flows. An evaluation of CMIP5 climate model cloud regimes over the Azores shows that all models severely underpredict shallow cumulus clouds, while most models also underpredict the occurrence of stratocumulus cloud decks. It is demonstrated that carefully selected case studies can be related through regime analysis to climatological cloud distributions, and a methodology is suggested utilizing process-resolving model simulations of individual cases to better understand cloud-dynamics interactions and attempt to explain and correct climate model cloud deficiencies.
Group and representation theory
Vergados, J D
2017-01-01
This volume goes beyond the understanding of symmetries and exploits them in the study of the behavior of both classical and quantum physical systems. Thus it is important to study the symmetries described by continuous (Lie) groups of transformations. We then discuss how we get operators that form a Lie algebra. Of particular interest to physics is the representation of the elements of the algebra and the group in terms of matrices and, in particular, the irreducible representations. These representations can be identified with physical observables. This leads to the study of the classical Lie algebras, associated with unitary, unimodular, orthogonal and symplectic transformations. We also discuss some special algebras in some detail. The discussion proceeds along the lines of the Cartan-Weyl theory via the root vectors and root diagrams and, in particular, the Dynkin representation of the roots. Thus the representations are expressed in terms of weights, which are generated by the application of the elemen...
Introduction to representation theory
Etingof, Pavel; Hensel, Sebastian; Liu, Tiankai; Schwendner, Alex
2011-01-01
Very roughly speaking, representation theory studies symmetry in linear spaces. It is a beautiful mathematical subject which has many applications, ranging from number theory and combinatorics to geometry, probability theory, quantum mechanics, and quantum field theory. The goal of this book is to give a "holistic" introduction to representation theory, presenting it as a unified subject which studies representations of associative algebras and treating the representation theories of groups, Lie algebras, and quivers as special cases. Using this approach, the book covers a number of standard topics in the representation theories of these structures. Theoretical material in the book is supplemented by many problems and exercises which touch upon a lot of additional topics; the more difficult exercises are provided with hints. The book is designed as a textbook for advanced undergraduate and beginning graduate students. It should be accessible to students with a strong background in linear algebra and a basic k...
Rasool, Quazi Z.; Zhang, Rui; Lash, Benjamin; Cohan, Daniel S.; Cooter, Ellen J.; Bash, Jesse O.; Lamsal, Lok N.
2016-01-01
Modeling of soil nitric oxide (NO) emissions is highly uncertain and may misrepresent its spatial and temporal distribution. This study builds upon a recently introduced parameterization to improve the timing and spatial distribution of soil NO emission estimates in the Community Multiscale Air Quality (CMAQ) model. The parameterization considers soil parameters, meteorology, land use, and mineral nitrogen (N) availability to estimate NO emissions. We incorporate daily year-specific fertilizer data from the Environmental Policy Integrated Climate (EPIC) agricultural model to replace the annual generic data of the initial parameterization, and use a 12km resolution soil biome map over the continental USA. CMAQ modeling for July 2011 shows slight differences in model performance in simulating fine particulate matter and ozone from Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network (CASTNET) sites and NO2 columns from Ozone Monitoring Instrument (OMI) satellite retrievals. We also simulate how the change in soil NO emissions scheme affects the expected O3 response to projected emissions reductions.
Dehghan, A.; Mariani, Z.; Gascon, G.; Bélair, S.; Milbrandt, J.; Joe, P. I.; Crawford, R.; Melo, S.
2017-12-01
Environment and Climate Change Canada (ECCC) is implementing a 2.5-km resolution version of the Global Environmental Multiscale (GEM) model over the Canadian Arctic. Radiosonde observations were used to evaluate the numerical representation of surface-based temperature inversions, a major feature of the Arctic region. Arctic surface-based inversions are often created by an imbalance between radiative cooling processes at the surface and warm-air advection above. This can have a significant effect on the vertical mixing of pollutants and moisture, and ultimately on cloud formation. It is therefore important to correctly predict the existence of surface inversions along with their characteristics (i.e., intensity and depth). Previous climatological studies showed that the frequency and intensity of surface-based inversions are larger during colder months in the Arctic. Therefore, surface-based inversions were estimated using radiosonde measurements during winter (December 2015 to February 2016) at Iqaluit (Nunavut, Canada). Results show that the inversion intensity can exceed 10 K, with depths as large as 1 km. Preliminary evaluation of GEM outputs reveals that the model tends to underestimate the intensity of near-surface inversions, and in some cases the model failed to predict an inversion. This study presents the factors contributing to this bias, including surface temperature and snow cover.
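Estimating inversion intensity and depth from a sounding reduces to scanning the temperature profile upward from the surface; the sketch below uses a simplified definition (temperature strictly increasing from the lowest level) with an idealized profile, since operational inversion definitions vary and the study's exact criterion is not given here.

```python
def surface_inversion(heights_m, temps_k):
    """Detect a surface-based temperature inversion in a sounding profile.
    Scans upward while temperature keeps increasing and returns
    (intensity in K, depth in m), or None if no inversion exists.
    Simplified illustration; operational definitions vary."""
    if temps_k[1] <= temps_k[0]:
        return None
    top = 1
    while top + 1 < len(temps_k) and temps_k[top + 1] > temps_k[top]:
        top += 1
    return temps_k[top] - temps_k[0], heights_m[top] - heights_m[0]

# Idealized winter Arctic sounding (heights in m, temperatures in K).
z = [0, 200, 500, 900, 1500, 2500]
t = [243.0, 247.5, 251.0, 254.0, 252.0, 248.0]
print(surface_inversion(z, t))  # (11.0, 900): ~11 K intensity over ~900 m
```

Applying such a detector to both radiosonde profiles and model output columns allows a like-for-like comparison of predicted versus observed inversion intensity and depth.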
Energy Technology Data Exchange (ETDEWEB)
Sathaye, J.; Xu, T.; Galitsky, C.
2010-08-15
Adoption of efficient end-use technologies is one of the key measures for reducing greenhouse gas (GHG) emissions. How to effectively analyze and manage the costs associated with GHG reductions becomes extremely important for the industry and policy makers around the world. Energy-climate (EC) models are often used for analyzing the costs of reducing GHG emissions for various emission-reduction measures, because an accurate estimation of these costs is critical for identifying and choosing optimal emission reduction measures, and for developing related policy options to accelerate market adoption and technology implementation. However, the accuracy of assessing GHG-emission reduction costs while taking into account the adoption of energy efficiency technologies will depend on how well these end-use technologies are represented in integrated assessment models (IAMs) and other energy-climate models.
Norton, P. A., II
2015-12-01
The U. S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets, such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g. the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e. MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
Nazemi, A.; Wheater, H. S.
2015-01-01
Human activities have caused various changes to the Earth system, and hence the interconnections between human activities and the Earth system should be recognized and reflected in models that simulate Earth system processes. One key anthropogenic activity is water resource management, which determines the dynamics of human-water interactions in time and space and controls human livelihoods and economy, including energy and food production. There are immediate needs to include water resource management in Earth system models. First, the extent of human water requirements is increasing rapidly at the global scale and it is crucial to analyze the possible imbalance between water demands and supply under various scenarios of climate change and across various temporal and spatial scales. Second, recent observations show that human-water interactions, manifested through water resource management, can substantially alter the terrestrial water cycle, affect land-atmospheric feedbacks and may further interact with climate and contribute to sea-level change. Due to the importance of water resource management in determining the future of the global water and climate cycles, the World Climate Research Program's Global Energy and Water Exchanges project (WCRP-GEWEX) has recently identified gaps in describing human-water interactions as one of the grand challenges in Earth system modeling (GEWEX, 2012). Here, we divide water resource management into two interdependent elements, related firstly to water demand and secondly to water supply and allocation. In this paper, we survey the current literature on how various components of water demand have been included in large-scale models, in particular land surface and global hydrological models. Issues of water supply and allocation are addressed in a companion paper. The available algorithms to represent the dominant demands are classified based on the demand type, mode of simulation and underlying modeling assumptions. We discuss
Chen, Zhongzhou; Gladding, Gary
2014-01-01
Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instructions are either created based on existing conventions or designed according to the instructor's intuition,…
Luong, Thang; Castro, Christopher; Nguyen, Truong; Cassell, William; Chang, Hsin-I
2018-01-01
A commonly noted problem in the simulation of warm season convection in the North American monsoon region has been the inability of atmospheric models at the meso-β scales (tens to hundreds of kilometers) to simulate organized convection, principally
DEFF Research Database (Denmark)
Knudsen, Hans
1995-01-01
A model of the 2×3-phase synchronous machine is presented using a new transformation based on the eigenvectors of the stator inductance matrix. The transformation fully decouples the stator inductance matrix, and this leads to an equivalent diagram of the machine with no mutual couplings...
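The decoupling idea can be illustrated as follows. This is a generic sketch, not the paper's actual 2×3-phase matrices: transforming a symmetric inductance matrix with its orthonormal eigenvectors diagonalizes it, removing the mutual couplings.

```python
import numpy as np

# Symmetric stator inductance matrix with mutual couplings
# (values are arbitrary placeholders, in henries).
L = np.array([
    [1.00, 0.30, 0.30],
    [0.30, 1.00, 0.30],
    [0.30, 0.30, 1.00],
])

# Eigendecomposition: the columns of T are orthonormal eigenvectors.
eigvals, T = np.linalg.eigh(L)

# Transforming with T diagonalizes L, i.e. removes the mutual couplings.
L_decoupled = T.T @ L @ T
off_diag = L_decoupled - np.diag(np.diag(L_decoupled))
print(np.allclose(off_diag, 0.0))  # True: no mutual couplings remain
```

In the decoupled frame each axis behaves as an independent circuit, which is what permits an equivalent diagram with no mutual couplings.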
Tran, Thang H.; Baba, Yoshihiro; Somu, Vijaya B.; Rakov, Vladimir A.
2017-12-01
The finite difference time domain (FDTD) method in the 2-D cylindrical coordinate system was used to compute the nearly full-frequency-bandwidth vertical electric field and azimuthal magnetic field waveforms produced on the ground surface by lightning return strokes. The lightning source was represented by the modified transmission-line model with linear current decay with height, which was implemented in the FDTD computations as an appropriate vertical phased-current-source array. The conductivity of the atmosphere was assumed to increase exponentially with height, with different conductivity profiles being used for daytime and nighttime conditions. The fields were computed at distances ranging from 50 to 500 km. Sky waves (reflections from the ionosphere) were identified in the computed waveforms and used to estimate apparent ionospheric reflection heights. It was found that our model reproduces reasonably well the daytime electric field waveforms measured at different distances and simulated (using a more sophisticated propagation model) by Qin et al. (2017). The sensitivity of model predictions to changes in the parameters of the atmospheric conductivity profile, as well as the influences of the lightning source characteristics (current waveshape parameters, return-stroke speed, and channel length) and of the ground conductivity, were examined.
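The modified transmission-line model with linear current decay (MTLL) prescribes the channel current at height z as the channel-base current, delayed by the travel time z/v and attenuated by the factor (1 − z/H). The sketch below builds such a phased-current-source array; the double-exponential base current and all parameter values are illustrative placeholders, not those of the paper.

```python
import numpy as np

# MTLL return-stroke model: i(z, t) = (1 - z/H) * i_base(t - z/v).
# Parameter values below are illustrative, not those used in the study.
H = 7000.0   # channel height (m)
v = 1.5e8    # return-stroke speed (m/s)
dz = 50.0    # vertical spacing of the phased current sources (m)

def i_base(t):
    """Placeholder channel-base current: a simple double exponential (A)."""
    return np.where(t > 0.0,
                    28e3 * (np.exp(-t / 70e-6) - np.exp(-t / 2e-6)),
                    0.0)

def i_channel(z, t):
    """MTLL current at height z: linear decay with height, delayed by z/v."""
    return (1.0 - z / H) * i_base(t - z / v)

# One current value per source in the vertical phased array at time t.
heights = np.arange(0.0, H, dz)
t = 20e-6  # observation time (s)
array_currents = i_channel(heights, t)
```

Sources above the upward-propagating front (z > v·t) carry zero current, which is how the phased array reproduces the finite return-stroke speed.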
Gelderloos, L.J.; Chrupala, Grzegorz
2016-01-01
We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover
Covariant representations of nuclear *-algebras
International Nuclear Information System (INIS)
Moore, S.M.
1978-01-01
Extensions of the C*-algebra theory for covariant representations to nuclear *-algebras are considered. Irreducible covariant representations are essentially unique, an invariant state produces a covariant representation with a stable vacuum, and the usual relation between ergodic states and covariant representations holds. There exist construction and decomposition theorems and a possible relation between derivations and covariant representations.
Exploration of solids based on representation systems
Directory of Open Access Journals (Sweden)
Publio Suárez Sotomonte
2011-01-01
This article refers to some of the findings of a research project implemented as a teaching strategy to generate environments for the learning of Platonic and Archimedean solids, with a group of eighth grade students. This strategy was based on the meaningful learning approach and on the use of representation systems under the ontosemiotic approach in mathematical education, as a framework for the construction of mathematical concepts. This geometry teaching strategy adopts the stages of exploration, representation-modeling, formal construction and study of applications. It uses concrete, physical and tangible materials for origami, die making, and structures for the construction of three-dimensional solids, considered external tangible representation systems, as well as computer-based educational tools to design dynamic geometry environments as intangible external representation systems. These strategies support both the imagination and internal systems of representation, fundamental to the comprehension of geometry concepts.
Zehe, E.; Klaus, J.
2011-12-01
one of those four setups for simulating transport of Isoproturon, which was applied the day before the irrigation experiment, and tested different parameter combinations to characterise adsorption according to the footprint data base. Simulations could, however, only reproduce the observed event-based leaching behaviour when we allowed for retardation coefficients very close to one. This finding is consistent with various field observations. We conclude: a) a realistic representation of the dominating structures and their topology is of key importance for predicting preferential water and mass flows at tile-drained hillslopes; b) parameter uncertainty and equifinality could be reduced, but a system-inherent equifinality in a 2-dimensional Richards-based model has to be accepted.
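For linear sorption, the retardation coefficient mentioned above is commonly written as R = 1 + ρ_b·K_d/θ, so R close to one means very weak effective sorption. A minimal sketch with illustrative values (not the footprint data of the study):

```python
# Linear-sorption retardation coefficient: R = 1 + (rho_b * K_d) / theta.
# R close to 1 implies very weak effective sorption, i.e. the solute moves
# almost with the water. Parameter values below are illustrative only.
def retardation(rho_b, k_d, theta):
    """rho_b: bulk density (kg/L), k_d: distribution coefficient (L/kg),
    theta: volumetric water content (-)."""
    return 1.0 + rho_b * k_d / theta

# Weak sorption: R barely above 1.
print(retardation(rho_b=1.4, k_d=0.01, theta=0.35))  # 1.04
```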
Directory of Open Access Journals (Sweden)
G. W. Mann
2012-05-01
In the most advanced aerosol-climate models it is common to represent the aerosol particle size distribution in terms of several log-normal modes. This approach, motivated by computational efficiency, makes assumptions about the shape of the particle distribution that may not always capture the properties of global aerosol. Here, a global modal aerosol microphysics module (GLOMAP-mode) is evaluated and improved by comparing against a sectional version (GLOMAP-bin) and observations in the same 3-D global offline chemistry transport model. With both schemes, the model captures the main features of the global particle size distribution, with sub-micron aerosol approximately unimodal in continental regions and bi-modal in marine regions. Initial bin-mode comparisons showed that the current values for two size distribution parameter settings in the modal scheme (mode widths and inter-modal separation sizes) resulted in clear biases compared to the sectional scheme. By adjusting these parameters in the modal scheme, much better agreement is achieved against the bin scheme and observations. Annual mean surface-level masses of sulphate, sea-salt, black carbon (BC) and organic carbon (OC) are within 25% in the two schemes in nearly all regions. Surface-level concentrations of condensation nuclei (CN), cloud condensation nuclei (CCN), surface area density and condensation sink also compare within 25% in most regions. However, marine CCN concentrations between 30° N and 30° S are systematically 25–60% higher in the modal model, which we attribute to differences in size-resolved particle growth or cloud-processing. Larger differences also exist in regions or seasons dominated by biomass burning and in free-troposphere and high-latitude regions. Indeed, in the free troposphere, GLOMAP-mode BC is a factor 2–4 higher than GLOMAP-bin, likely due to differences in size-resolved scavenging. Nevertheless, in most parts of the atmosphere, we conclude that bin
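The modal representation discussed above describes the number size distribution as a sum of log-normal modes. The sketch below evaluates such a distribution; the mode parameters are illustrative placeholders, not GLOMAP's settings.

```python
import numpy as np

# Modal representation: the number size distribution is a sum of log-normal
# modes, dN/dlnD = sum_i N_i / (sqrt(2*pi) * ln(sigma_i))
#                        * exp(-ln^2(D / Dg_i) / (2 * ln^2(sigma_i))).
# Mode parameters below are illustrative, not GLOMAP's settings.
def dN_dlnD(D, modes):
    total = np.zeros_like(D)
    for N, Dg, sigma in modes:  # number conc., median diameter, geom. std. dev.
        ln_sig = np.log(sigma)
        total += (N / (np.sqrt(2.0 * np.pi) * ln_sig)
                  * np.exp(-np.log(D / Dg) ** 2 / (2.0 * ln_sig ** 2)))
    return total

# A bi-modal marine-like case: Aitken plus accumulation mode.
modes = [(300.0, 0.05e-6, 1.6), (100.0, 0.2e-6, 1.5)]
D = np.logspace(-8, -5, 400)  # diameters from 10 nm to 10 um
dist = dN_dlnD(D, modes)

# Trapezoidal integration over ln(D) recovers the total number concentration.
lnD = np.log(D)
total_N = np.sum(0.5 * (dist[1:] + dist[:-1]) * np.diff(lnD))
print(round(total_N))  # 400: the sum of the mode number concentrations
```

Adjusting the mode widths σ_i and the sizes at which particles are moved between modes is exactly the kind of parameter tuning the paper performs against the sectional scheme.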
Energy Technology Data Exchange (ETDEWEB)
Fast, Jerome D.; Berg, Larry K.; Zhang, Kai; Easter, Richard C.; Ferrare, Richard A.; Hair, John; Hostetler, Chris A.; Liu, Ying; Ortega, Ivan; Sedlacek, Art; Shilling, John E.; Shrivastava, ManishKumar B.; Springston, Stephen R.; Tomlinson, Jason M.; Volkamer, Rainer M.; Wilson, Jacqueline M.; Zaveri, Rahul A.; Zelenyuk-Imre, Alla
2016-08-22
The ability of the Weather Research and Forecasting model with chemistry (WRF-Chem) version 3.7 and the Community Atmosphere Model version 5.3 (CAM5) to simulate profiles of aerosol properties is quantified using extensive in situ and remote sensing measurements from the Two Column Aerosol Project (TCAP) conducted during July of 2012. TCAP was supported by the U.S. Department of Energy's Atmospheric Radiation Measurement program and was designed to obtain observations within two atmospheric columns: one fixed over Cape Cod, Massachusetts, and the other several hundred kilometers away over the ocean. The performance is quantified using most of the available aircraft and surface measurements during July, and two days are examined in more detail to identify the processes responsible for the observed aerosol layers. The higher-resolution WRF-Chem model produced more aerosol mass in the free troposphere than the coarser-resolution CAM5 model, so that the fraction of aerosol optical thickness above the residual layer from WRF-Chem was more consistent with lidar measurements. We found that the free-troposphere layers are likely due to mean vertical motions associated with synoptic-scale convergence that lifts aerosols from the boundary layer. The vertical displacement and the time period associated with upward transport in the troposphere depend on the strength of the synoptic system and on whether relatively high boundary layer aerosol concentrations are present where convergence occurs. While a parameterization of subgrid-scale convective clouds applied in WRF-Chem modulated the concentrations of aerosols aloft, it did not significantly change the overall altitude and depth of the layers.
Directory of Open Access Journals (Sweden)
Nikolay M. Bogoliubov
2009-04-01
The basic model of non-equilibrium low-dimensional physics, the so-called totally asymmetric exclusion process, is related to the 'crystalline limit' (q → ∞) of the SU_q(2) quantum algebra. Using the quantum inverse scattering method, we obtain the exact expression for the time-dependent stationary correlation function of the totally asymmetric simple exclusion process on a one-dimensional lattice with periodic boundary conditions.
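The paper derives exact results; purely as an illustration of the process itself, here is a minimal Monte Carlo sketch of the TASEP on a ring (lattice size, filling and step count are arbitrary):

```python
import random

# Totally asymmetric simple exclusion process (TASEP) on a ring:
# pick a random site; if it holds a particle and its right neighbour is
# empty, the particle hops one site to the right. Exclusion forbids
# double occupancy, and the periodic lattice conserves particle number.
def tasep_step(config, rng):
    n = len(config)
    i = rng.randrange(n)
    j = (i + 1) % n  # periodic boundary conditions
    if config[i] == 1 and config[j] == 0:
        config[i], config[j] = 0, 1

rng = random.Random(0)
config = [1, 1, 1, 0, 0, 0, 0, 0]  # 3 particles on 8 sites
for _ in range(10_000):
    tasep_step(config, rng)
print(sum(config))  # prints 3: particle number is conserved
```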
Energy Technology Data Exchange (ETDEWEB)
Shubov, Marianna A., E-mail: marianna.shubov@gmail.com [University of New Hampshire, Department of Mathematics and Statistics (United States)
2016-06-15
We consider a well-known model of a piezoelectric energy harvester. The harvester is designed as a beam with a piezoceramic layer attached to its top face (unimorph configuration). A pair of thin, perfectly conductive electrodes covers the top and the bottom faces of the piezoceramic layer. These electrodes are connected to a resistive load. The model is governed by a system of two equations. The first is the equation of the Euler–Bernoulli model for the transverse vibrations of the beam, and the second represents Kirchhoff's law for the electric circuit. The two equations are coupled due to the direct and converse piezoelectric effects. The boundary conditions for the beam equation are of clamped-free type. We represent the system as a single operator evolution equation in a Hilbert space. The dynamics generator of this system is a non-selfadjoint operator with compact resolvent. Our main result is an explicit asymptotic formula for the eigenvalues of this generator, i.e., we perform the modal analysis for the electrically loaded (not short-circuited) system. We show that the spectrum splits into an infinite sequence of stable eigenvalues that approaches a vertical line in the left half plane and possibly a finite number of unstable eigenvalues. This paper is the first in a series of three works. In the second we will prove that the generalized eigenvectors of the dynamics generator form a Riesz basis (and, moreover, a Bari basis) in the energy space. In the third paper we will apply the results of the first two to control problems for this model.
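Coupled systems of this type are commonly written in a form like the following. This is a sketch with generic coefficients (mass per unit length m, bending stiffness EI, coupling coefficient ϑ, capacitance C_p, load resistance R, beam length L); the paper's exact formulation may differ.

```latex
% Euler--Bernoulli beam with piezoelectric coupling (clamped at x=0, free at x=L):
m\,\frac{\partial^{2}w}{\partial t^{2}}
  + EI\,\frac{\partial^{4}w}{\partial x^{4}}
  - \vartheta\,v(t)\left[\frac{d\delta(x)}{dx}-\frac{d\delta(x-L)}{dx}\right] = 0,
% Kirchhoff's law for the resistive circuit, driven by the direct piezoelectric effect:
C_{p}\,\dot{v}(t) + \frac{v(t)}{R}
  + \vartheta \int_{0}^{L}\frac{\partial^{3}w}{\partial x^{2}\,\partial t}\,dx = 0.
```

The coupling is visible in both equations: the voltage v(t) acts on the beam through ϑ (converse effect), while the beam's strain rate drives the circuit through the same ϑ (direct effect).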
International Nuclear Information System (INIS)
Shubov, Marianna A.
2016-01-01
Manea Marinela – Daniela
2011-01-01
The term fair value is spread throughout international standards without detailed guidance on how to apply it. However, for specialized tangible assets, which are rarely sold, the standard IAS 16 "Property, Plant and Equipment" makes it possible to estimate fair value using an income approach or a depreciated replacement cost approach. The following material is intended to identify potential modeling of fair value as an income-based approach, appealing to techniques used by professional evalu...
Directory of Open Access Journals (Sweden)
K. Zhang
2012-10-01
This paper introduces and evaluates the second version of the global aerosol-climate model ECHAM-HAM. Major changes have been brought into the model, including new parameterizations for aerosol nucleation and water uptake, an explicit treatment of secondary organic aerosols, modified emission calculations for sea salt and mineral dust, the coupling of aerosol microphysics to a two-moment stratiform cloud microphysics scheme, and alternative wet scavenging parameterizations. These revisions extend the model's capability to represent details of the aerosol lifecycle and its interaction with climate. Nudged simulations of the year 2000 are carried out to compare the aerosol properties and global distribution in HAM1 and HAM2, and to evaluate them against various observations. Sensitivity experiments are performed to help identify the impact of each individual update in model formulation.
Results indicate that from HAM1 to HAM2 there is a marked weakening of aerosol water uptake in the lower troposphere, reducing the total aerosol water burden from 75 Tg to 51 Tg. The main reason is that the newly introduced κ-Köhler-theory-based water uptake scheme uses a lower value for the maximum relative humidity cutoff. Particulate organic matter loading in HAM2 is considerably higher in the upper troposphere, because the explicit treatment of secondary organic aerosols allows highly volatile oxidation products of the precursors to be vertically transported to regions of very low temperature and to form aerosols there. Sulfate, black carbon, particulate organic matter and mineral dust in HAM2 have longer lifetimes than in HAM1 because of weaker in-cloud scavenging, which is in turn related to lower autoconversion efficiency in the newly introduced two-moment cloud microphysics scheme. Modification in the sea salt emission scheme causes a significant increase in the ratio (from 1.6 to 7.7) between accumulation mode and coarse mode emission fluxes of
Temporal Representation in Semantic Graphs
Energy Technology Data Exchange (ETDEWEB)
Levandoski, J J; Abdulla, G M
2007-08-07
A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning that the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.
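One simple way to attach temporal information to semantic-graph edges is to give each triple a validity interval and scope queries to a point in time; the schema and data below are hypothetical illustrations, not from the paper.

```python
from dataclasses import dataclass

# Each (subject, predicate, object) triple carries a [start, end) validity
# interval, so the graph can answer "what held at time t?" queries.
# The schema and data here are illustrative placeholders.
@dataclass(frozen=True)
class TemporalEdge:
    subject: str
    predicate: str
    obj: str
    start: int  # valid from (e.g. year), inclusive
    end: int    # valid until, exclusive

edges = [
    TemporalEdge("alice", "works_at", "acme", 2001, 2005),
    TemporalEdge("alice", "works_at", "globex", 2005, 2010),
]

def query_at(edges, subject, predicate, t):
    """Return the objects for which the triple holds at time t."""
    return [e.obj for e in edges
            if e.subject == subject and e.predicate == predicate
            and e.start <= t < e.end]

print(query_at(edges, "alice", "works_at", 2003))  # ['acme']
print(query_at(edges, "alice", "works_at", 2007))  # ['globex']
```

A static graph corresponds to the special case where every interval spans all time; the interval fields are what make the evolution of a relationship queryable.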
Computing Visible-Surface Representations,
1985-03-01
Terzopoulos; Artificial Intelligence Laboratory, Massachusetts Institute of Technology (contract N00014-75-C-0643). Support for the laboratory's Artificial Intelligence research is provided in part by the Advanced Research Projects Agency. … dynamically maintaining visible-surface representations. Whether the intention is to model human vision or to design competent artificial vision systems …
International Nuclear Information System (INIS)
Tognevi, Amen
2012-01-01
The concrete structures of nuclear power plants can be subjected to moderate thermo-hydric loadings, characterized by temperatures of the order of a hundred degrees, in service conditions as well as in accidental ones. These loadings can be at the origin of significant disorders, in particular cracking, which accelerates hydric transfers in the structure. In the framework of the study of the durability of these structures, a coupled thermo-hydro-mechanical model, denoted THMs, has been developed at the Laboratoire d'Etude du Comportement des Betons et des Argiles (LECBA) of CEA Saclay in order to perform simulations of the behaviour of concrete submitted to such loadings. In this work, we focus on improving, in the THMs model, on the one hand the assessment of the mechanical and hydro-mechanical parameters of the unsaturated micro-cracked material, and on the other hand the description of cracking in terms of opening and propagation. The first part is devoted to the development of a model based on a multi-scale description of cement-based materials, starting from the scale of the main hydrated products (portlandite, ettringite, C-S-H, etc.) up to the macroscopic scale of the cracked material. The investigated parameters are obtained at each scale of the description by applying analytical homogenization techniques. The second part concerns a fine numerical description of cracking. To this end, we choose to use combined finite element and discrete element methods.