WorldWideScience

Sample records for model properly accounts

  1. Explanations, mechanisms, and developmental models: Why the nativist account of early perceptual learning is not a proper mechanistic model

    Directory of Open Access Journals (Sweden)

    Radenović Ljiljana

    2013-01-01

    Full Text Available In the last several decades a number of studies on perceptual learning in early infancy have suggested that even infants seem to be sensitive to the way objects move and interact in the world. In order to explain the early emergence of infants’ sensitivity to causal patterns in the world, some psychologists have proposed that core knowledge of objects and causal relations is innate (Leslie & Keeble, 1987; Carey & Spelke, 1994; Keil, 1995; Spelke et al., 1994). The goal of this paper is to examine the nativist developmental model by investigating the criteria that a mechanistic model needs to fulfill if it is to be explanatory. Craver (2006) put forth a number of such criteria and developed a few very useful distinctions between explanation sketches and proper mechanistic explanations. By applying these criteria to the nativist developmental model I aim to show, firstly, that nativists only partially characterize the phenomenon at stake, without giving us the details of when and under which conditions perception and attention in early infancy take place. Secondly, nativists start off with a description of the phenomena to be explained (even if it is only a partial description) but import into it a particular theory of perception that requires further empirical evidence and further defense on its own. Furthermore, I argue that innate knowledge is a good candidate for a filler term (a term that is used to name the still unknown processes and parts of the mechanism) and is likely to become redundant. Recent extensive research on early intermodal perception indicates that the mechanism enabling the perception of regularities and causal patterns in early infancy is grounded in our neurophysiology. However, this mechanism is fairly basic and does not involve highly sophisticated cognitive structures or innate core knowledge. I conclude with a remark that a closer examination of the mechanisms involved in early perceptual learning indicates that the nativism

  2. How to conduct a proper sensitivity analysis in life cycle assessment: taking into account correlations within LCI data and interactions within the LCA calculation model.

    Science.gov (United States)

    Wei, Wei; Larrey-Lassalle, Pyrene; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis

    2015-01-06

    Sensitivity analysis (SA) is a significant tool for studying the robustness of results and their sensitivity to uncertainty factors in life cycle assessment (LCA). It highlights the most important set of model parameters, to determine whether data quality needs to be improved and to enhance interpretation of results. Interactions within the LCA calculation model and correlations within Life Cycle Inventory (LCI) input parameters are two main issues in the LCA calculation process. Here we propose a methodology for conducting a proper SA which takes into account the effects of these two issues. This study first presents the SA in an uncorrelated case, comparing local and independent global sensitivity analysis. Independent global sensitivity analysis aims to analyze the variability of results caused by the variation of input parameters over the whole domain of uncertainty, together with interactions among input parameters. We then apply a dependent global sensitivity approach that makes minor modifications to traditional Sobol indices to address the correlation issue. Finally, we propose some guidelines for choosing the appropriate SA method depending on the characteristics of the model and the goals of the study. Our results clearly show that the choice of sensitivity methods should be made according to the magnitude of uncertainty and the degree of correlation.
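The global sensitivity indices discussed in this record can be illustrated with a minimal Monte Carlo estimate of first-order Sobol indices. The sketch below is not the authors' LCA model: it uses a hypothetical two-input linear model (chosen so the indices are known analytically) and the standard pick-and-freeze estimator for independent uniform inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical model: Y = 3*X1 + X2, so analytically S1 = 0.9, S2 = 0.1
    return 3.0 * x[:, 0] + x[:, 1]

def first_order_sobol(f, dim, n=100_000):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol indices
    for independent inputs uniform on [0, 1]."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    s = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]        # resample only the i-th input
        s[i] = np.mean(fB * (f(ABi) - fA)) / var
    return s

S1 = first_order_sobol(model, dim=2)
print(S1.round(2))                 # ≈ [0.9, 0.1]
```

The paper's "dependent" variant modifies such indices to handle correlated inputs; the estimator above assumes independence.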

  3. Aeroservoelastic modeling with proper orthogonal decomposition

    Science.gov (United States)

    Carlson, Henry A.; Verberg, Rolf; Harris, Charles A.

    2017-02-01

    A physics-based, reduced-order, aeroservoelastic model of an F-18 aircraft has been developed using the method of proper orthogonal decomposition (POD), introduced to the field of fluid mechanics by Lumley. The model is constructed with data from high-dimensional, high-fidelity aeroservoelastic computational fluid dynamics (CFD-ASE) simulations that couple the equations of motion of the flow to a modal model of the aircraft structure. Through POD modes, the reduced-order model (ROM) predicts both the structural dynamics and the coupled flow dynamics, offering much more information than the typically employed low-dimensional models based on system identification can provide. ROM accuracy is evaluated through direct comparisons of its predictions of the flow and structural dynamics with predictions from the parent CFD-ASE model. The computational overhead of the ROM is six orders of magnitude lower than that of the CFD-ASE model: it accurately predicts the coupled dynamics from simulations of an F-18 fighter aircraft undergoing flutter testing over a wide range of transonic and supersonic flight speeds on a single processor in 1.073 s.

  4. The Army Did Not Properly Account For and Manage Force Provider Equipment in Afghanistan

    Science.gov (United States)

    2014-07-31

    July 31, 2014 MEMORANDUM FOR AUDITOR GENERAL, DEPARTMENT OF THE ARMY SUBJECT: The Army Did Not Properly Account For and Manage Force Provider...transferred to another unit during unit rotations. Finally, the unit uses the TPE planner in PBUSE to determine the disposition of their FP equipment

  5. Proper Versus Improper Mixtures in the ESR Model

    CERN Document Server

    Garola, Claudio

    2011-01-01

    The interpretation of mixtures is problematic in quantum mechanics (QM) because of the nonobjectivity of properties. The ESR model restores objectivity by reinterpreting quantum probabilities as conditional on detection and by embedding the mathematical formalism of QM into a broader noncontextual (hence local) framework. We have recently provided a Hilbert space representation of the generalized observables that appear in the ESR model. We show here that each proper mixture is represented by a family of density operators parametrized by the macroscopic properties characterizing the physical system $\Omega$ that is considered, and that each improper mixture is represented by a single density operator which coincides with the operator that represents it in QM. The new representations avoid the problems mentioned above and entail some predictions that differ from the predictions of QM. One can thus contrive experiments for distinguishing empirically proper from improper mixtures, hence for confirming or disproving the ESR...

  6. Proper Orthogonal Decomposition as Surrogate Model for Aerodynamic Optimization

    Directory of Open Access Journals (Sweden)

    Valentina Dolci

    2016-01-01

    Full Text Available A surrogate model based on the proper orthogonal decomposition is developed in order to enable fast and reliable evaluations of aerodynamic fields. The proposed method is applied to subsonic turbulent flows, and the proper orthogonal decomposition is based on an ensemble of high-fidelity computations. For the construction of the ensemble, fractional and full factorial planes together with central composite design-of-experiment strategies are applied. For the continuous representation of the projection coefficients in the parameter space, response surface methods are employed. Three case studies are presented. In the first case, the boundary shape of the problem is deformed and the flow past a backward-facing step with variable step slope is studied. In the second case, a two-dimensional flow past a NACA 0012 airfoil is considered and the surrogate model is constructed in the (Mach, angle of attack) parameter space. In the last case, the aerodynamic optimization of an automotive shape is considered. The results demonstrate how a reduced-order model based on the proper orthogonal decomposition applied to a small number of high-fidelity solutions can be used to generate aerodynamic data with good accuracy at a low cost.
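The surrogate strategy this record describes (POD of a high-fidelity ensemble plus a response surface for the projection coefficients) can be sketched for a one-parameter toy problem. The "high-fidelity" solver below is a cheap analytic stand-in, and the mode count, polynomial degree, and parameter range are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical parametrized field u(x; mu): an ensemble of "high-fidelity"
# solutions at training parameters, POD, then a polynomial response surface
# for each POD coefficient.
x = np.linspace(0.0, 1.0, 120)

def hi_fi(mu):                       # stand-in for an expensive CFD solve
    return np.exp(-mu * x) + 0.2 * mu * np.sin(4 * np.pi * x)

mus = np.linspace(0.5, 3.0, 9)       # design of experiments (one parameter)
S = np.array([hi_fi(m) for m in mus]).T

U, sv, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :4]                       # keep 4 POD modes
coeffs = Phi.T @ S                   # training coefficients, shape (4, 9)

# Response surface: cubic polynomial fit of each coefficient versus mu
fits = [np.polyfit(mus, coeffs[k], 3) for k in range(4)]

def surrogate(mu):                   # fast evaluation at a new parameter
    a = np.array([np.polyval(f, mu) for f in fits])
    return Phi @ a

mu_test = 1.7                        # a parameter value not in the ensemble
err = (np.linalg.norm(surrogate(mu_test) - hi_fi(mu_test))
       / np.linalg.norm(hi_fi(mu_test)))
print(err)                           # small relative error at low cost
```

In the paper the same role is played by factorial/central-composite ensembles and response surfaces over the (Mach, angle of attack) or shape-parameter spaces.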

  7. Modeling habitat dynamics accounting for possible misclassification

    Science.gov (United States)

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology, as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging, because classification error can be confounded with true change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities, using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel) and when true and observed states are obtained at one level of resolution, but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics, and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
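The bias the authors report when misclassification is ignored is easy to reproduce in simulation. The sketch below uses a hypothetical two-state habitat with a 10% classification error rate (all numbers chosen for illustration, not taken from the paper): the apparent turnover rate computed from observed states comes out roughly three times the true rate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-state habitat (0 = forest, 1 = open); values illustrative.
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])       # true transition probabilities between surveys
ERR = 0.10                         # per-pixel misclassification rate

def step(s):                       # advance true states one interval under P
    return (rng.random(s.size) < P[s, 1]).astype(int)

def observe(s):                    # apply symmetric classification error
    flip = rng.random(s.size) < ERR
    return np.where(flip, 1 - s, s)

n_pixels = 200_000
s0 = rng.integers(0, 2, n_pixels)  # true initial states (uniform mix)
s1 = step(s0)
o0, o1 = observe(s0), observe(s1)  # what the land cover maps actually record

def change_rate(a, b):             # apparent probability of changing state
    return float(np.mean(a != b))

true_turn = change_rate(s0, s1)    # ≈ 0.075 for this P and initial mix
obs_turn = change_rate(o0, o1)     # ≈ 0.23: turnover inflated ~3x by error
print(true_turn, obs_turn)
```

The paper's contribution is the reverse direction: using ground-truthed pixels to estimate the error rates and recover unbiased transition probabilities.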

  8. Modelling in Accounting. Theoretical and Practical Dimensions

    Directory of Open Access Journals (Sweden)

    Teresa Szot-Gabryś

    2010-10-01

    Full Text Available Accounting, in the theoretical approach, is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for the measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and its ability to adapt to the information needs of its recipients. One of the main currents in the development of accounting theory and practice is to extend economic measurement to areas which have not hitherto been covered by any accounting system (this applies, for example, to small businesses, agricultural farms and human capital), which requires the development of an appropriate theoretical and practical model. The article illustrates the issue of modelling in accounting based on the example of an accounting model developed for small businesses, i.e. economic entities which are not obliged by law to keep accounting records.

  9. Implementing a trustworthy cost-accounting model.

    Science.gov (United States)

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  10. General spherical anisotropic Jeans models of stellar kinematics: including proper motions and radial velocities

    CERN Document Server

    Cappellari, Michele

    2015-01-01

    Cappellari (2008) presented a flexible and efficient method to model the stellar kinematics of anisotropic axisymmetric and spherical stellar systems. The spherical formalism could be used to model the line-of-sight velocity second moments allowing for essentially arbitrary radial variation in the anisotropy and general luminous and total density profiles. Here we generalize the spherical formalism by providing the expressions for all three components of the projected second moments, including the two proper motion components. A reference implementation is now included in the public JAM package available at http://purl.org/cappellari/software

  11. Investigating Coherent Structures in the Standard Turbulence Models using Proper Orthogonal Decomposition

    Science.gov (United States)

    Eliassen, Lene; Andersen, Søren

    2016-09-01

    The wind turbine design standards recommend two different methods to generate turbulent wind for design load analysis: the Kaimal spectra combined with an exponential coherence function, and the Mann turbulence model. The two turbulence models can give very different estimates of fatigue life, especially for offshore floating wind turbines. In this study the spatial distributions of the two turbulence models are investigated using Proper Orthogonal Decomposition, which is used to characterize large coherent structures. The main focus has been on the structures that contain the most energy, which are the lowest POD modes. The Mann turbulence model generates coherent structures that stretch in the horizontal direction for the longitudinal component, while the structures found in the Kaimal model are more random in their shape. These differences in the coherent structures at lower frequencies for the two turbulence models can be the reason for differences in fatigue life estimates for wind turbines.

  12. Modeling in Accounting, an Imperative Process?

    Directory of Open Access Journals (Sweden)

    Robu Sorin-Adrian

    2014-08-01

    Full Text Available Our approach to this topic was suggested by the fact that a controversy currently persists regarding the elements that decisively influence the qualitative characteristics of useful financial information. Among these elements, we note accounting models and concepts of capital maintenance in terms of the accounting result, which can be influenced by factors such as subjectivity or even a lack of neutrality. Therefore, in formulating a response to the question posed in the title of the paper, we start from the fact that the financial statements prepared by accounting systems must be the result of processing with appropriate models, which can ultimately respond as well as possible to the requirements of external and internal users, in particular the need to know the financial position and performance of economic entities.

  13. Investigating Coherent Structures in the Standard Turbulence Models using Proper Orthogonal Decomposition

    DEFF Research Database (Denmark)

    Eliassen, Lene; Andersen, Søren Juhl

    2016-01-01

    The wind turbine design standards recommend two different methods to generate turbulent wind for design load analysis, the Kaimal spectra combined with an exponential coherence function and the Mann turbulence model. The two turbulence models can give very different estimates of fatigue life......, especially for offshore floating wind turbines. In this study the spatial distributions of the two turbulence models are investigated using Proper Orthogonal Decomposition, which is used to characterize large coherent structures. The main focus has been on the structures that contain the most energy, which...... are the lowest POD modes. The Mann turbulence model generates coherent structures that stretch in the horizontal direction for the longitudinal component, while the structures found in the Kaimal model are more random in their shape. These differences in the coherent structures at lower frequencies for the two...

  14. Media Accountability Systems: Models, proposals and outlooks

    Directory of Open Access Journals (Sweden)

    Luiz Martins da Silva

    2007-06-01

    Full Text Available This paper analyzes one of the basic actions of SOS-Imprensa, the mechanism to assure Media Accountability with the goal of proposing a synthesis of models for the Brazilian reality. The article aims to address the possibilities of creating and improving mechanisms to stimulate the democratic press process and to mark out and assure freedom of speech and personal rights with respect to the media. Based on the Press Social Responsibility Theory, the hypothesis is that the experiences analyzed (Communication Council, Press Council, Ombudsman and Readers Council are alternatives for accountability, mediation and arbitration, seeking visibility, trust and public support in favor of fairer media.

  15. Modelling the 3D morphology and proper motions of the planetary nebula NGC 6302

    CERN Document Server

    Uscanga, L; Esquivel, A; Raga, A C; Boumis, P; Cantó, J

    2014-01-01

    We present 3D hydrodynamical simulations of an isotropic fast wind interacting with a previously ejected toroidally-shaped slow wind in order to model both the observed morphology and the kinematics of the planetary nebula (PN) NGC 6302. This source, also known as the Butterfly nebula, presents one of the most complex morphologies ever observed in PNe. From our numerical simulations, we have obtained an intensity map for the H$\alpha$ emission to make a comparison with the Hubble Space Telescope (HST) observations of this object. We have also carried out a proper motion (PM) study from our numerical results, in order to compare with previous observational studies. We have found that the two-interacting-stellar-winds model reproduces the morphology of NGC 6302 well, and while the PMs in the models are similar to the observations, our results suggest that an acceleration mechanism is needed to explain the Hubble-type expansion found in HST observations.

  16. Model Reduction Using Proper Orthogonal Decomposition and Predictive Control of Distributed Reactor System

    Directory of Open Access Journals (Sweden)

    Alejandro Marquez

    2013-01-01

    Full Text Available This paper studies the application of proper orthogonal decomposition (POD) to reduce the order of distributed reactor models with axial and radial diffusion, and the implementation of model predictive control (MPC) based on discrete-time linear time-invariant (LTI) reduced-order models. In this paper, the control objective is to keep the operation of the reactor at a desired operating condition in spite of disturbances in the feed flow. This operating condition is determined by means of an optimization algorithm that provides the optimal temperature and concentration profiles for the system. Around these optimal profiles, the nonlinear partial differential equations (PDEs) that model the reactor are linearized, and afterwards the linear PDEs are discretized in space, giving as a result a high-order linear model. POD and Galerkin projection are used to derive the low-order linear model that captures the dominant dynamics of the PDEs, which is subsequently used for controller design. An MPC formulation is constructed on the basis of the low-order linear model. The proposed approach is tested through simulation, and it is shown that the results are good with regard to keeping the reactor at the desired operating condition.

  17. Fusion of expertise among accounting faculty. Towards an expertise model for academia in accounting.

    NARCIS (Netherlands)

    Njoku, Jonathan C.; van der Heijden, Beatrice; Inanga, Eno L.

    2010-01-01

    This paper aims to portray an accounting faculty expert. It is argued that neither the academic nor the professional orientation alone appears adequate in developing accounting faculty expertise. The accounting faculty expert is supposed to develop into a so-called ‘flexpert’ (Van der Heijden, 2003)

  18. Development of Boundary Condition Independent Reduced Order Thermal Models using Proper Orthogonal Decomposition

    Science.gov (United States)

    Raghupathy, Arun; Ghia, Karman; Ghia, Urmila

    2008-11-01

    Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be used effectively in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
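The abstract's closing example, a POD/Galerkin reduced-order model of the 1D transient heat equation, can be sketched as follows. The discretization, time step, and mode count are illustrative choices, not the authors' values: a full-order finite-difference model generates snapshots, and the Galerkin projection of its operator onto two POD modes reproduces the final temperature field.

```python
import numpy as np

# Full-order model: u_t = alpha * u_xx on [0, 1] with u = 0 at both ends,
# finite differences + explicit Euler (illustrative grid and time step).
n, alpha = 100, 1.0
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt, steps = 4e-5, 1000             # dt < dx**2 / (2 * alpha) for stability
A = alpha * (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
             + np.diag(np.ones(n - 1), 1)) / dx**2
A[0, :] = A[-1, :] = 0.0           # keep the boundary values fixed

u = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)   # initial condition
snaps = [u.copy()]
for _ in range(steps):
    u = u + dt * (A @ u)           # full-order time stepping
    snaps.append(u.copy())
S = np.array(snaps).T              # snapshot matrix, shape (space, time)

# POD basis from the snapshots, then Galerkin projection of the operator:
Phi = np.linalg.svd(S, full_matrices=False)[0][:, :2]   # keep 2 POD modes
Ar = Phi.T @ A @ Phi               # reduced 2x2 operator
a = Phi.T @ S[:, 0]                # reduced initial condition
for _ in range(steps):
    a = a + dt * (Ar @ a)          # reduced-order time stepping
u_rom = Phi @ a                    # lift the ROM state back to full space

err = np.linalg.norm(u_rom - S[:, -1]) / np.linalg.norm(S[:, -1])
print(err)
```

The boundary-condition independence studied in the paper goes further than this sketch, which fixes the boundary values before building the basis.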

  19. A stabilized proper orthogonal decomposition reduced-order model for large scale quasigeostrophic ocean circulation

    CERN Document Server

    San, Omer

    2014-01-01

    In this paper, a stabilized proper orthogonal decomposition (POD) reduced-order model (ROM) is presented for the barotropic vorticity equation. We apply the POD-ROM model to mid-latitude simplified oceanic basins, which are standard prototypes of more realistic large-scale ocean dynamics. A mode dependent eddy viscosity closure scheme is used to model the effects of the discarded POD modes. A sensitivity analysis with respect to the free eddy viscosity stabilization parameter is performed for various POD-ROMs with different numbers of POD modes. The POD-ROM results are validated against the Munk layer resolving direct numerical simulations using a fully conservative fourth-order Arakawa scheme. A comparison with the standard Galerkin POD-ROM without any stabilization is also included in our investigation. Significant improvements in the accuracy over the standard Galerkin model are shown for a four-gyre ocean circulation problem. This first step in the numerical assessment of the POD-ROM shows that it could r...

  20. Dimension invariants for groups admitting a cocompact model for proper actions

    DEFF Research Database (Denmark)

    Degrijse, Dieter Dries; Martínez-Pérez, Conchita

    2016-01-01

    Let G be a group that admits a cocompact classifying space for proper actions X. We derive a formula for the Bredon cohomological dimension for proper actions of G in terms of the relative cohomology with compact support of certain pairs of subcomplexes of X. We use this formula to compute the Br...

  1. Low-dimensional modelling of a transient cylinder wake using double proper orthogonal decomposition

    Science.gov (United States)

    Siegel, Stefan G.; Seidel, Jürgen; Fagley, Casey; Luchtenburg, D. M.; Cohen, Kelly; McLaughlin, Thomas

    For the systematic development of feedback flow controllers, a numerical model that captures the dynamic behaviour of the flow field to be controlled is required. This poses a particular challenge for flow fields where the dynamic behaviour is nonlinear, and the governing equations cannot easily be solved in closed form. This has led to many versions of low-dimensional modelling techniques, which we extend in this work to represent better the impact of actuation on the flow. For the benchmark problem of a circular cylinder wake in the laminar regime, we introduce a novel extension to the proper orthogonal decomposition (POD) procedure that facilitates mode construction from transient data sets. We demonstrate the performance of this new decomposition by applying it to a data set from the development of the limit cycle oscillation of a circular cylinder wake simulation as well as an ensemble of transient forced simulation results. The modes obtained from this decomposition, which we refer to as the double POD (DPOD) method, correctly track the changes of the spatial modes both during the evolution of the limit cycle and when forcing is applied by transverse translation of the cylinder. The mode amplitudes, which are obtained by projecting the original data sets onto the truncated DPOD modes, can be used to construct a dynamic mathematical model of the wake that accurately predicts the wake flow dynamics within the lock-in region at low forcing amplitudes. This low-dimensional model, derived using nonlinear artificial neural network based system identification methods, is robust and accurate and can be used to simulate the dynamic behaviour of the wake flow. We demonstrate this ability not just for unforced and open-loop forced data, but also for a feedback-controlled simulation that leads to a 90% reduction in lift fluctuations. 
This indicates the possibility of constructing accurate dynamic low-dimensional models for feedback control by using unforced and transient

  2. Reduced-order model for underwater target identification using proper orthogonal decomposition

    Science.gov (United States)

    Ramesh, Sai Sudha; Lim, Kian Meng

    2017-03-01

    Research on underwater acoustics has seen major development over the past decade due to its widespread applications in domains such as underwater communication/navigation (SONAR), seismic exploration and oceanography. In particular, acoustic signatures from partially or fully buried targets can be used in the identification of buried mines for mine countermeasures (MCM). Although there exist several techniques to identify target properties based on SONAR images and acoustic signatures, these methods first employ a feature extraction method to represent the dominant characteristics of a data set, followed by the use of an appropriate classifier based on neural networks or the relevance vector machine. The aim of the present study is to demonstrate the application of the proper orthogonal decomposition (POD) technique in capturing the dominant features of a set of scattered pressure signals, and the subsequent use of the POD modes and coefficients in the identification of partially buried underwater target parameters such as location, size and material density. Several numerical examples are presented to demonstrate the performance of the system identification method based on POD. Although the present study is based on a 2D acoustic model, the method can easily be extended to 3D models, thereby enabling cost-effective representations of large-scale data.

  3. Hybrid turbulence models for atmospheric flow: A proper comparison with RANS models

    Directory of Open Access Journals (Sweden)

    Bautista Mary C.

    2015-01-01

    Full Text Available A compromise between the required accuracy and the need for affordable simulations for the wind industry might be achieved with the use of hybrid turbulence models. Detached-Eddy Simulation (DES) [1] is a hybrid technique that yields accurate results only if it is used according to its original formulation [2]. Due to its particular characteristics (i.e., the type of mesh required), the modeling of the atmospheric flow might always fall outside the original scope of DES. An enhanced version of DES called Simplified Improved Delayed Detached-Eddy Simulation (SIDDES) [3] can overcome this and other disadvantages of DES. In this work the neutrally stratified atmospheric flow over a flat terrain with homogeneous roughness is analyzed using a Reynolds-Averaged Navier–Stokes (RANS) model, the k–ω SST (shear stress transport) model [4], and the hybrid k–ω SST-DES and k–ω SST-SIDDES models. An obvious test is to validate these hybrid approaches and assess their advantages and disadvantages over the pure RANS model. However, for several reasons the technique used to drive the atmospheric flow is generally different for RANS than for LES or hybrid models. The flow in a RANS simulation is usually driven by a constant shear stress imposed at the top boundary [5], therefore modeling only the atmospheric surface layer. On the contrary, the LES and hybrid simulations are usually driven by a constant pressure gradient, so that a whole atmospheric boundary layer is simulated. Rigorously, this represents two different simulated cases, making the model comparison non-trivial. Nevertheless, both atmospheric flow cases are studied with the mentioned models. The results prove that a simple comparison of the time-averaged turbulent quantities obtained by RANS and hybrid simulations is not easily achieved. The RANS simulations yield consistent results for the atmospheric surface layer case, while the hybrid model results are not correct. As for the atmospheric boundary

  4. MODEL OF MANAGEMENT ACCOUNTING FOR MERCHANDISES SECTOR COMPANIES

    Directory of Open Access Journals (Sweden)

    Glăvan Elena Mariana

    2013-04-01

    Full Text Available Changes in the Romanian accounting system have been articulated with priority for financial accounting, without leaving aside management accounting. Changes in management accounting covered a few general indicative directions, giving managers enhanced skills regarding the organisation and functioning of this branch of the accounting system. Romanian regulations offered solutions [16] regarding the accounts and records in managerial accounting, but all these elements are recommendations. In the Romanian chart of accounts there is a specific class of accounts, class number 9 (accounts for management accounting), but this class of accounts is optional. In fact there are a lot of models applied by companies, but all are based on the accounts that are mentioned in class 9. After an analysis of the Romanian accounting literature, we realized that an important part of the studies focuses on management accounting models applied to manufacturing companies. As a result, we conceived a model of management accounting specific to merchandise companies. In our model there are two groups of accounts: one from the chart of accounts (class 9) and another group composed of accounts proposed by the authors. We think that our model could provide more information about each cost object, regarding sales, acquisition cost of sales, distribution cost, administration cost, total cost of sales, contribution margin and profitability. Our model is exemplified by a case study applied to a wholesale company.

  5. DMFCA Model as a Possible Way to Detect Creative Accounting and Accounting Fraud in an Enterprise

    Directory of Open Access Journals (Sweden)

    Jindřiška Kouřilová

    2013-05-01

    Full Text Available The quality of reported accounting data, as well as the quality and behaviour of their users, influences the efficiency of an enterprise’s management, and its assessment could therefore change as well. To identify creative accounting and fraud, several methods and tools have been used. In this paper we present our proposal of the DMFCA (Detection Model based on Material Flow Cost Accounting) balance model, based on environmental accounting and on MFCA (Material Flow Cost Accounting) as its method. The following balance areas are included: material, financial and legislative. Using an analysis of the strengths and weaknesses of the model, its possible use within a production and business company was assessed. Its possible use in the detection of some creative accounting techniques was also assessed. The model is developed in detail for practical use, and its theoretical aspects are described.

  6. Developing a model of proper governance for removing interaction barriers between universities of medical sciences and industries

    Directory of Open Access Journals (Sweden)

    shiva madahian

    2017-01-01

    Full Text Available Background and goal: The interaction between university and industry, due to its highly constructive and positive effects on technical, economic and social change, has traditionally been at the center of policy makers' and planners' attention. The aim of the present study was to explain the barriers and challenges in the interaction between universities of medical sciences and industry. Method: This descriptive-correlational study used a survey method to investigate the interaction at a Medical Sciences University (School of Public Health). The study population comprised 1468 individuals; using the Morgan table, 321 people were selected as the sample. Two questionnaires were prepared by the researcher. The proper governance questionnaire contains political, economic, social, legal and cultural dimensions and is composed of 69 questions. The questionnaire on barriers between university and industry, covering the three dimensions of individual, organizational and environmental interaction barriers, is composed of 40 questions. Data analysis was done using SPSS, version 21. Results: Based on factor analysis of the data, the main dimension of proper governance was the cultural one, and among the various barriers between university and industry, the environmental interaction dimension was the most important. Moreover, the results showed a direct and significant relationship between the dimensions of proper governance and the university-industry interaction variable. Conclusion: Based on the results of the present study, considering culture and cultural differences can help improve the interaction between university and industry.

  7. Display of the information model accounting system

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2011-12-01

    Full Text Available This paper presents the accounting information system in public companies, the business technology matrix and the data flow diagram. The paper describes the purpose and goals of the accounting process, the sub-process matrix and the data classes. The data flow in the accounting process and the so-called general ledger module are described in detail. The activities of preparing the financial statements and determining the financial position of companies are mentioned as well. It is stated how the general ledger module should function and what characteristics it must have. Line graphs depict indicators of the company's business success, indebtedness and efficiency coefficients, based on the balance sheet and the profit and loss report.

  8. COMPARATIVE STUDY ON ACCOUNTING MODELS "CASH" AND "ACCRUAL"

    OpenAIRE

    Tatiana Danescu; Luminita Rus

    2013-01-01

    Accounting, as a source of information, can recognize economic transactions taking into account the time of payment or receipt, or as soon as they occur. There are two basic models of accounting: accrual basis and cash basis. In the cash accounting method, transactions are recorded only when cash is received or paid, and no distinction is made between the purchase of an asset and the payment of an expense; both are considered "payments". Accrual accounting achieves this d...
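
    The cash/accrual distinction described in the record can be illustrated with a toy ledger; the transactions, dates and amounts below are invented for illustration:

    ```python
    from datetime import date

    # Invented transactions: (description, amount, date incurred, date cash moves)
    transactions = [
        ("sale of goods",    +1000, date(2024, 1, 10), date(2024, 2, 5)),
        ("rent expense",      -300, date(2024, 1, 31), date(2024, 1, 31)),
        ("equipment repair",  -200, date(2024, 1, 20), date(2024, 2, 15)),
    ]

    def income(transactions, basis, period_end):
        """Recognize each transaction when incurred (accrual) or when cash moves (cash)."""
        total = 0
        for _, amount, incurred, settled in transactions:
            recognized = incurred if basis == "accrual" else settled
            if recognized <= period_end:
                total += amount
        return total

    jan = date(2024, 1, 31)
    print(income(transactions, "accrual", jan))  # 1000 - 300 - 200 = 500
    print(income(transactions, "cash", jan))     # only the rent was paid: -300
    ```

    The same ledger thus reports a profit of 500 on an accrual basis and a loss of 300 on a cash basis for January, which is exactly the divergence the two models produce.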

  9. THE ROLE AND THE IMPORTANCE IN CHOOSING THE PROPER MANAGERIAL ACCOUNTING CONCEPTS REGARDING THE NEED FOR INFORMATION ON THE DECISION MAKING FACTORS WITHIN THE COMPANIES

    Directory of Open Access Journals (Sweden)

    Delia David

    2014-09-01

    Full Text Available Both the theory and the modern practice of management accounting have taken over two general concepts regarding its organizational process, structuring accounting as either an integrated system or a dualist one. We aim at emphasizing the characteristics, the role and the importance of these concepts with regard to the calculation process and the accounting entries of the costs generated by economic entities with an eye to gaining profit. Choosing one of these concepts must be done taking into consideration the specifics of the company in question, as well as the information needed by the manager to find the optimum solution for rehabilitating the activity to be carried out and making it efficient. The subject of this work is approached both theoretically and practically, relying on the following research methods: the comparison method, the observation method and the case study method.

  10. Model Reduction Based on Proper Generalized Decomposition for the Stochastic Steady Incompressible Navier-Stokes Equations

    KAUST Repository

    Tamellini, L.

    2014-01-01

    In this paper we consider a proper generalized decomposition method to solve the steady incompressible Navier-Stokes equations with random Reynolds number and forcing term. The aim of such a technique is to compute a low-cost reduced basis approximation of the full stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of preexisting deterministic Navier-Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis, rather than the M coupled problems of the full stochastic Galerkin approximation space, with m ≪ M (up to one order of magnitude for the problem at hand in this work). © 2014 Society for Industrial and Applied Mathematics.

  11. A holistic model for Islamic accountants and its value added

    OpenAIRE

    El-Halaby, Sherif; Hussainey, Khaled

    2015-01-01

    Purpose – The core objective of this study is to introduce a holistic model for Islamic accountants by exploring the perspectives of Muslim scholars, Islamic sharia and AAOIFI ethical standards. The study also contributes to the existing literature by exploring the main added value of the Muslim accountant towards stakeholders through investigating the main roles of an Islamic accountant. Design/methodology/approach – The paper critically reviews historical debates about Islamic accounting and t...

  12. Using proper regression methods for fitting the Langmuir model to sorption data

    Science.gov (United States)

    The Langmuir model, originally developed for the study of gas sorption to surfaces, is one of the most commonly used models for fitting phosphorus sorption data. There are good theoretical reasons, however, against applying this model to describe P sorption to soils. Nevertheless, the Langmuir model...
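
    A hedged sketch of the kind of "proper" fit the record advocates: fitting the Langmuir isotherm S = Smax·K·C/(1 + K·C) to sorption data by direct nonlinear least squares rather than through a linearized transform. The data and parameter values below are synthetic, not from the study:

    ```python
    import numpy as np

    # Synthetic sorption data from known parameters (assumed: Smax = 120, K = 0.05)
    rng = np.random.default_rng(0)
    C = np.linspace(1.0, 200.0, 15)          # equilibrium concentration
    S = 120 * 0.05 * C / (1 + 0.05 * C)      # Langmuir isotherm
    S = S + rng.normal(0.0, 2.0, C.size)     # measurement noise

    def fit_langmuir(C, S, K_grid=np.logspace(-4, 1, 2000)):
        """Direct (nonlinear) least squares: for fixed K the model S = Smax*f(C)
        is linear in Smax, so Smax has a closed form; scan K and keep the best."""
        best_sse, best = np.inf, (None, None)
        for K in K_grid:
            f = K * C / (1 + K * C)
            Smax = (S @ f) / (f @ f)         # least-squares slope through the origin
            sse = np.sum((S - Smax * f) ** 2)
            if sse < best_sse:
                best_sse, best = sse, (Smax, K)
        return best

    Smax_hat, K_hat = fit_langmuir(C, S)
    ```

    Unlike linearized (reciprocal-transform) fits, this minimizes error on the measured sorption values directly, which is the usual argument for preferring nonlinear regression here.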

  13. Taking individual scaling differences into account by analyzing profile data with the Mixed Assessor Model

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Schlich, Pascal; Skovgaard, Ib

    2015-01-01

    ... are deduced that include scaling differences in the error term to the proper extent. A meta study of 8619 sensory attributes from 369 sensory profile data sets from SensoBase (www.sensobase.fr) is conducted. In 45.3% of all attributes scaling heterogeneity is present (P-value ...). The Mixed Assessor Model properly takes this into account by a simple inclusion of the product averages as a covariate in the modeling, allowing the covariate regression coefficients to depend on the assessor. This gives a more powerful analysis, by removing the scaling difference from the error term, and proper confidence limits. ... Of the attributes having a product difference P-value in an intermediate range by the traditional approach, the new approach resulted in a clearly more significant result for 42.3% of these cases. Overall, the new approach claimed a significant product difference (P-value ...

  14. Investigation of proper modeling of very dense granular flows in the recirculation system of CFBs

    Institute of Scientific and Technical Information of China (English)

    Aristeidis Nikolopoulos; Nikos Nikolopoulos; Nikos Varveris; Sotirios Karellas; Panagiotis Grammelis; Emmanuel Kakaras

    2012-01-01

    The aim of this paper is the development of new models and/or the improvement of existing numerical models used for simulating granular flow in CFB (circulating fluidized bed) recirculation systems. Most recent models follow the TFM (two-fluid model) methodology, but they cannot effectively simulate the inter-particle friction forces in the recirculation system, because the respective stress tensor does not incorporate compressibility of the flow due to the change of effective particle density. As a consequence, the induced normal and shear stresses are not modeled appropriately during the flow of the granular phase in the CFB recirculation system. The failure of conventional models, such as that of von Mises/Coulomb, is mainly caused by a false approximation of the yield criterion, which is not applicable to the CFB recirculation system. The present work adopts an alternative yield function, used for the first time in TFM Eulerian modeling. The proposed model is based on the Pitman-Schaeffer-Gray-Stiles yield criterion. Both the temporal deformation of the solid granular phase and the repose angle that the granular phase forms are more accurately simulated by this model. The numerical results of the proposed model agree well with experimental data, implying that frictional forces are efficiently simulated by the new model.

  15. Accountancy Modeling on Intangible Fixed Assets in Terms of the Main Provisions of International Accounting Standards

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2014-12-01

    Full Text Available Intangible fixed assets are of great importance for the progress of economic units. In recent years new approaches have been developed, in addition to the old standards, so that intangible assets have gained a reputation both in the economic environment and in academia. We intend to develop a practical study on the main approaches to the accounting modeling of intangibles and their impact on brand development at the research company PRORESEARCH SRL.

  16. The financial accounting model from a system dynamics' perspective

    NARCIS (Netherlands)

    Melse, E.

    2006-01-01

    This paper explores the foundation of the financial accounting model. We examine the properties of the accounting equation as the principal algorithm for the design and the development of a System Dynamics model. Key to the perspective is the foundational requirement that resolves the temporal

  19. Linear indices in nonlinear structural equation models : best fitting proper indices and other composites

    NARCIS (Netherlands)

    Dijkstra, T.K.; Henseler, J.

    2011-01-01

    The recent advent of nonlinear structural equation models with indices poses a new challenge to the measurement of scientific constructs. We discuss, exemplify and add to a family of statistical methods aimed at creating linear indices, and compare their suitability in a complex path model with line

  20. What is a Proper Resolution of Weather Radar Precipitation Estimates for Urban Drainage Modelling?

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Rasmussen, Michael R.; Thorndahl, Søren Liedtke

    2012-01-01

    The resolution of distributed rainfall input for drainage models is the topic of this paper. The study is based on data from high resolution X-band weather radar used together with an urban drainage model of a medium size Danish village. The flow, total run-off volume and CSO volume are evaluated...

  1. Yuan Exchange Rate 'Properly Adjusted'

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The currency exchange rate was "properly adjusted" this year and takes into account effects on the country's neighbors and the world, Premier Wen Jiabao said at a regional meeting in Malaysia.

  2. Modelling studies to proper size a hydrogen generator for fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Maggio, G.; Recupero, V.; Di Leonardo, R.; Lagana, M. [Istituto CNR-TAE, Lucia, Messina (Italy)

    1996-12-31

    Based upon an extensive survey of the literature, a mathematical model has been developed to study the temperature profile along the catalytic bed of a reactor for the partial oxidation of methane. The model allowed a preliminary design of a 5 Nm{sup 3} syngas/h prototype to be integrated with second generation fuel cells as a hydrogen generator (in the framework of the EC-JOU2 contract). This design was based on some target features, including the choice of a GHSV (gas hourly space velocity) equal to 80000 h{sup -1}, a catalyst particle size of 1/8 inch, a molar air/methane ratio of 2.7 (i.e. O{sub 2}/CH{sub 4}=0.53), a linear velocity in the catalytic bed of about 2 m/sec, and an inert/catalyst ratio of 3:1. Starting from these data, the work has been concerned with the identification of the controlling regime (kinetic or diffusional), and then with the estimation of the gas composition and temperature profiles along the reactor. A comparison between experimental and model results has also been accomplished.
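
    The target figures quoted above (GHSV, linear velocity) fix the rough dimensions of the catalytic bed. A back-of-the-envelope sketch, assuming the GHSV is referred to a gas flow taken here as 5 Nm³/h (our assumption; the actual design basis of the prototype may differ):

    ```python
    # Back-of-the-envelope bed sizing from the target figures quoted in the record.
    Q_nm3_h = 5.0            # gas flow, Nm^3/h (assumed design basis)
    GHSV = 80000.0           # gas hourly space velocity, 1/h
    u = 2.0                  # linear velocity in the bed, m/s

    V_bed = Q_nm3_h / GHSV           # catalyst bed volume, m^3 (GHSV = Q / V)
    V_bed_cm3 = V_bed * 1e6          # = 62.5 cm^3
    A = (Q_nm3_h / 3600.0) / u       # cross-section, m^2 (flow per second / velocity)
    L_bed = V_bed / A                # bed length, m (= 0.09 m)
    ```

    In reality the linear velocity refers to the hot gas at operating conditions rather than normal conditions, so the cross-section and length are only indicative; the point is that GHSV alone already pins the bed volume to tens of cubic centimetres.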

  3. A low-cost, goal-oriented ‘compact proper orthogonal decomposition’ basis for model reduction of static systems

    KAUST Repository

    Carlberg, Kevin

    2010-12-10

    A novel model reduction technique for static systems is presented. The method is developed using a goal-oriented framework, and it extends the concept of snapshots for proper orthogonal decomposition (POD) to include (sensitivity) derivatives of the state with respect to system input parameters. The resulting reduced-order model generates accurate approximations due to its goal-oriented construction and the explicit 'training' of the model for parameter changes. The model is less computationally expensive to construct than typical POD approaches, since efficient multiple right-hand side solvers can be used to compute the sensitivity derivatives. The effectiveness of the method is demonstrated on a parameterized aerospace structure problem. © 2010 John Wiley & Sons, Ltd.
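
    The idea of extending the POD snapshot set with sensitivity derivatives can be sketched as follows; the snapshot data here are random placeholders, not the aerospace problem of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 6   # state dimension, number of sampled input parameters

    # Hypothetical snapshot data: states u(mu_i) and their parameter sensitivities du/dmu
    U = rng.normal(size=(n, p))          # state snapshots (columns)
    dU = rng.normal(size=(n, p)) * 0.1   # sensitivity-derivative snapshots

    # Extended snapshot matrix: the POD basis is built from states AND sensitivities,
    # as in goal-oriented / compact-POD variants.
    X = np.hstack([U, dU])
    Phi, s, _ = np.linalg.svd(X, full_matrices=False)

    # Truncate by captured "energy" (cumulative squared singular values)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.99) + 1)
    basis = Phi[:, :k]                   # orthonormal reduced basis, n x k
    ```

    Including the derivative columns makes the basis sensitive to parameter changes at little extra cost, which is the mechanism the abstract describes.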

  4. Can Ising model and/or QKPZ equation properly describe reactive-wetting interface dynamics?

    Science.gov (United States)

    Efraim, Yael; Taitelbaum, Haim

    2009-09-01

    The reactive-wetting process, e.g. spreading of a liquid droplet on a reactive substrate, is known as a complex, non-linear process with high sensitivity to minor fluctuations. The dynamics and geometry of the interface (triple line) between the materials is supposed to shed light on the main mechanisms of the process. We recently studied a room temperature reactive-wetting system of a small (˜ 150 μm) Hg droplet that spreads on a thin (˜ 4000 Å) Ag substrate. We calculated the kinetic roughening exponents (growth and roughness), as well as the persistence exponent of points on the advancing interface. In this paper we address the question whether there exists a well-defined model to describe the interface dynamics of this system, by performing two sets of numerical simulations. The first one is a simulation of an interface propagating according to the QKPZ equation, and the second one is a landscape of an Ising chain with ferromagnetic interactions at zero temperature. We show that neither of these models gives a full description of the dynamics of the experimental reactive-wetting system, but each one of them has certain growth properties in common with it. We conjecture that this results from a microscopic behavior different from the macroscopic one. The microscopic mechanism, reflected by the persistence exponent, resembles the Ising behavior, while on the macroscopic scale, exemplified by the growth exponent, the dynamics looks more like the QKPZ dynamics.

  5. On the proper Mach number and ratio of specific heats for modeling the Venus bow shock

    Science.gov (United States)

    Tatrallyay, M.; Russell, C. T.; Luhmann, J. G.; Barnes, A.; Mihalov, J. D.

    1984-01-01

    Observational data from the Pioneer Venus Orbiter are used to investigate the physical characteristics of the Venus bow shock, and to explore some general issues in the numerical simulation of collisionless shocks. It is found that since equations from gas-dynamic (GD) models of the Venus shock cannot in general replace MHD equations, it is not immediately obvious what the optimum way is to describe the desired MHD situation with a GD code. Test case analysis shows that for quasi-perpendicular shocks it is safest to use the magnetosonic Mach number as an input to the GD code. It is also shown that when comparing GD-predicted temperatures with MHD-predicted temperatures, total energy should be compared, since the magnetic energy density provides a significant fraction of the internal energy of the MHD fluid for typical solar wind parameters. Some conclusions are also offered on the properties of the terrestrial shock.

  6. Chaotic vibrations of circular cylindrical shells: Galerkin versus reduced-order models via the proper orthogonal decomposition method

    Science.gov (United States)

    Amabili, M.; Sarkar, A.; Païdoussis, M. P.

    2006-03-01

    The geometric nonlinear response of a water-filled, simply supported circular cylindrical shell to harmonic excitation in the spectral neighbourhood of the fundamental natural frequency is investigated. The response is investigated for a fixed excitation frequency by using the excitation amplitude as bifurcation parameter for a wide range of variation. Bifurcation diagrams of Poincaré maps obtained from direct time integration and calculation of the Lyapunov exponents and Lyapunov dimension have been used to study the system. By increasing the excitation amplitude, the response undergoes (i) a period-doubling bifurcation, (ii) subharmonic response, (iii) quasi-periodic response and (iv) chaotic behaviour with up to 16 positive Lyapunov exponents (hyperchaos). The model is based on Donnell's nonlinear shallow-shell theory, and the reference solution is obtained by the Galerkin method. The proper orthogonal decomposition (POD) method is used to extract proper orthogonal modes that describe the system behaviour from time-series response data. These time series have been obtained via the conventional Galerkin approach (using normal modes as a projection basis) with an accurate model involving 16 degrees of freedom (dofs), validated in previous studies. The POD method, in conjunction with the Galerkin approach, permits building a lower-dimensional model compared to those obtainable via the conventional Galerkin approach. Periodic and quasi-periodic response around the fundamental resonance for fixed excitation amplitude can be very successfully simulated with a 3-dof reduced-order model. However, in the case of large variation of the excitation, even a 5-dof reduced-order model is not fully accurate. Results show that the POD methodology is not as "robust" as the Galerkin method.

  7. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Liliana Feleaga

    2006-06-01

    Full Text Available The accounting procedures cannot be analyzed without a prior evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value comes from its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: the historical cost, the current cost, the realisable (settlement) value, and the present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The various accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  8. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Niculae Feleaga

    2006-04-01

    Full Text Available The accounting procedures cannot be analyzed without a prior evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value comes from its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: the historical cost, the current cost, the realisable (settlement) value, and the present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The various accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  9. Projection on Proper elements for code control: Verification, numerical convergence, and reduced models. Application to plasma turbulence simulations

    Science.gov (United States)

    Cartier-Michaud, T.; Ghendrih, P.; Sarazin, Y.; Abiteboul, J.; Bufferand, H.; Dif-Pradalier, G.; Garbet, X.; Grandgirard, V.; Latu, G.; Norscini, C.; Passeron, C.; Tamain, P.

    2016-02-01

    The Projection on Proper elements (PoPe) is a novel method of code control dedicated to (1) checking the correct implementation of models, (2) determining the convergence of numerical methods, and (3) characterizing the residual errors of any given solution at very low cost. The basic idea is to establish a bijection between a simulation and a set of equations that generate it. Recovering equations is direct and relies on a statistical measure of the weight of the various operators. This method can be used in any number of dimensions and any regime, including chaotic ones. This method also provides a procedure to design reduced models and quantify its ratio of cost to benefit. PoPe is applied to a kinetic and a fluid code of plasma turbulence.

  10. Model reduction in coupled groundwater-surface water systems - potentials and limitations of the applied proper orthogonal decomposition (POD) method

    Science.gov (United States)

    Gosses, Moritz; Moore, Catherine; Wöhling, Thomas

    2016-04-01

    The complexity of many groundwater-surface water models often results in long model run times even on today's computer systems. This becomes even more problematic in combination with the necessity of (many) repeated model runs for parameter estimation and later model purposes like predictive uncertainty analysis or monitoring network optimization. Model complexity reduction is a promising approach to reduce the computational effort of physically-based models. Its impact on the conservation of uncertainty as determined by the (more) complex model is not well known, though. A potential under-estimation of predictive uncertainty has, however, a significant impact on model applications such as model-based monitoring network optimization. Can we use model reduction techniques to significantly reduce run times of highly complex groundwater models and yet estimate accurate uncertainty levels? Our planned research project hopes to assess this question and apply model reduction to non-linear groundwater systems. Several encouraging model simplification methods have been developed in recent years. To analyze their respective performance, we will choose three different model reduction methods and apply them to both synthetic and real-world test cases to benchmark their computational efficiency and prediction accuracy. The three methods for benchmarking will be proper orthogonal decomposition (POD) (following Siade et al. 2010), the eigenmodel method (Sahuquillo et al. 1983) and inversion-based upscaling (Doherty and Christensen, 2011). In a further step, efficient model reduction methods for application to non-linear groundwater-surface water systems will be developed and applied to monitoring network optimization. In a first step we present here one variant of the implementation and benchmarking of the POD method. POD reduces model complexity by working in a subspace of the model matrices resulting from spatial discretization with the same significant eigenvalue spectrum

  11. Stationary flow fields prediction of variable physical domain based on proper orthogonal decomposition and kriging surrogate model

    Institute of Scientific and Technical Information of China (English)

    Qiu Yasong; Bai Junqiang

    2015-01-01

    In this paper a new flow field prediction method, independent of the governing equations, is developed to predict stationary flow fields of a variable physical domain. Predicted flow fields come from the linear superposition of selected basis modes generated by proper orthogonal decomposition (POD). Instead of traditional projection methods, a kriging surrogate model is used to calculate the superposition coefficients by building approximate functional relationships between the profile geometry parameters of the physical domain and these coefficients. In this way, the problems that trouble the traditional POD-projection method due to viscosity and compressibility are avoided in the whole process. Moreover, there are no constraints on the inner product form, so two simple forms are applied to improve computational efficiency and to cope with the variable physical domain problem. An iterative algorithm is developed to determine how many of the leading basis modes should be used in the prediction. Testing results prove the feasibility of this new method for subsonic flow fields, but also show that it is not proper for transonic flow fields because of the poorly predicted shock waves.
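
    The POD-plus-surrogate pipeline described above can be sketched in a few lines. Since a full kriging implementation is out of scope here, a Gaussian RBF interpolant stands in for the kriging surrogate (our substitution, similar in flavor), and the "flow fields" are synthetic 1-D profiles parameterized by a single geometry parameter:

    ```python
    import numpy as np

    # Hypothetical training set: geometry parameter theta -> flow field snapshot
    theta_train = np.linspace(0.0, 1.0, 9)
    grid = np.linspace(0.0, 1.0, 120)
    snapshots = np.array([np.sin(2 * np.pi * grid * (1 + t)) for t in theta_train]).T  # 120 x 9

    # 1) POD basis from the snapshots
    Phi, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    k = 5
    basis = Phi[:, :k]
    coeffs = basis.T @ snapshots          # k x 9 modal coefficients of the training fields

    # 2) Surrogate mapping theta -> coefficients (RBF interpolation as a kriging stand-in)
    def rbf_predict(theta_new, theta_train, values, eps=5.0):
        K = np.exp(-(eps * (theta_train[:, None] - theta_train[None, :])) ** 2)
        w = np.linalg.solve(K, values.T)  # 9 x k weight matrix
        k_new = np.exp(-(eps * (theta_new - theta_train)) ** 2)
        return k_new @ w                  # predicted modal coefficients

    # 3) Predicted field = linear superposition of POD modes
    u_pred = basis @ rbf_predict(0.37, theta_train, coeffs)
    ```

    No governing equations appear anywhere: the surrogate maps geometry parameters to modal coefficients directly, which is what lets the method sidestep the viscosity/compressibility issues of projection-based POD.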

  12. Stationary flow fields prediction of variable physical domain based on proper orthogonal decomposition and kriging surrogate model

    Directory of Open Access Journals (Sweden)

    Qiu Yasong

    2015-02-01

    Full Text Available In this paper a new flow field prediction method, independent of the governing equations, is developed to predict stationary flow fields of a variable physical domain. Predicted flow fields come from the linear superposition of selected basis modes generated by proper orthogonal decomposition (POD). Instead of traditional projection methods, a kriging surrogate model is used to calculate the superposition coefficients by building approximate functional relationships between the profile geometry parameters of the physical domain and these coefficients. In this way, the problems that trouble the traditional POD-projection method due to viscosity and compressibility are avoided in the whole process. Moreover, there are no constraints on the inner product form, so two simple forms are applied to improve computational efficiency and to cope with the variable physical domain problem. An iterative algorithm is developed to determine how many of the leading basis modes should be used in the prediction. Testing results prove the feasibility of this new method for subsonic flow fields, but also show that it is not proper for transonic flow fields because of the poorly predicted shock waves.

  13. The importance of accounting for the uncertainty of published prognostic model estimates.

    Science.gov (United States)

    Young, Tracey A; Thompson, Simon

    2004-01-01

    Reported is the importance of properly reflecting the uncertainty associated with prognostic model estimates when calculating the survival benefit of a treatment or technology, using liver transplantation as an example. Monte Carlo simulation techniques were used to account for the uncertainty of prognostic model estimates, using the standard errors of the regression coefficients and their correlations. These methods were applied to patients with primary biliary cirrhosis undergoing liver transplantation, using a prognostic model from a historic cohort who did not undergo transplantation. The survival gain over 4 years from transplantation was estimated. Ignoring the uncertainty in the prognostic model, the estimated survival benefit of liver transplantation was 16.7 months (95 percent confidence interval [CI], 13.5 to 20.1), and was statistically significant (p ... It is important that the precision of regression coefficients is available to users of published prognostic models. Ignoring this additional information substantially underestimates uncertainty, which can then impact misleadingly on policy decisions.
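
    The Monte Carlo propagation of coefficient uncertainty can be sketched as below. The coefficients, standard errors, correlation and baseline hazard are all invented, and the exponential survival form is our simplifying assumption, not the model of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical prognostic index eta = b1*x1 + b2*x2. Coefficients, standard
    # errors and correlation are invented stand-ins for published model estimates.
    beta_hat = np.array([0.8, -0.5])
    se = np.array([0.20, 0.15])
    corr = np.array([[1.0, 0.3],
                     [0.3, 1.0]])
    cov = corr * np.outer(se, se)        # covariance of the coefficient estimates

    x = np.array([1.2, 0.7])             # covariates of one hypothetical patient
    base_rate = 0.1                      # assumed exponential baseline hazard, 1/yr

    def survival_4yr(eta):
        return np.exp(-base_rate * 4.0 * np.exp(eta))

    point = survival_4yr(x @ beta_hat)   # plug-in estimate, ignores coefficient uncertainty

    # Monte Carlo: draw coefficient vectors consistent with the reported SEs and
    # correlation, and propagate each draw through the survival model.
    draws = rng.multivariate_normal(beta_hat, cov, size=20000)
    surv = survival_4yr(draws @ x)
    lo, hi = np.percentile(surv, [2.5, 97.5])
    ```

    The interval (lo, hi) is wider than any interval built from the point estimate alone, which is the paper's warning: omitting the coefficient covariance understates the uncertainty of the survival benefit.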

  14. Accounting for Epistemic Uncertainty in PSHA: Logic Tree and Ensemble Model

    Science.gov (United States)

    Taroni, M.; Marzocchi, W.; Selva, J.

    2014-12-01

    The logic tree scheme is the probabilistic framework that has been widely used in the last decades to take into account epistemic uncertainties in probabilistic seismic hazard analysis (PSHA). Notwithstanding the vital importance for PSHA of properly incorporating the epistemic uncertainties, we argue that the use of the logic tree in a PSHA context has conceptual and practical drawbacks. Although some of these drawbacks have been reported in the past, a careful evaluation of their impact on PSHA is still lacking. This is the goal of the present work. In brief, we show that i) PSHA practice does not meet the assumptions that stand behind the logic tree scheme; ii) the output of a logic tree is often misinterpreted and/or misleading, e.g., the use of percentiles (median included) in a logic tree scheme raises theoretical difficulties from a probabilistic point of view; iii) in case the assumptions that stand behind a logic tree are actually met, this leads to several problems in testing any PSHA model. We suggest a different strategy - based on ensemble modeling - to account for epistemic uncertainties in a more proper probabilistic framework. Finally, we show that in many PSHA practical applications, the logic tree is de facto loosely applied to build sound ensemble models.
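
    The contrast between a single weighted-mean hazard curve and an ensemble view of the epistemic spread can be sketched as follows, with invented hazard curves and weights:

    ```python
    import numpy as np

    # Hypothetical hazard curves from three alternative models: probability of
    # exceeding each ground-motion level, plus subjective model weights.
    gm_levels = np.array([0.1, 0.2, 0.4, 0.8])   # PGA, g
    curves = np.array([
        [0.30, 0.12, 0.030, 0.004],
        [0.25, 0.10, 0.025, 0.003],
        [0.40, 0.20, 0.060, 0.010],
    ])
    weights = np.array([0.5, 0.3, 0.2])

    # Logic-tree style point estimate: the weighted mean curve
    mean_curve = weights @ curves

    # Ensemble view: the weighted set of curves defines a distribution of hazard
    # at each level, from which percentiles (epistemic spread) can be read off.
    def weighted_percentile(values, weights, q):
        order = np.argsort(values)
        v, w = values[order], weights[order]
        return v[np.searchsorted(np.cumsum(w), q)]

    p84 = np.array([weighted_percentile(curves[:, j], weights, 0.84)
                    for j in range(gm_levels.size)])
    ```

    The mean curve compresses the three models into one number per level, while the ensemble percentiles retain the spread between models, which is the kind of distributional information the ensemble-modeling strategy exploits.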

  15. Stochastic models in risk theory and management accounting

    NARCIS (Netherlands)

    Brekelmans, R.C.M.

    2000-01-01

    This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds of the probability of ruin for the classical risk process extended with a constant interest

  17. Accountability Analysis of Electronic Commerce Protocols by Finite Automaton Model

    Institute of Scientific and Technical Information of China (English)

    Xie Xiao-yao; Zhang Huan-guo

    2004-01-01

    The accountability of electronic commerce protocols is an important aspect of ensuring the security of electronic transactions. This paper proposes to use the Finite Automaton (FA) model as a new kind of framework to analyze transaction protocols in electronic commerce applications.
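
    As a rough illustration of the finite-automaton idea (not the paper's actual construction), a protocol run can be modeled as a word fed to an automaton, with accountability checked as a property of accepting runs:

```python
# Minimal sketch: model an e-commerce payment exchange as a finite automaton
# and check that no accepting run reaches "delivered" without the "receipt"
# evidence event. States, events, and the flaw are all invented for the example.
TRANSITIONS = {
    ("start",     "order"):   "ordered",
    ("ordered",   "pay"):     "paid",
    ("paid",      "receipt"): "evidenced",
    ("evidenced", "deliver"): "delivered",
    ("paid",      "deliver"): "delivered",   # flawed shortcut: no receipt issued
}
ACCEPTING = {"delivered"}

def run(events, state="start"):
    """Run the automaton over a sequence of protocol events."""
    for e in events:
        state = TRANSITIONS.get((state, e))
        if state is None:
            return None          # no such transition: run rejected
    return state

def accountable(events):
    """A run is accountable if it is accepted AND contains the receipt evidence."""
    return run(events) in ACCEPTING and "receipt" in events

print(accountable(["order", "pay", "receipt", "deliver"]))  # True
print(accountable(["order", "pay", "deliver"]))             # False: accepted, no evidence
```

    In this framing, verifying accountability amounts to checking that every accepted word of the automaton contains the required evidence events.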

  19. The Two-Step Student Teaching Model: Training for Accountability.

    Science.gov (United States)

    Corlett, Donna

    This model of student teaching preparation was developed in collaboration with public schools to focus on systematic experience in teaching and training for accountability in the classroom. In the two-semester plan, students begin with teacher orientation and planning days, serve as teacher aides, attend various methods courses, teach several…

  20. Driving Strategic Risk Planning With Predictive Modelling For Managerial Accounting

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

    Currently, risk management in management/managerial accounting is treated as deterministic. Although it is well known that risk estimates are necessarily uncertain or stochastic, until recently the methodology required to handle stochastic risk-based elements appeared impractical and too mathematical. The ultimate purpose of this paper is to “make the risk concept procedural and analytical” and to argue that accountants should now include stochastic risk management as a standard tool. Drawing on mathematical modelling and statistics, this paper methodically develops a risk analysis approach for managerial accounting and shows how it can be used to determine the impact of different types of risk assessment input parameters on the variability of important outcome measures. The purpose is to: (i) point out the theoretical necessity of a stochastic risk framework; (ii) present a stochastic framework…

  1. Quantum-like models cannot account for the conjunction fallacy

    CERN Document Server

    Boyer-Kassem, Thomas; Guerci, Eric

    2016-01-01

    Human agents happen to judge that a conjunction of two terms is more probable than one of the terms, in contradiction with the rules of classical probabilities---this is the conjunction fallacy. One of the most discussed accounts of this fallacy is currently the quantum-like explanation, which relies on models exploiting the mathematics of quantum mechanics. The aim of this paper is to investigate the empirical adequacy of major such quantum-like models. We first argue that they can be tested in three different ways, in a question order effect configuration which is different from the traditional conjunction fallacy experiment. We then carry out our proposed experiment, with varied methodologies from experimental economics. The experimental results we get are at odds with the predictions of the quantum-like models. This strongly suggests that the quantum-like account of the conjunction fallacy fails. Future possible research paths are discussed.

  2. Application of a predictive Bayesian model to environmental accounting.

    Science.gov (United States)

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.
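
    The flavor of the predictive approach can be sketched as a small simulation: place a distribution on the uncertain accident rate and generate the predictive distribution of annual contingent cost directly, rather than a point estimate. The prior parameters and cost figures below are invented for illustration, not taken from the paper.

```python
import math
import random

random.seed(1)

# Sketch of a predictive-Bayesian contingent cost (all values hypothetical):
# a gamma distribution on the Poisson accident rate, exponential severity.
ALPHA, BETA = 2.0, 4.0            # gamma prior on accidents/year (mean 0.5)
MEAN_SEVERITY = 250_000.0         # mean cost per accident

def poisson(rate):
    """Knuth's Poisson sampler; adequate for small rates."""
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def annual_cost_draw():
    rate = random.gammavariate(ALPHA, 1.0 / BETA)   # uncertain accident rate
    n = poisson(rate)                               # accidents this year
    return sum(random.expovariate(1.0 / MEAN_SEVERITY) for _ in range(n))

draws = [annual_cost_draw() for _ in range(20000)]
print(round(sum(draws) / len(draws)))   # approx. 0.5 * 250,000 = 125,000
```

    The output of interest is the whole distribution of `draws`, from which expected costs, tail risks, or other risk measures can be compared across decisions such as an accelerated phase-out.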

  3. Proper orientation of cacti

    OpenAIRE

    Araujo, Julio; Havet, Frédéric; Linhares Sales, Claudia; Silva, Ana

    2016-01-01

    International audience; An orientation of a graph G is proper if any two adjacent vertices have different in-degrees. The proper-orientation number →χ(G) of a graph G is the minimum, over all proper orientations of G, of the maximum in-degree. In [1], the authors ask whether the proper-orientation number of a planar graph is bounded. We prove that every cactus admits a proper orientation with maximum in-degree at most 7. We also prove that the bound 7 is tight by showing a cactus having no proper orientati...
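
    The central definition can be made concrete in a few lines: an orientation is proper when adjacent vertices receive different in-degrees, and the quantity being minimized is the maximum in-degree.

```python
# Check whether an orientation of a graph on n vertices is "proper"
# (adjacent vertices get different in-degrees) and report its max in-degree.
def is_proper(n, oriented_edges):
    indeg = [0] * n
    for u, v in oriented_edges:          # edge oriented u -> v
        indeg[v] += 1
    proper = all(indeg[u] != indeg[v] for u, v in oriented_edges)
    return proper, max(indeg)

# Path on 3 vertices 0-1-2, oriented 0->1 and 2->1: in-degrees 0, 2, 0.
ok, max_in = is_proper(3, [(0, 1), (2, 1)])
print(ok, max_in)   # True 2

# Orienting the same path 0->1->2 gives in-degrees 0, 1, 1: not proper.
print(is_proper(3, [(0, 1), (1, 2)])[0])   # False
```

    The paper's result says that for cacti some proper orientation always exists whose reported `max_in` is at most 7, and that 7 cannot be improved.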

  4. Modeling of Accounting Doctoral Thesis with Emphasis on Solution for Financial Problems

    Directory of Open Access Journals (Sweden)

    F. Mansoori

    2015-02-01

    Full Text Available With the growth of graduate programs and research budgets, accounting scholarship in Iran has moved into applied research, and a number of accounting projects have been implemented in the real world, yielding varied experience in applying accounting standards. These experiences were expected to help solve the country's financial problems; yet, despite considerable research effort, many financial and accounting problems remain. PhD dissertations could be one of the important means of advancing university disciplines, including accounting: they are team efforts legitimated by supervisory committees at universities. Applied dissertations ought to solve part of the problems in the accounting field, but in practice they have not. This raises the question of why the output of applied, knowledge-based projects has failed to dispel these problems, and why policymakers in difficult situations prefer to rely on their own prior experience in important decisions rather than on knowledge-based recommendations from consultants. This study examines the reasons why applied PhD dissertations fail to succeed in the real world, including the view that policy recommendations derived from knowledge-based projects are not of sufficient quality for implementation. For this purpose, the indicators of an applied PhD dissertation were identified and ranked by 110 experts, and a comprehensive comparative study of applied PhD accounting dissertations was carried out. As a result, the shortcomings of the studied research were identified and a proper, applied model for producing applied research was developed.

  5. Accounting for Business Models: Increasing the Visibility of Stakeholders

    Directory of Open Access Journals (Sweden)

    Colin Haslam

    2015-01-01

    Full Text Available Purpose: This paper conceptualises a firm's business model, employing stakeholder theory as a central organising element to help inform the purpose and objective(s) of business model financial reporting and disclosure. Framework: Firms interact with a complex network of primary and secondary stakeholders to secure the value proposition of the firm's business model. This value proposition is itself a complex amalgam of value-creating, value-capturing and value-manipulating arrangements with stakeholders. From a financial accounting perspective, the purpose of the value proposition of a firm's business model is to sustain liquidity and solvency as a going concern. Findings: This article argues that stakeholder relations impact the financial viability of the value proposition of a firm's business model. However, current financial reporting by function of expenses, and the central organising objectives of the accounting conceptual framework, conceal firm-stakeholder relations and their impact on reported financials. Practical implications: 'Business model' financial reporting would require a reorientation of the accounting conceptual framework that defines the objectives and purpose of financial reporting. This reorientation would involve reporting on stakeholder relations and their impact on a firm's financials, not simply reporting financial information to 'investors'. Social implications: Business model financial reporting has the potential to be stakeholder-inclusive, because the numbers and narratives reported by firms in their annual financial statements will increase the visibility of stakeholder relations and how these are being managed. Originality/value: This paper's original perspective is that a firm's business model is structured out of stakeholder relations. It presents the firm's value proposition as the product of value creating, capturing and

  6. Optimal control design that accounts for model mismatch errors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.J. [Sandia National Labs., Albuquerque, NM (United States); Hull, D.G. [Texas Univ., Austin, TX (United States). Dept. of Aerospace Engineering and Engineering Mechanics

    1995-02-01

    A new technique is presented in this paper that reduces the complexity of state differential equations while accounting for modeling assumptions. The mismatch controls are defined as the differences between the model equations and the true state equations. The performance index of the optimal control problem is formulated with a set of tuning parameters that are user-selected to tune the control solution in order to achieve the best results. Computer simulations demonstrate that the tuned control law outperforms the untuned controller and produces results that are comparable to a numerically-determined, piecewise-linear optimal controller.

  7. Accounting for microbial habitats in modeling soil organic matter dynamics

    Science.gov (United States)

    Chenu, Claire; Garnier, Patricia; Nunan, Naoise; Pot, Valérie; Raynaud, Xavier; Vieublé, Laure; Otten, Wilfred; Falconer, Ruth; Monga, Olivier

    2017-04-01

    The extreme heterogeneity of soil constituents, architecture and inhabitants at the microscopic scale is increasingly recognized. Microbial communities exist and are active in a complex 3-D physical framework of mineral and organic particles defining pores of various sizes, more or less inter-connected. This results in a frequent spatial disconnection between soil carbon, energy sources and the decomposer organisms, and in a variety of microhabitats that are more or less suitable for microbial growth and activity. However, current biogeochemical models account for C dynamics at the macroscale (cm, m) and consider time- and spatially averaged relationships between microbial activity and soil characteristics. Different modelling approaches have attempted to account for this microscale heterogeneity, based on considering either aggregates or pores as surrogates for microbial habitats. Innovative modelling approaches are based on an explicit representation of soil structure at the fine scale, i.e. at µm to mm scales: pore architecture and its saturation with water, and the localization of organic resources and of microorganisms. Three recent models are presented here (Mosaic, LBios and µFun) that describe the heterotrophic activity of either bacteria or fungi and are based on different strategies to represent the complex soil pore system. These models make it possible to rank the factors governing microbial activity in the soil's heterogeneous architecture. The present limits of these approaches, and the challenges ahead, are discussed, regarding the extensive information required on soils at the microscale and the need to upscale microbial functioning from the pore to the core scale.

  8. Accommodating environmental variation in population models: metaphysiological biomass loss accounting.

    Science.gov (United States)

    Owen-Smith, Norman

    2011-07-01

    1. There is a pressing need for population models that can reliably predict responses to changing environmental conditions and diagnose the causes of variation in abundance in space as well as through time. In this 'how to' article, it is outlined how standard population models can be modified to accommodate environmental variation in a heuristically conducive way. This approach is based on metaphysiological modelling concepts linking populations within food web contexts and underlying behaviour governing resource selection. Using population biomass as the currency, population changes can be considered at fine temporal scales taking into account seasonal variation. Density feedbacks are generated through the seasonal depression of resources even in the absence of interference competition. 2. Examples described include (i) metaphysiological modifications of Lotka-Volterra equations for coupled consumer-resource dynamics, accommodating seasonal variation in resource quality as well as availability, resource-dependent mortality and additive predation, (ii) spatial variation in habitat suitability evident from the population abundance attained, taking into account resource heterogeneity and consumer choice using empirical data, (iii) accommodating population structure through the variable sensitivity of life-history stages to resource deficiencies, affecting susceptibility to oscillatory dynamics and (iv) expansion of density-dependent equations to accommodate various biomass losses reducing population growth rate below its potential, including reductions in reproductive outputs. Supporting computational code and parameter values are provided. 3. The essential features of metaphysiological population models include (i) the biomass currency enabling within-year dynamics to be represented appropriately, (ii) distinguishing various processes reducing population growth below its potential, (iii) structural consistency in the representation of interacting populations and

  9. Discrete Model of Ideological Struggle Accounting for Migration

    CERN Document Server

    Vitanov, Nikolay K; Rotundo, Giulia

    2012-01-01

    A discrete-time model of ideological competition is formulated, taking into account population migration. The model is based on interactions between global populations of non-believers and followers of different ideologies. The complex dynamics of the attracting manifolds is investigated. Conversion from one ideology to another by means of (i) mass media influence and (ii) interpersonal relations is considered. Moreover, a different birth rate is assumed for each ideology, the rate being positive for the reference population, made of initially non-believers. Ideological competition can happen in one or several regions in space. In the latter case, migration of non-believers and adepts is allowed; this leads to an enrichment of the ideological dynamics. Finally, the current ideological situation in the Arab countries and China is commented upon from the point of view of the presently developed mathematical model. The massive forced conversion by Ottoman Turks in the Balkans is briefly dis...
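
    A minimal discrete-time sketch in the spirit of the model (with invented parameters, and a single region without migration) shows how mass-media and interpersonal conversion terms move people from the non-believer population to a follower population:

```python
# Non-believers N and followers F of one ideology: conversion by mass media
# (rate a) and by interpersonal contact (rate b, scaled by the fraction of
# followers encountered); positive birth rate only for the reference
# population of non-believers. All parameter values are illustrative.
def step(N, F, a=0.01, b=0.05, birth=0.02):
    total = N + F
    converts = a * N + b * N * F / total   # media + interpersonal conversion
    N_next = N + birth * N - converts
    F_next = F + converts
    return N_next, F_next

N, F = 1000.0, 10.0
for _ in range(200):
    N, F = step(N, F)
print(round(N), round(F))
```

    Iterating the map shows the follower population growing at the expense of non-believers once interpersonal conversion outweighs the birth rate, the kind of attracting behaviour the abstract refers to.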

  10. Accounting models and devolution in the Italian public sector

    Directory of Open Access Journals (Sweden)

    Aldo Pavan

    2006-06-01

    Full Text Available In the 1990s Italy started a public sector administrative reform process consistent, in general terms, with the New Public Management movement. In particular, changes were introduced in the budgeting and accounting systems of the State, municipalities, health care bodies, etc. In the same years an institutional reform also started, and a strong power devolution process began to be realised; a shift to a federal form of the State seems to be the goal. Starting from the challenges arising from the devolution process, the article asks (1) whether it is possible to find shared features in the reformed accounting systems of the different categories of public sector organisations, and in this way to shape one or more Italian accounting models, and (2) whether these models have an information capacity adequate to sustain the information needs - in terms of accountability, government co-ordination and decision making - emerging from the devolution process. The information needs in a devolved environment are identified; eleven budgeting and accounting systems are analysed and compared. The issue of the level of consistency between the accounting and institutional reforms is also discussed.

  11. Accounting for Trust: A Conceptual Model for the Determinants of Trust in the Australian Public Accountant – SME Client Relationship

    Directory of Open Access Journals (Sweden)

    Michael Cherry

    2016-06-01

    Full Text Available This paper investigates trust as it relates to the relationship between Australia’s public accountants and their SME clients. It describes the contribution of the accountancy profession to the SME market, as well as the key challenges faced by accountants and their SME clients. Following the review of prior scholarly studies, a working definition of trust as it relates to this important relationship is also developed and presented. A further consequence of prior academic work is the development of a comprehensive conceptual model to describe the determinants of trust in the Australian public accountant – SME client relationship, which requires testing via empirical studies.

  12. Relations between Balance Sheet Policy and Accounting Policy in the Context of Different Accounting Models

    Directory of Open Access Journals (Sweden)

    Rafał Grabowski

    2010-12-01

    Full Text Available In Polish professional literature the terms balance sheet policy (in German: Bilanzpolitik) and accounting policy are commonly used. The problem raised by the author stems from the fact that there exist at least a few perspectives on their meaning and their relations with each other. In some opinions balance sheet policy and accounting policy represent the same issues; in others there are differences between the two, although there is no consensus as to the nature of those differences. This lack of clarity about how balance sheet policy and accounting policy are explained, and how they relate, represents a research problem for both theory and practice. Theory is required to codify the academic debate and systematize the terminology, while in practice it is the management board that holds responsibility for a financial statement that is determined by the accounting policy adopted by the entity. In this working paper the author has tried to point out the substance of balance sheet policy and accounting policy, as well as to explain the existing differences between them. Although the topic has already been discussed in professional literature, there have been no attempts to explain the substance of the two policies and their relations by making reference to their origin, i.e. the accounting approaches from which they evolved.

  13. A parametric ribcage geometry model accounting for variations among the adult population.

    Science.gov (United States)

    Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2016-09-06

    The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, the sternum, and the thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized Procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models, a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape, were developed. The results showed that the ribcage size was dominated by height (p=0.000) and the age-sex interaction (p=0.007), and the ribcage shape was significantly affected by age (p=0.0005), sex (p=0.0002), height (p=0.0064) and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying effects of human characteristics on thoracic injury risks.
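
    The statistical pipeline (PCA on landmark coordinates, then regression of component scores on covariates) can be sketched on synthetic data; the dimensions and covariate effects below are fabricated stand-ins for the CT-derived landmarks, and the Procrustes alignment step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# PCA on flattened landmark coordinates, then linear regression predicting
# the principal-component scores from covariates (age, sex, height, BMI).
n_subj, n_coords = 101, 60                 # 20 pseudo-landmarks in 3-D
covars = rng.normal(size=(n_subj, 4))      # standardized age, sex, height, BMI
true_map = rng.normal(size=(4, n_coords)) * 0.5
shapes = covars @ true_map + rng.normal(scale=0.1, size=(n_subj, n_coords))

mean_shape = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
scores = U * S                             # PC scores, one row per subject

X = np.column_stack([np.ones(n_subj), covars])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)   # regress scores on covariates

def predict_shape(age, sex, height, bmi):
    """Predict landmark coordinates for given (standardized) covariates."""
    z = np.array([1.0, age, sex, height, bmi]) @ coef   # predicted PC scores
    return mean_shape + z @ Vt                          # back to landmark space

errs_model = np.array([np.linalg.norm(predict_shape(*c) - s)
                       for c, s in zip(covars, shapes)])
errs_mean = np.linalg.norm(shapes - mean_shape, axis=1)
print(errs_model.mean() < errs_mean.mean())   # covariates explain most variation
```

    The paper's model additionally splits the prediction into a quadratic size model and a linear shape model; the sketch above collapses both into one linear regression for brevity.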

  14. Meander migration modeling accounting for the effect of riparian vegetation

    Science.gov (United States)

    Eke, E.; Parker, G.

    2010-12-01

    A numerical model is proposed to study the development of meandering rivers so as to reproduce patterns of both migration and spatial/temporal width variation observed in nature. The model comprises: a) a depth-averaged channel hydrodynamic/morphodynamic model developed using a two-parameter perturbation expansion technique that considers perturbations induced by curvature and spatial channel width variation, and b) a bank migration model which separately considers bank erosional and depositional processes. Unlike most previous meandering river models, where channel migration is characterized only in terms of bank erosion, channel dynamics are here defined at the channel banks, which are allowed to migrate independently via deposition/erosion based on the local flow field and bank characteristics. A bank erodes (deposits) if the near-bank Shields stress computed from the flow field is greater (less) than a specified threshold. This threshold Shields number is equivalent to the formative Shields stress characterizing bankfull flow. Excessive bank erosion is controlled by means of natural armoring provided by cohesive/rooted slump blocks produced when a stream erodes into the lower non-cohesive part of a composite bank. Bank deposition is largely due to sediment trapping by vegetation; the resultant channel narrowing is related to both a natural rate of vegetal encroachment and flow characteristics. This new model allows the channel freedom to vary in width both spatially and in time as it migrates, so accounting for the bi-directional coupling between vegetation and flow dynamics and reproducing more realistic planform geometries. Preliminary results based on the model are presented.

  15. Generalized Stefan models accounting for a discontinuous temperature field

    Science.gov (United States)

    Danescu, A.

    We construct a class of generalized Stefan models able to account for a discontinuous temperature field across a nonmaterial interface. The resulting theory introduces a constitutive scalar interfacial field, denoted θ̄ and called the equivalent temperature of the interface. A classical procedure, based on the interfacial dissipation inequality, relates the interfacial energy release to the interfacial mass flux and restricts the equivalent temperature of the interface. We show that previously proposed theories are obtained as particular cases when θ̄ = ⟨θ⟩ or θ̄ = ⟨1/θ⟩⁻¹ or, more generally, when θ̄ = ⟨θ^r⟩⟨1/θ^(1-r)⟩⁻¹ for 0 ≤ r ≤ 1. We study, in a particular constitutive framework, the solidification of an under-cooled liquid, and we are able to give a sufficient condition for the existence of travelling-wave solutions.

  16. Natural Resource Accounting Systems and Environmental Policy Modeling

    OpenAIRE

    Richard Cabe; Johnson, Stanley R

    1990-01-01

    Natural resource accounting combines national income and product accounting concepts with the analysis of natural resource and environmental issues. This paper considers this approach for the RCA Appraisal required by the Soil and Water Resources Conservation Act (RCA). Recent natural resource accounting literature is examined in light of the requirements of the RCA Appraisal. The paper provides a critique of the economic content of the Second RCA Appraisal and develops a natural resource accounting ...

  17. Accounting for Water Insecurity in Modeling Domestic Water Demand

    Science.gov (United States)

    Galaitsis, S. E.; Huber-lee, A. T.; Vogel, R. M.; Naumova, E.

    2013-12-01

    Water demand management uses price elasticity estimates to predict consumer demand in response to water pricing changes, but studies have shown that many additional factors affect water consumption. Development scholars document the need for water security; however, much of the water security literature focuses on broad policies that can influence water demand. Previous domestic water demand studies have not considered how water security can affect a population's consumption behavior. This study is the first to model the influence of water insecurity on water demand. A subjective indicator scale measuring water insecurity among consumers in the Palestinian West Bank is developed and included as a variable to explore how perceptions of control, or lack thereof, impact consumption behavior and the resulting estimates of price elasticity. A multivariate regression model demonstrates the significance of a water insecurity variable for data sets encompassing disparate levels of water access. When accounting for insecurity, the R-squared value improves and the marginal price a household is willing to pay becomes a significant predictor of household quantity consumed. The model shows that, with all other variables held equal, a household will buy more water when its users are more water insecure. Though the reasons behind this trend require further study, the findings suggest broad policy implications by demonstrating that water distribution practices in scarcity conditions can promote consumer welfare and efficient water use.
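
    The regression point (adding a subjective insecurity index improves fit and predicts higher consumption, all else equal) can be reproduced on synthetic data; the coefficients and data-generating process below are invented, not the survey's.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic household data: demand depends on price, income, and a
# subjective water-insecurity index (all effect sizes illustrative).
n = 500
price = rng.uniform(1, 5, n)                  # marginal price of water
income = rng.normal(10, 2, n)
insecurity = rng.uniform(0, 1, n)             # subjective insecurity index
# households consume more when insecure, all else equal
quantity = (20 - 1.5 * price + 0.8 * income
            + 4.0 * insecurity + rng.normal(0, 1, n))

def r_squared(X, y):
    """Ordinary least squares fit; returns the coefficient of determination."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_without = r_squared(np.column_stack([price, income]), quantity)
r2_with = r_squared(np.column_stack([price, income, insecurity]), quantity)
print(r2_with > r2_without)   # adding insecurity improves the fit
```

    Omitting the insecurity variable here also biases the apparent price elasticity whenever insecurity correlates with price, which is the methodological point behind including it.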

  18. Capture-recapture survival models taking account of transients

    Science.gov (United States)

    Pradel, R.; Hines, J.E.; Lebreton, J.D.; Nichols, J.D.

    1997-01-01

    The presence of transient animals, common enough in natural populations, invalidates the estimation of survival by traditional capture-recapture (CR) models designed for the study of residents only. Also, the study of transience is interesting in itself. We thus develop here a class of CR models to describe the presence of transients. In order to assess the merits of this approach we examine the bias of the traditional survival estimators in the presence of transients in relation to the power of different tests for detecting transients. We also compare the relative efficiency of an ad hoc approach to dealing with transients that leaves out the first observation of each animal. We then study a real example using lazuli bunting (Passerina amoena) and, in conclusion, discuss the design of an experiment aiming at the estimation of transience. In practice, the presence of transients is easily detected whenever the risk of bias is high. The ad hoc approach, which yields unbiased estimates for residents only, is satisfactory in a time-dependent context but poorly efficient when parameters are constant. The example shows that intermediate situations between strict 'residence' and strict 'transience' may exist in certain studies. Yet, most of the time, if the study design takes into account the expected length of stay of a transient, it should be possible to efficiently separate the two categories of animals.

  19. Islamic Theoretical Intertemporal Model of the Current Account

    Directory of Open Access Journals (Sweden)

    Hassan Belkacem Ghassan

    2016-06-01

    Full Text Available This paper aims to develop an Islamic intertemporal model of the current account based on the prevailing theoretical and empirical literature on the PVMCA (Obstfeld and Rogoff, 1996; Cerrato et al., 2014). The proposed model is based on the budget constraint of present and future consumption, which depends on the obligatory Zakat on income and assets, the return rate on owned assets, and the inheritance linking each generation to the next. Using a logarithmic utility function, featuring a unitary elasticity of intertemporal substitution and a unitary coefficient of relative risk aversion, we show through the Euler equation of consumption that there is an inverse relationship between consumption growth from the last age to the first and the Zakat rate on assets. The implications of this result are that the Zakat on assets disciplines the consumer toward more rational consumption, and leaves additional marginal assets for future generations. Assuming a unitary subjective discount rate, we show that the higher the return rate on assets, the faster consumption grows between today and tomorrow. Through the budget constraint, if the Zakat rate on Zakatable assets is greater than the Zakat rate on income, this leads to a relative expansion in the private consumption of the wealthy group. Besides, we point out that an increase in the return rate on assets can drive current consumption up or down, because the substitution and income effects work in opposite ways.

  20. A tulajdonnevek pszicho- és neurolingvisztikája. Vizsgálati szempontok és modellek a tulajdonnevek feldolgozásáról [The psycho- and neurolinguistics of proper names. Aspects and models of analysis on processing proper names]

    Directory of Open Access Journals (Sweden)

    Reszegi, Katalin

    2014-12-01

    Full Text Available This paper provides an overview of the results of psycho- and neurolinguistic examinations into the mental processes involving proper names (i.e. storing, processing and retrieving proper names). We can denote entities of various types with the help of proper names, and although most of these types are universal, there are in fact some cultural differences. In the fields of science concerned, that is, in psycho- and neurolinguistics and in neuropsychology, attention is given almost exclusively to anthroponyms; mental and neurological features of toponyms and other name types are much less examined. Processing names is generally believed to display more difficulties than processing common nouns, and these difficulties present themselves more and more strongly with age. In connection with the special identifying function and semantic features of proper names, many researchers assume that we process the two groups of words in different ways. This paper, reflecting also on these assumptions, summarizes and explains the results of research into a) selective anomia affecting monolingual speakers (word-finding disturbances); b) localization; c) reaction time measurement; and d) speech disfluency concerning proper names (especially the “tip of the tongue” phenomenon). The author also presents the models of processing proper names, examining to what degree these models can be reconciled with our knowledge of the acquisition of proper names. Finally, the results and possible explanations of the small amount of research into the representation and processing of proper names by bilingual speakers are discussed.

  1. Expert System Models in the Companies' Financial and Accounting Domain

    CERN Document Server

    Mates, D; Bostan, I; Grosu, V

    2010-01-01

    The present paper is based on studying, analyzing and implementing expert systems in the financial and accounting domain of companies, describing how informational systems can be used in multi-national companies, public-interest institutions, and small and medium-sized economic entities in order to optimize managerial decisions and make the financial-accounting function more efficient. The purpose of this paper is to identify the economic exigencies of the entities, based on the accounting instruments already in use and on the management software that allows control of economic processes and patrimonial assets.

  2. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    Validation in chemometrics is presented using the exemplar context of multivariate calibration/prediction. A phenomenological analysis of common validation practices in data analysis and chemometrics leads to formulation of a set of generic Principles of Proper Validation (PPV), which is based...

  3. PROMOTIONS: PROper MOTION Software

    Science.gov (United States)

    Caleb Wherry, John; Sahai, R.

    2009-05-01

    We report on the development of a software tool (PROMOTIONS) to streamline the process of measuring proper motions of material in expanding nebulae. Our tool makes use of IDL's widget programming capabilities to design a unique GUI that is used to compare images of the objects from two epochs. The software allows us to first orient and register the images to a common frame of reference and pixel scale, using field stars in each of the images. We then cross-correlate specific morphological features in order to determine their proper motions, which consist of the proper motion of the nebula as a whole (PM-neb) and expansion motions of the features relative to the center. If the central star is not visible (quite common in bipolar nebulae with dense dusty waists), point-symmetric expansion is assumed and we use the average motion of high-quality symmetric pairs of features on opposite sides of the nebular center to compute PM-neb. This is then subtracted out to determine the individual movements of these and additional features relative to the nebular center. PROMOTIONS should find wide applicability in measuring proper motions in astrophysical objects such as the expanding outflows/jets commonly seen around young and dying stars. We present first results from using PROMOTIONS to successfully measure proper motions in several pre-planetary nebulae (transition objects between the red giant and planetary nebula phases), using images taken 7-10 years apart with the WFPC2 and ACS instruments on board HST. The authors are grateful to NASA's Undergraduate Scholars Research Program (USRP) for supporting this research.
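
    The core cross-correlation step can be sketched in a few lines (a hypothetical NumPy illustration, not the IDL implementation; registration, sub-pixel fitting, and the PM-neb subtraction described above are omitted):

```python
import numpy as np

def xcorr_shift(img1, img2):
    """Integer-pixel shift of img2 relative to img1 via FFT cross-correlation."""
    cc = np.fft.ifft2(np.conj(np.fft.fft2(img1)) * np.fft.fft2(img2)).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # map wrapped peak coordinates to signed shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, cc.shape))

# a single nebular "feature" imaged at two epochs (synthetic)
y, x = np.mgrid[0:64, 0:64]
epoch1 = np.exp(-((x - 30.0) ** 2 + (y - 30.0) ** 2) / 8.0)
epoch2 = np.roll(epoch1, (3, -2), axis=(0, 1))  # moved +3 px in y, -2 px in x
```

    Here `xcorr_shift(epoch1, epoch2)` recovers the feature's displacement between the two epochs, which a real pipeline would convert to a proper motion using the pixel scale and epoch separation.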

  4. Henig Proper Efficient Points and Generalized Henig Proper Efficient Points

    Institute of Scientific and Technical Information of China (English)

    Jing Hui QIU

    2009-01-01

    Applying the theory of locally convex spaces to vector optimization, we investigate the relationship between Henig proper efficient points and generalized Henig proper efficient points. In particular, we obtain a sufficient and necessary condition for generalized Henig proper efficient points to be Henig proper efficient points. From this, we derive several convenient criteria for judging Henig proper efficient points.

  5. Proper Islamic Consumption

    DEFF Research Database (Denmark)

    Fischer, Johan

    '[T]his book is an excellent study that is lucidly written, strongly informed by theory, rich in ethnography, and empirically grounded. It has blazed a new trail in employing the tools of both religious studies and cultural studies to dissect the complex subject of “proper Islamic consumption...... because it is the Malay‐dominated state which has been crucial in generating and shaping a particular kind of modernity in order to address the problems posed for nation‐building by a quite radical form of ethnic pluralism.' Reviewed by V.T. (Terry) King, University of Leeds, ASEASUK News 46, 2009   'In...... spite of a long line of social theory analyzing the spiritual in the economic, and vice versa, very little of the recent increase in scholarship on Islam addresses its relationship with capitalism. Johan Fischer’s book,Proper Islamic Consumption, begins to fill this gap. […] Fischer’s detailed...

  6. Spectral proper orthogonal decomposition

    CERN Document Server

    Sieber, Moritz; Paschereit, Christian Oliver

    2015-01-01

    The identification of coherent structures from experimental or numerical data is an essential task when conducting research in fluid dynamics. This typically involves the construction of an empirical mode base that appropriately captures the dominant flow structures. The most prominent candidates are the energy-ranked proper orthogonal decomposition (POD) and the frequency-ranked Fourier decomposition and dynamic mode decomposition (DMD). However, these methods fail when the relevant coherent structures occur at low energies or at multiple frequencies, which is often the case. To overcome the deficit of these "rigid" approaches, we propose a new method termed Spectral Proper Orthogonal Decomposition (SPOD). It is based on classical POD and it can be applied to spatially and temporally resolved data. The new method involves an additional temporal constraint that enables a clear separation of phenomena that occur at multiple frequencies and energies. SPOD allows for a continuous shifting from the energetically ...
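
    A minimal sketch of the SPOD idea, assuming the published formulation in which the snapshot correlation matrix of classical POD is low-pass filtered along its diagonals before the eigendecomposition (the "additional temporal constraint"); function and parameter names are ours:

```python
import numpy as np

def spod(snapshots, nf):
    """SPOD sketch: filter the POD correlation matrix along its diagonals.

    snapshots: (n_time, n_space) array; nf: filter half-width in snapshots.
    """
    U = snapshots - snapshots.mean(axis=0)
    R = U @ U.T / U.shape[1]                # snapshot correlation matrix
    n = R.shape[0]
    Rf = np.zeros_like(R)
    for i in range(n):                      # moving average along diagonals,
        for j in range(n):                  # periodic boundaries for brevity
            acc = 0.0
            for k in range(-nf, nf + 1):
                acc += R[(i + k) % n, (j + k) % n]
            Rf[i, j] = acc / (2 * nf + 1)
    vals, vecs = np.linalg.eigh(Rf)         # Rf remains symmetric
    order = np.argsort(vals)[::-1]          # rank temporal modes by energy
    return vals[order], vecs[:, order]

# synthetic traveling wave sampled on 64 snapshots of a 32-point grid
t = 0.2 * np.arange(64)
xg = (2 * np.pi / 32) * np.arange(32)
vals, modes = spod(np.sin(xg[None, :] - t[:, None]), nf=4)
```

    Setting `nf = 0` recovers classical snapshot POD; larger `nf` trades energy ranking for spectral purity of the modes.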

  7. Models and Rules of Evaluation in International Accounting

    OpenAIRE

    Liliana Feleaga; Niculae Feleaga

    2006-01-01

    The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be confused with cost, even though value is freque...

  8. MODELING OF A STRUCTURED PLAN OF ACCOUNTS IN PROCEDURES OF INSOLVENCY AND BANKRUPTCY

    Directory of Open Access Journals (Sweden)

    Chalenko R. V.

    2014-02-01

    Full Text Available The article details the problems of constructing a structured plan of accounts in bankruptcy and insolvency proceedings. The proposed model is based on two principal positions: first, the structured chart of accounts has its own dimensions; second, it is built on the principles of architectonics. Building the structured chart of accounts on architectonic principles allows managerial, strategic, and transactional accounting to be integrated, making accounting transparent and efficient.

  9. Accounting for heterogeneity of public lands in hedonic property models

    Science.gov (United States)

    Charlotte Ham; Patricia A. Champ; John B. Loomis; Robin M. Reich

    2012-01-01

    Open space lands, national forests in particular, are usually treated as homogeneous entities in hedonic price studies. Failure to account for the heterogeneous nature of public open spaces may result in inappropriate inferences about the benefits of proximate location to such lands. In this study the hedonic price method is used to estimate the marginal values for...

  10. Accounting for Recoil Effects in Geochronometers: A New Model Approach

    Science.gov (United States)

    Lee, V. E.; Huber, C.

    2012-12-01

    dated grain is a major control on the magnitude of recoil loss; the first feature is the ability to calculate recoil effects on isotopic compositions for realistic, complex grain shapes and surface roughnesses. This is useful because natural grains may have irregular shapes that do not conform to simple geometric descriptions. Perhaps more importantly, the surface area over which recoiled nuclides are lost can be significantly underestimated when grain surface roughness is not accounted for, since the recoil distances can be of similar characteristic lengthscales to surface roughness features. The second key feature is the ability to incorporate dynamical geologic processes affecting grain surfaces in natural settings, such as dissolution and crystallization. We describe the model and its main components, and point out implications for the geologically relevant chronometers mentioned above.

  11. Nonlinear model-based control of the Czochralski process III: Proper choice of manipulated variables and controller parameter scheduling

    Science.gov (United States)

    Neubert, M.; Winkler, J.

    2012-12-01

    This contribution continues an article series [1,2] about the nonlinear model-based control of the Czochralski crystal growth process. The key idea of the presented approach is to use a sophisticated combination of nonlinear model-based and conventional (linear) PI controllers for tracking of both crystal radius and growth rate. Using heater power and pulling speed as manipulated variables, several controller structures are possible. The present part tries to systematize the properties of the materials to be grown in order to obtain unambiguous decision criteria for the most profitable choice of controller structure. For this purpose, a material-specific constant M called the interface mobility and a more process-specific constant S called the system response number are introduced. While the first summarizes important material properties like thermal conductivity and latent heat, the latter characterizes the process by evaluating the average axial thermal gradients at the phase boundary and the actual growth rate at which the crystal is grown. Furthermore, these characteristic numbers are useful for establishing a scheduling strategy for the PI controller parameters in order to improve controller performance. Finally, both numbers give a better understanding of the general thermal system dynamics of the Czochralski technique.

  12. Accounting for choice of measurement scale in extreme value modeling

    OpenAIRE

    Wadsworth, J. L.; Tawn, J. A.; Jonathan, P.

    2010-01-01

    We investigate the effect that the choice of measurement scale has upon inference and extrapolation in extreme value analysis. Separate analyses of variables from a single process on scales which are linked by a nonlinear transformation may lead to discrepant conclusions concerning the tail behavior of the process. We propose the use of a Box--Cox power transformation incorporated as part of the inference procedure to account parametrically for the uncertainty surrounding the scale of extrapo...
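
    The Box-Cox family referred to above, with its λ = 0 log limit, can be sketched directly (helper names are ours, not the authors' code; in the paper λ is estimated jointly with the extreme value parameters rather than fixed):

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox power transform; lam = 0 is the natural-log limit."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Inverse transform, mapping results back to the original scale."""
    z = np.asarray(z, dtype=float)
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

y = np.array([0.5, 1.0, 4.0])
z = boxcox(y, 0.3)   # analyze tail behavior on the transformed scale
```

    Because the transform is monotone, tail inferences made on the transformed scale can be mapped back through `inv_boxcox`, which is what makes the choice of λ part of the inference rather than a fixed preprocessing step.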

  13. Proper Islamic Consumption

    DEFF Research Database (Denmark)

    Fischer, Johan

    ”. It is a must-read for researchers and students alike, especially those who want to pursue their study on the middle class, Islam and consumption.' Reviewed by Prof. Abdul Rahman Embong, Asian Anthropology    'This volume does make an important contribution to our understanding of the responses of socially...... spite of a long line of social theory analyzing the spiritual in the economic, and vice versa, very little of the recent increase in scholarship on Islam addresses its relationship with capitalism. Johan Fischer’s book,Proper Islamic Consumption, begins to fill this gap. […] Fischer’s detailed...

  14. Characterizations of proper actions

    Science.gov (United States)

    Biller, Harald

    2004-03-01

    Three kinds of proper actions of increasing strength are defined. We prove that the three definitions specialize to the definitions by Bourbaki, by Palais and by Baum, Connes and Higson in their respective settings. The third of these, which thus turns out to be the strongest, originally only concerns actions of second countable locally compact groups on metrizable spaces. In this situation, it is shown to coincide with the other two definitions if the total space locally has the Lindelöf property and the orbit space is regular.

  15. PoPe (Projection on Proper elements) for code control: verification, numerical convergence and reduced models. Application to plasma turbulence simulations

    CERN Document Server

    Cartier-Michaud, T; Sarazin, Y; Abiteboul, J; Bufferand, H; Dif-Pradalier, G; Garbet, X; Grandgirard, V; Latu, G; Norscini, C; Passeron, C; Tamain, P

    2015-01-01

    The Projection on Proper elements (PoPe) is a novel method of code control dedicated to 1) checking the correct implementation of models, 2) determining the convergence of numerical methods and 3) characterizing the residual errors of any given solution at very low cost. The basic idea is to establish a bijection between a simulation and the set of equations that generates it. Recovering the equations is direct and relies on a statistical measure of the weight of the various operators. This method can be used in any number of dimensions and in any regime, including chaotic ones. It also provides a procedure for designing reduced models and quantifying the cost-to-benefit ratio. PoPe is applied to a kinetic and a fluid code of plasma turbulence.
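
    The idea of recovering the generating equations by weighing candidate operators can be sketched on a toy problem (a hypothetical 1-D diffusion example with assumed parameters, not the plasma turbulence codes): regress the observed time derivative onto a library of operators and read off their weights.

```python
import numpy as np

# Reference "simulation": 1-D heat equation u_t = a * u_xx, explicit Euler
n, dx, dt, a_true, steps = 64, 0.1, 0.001, 0.1, 50
L = n * dx
x = dx * np.arange(n)
u = np.sin(2 * np.pi * x / L) + 0.5 * np.sin(6 * np.pi * x / L)

def lap(v):
    """Periodic second-difference Laplacian."""
    return (np.roll(v, 1) + np.roll(v, -1) - 2.0 * v) / dx ** 2

lhs, ops = [], []
for _ in range(steps):
    u_next = u + dt * a_true * lap(u)
    lhs.append((u_next - u) / dt)   # observed time derivative
    ops.append((lap(u), u))         # candidate operators: u_xx and u
    u = u_next

# regress the time derivative onto the candidate operators
y = np.concatenate(lhs)
X = np.column_stack([np.concatenate([o[0] for o in ops]),
                     np.concatenate([o[1] for o in ops])])
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
# weights recover (a_true, 0): the diffusion term is present, the linear
# term is absent; any discrepancy would quantify the residual error
```

    In the same spirit, dropping weakly weighted operators from the recovered set is how a reduced model would be designed, with the regression residual giving the cost-to-benefit measure.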

  16. Hubble Space Telescope Proper Motion (HSTPROMO) Catalogs of Galactic Globular Clusters. V. The Rapid Rotation of 47 Tuc Traced and Modeled in Three Dimensions

    Science.gov (United States)

    Bellini, A.; Bianchini, P.; Varri, A. L.; Anderson, J.; Piotto, G.; van der Marel, R. P.; Vesperini, E.; Watkins, L. L.

    2017-08-01

    High-precision proper motions of the globular cluster 47 Tuc have allowed us to measure for the first time the cluster rotation in the plane of the sky and the velocity anisotropy profile from the cluster core out to about 13′. These profiles are coupled with prior measurements along the line of sight (LOS) and the surface brightness profile and fit all together with self-consistent models specifically constructed to describe quasi-relaxed stellar systems with realistic differential rotation, axisymmetry, and pressure anisotropy. The best-fit model provides an inclination angle i between the rotation axis and the LOS direction of 30° and is able to simultaneously reproduce the full three-dimensional kinematics and structure of the cluster, while preserving a good agreement with the projected morphology. Literature models based solely on LOS measurements imply a significantly different inclination angle (i = 45°), demonstrating that proper motions play a key role in constraining the intrinsic structure of 47 Tuc. Our best-fit global dynamical model implies an internal rotation higher than previous studies have shown and suggests a peak of the intrinsic V/σ ratio of ∼0.9 at around two half-light radii, with a nonmonotonic intrinsic ellipticity profile reaching values up to 0.45. Our study unveils a new degree of dynamical complexity in 47 Tuc, which may be leveraged to provide new insights into the formation and evolution of globular clusters. Based on archival observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.

  17. Calculating proper transfer prices

    Energy Technology Data Exchange (ETDEWEB)

    Dorkey, F.C. (Meliora Research Associates, Rochester, NY (United States)); Jarrell, G.A. (Univ. of Rochester, NY (United States))

    1991-01-01

    This article deals with developing a proper transfer pricing method. Decentralization is as American as baseball. While managers laud the widespread benefits of both decentralization and baseball, they often greet the term transfer price policy with a yawn. Since transfer prices are as critical to the success of decentralized firms as good pitchers are to baseball teams, this is quite a mistake on the part of our managers. A transfer price is the price charged to one division for a product or service that another division produced or provided. In many, perhaps most, decentralized organizations, the transfer pricing policies actually used are grossly inefficient and sacrifice the potential advantages of decentralization. Experience shows that far too many companies have transfer pricing policies that cost them significantly in foregone growth and profits.

  18. 76 FR 29249 - Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications

    Science.gov (United States)

    2011-05-20

    ... participate in the Pioneer Accountable Care Organization Model for a period beginning in 2011 and ending...://innovations.cms.gov/areas-of-focus/seamless-and-coordinated-care-models/pioneer-aco . Application Submission... Accountable Care Organization Model or the application process. SUPPLEMENTARY INFORMATION: I. Background...

  19. 76 FR 34712 - Medicare Program; Pioneer Accountable Care Organization Model; Extension of the Submission...

    Science.gov (United States)

    2011-06-14

    ...: This notice extends the deadlines for the submission of the Pioneer Accountable Care Organization Model...-coordinated-care-models/pioneer-aco . Application Submission Deadline: Applications must be postmarked on or before August 19, 2011. The Pioneer Accountable Care Organization Model ] Application is available...

  20. MODELLING THE LESOTHO ECONOMY: A SOCIAL ACCOUNTING MATRIX APPROACH

    Directory of Open Access Journals (Sweden)

    Yonas Tesfamariam Bahta

    2013-07-01

    Full Text Available Using a 2000 Social Accounting Matrix (SAM) for Lesotho, this paper investigates the key features of the Lesotho economy and the role played by the agricultural sector. A novel feature of the SAM is the elaborate disaggregation of the agricultural sector into finer subcategories. The fundamental importance of agricultural development emerges clearly from a descriptive review and from SAM multiplier analysis. It is dominant with respect to income generation and value of production. It contributes 23 percent of gross domestic product and 12 percent of the total value of production. It employs 26 percent of labour and 24 percent of capital. The construction sector has the highest open SAM output multiplier (1.588) and SAM output multiplier (1.767). The household multipliers indicate that in the rural and urban areas, agriculture and mining respectively generate most household income. Agriculture has the highest employment coefficient. The agriculture and mining sectors also have the largest employment multipliers in Lesotho.
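
    At its core, the output-multiplier analysis mentioned above is a Leontief-inverse computation. A toy two-sector sketch with an assumed coefficient matrix (not the Lesotho SAM, which is far more disaggregated):

```python
import numpy as np

# hypothetical 2-sector technical-coefficient matrix A
# (input shares per unit of output)
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
M = np.linalg.inv(np.eye(2) - A)    # Leontief inverse (I - A)^-1
output_multipliers = M.sum(axis=0)  # column sums: total output generated
                                    # per unit of final demand in each sector
```

    With these assumed coefficients, one extra unit of final demand in sector 1 generates about 2.17 units of total output economy-wide, against 1.83 for sector 2; a closed SAM multiplier additionally endogenizes households, which typically raises these values.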

  1. Resource Allocation Models and Accountability: A Jamaican Case Study

    Science.gov (United States)

    Nkrumah-Young, Kofi K.; Powell, Philip

    2008-01-01

    Higher education institutions (HEIs) may be funded privately, by the state or by a mixture of the two. Nevertheless, any state financing of HE necessitates a mechanism to determine the level of support and the channels through which it is to be directed; that is, a resource allocation model. Public funding, through resource allocation models,…

  2. Accountability: a missing construct in models of adherence behavior and in clinical practice.

    Science.gov (United States)

    Oussedik, Elias; Foy, Capri G; Masicampo, E J; Kammrath, Lara K; Anderson, Robert E; Feldman, Steven R

    2017-01-01

    Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients' motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8-12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on the Self-Determination Theory, accountability can be considered in a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability) to patients' autonomous internal desire to please a respected health care provider (autonomous accountability), the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura's Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well as the testing and refinement of adherence interventions that make use of this critical determinant of human behavior.

  3. Nonlinear model accounting for minor hysteresis of embedded SMA actuators

    Institute of Scientific and Technical Information of China (English)

    YANG Kai; GU Chenglin

    2007-01-01

    A quantitative index, the martensite fraction, was used to describe the degree of phase transformation of a shape memory alloy (SMA). On the basis of the martensite fraction, a nonlinear analysis model for major and minor hysteresis loops was developed. The model adopts two exponential equations to calculate the martensite fractions for cooling and heating, respectively. The martensite fractions were derived by adjusting the relevant parameters in accordance with continuity, common-initial and common-limit constraints. Using the linear relationship between the curvature of the embedded SMA actuator and the SMA's martensite fraction, the curvature was determined. The results of the simulations and experiments confirm the validity and accuracy of the model.
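
    The abstract does not give the two exponential equations; a common choice with this shape is Tanaka-type exponential kinetics, sketched here with assumed transformation temperatures (the paper's exact equations and parameter values may differ):

```python
import math

# Assumed transformation temperatures in deg C (illustrative only)
M_s, M_f = 55.0, 35.0   # martensite start / finish (cooling)
A_s, A_f = 60.0, 80.0   # austenite start / finish (heating)
a_M = math.log(0.01) / (M_s - M_f)  # 99%-complete convention at M_f
a_A = math.log(0.01) / (A_s - A_f)  # 99%-complete convention at A_f

def cooling(T, xi0=0.0):
    """Martensite fraction while cooling; xi0 is the fraction at M_s."""
    if T >= M_s:
        return xi0
    T = max(T, M_f)
    return 1.0 - (1.0 - xi0) * math.exp(a_M * (M_s - T))

def heating(T, xi0=1.0):
    """Martensite fraction while heating; xi0 is the fraction at A_s."""
    if T <= A_s:
        return xi0
    T = min(T, A_f)
    return xi0 * math.exp(a_A * (A_s - T))
```

    Starting a heating branch from an intermediate fraction (`xi0 < 1`) traces a minor loop, which is the case the common-initial and common-limit constraints above are meant to handle; the actuator curvature then follows linearly from the computed fraction.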

  4. Modeling tools to Account for Ethanol Impacts on BTEX Plumes

    Science.gov (United States)

    Widespread usage of ethanol in gasoline leads to impacts at leak sites which differ from those of non-ethanol gasolines. The presentation reviews current research results on the distribution of gasoline and ethanol, biodegradation, phase separation and cosolvancy. Model results f...

  5. A Historical Account of the Hypodermic Model in Mass Communication.

    Science.gov (United States)

    Bineham, Jeffery L.

    1988-01-01

    Critiques different historical conceptions of mass communication research. Argues that the different conceptions of the history of mass communication research, and of the hypodermic model (viewing the media as an all-powerful and direct influence on society), influence the theoretical and methodological choices made by mass media scholars. (MM)

  6. Applying the International Medical Graduate Program Model to Alleviate the Supply Shortage of Accounting Doctoral Faculty

    Science.gov (United States)

    HassabElnaby, Hassan R.; Dobrzykowski, David D.; Tran, Oanh Thikie

    2012-01-01

    Accounting has been faced with a severe shortage in the supply of qualified doctoral faculty. Drawing upon the international mobility of foreign scholars and the spirit of the international medical graduate program, this article suggests a model to fill the demand in accounting doctoral faculty. The underlying assumption of the suggested model is…

  7. Low-dimensional models for the nonlinear vibration analysis of cylindrical shells based on a perturbation procedure and proper orthogonal decomposition

    Science.gov (United States)

    Gonçalves, P. B.; Silva, F. M. A.; Del Prado, Z. J. G. N.

    2008-08-01

    In formulating mathematical models for dynamical systems, obtaining a high degree of qualitative correctness (i.e. predictive capability) may not be the only objective. The model must be useful for its intended application, and models of reduced complexity are attractive in many cases where time-consuming numerical procedures are required. This paper discusses the derivation of discrete low-dimensional models for the nonlinear vibration analysis of thin cylindrical shells. In order to understand the peculiarities inherent to this class of structural problems, the nonlinear vibrations and dynamic stability of a circular cylindrical shell subjected to static and dynamic loads are analyzed. This choice is based on the fact that cylindrical shells exhibit a highly nonlinear behavior under both static and dynamic loads. Geometric nonlinearities due to finite-amplitude shell motions are considered by using Donnell's nonlinear shallow-shell theory. A perturbation procedure, validated in previous studies, is used to derive a general expression for the nonlinear vibration modes and the discretized equations of motion are obtained by the Galerkin method using modal expansions for the displacements that satisfy all the relevant boundary and symmetry conditions. Next, the model is analyzed via the Karhunen-Loève expansion to investigate the relative importance of each mode obtained by the perturbation solution on the nonlinear response and total energy of the system. The responses of several low-dimensional models are compared. It is shown that rather low-dimensional but properly selected models can describe with good accuracy the response of the shell up to very large vibration amplitudes.
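
    The Karhunen-Loève step used above to rank modes by their contribution to the total energy can be sketched as an SVD of the snapshot matrix (synthetic two-mode data for illustration, not the shell equations):

```python
import numpy as np

rng = np.random.default_rng(0)
xg = np.linspace(0.0, 2.0 * np.pi, 64)
tg = np.linspace(0.0, 10.0, 200)

# synthetic response built from two spatial modes plus weak noise;
# rows are snapshots in time
X = (np.outer(np.sin(tg), np.sin(xg))
     + 0.3 * np.outer(np.cos(3.0 * tg), np.sin(2.0 * xg))
     + 1e-6 * rng.standard_normal((tg.size, xg.size)))

# Karhunen-Loeve / POD via SVD of the mean-subtracted snapshot matrix
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)   # relative energy carried by each mode
```

    Truncating the basis where the cumulative `energy` saturates is what yields a "properly selected" low-dimensional model: here two modes capture essentially all of the energy, mirroring how the perturbation-derived modes are screened in the paper.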

  8. 76 FR 33306 - Medicare Program; Pioneer Accountable Care Organization Model, Request for Applications; Correction

    Science.gov (United States)

    2011-06-08

    ... Care Organization Model: Request for Applications.'' FOR FURTHER INFORMATION CONTACT: Maria Alexander... http://innovations.cms.gov/areas-of-focus/seamless-and-coordinated-care-models/pioneer-aco... HUMAN SERVICES Centers for Medicare & Medicaid Services Medicare Program; Pioneer Accountable...

  9. Towards accounting for dissolved iron speciation in global ocean models

    Directory of Open Access Journals (Sweden)

    A. Tagliabue

    2011-10-01

    Full Text Available The trace metal iron (Fe) is now routinely included in state-of-the-art ocean general circulation and biogeochemistry models (OGCBMs) because of its key role as a limiting nutrient in regions of the world ocean important for carbon cycling and air-sea CO2 exchange. However, the complexities of the seawater Fe cycle, which impact its speciation and bioavailability, are simplified in such OGCBMs due to gaps in understanding and to avoid high computational costs. In a similar fashion to inorganic carbon speciation, we outline a means by which the complex speciation of Fe can be included in global OGCBMs in a reasonably cost-effective manner. We construct an Fe speciation model based on hypothesised relationships between rate constants and environmental variables (temperature, light, oxygen, pH, salinity) and assumptions regarding the binding strengths of Fe-complexing organic ligands, and test hypotheses regarding their distributions. As a result, we find that the global distribution of different Fe species is tightly controlled by spatio-temporal environmental variability and the distribution of Fe-binding ligands. Impacts on bioavailable Fe are highly sensitive to assumptions regarding which Fe species are bioavailable and how those species vary in space and time. When forced by representations of future ocean circulation and climate, we find large changes to the speciation of Fe governed by pH-mediated changes to redox kinetics. We speculate that these changes may exert selective pressure on phytoplankton Fe uptake strategies in the future ocean. In future work, more information on the sources and sinks of ocean Fe ligands, their bioavailability, the cycling of colloidal Fe species and the kinetics of Fe-surface coordination reactions would be invaluable. We hope our modeling approach can provide a means by which new observations of Fe speciation can be tested against hypotheses of the processes present in governing the ocean Fe cycle in an

  10. Towards accounting for dissolved iron speciation in global ocean models

    Directory of Open Access Journals (Sweden)

    A. Tagliabue

    2011-03-01

    Full Text Available The trace metal iron (Fe) is now routinely included in state-of-the-art ocean general circulation and biogeochemistry models (OGCBMs) because of its key role as a limiting nutrient in regions of the world ocean important for carbon cycling and air-sea CO2 exchange. However, the complexities of the seawater Fe cycle, which impact its speciation and bioavailability, are highly simplified in such OGCBMs to avoid high computational costs. In a similar fashion to inorganic carbon speciation, we outline a means by which the complex speciation of Fe can be included in global OGCBMs in a reasonably cost-effective manner. We use our Fe speciation model to suggest that the global distribution of different Fe species is tightly controlled by environmental variability (temperature, light, oxygen and pH) and the assumptions regarding Fe-binding ligands. Impacts on bioavailable Fe are highly sensitive to assumptions regarding which Fe species are bioavailable. When forced by representations of future ocean circulation and climate, we find large changes to the speciation of Fe governed by pH-mediated changes to redox kinetics. We speculate that these changes may exert selective pressure on phytoplankton Fe uptake strategies in the future ocean. We hope our modeling approach can also be used as a “test bed” for exploring our understanding of Fe speciation at the global scale.

  11. A mathematical model of sentimental dynamics accounting for marital dissolution.

    Directory of Open Access Journals (Sweden)

    José-Manuel Rey

    Full Text Available BACKGROUND: Marital dissolution is ubiquitous in western societies. It poses major scientific and sociological problems both in theoretical and therapeutic terms. Scholars and therapists agree on the existence of a sort of second law of thermodynamics for sentimental relationships. Effort is required to sustain them. Love is not enough. METHODOLOGY/PRINCIPAL FINDINGS: Building on a simple version of the second law we use optimal control theory as a novel approach to model sentimental dynamics. Our analysis is consistent with sociological data. We show that, when both partners have similar emotional attributes, there is an optimal effort policy yielding a durable happy union. This policy is prey to structural destabilization resulting from a combination of two factors: there is an effort gap because the optimal policy always entails discomfort and there is a tendency to lower effort to non-sustaining levels due to the instability of the dynamics. CONCLUSIONS/SIGNIFICANCE: These mathematical facts implied by the model unveil an underlying mechanism that may explain couple disruption in real scenarios. Within this framework the apparent paradox that a union consistently planned to last forever will probably break up is explained as a mechanistic consequence of the second law.

  12. MODEL OF ACCOUNTING ENGINEERING IN VIEW OF EARNINGS MANAGEMENT IN POLAND

    Directory of Open Access Journals (Sweden)

    Leszek Michalczyk

    2012-10-01

    Full Text Available The article introduces the theoretical foundations of the author’s original concept of accounting engineering. We assume a theoretical premise whereby accounting engineering is understood as a system of accounting practice utilising differences in the recorded outcomes of economic events resulting from the use of divergent accounting methods. Unlike, for instance, creative or praxeological accounting, accounting engineering consists exclusively, and under all circumstances, of lawful activities and adheres to the current regulations of the balance sheet law. The aim of the article is to construct a model of accounting engineering that exploits the differences inherently present in variant accounting. These differences result in disparate financial results for identical economic events. Given that, regardless of which variant is used in accounting, all settlements are eventually equal to one another, a new class of differences emerges - the accounting engineering potential. It is transferred to subsequent reporting (balance sheet) periods. In the end, the profit “made” in a given period reduces the financial result of future periods. This effect is due to the “transfer” of costs from one period to another. Such actions may have sundry consequences and are especially dangerous whenever many individuals are concerned with the profit of a given company, e.g. on a stock exchange. The reverse may be observed when a company is privatised and its value is intentionally reduced by the controlled recording of accounting provisions, depending on the degree to which they are justified. The reduction of a company’s goodwill in Balcerowicz’s model of no-tender privatisation makes it possible to justify the low value of the purchased company. These are only some of many manifestations of variant accounting that accounting engineering employs. A theoretical model of the latter is presented in this article.

  13. A Case Study of the Accounting Models for the Participants in an Emissions Trading Scheme

    Directory of Open Access Journals (Sweden)

    Marius Deac

    2013-10-01

    Full Text Available As emissions trading schemes become more popular across the world, accounting has to keep up with these new economic developments. The absence of guidance on accounting for greenhouse gas (GHG) emissions following the withdrawal of IFRIC 3 (Emission Rights) is the main reason for the diversity of accounting practices. This diversity of accounting methods makes the financial statements of companies taking part in emissions trading schemes, such as the EU ETS, difficult to compare. The present paper uses a case study that assumes the existence of three entities that have chosen three different accounting methods: the IFRIC 3 cost model, the IFRIC 3 revaluation model and the “off balance sheet” approach. It illustrates how the choice of an accounting method for GHG emissions influences the companies’ interim and annual reports through the changes in their balance sheets and financial results.

  14. A 3D finite-strain-based constitutive model for shape memory alloys accounting for thermomechanical coupling and martensite reorientation

    Science.gov (United States)

    Wang, Jun; Moumni, Ziad; Zhang, Weihong; Xu, Yingjie; Zaki, Wael

    2017-06-01

    The paper presents a finite-strain constitutive model for shape memory alloys (SMAs) that accounts for thermomechanical coupling and martensite reorientation. The finite-strain formulation is based on a two-tier, multiplicative decomposition of the deformation gradient into thermal, elastic, and inelastic parts, where the inelastic deformation is further split into phase transformation and martensite reorientation components. A time-discrete formulation of the constitutive equations is proposed and a numerical integration algorithm is presented featuring proper symmetrization of the tensor variables and explicit formulation of the material and spatial tangent operators involved. The algorithm is used for finite element analysis of SMA components subjected to various loading conditions, including uniaxial, non-proportional, isothermal and adiabatic loading cases. The analysis is carried out using the FEA software Abaqus by means of a user-defined material subroutine, which is then utilized to simulate an SMA archwire undergoing large strains and rotations.
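The two-tier multiplicative decomposition described above can be written compactly as follows (the symbols are our notation, chosen for illustration, not necessarily the authors'):

```latex
% Deformation gradient split into thermal, elastic and inelastic parts,
% with the inelastic part further split into phase transformation and
% martensite reorientation contributions (notation ours):
F = F_{\mathrm{th}}\, F_{\mathrm{e}}\, F_{\mathrm{in}},
\qquad
F_{\mathrm{in}} = F_{\mathrm{tr}}\, F_{\mathrm{reo}}
```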

  15. Dynamic model of production enterprises based on accounting registers and its identification

    Science.gov (United States)

    Sirazetdinov, R. T.; Samodurov, A. V.; Yenikeev, I. A.; Markov, D. S.

    2016-06-01

    The report focuses on the mathematical modeling of economic entities based on accounting registers. A dynamic model of the financial and economic activity of an enterprise is developed as a system of differential equations, and algorithms for identifying the parameters of the dynamic model are created. The model is constructed and identified for Russian machine-building enterprises.
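As a minimal illustration of this kind of identification (the one-register model, its parameters, and the data below are hypothetical, not the authors' system), the coefficients of a linear differential equation can be recovered from register time series by least squares:

```python
import numpy as np

# Hypothetical single-equation model: one accounting register x(t)
# evolves as dx/dt = a*x + b*u, with u a known input flow. Discretising
# with step dt turns each pair of consecutive observations into one
# linear equation in (a, b), solvable by ordinary least squares.
dt = 1.0
a_true, b_true = -0.2, 1.5
u = np.ones(50)
x = np.zeros(51)
x[0] = 10.0
for k in range(50):                       # simulate "register" data
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

dx = (x[1:] - x[:-1]) / dt                # finite-difference derivatives
A = np.column_stack([x[:-1], u])          # regressors [x_k, u_k]
a_hat, b_hat = np.linalg.lstsq(A, dx, rcond=None)[0]
print(a_hat, b_hat)                       # recovers -0.2 and 1.5
```

With noisy real register data the same regression would return approximate rather than exact parameter values, and a richer model would use a vector of registers and a coefficient matrix.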

  16. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our......Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death...... product types. This enables comparison of participating life insurance products and unit-linked insurance products, thus building a bridge between the two different ways of formalizing life insurance products. Finally, our model distinguishes itself from the existing literature by taking into account...

  17. An Integrative Model of the Strategic Management Accounting at the Enterprises of Chemical Industry

    Directory of Open Access Journals (Sweden)

    Aleksandra Vasilyevna Glushchenko

    2016-06-01

    Full Text Available Currently, the issues of information and analytical support for strategic management, which enable timely and high-quality management decisions, are extremely relevant. Conflicting and poor-quality information, haphazardly collected in the practice of large companies from unreliable sources, hampers the effective implementation of their development strategies and carries the threat of risk amid the increasing instability of the external environment. The chemical industry occupies one of the central places in Russian industry and naturally has its own specifics in the formation of an information support system. Building such an information system, suitable for developing and implementing strategic directions, changes the recognized competitive advantages of strategic management accounting. Problems such as the lack of requirements for strategic accounting information and its inconsistency, resulting from simultaneous accumulation in different units using different methods of calculating and assessing indicators, cannot be resolved without a well-constructed model of the organization of strategic management accounting. The purpose of this study is to develop such a model, the implementation of which will make it possible to achieve strategic goals by harmonizing information from the individual objects of strategic accounting, thus increasing the functional effectiveness of management decisions with a focus on strategy. The case study was based on dialectical logic and methods of system analysis, identifying causal relationships in building a model of strategic management accounting that contributes to forecasts of its development. The study proposes an integrative model of the organization of strategic management accounting, whose phased implementation defines the objects and tools of strategic management accounting. 
Moreover, it is determined that from the point of view of increasing the usefulness of management

  18. Modeling the Hellenic karst catchments with the Sacramento Soil Moisture Accounting model

    Science.gov (United States)

    Katsanou, K.; Lambrakis, N.

    2017-01-01

    Karst aquifers are very complex due to the presence of dual porosity. Rain-runoff hydrological models are frequently used to characterize these aquifers and assist in their management. The calibration of such models requires knowledge of many parameters, whose quality can be directly related to the quality of the simulation results. The Sacramento Soil Moisture Accounting (SAC-SMA) model includes a number of physically based parameters that permit accurate simulations and predictions of the rain-runoff relationships. Due to common physical characteristics of mature karst structures, expressed by sharp recession limbs of the runoff hydrographs, the calibration of the model becomes relatively simple, and the values of the parameters range within narrow bands. The most sensitive parameters are those related to groundwater storage regulated by the zone of the epikarst. The SAC-SMA model was calibrated for data from the mountainous part of the Louros basin, north-western Greece, which is considered to be representative of such geological formations. Visual assessment of the hydrographs, as well as the statistical outcomes, revealed that the SAC-SMA model simulated the timing and magnitude of the peak flow and the shape of recession curves well.
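Model fit in rain-runoff studies like this is commonly judged with the Nash-Sutcliffe Efficiency; a minimal implementation (the observation numbers below are made up):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the
    simulation is no better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = [3.0, 5.0, 9.0, 4.0]   # synthetic observed flows
sim = [2.5, 5.5, 8.0, 4.5]   # synthetic simulated flows
print(round(nse(obs, sim), 3))
```

Because the denominator is the variance of the observations around their mean, NSE can go negative for a model that performs worse than that trivial benchmark.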

  20. Extension of the hard-sphere particle-wall collision model to account for particle deposition.

    Science.gov (United States)

    Kosinski, Pawel; Hoffmann, Alex C

    2009-06-01

    Numerical simulations of flows of fluids with granular materials using the Eulerian-Lagrangian approach involve the problem of modeling collisions, both between particles and between particles and walls. One of the most popular techniques is the hard-sphere model. This model, however, has a major drawback in that it does not take into account cohesive or adhesive forces. In this paper we develop an extension to a well-known hard-sphere model for particle-wall interactions, making it possible to account for adhesion. The model is able to account for virtually any physical interaction, such as van der Waals forces or liquid bridging. In this paper we focus on the derivation of the new model and we show some computational results.
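The energy-balance idea behind such an extension can be sketched as follows (the functional form and all numbers are illustrative assumptions, not the authors' derivation): the particle rebounds only if its incident normal kinetic energy, reduced by restitution, exceeds the adhesion work; otherwise it deposits on the wall.

```python
import math

def wall_collision(mass, v_normal, e, work_adhesion):
    """Return the rebound speed of a particle hitting a wall, or 0.0
    if adhesion captures it (deposition). Illustrative sketch:
    rebound energy = e^2 * incident energy - adhesion work."""
    kinetic_in = 0.5 * mass * v_normal ** 2
    kinetic_out = e ** 2 * kinetic_in - work_adhesion
    if kinetic_out <= 0.0:
        return 0.0                                  # particle sticks
    return math.sqrt(2.0 * kinetic_out / mass)      # rebound speed

m = 1e-9   # kg, hypothetical particle
print(wall_collision(m, 2.0, 0.8, 1e-10))   # fast particle rebounds
print(wall_collision(m, 0.2, 0.8, 1e-10))   # slow particle deposits: 0.0
```

The same threshold structure applies whether `work_adhesion` comes from van der Waals forces or a liquid bridge; only the magnitude of the adhesion work changes.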

  1. Mutual Calculations in Creating Accounting Models: A Demonstration of the Power of Matrix Mathematics in Accounting Education

    Science.gov (United States)

    Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg

    2016-01-01

    The aim of this paper is to describe the innovative teaching approach used in the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating the technique of teaching to solve accounting cases using mutual calculations to a worldwide audience. The approach taken in this course…
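One common way to connect matrices with double-entry bookkeeping, which may differ in detail from the mutual-calculation technique taught in the course, is to record each journal entry in a debit-by-credit transaction matrix:

```python
import numpy as np

# Hypothetical chart of accounts: 0=Cash, 1=Receivables, 2=Revenue.
# A journal entry "debit account i, credit account j, amount v" is
# recorded as T[i, j] += v, so row sums give total debits per account
# and column sums give total credits per account.
entries = [(0, 2, 500.0),   # cash sale: debit Cash, credit Revenue
           (1, 2, 300.0),   # credit sale: debit Receivables, credit Revenue
           (0, 1, 120.0)]   # customer payment: debit Cash, credit Receivables

n = 3
T = np.zeros((n, n))
for debit, credit, amount in entries:
    T[debit, credit] += amount

debits = T.sum(axis=1)     # total debits by account
credits = T.sum(axis=0)    # total credits by account
balances = debits - credits

print(balances)                          # net balance per account
assert np.isclose(balances.sum(), 0.0)   # double entry always nets to zero
```

The closing trial balance then falls out of simple matrix operations, which is what makes the matrix formulation attractive for teaching and for automating mutual calculations.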

  3. Proper Motion of Components in 4C 39.25

    Science.gov (United States)

    Guirado, J. C.; Marcaide, J. M.; Alberdi, A.; Elosegui, P.; Ratner, M. I.; Shapiro, I. I.; Kilger, R.; Mantovani, F.; Venturi, T.; Rius, A.; et al.

    1995-01-01

    From a series of simultaneous 8.4 and 2.3 GHz VLBI observations of the quasar 4C 39.25 phase referenced to the radio source 0920+390, carried out in 1990-1992, we have measured the proper motion of component b in 4C 39.25: μ_α = 90 ± 43 μas/yr, μ_δ = 7 ± 68 μas/yr, where the quoted uncertainties account for the contribution of the statistical standard deviation and the errors assumed for the parameters related to the geometry of the interferometric array, the atmosphere, and the source structure. This proper motion is consistent with earlier interpretations of VLBI hybrid mapping results, which showed an internal motion of this component with respect to other structural components. Our differential astrometry analyses show component b to be the one in motion. Our results thus further constrain models of this quasar.

  4. A Teacher Accountability Model for Overcoming Self-Exclusion of Pupils

    Science.gov (United States)

    Jamal, Abu-Hussain; Tilchin, Oleg; Essawi, Mohammad

    2015-01-01

    Self-exclusion of pupils is one of the prominent challenges of education. In this paper we propose the TERA model, which shapes the process of creating formative accountability of teachers to overcome the self-exclusion of pupils. Development of the model includes elaboration and integration of interconnected model components. The TERA model…

  5. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections.

    Science.gov (United States)

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-06-18

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model, whereas the traditional model provides only correction values with no precision indexes. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be greatly removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.
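The Melbourne-Wübbena combination used above is a standard geometry- and ionosphere-free mix of carrier-phase and code observations whose time average estimates the wide-lane ambiguity. A minimal sketch with synthetic, noise- and ionosphere-free observations (the B1I/B2I frequencies are the published BeiDou values; all other numbers are made up for illustration):

```python
C = 299792458.0                 # speed of light, m/s
f1, f2 = 1561.098e6, 1207.140e6  # BeiDou B1I and B2I carrier frequencies, Hz
lam_wl = C / (f1 - f2)           # wide-lane wavelength, ~0.847 m

def melbourne_wubbena(L1, L2, P1, P2):
    """MW combination in metres: wide-lane phase minus narrow-lane code."""
    phase_wl = (f1 * L1 - f2 * L2) / (f1 - f2)
    code_nl = (f1 * P1 + f2 * P2) / (f1 + f2)
    return phase_wl - code_nl

# Synthetic, error-free observations built from a known geometry:
rho = 22_000_000.0              # geometric range, m
lam1, lam2 = C / f1, C / f2
N1, N2 = 5, 3                   # integer carrier ambiguities, cycles
L1 = rho + lam1 * N1            # carrier phases in metres
L2 = rho + lam2 * N2
P1 = P2 = rho                   # code pseudoranges

mw = melbourne_wubbena(L1, L2, P1, P2)
print(round(mw / lam_wl))       # recovers the wide-lane ambiguity N1 - N2 = 2
```

With real data, MW values are averaged over an arc before rounding, which is exactly why the half-cycle systematic variations described above are so damaging to ambiguity resolution.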

  7. A novel Atoh1 "self-terminating" mouse model reveals the necessity of proper Atoh1 level and duration for hair cell differentiation and viability.

    Directory of Open Access Journals (Sweden)

    Ning Pan

    Full Text Available Atonal homolog 1 (Atoh1) is a bHLH transcription factor essential for inner ear hair cell differentiation. Targeted expression of Atoh1 at various stages in development can result in hair cell differentiation in the ear. However, the level and duration of Atoh1 expression required for proper hair cell differentiation and maintenance remain unknown. We generated an Atoh1 conditional knockout (CKO) mouse line using Tg(Atoh1-cre), in which cre expression is driven by an Atoh1 enhancer element that is regulated by Atoh1 protein to "self-terminate" its expression. The mutant mice show transient, limited expression of Atoh1 in all hair cells in the ear. In the organ of Corti, reduction and delayed deletion of Atoh1 result in progressive loss of almost all the inner hair cells and the majority of the outer hair cells within three weeks after birth. The remaining cells express the hair cell marker Myo7a and attract nerve fibers, but do not differentiate normal stereocilia bundles. Some Myo7a-positive cells persist in the cochlea into adult stages in the position of outer hair cells, flanked by a single row of pillar cells and two to three rows of disorganized Deiters cells. Gene expression analyses of Atoh1, Barhl1 and Pou4f3, genes required for the survival and maturation of hair cells, reveal earlier and higher expression levels in the inner compared to the outer hair cells. Our data show that Atoh1 is crucial for hair cell mechanotransduction development, viability, and maintenance and also suggest that Atoh1 expression level and duration may play a role in inner vs. outer hair cell development. These genetically engineered Atoh1 CKO mice provide a novel model for establishing the critical conditions needed to regenerate viable and functional hair cells with Atoh1 therapy.

  8. Accountability: a missing construct in models of adherence behavior and in clinical practice

    Directory of Open Access Journals (Sweden)

    Oussedik E

    2017-07-01

    Full Text Available Elias Oussedik,1 Capri G Foy,2 E J Masicampo,3 Lara K Kammrath,3 Robert E Anderson,1 Steven R Feldman1,4,5 1Center for Dermatology Research, Department of Dermatology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 2Department of Social Sciences and Health Policy, Wake Forest School of Medicine, Winston-Salem, NC, USA; 3Department of Psychology, Wake Forest University, Winston-Salem, NC, USA; 4Department of Pathology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 5Department of Public Health Sciences, Wake Forest School of Medicine, Winston-Salem, NC, USA Abstract: Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients’ motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and it is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8–12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate it into an existing adherence model framework. Based on Self-Determination Theory, accountability can be considered on a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability) to a patient’s autonomous internal desire to please a respected health care provider (autonomous accountability), the latter being expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura’s Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well

  9. Accounting for the influence of vegetation and landscape improves model transferability in a tropical savannah region

    Science.gov (United States)

    Gao, Hongkai; Hrachowitz, Markus; Sriwongsitanon, Nutchanart; Fenicia, Fabrizio; Gharari, Shervan; Savenije, Hubert H. G.

    2016-10-01

    Understanding which catchment characteristics dominate hydrologic response and how to take them into account remains a challenge in hydrological modeling, particularly in ungauged basins. This is even more so in nontemperate and nonhumid catchments, where—due to the combination of seasonality and the occurrence of dry spells—threshold processes are more prominent in rainfall runoff behavior. An example is the tropical savannah, the second largest climatic zone, characterized by pronounced dry and wet seasons and high evaporative demand. In this study, we investigated the importance of landscape variability on the spatial variability of stream flow in tropical savannah basins. We applied a stepwise modeling approach to 23 subcatchments of the Upper Ping River in Thailand, where gradually more information on landscape was incorporated. The benchmark is represented by a classical lumped model (FLEXL), which does not account for spatial variability. We then tested the effect of accounting for vegetation information within the lumped model (FLEXLM), and subsequently two semidistributed models: one accounting for the spatial variability of topography-based landscape features alone (FLEXT), and another accounting for both topographic features and vegetation (FLEXTM). In cross validation, each model was calibrated on one catchment, and then transferred with its fitted parameters to the remaining catchments. We found that when transferring model parameters in space, the semidistributed models accounting for vegetation and topographic heterogeneity clearly outperformed the lumped model. This suggests that landscape controls a considerable part of the hydrological function and explicit consideration of its heterogeneity can be highly beneficial for prediction in ungauged basins in tropical savannah.

  10. Matrix Representation of the Kaliningrad Regional Accounts System: Experimental Development and Modelling Prospects

    Directory of Open Access Journals (Sweden)

    Soldatova S.

    2015-12-01

    Full Text Available This article addresses the task of creating a regional Social Accounting Matrix (SAM in the Kaliningrad region. Analyzing the behavior of economic systems of national and sub-national levels in the changing environment is one of the main objectives of macroeconomic research. Matrices are used in examining the flow of financial resources, which makes it possible to conduct a comprehensive analysis of commodity and cash flows at the regional level. The study identifies key data sources for matrix development and presents its main results: the data sources for the accounts development and filling the social accounting matrix are identified, regional accounts consolidated, the structure of regional matrix devised, and the multiplier of the regional social accounting matrix calculated. An important aspect of this approach is the set target, which determines the composition of matrix accounts representing different aspects of regional performance. The calculated multiplier suggests the possibility of modelling of a socioeconomic system for the region using a social accounting matrix. The regional modelling approach ensures the matrix compliance with the methodological requirements of the national system.
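The multiplier calculation mentioned above follows the standard SAM accounting-multiplier recipe; a toy sketch with made-up flows (the account names and numbers are ours, not the Kaliningrad data):

```python
import numpy as np

# Toy social accounting matrix (SAM) for three hypothetical accounts
# (production, households, government); entry [i, j] is the money flow
# from column account j to row account i. Numbers are illustrative.
sam = np.array([[  0., 60., 20.],
                [ 70.,  0., 10.],
                [ 10., 20.,  0.]])
totals = sam.sum(axis=0)                # column totals (account outlays)

# Treat the first two accounts as endogenous and the third (here,
# government) as exogenous, as is standard in SAM multiplier analysis.
A = sam[:2, :2] / totals[:2]            # average expenditure propensities
M = np.linalg.inv(np.eye(2) - A)        # accounting multiplier (I - A)^-1

# Total income effect of an exogenous injection of 10 into account 0:
effect = M @ np.array([10.0, 0.0])
print(effect)    # each unit injected generates more than one unit of income
```

Which accounts are kept endogenous is the key modelling choice: it determines the composition of the multiplier matrix and hence what the simulated injection represents.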

  13. Development and application of a large scale river system model for National Water Accounting in Australia

    Science.gov (United States)

    Dutta, Dushmanta; Vaze, Jai; Kim, Shaun; Hughes, Justin; Yang, Ang; Teng, Jin; Lerat, Julien

    2017-04-01

    Existing global and continental scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing water accounts, which have become increasingly important for water resources planning and management at regional and national scales. A continental scale river system model called the Australian Water Resource Assessment River System model (AWRA-R) has been developed and implemented for national water accounting in Australia using a node-link architecture. The model includes the major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. Two key components of the model are an irrigation model to compute water diversion for irrigation use and associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and associated floodplain fluxes and stores. The results in the Murray-Darling Basin show highly satisfactory performance of the model, with a median daily Nash-Sutcliffe Efficiency (NSE) of 0.64 and a median annual bias of less than 1% for the calibration period (1970-1991), and a median daily NSE of 0.69 and a median annual bias of 12% for the validation period (1992-2014). The results demonstrate that the performance of the model is less satisfactory when key processes such as overbank flow, groundwater seepage and irrigation diversion are switched off. The AWRA-R model, which has been operationalised by the Australian Bureau of Meteorology for continental scale water accounting, has contributed to improvements in the national water account by substantially reducing the accounted difference volume (gain/loss).

  14. A simulation model of hospital management based on cost accounting analysis according to disease.

    Science.gov (United States)

    Tanaka, Koji; Sato, Junzo; Guo, Jinqiu; Takada, Akira; Yoshihara, Hiroyuki

    2004-12-01

    Since shortly before 2000, hospital cost accounting has been increasingly performed at Japanese national university hospitals. At Kumamoto University Hospital, for instance, departmental costs have been analyzed since 2000, and since 2003 the cost balance has been calculated by disease in preparation for Diagnosis-Related Groups and the Prospective Payment System. On the basis of these experiences, we have constructed a simulation model of hospital management. The program has worked correctly in repeated trials and with satisfactory speed. Although there is room for improvement in the detailed accounts and the cost accounting engine, the basic model has proved satisfactory. We have thus constructed a hospital management model based on the financial data of an existing hospital, and we plan to improve the program further by refining its construction and incorporating a wider variety of hospital management data. A prospective outlook may then be obtained for the practical application of this hospital management model.

  15. Towards ecosystem accounting: a comprehensive approach to modelling multiple hydrological ecosystem services

    Science.gov (United States)

    Duku, C.; Rathjens, H.; Zwart, S. J.; Hein, L.

    2015-10-01

    Ecosystem accounting is an emerging field that aims to provide a consistent approach to analysing environment-economy interactions. One of the specific features of ecosystem accounting is the distinction between the capacity and the flow of ecosystem services. Ecohydrological modelling to support ecosystem accounting requires considering, among other factors, the physical and mathematical representation of ecohydrological processes, the spatial heterogeneity of the ecosystem, the temporal resolution, and the required model accuracy. This study examines how a spatially explicit ecohydrological model can be used to analyse multiple hydrological ecosystem services in line with the ecosystem accounting framework. We use the Upper Ouémé watershed in Benin as a test case to demonstrate our approach. The Soil and Water Assessment Tool (SWAT), which has been configured with a grid-based landscape discretization and further enhanced to simulate water flow across the discretized landscape units, is used to simulate the ecohydrology of the Upper Ouémé watershed. Indicators consistent with the ecosystem accounting framework are used to map and quantify the capacities and the flows of multiple hydrological ecosystem services based on the model outputs. Biophysical ecosystem accounts are subsequently set up based on the spatial estimates of hydrological ecosystem services. In addition, we conduct statistical trend tests on the biophysical ecosystem accounts to identify trends in the capacity of the watershed ecosystems to provide service flows. We show that the integration of hydrological ecosystem services into an ecosystem accounting framework provides relevant information on ecosystems and hydrological ecosystem services at scales suitable for decision-making.

  16. Ensuring proper short-range and asymptotic behavior of the exchange-correlation Kohn-Sham potential by modeling with a statistical average of different orbital model potentials

    Energy Technology Data Exchange (ETDEWEB)

    Gritsenko, O.V.; Schipper, P.R.T.; Baerends, E.J.

    2000-01-20

    The long-range asymptotic behavior of the exchange-correlation Kohn-Sham (KS) potential ν_xc and its relation to the exchange-correlation energy E_xc are considered using various approaches. The line integral of ν_xc([ρ]; r), yielding the exchange-correlation part ΔE_xc of a relative energy ΔE of a finite system, shows that a uniform constant shift of ν_xc never shows up in any physically meaningful energy difference ΔE. ν_xc may thus be freely chosen to tend asymptotically to zero or to some nonzero constant. Possible choices of the asymptotics of the potential are discussed with reference to the theory of open systems with a fractional number of electrons. The authors adhere to the conventional choice ν_xc(∞) = 0 for the asymptotics of the potential, leading to ε_N = −I_p for the energy ε_N of the highest occupied orbital. A statistical average of orbital-dependent model potentials is proposed as a way to model ν_xc. An approximate potential ν_xc^SAOP with the exact −1/r asymptotics is developed using the statistical average of, on the one hand, a model potential ν_xcσ^Ei for the highest occupied KS orbital ψ_Nσ and, on the other hand, a model potential ν_xc^GLB for the other occupied orbitals. It is demonstrated for the well-studied case of the Ne atom that calculations with the new model potential can, in principle, reproduce perfectly all energy characteristics.
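
    In the notation above, the statistical-average construction can be sketched as follows (a schematic form, with the orbital-density weighting assumed from the general idea of a statistical average over orbitals; not the authors' exact working equation):

    ```latex
    \nu_{xc\sigma}^{\mathrm{SAOP}}(\mathbf{r})
      = \sum_{i=1}^{N_\sigma} \nu_{xci\sigma}^{\mathrm{mod}}(\mathbf{r})\,
        \frac{|\psi_{i\sigma}(\mathbf{r})|^{2}}{\rho_{\sigma}(\mathbf{r})},
    \qquad
    \nu_{xci\sigma}^{\mathrm{mod}} =
      \begin{cases}
        \nu_{xc\sigma}^{Ei}  & i = N_\sigma \\
        \nu_{xc}^{GLB}       & i < N_\sigma
      \end{cases}
    ```

    Since the density of the highest occupied orbital dominates ρ_σ in the asymptotic region, the −1/r tail of the outer-orbital potential is preserved in ν_xc^SAOP.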

  17. School Board Improvement Plans in Relation to the AIP Model of Educational Accountability: A Content Analysis

    Science.gov (United States)

    van Barneveld, Christina; Stienstra, Wendy; Stewart, Sandra

    2006-01-01

    For this study we analyzed the content of school board improvement plans in relation to the Achievement-Indicators-Policy (AIP) model of educational accountability (Nagy, Demeris, & van Barneveld, 2000). We identified areas of congruence and incongruence between the plans and the model. Results suggested that the content of the improvement…

  18. Accounting for imperfect forward modeling in geophysical inverse problems — Exemplified for crosshole tomography

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Holm Jacobsen, Bo

    2014-01-01

    forward models, can be more than an order of magnitude larger than the measurement uncertainty. We also found that the modeling error is strongly linked to the spatial variability of the assumed velocity field, i.e., the a priori velocity model.We discovered some general tools by which the modeling error...... synthetic ground-penetrating radar crosshole tomographic inverse problems. Ignoring the modeling error can lead to severe artifacts, which erroneously appear to be well resolved in the solution of the inverse problem. Accounting for the modeling error leads to a solution of the inverse problem consistent...

  19. On the Crab Proper Motion

    CERN Document Server

    Caraveo, P A; Caraveo, Patrizia A; Mignani, Roberto

    1998-01-01

    Owing to the dramatic evolution of telescopes as well as optical detectors over the last 20 years, we are now able to measure anew the proper motion of the Crab pulsar, repeating the classical result of Wyckoff and Murray (1977) over a time span 40 times shorter. The proper motion is aligned with the axis of symmetry of the inner Crab nebula and, presumably, with the pulsar spin axis.

  20. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
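
    As a minimal illustration of the hierarchical idea (a toy three-stage normal hierarchy with assumed variances, not any model from the article), the sketch below shows how measurement error and process error both propagate into the data, and how a conjugate posterior accounts for them jointly:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical three-stage hierarchy (illustration only):
    #   parameter model:  theta ~ N(mu0, tau0^2)
    #   process model:    z | theta ~ N(theta, sigma_p^2)   (true ecological state)
    #   data model:       y | z ~ N(z, sigma_m^2)           (noisy measurements)
    mu0, tau0 = 0.0, 2.0   # prior on the process mean
    sigma_p = 1.0          # process (ecological) variability
    sigma_m = 0.5          # measurement error

    theta_true = rng.normal(mu0, tau0)
    z = rng.normal(theta_true, sigma_p, size=50)  # latent states
    y = rng.normal(z, sigma_m)                    # observations

    # Marginally, y ~ N(theta, sigma_p^2 + sigma_m^2): both uncertainty
    # sources spread the data, so ignoring either one understates the
    # uncertainty. Conjugate normal update for theta given y:
    s2 = sigma_p**2 + sigma_m**2
    n = len(y)
    post_var = 1.0 / (1.0 / tau0**2 + n / s2)
    post_mean = post_var * (mu0 / tau0**2 + y.sum() / s2)
    print(post_mean, post_var)
    ```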

  1. Optimization of Actuarial Model for Individual Account of Rural Social Pension Insurance

    Institute of Scientific and Technical Information of China (English)

    Wenxian; CAO

    2013-01-01

    This paper first analyzes the different payment methods for the individual account and the pension replacement rate under each payment method. The results show that it would be more scientific and reasonable for the individual account of the new rural social pension insurance to adopt an actuarial model with contributions proportional to income and periodic payments at a variable amount. The Guiding Opinions on New Rural Social Pension Insurance set forth that the individual account should be funded at a fixed amount, with the insured voluntarily selecting a contribution level from the criteria set by the State. The monthly pension payout is the total amount of the individual account divided by 139. The reform should therefore start from the continuation of existing policies and adjust the contribution level in accordance with the growth of the per capita net income of rural residents. When conditions permit, a transition to income-proportional contributions and periodic payments at a variable amount can be realized.
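
    The fixed-amount rule described in the abstract can be illustrated with a trivial computation (the account balance below is hypothetical; the divisor 139 is the statutory figure quoted in the abstract):

    ```python
    # Monthly pension under the fixed-amount rule: the individual account
    # total divided by the statutory annuity divisor of 139 months.
    ANNUITY_DIVISOR = 139

    def monthly_pension(account_total):
        return account_total / ANNUITY_DIVISOR

    # e.g. a hypothetical accumulated individual account of 41,700 yuan
    print(round(monthly_pension(41_700), 2))  # -> 300.0
    ```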

  2. Accounting assessment

    Directory of Open Access Journals (Sweden)

    Kafka S.М.

    2017-03-01

    The proper valuation of accounting objects has an essential influence on the reliability of assessing the financial situation of a company, so the problem of accounting estimation is highly relevant. The works of domestic and foreign scholars on the valuation of accounting objects, together with the regulatory and legal acts of Ukraine governing accounting and financial reporting, form the methodological basis for the research. The author uses theoretical methods of cognition (abstraction and generalization, analysis and synthesis, induction and deduction, and other methods producing conceptual knowledge) to synthesize the theoretical and methodological principles of valuing assets, liabilities and equity in accounting. Tabular presentation and information comparison methods are used for the analytical research. The article considers modern approaches to the valuation of accounting objects and financial statement items. It argues that records should be kept at historical cost, while financial statement items should be presented at their value as of the reporting date. In this connection, the depreciation of fixed assets is considered a process of systematically returning into circulation the funds advanced for the purchase (production, improvement) of fixed assets and intangible assets, by including the amount of wear in production costs. It is therefore proposed to amortize only the actual costs incurred, i.e., not to depreciate fixed assets received free of charge or revaluation surpluses.

  3. Modelling characteristics of photovoltaic panels with thermal phenomena taken into account

    Science.gov (United States)

    Krac, Ewa; Górecki, Krzysztof

    2016-01-01

    In this paper a new form of the electrothermal model of photovoltaic panels is proposed. The model takes into account the optical, electrical and thermal properties of the considered panels, the electrical and thermal properties of the protecting circuit, and the thermal inertia of the panels. The form of the model is described, and some results of measurements and calculations for mono-crystalline and poly-crystalline panels are presented.

  4. Shadow Segmentation and Augmentation Using α-overlay Models that Account for Penumbra

    DEFF Research Database (Denmark)

    Nielsen, Michael; Madsen, Claus B.

    2006-01-01

    that an augmented virtual object can cast an exact shadow. The penumbras (half-shadows) must be taken into account so that we can model the soft shadows.We hope to achieve this by modelling the shadow regions (umbra and penumbra alike) with a transparent overlay. This paper reviews the state-of-the-art shadow...... theories and presents two overlay models. These are analyzed analytically in relation to color theory and tangibility....

  5. A Neuronal Model of Predictive Coding Accounting for the Mismatch Negativity

    OpenAIRE

    Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas

    2012-01-01

    The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spi...

  6. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death......, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance...... model by conducting scenario analysis based on Monte Carlo simulation, but the model applies to scenarios in general and to worst-case and best-estimate scenarios in particular. In addition to easy computations, our model offers a common framework for the valuation of life insurance payments across...

  7. The Anachronism of the Local Public Accountancy Determinate by the Accrual European Model

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2009-01-01

    Superimposing the European accrual model on the cash accounting model presently used at the level of local communities in Romania reveals the anachronism of the latter, focusing the discussion on when the accrual model will be included in everyday public practice. The foundations of the accrual model were first defined in the law on commercial companies adopted in Great Britain in 1985, which provided that all income and charges relating to the financial year "will be taken into consideration without any boundary to the reception or payment date." The accrual model in accounting requires recording the non-cash effects of transactions or financial events in the periods in which they occur, rather than in whatever period cash is generated, received or paid. The development of business was the basis for the increasing sophistication of the recording of transactions and financial events, being a prerequisite for recording debtors' and creditors' amounts.

  8. Model Application of Accounting Information Systems of Spare Parts Sales and Purchase on Car Service Company

    Directory of Open Access Journals (Sweden)

    Lianawati Christian

    2015-12-01

    The purpose of this research is to analyze the accounting information systems for sales and purchases of spare parts in general car service companies, to identify the problems encountered, and to determine the information needs. The research used a literature study to collect data, a field study with observation, and a design phase using UML (Unified Modeling Language) covering activity diagrams, class diagrams, use case diagrams, database design, form design, display design and draft reports. The result is an application model of an accounting information system for sales and purchases of spare parts in general car service companies. In conclusion, the accounting information system for sales and purchases makes it easy for management to obtain information quickly, and supports the fast and accurate presentation of reports.

  9. Extension of the gurson model accounting for the void size effect

    Institute of Scientific and Technical Information of China (English)

    Jie Wen; Keh-Chih Hwang; Yonggang Huang

    2005-01-01

    A continuum model of solids with cylindrical microvoids is proposed based on the Taylor dislocation model. The model is an extension of the Gurson model in the sense that the void size effect is accounted for. Besides the void volume fraction f, the intrinsic material length l becomes a parameter representing the voids, since the void size now comes into play in the Gurson model. Approximate yield functions in analytic form are suggested both for solids with cylindrical microvoids and for solids with spherical microvoids. The application to uniaxial tension curves shows precise agreement between the approximate analytic yield function and the "exact" parametric form of the integrals.
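
    For reference, the classical Gurson yield function for spherical voids, which the extension above augments with the intrinsic material length l, reads:

    ```latex
    \Phi = \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_{y}}\right)^{2}
         + 2 f \cosh\!\left(\frac{3\,\sigma_{m}}{2\,\sigma_{y}}\right)
         - 1 - f^{2} = 0,
    ```

    where σ_eq is the macroscopic von Mises equivalent stress, σ_m the mean (hydrostatic) stress, σ_y the matrix yield stress, and f the void volume fraction; setting f = 0 recovers the von Mises criterion.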

  10. Boltzmann babies in the proper time measure

    Energy Technology Data Exchange (ETDEWEB)

    Bousso, Raphael; Bousso, Raphael; Freivogel, Ben; Yang, I-Sheng

    2007-12-20

    After commenting briefly on the role of the typicality assumption in science, we advocate a phenomenological approach to the cosmological measure problem. Like any other theory, a measure should be simple, general, well defined, and consistent with observation. This allows us to proceed by elimination. As an example, we consider the proper time cutoff on a geodesic congruence. It predicts that typical observers are quantum fluctuations in the early universe, or Boltzmann babies. We sharpen this well-known youngness problem by taking into account the expansion and open spatial geometry of pocket universes. Moreover, we relate the youngness problem directly to the probability distribution for observables, such as the temperature of the cosmic background radiation. We consider a number of modifications of the proper time measure, but find none that would make it compatible with observation.

  11. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    Directory of Open Access Journals (Sweden)

    Ninna Reitzel Jensen

    2015-06-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our model by conducting scenario analysis based on Monte Carlo simulation, but the model applies to scenarios in general and to worst-case and best-estimate scenarios in particular. In addition to easy computations, our model offers a common framework for the valuation of life insurance payments across product types. This enables comparison of participating life insurance products and unit-linked insurance products, thus building a bridge between the two different ways of formalizing life insurance products. Finally, our model distinguishes itself from the existing literature by taking into account the Markov model for the state of the policyholder and, hereby, facilitating event risk.

  12. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    Science.gov (United States)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied to model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the updating structural parameter, with its mean and variance modeled as functions of temperature and excitation amplitude. The modal parameters identified over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One objective of this study is to show that increasing the level of information in the updating process reduces the posterior variation of the updating structural parameter (concrete Young's modulus). To this end, the calibration is performed at three information levels, using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from data measured after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.
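
    One ingredient of such a framework can be sketched very simply: fitting the mean of the concrete Young's modulus as a function of ambient temperature by least squares. The data, the linear form E(T) = θ0 + θ1·T, and all parameter values below are assumptions for illustration, not the paper's calibration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical monitoring data (illustration only): ambient temperature
    # (deg C) and identified concrete Young's modulus (GPa). The paper models
    # both the mean and the variance of E as functions of temperature and
    # excitation amplitude; here we fit only a linear mean function.
    temps = rng.uniform(-10, 30, size=100)
    E_true = 32.0 - 0.05 * temps                    # assumed stiffening when cold
    E_obs = E_true + rng.normal(0, 0.3, size=100)   # identification scatter

    # Ordinary least squares for [theta0, theta1] in E(T) = theta0 + theta1*T
    A = np.column_stack([np.ones_like(temps), temps])
    theta, *_ = np.linalg.lstsq(A, E_obs, rcond=None)
    print(theta)  # close to the assumed [32.0, -0.05]
    ```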

  13. Business Models, Accounting and Billing Concepts in Grid-Aware Networks

    Science.gov (United States)

    Kotrotsos, Serafim; Racz, Peter; Morariu, Cristian; Iskioupi, Katerina; Hausheer, David; Stiller, Burkhard

    The emerging Grid economy sets new challenges for the network. A growing body of literature underlines the significance of network-awareness for efficient and effective Grid services. Following this path of Grid evolution, this paper identifies some key challenges in the areas of business modeling, accounting and billing, and proposes an architecture that addresses them.

  14. An analytical model for particulate reinforced composites (PRCs) taking account of particle debonding and matrix cracking

    Science.gov (United States)

    Jiang, Yunpeng

    2016-10-01

    In this work, a simple micromechanics-based model is developed to describe the overall stress-strain relations of particulate reinforced composites (PRCs), taking into account both particle debonding and matrix cracking damage. Based on the secant homogenization frame, the effective compliance tensor is first given for the perfect composite without any damage. The progressive interface debonding damage is controlled by a Weibull probability function, and the volume fraction of detached particles is then included in the equivalent compliance tensor to account for the impact of particle debonding. Matrix cracking is introduced into the model to capture the stress-softening stage in the deformation of PRCs. The analytical model is first verified by comparison with the corresponding experiment, and parameter analyses are then conducted. This modeling should shed some light on optimizing microstructures to effectively improve the mechanical behavior of PRCs.
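
    A Weibull-controlled debonding law of the kind mentioned above can be sketched as follows; the functional form and all parameter values here are assumptions for illustration, since the abstract does not give the exact expression:

    ```python
    import math

    def debonded_fraction(stress, s0=500.0, m=4.0, f0=0.15):
        """Volume fraction of detached particles under an assumed Weibull
        damage law:

            f_d = f0 * (1 - exp(-(stress / s0)**m))

        where f0 is the total particle volume fraction, s0 a characteristic
        interface strength (MPa) and m the Weibull modulus. As the applied
        stress grows, f_d rises from 0 toward f0, progressively softening
        the equivalent compliance tensor.
        """
        return f0 * (1.0 - math.exp(-((stress / s0) ** m)))

    for s in (100, 300, 500, 700):
        print(s, round(debonded_fraction(s), 4))
    ```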

  15. A WEAKLY NONLINEAR WATER WAVE MODEL TAKING INTO ACCOUNT DISPERSION OF WAVE PHASE VELOCITY

    Institute of Scientific and Technical Information of China (English)

    李瑞杰; 李东永

    2002-01-01

    This paper presents a weakly nonlinear water wave model using a mild slope equation and a new explicit formulation that takes into account the dispersion of wave phase velocity. The formulation approximates Hedges' (1987) nonlinear dispersion relationship and accords well with the original empirical formula. Comparison of the calculated results with experimental data and with the results of linear wave theory shows that the present water wave model, which considers the dispersion of phase velocity, is rational and in good agreement with the experimental data.

  16. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    CERN Document Server

    Mitchell, Lewis

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistics required in the analysis update are estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described: a time-constant treatment, where the model error statistical description is time-invariant, and a time-varying treatment, where the assumed model error statistics are randomly sampled at each analysis step. We compare both methods with the standard approach of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low-order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through th...
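
    The time-varying treatment described above can be sketched as follows; the state dimension, the archive of increments, and the sampling step are all assumed for illustration and are not the authors' configuration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Sketch: estimate a model-error covariance Q from historical
    # (re)analysis increments, then perturb each forecast ensemble member
    # with a fresh draw from N(0, Q) before the ETKF analysis step.
    n_state, n_hist, n_ens = 3, 200, 10

    # Hypothetical archive of historical analysis increments (x_a - x_f)
    increments = rng.normal(0.0, [0.5, 1.0, 0.2], size=(n_hist, n_state))
    Q = np.cov(increments, rowvar=False)   # estimated model-error statistics

    # Forecast ensemble (columns = members)
    forecast = rng.normal(0.0, 1.0, size=(n_state, n_ens))

    # Time-varying treatment: sample new model-error realizations each step
    eta = rng.multivariate_normal(np.zeros(n_state), Q, size=n_ens).T
    inflated = forecast + eta

    # Adding eta enlarges the ensemble spread where the increments archive
    # indicates the model has been in error, unlike uniform inflation.
    spread_before = forecast.var(axis=1, ddof=1).sum()
    spread_after = inflated.var(axis=1, ddof=1).sum()
    print(spread_before, spread_after)
    ```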

  17. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
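
    Regression calibration, the first of the approaches mentioned, can be sketched with replicate data; the setup below is a hypothetical illustration, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical setup: the true covariate X (e.g. log total REM counts)
    # is observed only through two error-prone replicates W = X + U.
    n = 500
    X = rng.normal(5.0, 1.0, size=n)    # true covariate
    sigma_u = 0.8                       # measurement error SD
    W1 = X + rng.normal(0, sigma_u, size=n)
    W2 = X + rng.normal(0, sigma_u, size=n)
    Wbar = (W1 + W2) / 2

    # Replicates identify the error variance: Var(W1 - W2) = 2 * sigma_u^2,
    # and averaging two replicates halves the error variance of Wbar.
    sigma_u2_hat = np.var(W1 - W2, ddof=1) / 2.0
    sigma_x2_hat = np.var(Wbar, ddof=1) - sigma_u2_hat / 2.0

    # Regression-calibrated covariate E[X | Wbar] under normality:
    lam = sigma_x2_hat / (sigma_x2_hat + sigma_u2_hat / 2.0)  # reliability
    X_calib = Wbar.mean() + lam * (Wbar - Wbar.mean())
    # X_calib would replace Wbar as the predictor in the Cox model,
    # shrinking toward the mean to offset attenuation bias.
    print(round(lam, 3))
    ```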

  18. Cost accounting models used for price-setting of health services: an international review.

    Science.gov (United States)

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals.

  19. THE ROLE OF TECHNICAL CONSUMPTION CALCULATION MODELS ON ACCOUNTING INFORMATION SYSTEMS OF PUBLIC UTILITIES SERVICES OPERATORS

    Directory of Open Access Journals (Sweden)

    GHEORGHE CLAUDIU FEIES

    2012-05-01

    A study of how public utility operators are managed reveals the influence of the specific activities of public utilities on their financial accounting systems. These systems also exhibit an asymmetry, resulting from the organization and the specific nature of the services, which implies a close link between the financial accounting system and the specialized technical department. The research methodology consists of observing the specific activities of public utility operators and their influence on the information system, and of analyzing the views presented in work published in several journals. The paper analyses the impact of the technical computing models used by community public utility services on the financial statements and, therefore, on the information that the accounting information system provides to stakeholders.

  20. A selection model for accounting for publication bias in a full network meta-analysis.

    Science.gov (United States)

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency.
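
    A Copas-type selection mechanism of the kind this model builds on can be sketched as follows; the probit-in-1/se form follows the standard Copas formulation, and the parameter values are hypothetical:

    ```python
    from math import erf, sqrt

    def selection_probability(se, gamma0=-0.5, gamma1=0.3):
        """Publication probability for a study with standard error `se`:

            P(publish | se) = Phi(gamma0 + gamma1 / se)

        Small studies (large se) are published with lower probability,
        which is the mechanism the selection model adjusts for.
        """
        z = gamma0 + gamma1 / se
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

    for se in (0.05, 0.2, 0.5):
        print(se, round(selection_probability(se), 3))
    ```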

  1. Modeling of vapor intrusion from hydrocarbon-contaminated sources accounting for aerobic and anaerobic biodegradation

    Science.gov (United States)

    Verginelli, Iason; Baciocchi, Renato

    2011-11-01

    A one-dimensional steady-state vapor intrusion model including both anaerobic and oxygen-limited aerobic biodegradation was developed. The aerobic and anaerobic layer thicknesses are calculated by stoichiometrically coupling the reactive transport of vapors with oxygen transport and consumption. The model accounts for the different oxygen demand in the subsurface required to sustain the aerobic biodegradation of the compound(s) of concern and for the baseline soil oxygen respiration. In the case of anaerobic reaction under methanogenic conditions, the model accounts for the generation of methane, which leads to a further oxygen demand, due to methane oxidation, in the aerobic zone. The model was solved analytically and applied, using representative parameter ranges and values, to identify under which site conditions the attenuation of hydrocarbons migrating into indoor environments is likely to be significant. Simulations were performed assuming a soil contaminated by toluene only, by a BTEX mixture, by Fresh Gasoline and by Weathered Gasoline. The results show that, under several site conditions, the oxygen concentration below the building is sufficient to sustain aerobic biodegradation. For these scenarios aerobic biodegradation is the primary attenuation mechanism, i.e., the anaerobic contribution is negligible and a model accounting only for aerobic biodegradation can be used. Conversely, in all cases where oxygen is not sufficient to sustain aerobic biodegradation alone (e.g., highly contaminated sources), anaerobic biodegradation can contribute significantly to the overall attenuation, depending on the site-specific conditions.
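
The stoichiometric coupling between hydrocarbon and oxygen transport rests on mass balances of the kind sketched below for toluene. This is a minimal illustration of the underlying chemistry, not a calculation taken from the paper:

```python
# Stoichiometric oxygen demand for aerobic toluene degradation:
#   C7H8 + 9 O2 -> 7 CO2 + 4 H2O
# A minimal sketch of the kind of mass balance such models couple to
# vapor transport; molar masses are approximate standard values.

M_TOLUENE = 7 * 12.011 + 8 * 1.008   # g/mol, C7H8
M_O2 = 2 * 15.999                    # g/mol

# Mass of O2 consumed per unit mass of toluene degraded
oxygen_demand = 9 * M_O2 / M_TOLUENE
print(f"{oxygen_demand:.2f} g O2 per g toluene")
```

Roughly 3.1 g of oxygen is needed per gram of toluene, before any allowance for baseline soil respiration or methane oxidation.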

  3. VVV High Proper Motion Survey

    CERN Document Server

    Gromadzki, M; Folkes, S; Beamin, J C; Ramirez, K Pena; Borissova, J; Pinfield, D; Jones, H; Minniti, D; Ivanov, V D

    2013-01-01

    Here we present a survey of proper-motion stars towards the Galactic Bulge and an adjacent plane region, based on VISTA-VVV data. The search method is based on cross-matching photometric Ks-band CASU catalogs. The most interesting discoveries are shown.

  4. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

    Full Text Available On the basis of a microscopic self-consistent field model, the thermodynamics of a many-particle Fermi system at finite temperatures, taking three-body interactions into account, is constructed, and the quasiparticle equations of motion are obtained. It is shown that a delta-like three-body interaction gives no contribution to the self-consistent field, and that the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion effective mass and the system's equation of state, including the contribution from three-body forces. The effective mass and pressure are numerically calculated for a "semi-transparent sphere" potential at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, when only pair forces are taken into account, a repulsive interaction reduces the quasiparticle effective mass relative to the mass of a free particle, while an attractive interaction raises the effective mass. The question of the thermodynamic stability of the Fermi system is considered, and the three-body repulsive interaction is shown to extend the region of stability of a system with interparticle pair attraction. The quasiparticle energy spectrum is calculated taking three-body forces into account.

  5. Modeling the interaction of electric current and tissue: importance of accounting for time varying electric properties.

    Science.gov (United States)

    Evans, Daniel J; Manwaring, Mark L

    2007-01-01

    Time-varying computer models of the interaction of electric current and tissue are very valuable in helping to understand the complexity of the human body and biological tissue. The electrical properties of tissue, permittivity and conductivity, are vital to accurately modeling the interaction of human tissue with electric current. Past models have represented the electric properties of the tissue as constant or temperature dependent. This paper presents time-dependent electric properties that change as a result of tissue damage, temperature, blood flow, blood vessels, and tissue type. Six models are compared to emphasize the importance of accounting for these different tissue properties in the computer model. In particular, incorporating the time-varying nature of the electric properties of human tissue into the model leads to a significant increase in predicted tissue damage. An important feature of the model is the feedback loop created between the electric properties, tissue damage, and temperature.

  6. Meta-analysis of diagnostic tests accounting for disease prevalence: a new model using trivariate copulas.

    Science.gov (United States)

    Hoyer, A; Kuss, O

    2015-05-20

    In real life, and somewhat contrary to biostatistical textbook knowledge, the sensitivity and specificity (and not only the predictive values) of diagnostic tests can vary with the underlying prevalence of disease. In meta-analysis of diagnostic studies, accounting for this fact naturally leads to a trivariate expansion of the traditional bivariate logistic regression model with random study effects. In this paper, a new model is proposed using trivariate copulas and beta-binomial marginal distributions for sensitivity, specificity, and prevalence as an expansion of the bivariate model. Two different copulas are used: the trivariate Gaussian copula and a trivariate vine copula based on the bivariate Plackett copula. This model has a closed-form likelihood, so standard software (e.g., SAS PROC NLMIXED) can be used. The results of a simulation study show that the copula models perform at least as well as, and frequently better than, the standard model. The methods are illustrated by two examples.
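
The general idea of a Gaussian copula with beta marginals can be sketched as follows. The correlation matrix and Beta parameters below are invented for illustration, not values from the paper:

```python
import numpy as np
from scipy.stats import norm, beta

# Sketch: sampling from a trivariate Gaussian copula with Beta marginals
# for sensitivity, specificity and prevalence. Correlated standard
# normals are mapped to uniforms, then through Beta quantile functions.
rng = np.random.default_rng(1)

R = np.array([[1.0, 0.3, 0.2],      # copula correlation (assumed)
              [0.3, 1.0, 0.1],
              [0.2, 0.1, 1.0]])

z = rng.multivariate_normal(np.zeros(3), R, size=10_000)
u = norm.cdf(z)                     # correlated uniform marginals

sens = beta.ppf(u[:, 0], 8, 2)      # Beta(8, 2): mean 0.8 (assumed)
spec = beta.ppf(u[:, 1], 9, 1)      # Beta(9, 1): mean 0.9 (assumed)
prev = beta.ppf(u[:, 2], 2, 8)      # Beta(2, 8): mean 0.2 (assumed)

print(sens.mean(), spec.mean(), prev.mean())
```

The marginal means follow the Beta parameters, while the dependence between sensitivity, specificity and prevalence is governed entirely by the copula correlation matrix.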

  7. Accounting for anthropogenic actions in modeling of stream flow at the regional scale

    Science.gov (United States)

    David, C. H.; Famiglietti, J. S.

    2013-12-01

    The modeling of the horizontal movement of water from land to coasts at scales ranging from 10^5 km^2 to 10^6 km^2 has benefited from extensive research within the past two decades. In parallel, community technology for gathering/sharing surface water observations and datasets describing the geography of terrestrial water bodies has recently made groundbreaking advancements. Yet the fields of computational hydrology and hydroinformatics have barely started to work hand in hand, and much research remains to be done before we can better understand the anthropogenic impact on surface water through combined observations and models. Here, we build on our existing river modeling approach that leverages community state-of-the-art tools such as atmospheric data from the second phase of the North American Land Data Assimilation System (NLDAS2), river networks from the enhanced National Hydrography Dataset (NHDPlus), and observations from the U.S. Geological Survey National Water Information System (NWIS) obtained through CUAHSI web services. Modifications are made to our integrated observational/modeling system to include treatment of anthropogenic actions such as dams, pumping and diversions in river networks. Initial results of a study focusing on the entire State of California suggest that the availability of data describing human alterations to natural river networks, combined with a proper representation of such actions in our models, could help advance hydrology further. [Figure: snapshot from an animation of flow in California river networks; the full animation is available at http://www.ucchm.org/david/rapid.htm]

  8. Efficient modeling of sun/shade canopy radiation dynamics explicitly accounting for scattering

    Directory of Open Access Journals (Sweden)

    P. Bodin

    2011-08-01

    Full Text Available The separation of global radiation (Rg) into its direct (Rb) and diffuse (Rd) constituents is important when modeling plant photosynthesis, because a high Rd:Rg ratio has been shown to enhance Gross Primary Production (GPP). To include this effect in vegetation models, the plant canopy must be separated into sunlit and shaded leaves, for example using an explicit 3-dimensional ray-tracing model. However, because such models are often too intractable and computationally expensive for theoretical or large-scale studies, simpler sun/shade approaches are often preferred. A widely used and computationally efficient sun/shade model is the one originally developed by Goudriaan (1977) (GOU), which, however, does not explicitly account for radiation scattering.

    Here we present a new model based on the GOU model, but which in contrast explicitly simulates radiation scattering by sunlit leaves and the absorption of this radiation by the canopy layers above and below (a 2-stream approach). Compared to the GOU model, our model predicts significantly different profiles of scattered radiation that are in better agreement with measured profiles of downwelling diffuse radiation. With respect to these data, our model's performance equals that of a more complex and much slower iterative radiation model, while maintaining the simplicity and computational efficiency of the GOU model.
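
The sunlit/shaded split at the heart of such sun/shade schemes follows Beer's law for the direct beam. A minimal sketch, with an assumed extinction coefficient (not a parameter from this paper):

```python
import numpy as np

# Sketch of the sunlit/shaded canopy split used in sun/shade schemes:
# the direct beam decays exponentially with cumulative leaf area index
# (Beer's law). k_b is an assumed extinction coefficient for black leaves.
k_b = 0.5
LAI = np.linspace(0, 6, 7)          # cumulative leaf area index from canopy top

f_sun = np.exp(-k_b * LAI)                     # sunlit leaf fraction at depth LAI
sunlit_lai = (1 - np.exp(-k_b * LAI)) / k_b    # total sunlit LAI above that depth

print(f_sun[-1], sunlit_lai[-1])
```

With a total LAI of 6, only about 5% of leaves at the canopy bottom remain sunlit, which is why partitioning direct and diffuse radiation matters for GPP.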

  9. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection

    Science.gov (United States)

    Zipkin, Elise F.; Grant, Evan H. Campbell; Fagan, William F.

    2012-01-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multi-species hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions of species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation dataset. We found that wetland hydroperiod (the length of time that a wetland holds water) as well as the occurrence state in the prior year were generally the most important factors in determining occupancy. The model with only habitat covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multi-species models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for
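
The idea of a posterior distribution of AUC can be sketched with simulated data: compute a rank-based AUC for each posterior draw of predicted occupancy probabilities against the observed states. All numbers below are simulated, not from the study:

```python
import numpy as np

# Sketch: posterior distribution of ROC AUC. For each posterior draw of
# predicted occupancy probabilities, compute the rank-based (Mann-Whitney)
# AUC against observed occupancy. Simulated data, not the amphibian dataset.
rng = np.random.default_rng(0)

def auc(scores, labels):
    """Probability a random positive outranks a random negative (ties = 0.5)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

truth = rng.integers(0, 2, size=60)                           # observed occupancy
draws = np.clip(truth + rng.normal(0, 0.6, (500, 60)), 0, 1)  # posterior draws

auc_post = np.array([auc(d, truth) for d in draws])           # posterior of AUC
print(auc_post.mean(), np.percentile(auc_post, [2.5, 97.5]))
```

The spread of `auc_post` quantifies the uncertainty in predictive quality, which is the point of computing a posterior over AUC rather than a single value.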

  10. Accounting Models of the Human Factor and its Architecture in Scheduling and Acceptance of Administrative Solutions

    Science.gov (United States)

    2010-10-01

    …terrorism or fighting, as for example in Bhopal, Goiânia, Chernobyl, Novosibirsk. A general global trend is an extension of the tasks from military… animals. …endemic infections, dangerous insects and animals. Vector equipment and protective equipment (Eq) describes the physiological and hygienic…

  11. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect.

    Science.gov (United States)

    Carenzo, M; Pellicciotti, F; Mabillard, J; Reid, T; Brock, B W

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt-rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are often not available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean-glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor, which does not account for the influence of varying debris thickness on melt and prescribes a constant reduction for the entire melt across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model's empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy-balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to that of the physically-based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the…
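
The general shape of a debris-dependent temperature-index model can be sketched as follows. The exponential parameter curves and all constants are illustrative assumptions, not the fitted values of this paper, and the sketch deliberately omits the thin-debris melt enhancement:

```python
import numpy as np

# Sketch of a debris-dependent temperature-index melt model of the form
#   M(d) = TF(d) * T + SRF(d) * (1 - albedo) * I    for T above a threshold,
# where the temperature factor TF and shortwave factor SRF decay with
# debris thickness d. All parameter values below are invented.

def melt_rate(d_cm, T, I, albedo=0.1):
    TF = 0.05 * np.exp(-0.03 * d_cm)     # temperature factor (assumed)
    SRF = 0.0094 * np.exp(-0.05 * d_cm)  # shortwave radiation factor (assumed)
    return np.where(T > 1.0, TF * T + SRF * (1 - albedo) * I, 0.0)

thick = np.array([0.0, 5.0, 20.0, 50.0])   # debris thickness, cm
m = melt_rate(thick, T=8.0, I=600.0)       # warm, sunny conditions
print(m)
```

Making the empirical factors functions of debris thickness is what lets a single parameterization reproduce the nonlinear decline of melt under progressively thicker debris.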

  13. Extending the simple linear regression model to account for correlated responses: an introduction to generalized estimating equations and multi-level mixed modelling.

    Science.gov (United States)

    Burton, P; Gurrin, L; Sly, P

    1998-06-15

    Much of the research in epidemiology and clinical science is based upon longitudinal designs which involve repeated measurements of a variable of interest in each of a series of individuals. Such designs can be very powerful, both statistically and scientifically, because they enable one to study changes within individual subjects over time or under varied conditions. However, this power arises because the repeated measurements tend to be correlated with one another, and this must be taken into proper account at the time of analysis or misleading conclusions may result. Recent advances in statistical theory and in software development mean that studies based upon such designs can now be analysed more easily, in a valid yet flexible manner, using a variety of approaches which include the use of generalized estimating equations, and mixed models which incorporate random effects. This paper provides a particularly simple illustration of the use of these two approaches, taking as a practical example the analysis of a study which examined the response of portable peak expiratory flow meters to changes in true peak expiratory flow in 12 children with asthma. The paper takes the reader through the relevant practicalities of model fitting, interpretation and criticism and demonstrates that, in a simple case such as this, analyses based upon these model-based approaches produce reassuringly similar inferences to standard analyses based upon more conventional methods.
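
The consequence of ignoring within-subject correlation can be sketched without any specialised library: simulate repeated measures with a shared subject effect, then compare the naive OLS standard error with a cluster-robust (sandwich) one. The design and all numbers below are simulated, not the peak-flow data:

```python
import numpy as np

# Sketch: 12 subjects x 8 repeated measures with a subject-level random
# intercept. Naive OLS treats the 96 observations as independent; the
# cluster-robust sandwich estimator sums score contributions per subject.
rng = np.random.default_rng(42)
n_subj, n_rep = 12, 8

subj = np.repeat(np.arange(n_subj), n_rep)
x = rng.normal(size=n_subj * n_rep)
u = rng.normal(0, 2.0, n_subj)[subj]        # subject random intercepts
y = 1.0 + 0.5 * x + u + rng.normal(0, 1.0, n_subj * n_rep)

X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Naive OLS covariance (assumes independent errors)
naive_var = resid @ resid / (len(y) - 2) * XtX_inv

# Cluster-robust sandwich: outer products of per-subject score sums
meat = np.zeros((2, 2))
for s in range(n_subj):
    idx = subj == s
    g = X[idx].T @ resid[idx]
    meat += np.outer(g, g)
robust_var = XtX_inv @ meat @ XtX_inv

print("intercept SE naive :", np.sqrt(naive_var[0, 0]))
print("intercept SE robust:", np.sqrt(robust_var[0, 0]))
```

The robust intercept standard error is several times the naive one, illustrating why GEE or mixed models (which model this correlation directly) are needed for valid inference.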

  14. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE

    Directory of Open Access Journals (Sweden)

    Brentani Helena

    2004-08-01

    Full Text Available Abstract Background An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern" or Massively Parallel Signature Sequencing (MPSS) is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results We introduce a Bayesian model that accounts for the within-class variability by means of a mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the beta-binomial model are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression into a more reliable one. Our method is freely available, under the GPL/GNU copyleft, through a user-friendly web-based online tool or as R language scripts at a supplemental web site.
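
The effect of within-class variability can be sketched by comparing a plain binomial with an overdispersed beta-binomial for a tag count. The tag frequency and Beta parameters below are illustrative assumptions, not values fitted to SAGE data:

```python
from scipy.stats import binom, betabinom

# Sketch: within-class variability as overdispersion. A tag is seen k
# times among n sampled tags. A plain binomial (technical sampling error
# only) finds the count far out in the tail; a beta-binomial with the
# same mean but extra between-individual variance is far less surprised.
n, k = 50_000, 25
p = 2e-4                         # library-wide tag frequency (assumed)

# Beta(a, b) chosen so that a / (a + b) = p, with substantial spread
a = 0.5
b = a * (1 - p) / p

tail_binom = binom.sf(k - 1, n, p)
tail_bb = betabinom.sf(k - 1, n, a, b)
print(tail_binom, tail_bb)
```

The beta-binomial tail probability is orders of magnitude larger, which is exactly why tags can look highly significant under a binomial model yet unremarkable once biological replication is accounted for.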

  15. INTERNAL PROPER MOTIONS IN THE ESKIMO NEBULA

    Energy Technology Data Exchange (ETDEWEB)

    García-Díaz, Ma. T.; Gutiérrez, L.; Steffen, W.; López, J. A. [Instituto de Astronomía, Universidad Nacional Autónoma de México, Km 103 Carretera Tijuana-Ensenada, 22860 Ensenada, B.C. (Mexico); Beckman, J., E-mail: tere@astro.unam.mx, E-mail: leonel@astro.unam.mx, E-mail: wsteffen@astro.unam.mx, E-mail: jal@astro.unam.mx, E-mail: jeb@iac.es [Instituto de Astrofísica de Canarias, La Laguna, Tenerife (Spain)

    2015-01-10

    We present measurements of internal proper motions at more than 500 positions in NGC 2392, the Eskimo Nebula, based on images acquired with WFPC2 on board the Hubble Space Telescope at two epochs separated by 7.695 yr. Comparisons of the two observations clearly show the expansion of the nebula. We measured the amplitude and direction of the motion of local structures in the nebula by determining their relative shift during that interval. In order to assess the potential uncertainties in the determination of proper motions in this object, the measurements were performed using two different methods previously used in the literature. We compare the results from the two methods and, for the scientific analysis, adopt the cross-correlation method because it is more reliable. We go on to perform a "criss-cross" mapping analysis on the proper motion vectors, which helps in the interpretation of the velocity pattern. By combining our proper motion results with radial velocity measurements obtained from high-resolution spectroscopic observations, and employing an existing 3D model, we estimate the distance to the nebula to be 1.3 kpc.
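
The cross-correlation method for measuring a local shift between two epochs can be sketched on synthetic data: build a small "blob", displace it by a known amount, and recover the displacement from the peak of an FFT-based cross-correlation. This is a generic illustration, not the actual HST pipeline:

```python
import numpy as np

# Sketch: measuring the shift of a structure between two epochs via the
# peak of the FFT cross-correlation. Synthetic images, known shift (+3, -2).
rng = np.random.default_rng(7)
n = 64
yy, xx = np.mgrid[0:n, 0:n]

blob = np.exp(-((xx - 30) ** 2 + (yy - 28) ** 2) / 20.0)
epoch1 = blob + 0.01 * rng.normal(size=(n, n))
epoch2 = np.roll(np.roll(blob, 3, axis=0), -2, axis=1)   # shifted copy
epoch2 = epoch2 + 0.01 * rng.normal(size=(n, n))

# Cross-correlation peaks at the displacement of epoch2 relative to epoch1
xcorr = np.fft.ifft2(np.fft.fft2(epoch1).conj() * np.fft.fft2(epoch2)).real
dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
dy, dx = [d - n if d > n // 2 else d for d in (dy, dx)]
print(dy, dx)   # recovered shift
```

Real measurements refine this with sub-pixel interpolation around the peak, but the principle of locating the correlation maximum is the same.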

  16. Accounting for crop rotations in acreage choice modeling: a tractable modeling framework

    OpenAIRE

    Carpentier, Alain; Gohin, Alexandre

    2014-01-01

    Crop rotation effects and constraints are major determinants of farmers’ crop choices. Crop rotations are also keystone elements of most environmentally friendly cropping systems. The aim of this paper is twofold. First, it proposes simple tools for investigating optimal dynamic crop acreage choices accounting for crop rotation effects and constraints in an uncertain context. Second, it illustrates the impacts of crop rotation effects and constraints on farmers’ acreage choices through simple...

  17. Current Account Imbalances and Economic Growth: a two-country model with real-financial linkages

    OpenAIRE

    Laura Barbosa de Carvalho

    2012-01-01

    This paper builds a two-country stock-flow consistent model by combining a debt-led economy that issues the international reserve currency with an export-led economy. The model has two major implications. First, an initial trade deficit in the debt-led country leads to a permanent imbalance in the current account, even when the exchange rate is at parity. Second, different re-balancing mechanisms, namely a currency depreciation or the reduction of the propensity to import in the debt-led c...

  18. Model of inventory replenishment in periodic review accounting for the occurrence of shortages

    Directory of Open Access Journals (Sweden)

    Stanisław Krzyżaniak

    2014-03-01

    Full Text Available Background: Despite the development of alternative concepts of goods flow management, inventory management under conditions of random variations in demand is still an important issue, both from the point of view of inventory keeping and replenishment costs and of the service level measured as the level of inventory availability. There are a number of inventory replenishment systems used in these conditions, but they are mostly developments of two basic systems: reorder-point-based and periodic-review-based. This paper deals with the latter system. Numerous studies indicate the need to improve the classical models describing that system, mainly because of the necessity to adapt the model better to actual conditions. This allows a correct selection of the parameters that control the inventory replenishment system used and, as a result, the achievement of the expected economic effects. Methods: This research aimed at building a model of the periodic review system that reflects the relations (observed during simulation tests) between the volume of inventory shortages, the degree to which so-called deferred demand is accounted for, and the service level expressed as the probability of satisfying demand in the review and inventory replenishment cycle. The following model building and testing method was applied: numerical simulation of inventory replenishment - detailed analysis of the simulation results - construction of the model taking into account the regularities observed during the simulations - determination of principles for solving the system of relations creating the model - verification of the results obtained from the model using the results from simulation. Results: Selected results are presented of calculations based on classical formulas and on the developed model, describing the relations between the service level and the parameters controlling the discussed inventory replenishment system. The results are compared to the simulation
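
The setting the paper models, a periodic-review, order-up-to policy with deferred (backordered) demand, can be sketched in a few lines of simulation. The demand distribution, lead time and order-up-to level below are illustrative assumptions:

```python
import numpy as np

# Sketch: periodic-review, order-up-to (R, S) policy with deferred demand.
# Every R days the inventory position is raised to S; unmet demand is
# backordered. The fill rate is the fraction of demand met immediately.
rng = np.random.default_rng(3)

S = 80             # order-up-to level (assumed)
R = 5              # review period, days (assumed)
lead = 2           # replenishment lead time, days (assumed)
days = 20_000

on_hand, backorders, pipeline = S, 0, []
served = demanded = 0
for t in range(days):
    # receive any replenishment due today
    on_hand += sum(q for (due, q) in pipeline if due == t)
    pipeline = [(due, q) for (due, q) in pipeline if due != t]
    # first serve deferred demand
    filled_back = min(on_hand, backorders)
    on_hand -= filled_back
    backorders -= filled_back

    d = rng.poisson(10)            # daily demand (assumed Poisson)
    demanded += d
    filled = min(on_hand, d)
    served += filled
    on_hand -= filled
    backorders += d - filled       # shortage becomes deferred demand

    if t % R == 0:                 # review: order up to S
        position = on_hand + sum(q for _, q in pipeline) - backorders
        if position < S:
            pipeline.append((t + lead, S - position))

fill_rate = served / demanded
print(f"fill rate: {fill_rate:.3f}")
```

Varying S in such a simulation traces out exactly the service-level-versus-control-parameter relations that the paper's analytical model is built to capture.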

  19. Generation of SEEAW asset accounts based on water resources management models

    Science.gov (United States)

    Pedro-Monzonís, María; Solera, Abel; Andreu, Joaquín

    2015-04-01

    One of the main challenges of the 21st century relates to the sustainable use of water, since water is an essential element for the life of all who inhabit our planet. In many cases, the lack of economic valuation of water resources leads to inefficient water use. In this regard, society expects policymakers and stakeholders to maximise the profit produced per unit of natural resources. Water planning and Integrated Water Resources Management (IWRM) represent the best way to achieve this goal. The System of Environmental-Economic Accounting for Water (SEEAW) serves as a tool for water allocation which enables the building of water balances in a river basin. The main concern of the SEEAW is to provide a standard approach which allows policymakers to compare results between different territories. But building water accounts is a complex task due to the difficulty of collecting the required data. Because the components of the hydrological cycle are difficult to gauge, simulation models have become an essential tool, extensively employed in recent decades. The aim of this paper is to present the building of a database that enables the combined use of hydrological models and water resources models developed with the AQUATOOL DSSS to fill in the SEEAW tables. This research is framed within the Water Accounting in a Multi-Catchment District (WAMCD) project, financed by the European Union. Its main goal is the development of water accounts in the Mediterranean Andalusian River Basin District, in Spain. This research aims to contribute to the objectives of the "Blueprint to safeguard Europe's water resources". It is noteworthy that, in Spain, a large part of these methodological decisions is included in the Spanish Guideline of Water Planning, with normative status, guaranteeing the consistency and comparability of the results.

  20. Accounting for anatomical noise in search-capable model observers for planar nuclear imaging.

    Science.gov (United States)

    Sen, Anando; Gifford, Howard C

    2016-01-01

    Model observers intended to predict the diagnostic performance of human observers should account for the effects of both quantum and anatomical noise. We compared the abilities of several visual-search (VS) and scanning Hotelling-type models to account for anatomical noise in a localization receiver operating characteristic (LROC) study involving simulated nuclear medicine images. Our VS observer invoked a two-stage process of search and analysis. The images featured lesions in the prostate and pelvic lymph nodes. Lesion contrast and the geometric resolution and sensitivity of the imaging collimator were the study variables. A set of anthropomorphic mathematical phantoms was imaged with an analytic projector based on eight parallel-hole collimators with different sensitivity and resolution properties. The LROC study was conducted with human observers and the channelized nonprewhitening, channelized Hotelling (CH) and VS model observers. The CH observer was applied in a "background-known-statistically" protocol while the VS observer performed a quasi-background-known-exactly task. Both of these models were applied with and without internal noise in the decision variables. A perceptual search threshold was also tested with the VS observer. The model observers without inefficiencies failed to mimic the average performance trend for the humans. The CH and VS observers with internal noise matched the humans primarily at low collimator sensitivities. With both internal noise and the search threshold, the VS observer attained quantitative agreement with the human observers. Computational efficiency is an important advantage of the VS observer.

  1. A Buffer Model Account of Behavioral and ERP Patterns in the Von Restorff Paradigm

    Directory of Open Access Journals (Sweden)

    Siri-Maria Kamp

    2016-06-01

    Full Text Available We combined a mechanistic model of episodic encoding with theories on the functional significance of two event-related potential (ERP) components to develop an integrated account of the Von Restorff effect, which refers to the enhanced recall probability for an item that deviates in some feature from the other items in its study list. The buffer model of Lehman and Malmberg (2009, 2013) can account for this effect: items encountered during encoding enter an episodic buffer where they are actively rehearsed. When a deviant item is encountered, the buffer is emptied of its prior content in order to re-allocate encoding resources towards this item, a process labeled “compartmentalization”. Based on theories of their functional significance, the P300 component of the ERP may co-occur with this hypothesized compartmentalization process, while the frontal slow wave may index rehearsal. We derived predictions from this integrated model for output patterns in free recall, systematic variance in ERP components, and associations between the two types of measures in a dataset of 45 participants who studied and freely recalled lists of the Von Restorff type. Our major predictions were confirmed: the behavioral and physiological results were consistent with those derived from the model. These findings demonstrate that constraining mechanistic models of episodic memory with brain activity patterns, and generating predictions for relationships between brain activity and behavior, can lead to novel insights into the relationship between the brain, the mind, and behavior.

  2. Regional Balance Model of Financial Flows through Sectoral Approaches System of National Accounts

    Directory of Open Access Journals (Sweden)

    Ekaterina Aleksandrovna Zaharchuk

    2017-03-01

    Full Text Available The main purpose of the study whose results are reflected in this article is the theoretical and methodological substantiation of the possibility of building a regional balance model of financial flows consistent with the principles of the System of National Accounts (SNA). The paper summarizes the international experience of building regional accounts in the SNA and discusses the advantages and disadvantages of existing techniques for constructing a Social Accounting Matrix. The authors propose an approach to building the regional balance model of financial flows based on disaggregated tables of the formation, distribution and use of the added value of a territory within the institutional sectors of the SNA (corporations, public administration, households). To resolve the problem of transitioning value added from industries to sectors, the authors offer an approach to accounting for the development, distribution and use of value added within the institutional sectors of territories. The methods of calculation are based on the publicly available information base of statistics agencies and federal services. The authors provide a scheme of the interrelations of the indicators of the regional balance model of financial flows. It allows the movement of regional resources to be mutually coordinated across the «corporations», «public administration» and «households» sectors, and the cash flows of the region by sector and direction of use. As a result, they form a single account of the formation and distribution of territorial financial resources, which is the regional balance model of financial flows. This matrix shows the distribution of financial resources by income sources and sectors, where the components of the formation (compensation, taxes and gross profit), distribution (transfers and payments) and use (final consumption, accumulation) of value added are

  3. A comparison of Graham and Piotroski investment models using accounting information and efficacy measurement

    Directory of Open Access Journals (Sweden)

    Nusrat Jahan

    2016-03-01

    Full Text Available We examine the investment models of Benjamin Graham and Joseph Piotroski and compare the efficacy of these two models by running backtests, using screening rules and ranking systems built in Portfolio 123. Using different combinations of screening rules and ranking systems, we also examine the performance of the Piotroski and Graham investment models. We find that the combination of the Piotroski and Graham investment models performs better than the S&P 500. We also find that Piotroski screening with Graham ranking generates the highest average annualized return among the combinations of screening rules and ranking systems analyzed in this paper. Overall, our results show a profound impact of accounting information on investors' decision making.
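
The screening half of Piotroski's model reduces to nine binary accounting signals summed into an F-score. The sketch below follows the standard formulation of those signals; the dictionary field names and data layout are illustrative assumptions, not taken from the paper or from Portfolio 123.

```python
# Sketch of the nine binary Piotroski F-score signals (field names are
# illustrative, not the paper's data schema).

def piotroski_f_score(cur, prev):
    """Score a firm from current-year and prior-year accounting data (dicts)."""
    roa = cur["net_income"] / cur["total_assets"]
    roa_prev = prev["net_income"] / prev["total_assets"]
    cfo = cur["cash_flow_ops"] / cur["total_assets"]
    signals = [
        roa > 0,                                         # 1. positive ROA
        cfo > 0,                                         # 2. positive operating cash flow
        roa > roa_prev,                                  # 3. improving ROA
        cfo > roa,                                       # 4. accruals: CFO exceeds ROA
        cur["leverage"] < prev["leverage"],              # 5. falling long-term leverage
        cur["current_ratio"] > prev["current_ratio"],    # 6. improving liquidity
        cur["shares_out"] <= prev["shares_out"],         # 7. no equity dilution
        cur["gross_margin"] > prev["gross_margin"],      # 8. improving gross margin
        cur["asset_turnover"] > prev["asset_turnover"],  # 9. improving efficiency
    ]
    return sum(signals)
```

A firm that passes all nine signals scores 9; value screens typically keep firms scoring 8 or 9.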

  4. Improved Mathematical Model of PMSM Taking Into Account Cogging Torque Oscillations

    Directory of Open Access Journals (Sweden)

    TUDORACHE, T.

    2012-08-01

    Full Text Available This paper presents an improved mathematical model of Permanent Magnet Synchronous Machine (PMSM that takes into account the Cogging Torque (CT oscillations that appear due to the mutual attraction between the Permanent Magnets (PMs and the anisotropic stator armature. The electromagnetic torque formula in the proposed model contains an analytical expression of the CT calibrated by Finite Element (FE analysis. The numerical calibration is carried out using a data fitting procedure based on the Simplex Downhill optimization algorithm. The proposed model is characterized by good accuracy and reduced computation effort, its performance being verified by comparison with the classical d-q model of the machine using Matlab/Simulink environment.
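
The structure of such a model is the classical d-q electromagnetic torque plus a rotor-position-dependent Fourier series for the cogging component. The sketch below shows that structure only; the machine parameters, harmonic amplitudes and phases are illustrative placeholders, not the paper's FE-calibrated values.

```python
import math

def pmsm_torque(theta, i_d, i_q, p=4, psi_m=0.1, L_d=2e-3, L_q=3e-3,
                cogging=((0.05, 0.0), (0.02, math.pi / 6))):
    """Total torque = classical d-q electromagnetic torque + cogging series.

    `cogging` holds (amplitude, phase) pairs for harmonics of the fundamental
    cogging frequency N_c; all numeric values here are illustrative.
    """
    n_c = 24  # cogging periods per mechanical revolution (slot/pole LCM)
    t_em = 1.5 * p * (psi_m * i_q + (L_d - L_q) * i_d * i_q)
    t_cog = sum(a * math.sin((k + 1) * n_c * theta + phi)
                for k, (a, phi) in enumerate(cogging))
    return t_em + t_cog
```

In the paper the harmonic amplitudes are the quantities fitted to FE results (via Simplex Downhill); here they are simply fixed constants.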

  5. A statistical RCL interconnect delay model taking account of process variations

    Institute of Scientific and Technical Information of China (English)

    Zhu Zhang-Ming; Wan Da-Jing; Yang Yin-Tang; En Yun-Fei

    2011-01-01

    As the feature size of CMOS integrated circuits continues to shrink, process variations have become a key factor affecting interconnect performance. Based on the equivalent Elmore model, the polynomial chaos theory and the Galerkin method, we propose a linear statistical RCL interconnect delay model that takes process variations into account through successive application of the linear approximation method. Based on a variety of nano-CMOS process parameters, HSPICE simulation results show that the maximum error of the proposed model is less than 3.5%. The proposed model is simple, of high precision, and can be used in the analysis and design of nanometer integrated circuit interconnect systems.
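
The nominal backbone of such a model is the Elmore delay of the RC ladder, and the variation-aware part linearizes that delay around the nominal process point. The sketch below shows both ideas for a plain RC (not RCL) ladder; the polynomial-chaos/Galerkin statistical machinery of the paper is omitted, and all element values are illustrative.

```python
def elmore_delay(res, cap):
    """Elmore delay of an RC ladder: tau = sum_i C_i * (R_1 + ... + R_i)."""
    tau = r_up = 0.0
    for r, c in zip(res, cap):
        r_up += r            # resistance on the path from the driver
        tau += c * r_up
    return tau

def linearized_delay(res, cap, d_res, d_cap):
    """First-order Taylor expansion of the Elmore delay around the nominal
    point, the kind of linear approximation the statistical model builds on."""
    tau = elmore_delay(res, cap)
    for j, dr in enumerate(d_res):       # d(tau)/dR_j = sum of C_j..C_n
        tau += dr * sum(cap[j:])
    for i, dc in enumerate(d_cap):       # d(tau)/dC_i = sum of R_1..R_i
        tau += dc * sum(res[:i + 1])
    return tau
```

Because the Elmore delay is multilinear in the R's and C's, the first-order expansion is exact for a perturbation of the resistances alone.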

  6. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    Science.gov (United States)

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  8. Internet accounting dictionaries

    DEFF Research Database (Denmark)

    Nielsen, Sandro; Mourier, Lise

    2005-01-01

    An examination of existing accounting dictionaries on the Internet reveals a general need for a new type of dictionary. In contrast to the dictionaries now accessible, the future accounting dictionaries should be designed as proper Internet dictionaries based on a functional approach so they can...

  9. Statistical model of rough surface contact accounting for size-dependent plasticity and asperity interaction

    Science.gov (United States)

    Song, H.; Vakis, A. I.; Liu, X.; Van der Giessen, E.

    2017-09-01

    The work by Greenwood and Williamson (GW) has initiated a simple but effective method of contact mechanics: statistical modeling based on the mechanical response of a single asperity. Two main assumptions of the original GW model are that the asperity response is purely elastic and that there is no interaction between asperities. However, as asperities lie on a continuous substrate, the deformation of one asperity will change the height of all other asperities through deformation of the substrate and will thus influence subsequent contact evolution. Moreover, a high asperity contact pressure will result in plasticity, which below tens of microns is size dependent, with smaller being harder. In this paper, the asperity interaction effect is taken into account through substrate deformation, while a size-dependent plasticity model is adopted for individual asperities. The intrinsic length in the strain gradient plasticity (SGP) theory is obtained by fitting to two-dimensional discrete dislocation plasticity simulations of the flattening of a single asperity. By utilizing the single asperity response in three dimensions and taking asperity interaction into account, a statistical calculation of rough surface contact is performed. The effectiveness of the statistical model is addressed by comparison with full-detail finite element simulations of rough surface contact using SGP. Throughout the paper, our focus is on the difference of contact predictions based on size-dependent plasticity as compared to conventional size-independent plasticity.
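
The GW baseline that the record's model improves upon can be sketched numerically: summits with Gaussian heights, a Hertzian elastic response per summit, and an integral of area and load over the height distribution. This is the purely elastic, interaction-free version (no substrate coupling, no size-dependent plasticity), and all parameters are dimensionless placeholders.

```python
import math

def gw_contact(separation, sigma=1.0, radius=1.0, e_star=1.0,
               density=1.0, n=4000, zmax=8.0):
    """Greenwood-Williamson rough contact: expected contact area and load
    per unit nominal area at a given mean-plane separation.

    Summit heights are Gaussian with std `sigma`; each summit of radius
    `radius` compressed by w contributes Hertzian area pi*R*w and load
    (4/3)*E*sqrt(R)*w^1.5.  Units are illustrative.
    """
    phi = lambda z: math.exp(-z * z / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    dz = (zmax - separation) / n
    area = load = 0.0
    for i in range(n):
        z = separation + (i + 0.5) * dz   # midpoint rule over summit heights
        w = z - separation                # compression of a summit at height z
        area += math.pi * radius * w * phi(z) * dz
        load += (4.0 / 3.0) * e_star * math.sqrt(radius) * w ** 1.5 * phi(z) * dz
    return density * area, density * load
```

The paper's contribution enters exactly where this sketch is simplest: the per-summit response (size-dependent plasticity instead of Hertz) and the coupling of summits through the substrate.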

  10. Modelling representative and coherent Danish farm types based on farm accountancy data for use in environmental assessments

    DEFF Research Database (Denmark)

    Dalgaard, Randi; Halberg, Niels; Kristensen, Ib Sillebak

    2006-01-01

    -oriented environmental assessment (e.g. greenhouse gas emissions per kg pork). The objective of this study was to establish a national agricultural model for estimating data on resource use, production and environmentally important emissions for a set of representative farm types. Every year a sample of farm accounts...... is established in order to report Danish agro-economical data to the ‘Farm Accountancy Data Network’ (FADN), and to produce ‘The annual Danish account statistics for agriculture’. The farm accounts are selected and weighted to be representative for the Danish agricultural sector, and similar samples of farm...... accounts are collected in most of the European countries. Based on a sample of 2138 farm accounts from year 1999 a national agricultural model, consisting of 31 farm types, was constructed. The farm accounts were grouped according to the major soil types, the number of working hours, the most important...

  11. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    Energy Technology Data Exchange (ETDEWEB)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between theoretical and practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO₂ and NOₓ emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulphur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  12. A simple model for predicting sprint-race times accounting for energy loss on the curve

    Science.gov (United States)

    Mureika, J. R.

    1997-11-01

    The mathematical model of J. Keller for predicting World Record race times, based on a simple differential equation of motion, predicted quite well the records of its day. One of its shortcomings is that it neglects to account for a sprinter's energy loss around a curve, a most important consideration, particularly in the 200m--400m. An extension to Keller's work is considered, modeling the aforementioned energy loss as a simple function of the centrifugal force acting on the runner around the curve. Theoretical World Record performances for the indoor and outdoor 200m are discussed, and the use of the model at 300m is investigated. Some predictions are made for possible 200m outdoor and indoor times as run by Canadian 100m WR holder Donovan Bailey, based on his 100m final performance at the 1996 Olympic Games in Atlanta.

  13. A Simple Model for Predicting Sprint Race Times Accounting for Energy Loss on the Curve

    CERN Document Server

    Mureika, J R

    1997-01-01

    The mathematical model of J. Keller for predicting World Record race times, based on a simple differential equation of motion, predicted quite well the records of its day. One of its shortcomings is that it neglects to account for a sprinter's energy loss around a curve, a most important consideration, particularly in the 200m--400m. An extension to Keller's work is considered, modeling the aforementioned energy loss as a simple function of the centrifugal force acting on the runner around the curve. Theoretical World Record performances for the indoor and outdoor 200m are discussed, and the use of the model at 300m is investigated. Some predictions are made for possible 200m outdoor and indoor times as run by Canadian 100m WR holder Donovan Bailey, based on his 100m final performance at the 1996 Olympic Games in Atlanta.
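
Keller's equation of motion is the first-order law dv/dt = f - v/tau with propulsion bounded by a maximum force F. One simple way to mimic the curve-running loss described above is to let the centripetal demand v²/R eat into the available propulsive force on the bend. The sketch below uses Keller's classic parameter fits and a standard 36.5 m bend radius, but the force-reduction rule is a stand-in for, not a reproduction of, Mureika's exact formulation.

```python
import math

def race_time(distance, F=12.2, tau=0.892, radius=None, curve_len=0.0, dt=1e-3):
    """Euler integration of Keller's sprint model dv/dt = f - v/tau.

    On the bend (first `curve_len` metres, radius `radius`) the propulsive
    force is reduced so that f^2 + (v^2/R)^2 = F^2, a simple stand-in for
    the centrifugal energy-loss term.  Parameters per unit mass.
    """
    x = v = t = 0.0
    while x < distance:
        f = F
        if radius and x < curve_len:
            f = math.sqrt(max(F * F - (v * v / radius) ** 2, 0.0))
        v += (f - v / tau) * dt
        x += v * dt
        t += dt
    return t

flat = race_time(200.0)                                  # straight-line 200m
bend = race_time(200.0, radius=36.5, curve_len=100.0)    # outdoor 200m: first half curved
```

As expected, the curved race comes out slower than the hypothetical straight 200m, which is the effect the extension is built to quantify.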

  14. Climate projections of future extreme events accounting for modelling uncertainties and historical simulation biases

    Science.gov (United States)

    Brown, Simon J.; Murphy, James M.; Sexton, David M. H.; Harris, Glen R.

    2014-11-01

    A methodology is presented for providing projections of absolute future values of extreme weather events that takes into account key uncertainties in predicting future climate. This is achieved by characterising both observed and modelled extremes with a single form of non-stationary extreme value (EV) distribution that depends on global mean temperature and includes terms that account for model bias. Such a distribution allows the prediction of future "observed" extremes for any period in the twenty-first century. Uncertainty in modelling future climate, arising from a wide range of atmospheric, oceanic, sulphur-cycle and carbon-cycle processes, is accounted for by using probabilistic distributions of future global temperature and EV parameters. These distributions are generated by Bayesian sampling of emulators, with samples weighted by their likelihood with respect to a set of observational constraints. The emulators are trained on a large perturbed-parameter ensemble of global simulations of the recent past and of the equilibrium response to doubled CO2. Emulated global EV parameters are converted to the relevant regional scale through downscaling relationships derived from a smaller perturbed-parameter regional climate model ensemble. The simultaneous fitting of the EV model to regional model data and observations allows the characterisation of how observed extremes may change in the future, irrespective of biases that may be present in the regional models' simulation of the recent past climate. The clearest impact of a parameter perturbation in this ensemble was found to be that of the depth to which plants can access water. Members with shallow soils tend to be biased hot and dry in summer for the observational period. These biases also appear to affect the potential future response of summer temperatures, with some shallow-soil members showing increases for extremes that diminish with extreme severity.
We apply this methodology for London, using the
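
At the core of such studies is an extreme value fit to block maxima. A far simpler, stationary Gumbel fit by the method of moments (not the paper's non-stationary, temperature-dependent EV model with bias terms) illustrates the basic machinery on synthetic annual maxima; all numbers below are synthetic.

```python
import math, random

def gumbel_fit_moments(maxima):
    """Method-of-moments fit of a stationary Gumbel distribution to block
    maxima: scale = sqrt(6)*s/pi, loc = mean - gamma*scale."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    scale = math.sqrt(6.0 * var) / math.pi
    loc = mean - 0.5772156649 * scale   # Euler-Mascheroni constant
    return loc, scale

def gumbel_return_level(loc, scale, T):
    """Level exceeded on average once every T blocks (T-block return level)."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))

random.seed(1)
# synthetic "annual maxima": max of 365 daily normal draws, for 50 years
maxima = [max(random.gauss(20.0, 5.0) for _ in range(365)) for _ in range(50)]
loc, scale = gumbel_fit_moments(maxima)
```

The paper's framework generalises exactly this fit: `loc` and `scale` become functions of global mean temperature, with additional bias terms, and their uncertainty is sampled rather than point-estimated.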

  15. FORMATION OF CONSUMER ACCOMMODATION MODELS WITH DUE ACCOUNT OF POPULATION INVESTMENT POTENTIAL

    Directory of Open Access Journals (Sweden)

    I. Shaniukevich

    2013-01-01

    Full Text Available The paper considers the theme of typological urban housing diversity, which is relevant for the modern residential real estate market. Quantitative and qualitative characteristics of the existing urban residential accommodation and of new house building are analyzed. The paper presents the author's calculations of the extent of differentiation and of changes in the economic opportunities of the population. Potential consumer accommodation models with specific standardized characteristics are differentiated in terms of the economic prosperity of the population. The paper substantiates proposals on accounting for and implementing typological differences in government housing policy.

  16. A simple bioclogging model that accounts for spatial spreading of bacteria

    Directory of Open Access Journals (Sweden)

    Laurent Demaret

    2009-04-01

    Full Text Available An extension of biobarrier formation and bioclogging models is presented that accounts for the spatial expansion of the bacterial population in the soil. The bacteria move into neighboring sites if locally almost all of the available pore space is occupied and the environmental conditions are such that further growth of the bacterial population is sustained. This is described by a density-dependent, double-degenerate diffusion equation that is coupled with the Darcy equations and a transport-reaction equation for growth-limiting substrates. We conduct computational simulations of the governing differential equation system.
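
A one-dimensional explicit finite-difference sketch shows the flavour of such a density-dependent, double-degenerate diffusion equation: the diffusivity vanishes at zero biomass and blows up as the pore space fills, so the population only spreads where it is nearly saturated. The Darcy coupling and substrate transport of the full model are omitted, and all coefficients are illustrative.

```python
def step_biomass(u, dt=1e-4, dx=0.1, growth=1.0, a=2.0, b=1.0):
    """One explicit step of u_t = (D(u) u_x)_x + g*u*(1-u) with the
    double-degenerate diffusivity D(u) = u**a / (1-u)**b:
    D -> 0 as u -> 0 and D -> infinity as u -> 1 (pore space full).
    `u` is volume fraction of biomass per cell; boundaries held at 0."""
    eps = 1e-9
    D = lambda v: (v ** a) / max((1.0 - v) ** b, eps)
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        # fluxes at the left/right cell faces (arithmetic-mean diffusivity)
        dl = 0.5 * (D(u[i]) + D(u[i - 1])) * (u[i] - u[i - 1])
        dr = 0.5 * (D(u[i + 1]) + D(u[i])) * (u[i + 1] - u[i])
        new[i] = u[i] + dt * ((dr - dl) / dx ** 2 + growth * u[i] * (1.0 - u[i]))
        new[i] = min(max(new[i], 0.0), 1.0 - 1e-6)  # keep within pore fraction
    return new
```

Starting from a single nearly-clogged cell, biomass invades the neighbouring cells while the central peak relaxes, the qualitative behaviour the model is built to capture.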

  17. Improvement of the integration of Soil Moisture Accounting into the NRCS-CN model

    Science.gov (United States)

    Durán-Barroso, Pablo; González, Javier; Valdés, Juan B.

    2016-11-01

    Rainfall-runoff quantification is one of the most important tasks in both engineering and watershed management, as it allows the identification, forecasting and explanation of the watershed response. This non-linear process depends on the watershed antecedent conditions, which are commonly related to the initial soil moisture content. Although several studies have highlighted the relevance of soil moisture measurements for improving flood modelling, the discussion about which approach to use in lumped models is still open in the literature. The integration of these antecedent conditions into the widely used NRCS-CN (Natural Resources Conservation Service - Curve Number) rainfall-runoff model can be handled in two ways: using the Antecedent Precipitation Index (API) concept to modify the model parameter, or alternatively integrating a Soil Moisture Accounting (SMA) procedure into the NRCS-CN with soil moisture as a state variable. In this second option, the state variable has no direct physical representation, which makes it difficult to estimate the initial soil moisture storage level. This paper presents a new formulation that overcomes this issue: the rainfall-runoff model called RSSa. Its suitability is evaluated by comparing the RSSa model with the original NRCS-CN model and alternative SMA procedures in 12 watersheds located in six different countries, with climatic conditions ranging from Mediterranean to semi-arid. The analysis shows that the new model, RSSa, performs better than previously proposed CN-based models. Finally, an assessment is made of the influence of the soil moisture parameter for each watershed and the relative weight of scale effects in model parameterization.
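
The baseline NRCS-CN event equation that the API and SMA variants modify is compact enough to state directly (metric depths in mm). This is only the classical formulation; the paper's RSSa reformulation is not reproduced here.

```python
def scs_runoff(p_mm, cn, lam=0.2):
    """Classical NRCS-CN event runoff depth (mm).

    S  = potential maximum retention, from the curve number (metric form)
    Ia = initial abstraction, conventionally lam*S with lam = 0.2
    Q  = (P - Ia)^2 / (P - Ia + S) once rainfall exceeds Ia, else 0.
    """
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = lam * s               # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

Antecedent-moisture schemes act on this equation by shifting CN (the API route) or by carrying a moisture state that modulates the retention S (the SMA route): a wetter watershed behaves like a higher CN and yields more runoff for the same storm.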

  18. Advances in stream shade modelling. Accounting for canopy overhang and off-centre view

    Science.gov (United States)

    Davies-Colley, R.; Meleason, M. A.; Rutherford, K.

    2005-05-01

    Riparian shade controls the stream thermal regime and light for photosynthesis of stream plants. The quantity difn (diffuse non-interceptance), defined as the proportion of incident lighting received under a sky of uniform brightness, is useful for general specification of stream light exposure, having the virtue that it can be measured directly with common light sensors of appropriate spatial and spectral character. A simple model (implemented in EXCEL-VBA) (Davies-Colley & Rutherford Ecol. Engrg in press) successfully reproduces the broad empirical trend of decreasing difn at the channel centre with increasing ratio of canopy height to stream width. We have now refined this model to account for (a) foliage overhanging the channel (for trees of different canopy form), and (b) off-centre view of the shade (rather than just the channel centre view). We use two extreme geometries bounding real (meandering) streams: the `canyon' model simulates an infinite straight canal, whereas the `cylinder' model simulates a stream meandering so tightly that its geometry collapses into an isolated pool in the forest. The model has been validated using a physical `rooftop' model of the cylinder case, with which it is possible to measure shade with different geometries.

  19. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    Directory of Open Access Journals (Sweden)

    Czoli Christine

    2011-10-01

    Full Text Available Abstract Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the pediatric-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment.

  20. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    Science.gov (United States)

    Rozanov, V. V.; Dinter, T.; Rozanov, A. V.; Wolanin, A.; Bracher, A.; Burrows, J. P.

    2017-06-01

    SCIATRAN is a comprehensive software package which is designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18-40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean-atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which accurately takes into account inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean-atmosphere radiative transfer solver presented by Rozanov et al. [61] we have implemented the simulation of rotational Raman scattering, vibrational Raman scattering, chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how selected radiative transfer problems are solved using the SCIATRAN package. In addition we present selected comparisons of SCIATRAN simulations with published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package along with a detailed User's Guide is made available for scientists and students, who are undertaking their own research typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen: http://www.iup.physik.uni-bremen.de.

  1. Implementation of a cost-accounting model in a biobank: practical implications.

    Science.gov (United States)

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use them more efficiently is making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences of the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real, active biobank, including a comprehensive explanation of how to perform the calculations carried out in the model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. The global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the level of governmental research and development policies.

  2. Hidden zero-temperature bicritical point in the two-dimensional anisotropic Heisenberg model: Monte Carlo simulations and proper finite-size scaling

    OpenAIRE

    Zhou, Chenggang; Landau, D. P.; Schulthess, Thomas C.

    2006-01-01

    By considering the appropriate finite-size effect, we explain the connection between Monte Carlo simulations of the two-dimensional anisotropic Heisenberg antiferromagnet in a field and the early renormalization group calculation for the bicritical point in $2+\epsilon$ dimensions. We found that the long-length-scale physics of the Monte Carlo simulations is indeed captured by the anisotropic nonlinear $\sigma$ model. Our Monte Carlo data and analysis confirm that the bicritical point in two dime...

  3. An extended car-following model accounting for the average headway effect in intelligent transportation system

    Science.gov (United States)

    Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming

    2017-04-01

    In this paper, an extended car-following model is proposed to simulate traffic flow by considering average headway of preceding vehicles group in intelligent transportation systems environment. The stability condition of this model is obtained by using the linear stability analysis. The phase diagram can be divided into three regions classified as the stable, the metastable and the unstable ones. The theoretical result shows that the average headway plays an important role in improving the stabilization of traffic system. The mKdV equation near the critical point is derived to describe the evolution properties of traffic density waves by applying the reductive perturbation method. Furthermore, through the simulation of space-time evolution of the vehicle headway, it is shown that the traffic jam can be suppressed efficiently with taking into account the average headway effect, and the analytical result is consistent with the simulation one.

  4. Accounting for the kinetics in order parameter analysis: lessons from theoretical models and a disordered peptide

    CERN Document Server

    Berezovska, Ganna; Mostarda, Stefano; Rao, Francesco

    2012-01-01

    Molecular simulations as well as single-molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Notwithstanding that this approach is very intuitive, mounting evidence has shown that such a description is not accurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single-molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, the clustering of which into states provides accurate Markov state models of the system under study. Application of the methodology to theoretical models with a noisy orde...

  5. Water accounting for stressed river basins based on water resources management models.

    Science.gov (United States)

    Pedro-Monzonís, María; Solera, Abel; Ferrer, Javier; Andreu, Joaquín; Estrela, Teodoro

    2016-09-15

    Water planning and Integrated Water Resources Management (IWRM) represent the best way to help decision makers identify and choose the most adequate alternatives among the possible ones. The System of Environmental-Economic Accounting for Water (SEEA-W) is presented as a tool for building water balances in a river basin, providing a standard approach to achieve comparability of results between different territories. The target of this paper is to present the building of a tool that enables the combined use of hydrological models and water resources models to fill in the SEEA-W tables. At every step of the modelling chain, we are able to build the asset accounts and the physical water supply and use tables according to the SEEA-W approach, along with an estimation of the water service costs. The case study is the Jucar River Basin District (RBD), located in the eastern part of the Iberian Peninsula in Spain, which, like many other Mediterranean basins, is currently water-stressed. To guide this work we have used the PATRICAL model in combination with the AQUATOOL Decision Support System (DSS). The results indicate that for an average year the total use of water in the district amounts to 15,143 hm³/year, with total renewable water resources of 3909 hm³/year. On the other hand, the water service costs in the Jucar RBD amount to 1634 million € per year at constant 2012 prices. It is noteworthy that 9% of these costs correspond to non-conventional resources, such as desalinated water, reused water and water transferred from other regions.

  6. Construction of reduced order models for the non-linear Navier-Stokes equations using the proper orthogonal decomposition (POD)/Galerkin method.

    Energy Technology Data Exchange (ETDEWEB)

    Fike, Jeffrey A.

    2013-08-01

    The construction of stable reduced order models using Galerkin projection for the Euler or Navier-Stokes equations requires a suitable choice for the inner product. The standard L2 inner product is expected to produce unstable ROMs. For the non-linear Navier-Stokes equations this means the use of an energy inner product. In this report, Galerkin projection for the non-linear Navier-Stokes equations using the L2 inner product is implemented as a first step toward constructing stable ROMs for this set of physics.
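
The POD step underlying such ROMs can be illustrated with the method of snapshots: the modes are linear combinations of snapshots weighted by eigenvectors of the snapshot Gram matrix. The toy sketch below extracts only the leading mode, by power iteration, and deliberately uses the plain L2 inner product, the very choice the report flags as potentially unstable for Navier-Stokes ROMs (an energy inner product would change only the Gram matrix entries). It is a pure-Python stand-in for an SVD, not the report's implementation.

```python
def dominant_pod_mode(snapshots, iters=200):
    """Leading POD mode via the method of snapshots.

    Builds the Gram matrix K_ij = <u_i, u_j> (plain L2 inner product here),
    finds its dominant eigenvector by power iteration, and assembles the
    spatial mode as the corresponding snapshot combination.
    """
    m = len(snapshots)
    K = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j]))
          for j in range(m)] for i in range(m)]
    w = [1.0] * m
    for _ in range(iters):                     # power iteration on K
        w = [sum(K[i][j] * w[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    mode = [sum(w[i] * snapshots[i][k] for i in range(m))
            for k in range(len(snapshots[0]))]
    norm = sum(x * x for x in mode) ** 0.5
    return [x / norm for x in mode]
```

A Galerkin ROM then projects the governing equations onto the span of a few such modes; the stability question raised in the report is about which inner product defines both K and that projection.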

  7. A Model for Urban Environment and Resource Planning Based on Green GDP Accounting System

    Directory of Open Access Journals (Sweden)

    Linyu Xu

    2013-01-01

    Full Text Available The urban environment and resources are currently on a course that is unsustainable in the long run due to excessive human pursuit of economic goals. Thus, it is very important to develop a model to analyse the relationship between urban economic development and environmental resource protection during rapid urbanisation. This paper proposes a model to identify the key factors in urban environment and resource regulation based on a green GDP accounting system, which consists of four parts: economy, society, resources, and environment. In this model, the analytic hierarchy process (AHP) method and a modified Pearl curve model are combined to allow for dynamic evaluation, with a higher green GDP value as the planning target. The model was applied to the environmental and resource planning problem of Wuyishan City, and the results showed that energy use was a key factor influencing urban environment and resource development. Biodiversity and air quality were the most sensitive factors influencing the value of green GDP in the city. According to the analysis, urban environment and resource planning could be improved to promote sustainable development in Wuyishan City.

  8. A model proposal concerning balance scorecard application integrated with resource consumption accounting in enterprise performance management

    Directory of Open Access Journals (Sweden)

    ORHAN ELMACI

    2014-06-01

Full Text Available The present study investigated a Balanced Scorecard (BSC) model integrated with Resource Consumption Accounting (RCA), which helps to evaluate the enterprise as a matrix structure in all its parts. It aims to measure how much the tangible and intangible values (assets) of an enterprise contribute to it; in other words, how effectively and efficiently those values (assets) are used. In short, it aims to measure the sustainable competency of enterprises. Because mathematical and statistical methods alone are insufficient to express the effect of tangible and intangible values (assets) on performance, the RCA method integrated with the BSC model is based on matrix structure and control models. The effects of all the complex factors in the enterprise on performance (productivity and efficiency) were estimated algorithmically with a cause-and-effect diagram. The contribution of matrix structures to reaching the management's functional targets, for enterprises operating in an increasingly competitive market environment, is discussed. In the context of modern management theories, and as a contribution to the BSC approach prominent in today's administrative science for enterprises with matrix organizational structures, a multidimensional performance evaluation model, RCA integrated with BSC, is presented as an instrument for strategic planning and strategic evaluation.

  9. Pointing, looking at, and pressing keys: A diffusion model account of response modality.

    Science.gov (United States)

    Gomez, Pablo; Ratcliff, Roger; Childers, Russ

    2015-12-01

Accumulation-of-evidence models of perceptual decision making have been able to account for data from a wide range of domains at an impressive level of precision. In particular, Ratcliff's (1978) diffusion model has been used across many different 2-choice tasks in which the response is executed via a key press. In this article, we present 2 experiments in which we used a letter-discrimination task to explore 3 central aspects of a 2-choice task: the discriminability of the stimulus, the modality of the response execution (eye movement, key pressing, and pointing on a touchscreen), and the mapping of the response areas for the eye-movement and touchscreen conditions (consistent vs. inconsistent). We fitted the diffusion model to the data from these experiments and examined the behavior of the model's parameters. Fits of the model were consistent with the hypothesis that the same decision mechanism is used in the task across the 3 response methods. Drift rates were affected by the duration of stimulus presentation, while response execution time changed as a function of response modality.
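
A minimal simulation of the diffusion model named above, with generic parameter values rather than the paper's fitted ones:

```python
import numpy as np

# Sketch of Ratcliff-style diffusion: evidence starts at z*a and accumulates
# with drift v and noise sigma until hitting boundary a (one response) or 0
# (the other). Parameter values are illustrative, not fitted.
def simulate_trials(v=0.2, a=1.0, z=0.5, dt=0.001, sigma=1.0, n=2000, seed=1):
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n):
        x, t = z * a, 0.0
        while 0.0 < x < a:
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        choices.append(1 if x >= a else 0)
    return np.array(rts), np.array(choices)

rts, choices = simulate_trials()
# positive drift biases choices toward the upper boundary
print(choices.mean(), rts.mean())
```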

  10. A kinetic model for type I and II IP3R accounting for mode changes.

    Science.gov (United States)

    Siekmann, Ivo; Wagner, Larry E; Yule, David; Crampin, Edmund J; Sneyd, James

    2012-08-22

Based upon an extensive single-channel data set, a Markov model for type I and type II inositol trisphosphate receptors (IP₃R) is developed. The model aims to represent accurately the kinetics of both receptor subtypes as functions of the concentrations of inositol trisphosphate (IP₃), adenosine triphosphate (ATP), and intracellular calcium (Ca²⁺). In particular, the model takes into account that for some combinations of ligands the IP₃R switches between extended periods of inactivity and intervals of bursting activity (mode changes). In a first step, the inactive and active modes are modeled separately. It is found that, within modes, both receptor types are ligand-independent. In a second step, the submodels are connected by transition rates. Ligand-dependent regulation of the channel activity is achieved by modulating these transitions between active and inactive modes. As a result, a compact representation of the IP₃R is obtained that accurately captures stochastic single-channel dynamics, including mode changes, in a model with six states and 10 rate constants, only two of which are ligand-dependent.
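
The mode-switching idea can be illustrated with a two-state toy, not the paper's six-state model: the channel alternates between an inactive and an active mode, and only the mode-transition rates would be ligand-dependent:

```python
import numpy as np

# Toy continuous-time two-mode sketch (illustrative rates): the channel dwells
# exponentially in an inactive mode (rate out q_act) or an active mode
# (rate out q_inact); the fraction of time active follows from the mean dwells.
def simulate_modes(q_act=0.5, q_inact=2.0, t_end=1000.0, seed=3):
    rng = np.random.default_rng(seed)
    t, mode, active_time = 0.0, 0, 0.0   # mode 0 = inactive, 1 = active
    while t < t_end:
        rate = q_act if mode == 0 else q_inact
        dwell = min(rng.exponential(1.0 / rate), t_end - t)
        if mode == 1:
            active_time += dwell
        t += dwell
        mode = 1 - mode
    return active_time / t_end

frac = simulate_modes()
print(round(frac, 3))   # expected near q_act / (q_act + q_inact) = 0.2
```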

  11. An agent-based simulation model to study accountable care organizations.

    Science.gov (United States)

    Liu, Pai; Wu, Shinyi

    2016-03-01

    Creating accountable care organizations (ACOs) has been widely discussed as a strategy to control rapidly rising healthcare costs and improve quality of care; however, building an effective ACO is a complex process involving multiple stakeholders (payers, providers, patients) with their own interests. Also, implementation of an ACO is costly in terms of time and money. Immature design could cause safety hazards. Therefore, there is a need for analytical model-based decision-support tools that can predict the outcomes of different strategies to facilitate ACO design and implementation. In this study, an agent-based simulation model was developed to study ACOs that considers payers, healthcare providers, and patients as agents under the shared saving payment model of care for congestive heart failure (CHF), one of the most expensive causes of sometimes preventable hospitalizations. The agent-based simulation model has identified the critical determinants for the payment model design that can motivate provider behavior changes to achieve maximum financial and quality outcomes of an ACO. The results show nonlinear provider behavior change patterns corresponding to changes in payment model designs. The outcomes vary by providers with different quality or financial priorities, and are most sensitive to the cost-effectiveness of CHF interventions that an ACO implements. This study demonstrates an increasingly important method to construct a healthcare system analytics model that can help inform health policy and healthcare management decisions. The study also points out that the likely success of an ACO is interdependent with payment model design, provider characteristics, and cost and effectiveness of healthcare interventions.
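
The shared-savings payment model referenced above can be sketched as a simple payoff rule; the contract terms and figures below are hypothetical illustrations:

```python
# Hypothetical shared-savings payoff: under an illustrative contract, a
# provider keeps a share of (benchmark - actual) spending only if savings are
# positive and a quality threshold is met. All numbers are made up.
def shared_savings(benchmark, actual, share_rate=0.5, quality=0.9, q_min=0.8):
    savings = benchmark - actual
    if savings <= 0 or quality < q_min:
        return 0.0
    return share_rate * savings

print(shared_savings(1_000_000, 900_000))               # 50000.0
print(shared_savings(1_000_000, 900_000, quality=0.7))  # 0.0 (quality gate)
```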

  12. Saffman-Taylor fingering: why it is not a proper upscaled model of viscous fingering in a (even two-dimensional) random porous medium

    Science.gov (United States)

    Meheust, Y.; Toussaint, R.; Lovoll, G.; Maloy, K. J.

    2015-12-01

P. G. Saffman & G. Taylor (1958) studied the stability of the interface between two immiscible fluids of different densities and viscosities when one displaces the other inside a Hele-Shaw (HS) cell. They showed that with a horizontal cell, and if the displaced fluid is the more viscous, the interface is unstable and leads to viscous fingering, which they modeled nearly completely [1]. The HS geometry was introduced as a geometry imposing the same flow behavior as the Darcy-scale flow in a two-dimensional (2D) porous medium, and therefore as allowing an analogy between the two configurations. This is however not obvious, since capillary forces act at very different scales in the two. Later, researchers performing unstable displacement experiments in HS cells containing random 2D porous media also observed viscous fingering at large viscosity ratios, but with invasion patterns very different from those of Saffman and Taylor (ST) [2-3]. It was however considered that the two processes were both Laplacian growth processes, i.e., processes in which the invasion probability density is proportional to the pressure gradient. Ten years ago, we investigated viscously-unstable drainage in 2D porous media experimentally and measured the growth activity as well as occupation probability maps for the invasion process [4-5]. We concluded that in viscous fingering in 2D porous media, the activity is instead proportional to the square of the pressure gradient magnitude (a so-called DBM model of exponent 2), so that the universality class of the growth/invasion process is different from that of ST viscous fingering. We now strengthen our claim with new results based on the comparison of (i) pressure measurements with the pressure field around a finger as described by the ST analytical model, and (ii) branching angles in the invasion patterns with those expected for DBMs of various exponents. [1] Saffman, P. G. and Taylor, G., Proc. R. Soc. London Ser. A 245, 312-329 (1958).
[2] Lenormand, R
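
The DBM growth rule at issue, invasion probability proportional to |∇p|^η, can be sketched as a site-selection step. The gradient values below are made-up stand-ins, not a solved pressure field:

```python
import numpy as np

# Dielectric-breakdown-model (DBM) growth rule: a candidate interface site is
# invaded with probability proportional to |grad p|**eta. eta = 1 recovers
# Laplacian growth (the Saffman-Taylor class); eta = 2 is the rule the authors
# infer for viscous fingering in 2D porous media.
def pick_growth_site(grad_p, eta, rng):
    w = np.abs(grad_p) ** eta
    return rng.choice(len(grad_p), p=w / w.sum())

rng = np.random.default_rng(7)
grad_p = np.array([0.1, 0.2, 0.4, 0.8])   # illustrative gradient magnitudes
counts = np.bincount(
    [pick_growth_site(grad_p, eta=2.0, rng=rng) for _ in range(20000)],
    minlength=4,
)
print(counts / counts.sum())   # eta = 2 sharply favors the largest gradient
```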

  13. An HST proper-motion study of the optical jet in 3C 264: Direct Evidence for the Internal Shock Model

    Science.gov (United States)

    Meyer, Eileen T.; Georganopoulos, Markos; Sparks, William B.; Perlman, Eric S.; Van Der Marel, Roeland P.; Anderson, Jay; Sohn, S. Tony; Biretta, John A.; Norman, Colin Arthur; Chiaberge, Marco

    2016-04-01

Some of the most energetic phenomena in the Universe involve highly relativistic flows, in which particles are accelerated up to TeV energies. In the case of relativistic jets from Active Galactic Nuclei (AGN), these flows can carry enough energy to significantly influence both galactic and cluster evolution. While the exact physical mechanism that accelerates the radiating particles within the jet is not known, a widely adopted framework is the internal shock model, invoked to explain high-energy, non-thermal radiation from objects as diverse as microquasars, gamma-ray bursts, and relativistic jets in AGN. This model posits an unsteady relativistic flow that gives rise to components in the jet with different speeds. Faster components catch up to and collide with slower ones, leading to internal shocks. Despite its wide popularity as a theoretical framework, however, no occurrence of this mechanism has ever been directly observed. We will present evidence of such a collision in a relativistic jet observed with the Hubble Space Telescope (HST) in the nearby radio galaxy 3C 264 (Meyer et al., 2015, Nature). Using images taken over 20 years, we show that a bright ‘knot’ in the jet is moving at an apparent speed of 7.0 +/- 0.8c and is in the incipient stages of a collision with a slow-moving knot (1.8 +/- 0.5c) just downstream. In the most recent epoch of imaging, we see evidence of brightening of the two knots as they commence their kiloparsec-scale collision. This is the behaviour expected in the internal shock scenario and the first direct evidence that internal shocks are a valid description of particle acceleration in relativistic jets.

  14. Air quality modeling for accountability research: Operational, dynamic, and diagnostic evaluation

    Science.gov (United States)

    Henneman, Lucas R. F.; Liu, Cong; Hu, Yongtao; Mulholland, James A.; Russell, Armistead G.

    2017-10-01

Photochemical grid models play a central role in air quality regulatory frameworks, including in air pollution accountability research, which seeks to demonstrate the extent to which regulations causally impacted emissions, air quality, and public health. There is a need, however, to develop and demonstrate appropriate practices for model application and evaluation in an accountability framework. We employ a combination of traditional and novel evaluation techniques to assess four years (2001-02, 2011-12) of simulated pollutant concentrations across a decade of major emissions reductions using the Community Multiscale Air Quality (CMAQ) model. We have grouped our assessments in three categories: operational evaluation investigates how well CMAQ captures absolute concentrations; dynamic evaluation investigates how well CMAQ captures changes in concentrations across the decade of changing emissions; diagnostic evaluation investigates how CMAQ attributes variability in concentrations and sensitivities to emissions between meteorology and emissions, and how well this attribution compares to empirical statistical models. In this application, CMAQ captures O3 and PM2.5 concentrations and change over the decade in the Eastern United States similarly to past CMAQ applications and in line with model evaluation guidance; however, some PM2.5 species (EC, OC, and sulfate in particular) exhibit high biases in various months. CMAQ-simulated PM2.5 has a high bias in winter months and a low bias in the summer, mainly due to a high bias in OC during the cold months and low biases in OC and sulfate during the summer. Simulated O3 and PM2.5 changes across the decade have normalized mean biases of less than 2.5% and 17%, respectively. Detailed comparisons suggest biased EC emissions, negative wintertime SO₄²⁻ sensitivities to mobile source emissions, and incomplete capture of OC chemistry in the summer and winter. Photochemical grid model-simulated O3 and PM2.5 responses to emissions and
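
The normalized mean bias quoted above is a standard evaluation metric; a minimal sketch with illustrative numbers:

```python
import numpy as np

# Normalized mean bias (NMB): sum of (modeled - observed) over sum of observed.
# The concentration values below are illustrative, not CMAQ output.
def nmb(modeled, observed):
    modeled = np.asarray(modeled, float)
    observed = np.asarray(observed, float)
    return (modeled - observed).sum() / observed.sum()

obs = np.array([30.0, 40.0, 50.0])
mod = np.array([33.0, 41.0, 49.0])
print(f"{nmb(mod, obs):+.1%}")   # +2.5%
```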

  15. Enterprise marketing potential modeling taking into account optimizing and dynamic essence of the potential

    Directory of Open Access Journals (Sweden)

    Potrashkova Lyudmyla Vladimirovna

    2014-12-01

consequent enumeration. At the same time, a constituent part of the model system is a constrained-optimization block. Conclusions and directions for further research. The suggested simulation-and-optimization model system for result-based estimation of a b2b enterprise's marketing potential has the following advantages: it reflects the optimizing essence of potential, takes the dynamics of marketing resources into account, and yields estimates at the hierarchical levels of enterprise potential. The suggested model system is an instrument for estimating and analysing future enterprise sales and marketing abilities; comparing these with production and financial abilities makes it possible to identify bottlenecks in the analysed enterprise's activity and to increase its overall potential. The model system is part of the mathematical support for managing future enterprise abilities. Further investigations in this research area should build models for estimating enterprise marketing potential within an integrated system for estimating the enterprise's integral potential.

  16. A Unifying Modeling of Plant Shoot Gravitropism With an Explicit Account of the Effects of Growth

    Directory of Open Access Journals (Sweden)

    Renaud eBastien

    2014-04-01

Full Text Available Gravitropism, the slow reorientation of plant growth in response to gravity, is a major determinant of the form and posture of land plants. Recently a universal model of shoot gravitropism, the AC model, was presented, in which the dynamics of the tropic movement is determined solely by two competing controls: (i) graviception, which tends to curve the plant towards the vertical, and (ii) proprioception, which tends to keep the stem straight. This model was found valid over a large range of species and over two orders of magnitude in organ size. However, the motor of the movement, elongation, was neglected in the AC model. Taking explicit growth effects into account requires consideration of the material derivative, i.e. the rate of change of curvature bound to expanding and convected organ elements. Here we show that it is possible to rewrite the material equation of curvature in a compact simplified form that expresses the curvature variation directly as a function of the median elongation and of the distribution of differential growth. Through this extended model, called the ACE model, two main destabilizing effects of growth on the tropic movement are identified: (i) passive orientation drift, which occurs when a curved element elongates without differential growth, and (ii) fixed curvature, which occurs when an element leaves the elongation zone and is no longer able to change its curvature actively. By comparing the AC and ACE models to experiments, these two effects were found to be negligible, revealing a probable selection for rapid convergence to the steady-state shape during the tropic movement so as to escape the destabilizing effects of growth, involving in particular selection on proprioceptive sensitivity. The simplified AC model can then be used to analyse gravitropism and posture control in actively elongating plant organs without significant loss of information.
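
The AC model's competing controls can be integrated numerically in a few lines. The dynamics used here, dC/dt = -β sin(A) - γ C with curvature C = dA/ds along the organ, follow the abstract's description; parameter values and discretization are illustrative choices, not the paper's:

```python
import numpy as np

# Sketch of AC-model dynamics for a shoot clamped at 45 degrees from vertical:
# graviception (-beta*sin A) curves elements toward the vertical, while
# proprioception (-gamma*C) straightens them. Illustrative parameters.
beta, gamma = 1.0, 2.0
n, L, dt, t_end = 50, 1.0, 0.01, 30.0
ds = L / n
C = np.zeros(n)                 # curvature of each element
A0 = np.pi / 4                  # fixed base tilt from the vertical

t = 0.0
while t < t_end:
    A = A0 + np.cumsum(C) * ds  # angle profile: base tilt + integrated curvature
    C += dt * (-beta * np.sin(A) - gamma * C)
    t += dt

tip_angle = A0 + C.sum() * ds
print(round(tip_angle, 3))      # tip bends back well below the 45-degree tilt
```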

  17. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Science.gov (United States)

    Mastin, Larry G.; Van Eaton, Alexa; Durant, A.J.

    2016-01-01

Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16–17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m−3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ∼ 2.3 and 2.7φ (0.20–0.15 mm), despite large variations in erupted mass (0.25–50 Tg), plume height (8.5–25 km), mass fraction of fine ( operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.

  18. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Science.gov (United States)

    Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.

    2016-07-01

Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m-3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ~ 2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine ( water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
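
The aggregate-size parameterization described above (a Gaussian in φ units, with diameter in mm equal to 2^-φ) can be sampled directly. Here μagg is taken from the quoted best-fit range, while σagg is an illustrative choice:

```python
import numpy as np

# Sample an aggregate size distribution that is Gaussian in phi units with
# median mu_agg (within the quoted ~2.3-2.7 phi range) and an assumed spread
# sigma_agg, then convert to diameters in mm via d = 2**-phi.
rng = np.random.default_rng(11)
mu_agg, sigma_agg = 2.5, 0.2          # phi units; sigma_agg is illustrative
phi = rng.normal(mu_agg, sigma_agg, size=100_000)
d_mm = 2.0 ** (-phi)

print(round(np.median(d_mm), 3))      # near 2**-2.5 ~ 0.177 mm
```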

  19. Material Protection, Accounting, and Control Technologies (MPACT): Modeling and Simulation Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dunn, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Durbin, Samual [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Durkee, Joe W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); England, Jeff [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, Robert [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Ketusky, Edward [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Li, Shelly [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lindgren, Eric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meier, David [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Miller, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Osburn, Laura Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pereira, Candido [Argonne National Lab. (ANL), Argonne, IL (United States); Rauch, Eric Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Scaglione, John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Scherer, Carolynn P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sprinkle, James K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yoo, Tae-Sic [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-05

The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy's (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal. This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility, a distributed test bed that connects the individual tools being developed at national laboratories and university research establishments, is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling. To aid in framing its long-term goal, during FY16 a modeling and simulation roadmap is being developed for three major areas of investigation: (1) radiation transport and sensors, (2) process and chemical models, and (3) shock physics and assessments. For each area, current modeling approaches are described, and gaps and needs are identified.

  20. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    Science.gov (United States)

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

Collecting and analyzing high-frequency emission measurements has become common during the past decade, as significantly more information about formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application using data collected from a diesel bus under real-world driving conditions.
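
The core idea, a transport delay that varies smoothly with exhaust flow, can be sketched with a logistic transition between two delay regimes. All coefficients below are illustrative, not estimates from the paper:

```python
import numpy as np

# Smooth-transition sketch: delay moves from d_low (long, at low exhaust flow)
# to d_high (short, at high flow) via a logistic function of flow. The pair
# (gamma, c) plays the role of the least-squares-estimated coefficients.
def transport_delay(flow, d_low=4.0, d_high=1.0, gamma=2.0, c=5.0):
    g = 1.0 / (1.0 + np.exp(-gamma * (flow - c)))   # transition weight in [0, 1]
    return d_low + (d_high - d_low) * g

flows = np.array([1.0, 5.0, 12.0])
print(transport_delay(flows).round(2))   # delays shrink as flow rate rises
```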

  1. Can the forward-shock model account for the multiwavelength emission of GRB afterglow 090510 ?

    CERN Document Server

    Neamus, Ano

    2010-01-01

    GRB 090510 is the first burst whose afterglow emission above 100 MeV was measured by Fermi over two decades in time. Owing to its power-law temporal decay and power-law spectrum, it seems likely that the high-energy emission is from the forward-shock energizing the ambient medium (the standard blast-wave model for GRB afterglows), the GeV flux and its decay rate being consistent with that model's expectations. However, the synchrotron emission from a collimated outflow (the standard jet model) has difficulties in accounting for the lower-energy afterglow emission, where a simultaneous break occurs in the optical and X-ray light-curves at 2 ks, but with the optical flux decay (before and after the break) being much slower than in the X-rays (at same time). The measured X-ray and GeV fluxes are incompatible with the higher-energy afterglow emission being from same spectral component as the lower-energy afterglow emission, which suggests a synchrotron self-Compton model for this afterglow. Cessation of energy in...

  2. Integrated Approach Model of Risk, Control and Auditing of Accounting Information Systems

    Directory of Open Access Journals (Sweden)

    Claudiu BRANDAS

    2013-01-01

Full Text Available The use of IT in financial and accounting processes is growing fast, and this leads to increased research and professional concern about the risks, control and audit of Accounting Information Systems (AIS). In this context, the risk-and-control approach to AIS is a central component of processes for IT audit, financial audit and IT governance. Recent studies in the literature on the concepts of risk, control and auditing of AIS outline two approaches: (1) a professional approach comprising ISA, COBIT, IT Risk, COSO and SOX, and (2) a research-oriented approach emphasizing continuous auditing and fraud detection using information technology. Starting from the limits of existing approaches, our study aims to develop and test an Integrated Approach Model of Risk, Control and Auditing of AIS on three business-process cycles: the purchases cycle, the sales cycle and the cash cycle, in order to improve the efficiency of IT governance, as well as to ensure the integrity, reality, accuracy and availability of financial statements.

  3. Statistical approaches to account for missing values in accelerometer data: Applications to modeling physical activity.

    Science.gov (United States)

    Xu, Selene Yue; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki

    2016-07-10

Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance-weighting algorithms that account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve analysis of accelerometer data.
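
The two fixes recommended above, subject-level imputation and variance weighting by observed wear time, can be sketched with made-up data:

```python
import numpy as np

# Sketch with hypothetical hourly counts: (1) fill a subject's missing values
# with that subject's own mean (subject-level imputation), and (2) return the
# fraction of observed wear time for use as a regression weight.
def impute_subject(values):
    v = np.asarray(values, float)
    filled = np.where(np.isnan(v), np.nanmean(v), v)
    weight = np.mean(~np.isnan(v))      # fraction of hours actually observed
    return filled, weight

day = [120.0, np.nan, 90.0, np.nan, 150.0]
filled, weight = impute_subject(day)
print(filled, weight)   # NaNs replaced by the subject mean 120.0; weight 0.6
```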

  4. Identifying Opportunities to Reduce Uncertainty in a National-Scale Forest Carbon Accounting Model

    Science.gov (United States)

    Shaw, C. H.; Metsaranta, J. M.; Kurz, W.; Hilger, A.

    2013-12-01

Assessing the quality of forest carbon budget models used for national and international reporting of greenhouse gas emissions is essential, but model evaluations are rarely conducted, mainly because of a lack of appropriate, independent ground plot data sets. Ecosystem carbon stocks for all major pools, estimated from data collected for 696 ground plots from Canada's new National Forest Inventory (NFI), were used to assess plot-level carbon stocks predicted by the Carbon Budget Model of the Canadian Forest Sector 3 (CBM-CFS3), a model compliant with the most complex (Tier 3) approach in the reporting guidelines of the Intergovernmental Panel on Climate Change. The model is the core of Canada's National Forest Carbon Monitoring, Accounting, and Reporting System. At the landscape scale, a major portion of the total uncertainty in both C stock and flux estimation is associated with biomass productivity, turnover, and soil and dead organic matter modelling parameters, which can best be further evaluated using plot-level data. Because the data collected for the ground plots were comprehensive, we were able to compare carbon stock estimates for 13 pools also estimated by the CBM-CFS3 (all modelled pools except coarse and fine root biomass) using the classical comparison statistics of mean difference and correlation. Using a Monte Carlo approach we were able to determine the contribution of aboveground biomass, deadwood and soil pool error to modeled ecosystem total error, as well as the contribution of the pools that are summed to estimate aboveground biomass, deadwood and soil to the error of these three subtotal pools. We were also able to assess potential sources of error propagation in the computational sequence of the CBM-CFS3. Analysis of the data grouped by the 16 dominant tree species allowed us to isolate the leading species for which further research would lead to the greatest reductions in uncertainty in modeling carbon stocks with the CBM-CFS3. This analysis
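
The Monte Carlo error-propagation idea can be sketched with made-up pool means and uncertainties (the paper's pools and values are not reproduced here):

```python
import numpy as np

# Monte Carlo sketch: perturb sub-pool carbon stocks, sum them into a subtotal
# pool (here, aboveground biomass), and compute each sub-pool's share of the
# subtotal variance. Pool names, means, and uncertainties are all made up.
rng = np.random.default_rng(5)
means = {"stem": 60.0, "branches": 15.0, "foliage": 5.0}   # illustrative t C/ha
sds = {"stem": 6.0, "branches": 3.0, "foliage": 1.0}

draws = {k: rng.normal(means[k], sds[k], 10_000) for k in means}
agb = sum(draws.values())                  # aboveground biomass subtotal
# with independent pools, variance shares are sd_i**2 / sum(sd_j**2)
share = {k: sds[k] ** 2 / sum(s ** 2 for s in sds.values()) for k in sds}

print(round(agb.std(), 2), share)          # stem dominates the subtotal error
```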

  5. ACCOUNTING HARMONIZATION AND HISTORICAL COST ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Valentin Gabriel CRISTEA

    2017-05-01

Full Text Available There is great interest in accounting harmonization and historical cost accounting, and in what each has to offer. In this article, different valuation models are discussed. Although one notices a movement from historical cost accounting to fair value accounting, each has its advantages.

  6. REGRESSION MODEL FOR RISK REPORTING IN FINANCIAL STATEMENTS OF ACCOUNTING SERVICES ENTITIES

    Directory of Open Access Journals (Sweden)

    Mirela NICHITA

    2015-06-01

Full Text Available The purpose of financial reports is to provide useful information to users; the utility of information is defined through its qualitative characteristics (fundamental and enhancing). The financial crisis emphasized the limits of financial reporting, which was unable to warn investors about the risks they were facing. Due to current changes in the business environment, managers have been highly motivated to rethink and improve the risk governance philosophy, processes and methodologies. The lack of quality, timely data and of adequate systems to capture, report and measure the right information across the organization is a fundamental challenge for implementing and sustaining all aspects of effective risk management. Since the 1980s, investors have been more interested in narratives (the notes to the financial statements) than in the primary statements (financial position and performance). The research applies a regression model to the assessment of risk reporting by professional (accounting and taxation) services providers for major companies from Romania during the period 2009 – 2013.

  7. A Thermodamage Strength Theoretical Model of Ceramic Materials Taking into Account the Effect of Residual Stress

    Directory of Open Access Journals (Sweden)

    Weiguo Li

    2012-01-01

    Full Text Available A thermodamage strength theoretical model taking into account the effect of residual stress was established and applied to each temperature phase, based on a study of the effects of various physical mechanisms on the fracture strength of ultrahigh-temperature ceramics. The effects of SiC particle size, crack size, and SiC particle volume fraction on strength at different temperatures were studied in detail. This study showed that when the flaw size is not large, a bigger SiC particle size results in a greater reduction of strength by the tensile residual stress in the matrix grains, a prediction that coincides with experimental results; the residual stress and the combined effect of particle size and crack size play important roles in controlling material strength.

  9. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

    Full Text Available Imperfect detection (i.e., failure to detect a species when the species is present is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability in the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2 of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on the time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large scale factors (elevation, stream order and precipitation affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width. Species detectability was most influenced by depth and to lesser extent by stream width and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. These species presented a widespread distribution with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provide a practical framework to incorporate variations in the detection rate in fish distribution models.
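A minimal sketch of the time-to-first-detection likelihood described above, assuming a constant detection hazard (exponential waiting time) and ignoring the covariates and Bayesian machinery of the actual study; `psi` is the occupancy probability and `lam` the detection rate, both hypothetical names:

```python
import math

def log_likelihood(psi, lam, first_detections, survey_time):
    """Joint occupancy / time-to-first-detection log-likelihood (sketch).
    `first_detections` holds the time of first detection per site, or None
    if the species was never detected within `survey_time`."""
    ll = 0.0
    for t in first_detections:
        if t is None:
            # site unoccupied, or occupied but missed for the whole survey
            ll += math.log((1.0 - psi) + psi * math.exp(-lam * survey_time))
        else:
            # occupied and first detected at time t (exponential density)
            ll += math.log(psi * lam * math.exp(-lam * t))
    return ll

ll = log_likelihood(0.8, 0.5, [0.4, 1.2, None], 5.0)
```

A single sampling occasion per site suffices because the waiting time itself, not repeat visits, carries the detectability information.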

  10. Proper alignment of the microscope.

    Science.gov (United States)

    Rottenfusser, Rudi

    2013-01-01

    The light microscope is merely the first element of an imaging system in a research facility. Such a system may include high-speed and/or high-resolution image acquisition capabilities, confocal technologies, and super-resolution methods of various types. Yet more than ever, the proverb "garbage in, garbage out" remains a fact. Image manipulations may be used to conceal a suboptimal microscope setup, but an artifact-free image can only be obtained when the microscope is optimally aligned, both mechanically and optically. Something else is often overlooked in the quest to get the best image out of the microscope: proper sample preparation! The microscope optics can only do their job when their design criteria are matched to the specimen, or vice versa. The specimen itself, the mounting medium, the cover slip, and the type of immersion medium (if applicable) are all part of the total optical makeup. To get the best results out of a microscope, understanding the functions of all of its variable components is important. Only then does one know how to optimize these components for the intended application. Different approaches might be chosen to discuss all of the microscope's components. We decided to follow the light path, which starts with the light source and ends at the camera or the eyepieces. To add more transparency to this sequence, the section up to the microscope stage was called the "Illuminating Section", followed by the "Imaging Section", which starts with the microscope objective. After understanding the various components, we can start "working with the microscope." To get the best resolution and contrast from the microscope, the practice of "Koehler Illumination" should be understood and followed by every serious microscopist. Step-by-step instructions as well as illustrations of the beam path in an upright and inverted microscope are included in this chapter. A few practical considerations are listed in Section 3. Copyright © 2013 Elsevier Inc. All rights

  11. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based va

  13. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    Science.gov (United States)

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both.
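The core resampling idea, drawing cohort-wide smoking categories from frequencies observed in a case-control subsample and repeating the projection many times so that subsequent model fits can be checked for stability, might be sketched like this; the category labels and frequencies are invented for illustration, not WISMUT data:

```python
import random

def project_smoking(cohort_size, smoking_freq, n_projections=200, seed=42):
    """Monte Carlo projection sketch: impute a smoking category for every
    cohort member by sampling from subsample frequencies, repeated
    n_projections times (the study used 200 independent projections)."""
    rng = random.Random(seed)
    cats = list(smoking_freq)
    weights = [smoking_freq[c] for c in cats]
    return [rng.choices(cats, weights=weights, k=cohort_size)
            for _ in range(n_projections)]

projs = project_smoking(1000, {"never": 0.2, "former": 0.3, "current": 0.5})
```

Each projection would then feed the carcinogenesis model; narrow spread of the fitted parameters across projections indicates the imputation does not drive the results.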

  14. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    Science.gov (United States)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench, and most of the seismic gap was left unbroken, leaving the possibility of a future large earthquake in the region.

  15. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    Science.gov (United States)

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-03-21

    Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range, color, filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance, Gaussian, signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased by 10-15 times the estimated internal noise level for a 3-fold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model accounts for both the appearance and discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, the results have implications for comparing perceptual and neural response functions.
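The link between a perceptual scale and paired-comparison discrimination under the equal-variance Gaussian signal detection model can be illustrated with a small sketch: each stimulus produces a unit-variance internal response centred on its scale value, so the probability of judging stimulus B stronger than A is Phi((psi_B - psi_A) / sqrt(2)). The function names are illustrative:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_choose_b(scale_a, scale_b):
    """Equal-variance Gaussian SDT prediction for a paired comparison:
    the decision variable (difference of two unit-variance responses)
    has variance 2, hence the sqrt(2) in the denominator."""
    return phi((scale_b - scale_a) / math.sqrt(2.0))
```

A scale estimated by MLDS in these noise units therefore predicts discrimination probabilities with no free parameters, which is the comparison the study performs.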

  16. A Modified Model of Ecological Footprint Accounting and Its Application to Cropland in Jiangsu,China

    Institute of Scientific and Technical Information of China (English)

    LIU Qin-Pu; LIN Zhen-Shan; FENG Nian-Hua; LIU Yong-Mei

    2008-01-01

    Based on the theory of emergy analysis, a modified model of ecological footprint accounting, termed the emergetic ecological footprint (EMEF) in contrast to the conventional ecological footprint (EF) model, is formulated and applied to a case study of Jiangsu cropland, China. Comparisons between the EF and the EMEF with respect to grain, cotton, and food oil are outlined. Per capita EF and EMEF of cropland are also presented to depict the level of resource consumption by comparison with the biocapacity (BC) or emergetic biocapacity (EMBC, a new BC calculation by emergy analysis) of the same area. Meanwhile, the ecological sustainability index (ESI), a new concept initiated by the authors, is established in the modified model to indicate and compare the sustainability of cropland use at different levels and between different regions. The results from the conventional EF show that the per capita EF of the cropland has exceeded its per capita BC in Jiangsu since 1986. In contrast, based on the EMBC, the per capita EMEF exceeded the per capita EMBC five years earlier. The ESIs of Jiangsu cropland use were between 0.7 and 0.4 by the conventional method, while the numbers were between 0.7 and 0.3 by the modified one. The fact that the results of the two methods are similar shows that the modified model is reasonable and feasible, although some principles of the EF and EMEF are quite different. Also, according to the realities of Jiangsu's cropland use, the results from the modified model are more acceptable.

  17. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems, the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates land use choices and species responses to bioclimatic variables simultaneously. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forested trees across France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under the current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and direction of these differences depended on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the actual and future modelled probability of presence. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  18. Modelling of a mecanum wheel taking into account the geometry of road rollers

    Science.gov (United States)

    Hryniewicz, P.; Gwiazda, A.; Banaś, W.; Sękala, A.; Foit, K.

    2017-08-01

    During process planning in a company, one of the basic factors associated with production costs is the operation time for particular technological jobs. The operation time consists of time units associated with the machining of a workpiece as well as the time associated with loading, unloading and transport operations of this workpiece between machining stands. Full automation of manufacturing in industrial companies aims at a maximal reduction in machine downtime, thereby simultaneously decreasing fixed costs. The new construction of wheeled vehicles using Mecanum wheels reduces the transport time of materials and workpieces between machining stands. These vehicles have the ability to move simultaneously in two axes and can thus be positioned more rapidly relative to the machining stand. The Mecanum wheel construction implies placing free rollers around the wheel, mounted at an angle of 45°, which allow the movement of the vehicle not only along its axis but also perpendicular to it. Improper selection of the rollers can cause unwanted vertical movement of the vehicle, which may cause difficulty in positioning the vehicle in relation to the machining stand and the need for stabilisation. Hence the proper design of the free rollers is essential in designing the whole Mecanum wheel construction; it allows avoiding the disadvantageous and unwanted vertical vibrations of a vehicle with these wheels. In the article, the process of modelling the free rollers in order to obtain the desired unchanging, horizontal trajectory of the vehicle is presented. This shape depends on the desired diameter of the whole Mecanum wheel, together with the road rollers, and the width of the drive wheel. Another factor related to the curvature of the trajectory shape is the length of the road roller and the decrease of its diameter depending on the position with respect to its centre. The additional factor, limiting construction of

  19. Carbon accounting of forest bioenergy: from model calibrations to policy options (Invited)

    Science.gov (United States)

    Lamers, P.

    2013-12-01

    knowledge in the field by comparing different state-of-the-art temporal forest carbon modeling efforts, and discusses whether, or to what extent, a deterministic 'carbon debt' accounting is possible and appropriate. It concludes on the possible scientific, and eventually political, choices in temporal carbon accounting for regulatory frameworks, including alternative options to address unintentional carbon losses within forest ecosystems/bioenergy systems.

  20. Development of accounting quality management system

    Directory of Open Access Journals (Sweden)

    Plakhtii T.F.

    2017-08-01

    Full Text Available Accounting organization, as one of the types of practical activity at an enterprise, involves organizing the implementation of various accounting procedures to meet the needs of the users of accounting information. Therefore, to improve its quality, an owner should use tools, methods and procedures that improve the quality of implementation of accounting methods and technology. The necessity of using a quality management system to improve accounting organization at the enterprise is substantiated. A system of accounting quality management is developed and grounded in the context of ISO 9001:2015, comprising such processes as the accounting system itself, leadership, planning, and evaluation. On the basis of the specification and justification of a set of universal requirements (content requirements and formal requirements), a model of the environment of demands for a high-quality computerized accounting system is developed that improves the process of preparing high-quality financial statements. In order to improve the system of accounting quality management and to justify the main objectives of its further development, namely the elimination of unnecessary characteristics of accounting information, the differences between the current level of accounting information quality and its perfect level are considered, as is the meeting of new needs of users of accounting information that have not yet been satisfied. The ways of demonstrating leadership in the system of accounting quality management at the enterprise are substantiated. The relationship between the current level of accounting information quality and its perfect level is considered. The possible types of measures aimed at improving the system of accounting quality management are identified. The paper grounds the need to include the principle of proper management in the current set of accounting

  1. Modelling the range expansion of the Tiger mosquito in a Mediterranean Island accounting for imperfect detection.

    Science.gov (United States)

    Tavecchia, Giacomo; Miranda, Miguel-Angel; Borrás, David; Bengoa, Mikel; Barceló, Carlos; Paredes-Esquivel, Claudia; Schwarz, Carl

    2017-01-01

    Aedes albopictus (Diptera; Culicidae) is a highly invasive mosquito species and a competent vector of several arboviral diseases that has spread rapidly throughout the world. The prevalence and dispersal patterns of the mosquito are of central importance for effective control of the species. We used site-occupancy models accounting for false negative detections to estimate the prevalence, the turnover, the movement pattern and the growth rate in the number of sites occupied by the mosquito in 17 localities throughout Mallorca Island. Site-occupancy probability increased from 0.35 in 2012, the year of the first reported observation of the species, to 0.89 in 2015. Despite a steady increase in mosquito presence, the extinction probability was generally high, indicating a high turnover in the occupied sites. We considered two site-dependent covariates, namely the distance from the point of first observation and the estimated yearly occupancy rate in the neighborhood, as predicted by diffusion models. Results suggested that the mosquito distribution during the first year was consistent with the predictions of simple diffusion models, but not in subsequent years, when it resembled the patterns expected from leapfrog dispersal events. Assuming a single initial colonization event, the spread of Ae. albopictus in Mallorca followed two distinct phases, an early one consistent with diffusion movements and a second consistent with long-distance, 'leapfrog', movements. The colonization of the island was fast, with ~90% of the sites estimated to be occupied 3 years after the colonization. The fast spread was likely to have occurred through vectors related to human mobility such as cars or other vehicles. Surveillance and management actions near the introduction point would only be effective during the early steps of the colonization.

  2. Long-term fiscal implications of funding assisted reproduction: a generational accounting model for Spain

    Directory of Open Access Journals (Sweden)

    R. Matorras

    2015-12-01

    Full Text Available The aim of this study was to assess the lifetime economic benefits of assisted reproduction in Spain by calculating the return on this investment. We developed a generational accounting model that simulates the flow of taxes paid by the individual, minus direct government transfers received over the individual’s lifetime. The difference between discounted transfers and taxes minus the cost of either IVF or artificial insemination (AI equals the net fiscal contribution (NFC of a child conceived through assisted reproduction. We conducted sensitivity analysis to test the robustness of our results under various macroeconomic scenarios. A child conceived through assisted reproduction would contribute €370,482 in net taxes to the Spanish Treasury and would receive €275,972 in transfers over their lifetime. Taking into account that only 75% of assisted reproduction pregnancies are successful, the NFC was estimated at €66,709 for IVF-conceived children and €67,253 for AI-conceived children. The return on investment for each euro invested was €15.98 for IVF and €18.53 for AI. The long-term NFC of a child conceived through assisted reproduction could range from €466,379 to €-9,529 (IVF and from €466,923 to €-8,985 (AI. The return on investment would vary between €-2.28 and €111.75 (IVF, and €-2.48 and €128.66 (AI for each euro invested. The break-even point at which the financial position would begin to favour the Spanish Treasury ranges between 29 and 41 years of age. Investment in assisted reproductive techniques may lead to positive discounted future fiscal revenue, notwithstanding its beneficial psychological effect for infertile couples in Spain.
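The generational-accounting arithmetic described above reduces to discounted taxes minus transfers, scaled by the treatment success rate and net of the treatment cost. The sketch below uses invented round numbers, not the study's figures:

```python
def net_fiscal_contribution(pv_taxes, pv_transfers, success_rate, treatment_cost):
    """Generational-accounting sketch: expected present value of lifetime
    taxes minus direct transfers for a child conceived through assisted
    reproduction, weighted by the treatment success rate and net of the
    treatment cost. All inputs are illustrative present values."""
    return success_rate * (pv_taxes - pv_transfers) - treatment_cost

def return_per_euro(nfc, treatment_cost):
    """Return on investment for each euro of treatment cost."""
    return nfc / treatment_cost

# Hypothetical: 100k taxes, 60k transfers, 75% success, 5k treatment cost.
nfc = net_fiscal_contribution(100_000, 60_000, 0.75, 5_000)
```

The break-even age reported in the study is simply the age at which the cumulative discounted taxes first exceed the cumulative discounted transfers plus treatment cost.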

  3. A performance weighting procedure for GCMs based on explicit probabilistic models and accounting for observation uncertainty

    Science.gov (United States)

    Renard, Benjamin; Vidal, Jean-Philippe

    2016-04-01

    In recent years, the climate modeling community has put a lot of effort into releasing the outputs of multimodel experiments for use by the wider scientific community. In such experiments, several structurally distinct GCMs are run using the same observed forcings (for the historical period) or the same projected forcings (for the future period). In addition, several members are produced for a single given model structure, by running each GCM with slightly different initial conditions. This multiplicity of GCM outputs offers many opportunities in terms of uncertainty quantification or GCM comparisons. In this presentation, we propose a new procedure to weight GCMs according to their ability to reproduce the observed climate. Such weights can be used to combine the outputs of several models in a way that rewards good-performing models and discards poorly-performing ones. The proposed procedure has the following main properties: 1. It is based on explicit probabilistic models describing the time series produced by the GCMs and the corresponding historical observations, 2. It can use several members whenever available, 3. It accounts for the uncertainty in observations, 4. It assigns a weight to each GCM (all weights summing up to one), 5. It can also assign a weight to the "H0 hypothesis" that all GCMs in the multimodel ensemble are not compatible with observations. The application of the weighting procedure is illustrated with several case studies including synthetic experiments, simple cases where the target GCM output is a simple univariate variable and more realistic cases where the target GCM output is a multivariate and/or a spatial variable. These case studies illustrate the generality of the procedure which can be applied in a wide range of situations, as long as the analyst is prepared to make an explicit probabilistic assumption on the target variable. Moreover, these case studies highlight several interesting properties of the weighting procedure. In
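A toy version of such a likelihood-based weighting, with a Gaussian likelihood for each GCM and an extra weight for the "H0" alternative that no GCM is compatible with the observations, can be sketched as follows (all names and values illustrative, not the authors' actual probabilistic models):

```python
import math

def gcm_weights(obs, gcm_means, sd, h0_likelihood):
    """Weight each GCM by the Gaussian likelihood of the observation under
    that GCM; append an 'H0' alternative with its own likelihood; normalize
    so all weights sum to one (sketch)."""
    def gauss(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    liks = [gauss(obs, mu, sd) for mu in gcm_means] + [h0_likelihood]
    total = sum(liks)
    return [l / total for l in liks]

# Hypothetical: observed mean temperature 15.0, three GCMs, obs sd 0.5.
w = gcm_weights(15.0, [14.8, 15.5, 19.0], 0.5, 1e-4)
```

A GCM far from the observations (the third one here) ends up with less weight than the H0 alternative, which is the mechanism by which the procedure discards incompatible models.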

  4. 7 CFR 29.112 - Proper light.

    Science.gov (United States)

    2010-01-01

    Title 7 (Agriculture), 2010 ed., Inspection Regulations, Inspectors, Samplers, and Weighers, § 29.112 Proper light: Tobacco shall not be inspected or sampled for the purposes of the Act except when displayed in proper light for correct...

  5. Historical Account to the State of the Art in Debris Flow Modeling

    Science.gov (United States)

    Pudasaini, Shiva P.

    2013-04-01

    In this contribution, I present a historical account of debris flow modelling leading to the state of the art in simulations and applications. A generalized two-phase model is presented that unifies existing avalanche and debris flow theories. The new model (Pudasaini, 2012) covers both the single-phase and two-phase scenarios and includes many essential and observable physical phenomena. In this model, the solid-phase stress is closed by Mohr-Coulomb plasticity, while the fluid stress is modeled as a non-Newtonian viscous stress that is enhanced by the solid-volume-fraction gradient. A generalized interfacial momentum transfer includes viscous drag, buoyancy and virtual mass forces, and a new generalized drag force is introduced to cover both solid-like and fluid-like drags. Strong couplings between solid and fluid momentum transfer are observed. The two-phase model is further extended to describe the dynamics of rock-ice avalanches with new mechanical models. This model explains dynamic strength weakening and includes internal fluidization, basal lubrication, and exchanges of mass and momentum. The advantages of the two-phase model over classical (effectively single-phase) models are discussed. Advection and diffusion of the fluid through the solid are associated with non-linear fluxes. Several exact solutions are constructed, including the non-linear advection-diffusion of fluid, kinematic waves of debris flow front and deposition, phase-wave speeds, and velocity distribution through the flow depth and through the channel length. The new model is employed to study two-phase subaerial and submarine debris flows, the tsunami generated by the debris impact at lakes/oceans, and rock-ice avalanches. Simulation results show that buoyancy enhances flow mobility. The virtual mass force alters flow dynamics by increasing the kinetic energy of the fluid. Newtonian viscous stress substantially reduces flow deformation, whereas non-Newtonian viscous stress may change the

  6. On the proper motion of auroral arcs

    Energy Technology Data Exchange (ETDEWEB)

    Haerendel, G.; Raaf, B.; Rieger, E. (Max-Planck-Institut fuer Extraterrestrische Physik, Garching (Germany)); Buchert, S. (EISCAT Scientific Association, Kiruna (Sweden)); Hoz, C. la (Univ. of Tromso (Norway))

    1993-04-01

    The authors report on a series of measurements of the proper motion of auroral arcs, made using the EISCAT incoherent scatter radar. Radar measurements are correlated with auroral imaging from the ground to observe the arcs and sense their motion. The authors look at one of two broad classes of auroral arcs, namely the slow (approximately 100 m/s) class, which are observed to move either poleward or equatorward. The other class is typically much faster, observed to move poleward, and represents the class of events most studied in the past. They fit their observations to a previous model which provides a potential energy source for these events. The observations are consistent with the model, though no clear explanation for the actual cause of the motion can be reached from these limited measurements.

  7. Survey of stellar associations using proper motions

    Directory of Open Access Journals (Sweden)

    C. Abad

    2001-01-01

    Full Text Available Stellar proper motions can be represented as great circles on the celestial sphere. This point of view creates a geometry on the sphere in which the parallelism of the motions can be studied in a simple way. Computing intersections between circles can detect a convergence point of the motions, which indicates parallel spatial motion. The model can be applied to open star clusters, identifying convergence points as apexes, in order to obtain membership probabilities, or, more generally, to stars of our galaxy to detect large stellar structures and to infer some details about their kinematics. We present here a short description of the model and some examples using stars from the Hipparcos catalogue.

  8. Modelling overbank flow on farmed catchments taking into account spatial hydrological discontinuities

    Science.gov (United States)

    Moussa, R.; Tilma, M.; Chahinian, N.; Huttel, O.

    2003-04-01

    In agricultural catchments, hydrological processes are highly variable in space because of human-made hydrological discontinuities such as ditch networks, field boundaries and terraces. The ditch network accelerates runoff by concentrating flows, and either drains the water table or replenishes it through re-infiltration of runoff water. During extreme flood events, overbank flow occurs and surface flow paths are modified. The purpose of this study is to assess the influence of overbank flow on hydrograph shape during flood events. To this end, MHYDAS, a physically based distributed hydrological model, was developed specifically to take these hydrological discontinuities into account. The model treats the catchment as a series of interconnected hydrological units. Runoff from each unit is estimated using a deterministic model based on a ponding-time algorithm and then routed through the ditch network using the diffusive wave equation. Overbank flow is modelled by modifying the links between the hydrological units and the ditch network. The model was applied to simulate the main hydrological processes in a small headwater farmed Mediterranean catchment in southern France. The basic hydrometeorological equipment consists of a meteorological station, rain gauges, a tensio-neutronic and a piezometric measurement network, and eight water-flow measurement points. A multi-criteria, multi-scale approach was used. Three independent error criteria (Nash, error on volume and error on peak flow) were calculated and combined using the Pareto technique. A multi-scale approach was then used to calibrate and validate the model at the eight water-flow measurement points. Applying MHYDAS to the ten most extreme flood events of the last decade makes it possible to identify the ditches where overbank flow occurs and to calculate discharge at various points of the ditch network. Results show that, for the most extreme flood event, more than 45% of surface runoff occurs as overbank flow. Discussion shows that
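
    The three error criteria named in this record are standard and simple to state in code. The following is an illustrative sketch only (not the authors' MHYDAS implementation), with the observed and simulated hydrographs invented for the example:

```python
# Nash-Sutcliffe efficiency, relative volume error, and relative peak-flow error
# for an observed vs. simulated hydrograph (discharge series of equal length).

def nash_sutcliffe(obs, sim):
    # 1.0 is a perfect fit; values well below 0 indicate a poor model.
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def volume_error(obs, sim):
    # Relative error on total runoff volume.
    return (sum(sim) - sum(obs)) / sum(obs)

def peak_error(obs, sim):
    # Relative error on peak discharge.
    return (max(sim) - max(obs)) / max(obs)

# Invented example hydrographs (arbitrary units):
obs = [0.1, 0.5, 2.0, 1.2, 0.4]
sim = [0.2, 0.6, 1.8, 1.1, 0.5]
print(nash_sutcliffe(obs, sim), volume_error(obs, sim), peak_error(obs, sim))
```

    In the record's approach the three criteria are not collapsed into a single weighted score; parameter sets are compared by Pareto non-domination across all three criteria at once.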

  9. Use of the Sacramento Soil Moisture Accounting Model in Areas with Insufficient Forcing Data

    Science.gov (United States)

    Kuzmin, V.

    2009-04-01

    The Sacramento Soil Moisture Accounting model (SAC-SMA) is known as a very reliable and effective hydrological model. It is widely used by the U.S. National Weather Service (NWS) and many organizations in other countries for operational forecasting of flash floods. As a purely conceptual model, the SAC-SMA requires periodic re-calibration. However, this procedure is not trivial in watersheds with little or no historical data, in areas with changing watershed properties, in a changing climate, in regions with low-quality and low-spatial-resolution forcing data, etc. In such cases, so-called physically based models with measurable parameters may not be an alternative either, because they usually require high-quality forcing data and, hence, are quite expensive. Therefore, models of this type cannot be implemented in countries with scarce surface observation data. To resolve this problem, we propose using a very fast and efficient automatic calibration algorithm, the Stepwise Line Search (SLS), which has been implemented in the NWS since 2005, along with modifications developed especially for automated operational forecasting of flash floods in regions where high-resolution, high-quality forcing data are not available. The SLS family includes several simple yet efficient calibration algorithms: 1) SLS-F, which performs simultaneous natural smoothing of the response surface by quasi-local estimation of F-indices, allowing the most stable and reliable parameters to be found, which can differ from "global" optima in the usual sense (thus, this method slightly transforms the original objective function); 2) SLS-2L (Two-Loop SLS), which is suitable for basins where the hydraulic properties of soil are unknown; 3) SLS-2LF, which combines the SLS-F and SLS-2L algorithms and allows obtaining SAC-SMA parameters that can be transferred to ungauged catchments; 4) SLS-E, which also involves stochastic filtering of the model input through
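
    The record does not spell out the SLS algorithm itself. As a rough sketch of the general stepwise line-search idea only (coordinate-wise trial steps with a shrinking step size; the NWS implementation, the F-index smoothing, and the SAC-SMA objective are not reproduced here):

```python
def stepwise_line_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=200):
    # Try +/- step on each parameter in turn, keep any improvement,
    # and halve the step whenever no single-coordinate move helps.
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
            if step < tol:
                break
    return x, fx

# Toy quadratic standing in for an SAC-SMA calibration error criterion
# (assumed, for illustration only); minimum at (3, -1).
best, err = stepwise_line_search(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                                 [0.0, 0.0])
print(best, err)
```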

  10. Do current connectionist learning models account for reading development in different languages?

    Science.gov (United States)

    Hutzler, Florian; Ziegler, Johannes C; Perry, Conrad; Wimmer, Heinz; Zorzi, Marco

    2004-04-01

    Learning to read a relatively irregular orthography, such as English, is harder and takes longer than learning to read a relatively regular orthography, such as German. At the end of grade 1, the difference in reading performance on a simple set of words and nonwords is quite dramatic. Whereas children using regular orthographies are already close to ceiling, English children read only about 40% of the words and nonwords correctly. It takes almost 4 years for English children to come close to the reading level of their German peers. In the present study, we investigated to what extent recent connectionist learning models are capable of simulating this cross-language learning rate effect as measured by nonword decoding accuracy. We implemented German and English versions of two major connectionist reading models: Plaut et al.'s (Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. (1996). Understanding normal and impaired word reading: computational principles in quasi-regular domains. Psychological Review, 103, 56-115) parallel distributed model and Zorzi et al.'s (Zorzi, M., Houghton, G., & Butterworth, B. (1998a). Two routes or one in reading aloud? A connectionist dual-process model. Journal of Experimental Psychology: Human Perception and Performance, 24, 1131-1161) two-layer associative network. While both models predicted an overall advantage for the more regular orthography (i.e. German over English), they failed to predict that the difference between children learning to read regular versus irregular orthographies is larger earlier on. Further investigations showed that the two-layer network could be brought to simulate the cross-language learning rate effect when cross-language differences in teaching methods (phonics versus whole-word approach) were taken into account. The present work thus shows that in order to adequately capture the pattern of reading acquisition displayed by children, current connectionist models must not only be

  11. The Charitable Trust Model: An Alternative Approach For Department Of Defense Accounting

    Science.gov (United States)

    2016-12-01

    Constitution declares, “No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law; and a regular Statement and Account...accounting to supplant the current corporate-style financial management and reporting practices mandated by federal law. First, the researcher identifies... administration. The researcher then analyzes how the misapplied logic of private-sector accounting creates weaknesses and inconsistencies in federal

  12. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    Science.gov (United States)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied at different spatial resolutions, including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to fewer errors at all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) across all cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS
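
    Covariance localization of the kind this record applies can be sketched as an element-wise taper on the error covariance matrix, damping spurious long-range correlations. The code below is an illustrative sketch only: a simple exponential taper stands in for the taper function actually used, and the covariance matrix and coordinates are invented:

```python
import math

def distance_taper(d, L):
    # Simple exponential taper with localization length L
    # (a stand-in for, e.g., a Gaspari-Cohn function).
    return math.exp(-(d / L) ** 2)

def localize(cov, coords, L):
    # Element-wise (Schur) product of the covariance with the taper,
    # so distant grid points are effectively decorrelated.
    n = len(cov)
    return [[cov[i][j] * distance_taper(abs(coords[i] - coords[j]), L)
             for j in range(n)] for i in range(n)]

# Invented 3-point example: covariance on a 1-D line of grid points.
cov = [[1.0, 0.6, 0.3],
       [0.6, 1.0, 0.6],
       [0.3, 0.6, 1.0]]
loc = localize(cov, [0.0, 1.0, 2.0], L=1.0)
```

    The diagonal (each point's own error variance) is unchanged, while off-diagonal entries shrink with separation, which is what stabilizes the ensemble update.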

  13. Short-run analysis of fiscal policy and the current account in a finite horizon model

    OpenAIRE

    Heng-fu Zou

    1995-01-01

    This paper utilizes a technique developed by Judd to quantify the short-run effects of fiscal policies and income shocks on the current account in a small open economy. It is found that: (1) a future increase in government spending improves the short-run current account; (2) a future tax increase worsens the short-run current account; (3) a present increase in government spending worsens the short-run current account dollar for dollar, while a present increase in income improves the cu...

  14. Process Accounting

    OpenAIRE

    Gilbertson, Keith

    2002-01-01

    Standard utilities can help you collect and interpret your Linux system's process accounting data. Describes the uses of process accounting, standard process accounting commands, and example code that makes use of process accounting utilities.

  15. A Global Correction to PPMXL Proper Motions

    CERN Document Server

    Vickers, John J; Grebel, Eva K

    2016-01-01

    In this paper we notice that extragalactic sources seem to have non-zero proper motions in the PPMXL proper motion catalog. We collect a large, all-sky sample of extragalactic objects and fit their reported PPMXL proper motions to an ensemble of spherical harmonics in magnitude shells. A magnitude-dependent proper motion correction is thus constructed. This correction is applied to a set of fundamental radio sources, quasars, and is compared to similar corrections to assess its utility. Along with this paper, we publish code which may be used to correct proper motions over the full sky for PPMXL catalog sources that have 2 Micron All Sky Survey photometry.

  16. Foundations for proper-time relativistic quantum theory

    Science.gov (United States)

    Gill, Tepper L.; Morris, Trey; Kurtz, Stewart K.

    2015-05-01

    This paper is a progress report on the foundations for the canonical proper-time approach to relativistic quantum theory. We first review the standard square-root equation of relativistic quantum theory, followed by a review of the Dirac equation, providing new insights into the physical properties of both. We then introduce the canonical proper-time theory. For completeness, we give a brief outline of the canonical proper-time approach to electrodynamics and mechanics, and then introduce the canonical proper-time approach to relativistic quantum theory. This theory leads to three new relativistic wave equations. In each case, the canonical generator of proper-time translations is strictly positive definite, so that it represents a particle. We show that the canonical proper-time extension of the Dirac equation for Hydrogen gives results that are consistently closer to the experimental data, when compared to the Dirac equation. However, these results are not sufficient to account for either the Lamb shift or the anomalous magnetic moment.

  17. Underwriting information-theoretic accounts of quantum mechanics with a realist, psi-epistemic model

    Science.gov (United States)

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    2016-05-01

    We propose an adynamical interpretation of quantum theory called Relational Blockworld (RBW) where the fundamental ontological element is a 4D graphical amalgam of space, time and sources called a “spacetimesource element.” These are fundamental elements of space, time and sources, not source elements in space and time. The transition amplitude for a spacetimesource element is computed using a path integral with discrete graphical action. The action for a spacetimesource element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint between sources, the spacetime metric and the energy-momentum content of the spacetimesource element, rather than a dynamical law for time-evolved entities. To illustrate this interpretation, we explain the simple EPR-Bell and twin-slit experiments. This interpretation of quantum mechanics constitutes a realist, psi-epistemic model that might underwrite certain information-theoretic accounts of the quantum.

  18. Design of a Competency-Based Assessment Model in the Field of Accounting

    Science.gov (United States)

    Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús

    2012-01-01

    This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…

  19. A pluralistic account of homology: adapting the models to the data.

    Science.gov (United States)

    Haggerty, Leanne S; Jachiet, Pierre-Alain; Hanage, William P; Fitzpatrick, David A; Lopez, Philippe; O'Connell, Mary J; Pisani, Davide; Wilkinson, Mark; Bapteste, Eric; McInerney, James O

    2014-03-01

    Defining homologous genes is important in many evolutionary studies but raises obvious issues. Some of these issues are conceptual and stem from our assumptions of how a gene evolves; others are practical, and depend on the algorithmic decisions implemented in existing software. Therefore, to make progress in the study of homology, both ontological and epistemological questions must be considered. In particular, defining homologous genes cannot be solely addressed under the classic assumptions of strong tree thinking, according to which genes evolve in a strictly tree-like fashion of vertical descent and divergence and the problems of homology detection are primarily methodological. Gene homology could also be considered under a different perspective where genes evolve as "public goods," subjected to various introgressive processes. In this latter case, defining homologous genes becomes a matter of designing models suited to the actual complexity of the data and how such complexity arises, rather than trying to fit genetic data to some a priori tree-like evolutionary model, a practice that inevitably results in the loss of much information. Here we show how important aspects of the problems raised by homology detection methods can be overcome when even more fundamental roots of these problems are addressed: by analyzing, under public-goods thinking, the evolutionary processes through which genes have frequently originated. This kind of thinking acknowledges distinct types of homologs, characterized by distinct patterns, in phylogenetic and nonphylogenetic unrooted or multirooted networks. In addition, we define "family resemblances" to include genes that are related through intermediate relatives, thereby placing notions of homology in the broader context of evolutionary relationships. We conclude by presenting some payoffs of adopting such a pluralistic account of homology and family relationship, which expands the scope of evolutionary analyses beyond the traditional, yet

  20. Situated sentence processing: the coordinated interplay account and a neurobehavioral model.

    Science.gov (United States)

    Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R

    2010-03-01

    Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). Finally, we argue that the mechanisms

  1. Toward a Human Resource Accounting (HRA)-Based Model for Designing an Organizational Effectiveness Audit in Education.

    Science.gov (United States)

    Myroon, John L.

    The major purpose of this paper was to develop a Human Resource Accounting (HRA) macro-model that could be used for designing a school organizational effectiveness audit. Initially, the paper reviewed the advent and definition of HRA. In order to develop the proposed model, the different approaches to measuring effectiveness were reviewed,…

  2. An Interactive Activation Model of Context Effects in Letter Perception: Part 1. An Account of Basic Findings.

    Science.gov (United States)

    McClelland, James L.; Rumelhart, David E.

    1981-01-01

    A model of context effects in perception is applied to perception of letters. Perception results from excitatory and inhibitory interactions of detectors for visual features, letters, and words. The model produces facilitation for letters in pronounceable pseudowords as well as words and accounts for rule-governed performance without any rules.…
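
    The excitatory/inhibitory dynamics described in this record can be sketched as a single-unit update rule in the style of interactive-activation models. This is an illustrative sketch only; the parameter values (resting level, decay, activation bounds, net input) are invented, not McClelland and Rumelhart's:

```python
# One interactive-activation update step: positive net input pushes activation
# toward the maximum, negative net input toward the minimum, and activation
# always decays toward a resting level.
def ia_update(act, net, rest=-0.1, decay=0.1, a_max=1.0, a_min=-0.2):
    if net > 0:
        delta = net * (a_max - act)   # excitatory drive, scaled by headroom
    else:
        delta = net * (act - a_min)   # inhibitory drive, scaled by floor distance
    return act + delta - decay * (act - rest)

# Steady excitatory input (e.g., a word unit supporting one of its letters)
# drives activation up from rest toward an equilibrium below the ceiling.
a = -0.1
for _ in range(10):
    a = ia_update(a, net=0.2)
print(a)
```

    With these assumed parameters the unit settles where excitation and decay balance (about 0.63), which is how the model produces graded, context-dependent facilitation without any explicit rules.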

  3. Materials measurement and accounting in an operating plutonium conversion and purification process. Phase I. Process modeling and simulation. [PUCSF code

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, C.C. Jr.; Ostenak, C.A.; Gutmacher, R.G.; Dayem, H.A.; Kern, E.A.

    1981-04-01

    A model of an operating conversion and purification process for the production of reactor-grade plutonium dioxide was developed as the first component in the design and evaluation of a nuclear materials measurement and accountability system. The model accurately simulates process operation and can be used to identify process problems and to predict the effect of process modifications.

  5. Tracking magnetogram proper motions by multiscale regularization

    Science.gov (United States)

    Jones, Harrison P.

    1995-01-01

    Long uninterrupted sequences of solar magnetograms from the Global Oscillations Network Group (GONG) network and from the Solar and Heliospheric Observatory (SOHO) satellite will provide the opportunity to study the proper motions of magnetic features. The possible use of multiscale regularization, a scale-recursive estimation technique which begins with a prior model of how state variables and their statistical properties propagate over scale, is examined. Short magnetogram sequences are analyzed with the multiscale regularization algorithm as applied to optical flow. The algorithm is found to be efficient, provides results for all the spatial scales spanned by the data, and provides error estimates for the solutions. It is also found to be less sensitive to evolutionary changes than correlation tracking.

  6. The Lorentzian proper vertex amplitude: Asymptotics

    CERN Document Server

    Engle, Jonathan; Zipfel, Antonia

    2015-01-01

    In previous work, the Lorentzian proper vertex amplitude for a spin-foam model of quantum gravity was derived. In the present work, the asymptotics of this amplitude are studied in the semi-classical limit. The starting point of the analysis is an expression for the amplitude as an action integral with action differing from that in the EPRL case by an extra `projector' term which scales linearly with spins only in the asymptotic limit. New tools are introduced to generalize stationary phase methods to this case. For the case of boundary data which can be glued to a non-degenerate Lorentzian 4-simplex, the asymptotic limit of the amplitude is shown to equal the single Feynman term, showing that the extra term in the asymptotics of the EPRL amplitude has been eliminated.

  7. Proper body mechanics from an engineering perspective.

    Science.gov (United States)

    Mohr, Edward G

    2010-04-01

    The economic viability of the manual therapy practitioner depends on the number of massages/treatments that can be given in a day or week. Fatigue or injuries can have a major impact on income potential and could ultimately reach the point where the practitioner quits the profession and seeks other, less physically demanding employment. Manual therapy practitioners in general, and massage therapists in particular, can utilize a large variety of body postures while giving treatment to a client. The hypothesis of this paper is that there is an optimal method for applying force to the client, one which maximizes the benefit to the client and at the same time minimizes the strain and effort required by the practitioner. Two methods were used to quantitatively determine the effect of using "poor" body mechanics (improper method) and "best" body mechanics (proper/correct method). The first approach uses computer modeling to compare the two methods. Both postures were modeled such that the biomechanical effects on the practitioner's elbow, shoulder, hip, knee and ankle joints could be calculated. The force applied to the client, along with the height and angle of application of the force, was held constant for the comparison. The second approach was a field study of massage practitioners (n=18) to determine their maximal force capability, again comparing improper and proper body mechanics. Five application methods were tested at three different application heights, using a digital palm force gauge. Results showed that there was a definite difference between the two methods, and that the use of correct body mechanics can have a large impact on the health and well-being of the massage practitioner over both the short and long term.

  8. Crash Simulation of Roll Formed Parts by Damage Modelling Taking Into Account Preforming Effects

    Science.gov (United States)

    Till, Edwin T.; Hackl, Benjamin; Schauer, Hermann

    2011-08-01

    Complex-phase steels with strength levels up to 1200 MPa are suitable for roll forming. These may be applied in automotive structures to enhance crashworthiness, e.g. as stiffeners in doors. Even though the strain hardening of the material is low, there is considerable bending formability. However, ductility decreases with the strength level. Higher strength requires more focus on the structural integrity of the part during the process-planning stage and with respect to crash behavior. Nowadays numerical simulation is used as a process design tool for roll forming in a production environment. The assessment of the stability of a roll-forming process is quite challenging for AHSS grades. The present work has two objectives: first, to provide the roll-forming analyst with a reliable assessment tool for failure prediction; second, to establish simulation procedures for predicting the part's behavior in crash applications, taking damage and failure into account. Adequate ductile fracture models are available today which can be used in forming and crash applications. These continuum models are based on failure-strain curves or surfaces which depend on the stress triaxiality (e.g. Crach or GISSMO) and may additionally include the Lode angle (extended Mohr-Coulomb or extended GISSMO model). A challenging task is to obtain the respective failure-strain curves. The paper describes in detail how these failure-strain curves are obtained using small-scale tests within voestalpine Stahl: notch tensile, bulge and shear tests. It is shown that capturing the surface strains is not sufficient for obtaining reliable material failure parameters. The simulation tool for roll forming at the voestalpine Krems site is Copra® FEA RF, a 3D continuum finite element solver based on MSC.Marc. The simulation environment for crash applications is LS-DYNA. Shell elements are used for this type of analysis. A major task is to provide results of

  9. Proper Time in Weyl space-time

    CERN Document Server

    Avalos, R; Romero, C

    2016-01-01

    We discuss the question of whether or not a general Weyl structure is a suitable mathematical model of space-time. This is an issue that has been debated since Weyl first formulated his unified field theory. We do not present the discussion from the point of view of a particular unification theory, but instead from a more general standpoint in which the viability of such a structure as a model of space-time is investigated. Our starting point is the well-known axiomatic approach to space-time given by Ehlers, Pirani and Schild (EPS). In this framework, we carry out an exhaustive analysis of what is required for a consistent definition of proper time and show that such a definition leads to the prediction of the so-called "second clock effect". We take the view that if, based on experience, we were to reject space-time models predicting this effect, this could be incorporated as the last axiom in the EPS approach. Finally, we provide a proof that, in this case, we are led to a Weyl integrable ...

  10. Do prevailing societal models influence reports of near-death experiences?: a comparison of accounts reported before and after 1975.

    Science.gov (United States)

    Athappilly, Geena K; Greyson, Bruce; Stevenson, Ian

    2006-03-01

    Transcendental near-death experiences show some cross-cultural variation that suggests they may be influenced by societal beliefs. The prevailing Western model of near-death experiences was defined by Moody's description of the phenomenon in 1975. To explore the influence of this cultural model, we compared near-death experience accounts collected before and after 1975. We compared the frequency of 15 phenomenological features Moody defined as characteristic of near-death experiences in 24 accounts collected before 1975 and in 24 more recent accounts matched on relevant demographic and situational variables. Near-death experience accounts collected after 1975 differed from those collected earlier only in increased frequency of tunnel phenomena, which other research has suggested may not be integral to the experience, and not in any of the remaining 14 features defined by Moody as characteristic of near-death experiences. These data challenge the hypothesis that near-death experience accounts are substantially influenced by prevailing cultural models.

  11. A remark on proper partitions of unity

    CERN Document Server

    Calcines, Jose M Garcia

    2011-01-01

    In this paper we introduce, by means of the category of exterior spaces and using a process that generalizes the Alexandroff compactification, an analogous notion of numerable covering of a space in the proper and exterior setting. An application is given for fibrewise proper homotopy equivalences.

  12. Spinfoam Cosmology with the Proper Vertex

    Science.gov (United States)

    Vilensky, Ilya

    2017-01-01

    A modification of the EPRL vertex amplitude in the spin-foam framework of quantum gravity, the so-called ``proper vertex amplitude,'' has been developed to ensure correct semi-classical behavior consistent with classical Regge calculus. The proper vertex amplitude is defined by projecting onto the single gravitational sector. The amplitude is recast into an exponentiated form and we derive the asymptotic form of the projector part of the action. This enables us to study the asymptotics of the proper vertex by applying extended stationary-phase methods. We use the proper vertex amplitude to investigate transition amplitudes between coherent quantum boundary states of cosmological geometries. In particular, Hartle-Hawking no-boundary states are computed in the proper vertex framework. We confirm that in the classical limit the Hartle-Hawking wavefunction satisfies the Hamiltonian constraint. Partly supported by NSF grants PHY-1205968 and PHY-1505490.

  13. 工业企业会计的集中核算模式%Centralized Accounting Model on Industrial Enterprises

    Institute of Scientific and Technical Information of China (English)

    李宇

    2012-01-01

    The accounting management of industrial enterprises bears on enterprise development, and the current accounting model can no longer meet the needs of developing industrial enterprises. We need to establish a centralized accounting system to raise the level of accounting management in industrial enterprises and help reduce production and operating costs. The utilization of funds relates to the sustainable development of the enterprise, and the rate of capital utilization is an important part of accounting work. Industrial enterprises need to shift from the accounting of funds to accounting supervision and management, fundamentally raising the level of accounting management.

  14. KINERJA PENGELOLAAN LIMBAH HOTEL PESERTA PROPER DAN NON PROPER DI KABUPATEN BADUNG, PROVINSI BALI

    Directory of Open Access Journals (Sweden)

    Putri Nilakandi Perdanawati Pitoyo

    2016-07-01

    Bali tourism development can lead to positive and negative impacts that threaten environmental sustainability. This research evaluates hotels' waste-management performance, covering the management of waste water, emissions, hazardous waste, and solid waste, for hotels that participate in PROPER and those that do not. The research uses a qualitative descriptive method. Not all non-PROPER hotels test waste-water quality or chimney-emission quality, keep an inventory of hazardous waste, or sort solid waste. Wastewater discharge of PROPER hotels ranged from 290.9 to 571.8 m3/day and of non-PROPER hotels from 8.4 to 98.1 m3/day, with NH3 parameter values exceeding the quality standards. The quality of chimney emissions was still below the quality standard. The volume of hazardous waste of PROPER hotels ranged from 66.1 to 181.9 kg/month and of non-PROPER hotels from 5.003 to 103.42 kg/month. Hazardous waste from the PROPER hotels has been stored in the hazardous-waste TPS. The volume of solid waste of PROPER hotels ranged from 342.34 to 684.54 kg/day and of non-PROPER hotels from 4.83 to 181.51 kg/day. Neither the PROPER nor the non-PROPER hotels sort their solid waste. Hotel performance in wastewater, emission, hazardous-waste, and solid-waste management is better at PROPER hotels than at non-PROPER participants.

  15. THE MODEL OF MATERIALS AND STRUCTURES ENDURANCE, WITH TAKING INTO ACCOUNT THE EVOLUTION OF THEIR MECHANICAL CHARACTERISTICS

    Directory of Open Access Journals (Sweden)

    V. L. Gorobets

    2008-03-01

    In the article, a mathematical model is presented that describes how the endurance limit of railway rolling-stock materials and structures changes, taking into account the evolution of the parameters of the durability curve under loading.

  16. Accounting for non-linear chemistry of ship plumes in the GEOS-Chem global chemistry transport model

    NARCIS (Netherlands)

    Vinken, G.C.M.; Boersma, K.F.; Jacob, D.J.; Meijer, E.W.

    2011-01-01

    We present a computationally efficient approach to account for the non-linear chemistry occurring during the dispersion of ship exhaust plumes in a global 3-D model of atmospheric chemistry (GEOS-Chem). We use a plume-in-grid formulation where ship emissions age chemically for 5 h before being relea

  18. Closing the Gaps: Taking into Account the Effects of Heat Stress and Fatigue Modeling in an Operational Analysis

    NARCIS (Netherlands)

    Woodill, G.; Barbier, R.R.; Fiamingo, C.

    2010-01-01

    Traditional, combat model based analysis of Dismounted Combatant Operations (DCO) has focused on the ‘lethal’ aspects in an engagement, and to a limited extent the environment in which the engagement takes place. These are however only two of the factors that should be taken into account when conduc

  19. Structural equation models using partial least squares: an example of the application of SmartPLS® in accounting research

    Directory of Open Access Journals (Sweden)

    João Carlos Hipólito Bernardes do Nascimento

    2016-08-01

    Full Text Available In view of the Accounting academy’s increasing interest in the investigation of latent phenomena, researchers have turned to robust multivariate techniques. Although Structural Equation Models are frequently used in the international literature, the Accounting academy has made little use of the variant based on Partial Least Squares (PLS-SEM), mostly due to a lack of knowledge about the applicability and benefits of its use in Accounting research. While the PLS-SEM approach is most often used with survey data, the method is appropriate for modeling complex relations with multiple relationships of dependence and independence between latent variables, which also makes it very useful in experiments and archival data. A literature review is presented of Accounting studies that used the PLS-SEM technique. Next, as no publications were found that exemplify the application of the technique in Accounting, a PLS-SEM application is developed with the software SmartPLS® to encourage exploratory research, being particularly useful to graduate students. The main contribution of this article is therefore methodological, as it identifies clear guidelines for the appropriate use of PLS. By presenting an example of how to conduct exploratory research using PLS-SEM, the article aims to enhance researchers’ understanding of how to use and report on the technique.
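    The estimation engine behind PLS-SEM is iterative partial least squares. As a minimal illustration of the PLS regression core only (PLS1/NIPALS for a single response; this is not full PLS path modeling as done in SmartPLS®, and the data are synthetic):

    ```python
    import numpy as np

    def pls1_fit(X, y, n_components):
        """Minimal PLS1 (NIPALS) regression for a single response vector;
        returns the coefficient vector for centered data."""
        Xk = X - X.mean(axis=0)
        yk = y - y.mean()
        W, P, q = [], [], []
        for _ in range(n_components):
            w = Xk.T @ yk
            w = w / np.linalg.norm(w)        # weight vector
            t = Xk @ w                       # latent score
            p = Xk.T @ t / (t @ t)           # X loading
            c = (yk @ t) / (t @ t)           # y loading
            Xk = Xk - np.outer(t, p)         # deflate X
            yk = yk - c * t                  # deflate y
            W.append(w); P.append(p); q.append(c)
        W, P = np.array(W).T, np.array(P).T
        return W @ np.linalg.solve(P.T @ W, np.array(q))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.01 * rng.normal(size=50)
    B = pls1_fit(X, y, n_components=4)   # full rank: recovers the slopes
    ```

    With as many components as predictors, PLS1 coincides with ordinary least squares; fewer components give the biased low-dimensional projections that make PLS attractive for small samples.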

  20. Current-account effects of a devaluation in an optimizing model with capital accumulation

    DEFF Research Database (Denmark)

    Nielsen, Søren Bo

    1991-01-01

    short, the devaluation is bound to improve the current account on impact, whereas this will deteriorate in the case of a long contract period, and the more so the smaller are adjustment costs in investment. In addition, we study the consequences for the terms of trade and for the stocks of foreign...

  1. A two-phase moisture transport model accounting for sorption hysteresis in layered porous building constructions

    DEFF Research Database (Denmark)

    Johannesson, Björn; Janz, Mårten

    2009-01-01

    , with account also to sorption hysteresis. The different materials in the considered layered construction are assigned different properties, i.e. vapor and liquid water diffusivities and boundary (wetting and drying) sorption curves. Further, the scanning behavior between wetting and drying boundary curves...

  2. Accounting Department Chairpersons' Perceptions of Business School Performance Using a Market Orientation Model

    Science.gov (United States)

    Webster, Robert L.; Hammond, Kevin L.; Rothwell, James C.

    2013-01-01

    This manuscript is part of a stream of continuing research examining market orientation within higher education and its potential impact on organizational performance. The organizations researched are business schools and the data collected came from chairpersons of accounting departments of AACSB member business schools. We use a reworded Narver…

  4. Internet Accounting

    NARCIS (Netherlands)

    Pras, Aiko; Beijnum, van Bert-Jan; Sprenkels, Ron; Párhonyi, Robert

    2001-01-01

    This article provides an introduction to Internet accounting and discusses the status of related work within the IETF and IRTF, as well as certain research projects. Internet accounting is different from accounting in POTS. To understand Internet accounting, it is important to answer questions like

  5. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology.

    Science.gov (United States)

    Tompkins, Adrian M; Ermert, Volker

    2013-02-18

    The relative roles of climate variability and population related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Model simulations of entomological inoculation rate and circumsporozoite protein rate compare well to data from field studies from a wide range of locations in West Africa that encompass both seasonal endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine spatial resolution regional integrations for Eastern Africa reproduce the malaria atlas project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model grossly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities probably due to the neglect of population migration. A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions.

  6. Adaptation of an Electrochemistry-based Li-Ion Battery Model to Account for Deterioration Observed Under Randomized Use

    Science.gov (United States)

    2014-10-02

    Adaptation of an Electrochemistry-based Li-Ion Battery Model to Account for Deterioration Observed Under Randomized Use. Brian Bole, Chetan S... application's accuracy requirements and available resources (Daigle et al., 2011). In this paper, we use an electrochemistry-based lithium-ion (Li-ion)... the use of UKF not only to estimate the states in an electrochemistry model that vary over a charge-discharge cycle, but also to adapt certain

  7. Hybrid proper orthogonal decomposition formulation for linear structural dynamics

    Science.gov (United States)

    Placzek, A.; Tran, D.-M.; Ohayon, R.

    2008-12-01

    Hybrid proper orthogonal decomposition (PODh) formulation is a POD-based reduced-order modeling method where the continuous equation of the physical system is projected on the POD modes obtained from a discrete model of the system. The aim of this paper is to evaluate the hybrid POD formulation and to compare it with other POD formulations on the simple case of a linear elastic rod subject to prescribed displacements in the perspective of building reduced-order models for coupled fluid-structure systems in the future. In the first part of the paper, the hybrid POD is compared to two other formulations for the response to an initial condition: an approach based on the discrete finite elements equation of the rod called the discrete POD (PODd), and an analytical approach using the exact solution of the problem and consequently called the analytical POD (PODa). This first step is useful to ensure that the PODh performs well with respect to the other formulations. The PODh is therefore used afterwards for the forced motion response where a displacement is imposed at the free end of the rod. The main contribution of this paper lies in the comparison of three techniques used to take into account the non-homogeneous Dirichlet boundary condition with the hybrid POD: the first method relies on control functions, the second on the penalty method and the third on Lagrange multipliers. Finally, the robustness of the hybrid POD is investigated on two examples involving firstly the introduction of structural damping and secondly a nonlinear force applied at the free end of the rod.
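    The discrete POD (PODd) ingredient shared by all the formulations above can be sketched in a few lines of numpy: the POD modes of a snapshot matrix are its left singular vectors, and a reduced-order approximation is the projection onto the leading modes. The rod data here are synthetic, spanning exactly two spatial shapes.

    ```python
    import numpy as np

    # Snapshot matrix for a vibrating rod: each column is the displacement
    # field at one instant (synthetic, built from two spatial shapes).
    x = np.linspace(0.0, 1.0, 200)
    times = np.linspace(0.0, 1.0, 80)
    snapshots = np.column_stack(
        [np.sin(np.pi * x) * np.cos(2.0 * np.pi * t)
         + 0.1 * np.sin(3.0 * np.pi * x) * np.cos(6.0 * np.pi * t)
         for t in times])                            # shape (200, 80)

    # Discrete POD: left singular vectors of the snapshot matrix are the modes.
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :2]                                 # two dominant POD modes
    reduced = modes @ (modes.T @ snapshots)          # project onto POD subspace
    rel_err = np.linalg.norm(snapshots - reduced) / np.linalg.norm(snapshots)
    ```

    Because the synthetic snapshots span exactly two shapes, two modes reproduce them to machine precision; real structural data would show a decaying singular-value spectrum instead.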

  8. Accounting Automation

    OpenAIRE

    Laynebaril1

    2017-01-01

    Accounting Automation. Click link below to buy: http://hwcampus.com/shop/accounting-automation/ or visit www.hwcampus.com. "Accounting Automation": Please respond to the following: Imagine you are a consultant hired to convert a manual accounting system to an automated system. Suggest the key advantages and disadvantages of automating a manual accounting system. Identify the most important step in the conversion process. Provide a rationale for your response. ...

  9. Physical and Theoretical Models of Heat Pollution Applied to Cramped Conditions Welding Taking into Account the Different Types of Heat

    Science.gov (United States)

    Bulygin, Y. I.; Koronchik, D. A.; Legkonogikh, A. N.; Zharkova, M. G.; Azimova, N. N.

    2017-05-01

    The standard k-epsilon turbulence model, previously adapted for welding workshops equipped with fixed workstations and pollution sources, took into account only the convective component of heat transfer, which is quite reasonable for large-volume rooms with a low density of pollution sources; model calculations using only the convective component correlated well with experimental data. This study, however, deals with a small confined space in which bodies heated to high temperature by welding and located next to each other act as additional heat sources, so radiative heat exchange can no longer be neglected. The task is to experimentally investigate the various types of heat transfer in a limited, closed welding space, and the behavior of a mathematical model describing the contributions of the various heat-exchange components, including radiation, which influence the formation of the concentration, temperature, air-movement, and thermal-stress fields in the test environment. Field experiments on a model cubic body made it possible to configure and debug the heat and mass transfer model using the developed approaches; comparing the measured air-flow velocities and temperatures with the calculated data showed qualitative and quantitative agreement between process parameters, an indicator of the adequacy of the heat and mass transfer model.

  10. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    DEFF Research Database (Denmark)

    Branlard, Emmanuel; Gaunaa, Mac; Machefaux, Ewan

    2014-01-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum (BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw-models. The model and its implementation are presented. Data from the MEXICO experiment are used as a basis for validation. Three tools using the same 2D airfoil coefficient data are compared: a BEM code, an Actuator-Line and a vortex code. The vortex code is further used to validate the results from the newly implemented BEM yaw-model. Significant improvements...

  11. On the treatment of evapotranspiration, soil moisture accounting, and aquifer recharge in monthly water balance models.

    Science.gov (United States)

    Alley, W.M.

    1984-01-01

    Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation.-from Author
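    The Thomas abcd model mentioned above routes monthly precipitation through soil moisture and groundwater with four parameters. A sketch of one common formulation (published variants differ in details; the input numbers are illustrative):

    ```python
    import math

    def abcd_step(P, PET, S_prev, G_prev, a, b, c, d):
        """One monthly step of the Thomas 'abcd' water balance model."""
        W = P + S_prev                              # available water (mm)
        h = (W + b) / (2.0 * a)
        Y = h - math.sqrt(h * h - W * b / a)        # evapotranspiration opportunity
        S = Y * math.exp(-PET / b)                  # end-of-month soil moisture
        ET = Y - S                                  # actual evapotranspiration
        recharge = c * (W - Y)                      # to groundwater
        direct = (1.0 - c) * (W - Y)                # direct runoff
        G = (G_prev + recharge) / (1.0 + d)         # groundwater storage
        Q = direct + d * G                          # simulated streamflow
        return S, G, Q, ET

    S, G, Q, ET = abcd_step(P=100.0, PET=60.0, S_prev=50.0, G_prev=20.0,
                            a=0.98, b=150.0, c=0.3, d=0.1)
    ```

    The step conserves mass: precipitation plus initial storages equals evapotranspiration, streamflow, and final storages, which is a useful invariant to check when calibrating a, b, c, d against monthly streamflow records.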

  12. The Math Model for the Accounting Supervision

    Institute of Scientific and Technical Information of China (English)

    韩英

    2001-01-01

    By analyzing the profit distribution of managers and accounting units with probability and optimization theories, this paper carries out a systematic analysis of accounting behavior and establishes a mathematical model of accounting supervision. The reaction functions and optimal action choices of the accounting unit and the supervisor are determined, and the minimum value of the supervisor's punishment for violations by the accounting unit is given.

  13. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    Science.gov (United States)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

    In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae to connect the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
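    The recurrence mechanism in such Kermack-McKendrick-type models is that births slowly refill the susceptible pool until it crosses the epidemic threshold S* = γN/β, at which point an outbreak burns it back down. A forward-Euler sketch of that mechanism (illustrative parameters, not the authors' fitted model, which also includes latency and age structure):

    ```python
    import numpy as np

    def simulate(beta=0.5, gamma=1.0 / 8.0, births=40.0, N=250_000.0,
                 years=40, dt=0.1):
        """SIR dynamics with a constant inflow of unvaccinated susceptibles."""
        steps = int(round(years * 365 / dt))
        S, I = 20_000.0, 10.0
        S_hist = np.empty(steps)
        for k in range(steps):
            new_inf = beta * S * I / N * dt      # new infections this step
            S += births * dt - new_inf           # births replenish susceptibles
            I += new_inf - gamma * I * dt        # infectious recover at rate gamma
            S_hist[k] = S
        return S_hist

    S_hist = simulate()
    # Outbreaks can only ignite after S exceeds the threshold S* = gamma*N/beta
    # (62,500 here); between outbreaks S climbs linearly at the birth rate.
    ```

    The interval between outbreaks is set by how long births take to rebuild S past S*, which is the quantity the paper's approximate analytic formulae connect to outbreak size and duration.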

  14. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    Science.gov (United States)

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…
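    AFM's prediction is a logistic function of student ability plus, for each knowledge component on the item, a skill easiness term and a learning-rate term scaled by prior practice. A sketch with hypothetical parameter values (the monotone increase with practice is exactly why plain AFM cannot capture "slips", the false negatives the paper addresses):

    ```python
    import math

    def afm_probability(theta, item_skills, beta, gamma, opportunities):
        """AFM: logit of success = student ability + sum over the item's
        knowledge components of (easiness + learning_rate * practice count)."""
        logit = theta + sum(beta[k] + gamma[k] * opportunities[k]
                            for k in item_skills)
        return 1.0 / (1.0 + math.exp(-logit))

    # Hypothetical single-skill item.
    beta = {"subtraction": -0.3}     # skill easiness (assumed value)
    gamma = {"subtraction": 0.25}    # learning rate per opportunity (assumed)
    p_first = afm_probability(0.0, ["subtraction"], beta, gamma,
                              {"subtraction": 0})
    p_fifth = afm_probability(0.0, ["subtraction"], beta, gamma,
                              {"subtraction": 4})
    ```

    Because the logit grows linearly with the opportunity count, predicted success approaches 1 with practice; a slip parameter caps that asymptote below 1.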

  15. Computation of Asteroid Proper Elements: Recent Advances

    Science.gov (United States)

    Knežević, Z.

    2017-06-01

    The recent advances in the computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in the computation and stability assessment of proper elements, these advances can still be considered important improvements offering solutions to practical problems encountered in the past. The problem of getting unrealistic values of the perihelion frequency for very-low-eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of the Astraea asteroid family. The preliminary assessment of the stability over time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and advocates in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of a more comprehensive and reliable direct estimate of their individual and sample-average deviations from constancy.

  16. Proper Handling and Storage of Human Milk

    Science.gov (United States)

    ... Be sure to wash your hands before expressing or handling breast milk. When collecting milk, be sure to ...

  17. Identifying Proper Names Based on Association Analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The issue of proper-name recognition in Chinese text is discussed. An automatic approach based on association analysis to extract rules from a corpus is presented. The method tries to discover rules relevant to external evidence by association analysis, without additional manual effort. These rules can be used to recognize proper nouns in Chinese texts. The experimental results show that our method is practical in some applications. Moreover, the method is language independent.

  18. Proper holomorphic mappings between hyperbolic product manifolds

    CERN Document Server

    Janardhanan, Jaikrishnan

    2011-01-01

    We generalize a result of Remmert and Stein, on proper holomorphic mappings between domains that are products of certain planar domains, to finite proper holomorphic mappings between complex manifolds that are products of hyperbolic Riemann surfaces. While an important special case of our result follows from the ideas developed by Remmert and Stein, our proof of the full result relies on the interplay of the latter ideas and a finiteness theorem for Riemann surfaces.

  19. Equity Valuation and Accounting Numbers: Applying Zhang (2000) and Zhang and Chen (2007) Models to the Brazilian Market

    Directory of Open Access Journals (Sweden)

    Fernando Caio Galdi

    2011-03-01

    Full Text Available This paper investigates how accounting variables explain cross-sectional stock returns in the Brazilian capital market. The analysis is based on the Zhang (2000) and Zhang and Chen (2007) models. These models predict that stock returns are a function of net income, change in profitability, invested capital, changes in growth opportunities, and the discount rate. In general, the empirical results for the Brazilian capital market are consistent with the theoretical relations the models describe, similar to the results found in the US. Across different empirical tests (pooled regressions, Fama-MacBeth and panel data) the results and coefficients remain similar, which supports the robustness of our findings.

  20. Educational Accountability

    Science.gov (United States)

    Pincoffs, Edmund L.

    1973-01-01

    Discusses educational accountability as the paradigm of performance contracting, presents some arguments for and against accountability, and discusses the goals of education and the responsibility of the teacher. (Author/PG)

  1. Efficient modeling of sun/shade canopy radiation dynamics explicitly accounting for scattering

    Directory of Open Access Journals (Sweden)

    P. Bodin

    2012-04-01

    Full Text Available The separation of global radiation (Rg) into its direct (Rb) and diffuse (Rd) constituents is important when modeling plant photosynthesis because a high Rd:Rg ratio has been shown to enhance Gross Primary Production (GPP). To include this effect in vegetation models, the plant canopy must be separated into sunlit and shaded leaves. However, because such models are often too intractable and computationally expensive for theoretical or large-scale studies, simpler sun-shade approaches are often preferred. A widely used and computationally efficient sun-shade model was developed by Goudriaan (1977) (GOU). However, compared to more complex models, this model's realism is limited by its lack of explicit treatment of radiation scattering.

    Here we present a new model based on the GOU model, but which in contrast explicitly simulates radiation scattering by sunlit leaves and the absorption of this radiation by the canopy layers above and below (2-stream approach). Compared to the GOU model our model predicts significantly different profiles of scattered radiation that are in better agreement with measured profiles of downwelling diffuse radiation. With respect to these data our model's performance equals that of a more complex and much slower iterative radiation model while maintaining the simplicity and computational efficiency of the GOU model.
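    The sunlit/shaded split underlying GOU-type sun-shade models follows from Beer's law: the sunlit leaf-area fraction at canopy depth L is exp(-k_b L), so integrating over the canopy gives the total sunlit leaf area index. A sketch with illustrative values (scattering, the focus of the new model, is omitted here):

    ```python
    import math

    def sunlit_lai(LAI, k_b=0.5):
        """Total sunlit leaf area index for beam extinction coefficient k_b:
        integral of exp(-k_b * L) from 0 to LAI."""
        return (1.0 - math.exp(-k_b * LAI)) / k_b

    LAI = 4.0
    L_sun = sunlit_lai(LAI)      # leaves receiving the direct beam
    L_shade = LAI - L_sun        # leaves receiving only diffuse/scattered light
    ```

    Photosynthesis is then computed separately for the two fractions, with the shaded pool driven by diffuse (and, in the new model, explicitly scattered) radiation.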

  2. Accounting outsourcing

    OpenAIRE

    Richtáriková, Paulína

    2012-01-01

    The thesis deals with accounting outsourcing and provides a comprehensive explanation of the topic. First the thesis defines basic concepts (outsourcing, insourcing, offshoring and outplacement) and describes the differences between accounting outsourcing and the outsourcing of other business activities. The emphasis is put on the decision whether or not to implement accounting outsourcing. The thesis thus describes the main reasons to implement accounting outsourcing and risks that are ...

  3. An individual-based model of zebrafish population dynamics accounting for energy dynamics.

    Directory of Open Access Journals (Sweden)

    Rémy Beaudouin

    Full Text Available Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget (DEB) model for individual zebrafish was coupled to an individual-based model (IBM) of zebrafish population dynamics. Next, we fitted the DEB model to new experimental data on zebrafish growth and reproduction, thus improving existing models. We further analysed the DEB model and the DEB-IBM using a sensitivity analysis. Finally, the predictions of the DEB-IBM were compared to existing observations on natural zebrafish populations, and the predicted population dynamics are realistic. While our zebrafish DEB-IBM can still be improved by acquiring new experimental data on the most uncertain processes (e.g. survival or feeding), it can already serve to predict the impact of compounds at the population level.
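    Under the standard DEB model at constant food, structural length follows von Bertalanffy growth, which is the individual-level backbone such a DEB-IBM builds on. A sketch with hypothetical parameter values, not the fitted zebrafish estimates:

    ```python
    import math

    def vb_length(t, L0=3.5, Linf=40.0, rB=0.01):
        """von Bertalanffy growth curve: length (mm) at age t (days).
        L0, Linf and the rate rB are assumed, illustrative values."""
        return Linf - (Linf - L0) * math.exp(-rB * t)

    length_100d = vb_length(100.0)   # growth slows as length approaches Linf
    ```

    In the coupled IBM, each simulated fish carries such a growth state, and toxicant effects enter by perturbing the energy-budget parameters behind rB and Linf.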

  4. A comparison of Graham and Piotroski investment models using accounting information and efficacy measurement

    OpenAIRE

    2016-01-01

    We examine the investment models of Benjamin Graham and Joseph Piotroski and compare their efficacy by running backtests, using screening rules and ranking systems built in Portfolio 123. Using different combinations of screening rules and ranking systems, we also examine the performance of the Piotroski and Graham investment models. We find that the combination of the Piotroski and Graham investment models performs better than the S&P 500. We also find that the Piotroski screening with ...

  5. Accounting outsourcing

    OpenAIRE

    Klečacká, Tereza

    2009-01-01

    This thesis gives a complex view of accounting outsourcing, dealing with the outsourcing process from its beginning (conditions of collaboration, making of the contract), through the collaboration, to its possible ending. The work defines outsourcing, indicates its main advantages, disadvantages and the arguments for its use. The main focus of the thesis is the practical side of accounting outsourcing and the provision of first-quality accounting services.

  6. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards have witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed companie

  7. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...

  8. THE CURRENT ACCOUNT DEFICIT AND THE FIXED EXCHANGE RATE. ADJUSTING MECHANISMS AND MODELS.

    Directory of Open Access Journals (Sweden)

    HATEGAN D.B. Anca

    2010-07-01

    Full Text Available The main purpose of the paper is to explain what measures can be taken in order to fix the trade deficit, and the pressure placed upon a country by imposing such measures. International and national supply and demand conditions change rapidly, and if a country does not succeed in keeping tight control over its deficit, many factors will affect its wellbeing. In order to reduce the external trade deficit, the government needs to resort to several techniques. The desired result is a balanced current account, and the government is therefore free to use measures such as fixing its exchange rate, reducing government spending, etc. We have shown that all these measures will have a certain impact upon an economy, by allowing its exports to thrive and eliminating the danger from excessive imports, or vice versa. The main conclusion of our paper is that government intervention is permissible in order to maintain the balance of the current account.

  9. Nonlinear analysis of a new car-following model accounting for the global average optimal velocity difference

    Science.gov (United States)

    Peng, Guanghan; Lu, Weizhen; He, Hongdi

    2016-09-01

    In this paper, a new car-following model is proposed by considering the global average optimal velocity difference effect on the basis of the full velocity difference (FVD) model. We investigate the influence of the global average optimal velocity difference on the stability of traffic flow by making use of linear stability analysis. It indicates that the stable region will be enlarged by taking the global average optimal velocity difference effect into account. Subsequently, the mKdV equation near the critical point and its kink-antikink soliton solution, which can describe the traffic jam transition, is derived from nonlinear analysis. Furthermore, numerical simulations confirm that the effect of the global average optimal velocity difference can efficiently improve the stability of traffic flow, which show that our new consideration should be taken into account to suppress the traffic congestion for car-following theory.

  10. Research on the Accounting Model of Circular Economy

    Institute of Scientific and Technical Information of China (English)

    王傲舒媞

    2015-01-01

    In this paper, the author combines the characteristics of the circular economy with the significance of developing it, points out the limitations of China's current accounting model in the development of the circular economy, and puts forward an accounting model suitable for that development, as a useful reference for further building a sound circular-economy accounting system in China.

  11. Studying Impact of Organizational Factors in Information Technology Acceptance in Accounting Occupation by Use of TAM Model (Iranian Case Study)

    OpenAIRE

    Akbar Allahyari; Morteza Ramazani

    2012-01-01

    Nowadays, information technology serves as a beneficial part of industry, the economy, and culture. Accounting is a profession that provides information for users' decision-making, and in a complex world organizations must use information technology to present information to users in time. The purpose of this research is to study the impact of organizational factors on information technology acceptance in the accounting profession using the TAM model, with a descriptive survey method that the researcher has used to...

  12. Accounting for spatial effects in land use regression for urban air pollution modeling.

    Science.gov (United States)

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty, and the statistical methods for resolving these effects (e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models) may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R² values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models.
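    The core of GWR is a separate weighted least-squares fit at each target location, with weights that decay with distance so that regression coefficients can vary over space. A minimal sketch on synthetic data with a fixed Gaussian bandwidth (not the study's NO2 data or variables):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    coords = rng.uniform(0.0, 10.0, size=(100, 2))   # monitoring sites
    x = rng.normal(size=100)                          # a single predictor
    beta_true = 1.0 + 0.2 * coords[:, 0]              # slope drifts eastward
    y = beta_true * x + 0.05 * rng.normal(size=100)

    def gwr_coeffs(target, coords, x, y, bandwidth=2.0):
        """Weighted least squares at one location, Gaussian kernel weights."""
        d2 = np.sum((coords - target) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        X = np.column_stack([np.ones_like(x), x])
        XtW = X.T * w                                 # weight each observation
        return np.linalg.solve(XtW @ X, XtW @ y)      # [intercept, local slope]

    slope_west = gwr_coeffs(np.array([1.0, 5.0]), coords, x, y)[1]
    slope_east = gwr_coeffs(np.array([9.0, 5.0]), coords, x, y)[1]
    ```

    Because the true slope increases eastward, the locally fitted slope is larger in the east than in the west, which is exactly the non-stationarity a single global LUR coefficient would average away.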

  13. JOMAR - A model for accounting the environmental loads from building constructions

    Energy Technology Data Exchange (ETDEWEB)

    Roenning, Anne; Nereng, Guro; Vold, Mie; Bjoerberg, Svein; Lassen, Niels

    2008-07-01

    The objective of this project was to develop a model for calculating the environmental profile of whole building constructions, based upon data from databases and general LCA software, in addition to the model structure from the Nordic project on LCC assessment of buildings. The model has been tested on three building constructions: timber-based, flexible and heavy, as well as heavy. Total energy consumption and emissions contributing to climate change are calculated in a total life cycle perspective. The developed model and the exemplifying case assessments have shown that a holistic model including the operation phase is both important and possible to implement. The project has shown that the operation phase causes the highest environmental loads for the exemplified impact categories. A suggestion for further development of the model along two different axes, in collaboration with a broader representation from the building sector, is given in the report. (author)

  14. Modelling reverse characteristics of power LEDs with thermal phenomena taken into account

    Science.gov (United States)

    Ptak, Przemysław; Górecki, Krzysztof

    2016-01-01

This paper concerns modelling the characteristics of power LEDs, with particular reference to thermal phenomena. Special attention is paid to modelling the circuit protecting the device against excessive reverse voltage and to describing the influence of temperature on optical power. The network form of the developed model is presented, and results of its experimental verification for selected diodes operating under different cooling conditions are described. Very good agreement between the calculated and measured characteristics is obtained.

  15. Mathematical modeling taking into account of intrinsic kinetic properties of cylinder-type vanadium catalyst

    Institute of Scientific and Technical Information of China (English)

    陈振兴; 李洪桂; 王零森

    2004-01-01

A method to calculate the internal-surface effectiveness factor of the cylinder-type vanadium catalyst Ls-9 is given. Based on the hypothesis of an assumed one-dimensional diffusion, and combining a shape-adjustment factor with a three-step catalytic mechanism model, the macroscopic kinetic model equation for SO2 oxidation on Ls-9 is deduced. Macroscopic kinetic data were measured in a fixed-bed integral reactor at temperatures of 350-410 °C, space velocities of 1800-5000 h-1, and SO2 inlet contents of 7%-12%. The macroscopic kinetic model equation was then obtained through model parameter estimation.
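The internal-surface effectiveness factor corrects intrinsic kinetics for pore-diffusion limitation inside the pellet. As a stand-in for the cylinder-geometry factor derived in the paper, the textbook first-order slab result eta = tanh(phi)/phi (phi being the Thiele modulus) illustrates the idea:

```python
import math

def effectiveness_factor(thiele_modulus):
    """Internal-surface effectiveness factor for a first-order reaction
    in a slab pellet: eta = tanh(phi)/phi. A textbook approximation,
    not the cylinder-geometry factor derived for Ls-9 in the paper."""
    if thiele_modulus == 0:
        return 1.0  # no diffusion limitation
    return math.tanh(thiele_modulus) / thiele_modulus
```

For small phi the factor approaches 1 (kinetics-controlled); for large phi it falls off as 1/phi (diffusion-controlled), which is why shape-adjustment factors matter for non-slab pellets.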

  16. A hybrid mode choice model to account for the dynamic effect of inertia over time

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Börjesson, Maria; Bierlaire, Michel

The influence of habits, giving rise to an inertia effect in the choice process, has been intensely debated in the literature. Typically, inertia is accounted for by letting the indirect utility functions of the alternatives in the choice situation at time t depend on the outcome of the choice made at a previous point in time. However, according to the psychological literature, inertia is the result of a habit, which is formed in a longer process where many past decisions (not only the immediately previous one) remain in the memory of the consumer and influence behavior. In this study we use panel data gathered over a continuous period of time, six weeks, to study both inertia and the influence of habits. The tendency to stick with the same alternative is measured through lagged variables that link the current choice with the previous trip made with the same purpose, mode and time of day. However, the lagged...

  17. Modeling the pulse shape of Q-switched lasers to account for terminal-level relaxation

    Institute of Scientific and Technical Information of China (English)

    Zeng Qin-Yong; Wan Yong; Xiong Ji-Chuan; Zhu Da-Yong

    2011-01-01

To account for the effect of lower-level relaxation, we derive a characteristic equation describing the laser pulse from modified rate equations for Q-switched lasers. The pulse temporal profile is related to the ratio of the lower-level lifetime to the cavity lifetime and to the number of times the population inversion density is above threshold. By solving the coupled rate equations numerically, the effect of the terminal-level lifetime on the pulse temporal behaviour is analysed. The model is applied to a diode-pumped Nd:YAG laser passively Q-switched by a Cr4+:YAG absorber. Theoretical results show good agreement with the experiments.
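The role of terminal-level relaxation can be illustrated by integrating a toy dimensionless rate-equation set in which the lower laser level empties with a finite lifetime. The equations and parameter values below are a generic sketch, not the modified rate equations of the paper:

```python
def q_switch_pulse(r=3.0, tau_ratio=0.5, dt=1e-3, steps=40000):
    """Euler-integrate a toy dimensionless Q-switch model.

    phi     : photon density; n2, n1: upper/lower level populations.
    Time is measured in cavity photon lifetimes; lasing threshold
    corresponds to inversion n2 - n1 = 1.
    r         : initial inversion above threshold (n2(0) = r)
    tau_ratio : lower-level lifetime / cavity lifetime; larger values
                let the lower level bottleneck the inversion.
    Returns the peak photon density of the pulse.
    """
    phi, n2, n1 = 1e-6, r, 0.0
    peak = 0.0
    for _ in range(steps):
        inv = n2 - n1
        dphi = phi * (inv - 1.0)            # gain minus cavity loss
        dn2 = -phi * n2                     # stimulated emission empties n2
        dn1 = phi * n2 - n1 / tau_ratio     # n1 fills, then relaxes
        phi += dt * dphi
        n2 += dt * dn2
        n1 += dt * dn1
        peak = max(peak, phi)
    return peak
```

A slowly relaxing terminal level keeps n1 populated during the pulse, so the inversion is burned twice as fast and the pulse peak drops, which is the qualitative effect the paper's characteristic equation captures.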

  18. Microsimulation Model Estimating Czech Farm Income from Farm Accountancy Data Network Database

    Directory of Open Access Journals (Sweden)

    Z. Hloušková

    2014-09-01

Agricultural income is one of the most important measures of the economic status of agricultural farms and of the whole agricultural sector. This work focuses on finding an optimal method for estimating national agricultural income from the micro-economic database managed by the Farm Accountancy Data Network (FADN). Use of the FADN database is relevant because its results are representative for the whole country and it permits micro-level analysis. The main motivation for this study was a first forecast of national agricultural income from FADN data, undertaken nine months before the final official FADN results were published. An original income-estimation method and simulation procedure were established and successfully tested on the whole database using data from two preceding years. The paper also describes the income-prediction method used and the tests of its suitability.
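The grossing-up idea behind estimating a national total from a farm-level sample can be sketched as a weighted sum, where each sampled farm carries a weight equal to the number of population farms it represents. The figures are invented for illustration and are not FADN data:

```python
def national_income_estimate(sample):
    """Gross up a farm-level sample to a national total.

    sample: list of (weight, farm_income) pairs, where weight is the
    number of farms in the population that the sampled farm represents.
    """
    return sum(weight * income for weight, income in sample)
```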

  19. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization there is increased mobility, leading to higher amounts of traffic-related activity on a global scale. ...

  20. Accounting for correlated observations in an age-based state-space stock assessment model

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders

    2016-01-01

Fish stock assessment models often rely on size- or age-specific observations that are assumed to be statistically independent of each other. In reality, these observations are not raw observations, but rather estimates from a catch-standardization model or similar summary statistics based...

  2. Value-Added Models of Assessment: Implications for Motivation and Accountability

    Science.gov (United States)

    Anderman, Eric M.; Anderman, Lynley H.; Yough, Michael S.; Gimbert, Belinda G.

    2010-01-01

In this article, we examine how value-added models of measuring academic achievement relate to student motivation. Using an achievement goal orientation theory perspective, we argue that value-added models, which focus on the progress of individual students over time, are more closely aligned with research on student motivation than are more…

  3. Demand model for production of an enterprise taking into account factor of consumer expectations

    Directory of Open Access Journals (Sweden)

    L.V. Potrashkova

    2012-12-01

This article presents a dynamic mathematical model of demand for innovative and non-innovative products of an enterprise. The model allows future demand to be estimated as a function of consumer expectations of product quality. Consumer expectations are treated as the resource component of enterprise marketing potential.

  4. Modelling of L-valine Repeated Fed-batch Fermentation Process Taking into Account the Dissolved Oxygen Tension

    Directory of Open Access Journals (Sweden)

    Tzanko Georgiev

    2009-03-01

This article deals with the synthesis of a dynamic unstructured model of a variable-volume fed-batch fermentation process with intensive droppings for L-valine production. The presented approach includes the following main procedures: description of the process by generalized stoichiometric equations; preliminary data processing and calculation of specific rates for the main kinetic variables; identification of the specific rates taking into account the dissolved oxygen tension; establishment and optimisation of a dynamic model of the process; and simulation studies. MATLAB is used as the research environment.
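The "calculation of specific rates" step can be illustrated with a finite-difference estimate of the specific growth rate from total biomass in a variable-volume culture; this is a textbook simplification under assumed notation, not the authors' identification procedure:

```python
def specific_rate(x0, x1, v0, v1, dt):
    """Finite-difference estimate of the specific growth rate mu in a
    variable-volume fed-batch: mu = (1/(X*V)) * d(X*V)/dt.

    Using total biomass X*V (concentration times volume) corrects for
    dilution by feeding; the midpoint of the two biomass values is used
    as the reference level.
    """
    m0, m1 = x0 * v0, x1 * v1          # total biomass before/after
    return (m1 - m0) / (dt * 0.5 * (m0 + m1))
```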

  5. Spinfoam cosmology with the proper vertex amplitude

    CERN Document Server

    Vilensky, Ilya

    2016-01-01

    The proper vertex amplitude is derived from the EPRL vertex by restricting to a single gravitational sector in order to achieve the correct semi-classical behaviour. We apply the proper vertex to calculate a cosmological transition amplitude that can be viewed as the Hartle-Hawking wavefunction. To perform this calculation we deduce the integral form of the proper vertex and use extended stationary phase methods to estimate the large-volume limit. We show that the resulting amplitude satisfies an operator constraint whose classical analogue is the Hamiltonian constraint of the Friedmann-Robertson-Walker cosmology. We find that the constraint dynamically selects the relevant family of coherent states and demonstrate a similar dynamic selection in standard quantum mechanics.

  6. Proper conformal symmetries in SD Einstein spaces

    CERN Document Server

    Chudecki, Adam

    2014-01-01

Proper conformal symmetries in self-dual (SD) Einstein spaces are considered. It is shown that such symmetries are admitted only by Einstein spaces of type [N]x[N]. Spaces of type [N]x[-] are considered in detail. Existence of a proper conformal Killing vector implies existence of an isometric, covariantly constant, null Killing vector. It is shown that there are two classes of [N]x[-] metrics admitting proper conformal symmetry, which can be distinguished by analysis of the associated anti-self-dual (ASD) null strings. Both classes are analyzed in detail. The problem is reduced to a single linear PDE, and some general and special solutions of this PDE are presented.

  7. Modeling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    Directory of Open Access Journals (Sweden)

    C. Evenhuis

    2014-01-01

Coral reefs are diverse ecosystems threatened by rising CO2 levels that are driving the observed increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking the rates of growth, recovery and calcification to the rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, resource allocation trade-offs, and adaptation to local environments. These general relationships allow the model to be constructed from a range of experimental and observational data. The different characteristics of the model are also assessed against independent data to show that it captures the observed response of corals. We also provide new insights into the factors that determine calcification rates and provide a framework, based on well-known biological principles, for understanding the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can reproduce much of the observed response of corals to changes in temperature and ocean acidification.

  8. Modeling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    Science.gov (United States)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2014-01-01

Coral reefs are diverse ecosystems threatened by rising CO2 levels that are driving the observed increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking the rates of growth, recovery and calcification to the rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, resource allocation trade-offs, and adaptation to local environments. These general relationships allow the model to be constructed from a range of experimental and observational data. The different characteristics of the model are also assessed against independent data to show that it captures the observed response of corals. We also provide new insights into the factors that determine calcification rates and provide a framework, based on well-known biological principles, for understanding the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can reproduce much of the observed response of corals to changes in temperature and ocean acidification.
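Two of the principles named in the abstract, the Arrhenius equation and thermal specialisation around a locally adapted optimum, can be combined in a toy rate curve. All parameter values below are illustrative assumptions, not values from the paper:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def coral_rate(temp_c, e_a=0.6, t_opt_c=27.0, width_c=3.0):
    """Toy calcification-rate curve: an Arrhenius term (rates rise with
    absolute temperature) multiplied by a Gaussian thermal-specialisation
    envelope centred on a locally adapted optimum temperature."""
    t_k = temp_c + 273.15
    arrhenius = math.exp(-e_a / (K_B * t_k))          # activation-energy term
    envelope = math.exp(-((temp_c - t_opt_c) ** 2) / (2 * width_c ** 2))
    return arrhenius * envelope
```

The product yields the familiar asymmetric thermal performance shape: rates climb with temperature up to the optimum, then collapse as the specialisation envelope dominates, which is where bleaching-related stress enters this class of model.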

  9. Reduced models accounting for parallel magnetic perturbations: gyrofluid and finite Larmor radius-Landau fluid approaches

    Science.gov (United States)

    Tassi, E.; Sulem, P. L.; Passot, T.

    2016-12-01

    Reduced models are derived for a strongly magnetized collisionless plasma at scales which are large relative to the electron thermal gyroradius and in two asymptotic regimes. One corresponds to cold ions and the other to far sub-ion scales. By including the electron pressure dynamics, these models improve the Hall reduced magnetohydrodynamics (MHD) and the kinetic Alfvén wave model of Boldyrev et al. (2013 Astrophys. J., vol. 777, 2013, p. 41), respectively. We show that the two models can be obtained either within the gyrofluid formalism of Brizard (Phys. Fluids, vol. 4, 1992, pp. 1213-1228) or as suitable weakly nonlinear limits of the finite Larmor radius (FLR)-Landau fluid model of Sulem and Passot (J. Plasma Phys., vol 81, 2015, 325810103) which extends anisotropic Hall MHD by retaining low-frequency kinetic effects. It is noticeable that, at the far sub-ion scales, the simplifications originating from the gyroaveraging operators in the gyrofluid formalism and leading to subdominant ion velocity and temperature fluctuations, correspond, at the level of the FLR-Landau fluid, to cancellation between hydrodynamic contributions and ion finite Larmor radius corrections. Energy conservation properties of the models are discussed and an explicit example of a closure relation leading to a model with a Hamiltonian structure is provided.

  10. Proper generalized decompositions an introduction to computer implementation with Matlab

    CERN Document Server

    Cueto, Elías; Alfaro, Icíar

    2016-01-01

This book is intended to help researchers overcome the entrance barrier to Proper Generalized Decomposition (PGD), by providing a valuable tool to begin the programming task. Detailed Matlab codes are included for every chapter in the book, in which the theory previously described is translated into practice. Examples include parametric problems, non-linear model order reduction and real-time simulation, among others. Proper Generalized Decomposition (PGD) is a method for numerical simulation in many fields of applied science and engineering. As a generalization of Proper Orthogonal Decomposition or Principal Component Analysis to an arbitrary number of dimensions, PGD is able to provide the analyst with very accurate solutions for problems defined in high-dimensional spaces, parametric problems and even real-time simulation.

  11. An individual-based model of Zebrafish population dynamics accounting for energy dynamics

    DEFF Research Database (Denmark)

    Beaudouin, Remy; Goussen, Benoit; Piccini, Benjamin

    2015-01-01

Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget for individual zebrafish (DEB model)... The predictions of the DEB-IBM were compared to existing observations on natural zebrafish populations, and the predicted population dynamics are realistic. While our zebrafish DEB-IBM can still be improved by acquiring new experimental data on the most uncertain processes (e.g. survival or feeding), it can...

  12. Mathematical modelling of complex equilibria taking into account experimental data on activities of components

    Energy Technology Data Exchange (ETDEWEB)

    Nikolaeva, L.S.; Evseev, A.M.; Rozen, A.M.; Bobytev, A.P.; Kir' yanov, Yu.A. (Moskovskij Gosudarstvennyj Univ. (USSR))

    1981-09-01

Extraction systems potentially applicable to the reprocessing of irradiated nuclear fuels are considered. It is shown that selecting the component activities as the observed properties of the system (responses) makes it possible to model the equilibria using regression-analysis methods. A mathematical model of nitric acid extraction with tributyl phosphate is presented. The data on the composition of the complexes in the system studied, obtained by mathematical modelling, are confirmed by study of the IR spectra of the extracts.

  13. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    Science.gov (United States)

    Zendehrouh, Sareh

    2015-11-01

Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experience, whereas goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive-control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost conflict. The simulation results address existing theories about two event-related potentials, namely the error-related negativity (ERN) and the feedback-related negativity (FRN), and explore which account of them fits best. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
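The model-free versus model-based distinction can be sketched as a dual controller on a two-armed probabilistic task: a delta-rule (model-free) value is blended with a value computed from a task model. The blend weight, task layout, and greedy action rule are illustrative assumptions, not the paper's architecture:

```python
import random

def run_hybrid(trials=2000, alpha=0.2, w=0.5, p_reward=(0.8, 0.2), seed=1):
    """Minimal dual-controller chooser on a two-armed bandit.

    q_mf : model-free values, learned by trial and error (delta rule).
    q_mb : 'model-based' values, read off a (here, known) reward model.
    Actions are chosen greedily on the weighted blend of the two.
    Returns how often each arm was picked.
    """
    rng = random.Random(seed)
    q_mf = [0.0, 0.0]
    q_mb = list(p_reward)                 # values from the task model
    picks = [0, 0]
    for _ in range(trials):
        q = [w * mb + (1 - w) * mf for mb, mf in zip(q_mb, q_mf)]
        a = 0 if q[0] >= q[1] else 1      # greedy choice on blended value
        picks[a] += 1
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        q_mf[a] += alpha * (r - q_mf[a])  # model-free delta-rule update
    return picks
```

Prediction errors like `r - q_mf[a]` are the quantity that reinforcement-learning accounts of the ERN/FRN tie to those event-related potentials.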

  14. An agent-based simulation model of patient choice of health care providers in accountable care organizations.

    Science.gov (United States)

    Alibrahim, Abdullah; Wu, Shinyi

    2016-10-04

Accountable care organizations (ACOs) in the United States show promise in controlling health care costs while preserving patients' choice of providers. Understanding the effects of patient choice is critical in novel payment and delivery models like the ACO that depend on continuity of care and accountability. The financial, utilization, and behavioral implications associated with a patient's decision to forego local health care providers for more distant ones to access higher-quality care remain unknown. To study this question, we used an agent-based simulation model of a health care market composed of providers able to form ACOs serving patients, and embedded in it a conditional logit decision model for patients capable of choosing their care providers. The simulation focuses on Medicare beneficiaries and their congestive heart failure (CHF) outcomes. We place the patient agents in an ACO delivery system model in which provider agents decide whether they remain in an ACO and whether they perform a quality-improving CHF disease management intervention. Illustrative results show that allowing patients to choose their providers reduces the yearly payment per CHF patient by $320, reduces mortality rates by 0.12 percentage points and hospitalization rates by 0.44 percentage points, and marginally increases provider participation in ACOs. This study demonstrates a model capable of quantifying the effects of patient choice in a theoretical ACO system and provides a potential tool for policymakers to understand the implications of patient choice and assess potential policy controls.
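The conditional logit choice step can be sketched as softmax probabilities over provider utilities. The linear utility form and its coefficients below are assumptions for illustration, not the study's estimated model:

```python
import math

def choice_probabilities(utilities):
    """Conditional-logit choice probabilities:
    P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    s = sum(exps)
    return [e / s for e in exps]

def provider_utility(quality, distance_km, b_quality=1.0, b_distance=-0.05):
    """Illustrative linear utility: patients value quality and dislike
    travel distance. Coefficients are invented, not estimated."""
    return b_quality * quality + b_distance * distance_km
```

With these coefficients a distant, high-quality provider can out-compete a nearby one, which is exactly the trade-off whose system-level consequences the agent-based simulation quantifies.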

  15. Evaluation of proper height for squatting stool.

    Science.gov (United States)

    Jung, Hwa S; Jung, Hyung-Shik

    2008-05-01

Many jobs and activities in people's daily lives involve squatting postures. Jobs such as housekeeping, farming and welding require various squatting activities. It is speculated that prolonged squatting without any type of supporting stool would gradually impose musculoskeletal injuries on workers. This study aims to determine the proper height of a stool according to the position of working materials for the squatting worker. A total of 40 male and female college students and 10 female farmers participated in the experiment. Student participants were asked to sit and work in three different positions: floor level of 50 mm; ankle level of 200 mm; and knee level of 400 mm. They were then provided with stools of various heights and asked to maintain a squatting work posture. For each working position, they were asked to write down their preferred stool height. A Likert summated rating method as well as a pairwise ranking test was applied to evaluate user preference for the provided stools under the different working positions. Under a similar experimental procedure, female farmers were asked to indicate their body part discomfort (BPD) on a body chart before and after performing the work. Statistical analysis showed comparable results from both evaluation measures. When the working position is at 50 mm, the proper stool height is 100 mm and should not be higher than 150 mm. When the working position is 200 mm, the proper stool height is 150 mm. When the working position is 400 mm, the proper stool height is 200 mm. Thus, it is strongly recommended to use a stool height matched to the working position. Moreover, a wearable chair prototype was designed so that workers in a squatting posture do not have to carry and move the stool from one place to another. This stool should ultimately help to relieve physical stress and hence promote the health of squatting workers.

  16. Isometric Isomorphisms in Proper CQ*-algebras

    Institute of Scientific and Technical Information of China (English)

    Choonkil PARK; Jong Su AN

    2009-01-01

In this paper, we prove the Hyers-Ulam-Rassias stability of isometric homomorphisms in proper CQ*-algebras for the following Cauchy-Jensen additive mapping: 2f((x1+x2)/2 + y) = f(x1) + f(x2) + 2f(y). The concept of Hyers-Ulam-Rassias stability originated from Th. M. Rassias' stability theorem in the paper: On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc., 72 (1978), 297-300. This is applied to investigate isometric isomorphisms between proper CQ*-algebras.

  17. Making Collaborative Innovation Accountable

    DEFF Research Database (Denmark)

    Sørensen, Eva

The public sector is increasingly expected to be innovative, but the price of a more innovative public sector might be that it becomes difficult to hold public authorities to account for their actions. The article explores the tensions between innovative and accountable governance, traces the foundation of these tensions to different accountability models, and suggests directions to take in analyzing the accountability of collaborative innovation processes.

  18. Accounting for scattering in the Landauer-Datta-Lundstrom transport model

    Directory of Open Access Journals (Sweden)

    Юрій Олексійович Кругляк

    2015-03-01

Scattering of carriers in the LDL transport model is considered qualitatively, in terms of the changes of the scattering times in collision processes. The basic relationship between the transmission coefficient T and the average mean free path is derived for a 1D conductor. As an example, experimental data for a Si MOSFET are analyzed with the use of various reliability models.
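The T-versus-mean-free-path relationship for a 1D conductor in the Landauer-Datta-Lundstrom picture has the well-known form T = lambda/(lambda + L), which interpolates between the ballistic and diffusive limits. A one-line sketch, with lengths assumed in nanometres:

```python
def transmission(mean_free_path_nm, length_nm):
    """LDL transmission for a 1D resistor: T = lambda / (lambda + L).
    T -> 1 for L << lambda (ballistic limit);
    T -> lambda / L for L >> lambda (diffusive limit)."""
    return mean_free_path_nm / (mean_free_path_nm + length_nm)
```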

  19. A retinal circuit model accounting for wide-field amacrine cells

    OpenAIRE

    SAĞLAM, Murat; Hayashida, Yuki; Murayama, Nobuki

    2008-01-01

In previous experimental studies on visual processing in vertebrates, higher-order visual functions such as object segregation from background were found even in the retinal stage. Previously, the “linear–nonlinear” (LN) cascade models have been applied to the retinal circuit and succeeded in describing the input-output dynamics for certain parts of the circuit, e.g., the receptive field of the outer retinal neurons. And recently, some abstract models composed of LN cascades as the cir...

  20. Loading Processes Dynamics Modelling Taking into Account the Bucket-Soil Interaction

    Directory of Open Access Journals (Sweden)

    Carmen Debeleac

    2007-10-01

The author proposes three dynamic models for analysing the vibrations and resistive forces that arise during loading with construction equipment such as front loaders and excavators. The models bring out the components of digging: penetration, cutting, and loading. The study's conclusions highlight the dynamic overloads that appear in the working state and induce self-oscillations in the equipment structure.

  1. An extended continuum model accounting for the driver's timid and aggressive attributions

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Rongjun; Ge, Hongxia [Faculty of Maritime and Transportation, Ningbo University, Ningbo 315211 (China); Jiangsu Province Collaborative Innovation Center for Modern Urban Traffic Technologies, Nanjing 210096 (China); National Traffic Management Engineering and Technology Research Centre Ningbo University Sub-centre, Ningbo 315211 (China); Wang, Jufeng, E-mail: wjf@nit.zju.edu.cn [Ningbo Institute of Technology, Zhejiang University, Ningbo 315100 (China)

    2017-04-18

Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. Applying linear stability theory, we present an analysis of the new model's linear stability. Through nonlinear analysis, the KdV–Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving is better than timid driving because the aggressive driver adjusts speed promptly according to the leading car's speed. The key finding of this new model is that timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship of energy consumption between aggressive and timid driving is also studied; numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption. - Highlights: • A new continuum model is developed considering the driver's timid and aggressive behaviors simultaneously. • Applying linear stability theory, the new model's linear stability is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.

  2. A retinal circuit model accounting for wide-field amacrine cells.

    Science.gov (United States)

    Sağlam, Murat; Hayashida, Yuki; Murayama, Nobuki

    2009-03-01

In previous experimental studies on visual processing in vertebrates, higher-order visual functions such as object segregation from background were found even in the retinal stage. Previously, the "linear-nonlinear" (LN) cascade models have been applied to the retinal circuit and succeeded in describing the input-output dynamics for certain parts of the circuit, e.g., the receptive field of the outer retinal neurons. And recently, some abstract models composed of LN cascades as the circuit elements could explain the higher-order retinal functions. However, in such models each class of retinal neurons is mostly omitted, so how those neurons contribute to the visual computations cannot be explored. Here, we present a spatio-temporal computational model of the vertebrate retina, based on the response function for each class of retinal neurons and on the anatomical inter-cellular connections. This model was capable not only of reproducing the spatio-temporal filtering properties of the outer retinal neurons, but also of realizing the object segregation mechanism in the inner retinal circuit involving the "wide-field" amacrine cells. Moreover, the first-order Wiener kernels calculated for the neurons in our model showed a reasonable fit to kernels previously measured in real retinal neurons in situ.
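The LN cascade that these retinal models build on can be sketched as a linear temporal convolution followed by a static nonlinearity. The kernel values and the half-wave rectifier below are illustrative choices, not the paper's fitted model:

```python
def ln_response(stimulus, kernel, threshold=0.0):
    """Linear-nonlinear (LN) cascade: convolve the stimulus with a
    linear temporal kernel, then pass the result through a static
    rectifying nonlinearity (here, half-wave rectification)."""
    n, k = len(stimulus), len(kernel)
    out = []
    for t in range(n):
        # causal convolution: only past and current samples contribute
        lin = sum(kernel[j] * stimulus[t - j] for j in range(k) if t - j >= 0)
        out.append(max(0.0, lin - threshold))
    return out
```

For a purely linear system (threshold 0, non-negative drive), the response to an impulse stimulus simply reads out the kernel, which is the intuition behind estimating first-order Wiener kernels from white-noise responses.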

  3. Analysis of homogeneous/non-homogeneous nanofluid models accounting for nanofluid-surface interactions

    Science.gov (United States)

    Ahmad, R.

    2016-07-01

This article reports an unbiased analysis of water-based, rod-shaped alumina nanoparticles, considering both homogeneous and non-homogeneous nanofluid models over the coupled nanofluid-surface interface. The surface mechanics are obtained for both models, which were ignored in previous studies. The viscosity and thermal conductivity data are taken from the international nanofluid property benchmark exercise, and all simulations use experimentally verified results. For both models, the movement of the alumina nanoparticles over the surface is observed by solving the corresponding system of differential equations. For the non-homogeneous model, a uniform temperature and nanofluid volume fraction are assumed at the surface, and the flux of alumina nanoparticles is taken as zero; this zero-flux assumption makes the non-homogeneous model physically more realistic. The differences between all profiles for the homogeneous and non-homogeneous models are insignificant, owing to small deviations in the values of the Brownian motion and thermophoresis parameters.

  4. Modelling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    Science.gov (United States)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2015-05-01

Coral reefs are diverse ecosystems that are threatened by rising CO2 levels through increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking rates of growth, recovery and calcification to rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, correlated up- and down-regulation of traits that are consistent with resource allocation trade-offs, and adaptation to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The performance of the model is assessed against independent data to demonstrate how it can capture the observed response of corals to stress. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles to help understand the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can give insights into how corals respond to changes in temperature and ocean acidification.

  5. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    Science.gov (United States)

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-03

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined (US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel) 95% of the results occurred within ±20 g CO2e MJ⁻¹ of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
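The kind of parametric uncertainty propagation described above can be sketched as a toy Monte Carlo exercise. The response function and parameter distributions below are hypothetical placeholders, not the paired equilibrium/emissions model used in the study.

```python
import random
import statistics

def iluc_intensity(crop_yield, conversion_emissions):
    # Toy response surface: higher yields need less new cropland, so
    # land-conversion emissions are spread over more fuel energy.
    return conversion_emissions / crop_yield

def propagate(n=20_000, seed=7):
    """Sample the uncertain parameters, run the response function for each
    draw, and summarise the resulting emissions-intensity distribution."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        y = rng.lognormvariate(0.0, 0.15)  # yield parameter (hypothetical spread)
        c = rng.gauss(30.0, 6.0)           # conversion emissions, g CO2e/MJ (hypothetical)
        draws.append(iluc_intensity(y, c))
    draws.sort()
    mean = statistics.fmean(draws)
    lo, hi = draws[int(0.025 * n)], draws[int(0.975 * n)]  # central 95% interval
    cv = statistics.stdev(draws) / mean                    # coefficient of variation
    return mean, lo, hi, cv
```

Variance decomposition across parameters, which the study uses to attribute most of the spread to crop yield and new-cropland productivity, would be layered on top of a sampler like this.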

  6. An extended continuum model accounting for the driver's timid and aggressive attributions

    Science.gov (United States)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-04-01

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. Applying linear stability theory, we present the linear stability analysis of the new model. Through nonlinear analysis, the KdV-Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving outperforms timid driving because the aggressive driver adjusts his speed promptly according to the leading car's speed. The key result of this new model is that timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship of energy consumption between aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption.

  7. Model of Environmental Development of the Urbanized Areas: Accounting of Ecological and other Factors

    Science.gov (United States)

    Abanina, E. N.; Pandakov, K. G.; Agapov, D. A.; Sorokina, Yu V.; Vasiliev, E. H.

    2017-05-01

    Modern cities and towns are often characterized by poor administration, which can lead to environmental degradation, growing poverty, declining economic growth, and social isolation. In these circumstances it is important to conduct fresh research into new ways of achieving sustainable development of administrative districts. Such development of urban areas depends on many interdependent factors: ecological, economic, and social. In this article we present theoretical aspects of forming a model of environmental progress of urbanized areas. We propose a model containing four levels: the natural resource capacity of the territory, its social features, its economic growth, and human impact. We describe the interrelations between the elements of the model. A programme of environmental development for a city is offered that could be applied to any urban area.

  8. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    Energy Technology Data Exchange (ETDEWEB)

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments, including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axi-symmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  9. Does Don Fisher's high-pressure manifold model account for phloem transport and resource partitioning?

    Science.gov (United States)

    Patrick, John W

    2013-01-01

    The pressure flow model of phloem transport envisaged by Münch (1930) has gained wide acceptance. Recently, however, the model has been questioned on structural and physiological grounds. For instance, sub-structures of sieve elements may reduce their hydraulic conductances to levels that impede flow rates of phloem sap, and the observed magnitudes of pressure gradients to drive flow along sieve tubes could be inadequate in tall trees. A variant of the Münch pressure flow model, the high-pressure manifold model of phloem transport introduced by Donald Fisher, may serve to reconcile at least some of these questions. To this end, key predicted features of the high-pressure manifold model of phloem transport are evaluated against current knowledge of the physiology of phloem transport. These features include: (1) an absence of significant gradients in axial hydrostatic pressure in sieve elements from collection to release phloem, accompanied by transport properties of sieve elements that underpin this outcome; (2) symplasmic pathways of phloem unloading into sink organs impose a major constraint over bulk flow rates of resources translocated through the source-path-sink system; (3) hydraulic conductances of plasmodesmata, linking sieve elements with surrounding phloem parenchyma cells, are sufficient to support and also regulate bulk flow rates exiting from sieve elements of release phloem. The review identifies strong circumstantial evidence that resource transport through the source-path-sink system is consistent with the high-pressure manifold model of phloem transport. The analysis then moves to exploring mechanisms that may link demand for resources, by cells of meristematic and expansion/storage sinks, with plasmodesmal conductances of release phloem. The review concludes with a brief discussion of how these mechanisms may offer novel opportunities to enhance crop biomass yields.

  10. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jones-Smith, Jessica C; Igusa, Takeru

    2017-01-01

    Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.
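Income elasticities of demand like those simulated in the study can be estimated from two consumption-income observations with the midpoint (arc) formula; the numbers used below are invented for illustration and are not the study's data.

```python
def arc_income_elasticity(q0, q1, i0, i1):
    """Arc (midpoint) income elasticity of demand: the percent change in
    quantity consumed per percent change in income, with midpoint bases so
    the result is symmetric in the two observations."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_i = (i1 - i0) / ((i0 + i1) / 2)
    return pct_q / pct_i

# A near-zero elasticity, as reported for fruit and fat in this population,
# means consumption barely responds when income (or food budget) changes.
fruit_elasticity = arc_income_elasticity(10.0, 10.2, 1000.0, 1200.0)
```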

  11. Correcting for systematic effects in ground-based photographic proper motions: The Southern Proper Motion Program as a case study

    Science.gov (United States)

    van Altena, William F.; Girard, T. M.; Platais, I.; Kozhurina-Platais, V.; López, C. E.

    The derivation of accurate positions and proper motions from ground-based photographic materials requires the minimization of systematic errors due to inaccurate modeling of the telescopes' field-of-view and the magnitude equation. We describe the procedures that have been developed for the Southern Proper Motions Program (SPM) to deal with these important problems. The SPM is based on photographic plates taken at our Carlos Cesco Observatory at El Leoncito, Argentina and will yield absolute proper motions and positions to magnitude B approximately 19 for approximately 1 million stars south of declination -20 degrees. The SPM is a joint program between the Yale Southern Observatory and the Universidad Nacional de San Juan, Argentina. The SPM Catalog 2.0, which is the current version covering the -25 to -40 degree declination zones, provides positions, absolute proper motions, and photographic BV photometry for over 320,000 stars and galaxies. Stars cover the magnitude range 5 astrom/. Our website contains several useful plots showing the sky coverage, the error distribution, a quick comparison with the Hipparcos proper motions, etc. We would appreciate your comments on the SPM 2.0 and our Web page.

  12. Mathematical modeling of pigment dispersion taking into account the full agglomerate particle size distribution

    DEFF Research Database (Denmark)

    Kiil, Søren

    2017-01-01

    particle size distribution was simulated. Data from two previous experimental investigations were used for model validation. The first concerns two different yellow organic pigments dispersed in nitrocellulose/ethanol vehicles in a ball mill and the second a red organic pigment dispersed in a solvent-based....... The only adjustable parameter used was an apparent rate constant for the linear agglomerate erosion rate. Model simulations, at selected values of time, for the full agglomerate particle size distribution were in good qualitative agreement with the measured values. A quantitative match of the experimental...

  13. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia

    Directory of Open Access Journals (Sweden)

    Supriyati

    2015-12-01

    Full Text Available Bank Indonesia demands that the national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks' financial statements submitted to Bank Indonesia have become the basis for determining the status of their soundness. In fact, banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. However, for the users of financial statements the effect may differ, for example in the value of the company, the length of the financial audit, and aspects of tax evasion by the banks. This study tries to find out (1) the effect of GCG on earnings management, (2) the effect of earnings management on company value, the audit report lag, and taxation, and (3) the effect of the audit report lag on corporate value and taxation. This is a quantitative study with data collected from bank financial statements, GCG implementation reports, and the banks' annual reports of 2003-2013. There were 41 banks listed on the Indonesia Stock Exchange, selected by purposive sampling. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the financial statements' presentation to the public. This research is expected to provide managerial implications for considering the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks' competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  14. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia (P.197-207

    Directory of Open Access Journals (Sweden)

    Supriyati Supriyati

    2017-01-01

    Full Text Available Bank Indonesia demands that the national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks' financial statements submitted to Bank Indonesia have become the basis for determining the status of their soundness. In fact, banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. However, for the users of financial statements the effect may differ, for example in the value of the company, the length of the financial audit, and aspects of tax evasion by the banks. This study tries to find out (1) the effect of GCG on earnings management, (2) the effect of earnings management on company value, the audit report lag, and taxation, and (3) the effect of the audit report lag on corporate value and taxation. This is a quantitative study with data collected from bank financial statements, GCG implementation reports, and the banks' annual reports of 2003-2013. There were 41 banks listed on the Indonesia Stock Exchange, selected by purposive sampling. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the financial statements' presentation to the public. This research is expected to provide managerial implications for considering the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks' competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  15. Strategy Guideline. Proper Water Heater Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, M. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Springer, D. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); German, A. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Staller, J. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Zhang, Y. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States)

    2015-04-09

    This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.

  16. Strategy Guideline: Proper Water Heater Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, M. [Alliance for Residential Building Innovation, Davis, CA (United States); Springer, D. [Alliance for Residential Building Innovation, Davis, CA (United States); German, A. [Alliance for Residential Building Innovation, Davis, CA (United States); Staller, J. [Alliance for Residential Building Innovation, Davis, CA (United States); Zhang, Y. [Alliance for Residential Building Innovation, Davis, CA (United States)

    2015-04-01

    This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.

  17. The Essentials of Proper Wine Service.

    Science.gov (United States)

    Manago, Gary H.

    This instructional unit was designed to assist the food services instructor and/or the restaurant manager in training students and/or staff in the proper procedure for serving wines to guests. The lesson plans included in this unit focus on: (1) the different types of wine glasses and their uses; (2) the parts of a wine glass; (3) the proper…

  18. Isometry groups of proper metric spaces

    CERN Document Server

    Niemiec, Piotr

    2012-01-01

    Given a locally compact Polish space X, a necessary and sufficient condition for a group G of homeomorphisms of X to be the full isometry group of (X,d) for some proper metric d on X is given. It is shown that every locally compact Polish group G acts freely on GxY as the full isometry group of GxY with respect to a certain proper metric on GxY, where Y is an arbitrary locally compact Polish space with (card(G),card(Y)) different from (1,2). Locally compact Polish groups which act effectively and almost transitively on complete metric spaces as full isometry groups are characterized. Locally compact Polish non-Abelian groups on which every left invariant metric is automatically right invariant are characterized and fully classified. It is demonstrated that for every locally compact Polish space X having more than two points the set of proper metrics d such that Iso(X,d) = {id} is dense in the space of all proper metrics on X.

  19. A proper subclass of Maclane's class

    Directory of Open Access Journals (Sweden)

    May Hamdan

    1999-01-01

    paper, we define a subclass ℛ of consisting of those functions that have asymptotic values at a dense subset of the unit circle reached along rectifiable asymptotic paths. We also show that the class ℛ is a proper subclass of by constructing a function f∈ that admits no asymptotic paths of finite length.

  20. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards; Accountability e regime de nao proliferacao nuclear: uma avaliacao do modelo de vigilancia mutua brasileiro-argentina de salvaguardas nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Xavier, Roberto Salles

    2014-08-01

    The regimes of accountability, the organizations of global governance, and the institutional arrangements of the global governance of nuclear non-proliferation and of the Brazilian-Argentine Mutual Vigilance of Nuclear Safeguards are the subject of this research. The starting point is the importance of the institutional model of global governance for the effective control of the non-proliferation of nuclear weapons. In this context, the research investigates how the current arrangements of international nuclear non-proliferation are structured and how the Brazilian-Argentine Mutual Vigilance of Nuclear Safeguards model performs in relation to the accountability regimes of global governance. To that end, the current literature was surveyed along three theoretical dimensions: accountability, global governance, and global governance organizations. The research method was the case study, and the data were treated using content analysis. The results made it possible to establish an evaluation model based on accountability mechanisms; to assess how the Brazilian-Argentine Mutual Vigilance of Nuclear Safeguards model behaves with respect to the proposed accountability regime; and to measure the degree to which regional arrangements that work with systems of global governance can strengthen those international systems. (author)

  1. An Exemplar-Model Account of Feature Inference from Uncertain Categorizations

    Science.gov (United States)

    Nosofsky, Robert M.

    2015-01-01

    In a highly systematic literature, researchers have investigated the manner in which people make feature inferences in paradigms involving uncertain categorizations (e.g., Griffiths, Hayes, & Newell, 2012; Murphy & Ross, 1994, 2007, 2010a). Although researchers have discussed the implications of the results for models of categorization and…

  2. Teachers' Conceptions of Assessment in Chinese Contexts: A Tripartite Model of Accountability, Improvement, and Irrelevance

    Science.gov (United States)

    Brown, Gavin T. L.; Hui, Sammy K. F.; Yu, Flora W. M.; Kennedy, Kerry J.

    2011-01-01

    The beliefs teachers have about assessment influence classroom practices and reflect cultural and societal differences. This paper reports the development of a new self-report inventory to examine beliefs teachers in Hong Kong and southern China contexts have about the nature and purpose of assessment. A statistically equivalent model for Hong…

  3. Accounting for false-positive acoustic detections of bats using occupancy models

    Science.gov (United States)

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    1. Acoustic surveys have become a common survey method for bats and other vocal taxa. Previous work shows that bat echolocation may be misidentified, but common analytic methods, such as occupancy models, assume that misidentifications do not occur. Unless rare, such misidentifications could lead to incorrect inferences with significant management implications.
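A minimal sketch of how a false-positive occupancy model handles misidentification: each site's detection history can arise either from an occupied site (true detections) or from an unoccupied one (false detections from misidentified calls). Parameter names below follow common usage in this literature, not necessarily the authors' exact notation.

```python
def site_likelihood(history, psi, p11, p10):
    """Likelihood of one site's 0/1 detection history when detections can be
    false positives: psi is the occupancy probability, p11 the per-survey
    detection probability at occupied sites, p10 the false-positive
    detection probability at unoccupied sites."""
    occ = unocc = 1.0
    for y in history:
        occ *= p11 if y else (1.0 - p11)
        unocc *= p10 if y else (1.0 - p10)
    # Marginalise over the unknown true occupancy state of the site.
    return psi * occ + (1.0 - psi) * unocc
```

Setting p10 = 0 recovers the standard occupancy likelihood; fitting with p10 forced to zero when misidentifications actually occur is exactly the source of the biased inference the authors warn about.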

  4. A Mathematical Model Accounting for the Organisation in Multiplets of the Genetic Code

    OpenAIRE

    Sciarrino, A.

    2001-01-01

    Requiring stability of the genetic code against translation errors, modelled by suitable mathematical operators in the crystal basis model of the genetic code, the main features of the organisation in multiplets of the mitochondrial and of the standard genetic code are explained.

  5. Practical Model for First Hyperpolarizability Dispersion Accounting for Both Homogeneous and Inhomogeneous Broadening Effects.

    Science.gov (United States)

    Campo, Jochen; Wenseleers, Wim; Hales, Joel M; Makarov, Nikolay S; Perry, Joseph W

    2012-08-16

    A practical yet accurate dispersion model for the molecular first hyperpolarizability β is presented, incorporating both homogeneous and inhomogeneous line broadening because these affect the β dispersion differently, even if they are indistinguishable in linear absorption. Consequently, combining the absorption spectrum with one free shape-determining parameter Γinhom, the inhomogeneous line width, turns out to be necessary and sufficient to obtain a reliable description of the β dispersion, requiring no information on the homogeneous (including vibronic) and inhomogeneous line broadening mechanisms involved, providing an ideal model for practical use in extrapolating experimental nonlinear optical (NLO) data. The model is applied to the efficient NLO chromophore picolinium quinodimethane, yielding an excellent fit of the two-photon resonant wavelength-dependent data and a dependable static value β0 = 316 × 10⁻³⁰ esu. Furthermore, we show that including a second electronic excited state in the model does yield an improved description of the NLO data at shorter wavelengths but has only limited influence on β0.

  6. Fluid Simulations with Atomistic Resolution: Multiscale Model with Account of Nonlocal Momentum Transfer

    NARCIS (Netherlands)

    Svitenkov, A.I.; Chivilikhin, S.A.; Hoekstra, A.G.; Boukhanovsky, A.V.

    2015-01-01

    Nano- and microscale flow phenomena turn out to be highly non-trivial for simulation and require the use of heterogeneous modeling approaches. While the continuum Navier-Stokes equations and related boundary conditions quickly break down at those scales, various direct simulation methods and hybrid

  7. Accountability in Training Transfer: Adapting Schlenker's Model of Responsibility to a Persistent but Solvable Problem

    Science.gov (United States)

    Burke, Lisa A.; Saks, Alan M.

    2009-01-01

    Decades have been spent studying training transfer in organizational environments in recognition of a transfer problem in organizations. Theoretical models of various antecedents, empirical studies of transfer interventions, and studies of best practices have all been advanced to address this continued problem. Yet a solution may not be so…

  8. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    Science.gov (United States)

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  9. A coupled surface/subsurface flow model accounting for air entrapment and air pressure counterflow

    DEFF Research Database (Denmark)

    Delfs, Jens Olaf; Wang, Wenqing; Kalbacher, Thomas

    2013-01-01

    This work introduces the soil air system into integrated hydrology by simulating the flow processes and interactions of surface runoff, soil moisture and air in the shallow subsurface. The numerical model is formulated as a coupled system of partial differential equations for hydrostatic (diffusive...

  10. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    Science.gov (United States)

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  11. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    NARCIS (Netherlands)

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts.

  12. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    NARCIS (Netherlands)

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts.

  13. Thermodynamic Modeling of Developed Structural Turbulence Taking into Account Fluctuations of Energy Dissipation

    Science.gov (United States)

    Kolesnichenko, A. V.

    2004-03-01

    A thermodynamic approach to the construction of a phenomenological macroscopic model of developed turbulence in a compressible fluid is considered with regard for the formation of space-time dissipative structures. A set of random variables is introduced into the model as internal parameters of the turbulent-chaos subsystem. This allows us to obtain, by methods of nonequilibrium thermodynamics, the kinetic Fokker-Planck equation in configuration space. This equation serves to determine the temporal evolution of the conditional probability distribution function of structural parameters pertaining to the cascade process of fragmentation of large-scale eddies and temperature inhomogeneities, and to analyze Markovian stochastic processes of transition from one nonequilibrium stationary turbulent-motion state to another as a result of successive losses of stability caused by changes in the governing parameters. An alternative method for investigating the mechanisms of such transitions, based on a stochastic Langevin-type equation intimately related to the derived kinetic equation, is also considered. Some postulates and physical and mathematical assumptions used in the thermodynamic model of structured turbulence are discussed in detail. In particular, using the deterministic transport equation for conditional means, we consider the cardinal problem of the developed approach: the possibility of the existence of asymptotically stable stationary states of the turbulent-chaos subsystem. Also proposed is a nonequilibrium thermodynamic potential for internal coordinates, which extends the well-known Boltzmann-Planck relationship for equilibrium states to the nonequilibrium stationary states of the representing ensemble. This potential is shown to be a Lyapunov function for such states. The relation between the internal intermittency in the inertial interval of scales and the fluctuations of the energy dissipation is also explored. This study is aimed at
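The Langevin-type alternative mentioned above can be illustrated with a one-dimensional Euler-Maruyama integration; this is a generic Ornstein-Uhlenbeck example of the Langevin/Fokker-Planck correspondence, not the paper's actual turbulence equations.

```python
import math
import random

def euler_maruyama(drift, sigma, x0=0.0, dt=1e-2, steps=800, rng=None):
    """Integrate the Langevin equation dx = drift(x) dt + sigma dW.
    The stationary density sampled by many such trajectories is what the
    corresponding Fokker-Planck equation describes in configuration space."""
    rng = rng or random.Random(0)
    x = x0
    for _ in range(steps):
        x += drift(x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

For the linear restoring drift(x) = -x with sigma = 1, the stationary variance is sigma**2 / 2, which the sample variance of many trajectory end-states approaches.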

  14. MODIS Inundation Estimate Assimilation into Soil Moisture Accounting Hydrologic Model: A Case Study in Southeast Asia

    Directory of Open Access Journals (Sweden)

    Ari Posner

    2014-11-01

    Full Text Available Flash Flood Guidance consists of indices that estimate the amount of rain of a certain duration that is needed over a given small basin in order to cause minor flooding. Backwater catchment inundation from swollen rivers or regional groundwater inputs is not significant over the spatial and temporal scales of the majority of upland flash-flood-prone basins, so these effects are not considered. However, some lowland areas and flat terrain near large rivers experience standing water long after local precipitation has ceased. NASA produces an experimental product from MODIS that detects standing water. These observations were assimilated into the hydrologic model in order to more accurately represent soil moisture conditions within basins arising from sources of water outside the basin. Based on the upper soil water content, relations are used to derive an error estimate for the modeled soil saturation fraction, whereby the soil saturation fraction model state can be updated when a satellite-observed inundation is available. Model error estimates were used in a Monte Carlo ensemble forecast of soil water and flash flood potential. Numerical experiments with six months of data (July 2011-December 2011) showed that assimilating MODIS inundation data to correct soil moisture estimates increased the likelihood, relative to non-assimilated modeling, that bankfull flow would occur at catchment outlets for approximately 44% of basin-days during the study period. While this is a much more realistic representation of conditions, no actual events occurred during the time period to allow validation.
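The state update underlying such assimilation can be sketched as a gain-weighted nudge of each ensemble member's soil saturation fraction toward the satellite-implied value. This is a schematic Kalman-style scalar update, not the operational flash-flood-guidance code, and the error magnitudes in the usage example are invented.

```python
def assimilate_saturation(ensemble, obs, obs_err, model_err):
    """Nudge each ensemble member's soil saturation fraction (0-1) toward
    the satellite-implied observation obs, weighting by the relative error
    variances of model and observation (a scalar Kalman-style gain)."""
    gain = model_err ** 2 / (model_err ** 2 + obs_err ** 2)
    # Clamp to the physical range of a saturation fraction.
    return [min(1.0, max(0.0, s + gain * (obs - s))) for s in ensemble]
```

When MODIS reports standing water the implied saturation is near 1, pulling dry ensemble members upward and hence raising the simulated likelihood of bankfull flow.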

  15. A Semi-Empirical Model for Tilted-Gun Planar Magnetron Sputtering Accounting for Chimney Shadowing

    Science.gov (United States)

    Bunn, J. K.; Metting, C. J.; Hattrick-Simpers, J.

    2015-01-01

    Integrated computational materials engineering (ICME) approaches to predicting the composition and thickness profiles of sputtered thin-film samples are key to expediting materials exploration. Here, an ICME-based semi-empirical approach to modeling the thickness of thin-film samples deposited via magnetron sputtering is developed. Using Yamamura's dimensionless differential angular sputtering yield and a measured deposition rate at a single point in space for a single experimental condition, the model predicts the deposition profile from planar DC sputtering sources. The model includes corrections for off-center, tilted gun geometries as well as shadowing effects from the gun chimneys used in most state-of-the-art sputtering systems. The modeling algorithm was validated by comparing its results with experimental deposition rates obtained from a sputtering system utilizing sources with a multi-piece chimney assembly that consists of a lower ground shield and a removable gas chimney. Simulations were performed for gun-tilts ranging from 0° to 31.3° from the vertical, with and without the gas chimney installed. The results for the predicted and experimental angular dependence of the sputtering deposition rate were found to have an average magnitude of relative error of for a 0°-31.3° gun-tilt range without the gas chimney, and for a 17.7°-31.3° gun-tilt range with the gas chimney. The continuum nature of the model renders this approach reverse-optimizable, providing a rapid tool for assisting in the understanding of the synthesis-composition-property space of novel materials.

  16. Using state-and-transition modeling to account for imperfect detection in invasive species management

    Science.gov (United States)

    Frid, Leonardo; Holcombe, Tracy; Morisette, Jeffrey T.; Olsson, Aaryn D.; Brigham, Lindy; Bean, Travis M.; Betancourt, Julio L.; Bryan, Katherine

    2013-01-01

    Buffelgrass, a highly competitive and flammable African bunchgrass, is spreading rapidly across both urban and natural areas in the Sonoran Desert of southern and central Arizona. Damages include increased fire risk, losses in biodiversity, and diminished revenues and quality of life. Feasibility of sustained and successful mitigation will depend heavily on rates of spread, treatment capacity, and cost–benefit analysis. We created a decision support model for the wildland–urban interface north of Tucson, AZ, using a spatial state-and-transition simulation modeling framework, the Tool for Exploratory Landscape Scenario Analyses. We addressed the issues of undetected invasions, identifying potentially suitable habitat and calibrating spread rates, while answering questions about how to allocate resources among inventory, treatment, and maintenance. Inputs to the model include a state-and-transition simulation model to describe the succession and control of buffelgrass, a habitat suitability model, management planning zones, spread vectors, estimated dispersal kernels for buffelgrass, and maps of current distribution. Our spatial simulations showed that without treatment, buffelgrass infestations that started with as little as 80 ha (198 ac) could grow to more than 6,000 ha by the year 2060. In contrast, applying unlimited management resources could limit 2060 infestation levels to approximately 50 ha. The application of sufficient resources toward inventory is important because undetected patches of buffelgrass will tend to grow exponentially. In our simulations, areas affected by buffelgrass may increase substantially over the next 50 yr, but a large, upfront investment in buffelgrass control could reduce the infested area and overall management costs.

  17. Codon-substitution models to detect adaptive evolution that account for heterogeneous selective pressures among site classes.

    Science.gov (United States)

    Yang, Ziheng; Swanson, Willie J

    2002-01-01

    The nonsynonymous to synonymous substitution rate ratio (omega = d(N)/d(S)) provides a sensitive measure of selective pressure at the protein level, with omega values < 1, = 1, and > 1 indicating purifying selection, neutral evolution, and diversifying selection, respectively. Maximum likelihood models of codon substitution developed recently account for variable selective pressures among amino acid sites by employing a statistical distribution for the omega ratio among sites. Those models, called random-sites models, are suitable when we do not know a priori which sites are under which kind of selective pressure. Sometimes prior information (such as the tertiary structure of the protein) might be available to partition sites in the protein into different classes, which are expected to be under different selective pressures. It is then sensible to use such information in the model. In this paper, we implement maximum likelihood models for prepartitioned data sets, which account for the heterogeneity among site partitions by using different omega parameters for the partitions. The models, referred to as fixed-sites models, are also useful for combined analysis of multiple genes from the same set of species. We apply the models to data sets of the major histocompatibility complex (MHC) class I alleles from human populations and of the abalone sperm lysin genes. Structural information is used to partition sites in MHC into two classes: those in the antigen recognition site (ARS) and those outside. Positive selection is detected in the ARS by the fixed-sites models. Similarly, sites in lysin are classified into the buried and solvent-exposed classes according to the tertiary structure, and positive selection was detected at the solvent-exposed sites. The random-sites models identified a number of sites under positive selection in each data set, confirming and elaborating the results of the fixed-sites models. 
The analysis demonstrates the utility of the fixed-sites models, as well as
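Nested codon models of this kind are typically compared with a likelihood-ratio test. As a minimal illustration (not the authors' code; the log-likelihood values below are hypothetical), the chi-square tail for a one-parameter difference can be computed with the standard library alone:

```python
import math

def lrt_pvalue_df1(lnL_null, lnL_alt):
    """Likelihood-ratio test for two nested models differing by one free
    parameter: 2*(lnL_alt - lnL_null) is referred to a chi-square
    distribution with df = 1, whose upper tail equals erfc(sqrt(stat/2))."""
    stat = 2.0 * (lnL_alt - lnL_null)
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# Hypothetical log-likelihoods: one-omega model vs. two-partition model
stat, p = lrt_pvalue_df1(lnL_null=-1234.6, lnL_alt=-1228.1)
print(f"2*delta-lnL = {stat:.1f}, p = {p:.5f}")
```

A small p-value would favor the richer model, i.e. heterogeneous selective pressure among the site partitions.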

  18. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining whether the Sun appears to rise or set at all. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from the temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect these data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
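As background for how refraction enters such predictions: the standard almanac convention treats the Sun as rising or setting when its center reaches an altitude of about -0.8333° (mean refraction of 34 arcminutes plus a solar semidiameter of 16 arcminutes). A minimal sketch, not the project's model, assuming a fixed declination and the standard refraction constant:

```python
import math

def daylight_hours(lat_deg, decl_deg, h0_deg=-0.8333):
    """Approximate day length (hours) for a latitude and solar declination.
    h0_deg is the altitude of the Sun's centre at rise/set; the standard
    value -0.8333 deg bundles mean refraction (34') and semidiameter (16')."""
    lat, decl, h0 = map(math.radians, (lat_deg, decl_deg, h0_deg))
    cos_H = (math.sin(h0) - math.sin(lat) * math.sin(decl)) / (
        math.cos(lat) * math.cos(decl))
    cos_H = max(-1.0, min(1.0, cos_H))   # clamp for polar day / polar night
    H = math.degrees(math.acos(cos_H))   # semi-diurnal arc in degrees
    return 2.0 * H / 15.0                # 15 deg of hour angle per hour

# At the equinox (declination ~0), refraction lengthens the day slightly:
print(daylight_hours(45.0, 0.0))              # a bit more than 12 h
print(daylight_hours(45.0, 0.0, h0_deg=0.0))  # exactly 12 h geometrically
```

Changing h0_deg by a few hundredths of a degree shifts the rise/set times by the minute-scale errors the abstract describes; at high latitudes the clamped cos_H term shows how a small refraction change can flip between the Sun rising or not.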

  19. Tree biomass in the Swiss landscape: nationwide modelling for improved accounting for forest and non-forest trees.

    Science.gov (United States)

    Price, B; Gomez, A; Mathys, L; Gardi, O; Schellenberger, A; Ginzler, C; Thürig, E

    2017-03-01

    Trees outside forest (TOF) can perform a variety of social, economic and ecological functions including carbon sequestration. However, detailed quantification of tree biomass is usually limited to forest areas. Taking advantage of structural information available from stereo aerial imagery and airborne laser scanning (ALS), this research models tree biomass using national forest inventory data and linear least-squares regression, and applies the model both inside and outside of forest to create a nationwide model for tree biomass (above ground and below ground). Validation of the tree biomass model against TOF data within settlement areas shows relatively low model performance (R² of 0.44) but still a considerable improvement on current biomass estimates used for greenhouse gas inventory and carbon accounting. We demonstrate an efficient and easily implementable approach to modelling tree biomass across a large heterogeneous nationwide area. The model offers significant opportunity for improved estimates on land use combination categories (CC) where tree biomass has either not been included or only roughly estimated until now. The ALS biomass model also offers the advantage of providing greater spatial resolution and greater within-CC spatial variability compared to the current nationwide estimates.

  20. Modelling and experimental validation for off-design performance of the helical heat exchanger with LMTD correction taken into account

    Energy Technology Data Exchange (ETDEWEB)

    Phu, Nguyen Minh; Trinh, Nguyen Thi Minh [Vietnam National University, Ho Chi Minh City (Viet Nam)

    2016-07-15

    Today the helical-coil heat exchanger is widely employed due to its considerable advantages. In this study, a mathematical model was established to predict the off-design performance of the helical heat exchanger. The model was based on the LMTD and ε-NTU methods, where an LMTD correction factor was taken into account to increase accuracy. An experimental apparatus was set up to validate the model. Results showed that the errors in thermal duty, outlet hot fluid temperature, outlet cold fluid temperature, shell-side pressure drop, and tube-side pressure drop were respectively ±5%, ±1%, ±1%, ±5% and ±2%. Diagrams of dimensionless operating parameters and a regression function were also presented as design maps, a fast calculator for use in the design and operation of the exchanger. The study is expected to be a good tool to estimate off-design conditions of single-phase helical heat exchangers.
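The textbook relation underlying an LMTD correction is Q = U·A·F·ΔT_lm, where the factor F ≤ 1 adjusts the counter-flow log-mean temperature difference for the actual flow arrangement. A minimal sketch with hypothetical numbers (U, A, F, and the terminal temperatures are illustrative, not taken from the paper):

```python
import math

def lmtd(dT1, dT2):
    """Log-mean temperature difference for terminal differences dT1, dT2 (K)."""
    if abs(dT1 - dT2) < 1e-12:
        return dT1                        # limit as dT1 -> dT2
    return (dT1 - dT2) / math.log(dT1 / dT2)

def heat_duty(U, A, F, dT1, dT2):
    """Q = U * A * F * LMTD, with F < 1 correcting the counter-flow LMTD
    for the actual (e.g. helical-coil) flow arrangement."""
    return U * A * F * lmtd(dT1, dT2)

# Hypothetical case: hot stream 90 -> 60 C, cold stream 20 -> 45 C,
# so the counter-flow terminal differences are 45 K and 40 K.
dT1, dT2 = 90 - 45, 60 - 20
Q = heat_duty(U=800.0, A=2.5, F=0.95, dT1=dT1, dT2=dT2)
print(f"LMTD = {lmtd(dT1, dT2):.2f} K, Q = {Q / 1000:.1f} kW")
```

Off-design prediction then amounts to re-evaluating U, F, and the terminal temperatures at the new operating point.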

  1. Where's the problem? Considering Laing and Esterson's account of schizophrenia, social models of disability, and extended mental disorder.

    Science.gov (United States)

    Cooper, Rachel

    2017-08-01

    In this article, I compare and evaluate R. D. Laing and A. Esterson's account of schizophrenia as developed in Sanity, Madness and the Family (1964), social models of disability, and accounts of extended mental disorder. These accounts claim that some putative disorders (schizophrenia, disability, certain mental disorders) should not be thought of as reflecting biological or psychological dysfunction within the afflicted individual, but instead as external problems (to be located in the family, or in the material and social environment). In this article, I consider the grounds on which such claims might be supported. I argue that problems should not be located within an individual putative patient in cases where there is some acceptable test environment in which there is no problem. A number of cases where such an argument can show that there is no internal disorder are discussed. I argue, however, that Laing and Esterson's argument-that schizophrenia is not within diagnosed patients-does not work. The problem with their argument is that they fail to show that the diagnosed women in their study function adequately in any environment.

  2. Method for determining the duration of construction based on evolutionary modeling, taking into account random organizational expectations

    Directory of Open Access Journals (Sweden)

    Alekseytsev Anatoliy Viktorovich

    2016-10-01

    Full Text Available One of the problems of construction planning is failure to meet time constraints and the resulting increase in workflow duration. In recent years, information technologies have been used efficiently to estimate the duration of construction. The article considers the problem of optimally estimating the duration of construction while taking possible organizational expectations into account. To solve this problem, an iterative evolutionary-modeling scheme is developed in which random values of organizational expectations are used as variable parameters. Adjustable genetic operators are used to improve the efficiency of the search for solutions. The reliability of the proposed approach is illustrated by the example of forming construction schedules for monolithic building foundations, taking into account possible disruptions in the supply of concrete and reinforcement cages. Application of the presented methodology enables the automated generation of several alternative construction schedules in accordance with standard or directive durations. This computational procedure also has the prospect of taking into account construction downtime due to weather, accidents related to construction machinery breakdowns, or local emergency collapses of the structures being erected.
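The general idea of evolutionary search over schedules subject to random organizational delays can be sketched as follows. This is a toy (1+1) evolutionary scheme, not the authors' algorithm: the activity durations, the two-crew assignment genome, and the uniform 0-2 day delays (standing in for late concrete or reinforcement-cage deliveries) are all illustrative assumptions:

```python
import random

random.seed(7)

durations = [5, 3, 8, 6, 2, 7, 4, 9]   # base activity durations in days (hypothetical)

def expected_makespan(assign, n_scenarios=200):
    """Fitness: makespan of a two-crew schedule averaged over random
    scenarios in which every activity suffers an organizational delay
    (e.g. a late concrete delivery) of 0-2 days."""
    total = 0.0
    for _ in range(n_scenarios):
        crew = [0.0, 0.0]
        for job, c in enumerate(assign):
            crew[c] += durations[job] + random.uniform(0.0, 2.0)
        total += max(crew)               # schedule finishes when the slower crew does
    return total / n_scenarios

def evolve(n_gen=300):
    """Minimal (1+1) evolutionary search with bit-flip mutation."""
    best = [random.randint(0, 1) for _ in durations]
    best_fit = expected_makespan(best)
    for _ in range(n_gen):
        child = [1 - b if random.random() < 0.2 else b for b in best]
        fit = expected_makespan(child)
        if fit < best_fit:               # keep the child only if it improves
            best, best_fit = child, fit
    return best, best_fit

assign, fit = evolve()
print("crew assignment:", assign, "expected makespan: %.1f days" % fit)
```

The full method additionally uses adjustable genetic operators and precedence constraints; this sketch only shows how random expectations enter the fitness evaluation.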

  3. Accounting for non-linear chemistry of ship plumes in the GEOS-Chem global chemistry transport model

    Directory of Open Access Journals (Sweden)

    G. C. M. Vinken

    2011-11-01

    Full Text Available We present a computationally efficient approach to account for the non-linear chemistry occurring during the dispersion of ship exhaust plumes in a global 3-D model of atmospheric chemistry (GEOS-Chem. We use a plume-in-grid formulation where ship emissions age chemically for 5 h before being released in the global model grid. Besides reducing the original ship NOx emissions in GEOS-Chem, our approach also releases the secondary compounds ozone and HNO3, produced during the 5 h after the original emissions, into the model. We applied our improved method and also the widely used "instant dilution" approach to a 1-yr GEOS-Chem simulation of global tropospheric ozone-NOx-VOC-aerosol chemistry. We also ran simulations with the standard model (emitting 10 molecules O3 and 1 molecule HNO3 per ship NOx molecule, and a model without any ship emissions at all. The model without any ship emissions simulates up to 0.1 ppbv (or 50% lower NOx concentrations over the North Atlantic in July than our improved GEOS-Chem model. "Instant dilution" overestimates NOx concentrations by 0.1 ppbv (50% and ozone by 3–5 ppbv (10–25%, compared to our improved model over this region. These conclusions are supported by comparing simulated and observed NOx and ozone concentrations in the lower troposphere over the Pacific Ocean. The comparisons show that the improved GEOS-Chem model simulates NOx concentrations in between the instant dilution model and the model without ship emissions, which results in lower O3 concentrations than the instant dilution model. The relative differences in simulated NOx and ozone between our improved approach and instant dilution are smallest over strongly polluted seas (e.g. North Sea, suggesting that accounting for in-plume chemistry is most relevant for pristine marine areas.

  4. Listening to food workers: Factors that impact proper health and hygiene practice in food service.

    Science.gov (United States)

    Clayton, Megan L; Clegg Smith, Katherine; Neff, Roni A; Pollack, Keshia M; Ensminger, Margaret

    2015-01-01

    Foodborne disease is a significant problem worldwide, and research exploring sources of outbreaks indicates a pronounced role for food workers' improper health and hygiene practice. This study investigated food workers' perceptions of the factors that impact proper food safety practice, through interviews with food service workers in Baltimore, MD, USA discussing food safety practices and the factors that affect their implementation in the workplace. A social ecological model organizes multiple levels of influence on health and hygiene behavior. Issues raised by interviewees include factors across the five levels of the social ecological model, and confirm findings from previous work. Interviews also reveal many factors not highlighted in prior work, including issues with food service policies and procedures, working conditions (e.g., pay and benefits), community resources, and state and federal policies. Food safety interventions should adopt an ecological orientation that accounts for factors at multiple levels, including workers' social and structural context, that impact food safety practice.

  5. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    Energy Technology Data Exchange (ETDEWEB)

    Beckon, William N., E-mail: William_Beckon@fws.gov

    2016-07-15

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
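The lag-selection idea in the abstract, testing an array of candidate lags and keeping the one that gives the best regression between exposure and tissue concentration, can be sketched on synthetic data (this is not the study's code; the series and the 3-step lag are fabricated for illustration):

```python
import math

def best_lag(exposure, tissue, max_lag):
    """Scan candidate lags and return the one that maximizes the squared
    correlation between lagged exposure and tissue concentration."""
    def r_squared(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy * sxy / (sxx * syy)

    scores = {}
    for lag in range(max_lag + 1):
        x = exposure[:len(exposure) - lag]   # exposure leads...
        y = tissue[lag:]                     # ...tissue response by `lag` steps
        scores[lag] = r_squared(x, y)
    return max(scores, key=scores.get), scores

# Synthetic series: tissue tracks water concentration with a 3-step lag
exposure = [math.sin(0.3 * t) for t in range(60)]
tissue = [0.8 * exposure[t - 3] if t >= 3 else 0.0 for t in range(60)]
best, scores = best_lag(exposure, tissue, max_lag=6)
print("best lag:", best)
```

In the study the same scan is run over physically meaningful lags (up to more than a year for piscivorous fish), with the regression then refit at the selected lag.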

  6. Accounting for Long Term Sediment Storage in a Watershed Scale Numerical Model for Suspended Sediment Routing

    Science.gov (United States)

    Keeler, J. J.; Pizzuto, J. E.; Skalak, K.; Karwan, D. L.; Benthem, A.; Ackerman, T. R.

    2015-12-01

    Quantifying the delivery of suspended sediment from upland sources to downstream receiving waters is important for watershed management, but current routing models fail to accurately represent lag times in delivery resulting from sediment storage. In this study, we route suspended sediment tagged by a characteristic tracer using a 1-dimensional model that implicitly includes storage and remobilization processes and timescales. From an input location where tagged sediment is added, the model advects suspended sediment downstream at the velocity of the stream (adjusted for the intermittency of transport events). Deposition rates are specified by the fraction of the suspended load stored per kilometer of downstream transport (presumably available from a sediment budget). Tagged sediment leaving storage is evaluated from a convolution equation based on the probability distribution function (pdf) of sediment storage waiting times; this approach avoids the difficulty of accurately representing complex processes of sediment remobilization from floodplain and other deposits. To illustrate the role of storage on sediment delivery, we compare exponential and bounded power-law waiting time pdfs with identical means of 94 years. In both cases, the median travel time for sediment to reach the depocenter in fluvial systems less than 40km long is governed by in-channel transport and is unaffected by sediment storage. As the channel length increases, however, the median sediment travel time reflects storage rather than in-channel transport; travel times do not vary significantly between the two different waiting time functions. At distances of 50, 100, and 200 km, the median travel time for suspended sediment is 36, 136, and 325 years, orders of magnitude slower than travel times associated with in-channel transport. These computations demonstrate that storage can be neglected for short rivers, but for longer systems, storage controls the delivery of suspended sediment.
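The qualitative role of storage waiting times in delaying sediment delivery can be illustrated with a Monte Carlo sketch. All parameters here are assumptions for illustration (advection velocity, per-kilometer storage probability, and an exponential waiting-time pdf using the 94-yr mean quoted above); the paper itself uses a convolution formulation with exponential and bounded power-law pdfs:

```python
import random
import statistics

random.seed(1)

def travel_times(distance_km, velocity_km_per_yr=5.0, p_store_per_km=0.01,
                 mean_wait_yr=94.0, n_particles=2000):
    """Monte Carlo sketch: each particle advects downstream and, in every
    kilometer, is stored with a small probability; storage adds an
    exponentially distributed waiting time before transport resumes."""
    times = []
    for _ in range(n_particles):
        t = distance_km / velocity_km_per_yr          # in-channel transport time
        for _ in range(distance_km):
            if random.random() < p_store_per_km:
                t += random.expovariate(1.0 / mean_wait_yr)
        times.append(t)
    return times

for L in (40, 200):
    med = statistics.median(travel_times(L))
    print(f"{L} km: median travel time ~ {med:.0f} yr")
</n```

Even this crude sketch reproduces the paper's qualitative result: for short reaches the median particle never enters storage and the median travel time is the in-channel time, while for long reaches storage dominates the median by an order of magnitude or more.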

  7. An extended macro traffic flow model accounting for multiple optimal velocity functions with different probabilities

    Science.gov (United States)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-08-01

    Because the maximum velocities and safe headway distances of different vehicles are not exactly the same, an extended macro model of traffic flow considering multiple optimal velocity functions with probabilities is proposed in this paper. By means of linear stability theory, the new model's linear stability condition considering multiple optimal velocities with probabilities is obtained. The KdV-Burgers equation is derived through nonlinear analysis to describe the propagating behavior of the traffic density wave near the neutral stability line. Numerical simulations of the influences of multiple maximum velocities and multiple safety distances on the model's stability and traffic capacity are carried out. The cases of two different maximum speeds with the same safe headway distance, two different safe headway distances with the same maximum speed, and two different maximum velocities with two different time gaps are all explored by numerical simulations. The first cases demonstrate that when the proportion of vehicles with a larger vmax increases, the traffic tends to be unstable, which also means that sudden acceleration and braking are not conducive to traffic stability and more likely to result in the stop-and-go phenomenon. The second cases show that when the proportion of vehicles with greater safety spacing increases, the traffic tends to be unstable, which also means that overly cautious assumptions or weak driving skill are not conducive to traffic stability. The last cases indicate that an increase in maximum speed is not conducive to traffic stability, while a reduction of the safe headway distance is conducive to traffic stability. Numerical simulation manifests that mixed driving and traffic diversion have no effect on the traffic capacity when traffic density is low or heavy. 
Numerical results also show that mixed driving should be chosen to increase the traffic capacity when the traffic density is lower, while traffic diversion should be chosen to increase the traffic capacity when

  8. Regional input-output models and the treatment of imports in the European System of Accounts

    OpenAIRE

    Kronenberg, Tobias

    2011-01-01

    Input-output models are often used in regional science due to their versatility and their ability to capture many of the distinguishing features of a regional economy. Input-output tables are available for all EU member countries, but they are hard to find at the regional level, since many regional governments lack the resources or the will to produce reliable, survey-based regional input-output tables. Therefore, in many cases researchers adopt nonsurvey techniques to derive regional input-o...

  9. Accounting for heaping in retrospectively reported event data - a mixture-model approach.

    Science.gov (United States)

    Bar, Haim Y; Lillard, Dean R

    2012-11-30

    When event data are retrospectively reported, more temporally distal events tend to get 'heaped' on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand-side variables. We develop a model-based approach to estimate the extent of heaping in the data and how it affects regression parameter estimates. We use smoking cessation data as a motivating example, but our method is general. It facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that collect, and potentially could collect, event data.
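The heaping phenomenon itself is easy to reproduce: if some respondents round event timings to whole years, reported durations pile up on multiples of twelve months. A minimal simulation with illustrative probabilities (this is not the authors' mixture model, which additionally estimates the heaping propensity from the data):

```python
import random

random.seed(3)

def heap(value, unit=12, p_heap=0.5):
    """With probability p_heap, report the value rounded to the nearest
    multiple of the reporting unit (whole years for month-scale data)."""
    if random.random() < p_heap:
        return unit * round(value / unit)
    return value

true_months = [random.randint(1, 120) for _ in range(5000)]
reported = [heap(m) for m in true_months]

frac_true = sum(m % 12 == 0 for m in true_months) / len(true_months)
frac_rep = sum(m % 12 == 0 for m in reported) / len(reported)
print(f"share on 12-month multiples: true {frac_true:.2f}, reported {frac_rep:.2f}")
```

The excess mass on multiples of the reporting unit is what a mixture model exploits to estimate how many reports were heaped.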

  10. Research on ice destruction under dynamic loading. Part 1. Modeling explosive loading of the ice cover taking temperature into account

    Directory of Open Access Journals (Sweden)

    Bogomolov Gennady N.

    2017-01-01

    Full Text Available In the research, the behavior of ice under shock and explosive loads is analyzed. Full-scale experiments were carried out. It is established that the results of 2013 practically coincide with the results of 2017, which is explained by the temperature of the formation of river ice. Two research objects are considered, including freshwater ice and river ice cover. The Taylor test was simulated numerically. The results of the Taylor test are presented. Ice is described by an elastoplastic model of continuum mechanics. The process of explosive loading of ice by emulsion explosives is numerically simulated. The destruction of the ice cover under detonation products is analyzed in detail.

  11. Study on Logistics Activity Cost Accounting Model

    Institute of Scientific and Technical Information of China (English)

    李淼; 程国全

    2011-01-01

    Through designing a standardized logistics center activity process and using the method of activity-based costing, the paper uses spreadsheet tools to set the cost parameters of the various logistics activities, which are then used to formulate the cost accounting model of logistics activities.

  12. Account of near-cathode sheath in numerical models of high-pressure arc discharges

    Science.gov (United States)

    Benilov, M. S.; Almeida, N. A.; Baeva, M.; Cunha, M. D.; Benilova, L. G.; Uhrlandt, D.

    2016-06-01

    Three approaches to describing the separation of charges in near-cathode regions of high-pressure arc discharges are compared. The first approach employs a single set of equations, including the Poisson equation, in the whole interelectrode gap. The second approach employs a fully non-equilibrium description of the quasi-neutral bulk plasma, complemented with a newly developed description of the space-charge sheaths. The third, and the simplest, approach exploits the fact that significant power is deposited by the arc power supply into the near-cathode plasma layer, which allows one to simulate the plasma-cathode interaction to the first approximation independently of processes in the bulk plasma. It is found that results given by the different models are generally in good agreement, and in some cases the agreement is even surprisingly good. It follows that the predicted integral characteristics of the plasma-cathode interaction are not strongly affected by details of the model provided that the basic physics is right.

  13. Singing with yourself: evidence for an inverse modeling account of poor-pitch singing.

    Science.gov (United States)

    Pfordresher, Peter Q; Mantell, James T

    2014-05-01

    Singing is a ubiquitous and culturally significant activity that humans engage in from an early age. Nevertheless, some individuals - termed poor-pitch singers - are unable to match target pitches within a musical semitone while singing. In the experiments reported here, we tested whether poor-pitch singing deficits would be reduced when individuals imitate recordings of themselves as opposed to recordings of other individuals. This prediction was based on the hypothesis that poor-pitch singers have not developed an abstract "inverse model" of the auditory-vocal system and instead must rely on sensorimotor associations that they have experienced directly, which is true for sequences an individual has already produced. In three experiments, participants, both accurate and poor-pitch singers, were better able to imitate sung recordings of themselves than sung recordings of other singers. However, this self-advantage was enhanced for poor-pitch singers. These effects were not a byproduct of self-recognition (Experiment 1), vocal timbre (Experiment 2), or the absolute pitch of target recordings (i.e., the advantage remains when recordings are transposed, Experiment 3). Results support the conceptualization of poor-pitch singing as an imitative deficit resulting from a deficient inverse model of the auditory-vocal system with respect to pitch.

  14. Taking into account topography rapid variations in forward modelling and inverse problems in seismology

    Science.gov (United States)

    Capdeville, Y.; Jean-Jacques, M.

    2011-12-01

    The modeling of full-waveform seismic elastic waves in a limited frequency band is now well established, with a set of efficient numerical methods such as the spectral element, discontinuous Galerkin, and finite difference methods. The constant increase of computing power has now allowed the use of full seismic elastic waveforms in a limited frequency band to image the elastic properties of the earth. Nevertheless, inhomogeneities of scale much smaller than the minimum wavelength of the wavefield associated with the maximum frequency of the limited frequency band are still a challenge for both forward and inverse problems. In this work, we tackle the problem of a topography varying much faster than the minimum wavelength. Using a non-periodic homogenization theory and a matching asymptotic technique, we show how to remove the fast variation of the topography and replace it by a smooth Dirichlet-to-Neumann operator at the surface. After showing some 2D forward modeling numerical examples, we discuss the implications of such a development for both forward and inverse problems.

  15. One-dimensional model of oxygen transport impedance accounting for convection perpendicular to the electrode

    Energy Technology Data Exchange (ETDEWEB)

    Mainka, J. [Laboratorio Nacional de Computacao Cientifica (LNCC), CMC 6097, Av. Getulio Vargas 333, 25651-075 Petropolis, RJ, Caixa Postal 95113 (Brazil); Maranzana, G.; Thomas, A.; Dillet, J.; Didierjean, S.; Lottin, O. [Laboratoire d' Energetique et de Mecanique Theorique et Appliquee (LEMTA), Universite de Lorraine, 2, avenue de la Foret de Haye, 54504 Vandoeuvre-les-Nancy (France); LEMTA, CNRS, 2, avenue de la Foret de Haye, 54504 Vandoeuvre-les-Nancy (France)

    2012-10-15

    A one-dimensional (1D) model of oxygen transport in the diffusion media of proton exchange membrane fuel cells (PEMFC) is presented, which considers convection perpendicular to the electrode in addition to diffusion. The resulting analytical expression of the convecto-diffusive impedance is obtained using a convection-diffusion equation instead of a diffusion equation as in the case of the classical Warburg impedance. The main hypothesis of the model is that the convective flux is generated by the evacuation of water produced at the cathode, which flows through the porous media in the vapor phase. This allows the expression of the convective flux velocity as a function of the current density and of the water transport coefficient α (the fraction of water being evacuated at the cathode outlet). The resulting 1D oxygen transport impedance neglects processes occurring in the direction parallel to the electrode that could have a significant impact on the cell impedance, like gas consumption or concentration oscillations induced by the measuring signal. However, it enables us to estimate the impact of convection perpendicular to the electrode on PEMFC impedance spectra and to determine in which conditions the approximation of a purely diffusive oxygen transport is valid. Experimental observations confirm the numerical results.
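For reference, the classical finite-length (bounded) Warburg impedance that such a convecto-diffusive expression generalizes is Z(ω) = R_d · tanh(√(jωτ)) / √(jωτ). A sketch of that baseline, with illustrative values of R_d and τ (the convective correction of the paper is not included here):

```python
import cmath
import math

def warburg_finite(omega, R_d=0.2, tau=0.1):
    """Classical finite-length Warburg impedance
    Z = R_d * tanh(sqrt(j*omega*tau)) / sqrt(j*omega*tau).
    A purely diffusive baseline; omega in rad/s, R_d in ohms, tau in s."""
    if omega == 0.0:
        return complex(R_d, 0.0)       # DC limit: tanh(s)/s -> 1, so Z -> R_d
    s = cmath.sqrt(1j * omega * tau)
    return R_d * cmath.tanh(s) / s

for f in (0.1, 1.0, 10.0):
    Z = warburg_finite(2 * math.pi * f)
    print(f"{f:5.1f} Hz: Re = {Z.real:.4f} ohm, Im = {Z.imag:.4f} ohm")
```

At low frequency the impedance approaches the diffusion resistance R_d; at high frequency it falls off with the characteristic -45° phase, which is the regime where a convective contribution perpendicular to the electrode would alter the spectrum.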

  16. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    Science.gov (United States)

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by the presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined whether the presence of one species influenced the detection of the other. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls, which are the target of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
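The detection issue described above is easy to quantify: if a species occupies a site but each survey detects it with probability p, the chance of at least one detection in k surveys is 1 − (1 − p)^k. A minimal sketch (illustrative values only, not the paper's estimates):

```python
def p_detect_at_least_once(p: float, k: int) -> float:
    """Probability that a species present at a site is detected in at
    least one of k independent surveys, each with per-survey detection
    probability p."""
    return 1.0 - (1.0 - p) ** k

# With a low per-survey detection probability (as for barred owls in
# spotted owl surveys), non-detection stays common even after several
# visits:
print(p_detect_at_least_once(0.2, 5))  # ~0.672: about 1 in 3 sites missed
```

This is why occupancy models that estimate p jointly with occupancy, rather than treating non-detection as absence, are needed for both species.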

  17. Accounting emergy flows to determine the best production model of a coffee plantation

    Energy Technology Data Exchange (ETDEWEB)

    Giannetti, B.F.; Ogura, Y.; Bonilla, S.H. [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil); Almeida, C.M.V.B., E-mail: cmvbag@terra.com.br [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil)

    2011-11-15

    Cerrado, a savannah region, is Brazil's second largest ecosystem after the Amazon rainforest and is also threatened with imminent destruction. In the present study emergy synthesis was applied to assess the environmental performance of a coffee farm located in Coromandel, Minas Gerais, in the Brazilian Cerrado. The effects of land use on sustainability were evaluated by comparing the emergy indices over ten years in order to assess the energy flows driving the production process, and to determine the best production model combining productivity and environmental performance. The emergy indices are presented as a function of the annual crop. Results show that Santo Inacio farm should produce approximately 20 bags of green coffee per hectare to achieve its best performance regarding both production efficiency and the environment. The evaluation of the coffee trade complements the results obtained by contrasting productivity and environmental performance, and despite the variation in market prices, the optimum interval for Santo Inacio's farm is between 10 and 25 coffee bags/ha. - Highlights: > Emergy synthesis is used to assess the environmental performance of a coffee farm in Brazil. > The effects of land use on sustainability were evaluated along ten years. > The energy flows driving the production process were assessed. > The best production model combining productivity and environmental performance was determined.

  18. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Oxberry, Geoffrey M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kostova-Vassilevska, Tanya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Arrighi, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chand, Kyle [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
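A heavily simplified sketch of error-controlled snapshot selection (my own illustration, not the LLNL algorithm: the basis is grown by Gram-Schmidt rather than a true single-pass incremental SVD): a snapshot is kept only when its projection error onto the current basis exceeds a tolerance, so redundant snapshots are never stored.

```python
import numpy as np

class AdaptiveBasis:
    """Keep a snapshot only when the current orthonormal basis
    represents it with relative error above `tol`."""

    def __init__(self, tol: float = 1e-2):
        self.tol = tol
        self.basis = None  # orthonormal columns

    def offer(self, u) -> bool:
        """Return True if the snapshot was added to the basis."""
        u = np.asarray(u, dtype=float)
        if self.basis is None:
            self.basis = (u / np.linalg.norm(u)).reshape(-1, 1)
            return True
        # residual after projecting onto the current basis
        r = u - self.basis @ (self.basis.T @ u)
        if np.linalg.norm(r) / np.linalg.norm(u) > self.tol:
            q = (r / np.linalg.norm(r)).reshape(-1, 1)
            self.basis = np.hstack([self.basis, q])
            return True
        return False
```

A real implementation would also truncate the basis by singular value, which this sketch omits.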

  19. Modelling of trace metal uptake by roots taking into account complexation by exogenous organic ligands

    Science.gov (United States)

    Jean-Marc, Custos; Christian, Moyne; Sterckeman, Thibault

    2010-05-01

    The context of this study is phytoextraction of soil trace metals such as Cd, Pb or Zn. Trace metal transfer from soil to plant depends on physical and chemical processes such as mineral alteration, transport, adsorption/desorption and reactions in solution, and on biological processes including the action of plant roots and of the associated micro-flora. Complexation of metal ions by organic ligands is considered to play a role in the availability of trace metals to roots, in particular when synthetic ligands (EDTA, NTA, etc.) are added to the soil to increase the solubility of the contaminants. As this role is not clearly understood, we wanted to simulate it in order to quantify the effect of organic ligands on root uptake of trace metals and to produce a tool which could help in optimizing the conditions of phytoextraction. We studied the effect of an aminocarboxylate ligand on the absorption of the metal ion by roots, both in hydroponic solution and in soil solution, for which we had to formalize the soil buffer power for the metal. We assumed that the hydrated metal ion is the only form which can be absorbed by the plants. Transport and reaction processes were modelled for a system made up of the metal M, a ligand L and the metal complex ML. The Tinker-Nye-Barber model was adapted to describe the transport of the solutes M, L and ML in the soil and the absorption of M by the roots. This allowed us to represent the interactions between transport, chelating reactions, absorption of the solutes at the root surface and root growth with time, in order to simulate metal uptake by a whole root system. Several assumptions were tested, such as i) absorption of the metal by an infinite sink or according to Michaelis-Menten kinetics, ii) solute transport by diffusion with and without mass flow, and iii) with and without soil buffer power for the ligand L. 
In hydroponic solution (without soil buffer power), ligands decreased the trace metal flux towards roots, as they reduced the concentration of hydrated
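The two root-uptake assumptions contrasted in this abstract can be made concrete (a sketch with made-up parameter names, not the paper's values): an infinite sink imposes zero concentration at the root surface, so uptake is limited by transport alone, while Michaelis-Menten uptake saturates at high concentration.

```python
def uptake_flux_mm(c: float, j_max: float, k_m: float) -> float:
    """Michaelis-Menten root uptake flux: proportional to the hydrated
    metal ion concentration c at low c, saturating at j_max for c >> k_m."""
    return j_max * c / (k_m + c)

# An "infinite sink" boundary, by contrast, simply imposes c = 0 at the
# root surface; the flux is then whatever diffusion and mass flow deliver.
print(uptake_flux_mm(c=1.0, j_max=2.0, k_m=1.0))  # 1.0: half of j_max at c = k_m
```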

  20. The application of multilevel modelling to account for the influence of walking speed in gait analysis.

    Science.gov (United States)

    Keene, David J; Moe-Nilssen, Rolf; Lamb, Sarah E

    2016-01-01

    Differences in gait performance can be explained by variations in walking speed, which is a major analytical problem. Some investigators have standardised speed during testing, but this can result in an unnatural control of gait characteristics. Other investigators have developed test procedures in which participants walk at their self-selected slow, preferred and fast speeds, with computation of gait characteristics at a standardised speed. However, this analysis is dependent upon an overlap in the ranges of gait speed observed within and between participants, and this is difficult to achieve under self-selected conditions. In this report a statistical analysis procedure is introduced that utilises multilevel modelling to analyse data from walking tests at self-selected speeds, without requiring an overlap in the range of speeds observed or the routine use of data transformations.
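The core idea of adjusting for speed within participants can be illustrated without dedicated multilevel software (which would normally be used, e.g. a random-intercept model); this numpy-only sketch, with names of my own invention, recovers the common within-subject speed slope by centring both variables within each participant:

```python
import numpy as np

def within_subject_slope(subjects, speed, outcome):
    """Estimate the common within-subject effect of walking speed on a
    gait outcome. Centring speed and outcome within each subject removes
    subject-level intercepts (the fixed-effects analogue of a
    random-intercept multilevel model)."""
    subjects = np.asarray(subjects)
    xs = np.asarray(speed, dtype=float).copy()
    ys = np.asarray(outcome, dtype=float).copy()
    for s in np.unique(subjects):
        m = subjects == s
        xs[m] -= xs[m].mean()
        ys[m] -= ys[m].mean()
    # ordinary least squares slope through the origin on centred data
    return (xs @ ys) / (xs @ xs)
```

No overlap in speed ranges between participants is required, since only within-participant variation enters the estimate.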

  1. Accounting for crustal magnetization in models of the core magnetic field

    Science.gov (United States)

    Jackson, Andrew

    1990-01-01

    The problem of determining the magnetic field originating in the earth's core in the presence of remanent and induced magnetization is considered. The effect of remanent magnetization in the crust on satellite measurements of the core magnetic field is investigated. The crust is modelled as a zero-mean stationary Gaussian random process, using an idea proposed by Parker (1988). It is shown that the matrix of second-order statistics is proportional to the Gram matrix, which depends only on the inner products of the appropriate Green's functions, and that at a typical satellite altitude of 400 km the data are correlated out to an angular separation of approximately 15 deg. Accurate and efficient means of calculating the matrix elements are given. It is shown that the variance of measurements of the radial component of a magnetic field due to the crust is expected to be approximately twice that in the horizontal components.
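In symbols (a hedged sketch of the quoted result; the notation is mine, not the paper's): each datum is a linear functional of the crustal magnetization, so for a zero-mean stationary process the data covariance is proportional to the Gram matrix of the Green's functions.

```latex
% datum i as a linear functional of the crustal magnetization m:
d_i = \langle g_i, m \rangle
% for zero-mean stationary m, the second-order statistics reduce to
\operatorname{Cov}(d_i, d_j) \;\propto\; G_{ij}
  = \langle g_i, g_j \rangle \qquad \text{(the Gram matrix)}
```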

  2. A coupled surface/subsurface flow model accounting for air entrapment and air pressure counterflow

    DEFF Research Database (Denmark)

    Delfs, Jens Olaf; Wang, Wenqing; Kalbacher, Thomas

    2013-01-01

    This work introduces the soil air system into integrated hydrology by simulating the flow processes and interactions of surface runoff, soil moisture and air in the shallow subsurface. The numerical model is formulated as a coupled system of partial differential equations for hydrostatic (diffusive...... algorithm, leakances operate as a valve for gas pressure in a liquid-covered porous medium facilitating the simulation of air out-break events through the land surface. General criteria are stated to guarantee stability in a sequential iterative coupling algorithm and, in addition, for leakances to control...... the mass exchange between compartments. A benchmark test, which is based on a classic experimental data set on infiltration excess (Horton) overland flow, identified a feedback mechanism between surface runoff and soil air pressures. Our study suggests that air compression in soils amplifies surface runoff...

  3. Accounting for sex differences in PTSD: A multi-variable mediation model

    DEFF Research Database (Denmark)

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

    Background: Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used...... specifically to test a multiple mediator model. Results: Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic...... that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma-related psychiatric disorders more prevalent in females, such as depression and anxiety. Keywords: Posttraumatic stress disorder...

  4. A hybrid mode choice model to account for the dynamic effect of inertia over time

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Börjesson, Maria; Bierlaire, Michel

    gathered over a continuous period of time, six weeks, to study both inertia and the influence of habits. Tendency to stick with the same alternative is measured through lagged variables that link the current choice with the previous trip made with the same purpose, mode and time of day. However, the lagged...... effect of the previous trips is not constant but it depends on the individual propensity to undertake habitual trips which is captured by the individual specific latent variable. And the frequency of the trips in the previous week is used as an indicator of the habitual behavior. The model estimation...... confirms that the tendency to stick with the same alternative varies not only among modes but also across individuals as a function of the individual propensity to undertake habitual behavior....

  5. Accounting for subordinate perceptions of supervisor power: an identity-dependence model.

    Science.gov (United States)

    Farmer, Steven M; Aguinis, Herman

    2005-11-01

    The authors present a model that explains how subordinates perceive the power of their supervisors and the causal mechanisms by which these perceptions translate into subordinate outcomes. Drawing on identity and resource-dependence theories, the authors propose that supervisors have power over their subordinates when they control resources needed for the subordinates' enactment and maintenance of current and desired identities. The joint effect of perceptions of supervisor power and supervisor intentions to provide such resources leads to 4 conditions ranging from highly functional to highly dysfunctional: confirmation, hope, apathy, and progressive withdrawal. Each of these conditions is associated with specific outcomes such as the quality of the supervisor-subordinate relationship, turnover, and changes in the type and centrality of various subordinate identities. ((c) 2005 APA, all rights reserved).

  6. Does Reading Cause Later Intelligence? Accounting for Stability in Models of Change.

    Science.gov (United States)

    Bailey, Drew H; Littlefield, Andrew K

    2016-11-08

    This study reanalyzes data presented by Ritchie, Bates, and Plomin (2015) who used a cross-lagged monozygotic twin differences design to test whether reading ability caused changes in intelligence. The authors used data from a sample of 1,890 monozygotic twin pairs tested on reading ability and intelligence at five occasions between the ages of 7 and 16, regressing twin differences in intelligence on twin differences in prior intelligence and twin differences in prior reading ability. Results from a state-trait model suggest that reported effects of reading ability on later intelligence may be artifacts of previously uncontrolled factors, both environmental in origin and stable during this developmental period, influencing both constructs throughout development. Implications for cognitive developmental theory and methods are discussed. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  7. Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

    KAUST Repository

    De La Garza Martinez, Pablo

    2016-05-01

    Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility that fluids remain trapped inside the pore space. In this work I proposed a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not degrade the accuracy of the results. Additionally, I included spatial correlation in the generation of the pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values for irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm from Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
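A neighbour-based trapping check can be sketched as follows (my own minimal interpretation of "using the information of neighboring pores", not the thesis's exact rule): defending fluid in a pore is flagged as trapped once every adjacent pore has been invaded, so no local escape path remains and no global search over the network is needed.

```python
def locally_trapped(pore, invaded, neighbors):
    """Neighbour-based trapping check: True when every pore adjacent to
    `pore` is already invaded by the displacing fluid.
    invaded   : dict pore -> bool (invaded by displacing fluid?)
    neighbors : dict pore -> iterable of adjacent pores
    """
    return all(invaded[n] for n in neighbors[pore])

# Tiny 4-pore network: pore 0 connects to pores 1 and 2.
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
invaded = {0: False, 1: True, 2: True, 3: False}
print(locally_trapped(0, invaded, neighbors))  # True: both neighbours invaded
invaded[2] = False
print(locally_trapped(0, invaded, neighbors))  # False: escape via pore 2
```

Unlike a cluster-search algorithm, this check is O(number of neighbours) per pore, which is where the reported speed-up would come from.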

  8. Exploring the relationship between proper name anomia and word retrieval: a single case study.

    Science.gov (United States)

    Kay, J; Hanley, J R; Miles, R

    2001-09-01

    We report the results of an investigation of the spoken word retrieval abilities of a patient, BG, with proper name anomia. Our investigations reveal that she is impaired in retrieving common nouns as well as proper names. Common noun retrieval was influenced by age-of-acquisition, word familiarity and name agreement. Cued retrieval of proper names was influenced by age-of-acquisition, although effects of other linguistic variables were not excluded. It is claimed that an explanation in terms of a 'continuum of word retrieval difficulty' rather than of proper names as 'pure referring expressions' can best account for the findings. However, this proposal is unlikely to be able to explain all cases of proper name anomia. Nonetheless, it is suggested that similar findings may be observed in other people with proper name anomia, and that it is necessary for future studies to investigate not only proper name but also common noun retrieval. We also provide evidence that Plausible Phonology and Specificity hypotheses of proper name anomia cannot account for BG's naming abilities.

  9. Proper motions of the HH 1 jet

    Science.gov (United States)

    Raga, A. C.; Reipurth, B.; Esquivel, A.; Castellanos-Ramírez, A.; Velázquez, P. F.; Hernández-Martínez, L.; Rodríguez-González, A.; Rechy-García, J. S.; Estrella-Trujillo, D.; Bally, J.; González-Gómez, D.; Riera, A.

    2017-10-01

    We describe a new method for determining proper motions of extended objects, and a pipeline developed for the application of this method. We then apply this method to an analysis of four epochs of [S II] HST images of the HH 1 jet (covering a period of ≈20 yr). We determine the proper motions of the knots along the jet, and make a reconstruction of the past ejection velocity time-variability (assuming ballistic knot motions). This reconstruction shows an "acceleration" of the ejection velocities of the jet knots, with higher velocities at more recent times. This acceleration will result in an eventual merging of the knots in ≈450 yr and at a distance of ≈80'' from the outflow source, close to the present-day position of HH 1.

  10. Accounting for non-linear chemistry of ship plumes in the GEOS-Chem global chemistry transport model

    Directory of Open Access Journals (Sweden)

    G. C. M. Vinken

    2011-06-01

    We present a computationally efficient approach to account for the non-linear chemistry occurring during the dispersion of ship exhaust plumes in a global 3-D model of atmospheric chemistry (GEOS-Chem). We use a plume-in-grid formulation in which ship emissions age chemically for 5 h before being released into the global model grid. Besides reducing the original ship NOx emissions in GEOS-Chem, our approach also releases into the model the secondary compounds ozone and HNO3 produced in the 5 h after the original emissions. We applied our improved method, as well as the widely used "instant dilution" approach, to a 1-yr GEOS-Chem simulation of global tropospheric ozone-NOx-VOC-aerosol chemistry. We also ran simulations with the standard model and with a model without any ship emissions at all. Our improved GEOS-Chem model simulates up to 0.1 ppbv (or 90 %) more NOx over the North Atlantic in July than GEOS-Chem versions without any ship NOx emissions. "Instant dilution" overestimates NOx concentrations by 50 % (0.1 ppbv) and ozone by 10-25 % (3-5 ppbv) over this region. These conclusions are supported by comparing simulated and observed NOx and ozone concentrations in the lower troposphere over the Pacific Ocean. The comparisons show that the improved GEOS-Chem model simulates NOx concentrations in between those of the instant-diluting model and the model with no ship emissions, and results in lower O3 concentrations than the instant-diluting model. The relative differences in simulated NOx and ozone between our improved approach and instant dilution are smallest over strongly polluted seas (e.g. the North Sea), suggesting that accounting for in-plume chemistry is most relevant for pristine marine areas.

  11. Proper time method in de Sitter space

    CERN Document Server

    Das, Ashok K

    2015-01-01

    We use the proper time formalism to study a (non-self-interacting) massive Klein-Gordon theory in the two dimensional de Sitter space. We determine the exact Green's function of the theory by solving the DeWitt-Schwinger equation as well as by calculating the operator matrix element. We point out how the one parameter family of arbitrariness in the Green's function arises in this method.

  12. VVV IR high proper motion stars

    Science.gov (United States)

    Kurtev, R.; Gromadzki, M.; Beamin, J. C.; Peña, K.; Folkes, S.; Ivanov, V. D.; Borissova, J.; Kuhn, M.; Villanueva, V.; Minniti, D.; Mendez, R.; Lucas, P.; Smith, L.; Pinfield, D.; Antonova, A.

    2015-10-01

    We used the VISTA Variables en Vía Láctea (VVV) survey to search for large proper motion (PM) objects in the zone of avoidance in the Milky Way bulge and southern Galactic disk. This survey is multi-epoch and already spans a period of more than four years, giving us an excellent opportunity for proper motion and parallax studies. We found around 1700 PM objects with PM > 30 mas yr(-1). The majority of them are early and mid M-dwarfs. There are also a few later spectral type objects, as well as numerous new K- and G-dwarfs. 75 of the stars have PM > 300 mas yr(-1) and 189 stars have PM > 200 mas yr(-1). There are only 42 previously known stars in the VVV area with proper motion PM > 200 mas yr(-1). We also found three dM+WD binaries and new members of the immediate solar vicinity within 25 pc. We generated a catalog which will be complementary to the existing catalogs outside this zone.

  13. Assessing patient awareness of proper hand hygiene.

    Science.gov (United States)

    Busby, Sunni R; Kennedy, Bryan; Davis, Stephanie C; Thompson, Heather A; Jones, Jan W

    2015-05-01

    The authors hypothesized that patients may not understand the forms of effective hand hygiene employed in the hospital environment. Multiple studies demonstrate the importance of hand hygiene in reducing healthcare-associated infections (HAIs). Extensive research about how to improve compliance has been conducted. Patients' perceptions of proper hand hygiene were evaluated when caregivers used soap and water, waterless hand cleaner, or a combination of these. No significant differences were observed, but many patients reported they did not notice whether their providers cleaned their hands. Educating patients and their caregivers about the protection afforded by proper, consistent hand hygiene practices is important. Engaging patients to monitor healthcare workers may increase compliance, reduce the spread of infection, and lead to better overall patient outcomes. This study revealed a need to investigate the effects of patient education on patient perceptions of hand hygiene. Results of this study appear to indicate a need to focus on patient education and the differences between soap and water versus alcohol-based hand sanitizers as part of proper hand hygiene. Researchers could be asking: "Why have patients not been engaged as members of the healthcare team who have the most to lose?"

  14. [Morphology of neurons of human subiculum proper].

    Science.gov (United States)

    Stanković-Vulović, Maja; Zivanović-Macuzić, Ivana; Sazdanović, Predrag; Jeremić, Dejan; Tosevski, Jovo

    2010-01-01

    The subiculum proper is an archicortical structure of the subicular complex and is the site of origin of the great majority of the axons of the whole hippocampal formation. In contrast to the hippocampus, which has been intensively studied, data about the human subiculum proper are quite scarce. The aim of our study was to identify the morphological characteristics of the neurons of the human subiculum proper. The study was performed on 10 brains of both genders using Golgi impregnation and Nissl staining. The subiculum has three layers: a molecular, a pyramidal and a polymorphic layer. The dominant cell type in the pyramidal layer was the pyramidal neuron, which has a pyramid-shaped soma, multiple basal dendrites and one apical dendrite. Nonpyramidal cells were scattered among the pyramidal cells of the pyramidal layer and were classified as multipolar cells, bipolar cells and neurons with triangular-shaped somata. The neurons of the molecular layer of the human subiculum were divided into two groups: bipolar and multipolar neurons. The most numerous cells of the polymorphic layer were bipolar and multipolar neurons.

  15. THE PROPER MOTION OF THE LMC

    Directory of Open Access Journals (Sweden)

    R. A. Méndez

    2009-01-01

    We have determined the proper motion of the Large Magellanic Cloud (LMC) relative to a background quasi-stellar object, using observations carried out at seven epochs (six years of time base). Our proper motion value agrees well with most results obtained by other authors and indicates that the LMC is not a member of a proposed stream of galaxies with similar orbits around our galaxy. Using published values of the radial velocity of the center of the LMC, in combination with the transverse velocity vector derived from our measured proper motion, we have calculated the absolute space velocity of the LMC. This value, along with some assumptions regarding the mass distribution of the Galaxy, has in turn been used to calculate the mass of the latter. This work is part of a program to study the space motion of the Magellanic Clouds system and its relationship to the Milky Way (MW). This knowledge is essential to understand the nature, origin and evolution of this system as well as the origin and evolution of the outer parts of the MW.

  16. An extended macro traffic flow model accounting for the driver's bounded rationality and numerical tests

    Science.gov (United States)

    Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan

    2017-02-01

    In this paper, we propose a macro traffic flow model to explore the effects of the driver's bounded rationality on the evolution of traffic waves (shock and rarefaction waves) and of a small perturbation, and on the fuel consumption and emissions (CO, HC and NOX) during the evolution process. The numerical results illustrate that considering the driver's bounded rationality prominently smooths the wavefront of the traffic waves and improves the stability of traffic flow; in this sense the driver's bounded rationality has a positive impact on traffic flow. However, considering the driver's bounded rationality reduces the fuel consumption and emissions only upstream of the rarefaction wave and increases them in all other situations; that is, it has a positive impact on fuel consumption and emissions only upstream of the rarefaction wave and a negative impact otherwise. In addition, the numerical results show that the driver's bounded rationality has no prominent impact on the total fuel consumption and emissions during the whole evolution of a small perturbation.

  17. Rapid prediction of damage on a struck ship accounting for side impact scenario models

    Science.gov (United States)

    Prabowo, Aditya Rio; Bae, Dong Myung; Sohn, Jung Min; Zakki, Ahmad Fauzan; Cao, Bo

    2017-04-01

    Impact phenomena affect physical objects at every scale, from small particles to macrostructures such as ships. In a ship collision, a short-duration load is transferred from the striking ship to the struck ship. The kinetic energy of the striking ship is absorbed by the struck ship, whose structure undergoes plastic deformation and failure. This paper presents a study focused on predicting the damage to the side hull of the struck ship under various impact scenario models. These scenarios are calculated with a finite element approach to obtain the damage, energy and load characteristics during and after the impact process. The results indicate that the damage from impacts on longitudinal components such as the main and car decks is smaller than that from impacts on transverse structural components. The damage and deformation are widely distributed over almost all side structures, including the inner structure. The spacing between the outer and inner shells strongly affects the damage mode: when it is below two metres, the inner shell experiences damage beyond plastic deformation. The structural components are shown to have a significant effect on the damage mode, and material strength clearly affects the energy and load characteristics.

  18. Exploring the proper experimental conditions in 2D thermal cloaking demonstration

    Science.gov (United States)

    Hu, Run; Zhou, Shuling; Yu, Xingjian; Luo, Xiaobing

    2016-10-01

    Although thermal cloaks have been studied extensively, specific discussions of the experimental conditions needed to successfully observe the thermal cloaking effect are lacking. In this study, we focus on exploring the proper experimental conditions for a 2D thermal cloaking demonstration. A mathematical model is established and detailed discussions are presented based on the model. The proper experimental conditions are suggested and verified with finite element simulations.

  19. 7 CFR 1735.92 - Accounting considerations.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 false Accounting considerations. Section 1735.92... All Acquisitions and Mergers § 1735.92 Accounting considerations. (a) Proper accounting shall be... in the absence of such a commission, as required by RUS based on Generally Accepted...

  20. A statistical model-based technique for accounting for prostate gland deformation in endorectal coil-based MR imaging.

    Science.gov (United States)

    Tahmasebi, Amir M; Sharifi, Reza; Agarwal, Harsh K; Turkbey, Baris; Bernardo, Marcelino; Choyke, Peter; Pinto, Peter; Wood, Bradford; Kruecker, Jochen

    2012-01-01

    In prostate brachytherapy procedures, combining high-resolution endorectal coil (ERC)-MRI with Computed Tomography (CT) images has been shown to improve the diagnostic specificity for malignant tumors. Despite this advantage, a major complication in the fusion of the two imaging modalities is the deformation of the prostate shape in ERC-MRI. Conventionally, nonlinear deformable registration techniques have been utilized to account for such deformation. In this work, we present a model-based technique for accounting for the deformation of the prostate gland in ERC-MR imaging, in which a unique deformation vector is estimated for every point within the prostate gland. Modes of deformation for every point in the prostate are statistically identified using an MR-based training set (with and without ERC-MRI). Deformation of the prostate from a deformed (ERC-MRI) to a non-deformed state in a different modality (CT) is then realized by first calculating partial deformation information for a limited number of points (such as surface points or anatomical landmarks) and then utilizing the calculated deformation from this subset of points to determine the coefficient values for the modes of deformation provided by the statistical deformation model. Using leave-one-out cross-validation, our results demonstrated a mean estimation error of 1 mm for MR-to-MR registration.
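The two-step estimation described above (statistical modes from training data, then mode coefficients from a subset of points) can be sketched with plain PCA; all names and shapes here are my own illustration, not the authors' implementation:

```python
import numpy as np

def build_modes(training, k):
    """PCA of stacked deformation fields.
    training : (n_samples, n_points) array, one flattened field per row.
    Returns the mean field and the top-k modes (rows of Vt)."""
    mean = training.mean(axis=0)
    _, _, vt = np.linalg.svd(training - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruct(mean, modes, idx, partial):
    """Fit mode coefficients from deformations known only at indices
    `idx` (e.g. surface points or landmarks) by least squares, then
    predict the deformation at every point."""
    a = modes[:, idx].T                                   # (len(idx), k)
    coef, *_ = np.linalg.lstsq(a, partial - mean[idx], rcond=None)
    return mean + coef @ modes
```

The coefficient fit only needs as many known points as retained modes, which is what lets sparse landmarks drive a dense deformation estimate.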

  1. Rooting opinions in the minds: a cognitive model and a formal account of opinions and their dynamics

    CERN Document Server

    Giardini, Francesca; Conte, Rosaria

    2011-01-01

The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, like computer science and complexity science, have tried to deal with this issue. Despite the flourishing of different models and theories in both fields, several key questions still remain unanswered. Understanding how opinions change and the way they are affected by social influence are challenging issues requiring a thorough analysis of opinions per se, but also of the way in which they travel between agents' minds and are modulated by these exchanges. To account for the two-faceted nature of opinions, which are mental entities undergoing complex social processes, we outline a preliminary model in which a cognitive theory of opinions is put forward and paired with a formal description of opinions and of their spreading among minds. Furthermore, investigating social influence also implies the necessity to account for the way in which people change their minds...

  2. Model Fitting Versus Curve Fitting: A Model of Renormalization Provides a Better Account of Age Aftereffects Than a Model of Local Repulsion.

    Science.gov (United States)

    O'Neil, Sean F; Mac, Amy; Rhodes, Gillian; Webster, Michael A

    2015-12-01

Recently, we proposed that the aftereffects of adapting to facial age are consistent with a renormalization of the perceived age (e.g., so that after adapting to a younger or older age, all ages appear slightly older or younger, respectively). This conclusion has been challenged with the argument that the aftereffects can also be accounted for by an alternative model based on repulsion (in which facial ages above or below the adapting age are biased away from the adaptor). However, we show here that this challenge was based on allowing the fitted functions to take on values which are implausible and incompatible across the different adapting conditions. When the fits are constrained or interpreted in terms of standard assumptions about normalization and repulsion, the two analyses both agree in pointing to a pattern of renormalization in age aftereffects.

  3. Accounting for uncertainty due to 'last observation carried forward' outcome imputation in a meta-analysis model.

    Science.gov (United States)

    Dimitrakopoulou, Vasiliki; Efthimiou, Orestis; Leucht, Stefan; Salanti, Georgia

    2015-02-28

Missing outcome data are a common problem in randomized controlled trials, arising when participants leave the study before its end. Missing such important information can bias the study estimates of the relative treatment effect and consequently affect the meta-analytic results. Methods for handling data sets with missing participants, which incorporate the missing information into the analysis so as to avoid loss of power and minimize bias, are therefore of interest. We propose a meta-analytic model that accounts for possible error in the effect sizes estimated in studies with last-observation-carried-forward (LOCF) imputed patients. Assuming a dichotomous outcome, we decompose the probability of a successful unobserved outcome taking into account the sensitivity and specificity of the LOCF imputation process for the missing participants. We fit the proposed model within a Bayesian framework, exploring different prior formulations for sensitivity and specificity. We illustrate our methods by performing a meta-analysis of five studies comparing the efficacy of amisulpride versus conventional drugs (flupenthixol and haloperidol) in patients diagnosed with schizophrenia. Our meta-analytic models yield estimates similar to meta-analysis with LOCF-imputed patients. Allowing for uncertainty in the imputation process decreases precision, depending on the priors used for sensitivity and specificity. Results on the significance of amisulpride versus conventional drugs differ between the standard LOCF approach and our model depending on prior beliefs about the imputation process. Our method can be regarded as a useful sensitivity analysis in the presence of concerns about the LOCF process.
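The dichotomous-outcome decomposition described in this record relates the observed (LOCF-imputed) success probability to the true one through the sensitivity and specificity of the imputation. A minimal numeric sketch (assumed sensitivity/specificity values, not the paper's priors or Bayesian fit):

```python
# Among LOCF-imputed patients, the observed success probability mixes true
# successes kept with probability `sens` and true failures flipped with
# probability `1 - spec`.

def locf_observed(p_true, sens, spec):
    # P(imputed success) = sens*P(true success) + (1-spec)*P(true failure)
    return sens * p_true + (1.0 - spec) * (1.0 - p_true)

def locf_adjusted(p_obs, sens, spec):
    # Invert the relation to recover the true success probability.
    return (p_obs - (1.0 - spec)) / (sens + spec - 1.0)

p_obs = locf_observed(0.30, sens=0.85, spec=0.90)
print(round(p_obs, 3), round(locf_adjusted(p_obs, 0.85, 0.90), 3))
```

In the full model these sensitivity/specificity values would carry priors rather than being fixed, which is what widens the posterior intervals relative to the naive LOCF analysis.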

  4. Temperature dependence of the epidermal growth factor receptor signaling network can be accounted for by a kinetic model.

    Science.gov (United States)

    Moehren, Gisela; Markevich, Nick; Demin, Oleg; Kiyatkin, Anatoly; Goryanin, Igor; Hoek, Jan B; Kholodenko, Boris N

    2002-01-08

    Stimulation of isolated hepatocytes with epidermal growth factor (EGF) causes rapid tyrosine phosphorylation of the EGF receptor (EGFR) and adapter/target proteins, which was monitored with 1 and 2 s resolution at 37, 20, and 4 degrees C. The temporal responses detected for multiple signaling proteins involve both transient and sustained phosphorylation patterns, which change dramatically at low temperatures. To account quantitatively for complex responses, we employed a mechanistic kinetic model of the EGFR pathway, formulated in molecular terms as cascades of protein interactions and phosphorylation and dephosphorylation reactions. Assuming differential temperature dependencies for different reaction groups, such as SH2 and PTB domain-mediated interactions, the EGFR kinase, and the phosphatases, good quantitative agreement was obtained between computer-simulated and measured responses. The kinetic model demonstrates that, for each protein-protein interaction, the dissociation rate constant, k(off), strongly decreases at low temperatures, whereas this decline may or may not be accompanied by a large decrease in the k(on) value. Temperature-induced changes in the maximal activities of the reactions catalyzed by the EGFR kinase were moderate, compared to such changes in the V(max) of the phosphatases. However, strong changes in both the V(max) and K(m) for phosphatases resulted in moderate changes in the V(max)/K(m) ratio, comparable to the corresponding changes in EGFR kinase activity, with a single exception for the receptor phosphatase at 4 degrees C. The model suggests a significant decrease in the rates of the EGF receptor dimerization and its dephosphorylation at 4 degrees C, which can be related to the phase transition in the membrane lipids. A combination of high-resolution experimental monitoring and molecular level kinetic modeling made it possible to quantitatively account for the temperature dependence of the integrative signaling responses.
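The central kinetic observation here, that k(off) for protein-protein interactions drops strongly at low temperature, can be illustrated with a toy mass-action simulation. The rate constants below are assumed for illustration only, not the paper's fitted EGFR parameters:

```python
# A single reversible binding step A + B <-> AB, integrated by forward
# Euler, showing how a strongly reduced k_off (as at low temperature)
# raises the sustained level of complex.

def simulate(k_on, k_off, a0=1.0, b0=1.0, dt=1e-3, t_end=50.0):
    a, b, ab = a0, b0, 0.0
    for _ in range(int(t_end / dt)):
        rate = k_on * a * b - k_off * ab   # net association flux
        a -= rate * dt
        b -= rate * dt
        ab += rate * dt
    return ab

ab_warm = simulate(k_on=1.0, k_off=1.0)   # fast dissociation ("37 degrees")
ab_cold = simulate(k_on=1.0, k_off=0.1)   # k_off strongly reduced ("4 degrees")
print(ab_cold > ab_warm)
```

The paper's model couples many such steps (SH2/PTB binding, kinase and phosphatase reactions) with group-specific temperature dependencies; this sketch only isolates the k_off effect.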

  5. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with
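The broken-line linear (BLL) fit and the grid search over initial values described in this record can be sketched as follows. This is a hedged illustration on synthetic data with assumed plateau, slope, and breakpoint values, not the study's nursery-pig G:F data or its SAS NLMIXED analysis:

```python
import numpy as np

# Broken-line linear (BLL) ascending model fitted by a grid search over
# candidate breakpoints; at each candidate the model is linear in
# (plateau, slope), so those are solved by ordinary least squares.

def bll(x, plateau, slope, brk):
    # Linear increase up to the breakpoint, flat plateau beyond it.
    return np.where(x < brk, plateau - slope * (brk - x), plateau)

rng = np.random.default_rng(1)
x = np.linspace(14.0, 24.0, 60)                  # e.g. SID Trp:Lys ratio, %
y = bll(x, 0.68, 0.02, 16.5) + rng.normal(scale=0.004, size=x.size)

best = (np.inf, None, None)
for brk in np.arange(15.0, 19.0, 0.05):
    # y ~ plateau - slope * max(brk - x, 0)
    A = np.column_stack([np.ones_like(x), -np.maximum(brk - x, 0.0)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(np.sum((y - A @ coef) ** 2))
    if sse < best[0]:
        best = (sse, brk, coef)

_, brk_hat, (plateau_hat, slope_hat) = best
print(f"breakpoint ~ {brk_hat:.2f}%, plateau ~ {plateau_hat:.3f}")
```

A mixed-model version would add random block effects and heteroskedastic residual variances, which is what the GLIMMIX/NLMIXED procedures in the study handle.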

  6. On high proper motion white dwarfs from photographic surveys

    CERN Document Server

    Reylé, C; Creze, M; Reyle, Celine; Robin, Annie C.; Creze, Michel

    2001-01-01

The interpretation of the high proper motion white dwarfs detected by Oppenheimer et al. (2001) sparked a tough controversy. While the discoverers identify a large fraction of their findings as dark halo members, others interpret the same sample as essentially made up of disc and/or thick disc stars. We use the comprehensive description of galactic stellar populations provided by the "Besancon" model to produce a realistic simulation of the Oppenheimer et al. data, including all observational selections and calibration biases. The conclusion is unambiguous: thick disc white dwarfs resulting from ordinary hypotheses about the local density and kinematics are sufficient to explain the observed objects; there is no need for halo white dwarfs. This conclusion is robust to reasonable changes in model ingredients. The main cause of the misinterpretation seems to be that the velocity distribution of a proper-motion-selected star sample is severely biased in favour of high velocities. This has been neglected in previous an...

  7. IFRS 9 replacing IAS 39 : A study about how the implementation of the Expected Credit Loss Model in IFRS 9 is believed to impact comparability in accounting

    OpenAIRE

    Klefvenberg, Louise; Nordlander, Viktoria

    2015-01-01

    This thesis examines how the implementation process of the Expected Credit Loss Model in the accounting standard IFRS 9 – Financial instruments is perceived and interpreted and how these factors can affect comparability in accounting. One of the main changes with IFRS 9 is that companies need to account for expected credit losses rather than just incurred ones. The data is primarily collected through a web survey where all Nordic banks and credit institutes with a minimum book value of total a...

  8. Probabilistic modelling in urban drainage – two approaches that explicitly account for temporal variation of model errors

    DEFF Research Database (Denmark)

    Löwe, Roland; Del Giudice, Dario; Mikkelsen, Peter Steen

    to observations. After a brief discussion of the assumptions made for likelihood-based parameter inference, we illustrated the basic principles of both approaches on the example of sewer flow modelling with a conceptual rainfall-runoff model. The results from a real-world case study suggested that both approaches...

  9. A field-scale infiltration model accounting for spatial heterogeneity of rainfall and soil saturated hydraulic conductivity

    Science.gov (United States)

    Morbidelli, Renato; Corradini, Corrado; Govindaraju, Rao S.

    2006-04-01

    This study first explores the role of spatial heterogeneity, in both the saturated hydraulic conductivity Ks and rainfall intensity r, on the integrated hydrological response of a natural slope. On this basis, a mathematical model for estimating the expected areal-average infiltration is then formulated. Both Ks and r are considered as random variables with assessed probability density functions. The model relies upon a semi-analytical component, which describes the directly infiltrated rainfall, and an empirical component, which accounts further for the infiltration of surface water running downslope into pervious soils (the run-on effect). Monte Carlo simulations over a clay loam soil and a sandy loam soil were performed for constructing the ensemble averages of field-scale infiltration used for model validation. The model produced very accurate estimates of the expected field-scale infiltration rate, as well as of the outflow generated by significant rainfall events. Furthermore, the two model components were found to interact appropriately for different weights of the two infiltration mechanisms involved.
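The ensemble-averaging step used here for model validation can be sketched with a minimal Monte Carlo experiment. The distributions and the crude point-infiltration rule below are assumed for illustration; the paper's model is semi-analytical and additionally accounts for run-on, which this sketch ignores:

```python
import numpy as np

# Expected areal-average infiltration when both rainfall intensity r and
# saturated hydraulic conductivity Ks vary randomly over the field. Point
# infiltration is crudely taken as min(r, fc), with a capacity fc assumed
# proportional to Ks (rainfall-limited vs. soil-limited regimes).
rng = np.random.default_rng(42)
n = 100_000

ks = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n)   # mm/h
r = rng.gamma(shape=2.0, scale=5.0, size=n)               # mm/h

fc = 1.2 * ks                      # assumed point infiltration capacity
infil = np.minimum(r, fc)          # each point takes the limiting rate

print(f"expected areal-average infiltration ~ {infil.mean():.2f} mm/h")
```

Because min() is taken pointwise before averaging, the areal-average infiltration is below both the mean rainfall and the mean capacity, which is exactly why spatial heterogeneity cannot be replaced by field-mean parameters.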

  10. A micromechanics-inspired constitutive model for shape-memory alloys that accounts for initiation and saturation of phase transformation

    Science.gov (United States)

    Kelly, Alex; Stebner, Aaron P.; Bhattacharya, Kaushik

    2016-12-01

    A constitutive model to describe macroscopic elastic and transformation behaviors of polycrystalline shape-memory alloys is formulated using an internal variable thermodynamic framework. In a departure from prior phenomenological models, the proposed model treats initiation, growth kinetics, and saturation of transformation distinctly, consistent with physics revealed by recent multi-scale experiments and theoretical studies. Specifically, the proposed approach captures the macroscopic manifestations of three micromechanical facts, even though microstructures are not explicitly modeled: (1) Individual grains with favorable orientations and stresses for transformation are the first to nucleate martensite, and the local nucleation strain is relatively large. (2) Then, transformation interfaces propagate according to growth kinetics to traverse networks of grains, while previously formed martensite may reorient. (3) Ultimately, transformation saturates prior to 100% completion as some unfavorably-oriented grains do not transform; thus the total transformation strain of a polycrystal is modest relative to the initial, local nucleation strain. The proposed formulation also accounts for tension-compression asymmetry, processing anisotropy, and the distinction between stress-induced and temperature-induced transformations. Consequently, the model describes thermoelastic responses of shape-memory alloys subject to complex, multi-axial thermo-mechanical loadings. These abilities are demonstrated through detailed comparisons of simulations with experiments.

  11. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili.

    Science.gov (United States)

    Xiao, Ke; Malvankar, Nikhil S; Shu, Chuanjun; Martz, Eric; Lovley, Derek R; Sun, Xiao

    2016-03-22

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine whether it is feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest-energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic-like conductivity of the pili, which has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic-resolution models capable of explaining the observed conductive properties of Geobacter pili are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics.

  12. Goals and Psychological Accounting

    DEFF Research Database (Denmark)

    Koch, Alexander Karl; Nafziger, Julia

    We model how people formulate and evaluate goals to overcome self-control problems. People often attempt to regulate their behavior by evaluating goal-related outcomes separately (in narrow psychological accounts) rather than jointly (in a broad account). To explain this evidence, our theory of endogenous narrow or broad psychological accounts combines insights from the literatures on goals and mental accounting with models of expectations-based reference-dependent preferences. By formulating goals the individual creates expectations that induce reference points for task outcomes. These goal-induced reference points make substandard performance psychologically painful and motivate the individual to stick to his goals. How strong the commitment to goals is depends on the type of psychological account. We provide conditions when it is optimal to evaluate goals in narrow accounts. The key intuition...

  13. Model cortical association fields account for the time course and dependence on target complexity of human contour perception.

    Directory of Open Access Journals (Sweden)

    Vadas Gintautas

    2011-10-01

    Full Text Available Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least [Formula: see text] ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
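The exponential-time-constant characterization of the psychometric data can be sketched as follows; the accuracies below are synthetic with assumed true parameters, not the study's measurements:

```python
import numpy as np

# Fit an exponential-approach psychometric function
#   p(t) = 0.5 + (p_max - 0.5) * (1 - exp(-t / tau))
# to accuracy vs. presentation time, by a grid search over tau with the
# amplitude solved in closed form (least squares) at each candidate.
rng = np.random.default_rng(7)
t = np.array([20, 40, 60, 80, 120, 160, 200], dtype=float)   # ms
acc = 0.5 + 0.45 * (1 - np.exp(-t / 60.0))                   # true tau = 60 ms
acc = acc + rng.normal(scale=0.01, size=t.size)              # measurement noise

best = (np.inf, None, None)
for tau in np.arange(20.0, 120.0, 0.5):
    g = 1 - np.exp(-t / tau)
    amp = (g @ (acc - 0.5)) / (g @ g)          # least-squares amplitude
    sse = float(np.sum((acc - 0.5 - amp * g) ** 2))
    if sse < best[0]:
        best = (sse, tau, 0.5 + amp)

_, tau_hat, pmax_hat = best
print(f"tau ~ {tau_hat:.0f} ms, asymptotic accuracy ~ {pmax_hat:.2f}")
```

The abstract's 30-91 ms range of fitted time constants corresponds to varying amoeba complexity; here a single condition is fitted.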

  14. Accounting for rigid support at the boundary in a mixed finite element model for problems of ice cover destruction

    Directory of Open Access Journals (Sweden)

    V. V. Knyazkov

    2014-01-01

    Full Text Available Evaluating the force required to damage an ice cover is necessary for estimating the icebreaking capability of vessels and the hull strength of icebreakers, and for the navigation of ships in ice conditions. On the other hand, using the ice cover as a support for construction work carried out from the ice is also of practical interest. A great deal of research on ice cover deformation has been carried out to date, usually resulting in approximate calculation formulas obtained under a variety of assumptions. Nevertheless, we believe that further improvement of the calculations is possible. Applying numerical methods such as the finite element method (FEM) makes it possible to avoid numerous drawbacks of analytical methods in dealing with complex boundaries, load application areas, and other peculiarities of the problem. The article considers the application of mixed FEM models to investigating ice cover deformation. A simple flexible triangular element of mixed type was chosen to solve this problem. The vector of generalized coordinates of the element contains the deflections at its apices and the normal bending moments at the midpoints of its sides. Compared with other elements, mixed models easily satisfy compatibility requirements on the boundary of adjacent elements and do not require numerical differentiation of displacements to obtain bending moments, because the bending moments are included in the vector of the element's generalized coordinates. A method of accounting for rigid support of the plate is proposed. The resulting relations, which take the "stiffening" into account, reduce the number of equations in the resolving system by the number of elements on the plate contour. To further evaluate the numerical solution of the ice cover stress-strain problem, it is necessary to check whether the calculation results correspond to the exact solution. Using the example of a circular plate, the convergence of the numerical solutions to analytical solutions is shown. The article

  15. A nonlinear BOLD model accounting for refractory effect by applying the longitudinal relaxation in NMR to the linear BOLD model.

    Science.gov (United States)

    Jung, Kwan-Jin

    2009-09-01

    A mathematical model to regress the nonlinear blood oxygen level-dependent (BOLD) fMRI signal has been developed by incorporating the refractory effect into the linear BOLD model of the biphasic gamma variate function. The refractory effect was modeled as a relaxation of two separate BOLD capacities corresponding to the biphasic components of the BOLD signal in analogy with longitudinal relaxation of magnetization in NMR. When tested with the published fMRI data of finger tapping, the nonlinear BOLD model with the refractory effect reproduced the nonlinear BOLD effects such as reduced poststimulus undershoot and saddle pattern in a prolonged stimulation as well as the reduced BOLD signal for repetitive stimulation.

  16. Accounting for geochemical alterations of caprock fracture permeability in basin-scale models of leakage from geologic CO2 reservoirs

    Science.gov (United States)

    Guo, B.; Fitts, J. P.; Dobossy, M.; Bielicki, J. M.; Peters, C. A.

    2012-12-01

    Climate mitigation, public acceptance, and energy markets demand that the potential CO2 leakage rates from geologic storage reservoirs are predicted to be low and are known to a high level of certainty. Current approaches to predict CO2 leakage rates assume constant permeability of leakage pathways (e.g., wellbores, faults, fractures). A reactive transport model was developed to account for geochemical alterations that result in permeability evolution of leakage pathways. The one-dimensional reactive transport model was coupled with the basin-scale Estimating Leakage Semi-Analytical (ELSA) model to simulate CO2 and brine leakage through vertical caprock pathways for different CO2 storage reservoir sites and injection scenarios within the Mt. Simon and St. Peter sandstone formations of the Michigan basin. Mineral dissolution in the numerical reactive transport model expands leakage pathways and increases permeability, as calcite is dissolved by reactions driven by CO2-acidified brine. A geochemical model compared kinetic and equilibrium treatments of calcite dissolution within each grid block for each time step. For a single fracture, we investigated the effect of the reactions on leakage by performing sensitivity analyses of fracture geometry, CO2 concentration, calcite abundance, initial permeability, and pressure gradient. Assuming that calcite dissolution reaches equilibrium at each time step produces unrealistic scenarios of buffering and permeability evolution within fractures. Therefore, the reactive transport model with a kinetic treatment of calcite dissolution was coupled to the ELSA model and used to compare brine and CO2 leakage rates at a variety of potential geologic storage sites within the Michigan basin. The results are used to construct maps based on the susceptibility to geochemically driven increases in leakage rates. 
These maps should provide useful and easily communicated inputs into decision-making processes for siting geologic CO2

  17. Modelling the spatial distribution of snow water equivalent at the catchment scale taking into account changes in snow covered area

    Directory of Open Access Journals (Sweden)

    T. Skaugen

    2011-12-01

    Full Text Available A successful modelling of the snow reservoir is necessary for water resources assessments and the mitigation of spring flood hazards. A good estimate of the spatial probability density function (PDF) of snow water equivalent (SWE) is important for obtaining estimates of the snow reservoir, but also for modelling the changes in snow covered area (SCA), which is crucial for the runoff dynamics in spring. In a previous paper the PDF of SWE was modelled as a sum of temporally correlated gamma distributed variables. This methodology was constrained to estimate the PDF of SWE for snow covered areas only. In order to model the PDF of SWE for a catchment, we need to take into account the change in snow coverage and provide the spatial moments of SWE for both snow covered areas and for the catchment as a whole. The spatial PDF of accumulated SWE is, also in this study, modelled as a sum of correlated gamma distributed variables. After accumulation and melting events the changes in the spatial moments are weighted by changes in SCA. The spatial variance of accumulated SWE is, after both accumulation- and melting events, evaluated by use of the covariance matrix. For accumulation events there are only positive elements in the covariance matrix, whereas for melting events, there are both positive and negative elements. The negative elements dictate that the correlation between melt and SWE is negative. The negative contributions become dominant only after some time into the melting season so at the onset of the melting season, the spatial variance thus continues to increase, for later to decrease. This behaviour is consistent with observations and called the "hysteretic" effect by some authors. The parameters for the snow distribution model can be estimated from observed historical precipitation data which reduces by one the number of parameters to be calibrated in a hydrological model. 
Results from the model are in good agreement with observed spatial moments

  18. Modelling the spatial distribution of snow water equivalent at the catchment scale taking into account changes in snow covered area

    Science.gov (United States)

    Skaugen, T.; Randen, F.

    2011-12-01

    A successful modelling of the snow reservoir is necessary for water resources assessments and the mitigation of spring flood hazards. A good estimate of the spatial probability density function (PDF) of snow water equivalent (SWE) is important for obtaining estimates of the snow reservoir, but also for modelling the changes in snow covered area (SCA), which is crucial for the runoff dynamics in spring. In a previous paper the PDF of SWE was modelled as a sum of temporally correlated gamma distributed variables. This methodology was constrained to estimate the PDF of SWE for snow covered areas only. In order to model the PDF of SWE for a catchment, we need to take into account the change in snow coverage and provide the spatial moments of SWE for both snow covered areas and for the catchment as a whole. The spatial PDF of accumulated SWE is, also in this study, modelled as a sum of correlated gamma distributed variables. After accumulation and melting events the changes in the spatial moments are weighted by changes in SCA. The spatial variance of accumulated SWE is, after both accumulation- and melting events, evaluated by use of the covariance matrix. For accumulation events there are only positive elements in the covariance matrix, whereas for melting events, there are both positive and negative elements. The negative elements dictate that the correlation between melt and SWE is negative. The negative contributions become dominant only after some time into the melting season so at the onset of the melting season, the spatial variance thus continues to increase, for later to decrease. This behaviour is consistent with observations and called the "hysteretic" effect by some authors. The parameters for the snow distribution model can be estimated from observed historical precipitation data which reduces by one the number of parameters to be calibrated in a hydrological model. 
Results from the model are in good agreement with observed spatial moments of SWE and SCA
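The covariance-matrix bookkeeping described in the two records above can be illustrated with toy numbers (assumed values, not the model's calibrated parameters): the variance of a sum of correlated variables is the sum of all elements of their covariance matrix, and melt events contribute negative off-diagonal terms.

```python
import numpy as np

# Spatial variance of accumulated SWE modelled as a sum of correlated
# variables: Var(sum) = 1' C 1. Accumulation events add only positive
# covariances; a melt event adds terms negatively correlated with SWE,
# so the variance first grows and later shrinks ("hysteretic" effect).

def total_variance(cov):
    ones = np.ones(cov.shape[0])
    return float(ones @ cov @ ones)

# Two accumulation events, positively correlated.
acc = np.array([[4.0, 1.5],
                [1.5, 4.0]])
var_after_acc = total_variance(acc)           # 4 + 4 + 1.5 + 1.5 = 11.0

# Add a melt event negatively correlated with the accumulated SWE.
melt = np.array([[ 4.0,  1.5, -2.0],
                 [ 1.5,  4.0, -2.0],
                 [-2.0, -2.0,  3.0]])
var_after_melt = total_variance(melt)         # 11 + 3 - 8 = 6.0

print(var_after_acc, var_after_melt)
```

Once the negative cross-terms outweigh the melt event's own variance, the total spatial variance decreases, matching the behaviour the abstract describes for the later melt season.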

  19. Serviceability limit state related to excessive lateral deformations to account for infill walls in the structural model

    Directory of Open Access Journals (Sweden)

    G. M. S. ALVA

    Full Text Available Brazilian Codes NBR 6118 and NBR 15575 provide practical values for interstory drift limits applied to conventional modeling in order to prevent negative effects in masonry infill walls caused by excessive lateral deformability; however, these codes do not account for infill walls in the structural model. The inclusion of infill walls in the proposed model allows for a quantitative evaluation of structural stresses in these walls and an assessment of cracking in these elements (sliding shear, diagonal tension, and diagonal compression cracking). This paper presents the results of simulations of single-story one-bay infilled R/C frames. The main objective is to show how to check the serviceability limit states under lateral loads when the infill walls are included in the modeling. The results of numerical simulations allowed for an evaluation of stresses and the probable cracking pattern in infill walls. The results also allowed an identification of some advantages and limitations of the NBR 6118 practical procedure based on interstory drift limits.

  20. A statistical human rib cage geometry model accounting for variations by age, sex, stature and body mass index.

    Science.gov (United States)

    Shi, Xiangnan; Cao, Libo; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2014-07-18

    In this study, we developed a statistical rib cage geometry model accounting for variations by age, sex, stature and body mass index (BMI). Thorax CT scans were obtained from 89 subjects approximately evenly distributed among 8 age groups and both sexes. Threshold-based CT image segmentation was performed to extract the rib geometries, and a total of 464 landmarks on the left side of each subject's ribcage were collected to describe the size and shape of the rib cage as well as the cross-sectional geometry of each rib. Principal component analysis and multivariate regression analysis were conducted to predict rib cage geometry as a function of age, sex, stature, and BMI, all of which showed strong effects on rib cage geometry. Except for BMI, all parameters also showed significant effects on rib cross-sectional area using a linear mixed model. This statistical rib cage geometry model can serve as a geometric basis for developing a parametric human thorax finite element model for quantifying effects from different human attributes on thoracic injury risks.
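The PCA-plus-regression pipeline in this record can be sketched as follows; the data below are random stand-ins (not the CT landmarks), and the subject attributes are simulated, so only the structure of the computation is illustrated:

```python
import numpy as np

# Statistical shape model: PCA over flattened landmark coordinates, then
# multivariate regression of the PC scores on age, sex, stature and BMI,
# so a new rib cage geometry can be predicted from those attributes alone.
rng = np.random.default_rng(3)
n_subj, n_coords, n_pc = 89, 464 * 3, 10

X = rng.normal(size=(n_subj, n_coords))            # flattened landmarks
mean_shape = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
scores = U[:, :n_pc] * S[:n_pc]                    # PC scores per subject

# Predictors: intercept, age, sex (0/1), stature (cm), BMI.
Z = np.column_stack([np.ones(n_subj),
                     rng.uniform(20, 80, n_subj),
                     rng.integers(0, 2, n_subj),
                     rng.normal(170, 8, n_subj),
                     rng.normal(26, 4, n_subj)])

# Multivariate regression of the PC scores on the predictors.
B, *_ = np.linalg.lstsq(Z, scores, rcond=None)

# Predict landmarks for a hypothetical 60-year-old male, 175 cm, BMI 28.
z_new = np.array([1.0, 60.0, 1.0, 175.0, 28.0])
shape_new = mean_shape + (z_new @ B) @ Vt[:n_pc]
print(shape_new.shape)
```

In the study the number of retained components and the significance of each predictor would be determined from the real CT data; here `n_pc` is an arbitrary choice.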

  1. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Peixin; Chai, Feng [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Bi, Yunlong [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Pei, Yulong, E-mail: peiyulong1@163.com [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Cheng, Shukang [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China)

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. To take into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is calculated analytically for the first time. • A magnetic circuit model and an iterative method are employed to calculate the permeability. • The analytical expression for each subdomain is derived. • The proposed method can effectively reduce the duration of the predesign stages.
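The iterative permeability calculation mentioned above can be illustrated by a fixed-point iteration on a one-bridge lumped magnetic circuit. Everything here is a stand-in sketch: the B-H description (relative permeability falling off beyond a saturation flux density) and all parameter values are assumptions for illustration, not the paper's material data.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def solve_bridge_flux(F=500.0, A=1e-4, l_bridge=2e-3, R_rest=5e6,
                      mu_r0=2000.0, B_sat=1.8, tol=1e-12):
    """Fixed-point iteration for the flux in a circuit with one saturable bridge.

    F: MMF [A-turns]; A: bridge cross-section [m^2]; l_bridge: bridge length [m];
    R_rest: linear reluctance of the rest of the path [1/H]. The mu_r(B) law
    below is an illustrative assumption, not a real material curve.
    """
    phi = F / R_rest                       # initial guess ignoring the bridge
    for _ in range(500):
        B = phi / A
        mu_r = 1.0 + (mu_r0 - 1.0) / (1.0 + (B / B_sat) ** 4)
        R_bridge = l_bridge / (MU0 * mu_r * A)
        phi_new = F / (R_rest + R_bridge)
        if abs(phi_new - phi) < tol:
            return phi_new
        phi = 0.5 * (phi + phi_new)        # under-relaxation for stability
    return phi

phi = solve_bridge_flux()
print(phi)  # converged flux, slightly below the bridge-free value F/R_rest
```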

  2. Accounting for Population Structure in Gene-by-Environment Interactions in Genome-Wide Association Studies Using Mixed Models.

    Science.gov (United States)

    Sul, Jae Hoon; Bilow, Michael; Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar

    2016-03-01

    Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control the effects of population structure, which may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for it. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied, nor have methods been designed to correct GEI statistics for population structure. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs, using both simulation and two GWAS datasets to support our finding. We propose a statistical approach based on mixed models to account for population structure in GEI statistics. We find that our approach effectively controls for population structure in statistics for GEIs as well as for genetic variants.
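The confounding mechanism described here can be reproduced in a small simulation. The paper uses mixed models; the sketch below uses a simpler fixed-effect structure covariate as a stand-in for the correction, which is enough to show the effect: when allele frequency and the environment's effect both differ between two subpopulations, a naive regression finds a spurious G×E term, while a structure-adjusted regression does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
pop = np.repeat([0, 1], n // 2)            # two ancestral populations
maf = np.where(pop == 0, 0.1, 0.5)         # allele frequency differs by population
g = rng.binomial(2, maf)                   # genotype 0/1/2
e = rng.normal(size=n)                     # environmental exposure

# Phenotype: the environment's effect differs by population,
# but there is NO true gene-by-environment interaction.
y = e * (0.5 + pop) + rng.normal(0, 0.5, n)

def gxe_coef(design):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[-1]                        # coefficient of the g*e column

ones = np.ones(n)
naive = np.column_stack([ones, g, e, g * e])
adjusted = np.column_stack([ones, g, e, pop, pop * e, g * e])

b_naive, b_adj = gxe_coef(naive), gxe_coef(adjusted)
print(b_naive, b_adj)  # naive G*E estimate is inflated; adjusted is near zero
```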

  3. A sampling design and model for estimating abundance of Nile crocodiles while accounting for heterogeneity of detectability of multiple observers

    Science.gov (United States)

    Shirley, Matthew H.; Dorazio, Robert M.; Abassery, Ekramy; Elhady, Amr A.; Mekki, Mohammed S.; Asran, Hosni H.

    2012-01-01

    As part of the development of a management program for Nile crocodiles in Lake Nasser, Egypt, we used a dependent double-observer sampling protocol with multiple observers to compute estimates of population size. To analyze the data, we developed a hierarchical model that allowed us to assess variation in detection probabilities among observers and survey dates, as well as account for variation in crocodile abundance among sites and habitats. We conducted surveys from July 2008 to June 2009 in 15 areas of Lake Nasser that were representative of 3 main habitat categories. During these surveys, we sampled 1,086 km of lake shore and detected 386 crocodiles. Analysis of the data revealed significant variability in both inter- and intra-observer detection probabilities. Our raw encounter rate was 0.355 crocodiles/km. When we accounted for observer effects and habitat, we estimated a surface population abundance of 2,581 (2,239-2,987, 95% credible interval) crocodiles in Lake Nasser. Our results underscore the importance of well-trained, experienced monitoring personnel in decreasing heterogeneity in intra-observer detection probability and better detecting changes in the population based on survey indices. This study will assist the Egyptian government in establishing a monitoring program as an integral part of future crocodile harvest activities in Lake Nasser.
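The basic logic of a dependent double-observer protocol can be shown with a simple moment estimator (the paper itself fits a full hierarchical model that lets detection vary by observer, date, site and habitat; the counts below are illustrative, not the study's data). If both observers share detection probability p, the primary sees about N·p animals and the secondary adds about N·(1−p)·p missed ones.

```python
def double_observer_estimates(x1, x2):
    """Moment estimators for a dependent double-observer count.

    x1: animals detected by the primary observer (E[x1] = N*p);
    x2: additional animals detected only by the secondary (E[x2] = N*(1-p)*p).
    Assumes both observers share the same detection probability p.
    """
    p = 1.0 - x2 / x1          # per-observer detection probability
    n_hat = x1 / p             # estimated surface population
    return p, n_hat

p, n_hat = double_observer_estimates(100, 20)
print(p, n_hat)  # p ≈ 0.8, N ≈ 125
```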

  4. Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing.

    Directory of Open Access Journals (Sweden)

    Cheston Tan

    Full Text Available Faces are an important and unique class of visual stimuli, and have been of interest to neuroscientists for many years. Faces are known to elicit certain characteristic behavioral markers, collectively labeled "holistic processing", while non-face objects are not processed holistically. However, little is known about the underlying neural mechanisms. The main aim of this computational simulation work is to investigate the neural mechanisms that make face processing holistic. Using a model of primate visual processing, we show that a single key factor, "neural tuning size", is able to account for three important markers of holistic face processing: the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our proof-of-principle specifies the precise neurophysiological property that corresponds to the poorly-understood notion of holism, and shows that this one neural property controls three classic behavioral markers of holism. Our work is consistent with neurophysiological evidence, and makes further testable predictions. Overall, we provide a parsimonious account of holistic face processing, connecting computation, behavior and neurophysiology.

  5. Prolexbase: a multilingual relational dictionary of Proper Names Prolexbase. Un dictionnaire relationnel multilingue de noms propres

    Directory of Open Access Journals (Sweden)

    Mickaël Tran

    2007-10-01

    Full Text Available This paper presents the modelling of the proper-name domain defined by the Prolex project. This modelling is based on two main concepts: the Conceptual Proper Name and the Prolexeme. The Conceptual Proper Name does not represent the referent itself, but a point of view on that referent. It is instantiated in each language by a specific concept, the Prolexeme, which is a structured family of lexemes. Around these, we have defined other concepts and relations (synonymy, meronymy, accessibility, eponymy.... Each Conceptual Proper Name is a hyponym of a type and an existence within an ontology.
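The two-level structure described above (a language-independent pivot concept linked to per-language lexeme families) can be sketched as a small data model. This schema and the example entries are illustrative assumptions, not Prolexbase's actual storage format.

```python
from dataclasses import dataclass, field

@dataclass
class Prolexeme:
    """Language-specific projection of a conceptual proper name:
    a structured family of lexemes (aliases, derivations, ...)."""
    language: str
    lexemes: list = field(default_factory=list)

@dataclass
class ConceptualProperName:
    """Pivot concept: a point of view on a referent, typed and linked
    to one Prolexeme per language."""
    label: str
    pn_type: str                                  # e.g. "city", "person"
    prolexemes: dict = field(default_factory=dict)

# Illustrative entry
london = ConceptualProperName("LONDON", "city")
london.prolexemes["en"] = Prolexeme("en", ["London", "Londoner"])
london.prolexemes["fr"] = Prolexeme("fr", ["Londres", "londonien"])
print(sorted(london.prolexemes))  # ['en', 'fr']
```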

  6. Design Accountability

    DEFF Research Database (Denmark)

    Koskinen, Ilpo; Krogh, Peter

    2015-01-01

    A distinguishing feature of design research is that, where classical research is interested in singling out a particular aspect and exploring it in depth, design practice is characterized by balancing numerous concerns in a heterogeneous and occasionally paradoxical product. It is on this basis that the notion of design accountability...

  8. Impact of accounting for coloured noise in radar altimetry data on a regional quasi-geoid model

    Science.gov (United States)

    Farahani, H. H.; Slobbe, D. C.; Klees, R.; Seitz, Kurt

    2016-07-01

    We study the impact of an accurate computation and incorporation of coloured noise in radar altimeter data when computing a regional quasi-geoid model using least-squares techniques. Our test area comprises the Southern North Sea including the Netherlands, Belgium, and parts of France, Germany, and the UK. We perform the study by modelling the disturbing potential with spherical radial base functions. To that end, we use the traditional remove-compute-restore procedure with a recent GRACE/GOCE static gravity field model. Apart from radar altimeter data, we use terrestrial, airborne, and shipboard gravity data. Radar altimeter sea surface heights are corrected for the instantaneous dynamic topography and used in the form of along-track quasi-geoid height differences. Noise in these data is estimated using repeat-track and post-fit residual analysis techniques and then modelled as an autoregressive moving-average (ARMA) process. Quasi-geoid models are computed with and without taking the modelled coloured noise into account. The difference between them is used as a measure of the impact of coloured noise in radar altimeter along-track quasi-geoid height differences on the estimated quasi-geoid model. The impact strongly depends on the availability of shipboard gravity data. If no such data are available, the impact may attain values exceeding 10 centimetres in particular areas. Where shipboard gravity data are used, the impact is reduced, though it still attains values of several centimetres. We use geometric quasi-geoid heights from GPS/levelling data at height markers as control data to analyse the quality of the quasi-geoid models. The quasi-geoid model computed using a model of the coloured noise in radar altimeter along-track quasi-geoid height differences shows in some areas a significant improvement over a model that assumes white noise in these data. However, the interpretation in other areas remains a challenge due to the limited quality of the control data.
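Why modelling the coloured noise matters in a least-squares setting can be shown in miniature: with temporally correlated noise, generalised least squares (weighting by the inverse noise covariance) uses the data more appropriately than ordinary least squares, which assumes white noise. The sketch below uses an AR(1) process as a simplified stand-in for the paper's ARMA noise model, on a toy trend-fitting problem.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
t = np.linspace(0, 1, n)
A = np.column_stack([np.ones(n), t])      # simple trend model
beta_true = np.array([1.0, 2.0])

# AR(1) ("coloured") noise with lag-1 correlation rho
rho = 0.9
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = rho * eps[i - 1] + rng.normal(0, 0.1)
y = A @ beta_true + eps

# Ordinary least squares: ignores the noise correlation
beta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Generalised least squares with the AR(1) covariance C_ij ∝ rho^|i-j|
C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Ci = np.linalg.inv(C)
beta_gls = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ y)
print(beta_ols, beta_gls)
```

The overall covariance scale cancels in the GLS normal equations, so only the correlation structure needs to be modelled here.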

  10. [Knowledge regarding Proper Use Guidelines for Benzodiazepines].

    Science.gov (United States)

    Inada, Ken

    2016-01-01

    Benzodiazepines (BZs) work by agonising the gamma-aminobutyric acid (GABA)-BZ-receptor complex and thereby produce sedation and anti-anxiety effects. BZs are commonly used in several clinical areas as hypnotics or anti-anxiety drugs. However, these drugs, once supplied by medical institutions, often lead to abuse and dependence; it is therefore important for institutions to supply and manage BZs properly. At Tokyo Women's Medical University Hospital, educational activities about the proper use of BZs are carried out not only by medical doctors but also by pharmacists. We coordinate the distribution of leaflets and run an educational workshop. As a result of these activities, the number of patients receiving BZ prescriptions was reduced. In performing these activities, pharmacists were required to work for patients, doctors, and nurses; they acquired knowledge about BZs such as mechanisms of action, efficacy, adverse effects, problems with co-prescription, and methods of discontinuing BZs, as well as information on coping techniques other than medication. The most important point in attending to patients is to address their anxieties.

  11. Interaction of a supersonic NO beam with static and resonant RF fields: Simple theoretical model to account for molecular interferences

    Science.gov (United States)

    Ureña, A. González; Caceres, J. O.; Morato, M.

    2006-09-01

    In previous experimental work from this laboratory, two unexpected phenomena were reported: (i) a depletion of ca. 40% in the total intensity of a pulsed He-seeded NO beam when the molecules passed through a homogeneous and a resonant oscillating RF electric field, and (ii) a beam splitting of ca. 0.5° when the transverse beam profile was measured under the same experimental conditions. In this work a model based on molecular-beam interferences is introduced that satisfactorily accounts for these two observations. It is shown how the experimental set-up, a simple device used as the C-field in early molecular-beam electric-resonance experiments, can be employed as a molecular interferometer to investigate matter-wave interferences in beams of polar molecules.
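Matter-wave interference of a molecular beam is governed by the de Broglie wavelength λ = h/(mv). A quick order-of-magnitude check for NO (mass ≈ 30 u); the 1400 m/s beam speed is an assumed illustrative value for a He-seeded supersonic beam, not a figure from the paper:

```python
import math

H = 6.62607015e-34        # Planck constant [J s]
AMU = 1.66053906660e-27   # atomic mass unit [kg]

def de_broglie_wavelength(mass_kg, speed):
    """Matter wavelength lambda = h / (m * v)."""
    return H / (mass_kg * speed)

m_no = 30 * AMU                           # NO molecule
lam = de_broglie_wavelength(m_no, 1400.0) # assumed beam speed [m/s]
print(lam)  # ~1e-11 m, i.e. of order 10 picometres
```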

  12. Accounting for the uncertainty related to building occupants with regards to visual comfort: A literature survey on drivers and models

    DEFF Research Database (Denmark)

    Fabi, Valentina; Andersen, Rune Korsholm; Corgnati, Stefano

    2016-01-01

    energy use during the design stage. Since the reaction to thermal, acoustic, or visual stimuli is not the same for every human being, monitoring behaviour inside buildings is an essential step in assessing differences in energy consumption related to different interactions. Reliable information...... conditions influence occupants' manual control of the system in offices and, by consequence, the energy consumption. The purpose of this study was to investigate the possible drivers for light-switching in order to model occupant behaviour in office buildings. The probability of switching lighting systems on or off...... who take the daylight level into account and switch on lights only if necessary, and people who totally disregard natural lighting. This underlines how individuality is at the base of the definition of the different types of users....

  13. Towards ecosystem accounting

    NARCIS (Netherlands)

    Duku, C.; Rathjens, H.; Zwart, S.J.; Hein, L.

    2015-01-01

    Ecosystem accounting is an emerging field that aims to provide a consistent approach to analysing environment-economy interactions. One of the specific features of ecosystem accounting is the distinction between the capacity and the flow of ecosystem services. Ecohydrological modelling to support

  14. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing, among others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, which can result in a variety of estimated pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as the estimation of pollution concentrations and the choice of spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
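The core of model averaging can be sketched with the common BIC approximation to posterior model weights, w_m ∝ exp(−BIC_m/2) (the paper works with full Bayesian spatial models; this toy version averages the pollution coefficient over two candidate covariate sets on simulated data).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
no2 = rng.normal(size=n)                        # pollution covariate
depriv = 0.6 * no2 + rng.normal(0, 0.8, n)      # correlated deprivation index
y = 0.3 * no2 + 0.2 * depriv + rng.normal(0, 1.0, n)

def fit(design):
    """OLS fit; return the pollution coefficient and the model's BIC."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ beta) ** 2)
    k = design.shape[1]
    bic = n * np.log(rss / n) + k * np.log(n)
    return beta[1], bic

ones = np.ones(n)
models = [np.column_stack([ones, no2]),           # pollution only
          np.column_stack([ones, no2, depriv])]   # pollution + deprivation
coefs, bics = zip(*(fit(m) for m in models))

# BIC-based posterior model weights: w_m ∝ exp(-BIC_m / 2)
b = np.array(bics)
w = np.exp(-(b - b.min()) / 2)
w /= w.sum()
pooled = float(np.dot(w, coefs))                  # model-averaged effect
print(pooled)
```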

  15. An empirical test of Birkett’s competency model for management accountants : A confirmative study conducted in the Netherlands

    NARCIS (Netherlands)

    Bots, J.M.; Groenland, E.A.G.; Swagerman, D.

    2009-01-01

    In 2002, the Accountants-in-Business section of the International Federation of Accountants (IFAC) issued the Competency Profiles for Management Accounting Practice and Practitioners report. This “Birkett Report” presents a framework for competency development during the careers of management accoun

  17. AMERICAN ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Mihaela Onica

    2005-01-01

    Full Text Available The International Accounting Standards already contribute to the generation of better and more easily comparable financial information at the international level, thus supporting a more effective allocation of investment resources in the world. Under these circumstances, the need arises for a consistent application of the standards at a global level. Financial statements are part of the financial reporting process. A complete set of financial statements usually includes a balance sheet, a profit and loss account, a report on changes in financial position (which can be presented in various ways, for example as a statement of cash flows or of fund flows), and notes, as well as explanatory statements and materials which are part of the financial statements.

  18. Bayesian Estimation of Wave Spectra – Proper Formulation of ABIC

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    It is possible to estimate on-site wave spectra using measured ship responses applied to Bayesian modelling, based on two pieces of prior information: the wave spectrum must be smooth both directional-wise and frequency-wise. This paper introduces two hyperparameters into the Bayesian modelling and, hence, gives a proper formulation of ABIC (a Bayesian Information Criterion), in contrast to the improper formulation obtained when only one hyperparameter is included. From a numerical example, the paper illustrates that the optimum pair of hyperparameters, determined by use of ABIC, corresponds......

  19. A statistical human resources costing and accounting model for analysing the economic effects of an intervention at a workplace.

    Science.gov (United States)

    Landstad, Bodil J; Gelin, Gunnar; Malmquist, Claes; Vinberg, Stig

    2002-09-15

    The study had two primary aims. The first aim was to combine a human resources costing and accounting (HRCA) approach with a quantitative statistical approach in order to obtain an integrated model. The second aim was to apply this integrated model in a quasi-experimental study to investigate whether a preventive intervention affected sickness absence costs at the company level. The intervention studied comprised occupational organizational measures, competence development, physical and psychosocial working-environment measures, and individual and rehabilitation measures on both an individual and a group basis. The study is a quasi-experimental design with a non-randomized control group. Both groups involved cleaning jobs at predominantly female workplaces. The study plan involved carrying out before-and-after studies on both groups, and included only those who were at the same workplace during the whole study period. In the HRCA model used here, the cost of sickness absence is the net difference between the costs, in the form of the value of lost production and administrative costs, and the benefits, in the form of lower labour costs. According to the HRCA model, the intervention counteracted a rise in sickness absence costs at the company level, giving an average net effect of 266.5 Euros per (full-time) person during an 8-month period. Using an analogous statistical analysis on the whole of the material, the intervention counteracted a rise in sickness absence costs at the company level, giving an average net effect of 283.2 Euros. Using a statistical method it was possible to study the regression coefficients in sub-groups and calculate the p-values for these coefficients; in the younger group the intervention gave a calculated net contribution of 605.6 Euros with a p-value of 0.073, while the net contribution in the older group had a very high p-value. Using the statistical model it was
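The HRCA net-cost definition above is simple arithmetic: costs (value of lost production plus administration) minus the benefit of lower labour costs during absence. A minimal sketch with invented numbers, not figures from the study:

```python
def sickness_absence_net_cost(production_loss, admin_cost, labour_cost_saving):
    """HRCA-style net cost of sickness absence:
    (value of lost production + administrative cost) - labour cost saving."""
    return production_loss + admin_cost - labour_cost_saving

# Illustrative numbers (not from the study), e.g. in Euros per person
print(sickness_absence_net_cost(1200.0, 150.0, 700.0))  # 650.0
```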

  20. On the importance of accounting for competing risks in pediatric brain cancer: II. Regression modeling and sample size.

    Science.gov (United States)

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-03-15

    To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest. Copyright © 2011 Elsevier Inc. All rights reserved.
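The quantity that competing-risks methods target is the cumulative incidence function (CIF), which, unlike a naive 1 − Kaplan-Meier per cause, correctly shares probability among competing events. A minimal nonparametric sketch with invented toy data (distinct event times; event code 0 = censored):

```python
import numpy as np

def cumulative_incidence(times, events, cause):
    """Nonparametric cumulative incidence for one cause among competing risks.

    times: event/censoring times; events: 0 = censored, 1, 2, ... = cause codes.
    Returns the CIF evaluated after the last observed time. Assumes
    distinct times (no ties), which holds for the toy data below.
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(times)
    surv = 1.0          # all-cause Kaplan-Meier survival S(t-)
    cif = 0.0
    for t_i, e_i in zip(times, events):
        if e_i == cause:
            cif += surv * (1.0 / at_risk)   # S(t-) * cause-specific hazard
        if e_i != 0:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return cif

# Toy data: cause 1 vs competing cause 2, one censored subject (0)
t = [2, 3, 5, 7, 8]
e = [1, 2, 1, 0, 2]
print(cumulative_incidence(t, e, 1))  # 0.4
print(cumulative_incidence(t, e, 2))  # 0.6
```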

  1. Integrated water resources management of the Ichkeul basin taking into account the durability of its wetland ecosystem using WEAP model

    Science.gov (United States)

    Shabou, M.; Lili-Chabaane, Z.; Gastli, W.; Chakroun, H.; Ben Abdallah, S.; Oueslati, I.; Lasram, F.; Laajimi, R.; Shaiek, M.; Romdhane, M. S.; Mnajja, A.

    2012-04-01

    The conservation of coastal wetlands in the Mediterranean area generally conflicts with development issues. This is the case in Tunisia, where precipitation is irregular in time and space. For equity of water use (drinking, irrigation), planning at the national level allows water to be transferred from regions rich in water resources to poor ones. This plan was initially drawn up in Tunisia without taking into account wetland ecosystems and their specificities. The main purpose of this study is to find a model able to simultaneously integrate the available resources and the various water demands within a watershed, while taking into account the durability of the related wetland ecosystems. This is the case for the Ichkeul basin, situated in northern Tunisia, with an area of 2080 km2 and rainfall of about 600 mm/year. Downstream of this basin, the Ichkeul Lake is characterized by a double seasonal alternation: high water levels and low salinity in winter and spring, and low water levels and high salinity in summer and autumn, which makes the Ichkeul an exceptional ecosystem. The originality of this lake-marsh hydrological system is related to the presence of aquatic vegetation in the lake and of an especially rich and varied hygrophilic vegetation in the marshes, which constitutes the main source of food for large migrating water birds. After the construction of three dams on the principal rivers feeding the Ichkeul Lake, aimed particularly at supplying local irrigation and the drinking-water demand of cities in the north and east of Tunisia, freshwater inflow to the lake was greatly reduced, causing a hydrological disequilibrium that influences the ecological conditions of the different species.
Therefore, to ensure the sustainability of water resources management, it is important to find a trade-off between the existing hydrological and ecological systems, taking into account the water demands of the various users (drinking, irrigation, fishing, and

  2. Taking into account the temporal variation of hydraulic conductivity when calibrating overland flow models on tilled fields.

    Science.gov (United States)

    Chahinian, N.; Andrieux, P.; Moussa, R.; Voltz, M.

    2003-04-01

    Tillage operations are known to change the structure of agricultural soils. In this paper we seek a calibration methodology that takes into account the impact of tillage on overland flow simulation at the scale of a tilled field located in southern France. The study site is a 3240 m2 vineyard equipped with a Venturi flume and a tipping-bucket rain gauge. Twenty monitored rainfall events were used for the study, equally divided between calibration and validation sets. The overland flow model consists of a modified Green & Ampt equation to simulate infiltration, a surface detention module, and an overland flow routing module based on the unit hydrograph concept. The model parameters calibrated for each event are the saturated hydraulic conductivity (Ks) and the random roughness. The calibrated Ks values decreased monotonically with the total amount of rainfall since tillage. No clear relationship was observed between the random roughness and cumulative rainfall. A regression curve was fitted to the calibrated Ks values; this curve was then used to determine Ks for any rainfall event given the total rainfall since tillage. Fairly good agreement was observed between the simulated and measured hydrographs of the calibration set. The validation results were relatively poorer but remain satisfactory given the uncertainties related to the initial soil moisture conditions. The calibration methodology developed seems robust and may be transposed to other sites.
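The standard ponded Green & Ampt relation gives cumulative infiltration F implicitly via F − ψΔθ·ln(1 + F/(ψΔθ)) = Ks·t, which is easily solved by fixed-point iteration. This sketch shows the standard form (the paper uses a modified version); the parameter values are illustrative, not the study's calibrated ones.

```python
import math

def green_ampt_cumulative_infiltration(ks, psi, dtheta, t, tol=1e-10):
    """Solve F - psi*dtheta*ln(1 + F/(psi*dtheta)) = Ks*t for F.

    Fixed-point iteration F <- Ks*t + psi*dtheta*ln(1 + F/(psi*dtheta)),
    which converges since the map's derivative is < 1. Consistent units
    (here mm and hours).
    """
    s = psi * dtheta                      # suction-deficit product [mm]
    F = max(ks * t, tol)                  # starting guess
    for _ in range(200):
        F_new = ks * t + s * math.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

# Illustrative parameters: Ks = 5 mm/h, psi = 110 mm, dtheta = 0.3, t = 2 h
F = green_ampt_cumulative_infiltration(5.0, 110.0, 0.3, 2.0)
print(F)  # cumulative infiltration [mm]; exceeds Ks*t while ponded
```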

  3. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system.

    Science.gov (United States)

    Beckon, William N

    2016-07-01

    For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
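The lag-selection method described above (scan an array of candidate lags, keep the one giving the best regression between exposure and tissue concentration) can be demonstrated on synthetic data where the true lag is known. The series construction below is an invented stand-in for the selenium data.

```python
import numpy as np

rng = np.random.default_rng(4)
n, true_lag = 300, 12

# Ambient concentration as a slowly varying series (random walk),
# with tissue concentration responding after `true_lag` time steps.
full = rng.normal(size=n + true_lag).cumsum() * 0.2
tissue = 2.0 * full[:n] + rng.normal(0, 0.02, n)
water = full[true_lag:]            # aligned so tissue lags water

def r_squared(x, y):
    r = np.corrcoef(x, y)[0, 1]
    return r * r

# Scan candidate lags; keep the one with the best regression fit
best = max(range(30), key=lambda k: r_squared(water[:n - k], tissue[k:]))
print(best)  # recovers true_lag = 12
```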

  4. Two models to compute an adjusted Green Vegetation Fraction taking into account the spatial variability of soil NDVI

    Science.gov (United States)

    Montandon, L. M.; Small, E.

    2008-12-01

    The green vegetation fraction (Fg) is an important climate and hydrologic model parameter. The commonly-used Fg model is a simple linear mixing of two NDVI end-members: bare soil NDVI (NDVIo) and full vegetation NDVI (NDVI∞). NDVI∞ is generally set as a percentile of the historical maximum NDVI for each land cover. This approach works well for areas where Fg reaches full cover (100%). Because many biomes do not reach Fg=0, however, NDVIo is often determined as a single invariant value for all land cover types. In general, it is selected among the lowest NDVI observed over bare or desert areas, yielding NDVIo close to zero. There are two issues with this approach: the large-scale variability of soil NDVI is ignored, and observations on a wide range of soils show that soil NDVI is often larger. Here we introduce and test two new approaches to computing Fg that take into account the spatial variability of soil NDVI. The first approach uses a global soil NDVI database and time series of MODIS NDVI data over the conterminous United States to constrain possible soil NDVI values over each pixel. Fg is computed using a subset of the soils database that respects the linear mixing model condition NDVIo≤NDVIh, where NDVIh is the pixel historical minimum. The second approach uses an empirical soil NDVI model that combines information on soil organic matter content and texture to infer soil NDVI. The U.S. General Soil Map (STATSGO2) database is used as input for spatial soil properties. Using in situ measurements of soil NDVI from sites that span a range of land cover types, we test both models and compare their performance to the standard Fg model. We show that our models adjust the temporal Fg estimates by 40-90% depending on the land cover type and amplitude of the seasonal NDVI signal. Using MODIS NDVI and soil maps over the conterminous U.S., we also study the spatial distribution of Fg adjustments in February and June 2008. We show that the standard Fg method
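The linear mixing model is Fg = (NDVI − NDVIo)/(NDVI∞ − NDVIo), clipped to [0, 1]. The sketch below shows how the same observed NDVI yields a different Fg when the soil end-member varies per pixel, which is the crux of the abstract's argument (the NDVI values are illustrative):

```python
import numpy as np

def green_vegetation_fraction(ndvi, ndvi_soil, ndvi_full):
    """Linear-mixing green vegetation fraction with a per-pixel soil
    end-member: Fg = (NDVI - NDVIo) / (NDVIinf - NDVIo), clipped to [0, 1]."""
    fg = (ndvi - ndvi_soil) / (ndvi_full - ndvi_soil)
    return np.clip(fg, 0.0, 1.0)

# Same observed NDVI, two different soil backgrounds
print(green_vegetation_fraction(0.45, 0.05, 0.85))  # ≈ 0.50
print(green_vegetation_fraction(0.45, 0.20, 0.85))  # ≈ 0.38: brighter soil, lower Fg
```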

  5. Modelling of the physico-chemical behaviour of clay minerals with a thermo-kinetic model taking into account particles morphology in compacted material.

    Science.gov (United States)

    Sali, D.; Fritz, B.; Clément, C.; Michau, N.

    2003-04-01

    Modelling of fluid-mineral interactions is widely used in Earth Sciences studies to better understand the physicochemical processes involved and their long-term effect on the behaviour of materials. Numerical models simplify the processes but try to preserve their main characteristics. The modelling results therefore strongly depend on the quality of the data describing the initial physicochemical conditions for rock materials, fluids and gases, and on a realistic representation of the processes. Current geochemical models do not properly take into account rock porosity and permeability or the particle morphology of clay minerals. In compacted materials like those considered as barriers in waste repositories, low-permeability rocks like mudstones or compacted powders will be used: they contain mainly fine particles, and the geochemical models used for predicting their interactions with fluids tend to misjudge their surface areas, which are fundamental parameters in kinetic modelling. The purpose of this study was to improve the treatment of particle morphology in the thermo-kinetic code KINDIS and the reactive transport code KIRMAT. A new function was integrated into these codes that treats the reaction surface area as a volume-dependent parameter, and the calculated evolution of the mass balance in the system was coupled with the evolution of reactive surface areas. We carried out application exercises for the numerical validation of these new versions of the codes and compared the results with those of the pre-existing thermo-kinetic code KINDIS. Several points are highlighted. Taking the evolution of the reactive surface area into account during simulation modifies the predicted mass transfers related to fluid-mineral interactions. Different secondary mineral phases are also observed during modelling. The evolution of the reactive surface parameter helps to resolve the competition effects between the different phases present in the system which are all able to fix the chemical
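The actual coupling implemented in KINDIS/KIRMAT is not detailed here; as a minimal illustration of a volume-dependent reactive surface area, a geometric sketch assuming shape-preserving particles (all names and numbers are hypothetical):

```python
def reactive_surface_area(s0, v0, v):
    """If particles keep their shape while dissolving or growing,
    surface area scales as the 2/3 power of volume."""
    return s0 * (v / v0) ** (2.0 / 3.0)

def dissolution_rate(k, surface_area):
    """Kinetic rate proportional to the current reactive surface area."""
    return k * surface_area

# A particle population shrinking to half its volume loses reactive
# surface, and with it, reactive capacity (units hypothetical).
s_initial = reactive_surface_area(10.0, 1.0, 1.0)
s_shrunk = reactive_surface_area(10.0, 1.0, 0.5)
```

Coupling the rate to a surface area that evolves with the mass balance, rather than holding it fixed, is what changes the predicted mass transfers in the abstract above.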

  6. Infrastrukturel Accountability

    DEFF Research Database (Denmark)

    Ubbesen, Morten Bonde

    How does one credibly account for something as diffuse as an entire nation's greenhouse gas emissions? This dissertation investigates that question in an ethnographic study of how Denmark's greenhouse gas inventory is compiled, reported and audited. The study draws on concepts and understandings from Science & Technology Studies, and contributes the concept of 'infrastructural accountability' as a new way of understanding and thinking about the work by which highly specialized practices document and account for the quality of their work.

  8. Emerging accounting trends accounting for leases.

    Science.gov (United States)

    Valletta, Robert; Huggins, Brian

    2010-12-01

    A new model for lease accounting can have a significant impact on hospitals and healthcare organizations. The new approach proposes a "right-of-use" model that involves complex estimates and significant administrative burden. Hospitals and health systems that draw heavily on lease arrangements should start preparing for the new approach now even though guidance and a final rule are not expected until mid-2011. This article highlights a number of considerations from the lessee point of view.
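One reason the right-of-use model brings new estimates and administrative burden is that fixed lease payments must be discounted and capitalized; a rough sketch of that arithmetic (the payment, rate, and term are hypothetical figures, not values from the proposed guidance):

```python
def lease_liability(annual_payment, rate, years):
    """Present value of fixed annual lease payments (ordinary annuity):
    the amount capitalized as the initial right-of-use asset."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# A 5-year lease of 100,000 per year discounted at 6%.
rou_asset = lease_liability(100_000, 0.06, 5)
```

The discount rate itself is one of the "complex estimates" the abstract mentions: small changes in it move the capitalized asset and liability materially.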

  9. A simple arc column model that accounts for the relationship between voltage, current and electrode gap during VAR

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, R.L. [Sandia National Labs., Albuquerque, NM (United States). Liquid Metal Processing Lab.

    1997-02-01

    Mean arc voltage is a process parameter commonly used in vacuum arc remelting (VAR) control schemes. The response of this parameter to changes in melting current (I) and electrode gap (ge) at constant pressure may be accurately described by an equation of the form V = V0 + c1*ge*I + c2*ge^2 + c3*I^2, where c1, c2 and c3 are constants, and where the non-linear terms generally constitute a relatively small correction. If the non-linear terms are ignored, the equation has the form of Ohm's law with a constant offset (V0), with c1*ge playing the role of resistance. This implies that the arc column may be treated approximately as a simple resistor during constant-current VAR, the resistance changing linearly with ge. The VAR furnace arc is known to originate from multiple cathode spot clusters situated randomly on the electrode tip surface. Each cluster marks a point of exit for conduction electrons leaving the cathode surface and entering the electrode gap. Because the spot clusters are highly localized on the cathode surface, each gives rise to an arc column that may be considered to operate independently of the other local arc columns. This approximation is used to develop a model that accounts for the observed arc voltage dependence on electrode gap at constant current. Local arc column resistivity is estimated from elementary plasma physics and used to test the model for consistency by using it to predict the local column heavy particle density. Furthermore, it is shown that the local arc column resistance increases as particle density increases. This is used to account for the common observation that the arc stiffens with increasing current, i.e. the arc voltage becomes more sensitive to changes in electrode gap as the melting current is increased. This explains why arc voltage is an accurate electrode gap indicator for high-current VAR processes but not low-current VAR processes.
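The quoted voltage equation is linear in its coefficients, so V0 and c1-c3 can be recovered from melt records by ordinary least squares; a sketch with synthetic, noise-free data (the units and coefficient values are hypothetical, not from the report):

```python
import numpy as np

def fit_arc_voltage(V, g, I):
    """Least-squares fit of V = V0 + c1*g*I + c2*g**2 + c3*I**2."""
    A = np.column_stack([np.ones_like(V), g * I, g ** 2, I ** 2])
    coef, *_ = np.linalg.lstsq(A, V, rcond=None)
    return coef  # [V0, c1, c2, c3]

# Synthetic melt records generated from assumed coefficients.
rng = np.random.default_rng(0)
g = rng.uniform(0.5, 2.0, 200)   # electrode gap (hypothetical units)
I = rng.uniform(3.0, 8.0, 200)   # melting current (hypothetical units)
true_coef = np.array([20.0, 1.5, 0.3, 0.05])
V = true_coef[0] + true_coef[1] * g * I + true_coef[2] * g ** 2 + true_coef[3] * I ** 2
fitted = fit_arc_voltage(V, g, I)

# Dropping the small non-linear terms leaves V ~ V0 + (c1*g) * I:
# Ohm's law with offset V0, with the resistance c1*g growing
# linearly with electrode gap, as described in the text.
```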

  10. Proper Treatment of Acute Mesenteric Ischemia

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Kwan; Han, Young Min [Dept. of Radiology, Chonbuk National University Hospital and School of Medicine, Jeonju (Korea, Republic of); Kwak, Hyo Sung [Research Institute of Clinical Medicine, Chonbuk National University Hospital and School of Medicine, Jeonju (Korea, Republic of); Yu, Hee Chul [Dept. of Radiology, Chonbuk National University Hospital and School of Medicine, Jeonju (Korea, Republic of)

    2011-10-15

    To evaluate the effectiveness of treatment options for acute mesenteric ischemia and to establish proper treatment guidelines. From January 2007 to May 2010, 14 patients (13 men and 1 woman, mean age: 52.1 years) with acute mesenteric ischemia were enrolled in this study. All of the lesions were detected by CT scan and angiography. Initially, 4 patients underwent conservative treatment and 11 patients were managed by endovascular treatment. We evaluated the therapeutic success and survival rate for each patient. The causes of ischemia included thromboembolism in 6 patients and dissection in 8 patients. Nine patients showed bowel ischemia on CT scans; 4 dissection patients underwent conservative treatment, 3 patients had recurring symptoms, and 5 dissection patients underwent endovascular treatment. The overall success and survival rate was 100%. In the 6 thromboembolism patients, however, the overall success rate was 83% and the survival rate was 40%. The choice of 20 hours as the critical time within which the procedure should ideally be performed was statistically significant (p = 0.0476). A percutaneous endovascular procedure is an effective treatment for acute mesenteric ischemia, especially in patients treated within 20 hours. However, further study and long-term follow-up are needed.

  11. Proper motions of Upper Sco T-type candidates

    CERN Document Server

    Lodieu, N; Dobbie, P D

    2013-01-01

    We present new z- and H-band photometry and proper motion measurements for the five candidate very-low-mass T-type objects we recently proposed as members of the nearest OB association to the Sun, Upper Scorpius. These new data fail to corroborate our prior conclusions regarding their spectral types and affiliation with the Upper Scorpius population. We conclude that we may be in the presence of a turnover in the mass function of Upper Sco taking place below 10-4 Jupiter masses, depending on the age assigned to Upper Sco and the models used.

  12. The Objective Model of Accounting Information Disclosure under Internet Conditions

    Institute of Scientific and Technical Information of China (English)

    张美红

    2001-01-01

    The current disclosure model for listed companies' accounting information has greatly restricted the realization of accounting objectives. The era when periodic accounting information can be presented through the internet is not far away. Based on trends in the development of internet technology and the requirements for realizing accounting objectives, the author attempts to systematically construct the objective model for the disclosure of financial accounting information under internet conditions and proposes measures to be taken to realize this model.

  13. Dorsoventral and Proximodistal Hippocampal Processing Account for the Influences of Sleep and Context on Memory (Re)consolidation: A Connectionist Model

    Directory of Open Access Journals (Sweden)

    Justin Lines

    2017-01-01

    Full Text Available The context in which learning occurs is sufficient to reconsolidate stored memories, and neuronal reactivation may be crucial to memory consolidation during sleep. The mechanisms of context-dependent and sleep-dependent memory (re)consolidation are unknown but involve the hippocampus. We simulated memory (re)consolidation using a connectionist model of the hippocampus that explicitly accounted for its dorsoventral organization and for CA1 proximodistal processing. Replicating human and rodent (re)consolidation studies yielded the following results. (1) Semantic overlap between memory items and extraneous learning was necessary to explain experimental data and depended crucially on the recurrent networks of dorsal but not ventral CA3. (2) Stimulus-free, sleep-induced internal reactivations of memory patterns produced heterogeneous recruitment of memory items and protected memories from subsequent interference. These simulations further suggested that the decrease in memory resilience when subjects were not allowed to sleep following learning was primarily due to extraneous learning. (3) Partial exposure to the learning context during simulated sleep (i.e., targeted memory reactivation) uniformly increased memory item reactivation and enhanced subsequent recall. Altogether, these results show that the dorsoventral and proximodistal organization of the hippocampus may be important components of the neural mechanisms for context-based and sleep-based memory (re)consolidation.

  14. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    Science.gov (United States)

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Analytical modeling of demagnetizing effect in magnetoelectric ferrite/PZT/ferrite trilayers taking into account a mechanical coupling

    Science.gov (United States)

    Loyau, V.; Aubert, A.; LoBue, M.; Mazaleyrat, F.

    2017-03-01

    In this paper, we investigate the demagnetizing effect in ferrite/PZT/ferrite magnetoelectric (ME) trilayer composites consisting of commercial PZT discs bonded by epoxy layers to Ni-Co-Zn ferrite discs made by a reactive Spark Plasma Sintering (SPS) technique. ME voltage coefficients (transversal mode) were measured on ferrite/PZT/ferrite trilayer ME samples with different thicknesses or phase volume ratios in order to highlight the influence of the magnetic field penetration governed by these geometrical parameters. Experimental ME coefficients and voltages were compared to analytical calculations using a quasi-static model. Theoretical demagnetizing factors of two magnetic discs that interact in parallel magnetic structures were derived from an analytical calculation based on a superposition method. These factors were introduced into the ME voltage calculations, which take account of the demagnetizing effect. To fit the experimental results, a mechanical coupling factor was also introduced into the theoretical formula. This reflects the differential strain that exists in the ferrite and PZT layers due to shear effects near the edge of the ME samples and within the bonding epoxy layers. From this study, an optimization of the magnitude of the ME voltage is obtained. Lastly, an analytical calculation of the demagnetizing effect was conducted for layered ME composites containing higher numbers of alternating layers (n ≥ 5). The advantage of such a structure is then discussed.

  16. View Trade Deficit In Proper Light

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    On February 14, the Office of the U.S. Trade Representative issued a report entitled "U.S.-China Trade Relations: Entering a New Phase of Greater Accountability and Enforcement." It is the first comprehensive statement of U.S. trade policy toward China since China joined the World Trade Organization in 2001. U.S. Trade Representative Rob Portman, who submitted the report to Congress, says, "Our bilateral trade relationship with China today lacks equity, durability and balance in the opportunities it prov...

  17. Developing a Global Model of Accounting Education and Examining IES Compliance in Australia, Japan, and Sri Lanka

    Science.gov (United States)

    Watty, Kim; Sugahara, Satoshi; Abayadeera, Nadana; Perera, Luckmika

    2013-01-01

    The introduction of International Education Standards (IES) signals a clear move by the International Accounting Education Standards Board (IAESB) to ensure high quality standards in professional accounting education at a global level. This study investigated how IES are perceived and valued by member bodies and academics in three countries:…

  18. Self-consistent modeling of induced magnetic field in Titan's atmosphere accounting for the generation of Schumann resonance

    Science.gov (United States)

    Béghin, Christian

    2015-02-01

    This model is developed in the framework of physical mechanisms proposed in previous studies accounting for the generation and observation of an atypical Schumann Resonance (SR) during the descent of the Huygens Probe in Titan's atmosphere on 14 January 2005. While Titan stays inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an Extremely Low Frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface region between the impacting magnetospheric plasma and Titan's ionosphere. The stronger induced magnetic field components are focused within field-aligned arc-like structures hanging down from the current sheets, with a minimum amplitude of about 0.3 nT throughout the ramside hemisphere from the ionopause down to the Moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be allowed by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. However, a first cause of diffusion of the ELF magnetic components is probably the feeding of one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced through the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with the measurements of the Huygens wave-field strength. Pending future in situ exploration of Titan's lower atmosphere and surface, the Huygens data are the only experimental means available to date for constraining the proposed model.

  19. An Extended Normalization Model of Attention Accounts for Feature-Based Attentional Enhancement of Both Response and Coherence Gain.

    Science.gov (United States)

    Schwedhelm, Philipp; Krishna, B Suresh; Treue, Stefan

    2016-12-01

    Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively.
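The distinction between response gain and input gain can be illustrated on a generic Naka-Rushton contrast-response function, a common choice in this literature; the function and parameter values here are illustrative assumptions, not the authors' fitted model:

```python
def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Generic contrast-response function R(c) = Rmax * c^n / (c^n + c50^n)."""
    return r_max * c ** n / (c ** n + c50 ** n)

def with_response_gain(c, gain):
    # Attention multiplies the response: the curve's asymptote rises.
    return gain * naka_rushton(c)

def with_input_gain(c, gain):
    # Attention multiplies the effective input: the curve shifts
    # leftward, but the asymptote is unchanged.
    return naka_rushton(gain * c)
```

At high stimulus strength only response gain raises performance above the unattended asymptote, which is why psychometric functions can dissociate the two; the abstract's finding of a mixture corresponds to attention scaling both the input and the output of such a function.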

  1. Bayesian model discrimination for glucose-insulin homeostasis

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Brooks, Stephen P.; Højbjerre, Malene

    the reformulation of existing deterministic models as stochastic state space models which properly account for both measurement and process variability. The analysis is further enhanced by Bayesian model discrimination techniques and model-averaged parameter estimation which fully accounts for model as well...

  2. Stochastic inverse modelling of hydraulic conductivity fields taking into account independent stochastic structures: A 3D case study

    Science.gov (United States)

    Llopis-Albert, C.; Capilla, J. E.

    2010-09-01

    Major factors affecting groundwater flow through fractured rocks include the geometry of each fracture, its properties, and the fracture-network connectivity, together with the porosity and conductivity of the rock matrix. When modelling fractured rocks this translates into characterizing the hydraulic conductivity (K) as adequately as possible, despite its high heterogeneity. This links with the main goal of this paper, which is to present an improvement of a stochastic inverse model, named the Gradual Conditioning (GC) method, to better characterise K in a fractured rock medium by considering different K stochastic structures, belonging to independent K statistical populations (SPs) of fracture families and the rock matrix, each with its own statistical properties. The new methodology applies independent deformations to each SP during the conditioning process for constraining stochastic simulations to data. This allows the statistical properties of each SP to be preserved during the iterative optimization process. To date, no other stochastic inverse modelling technique with the full capabilities implemented in the GC method is able to work with a domain covered by several different stochastic structures while taking into account the independence of the different populations. The GC method is based on a procedure that gradually changes an initial K field, which is conditioned only to K data, to approximate the reproduction of other types of information, i.e., piezometric head and solute concentration data. The approach is applied to the Äspö Hard Rock Laboratory (HRL) in Sweden, where, since the mid-nineties, many experiments have been carried out to increase confidence in alternative radionuclide transport modelling approaches. Because the description of fracture locations and the distribution of hydrodynamic parameters within them are not accurate enough, we address the
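A minimal sketch of the gradual-deformation principle that methods of this family build on: combining two independent standard Gaussian fields with a rotation parameter leaves the result standard Gaussian, so each statistical population can be deformed independently while its prior statistics are preserved. The GC method's actual update and optimization loop are more involved; the parameter values below are arbitrary:

```python
import numpy as np

def gradual_deformation(z1, z2, theta):
    """Combine two independent standard Gaussian fields. For any theta
    the result is again standard Gaussian (cos^2 + sin^2 = 1), so the
    prior statistics survive while theta is tuned against data."""
    return np.cos(theta) * z1 + np.sin(theta) * z2

rng = np.random.default_rng(1)
n = 100_000
# One deformation parameter per independent statistical population,
# e.g. the rock matrix and one fracture family.
z_matrix = gradual_deformation(rng.standard_normal(n), rng.standard_normal(n), 0.4)
z_fracture = gradual_deformation(rng.standard_normal(n), rng.standard_normal(n), 1.1)
```

In a GC-style scheme, each population's parameter would be optimized iteratively against piezometric head and concentration data while these first- and second-order statistics stay fixed.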

  3. Large-scale determinants of diversity across Spanish forest habitats: accounting for model uncertainty in compositional and structural indicators

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Quller, E.; Torras, O.; Alberdi, I.; Solana, J.; Saura, S.

    2011-07-01

    An integral understanding of forest biodiversity requires the exploration of the many aspects it comprises and of the numerous potential determinants of their distribution. The landscape ecological approach provides a necessary complement to conventional local studies that focus on individual plots or forest ownerships. However, most previous landscape studies used equally sized cells as units of analysis to identify the factors affecting forest biodiversity distribution. Stratification of the analysis by habitats with a relatively homogeneous forest composition might be more adequate to capture the underlying patterns associated with the formation and development of a particular ensemble of interacting forest species. Here we used a landscape perspective in order to improve our understanding of the influence of large-scale explanatory factors on forest biodiversity indicators in Spanish habitats, covering a wide latitudinal and altitudinal range. We considered six forest biodiversity indicators estimated from more than 30,000 field plots in the Spanish national forest inventory, distributed across 213 forest habitats over 16 Spanish provinces. We explored biodiversity response to various environmental (climate and topography) and landscape configuration (fragmentation and shape complexity) variables through multiple linear regression models (built and assessed through the Akaike Information Criterion). In particular, we took into account the inherent model uncertainty when dealing with a complex and large set of variables, and considered different plausible models and their probability of being the best candidate for the observed data. Our results showed that compositional indicators (species richness and diversity) were mostly explained by environmental factors. Models for structural indicators (standing deadwood and stand complexity) had the worst fits and selection uncertainties, but did show significant associations with some configuration metrics. In general
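Model uncertainty under the Akaike Information Criterion is commonly handled with Akaike weights, which give each candidate model's relative probability of being the best of the set; a minimal sketch (the AIC values are made up, not results from the study):

```python
import math

def akaike_weights(aic_values):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i is each model's AIC difference from the minimum."""
    best = min(aic_values)
    rel_likelihoods = [math.exp(-(a - best) / 2.0) for a in aic_values]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

weights = akaike_weights([100.0, 102.0, 110.0])
```

Weights near 1 indicate a clear best model; spread-out weights signal exactly the selection uncertainty reported for the structural indicators.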

  4. Response function theories that account for size distribution effects - A review. [mathematical models concerning composite propellant heterogeneity effects on combustion instability

    Science.gov (United States)

    Cohen, N. S.

    1980-01-01

    The paper presents theoretical models developed to account for the heterogeneity of composite propellants in expressing the pressure-coupled combustion response function. It is noted that the model of Lengelle and Williams (1968) furnishes a viable basis to explain the effects of heterogeneity.

  5. Molecular weight/branching distribution modeling of low-density polyethylene accounting for topological scission and combination termination in continuous stirred tank reactor

    NARCIS (Netherlands)

    Yaghini, N.; Iedema, P.D.

    2014-01-01

    We present a comprehensive model to predict the molecular weight distribution (MWD) and branching distribution of low-density polyethylene (ldPE) for a free radical polymerization system in a continuous stirred tank reactor (CSTR). The model accounts for branching, by branching moment or ps

  6. Proper Motions of Water Masers in Circumstellar Shells

    Science.gov (United States)

    Marvel, K. B.; Diamond, P. J.; Kemball, A. J.

    We present proper motion measurements of circumstellar water masers obtained with the VLBA. The objects observed include S Persei, VX Sagittarii, U Herculis, VY Canis Majoris, NML Cygni, IK Tauri and RX Bootis. Results of the observations and modeling indicate that the water masers exist in a kinematically complex region of the circumstellar envelope, which is not well fit by the standard model of a uniformly expanding spherical wind. Attempts at fitting an ellipsoidal geometric distribution with a variety of kinematic models are presented. Estimates for the distances of the stars are also discussed. A change in position of the maser spots as a function of velocity has been measured. This effect may be used to place limits on accelerations in the masing gas.

  7. High proper motion X-ray binaries from the Yale Southern Proper Motion Survey

    CERN Document Server

    Maccarone, Thomas J; Casetti-Dinescu, Dana I

    2014-01-01

    We discuss the results of cross-correlating catalogs of bright X-ray binaries with the Yale Southern Proper Motion catalog (version 4.0). Several objects already known to have large proper motions from Hipparcos are recovered. Two additional objects are found which show substantial proper motions, both of which are unusual in their X-ray properties. One is IGR J17544-2619, one of the supergiant fast X-ray transients. Assuming the quoted distances in the literature for this source of about 3 kpc are correct, this system has a peculiar velocity of about 275 km/sec -- greater than the velocity of a Keplerian orbit at its location in the Galaxy, and in line with the expectations formed from suggestions that the supergiant fast X-ray transients should be highly eccentric. We discuss the possibility that these objects may help explain the existence of short gamma-ray bursts outside the central regions of galaxies. The other is the source 2A 1822-371, which is a member of the small class of objects which are low mas...

  8. Accounting for the Uncertainty Related to Building Occupants with Regards to Visual Comfort: A Literature Survey on Drivers and Models

    Directory of Open Access Journals (Sweden)

    Valentina Fabi

    2016-02-01

    Full Text Available The interactions between building occupants and control systems have a strong influence on energy consumption and on indoor environmental quality. Looking ahead to a future of "nearly-zero" energy buildings, it is crucial to analyse these energy-related interactions in depth in order to predict realistic energy use during the design stage. Since the reaction to thermal, acoustic, or visual stimuli is not the same for every human being, monitoring behaviour inside buildings is an essential step in ascertaining differences in energy consumption related to different interactions. Reliable information concerning occupants' behaviour in a building could contribute to a better evaluation of building energy performance and design robustness, as well as supporting the development of occupants' education in energy awareness. The present literature survey enlarges our understanding of which environmental conditions influence occupants' manual control of systems in offices and, by consequence, the energy consumption. The purpose of this study was to investigate the possible drivers of light-switching in order to model occupant behaviour in office buildings. The probability of switching lighting systems on or off was related to occupancy and differentiated for arrival, intermediate, and departure periods. The switching probability has been reported to be higher at entering or leaving times in relation to contextual variables. In the analysis of switch-on actions, users were often clustered between those who take the daylight level into account and switch on lights only if necessary, and those who totally disregard natural lighting. This underlines how individuality is at the base of the definition of the different types of users.
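Drivers like daylight availability are typically turned into behavioural models via a logistic probability of switching on at arrival; the functional form and coefficients below are purely illustrative assumptions, not values from the surveyed studies:

```python
import math

def switch_on_probability(workplane_lux, k=0.01, half_point_lux=250.0):
    """Hypothetical logistic driver model: the darker the workplane at
    arrival, the likelier the occupant is to switch the lights on.
    k controls how sharply probability falls with daylight level."""
    return 1.0 / (1.0 + math.exp(k * (workplane_lux - half_point_lux)))
```

The user clusters described above map naturally onto the steepness parameter: a daylight-aware user has a large k (probability tracks illuminance closely), while a user who disregards daylight behaves like k near zero (probability insensitive to illuminance).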

  9. The Perception of the Accounting Students on the Image of the Accountant and the Accounting Profession

    Directory of Open Access Journals (Sweden)

    Lucian Cernuşca

    2015-01-01

Full Text Available This study aims to present the perception of accounting students of the image of the accountant and the accounting profession, thus contributing to a better understanding of the option for the field of accounting and the motivations for choosing this profession. The paper consists of the following parts: introduction, literature review, research methodology, research findings, conclusions, and bibliography. The accounting profession must be aligned to the current conditions the Romanian accounting system is going through: harmonization with IFRS and European regulations, the development of information technologies, and the transition to the digital era. The role of the accountant is changing from that of a simple number processor to a modern professional who is part of the managerial team, provides strategic and financial advice, and offers effective solutions for the proper functioning of the organization, the modern stereotype involving creativity in accounting activities. The research aims at understanding the role of the accounting profession as a social identity and as a social phenomenon, and the implications for academia and professional bodies.

  10. Proper Nouns in Translation: Should They Be Translated?

    Directory of Open Access Journals (Sweden)

    Rouhollah Zarei

    2014-11-01

    Full Text Available The translation of proper nouns is not as easy as that of other parts of speech as this is more challenging for certain reasons. The present article presents a descriptive study of proper nouns in translation, scrutinizing the challenges and exploring the solutions. Building on some scholars’ approach and suggestions from other researchers, the article clarifies the nature and problems of proper nouns in translation; it seeks to answer three questions: 1 Should proper nouns be translated? 2 What are the problems on the way of translation of the proper nouns? 3 How can the translator overcome such problems? Moreover, strategies applied by the researchers to make their translation easier are also discussed. It follows that translating proper nouns is not simple and there is little flexibility about translating proper nouns. Keywords: proper nouns, translation, strategies

  11. 生态环境经济核算模型及其研究%Study on the Accounting Model of Ecological Environment

    Institute of Scientific and Technical Information of China (English)

    刘颖

    2005-01-01

This paper examines environmental problems with reference to the "System of Integrated Environmental and Economic Accounting" (SEEA) promulgated by the United Nations in 1993. The authors set up a model of environmental and economic accounting after discussing and modifying the concepts of production, assets, and environmental costs. Taking the ecological environment of Chongqing as an example, an environmental and economic analysis is carried out. The results of the model reflect the sustainability of the ecological environment and environmental costs, directly or indirectly, in the macro-economy.

  12. Physically-based modifications to the Sacramento Soil Moisture Accounting model. Part A: Modeling the effects of frozen ground on the runoff generation process

    Science.gov (United States)

    Koren, Victor; Smith, Michael; Cui, Zhengtao

    2014-11-01

This paper presents the first of two physically-based modifications to a widely-used and well-validated hydrologic precipitation-runoff model. Here, we modify the Sacramento Soil Moisture Accounting (SAC-SMA) model to include a physically-based representation of the effects of freezing and thawing soil on the runoff generation process. This model is called the SAC-SMA Heat Transfer model (SAC-HT). The frozen ground physics are taken from the Noah land surface model, which serves as the land surface component of several National Center for Environmental Prediction (NCEP) numerical weather prediction models. SAC-HT requires a boundary condition of the soil temperature at the bottom of the soil column (a climatic annual air temperature is typically used) and parameters derived from readily available soil texture data. A noteworthy feature of SAC-HT is that the frozen ground component needs no parameter calibration. SAC-HT was tested at 11 sites in the U.S. for soil temperature, one site in Russia for soil temperature and soil moisture, eight basins in the upper Midwest for the effects of frozen ground on streamflow, and one location for frost depth. High correlation coefficients for simulated soil temperature at three depths at 11 stations were achieved. Multi-year simulations of soil moisture and soil temperature agreed very well at the Valdai, Russia test location. In eight basins affected by seasonally frozen soil in the upper Midwest, SAC-HT provided improved streamflow simulations compared to SAC-SMA when both models used a priori parameters. Further improvement was gained through calibration of the non-frozen ground a priori parameters. Frost depth computed by SAC-HT compared well with observed values in the Root River basin in Minnesota.
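The soil column with a fixed temperature at its base, as described above, can be illustrated with a one-dimensional heat-diffusion step. This is a generic explicit finite-difference sketch, not the actual Noah/SAC-HT heat transfer scheme; the layer grid, diffusivity, and treatment of the surface node are assumptions:

```python
def step_soil_temperature(T, dz, dt, alpha, T_bottom):
    """One explicit finite-difference step of 1-D heat diffusion in a soil column.

    T        : list of layer temperatures, surface node first (deg C)
    dz       : layer thickness (m)
    dt       : time step (s)
    alpha    : thermal diffusivity (m^2/s); stability needs alpha*dt/dz**2 <= 0.5
    T_bottom : fixed (Dirichlet) boundary at the column base,
               e.g. a climatic annual air temperature
    """
    r = alpha * dt / dz ** 2
    assert r <= 0.5, "explicit scheme unstable for this dt/dz combination"
    new = T[:]  # surface node new[0] is left unchanged: assumed forced externally
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
    new[-1] = T_bottom  # fixed bottom boundary condition
    return new
```

Repeated over many steps, interior layers relax between the surface forcing and the climatic bottom boundary, which is the role the bottom temperature plays in SAC-HT.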

  13. Accounting concept of inventories in postindustrial economy

    Directory of Open Access Journals (Sweden)

    Pravdyuk N.L.

    2017-06-01

Full Text Available The accounting of inventories has undergone significant changes over a relatively short period of time: the scientific picture of their definition and classification, measurement, and write-off as reflected in the financial statements has changed. However, these changes have happened without proper interpretation and systematic analysis. Although, at least in general terms, inventories in Ukraine are accounted for according to IFRS, this causes some obstacles to the objective reflection of the working capital of enterprises and the transparency of disclosure, and is not conducive to the formation of a proper investment climate. It is established that the information provision of inventory control must meet the requirements of the postindustrial economy through the complication and deepening of accounting, the introduction of new forms and their synthesis with the current ones, and a gradual reorganization to meet the needs of consumers and enterprise evaluation. The results of the study substantiate the fundamentals of accounting concepts in the postindustrial economy with respect to the circulating capital that forms inventories. The information support of inventory management should be implemented hierarchically: first the working capital is analysed, and then inventories and stocks are treated as its subordinate components. The author considers material goods to be a broader concept than reserves, because they have a dual nature, estimated both as a share of negotiable assets and as the physical component of material costs. The paper defines this category on the basis of P(C)BU 9. The general structure of current inventories, which differs across industries (the dominant ones being agriculture, industry, construction, trade, and material production), is of significant importance. The postindustrial economy has raised the question of differentiating the concepts of "production" and "material…

  14. Whole of Government Accounts

    DEFF Research Database (Denmark)

    Pontoppidan, Caroline Aggestam; Chow, Danny; Day, Ronald

In our comparative study, we surveyed an emerging literature on the use of consolidation in government accounting and develop a research agenda. We find heterogeneous approaches to the development of consolidation models across the five countries (Australia, New Zealand, UK, Canada and Sweden… of financial reporting (GAAP)-based reforms when compared with budget-centric systems of accounting, which dominate government decision-making. At a trans-national level, there is a need to examine the embedded or implicit contests or ‘trials of strength’ between nations and/or institutions jockeying for influence. We highlight three arenas where such contests are being played out: 1. Statistical versus GAAP notions of accounting value, which features in all accounting debates over the merits and costs of ex-ante versus ex-post notions of value (i.e., the relevance versus reliability debate); 2. Private…

  15. Vertical velocities from proper motions of red clump giants

    Science.gov (United States)

    López-Corredoira, M.; Abedi, H.; Garzón, F.; Figueras, F.

    2014-12-01

Aims: We derive the vertical velocities of disk stars in the range of Galactocentric radii of R = 5 - 16 kpc within 2 kpc in height from the Galactic plane. This kinematic information is connected to dynamical aspects in the formation and evolution of the Milky Way, such as the passage of satellites and vertical resonance, and determines whether the warp is a long-lived or a transient feature. Methods: We used the PPMXL survey, which contains the USNO-B1 proper motions catalog cross-correlated with the astrometry and near-infrared photometry of the 2MASS point source catalog. To improve the accuracy of the proper motions, the systematic shifts from zero were calculated by using the average proper motions of quasars in this PPMXL survey, and we applied the corresponding correction to the proper motions of the whole survey, which reduces the systematic error. From the color-magnitude diagram K versus (J - K) we selected the standard candles corresponding to red clump giants and used the information of their proper motions to build a map of the vertical motions of our Galaxy. We derived the kinematics of the warp both analytically and through a particle simulation to fit these data. Complementarily, we also carried out the same analysis with red clump giants spectroscopically selected with APOGEE data, and we predict the improvements in accuracy that will be reached with future Gaia data. Results: A simple model of warp with the height of the disk z_w(R, φ) = γ(R − R⊙) sin(φ − φ_w) fits the vertical motions if γ̇/γ = −34 ± 17 Gyr⁻¹; the contribution to γ̇ comes from the southern warp and is negligible in the north. If we assume this 2σ detection to be real, the period of this oscillation is shorter than 0.43 Gyr at 68.3% C.L. and shorter than 4.64 Gyr at 95.4% C.L., which excludes with high confidence the slow variations (periods longer than 5 Gyr) that correspond to long-lived features. Our particle simulation also indicates a probable abrupt decrease
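The warp model quoted in the abstract, and the vertical velocity it implies, can be written out directly. A sketch under the paper's functional form, with an assumed solar radius and treating a star as moving at a fixed angular speed ω; the function names and numeric values are illustrative:

```python
import math

R_SUN = 8.0  # assumed Galactocentric radius of the Sun, kpc

def warp_height(R, phi, gamma, phi_w=0.0):
    # z_w(R, phi) = gamma * (R - R_sun) * sin(phi - phi_w), heights in kpc
    return gamma * (R - R_SUN) * math.sin(phi - phi_w)

def warp_vertical_velocity(R, phi, gamma, gamma_dot, omega, phi_w=0.0):
    # Total time derivative of z_w following a star on a circular orbit
    # (dphi/dt = omega):
    #   dz/dt = gamma_dot*(R - R_sun)*sin(phi - phi_w)
    #         + gamma*(R - R_sun)*omega*cos(phi - phi_w)
    dR = R - R_SUN
    return (gamma_dot * dR * math.sin(phi - phi_w)
            + gamma * dR * omega * math.cos(phi - phi_w))
```

A nonzero γ̇, as the 2σ detection suggests, shows up as a systematic vertical motion growing with |R − R⊙|, which is what the red clump proper-motion map constrains.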

  16. The Lorentzian proper vertex amplitude: Classical analysis and quantum derivation

    CERN Document Server

    Engle, Jonathan

    2015-01-01

Spin foam models, an approach to defining the dynamics of loop quantum gravity, make use of the Plebanski formulation of gravity, in which gravity is recovered from a topological field theory via certain constraints called simplicity constraints. However, the simplicity constraints in their usual form select more than just one gravitational sector as well as a degenerate sector. This was shown, in previous work, to be the reason for the "extra" terms appearing in the semiclassical limit of the Euclidean EPRL amplitude. In this previous work, a way to eliminate the extra sectors, and hence terms, was developed, leading to what was called the Euclidean proper vertex amplitude. In the present work, these results are extended to the Lorentzian signature, establishing what is called the Lorentzian proper vertex amplitude. This extension is non-trivial and involves a number of new elements since, for Lorentzian bivectors, the split into self-dual and anti-self-dual parts, on which the Euclidean derivation was b...

  17. Proper motions of the optically visible open clusters based on the UCAC4 catalog

    Science.gov (United States)

    Dias, W. S.; Monteiro, H.; Caetano, T. C.; Lépine, J. R. D.; Assafin, M.; Oliveira, A. F.

    2014-04-01

    We present a catalog of mean proper motions and membership probabilities of individual stars for optically visible open clusters, which have been determined using data from the UCAC4 catalog in a homogeneous way. The mean proper motion of the cluster and the membership probabilities of the stars in the region of each cluster were determined by applying the statistical method in a modified fashion. In this study, we applied a global optimization procedure to fit the observed distribution of proper motions with two overlapping normal bivariate frequency functions, which also take the individual proper motion errors into account. For 724 clusters, this is the first determination of proper motion, and for the whole sample, we present results with a much larger number of identified astrometric member stars. Furthermore, it was possible to estimate the mean radial velocity of 364 clusters (102 unpublished so far) with the stellar membership using published radial velocity catalogs. These results provide an increase of 30% and 19% in the sample of open clusters with a determined mean absolute proper motion and mean radial velocity, respectively. Tables 2 to 1809 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A79

  18. VizieR Online Data Catalog: Proper motions of open clusters from UCAC4 (Dias+, 2014)

    Science.gov (United States)

    Dias, W. S.; Monteiro, H.; Caetano, T. C.; Lepine, J. R. D.; Assafin, M.; Oliveira, A. F.

    2014-04-01

    We present a catalog of mean proper motions and membership probabilities of individual stars for optically visible open clusters, which have been determined using data from the UCAC4 catalog in a homogeneous way. The mean proper motion of the cluster and the membership probabilities of the stars in the region of each cluster were determined by applying the statistical method in a modified fashion. In this study, we applied a global optimization procedure to fit the observed distribution of proper motions with two overlapping normal bivariate frequency functions, which also take the individual proper motion errors into account. For 724 clusters, this is the first determination of proper motion, and for the whole sample, we present results with a much larger number of identified astrometric member stars. Furthermore, it was possible to estimate the mean radial velocity of 364 clusters (102 unpublished so far) with the stellar membership using published radial velocity catalogs. These results provide an increase of 30% and 19% in the sample of open clusters with a determined mean absolute proper motion and mean radial velocity, respectively. (5 data files).

  19. Characterizations of Graphs Having Large Proper Connection Numbers

    Directory of Open Access Journals (Sweden)

    Lumduanhom Chira

    2016-05-01

Full Text Available Let G be an edge-colored connected graph. A path P is a proper path in G if no two adjacent edges of P are colored the same. If P is a proper u − v path of length d(u, v), then P is a proper u − v geodesic. An edge coloring c is a proper-path coloring of a connected graph G if every pair u, v of distinct vertices of G is connected by a proper u − v path in G, and c is a strong proper-path coloring if every two vertices u and v are connected by a proper u − v geodesic in G. The minimum number of colors required for a proper-path coloring or strong proper-path coloring of G is called the proper connection number pc(G) or the strong proper connection number spc(G) of G, respectively. If G is a nontrivial connected graph of size m, then pc(G) ≤ spc(G) ≤ m, and pc(G) = m or spc(G) = m if and only if G is the star of size m. In this paper, we determine all connected graphs G of size m for which pc(G) or spc(G) is m − 1, m − 2, or m − 3.
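Checking whether a given edge coloring is a proper-path coloring can be done with a breadth-first search over (vertex, last-edge-color) states. A sketch for small graphs; the adjacency format and function names are illustrative:

```python
from collections import deque

def has_proper_path(adj, u, v):
    """adj maps vertex -> list of (neighbor, color) pairs. Returns True if
    there is a u-v path in which no two consecutive edges share a color
    (i.e. a proper u-v path)."""
    seen = set()
    queue = deque((w, c) for w, c in adj[u])  # states reachable by one edge
    seen.update(queue)
    while queue:
        w, c = queue.popleft()
        if w == v:
            return True
        for x, c2 in adj[w]:
            if c2 != c and (x, c2) not in seen:  # extend only with a new color
                seen.add((x, c2))
                queue.append((x, c2))
    return u == v

def is_proper_path_coloring(adj):
    # c is a proper-path coloring iff every pair of distinct vertices is
    # joined by some proper path.
    verts = list(adj)
    return all(has_proper_path(adj, a, b) for a in verts for b in verts if a != b)
```

On a star, any two leaves must be joined through the center by two edges, so no color may repeat there; this matches the abstract's fact that pc(G) = m exactly for the star of size m.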

  20. Proper estimation of hydrological parameters from flood forecasting aspects

    Science.gov (United States)

    Miyamoto, Mamoru; Matsumoto, Kazuhiro; Tsuda, Morimasa; Yamakage, Yuzuru; Iwami, Yoichi; Yanami, Hitoshi; Anai, Hirokazu

    2016-04-01

The hydrological parameters of a flood forecasting model are normally calibrated against the entire hydrograph of past flood events by means of an error assessment function such as mean square error or relative error. However, specific parts of a hydrograph, i.e., the maximum discharge and the rising limb, are particularly important for practical flood forecasting, in the sense that underestimation may lead to a more dangerous situation due to delays in flood prevention and evacuation activities. We conducted numerical experiments to find the most proper parameter set for practical flood forecasting without underestimation, in order to develop an error assessment method for calibration appropriate for flood forecasting. A distributed hydrological model developed at the Public Works Research Institute (PWRI) in Japan was applied to fifteen past floods in the Gokase River basin (1,820 km²) in Japan. The model, with gridded two-layer tanks for the entire target river basin, included hydrological parameters such as hydraulic conductivity, surface roughness, and runoff coefficient, which were set according to land-use and soil-type distributions. Global data sets, e.g., Global Map and the Digital Soil Map of the World (DSMW), were employed as input data for elevation, land use, and soil type. The values of fourteen parameters were evenly sampled into 10,001 parameter sets determined by Latin Hypercube Sampling within the search range of each parameter. Although the best reproduced case showed a high Nash-Sutcliffe Efficiency of 0.9 for all flood events, the maximum discharge was underestimated in many flood cases. Therefore, two conditions, non-underestimation of the maximum discharge and of the rising limb of the hydrograph, were added in calibration as flood forecasting aptitudes. The cases satisfying these conditions also showed a high Nash-Sutcliffe Efficiency of 0.9 except in two flood cases.
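The calibration screening described above combines a goodness-of-fit measure with a non-underestimation constraint. A sketch of the Nash-Sutcliffe Efficiency and a peak-discharge filter, assuming equal-length observed and simulated series; the 0.9 threshold follows the abstract, and the rising-limb condition is omitted for brevity:

```python
def nash_sutcliffe(obs, sim):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit.
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def acceptable_for_forecasting(obs, sim, nse_min=0.9):
    """Keep a sampled parameter set only if the overall fit is good AND the
    simulated peak does not underestimate the observed peak discharge
    (the 'flood forecasting aptitude' of the abstract, peak condition only)."""
    return nash_sutcliffe(obs, sim) >= nse_min and max(sim) >= max(obs)
```

Applied to each of the 10,001 Latin Hypercube samples, this filter discards hydrographs that fit well on average but would delay a flood warning by underpredicting the peak.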