WorldWideScience

Sample records for model properly accounts

  1. Ocean Models and Proper Orthogonal Decomposition

    Science.gov (United States)

    Salas-de-Leon, D. A.

    2007-05-01

    The increase in computational power and the better understanding of mathematical and physical systems have resulted in a growing number of ocean models. Not long ago, modelers were like a secret organization, recognizing each other by codes and languages that only a select group of people could understand. Access to computational systems was limited: on one hand, equipment and computing time were expensive and restricted, and on the other hand, they required advanced programming languages that not everybody wanted to learn. Nowadays most college freshmen own a personal computer (PC or laptop) and/or have access to more sophisticated computational systems than those available for research in the early 80's. This availability of resources has resulted in much wider access to all kinds of models. Today computer speed, computing time, and the algorithms do not seem to be a problem, even though some models take days to run on small computational systems. Almost every oceanographic institution has its own model; what is more, within the same institution, from one office to the next, there are different models for the same phenomena, developed by different researchers. The results do not differ substantially, since the equations are the same and the solution algorithms are similar. The algorithms, and the grids constructed with them, can be found in textbooks and/or on the internet. Every year more sophisticated models are constructed. Proper Orthogonal Decomposition is a technique that reduces the number of variables to be solved while keeping the model's essential properties, so it can be a very useful tool for diminishing the computations that have to be performed on "small" computational systems, making sophisticated models available to a greater community.
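
    As a concrete illustration of the technique named in this record, the core of snapshot POD is a singular value decomposition of a mean-subtracted snapshot matrix. The sketch below uses random placeholder data in place of saved ocean-model states; the array sizes and the 99% energy threshold are assumptions for the example, not values from the record.

```python
# Minimal snapshot-POD sketch (illustrative, not from the record): collect model states
# as columns of a snapshot matrix, subtract the mean, and take the leading left
# singular vectors as the POD modes.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 60           # grid size and number of saved states (assumed)
snapshots = rng.standard_normal((n_points, n_snapshots))  # stand-in for saved ocean-model states

mean_state = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_state, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)    # fraction of snapshot variance captured
r = int(np.searchsorted(energy, 0.99)) + 1 # modes needed for 99% of the energy
modes = U[:, :r]                           # reduced basis: state ≈ mean + modes @ coefficients
print(f"{r} POD modes capture 99% of the snapshot energy")
```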

  2. First-principles supercell calculations of small polarons with proper account for long-range polarization effects

    Science.gov (United States)

    Kokott, Sebastian; Levchenko, Sergey V.; Rinke, Patrick; Scheffler, Matthias

    2018-03-01

    We present a density functional theory (DFT) based supercell approach for modeling small polarons with proper account for the long-range elastic response of the material. Our analysis of the supercell dependence of the polaron properties (e.g., atomic structure, binding energy, and the polaron level) reveals long-range electrostatic effects and the electron–phonon (el–ph) interaction as the two main contributors. We develop a correction scheme for DFT polaron calculations that significantly reduces the dependence of polaron properties on the DFT exchange-correlation functional and the size of the supercell in the limit of strong el–ph coupling. Using our correction approach, we present accurate all-electron full-potential DFT results for small polarons in rocksalt MgO and rutile TiO2.
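
    The record's correction scheme is analytic and specific to the electrostatic and el-ph contributions; as a generic, hedged illustration of the finite-size issue it addresses, one common cross-check is to extrapolate a supercell-size-dependent quantity against inverse cell size. All numbers below are invented.

```python
# Illustrative only: a generic 1/L extrapolation of a supercell-size-dependent polaron
# binding energy. This is NOT the correction scheme of the record, which is derived
# analytically from long-range electrostatics and el-ph coupling.
import numpy as np

L = np.array([6.0, 8.0, 10.0, 12.0])             # effective supercell sizes in Angstrom (made up)
E_bind = np.array([-0.42, -0.47, -0.50, -0.52])  # binding energies in eV (made up)

# Fit E(L) = E_inf + a / L and read off the infinite-size limit.
a, E_inf = np.polyfit(1.0 / L, E_bind, 1)
print(f"extrapolated dilute-limit binding energy: {E_inf:.3f} eV (slope {a:.2f} eV*Angstrom)")
```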

  3. THE COMPANY ACCOUNTING EVALUATION – PRELIMINARY PHASE OF THE PROPER ANALYSIS OF FINANCIAL STATEMENTS

    Directory of Open Access Journals (Sweden)

    Doina Pacurari

    2008-12-01

    The problem that accounting information does not always reflect economic reality may affect analysis and forecasts based on financial statements. This is due both to the limitations of accrual accounting and to the fact that this type of accounting allows the management of reported results (earnings management). In spite of some disadvantages, accrual accounting is considered superior to cash accounting in measuring performance and determining financial position, as well as in predicting future cash flows. In order to limit the negative effects on the results of analysis and forecasts based on financial statements, analysts should evaluate the enterprise's accounting and, if necessary, adjust the financial statements so that they reflect economic reality.

  4. The importance of proper feedback modeling in HWR

    Energy Technology Data Exchange (ETDEWEB)

    Saphier, D; Gorelik, Z; Shapira, M [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center

    1996-12-01

    The DSNP simulation language was applied to study the effect of different modeling approximations of feedback phenomena in nuclear power plants. The different methods to model the feedback effects are presented and discussed. It is shown that HWRs are particularly sensitive to correct modeling, since they usually have at least three feedback effects acting on different time scales; to achieve correct kinetics, a one-dimensional representation is needed, with correct modeling of the in-core time delays. The simulation methodology of lumped-parameter and one-dimensional models using the DSNP simulation language is presented (authors).
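
    As a rough illustration of why feedback terms acting on different time scales matter for reactor kinetics, the sketch below integrates textbook point kinetics with one delayed-neutron group and a single lumped temperature feedback. It is not the DSNP methodology described in the record, and all coefficients are assumed.

```python
# Deliberately simple point-kinetics sketch: one delayed-neutron group plus one lumped
# fuel-temperature feedback. It only illustrates the idea of feedback acting on different
# time scales; the DSNP plant models in the record are far more detailed (1-D core,
# several feedback channels, in-core transport delays).
import numpy as np

beta, Lambda, lam = 0.0065, 1.0e-3, 0.08   # delayed fraction, generation time [s], decay const [1/s]
alpha_T = -2.0e-5                          # fuel temperature coefficient [1/K] (assumed)
H, tau_T = 50.0, 5.0                       # heating factor [K per unit power], thermal time constant [s]

def step(state, rho_ext, dt):
    n, c, dT = state                       # relative power, precursors, temperature rise above nominal
    rho = rho_ext + alpha_T * dT           # net reactivity including feedback
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    ddT = (H * (n - 1.0) - dT) / tau_T     # lumped first-order thermal response
    return np.array([n + dt * dn, c + dt * dc, dT + dt * ddT])

state = np.array([1.0, beta / (Lambda * lam), 0.0])   # steady state at nominal power
for _ in range(20000):                                # 20 s transient, explicit Euler, dt = 1 ms
    state = step(state, rho_ext=0.001, dt=1e-3)       # small external reactivity insertion
print(f"relative power after 20 s: {state[0]:.3f}")
```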

  5. The importance of proper feedback modeling in HWR

    International Nuclear Information System (INIS)

    Saphier, D.; Gorelik, Z.; Shapira, M.

    1996-01-01

    The DSNP simulation language was applied to study the effect of different modeling approximations of feedback phenomena in nuclear power plants. The different methods to model the feedback effects are presented and discussed. It is shown that HWRs are particularly sensitive to correct modeling, since they usually have at least three feedback effects acting on different time scales; to achieve correct kinetics, a one-dimensional representation is needed, with correct modeling of the in-core time delays. The simulation methodology of lumped-parameter and one-dimensional models using the DSNP simulation language is presented (authors)

  6. Kawase & McDermott revisited with a proper ocean model.

    Science.gov (United States)

    Jochum, Markus; Poulsen, Mads; Nuterman, Roman

    2017-04-01

    A suite of experiments with global ocean models is used to test the hypothesis that Southern Ocean (SO) winds can modify the strength of the Atlantic Meridional Overturning Circulation (AMOC). It is found that for 3 and 1 degree resolution models the results are consistent with Toggweiler & Samuels (1995): stronger SO winds lead to a slight increase of the AMOC. In the simulations with 1/10 degree resolution, however, stronger SO winds weaken the AMOC. We show that these different outcomes are determined by the models' representation of topographic Rossby and Kelvin waves. Consistent with previous literature based on theory and idealized models, first baroclinic waves are slower in the coarse resolution models, but still manage to establish a pattern of global response that is similar to the one in the eddy-permitting model. Because of its different stratification, however, the Atlantic signal is transmitted by higher baroclinic modes. In the coarse resolution model these higher modes are dissipated before they reach 30N, whereas in the eddy-permitting model they reach the subpolar gyre undiminished. This inability of non-eddy-permitting ocean models to represent planetary waves with higher baroclinic modes casts doubt on the ability of climate models to represent non-local effects of climate change. Ideas on how to overcome these difficulties will be discussed.

  7. Modelling in Accounting. Theoretical and Practical Dimensions

    Directory of Open Access Journals (Sweden)

    Teresa Szot-Gabryś

    2010-10-01

    Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for the measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and its ability to adapt to the information needs of its recipients. One of the main currents in the development of accounting theory and practice is to cover with economic measurement areas which have not hitherto been covered by any accounting system (this applies, for example, to small businesses, agricultural farms and human capital), which requires the development of an appropriate theoretical and practical model. The article illustrates the issue of modelling in accounting based on the example of an accounting model developed for small businesses, i.e. economic entities which are not obliged by law to keep accounting records.

  8. Implementing a trustworthy cost-accounting model.

    Science.gov (United States)

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  9. Questioning Stakeholder Legitimacy: A Philanthropic Accountability Model.

    Science.gov (United States)

    Kraeger, Patsy; Robichau, Robbie

    2017-01-01

    Philanthropic organizations contribute to important work that solves complex problems to strengthen communities. Many of these organizations are moving toward engaging in public policy work, in addition to funding programs. This paper raises questions of legitimacy for foundations, as well as issues of transparency and accountability in a pluralistic democracy. Measures of civic health also inform how philanthropic organizations can be accountable to stakeholders. We propose a holistic model for philanthropic accountability that combines elements of transparency and performance accountability, as well as practices associated with the American pluralistic model of democratic accountability. We argue that philanthropic institutions should seek stakeholder and public input when shaping any public policy agenda. This paper suggests a new paradigm, called philanthropic accountability, that can be used for the legitimacy and democratic governance of private foundations engaged in policy work. The Philanthropic Accountability Model can be empirically tested and used as a governance tool.

  10. Investigating Coherent Structures in the Standard Turbulence Models using Proper Orthogonal Decomposition

    International Nuclear Information System (INIS)

    Eliassen, Lene; Andersen, Søren

    2016-01-01

    The wind turbine design standards recommend two different methods to generate turbulent wind for design load analysis: the Kaimal spectra combined with an exponential coherence function, and the Mann turbulence model. The two turbulence models can give very different estimates of fatigue life, especially for offshore floating wind turbines. In this study the spatial distributions of the two turbulence models are investigated using Proper Orthogonal Decomposition, which is used to characterize large coherent structures. The main focus has been on the structures that contain the most energy, which are the lowest POD modes. The Mann turbulence model generates coherent structures that stretch in the horizontal direction for the longitudinal component, while the structures found in the Kaimal model are more random in shape. These differences in the coherent structures at lower frequencies for the two turbulence models can be the reason for differences in fatigue life estimates for wind turbines. (paper)

  11. Understanding financial crisis through accounting models

    NARCIS (Netherlands)

    Bezemer, D.J.

    2010-01-01

    This paper presents evidence that accounting (or flow-of-funds) macroeconomic models helped anticipate the credit crisis and economic recession, while equilibrium models ubiquitous in mainstream policy and research did not. This study traces the intellectual pedigrees of the accounting approach as an

  12. Modeling multipulsing transition in ring cavity lasers with proper orthogonal decomposition

    International Nuclear Information System (INIS)

    Ding, Edwin; Shlizerman, Eli; Kutz, J. Nathan

    2010-01-01

    A low-dimensional model is constructed via the proper orthogonal decomposition (POD) to characterize the multipulsing phenomenon in a ring cavity laser mode locked by a saturable absorber. The onset of the multipulsing transition is characterized by an oscillatory state (created by a Hopf bifurcation) that is then itself destabilized to a double-pulse configuration (by a fold bifurcation). A four-mode POD analysis, which uses the principal components, or singular value decomposition modes, of the mode-locked laser, provides a simple analytic framework for a complete characterization of the entire transition process and its associated bifurcations. These findings are in good agreement with the full governing equation.

  13. Model Reduction Using Proper Orthogonal Decomposition and Predictive Control of Distributed Reactor System

    Directory of Open Access Journals (Sweden)

    Alejandro Marquez

    2013-01-01

    This paper studies the application of proper orthogonal decomposition (POD) to reduce the order of distributed reactor models with axial and radial diffusion, and the implementation of model predictive control (MPC) based on discrete-time linear time-invariant (LTI) reduced-order models. In this paper, the control objective is to keep the operation of the reactor at a desired operating condition in spite of disturbances in the feed flow. This operating condition is determined by means of an optimization algorithm that provides the optimal temperature and concentration profiles for the system. Around these optimal profiles, the nonlinear partial differential equations (PDEs) that model the reactor are linearized, and afterwards the linear PDEs are discretized in space, giving as a result a high-order linear model. POD and Galerkin projection are used to derive the low-order linear model that captures the dominant dynamics of the PDEs, which is subsequently used for controller design. An MPC formulation is constructed on the basis of the low-order linear model. The proposed approach is tested through simulation, and it is shown that the results are good with regard to keeping the reactor at the desired operating condition.
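
    The step from the high-order discretized model to the low-order LTI model used for MPC design is a Galerkin projection onto the POD basis. The sketch below shows that projection with random placeholder matrices standing in for the discretized reactor model.

```python
# Sketch of the projection step described above: a high-order linear model
# x[k+1] = A x[k] + B u[k], obtained from discretizing the PDEs, is projected onto a POD
# basis Phi to give a low-order LTI model suitable for MPC design. A, B and the
# snapshots are random placeholders, not the reactor model of the record.
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 400, 2, 10                      # full order, inputs, reduced order (assumed)
A = np.eye(n) + 1e-3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

snapshots = rng.standard_normal((n, 80))  # states saved from full-model simulations (placeholder)
Phi = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]   # POD basis (n x r)

# Galerkin projection: x ≈ Phi z  ->  z[k+1] = (Phi^T A Phi) z[k] + (Phi^T B) u[k]
A_r = Phi.T @ A @ Phi
B_r = Phi.T @ B
print(A_r.shape, B_r.shape)               # (10, 10) (10, 2): the model handed to the MPC
```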

  14. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    Science.gov (United States)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a wide-spread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the
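
    The record breaks off while introducing POD-DEIM; for context, the DEIM idea is to pick a few spatial interpolation points at which the nonlinear term is evaluated. A compact, standard greedy point-selection sketch (with placeholder snapshot data) might look like this:

```python
# Compact sketch of the DEIM point-selection step used by POD-DEIM: given a basis Um for
# snapshots of the nonlinear term, pick interpolation indices greedily so the nonlinearity
# only has to be evaluated at those few points. Illustrative data only.
import numpy as np

def deim_indices(Um):
    """Greedy DEIM selection of interpolation indices for a basis Um (n x m)."""
    n, m = Um.shape
    idx = [int(np.argmax(np.abs(Um[:, 0])))]
    for l in range(1, m):
        # coefficients that reproduce the new basis vector at the already-chosen points
        c = np.linalg.solve(Um[idx, :l], Um[idx, l])
        residual = Um[:, l] - Um[:, :l] @ c
        idx.append(int(np.argmax(np.abs(residual))))
    return np.array(idx)

rng = np.random.default_rng(2)
F_snap = rng.standard_normal((1000, 200))                 # snapshots of a nonlinear term (placeholder)
Um = np.linalg.svd(F_snap, full_matrices=False)[0][:, :15]
print(deim_indices(Um))                                   # 15 spatial points at which to evaluate the nonlinearity
```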

  15. Development of Boundary Condition Independent Reduced Order Thermal Models using Proper Orthogonal Decomposition

    Science.gov (United States)

    Raghupathy, Arun; Ghia, Karman; Ghia, Urmila

    2008-11-01

    Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be effectively used in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.

  16. Hemodynamics of a Patient-Specific Aneurysm Model with Proper Orthogonal Decomposition

    Science.gov (United States)

    Han, Suyue; Chang, Gary Han; Modarres-Sadeghi, Yahya

    2017-11-01

    Wall shear stress (WSS) and oscillatory shear index (OSI) are two of the most widely studied hemodynamic quantities in cardiovascular systems; they have been shown to elicit biological responses of the arterial wall and could be used to predict aneurysm development and rupture. In this study, a reduced-order model (ROM) of the hemodynamics of a patient-specific cerebral aneurysm is studied. The snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases of the flow using a CFD training set with known inflow parameters. It was shown that the area of low WSS and high OSI is correlated with the higher POD modes. The resulting ROM can reproduce both WSS and OSI computationally for future parametric studies at significantly lower computational cost. Agreement was observed between the WSS and OSI values obtained from direct CFD results and from the ROM.
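
    For reference, WSS-derived indicators such as TAWSS and OSI are typically computed from the wall-shear-stress vector over one cardiac cycle as in the sketch below; the synthetic signal stands in for CFD or ROM output.

```python
# How the two hemodynamic indicators named above are usually computed from a wall-shear-
# stress (WSS) vector time series at one wall point; synthetic data here, whereas in a
# CFD/ROM workflow tau would come from the flow solution over one cardiac cycle.
import numpy as np

t = np.linspace(0.0, 1.0, 200)                      # one cardiac cycle [s] (assumed period)
tau = np.stack([np.sin(2 * np.pi * t),
                0.3 * np.cos(4 * np.pi * t),
                np.zeros_like(t)], axis=1)          # WSS vector [Pa], synthetic

mean_vec = np.trapz(tau, t, axis=0)                 # time integral of the WSS vector
mag_int = np.trapz(np.linalg.norm(tau, axis=1), t)  # time integral of its magnitude

tawss = mag_int / (t[-1] - t[0])                    # time-averaged WSS
osi = 0.5 * (1.0 - np.linalg.norm(mean_vec) / mag_int)   # oscillatory shear index, 0 (steady) to 0.5
print(f"TAWSS = {tawss:.3f} Pa, OSI = {osi:.3f}")
```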

  17. Evaluation of the Township Proper Carrying Capacity over Qinghai-Tibet plateau by CASA model

    Science.gov (United States)

    Wu, Chengyong; Cao, Guangchao; Xue, Huaju; Jiang, Gang; Wang, Qi; Yuan, Jie; Chen, Kelong

    2018-01-01

    Existing studies of proper carrying capacity (PCC) have mostly focused on province- or county-level administrative units, which can only capture the quantitative characteristics of PCC at a macroscopic level and cannot support animal husbandry management measures that are pertinent and operational. At the town scale, this paper used the CASA model to estimate the PCC in the Mongolian Autonomous County of Henan, Qinghai Province, China, where serious grassland degradation is mainly caused by overgrazing. The results showed that the PCC of the whole county was 950,417 sheep units. Among the townships, the PCC of Saierlong was the largest (247,100 sheep units) and that of Duosong the smallest (82,016 sheep units). This study provides reference data for developing sustainable town-scale pasture policies and will also help to evaluate the health status of the alpine grassland ecosystem on the Qinghai-Tibet Plateau.
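
    A hedged sketch of the usual NPP-to-carrying-capacity conversion behind such estimates is shown below. Every coefficient (area, NPP, utilisable fraction, intake per sheep unit) is an illustrative assumption, not a value from the study.

```python
# Hedged sketch of how an NPP estimate (e.g. from the CASA model) is commonly turned into
# a proper carrying capacity: available forage divided by the annual intake of one sheep
# unit. All coefficients below are illustrative assumptions, not the paper's values.
grassland_area_ha = 600_000               # township grassland area [ha] (assumed)
npp_gC_per_m2 = 180.0                     # CASA-type NPP estimate [g C / m^2 / yr] (assumed)
carbon_to_dry_matter = 1.0 / 0.45         # convert g C to g dry matter (typical factor)
utilisable_fraction = 0.5                 # fraction of forage that can be grazed sustainably (assumed)
intake_kg_dm_per_sheep_unit = 1.8 * 365   # dry-matter intake of one sheep unit [kg/yr] (assumed)

forage_kg = (npp_gC_per_m2 * carbon_to_dry_matter * utilisable_fraction
             * grassland_area_ha * 10_000 / 1000.0)       # g/m^2 -> kg over the whole area
pcc_sheep_units = forage_kg / intake_kg_dm_per_sheep_unit
print(f"proper carrying capacity ≈ {pcc_sheep_units:,.0f} sheep units")
```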

  18. Low-order modelling of shallow water equations for sensitivity analysis using proper orthogonal decomposition

    Science.gov (United States)

    Zokagoa, Jean-Marie; Soulaïmani, Azzeddine

    2012-06-01

    This article presents a reduced-order model (ROM) of the shallow water equations (SWEs) for use in sensitivity analyses and Monte-Carlo type applications. Since, in the real world, some of the physical parameters and initial conditions embedded in free-surface flow problems are difficult to calibrate accurately in practice, the results from numerical hydraulic models are almost always corrupted with uncertainties. The main objective of this work is to derive a ROM that ensures appreciable accuracy and a considerable acceleration in the calculations so that it can be used as a surrogate model for stochastic and sensitivity analyses in real free-surface flow problems. The ROM is derived using the proper orthogonal decomposition (POD) method coupled with Galerkin projections of the SWEs, which are discretised through a finite-volume method. The main difficulty of deriving an efficient ROM is the treatment of the nonlinearities involved in SWEs. Suitable approximations that provide rapid online computations of the nonlinear terms are proposed. The proposed ROM is applied to the simulation of hypothetical flood flows in the Bordeaux breakwater, a portion of the 'Rivière des Prairies' located near Laval (a suburb of Montreal, Quebec). A series of sensitivity analyses are performed by varying the Manning roughness coefficient and the inflow discharge. The results are satisfactorily compared to those obtained by the full-order finite volume model.

  19. Proper Acknowledgment?

    Science.gov (United States)

    East, Julianne

    2005-01-01

    The concern in Australian universities about the prevalence of plagiarism has led to the development of policies about academic integrity and in turn focused attention on the need to inform students about how to avoid plagiarism and how to properly acknowledge. Teaching students how to avoid plagiarism can appear to be straightforward if based on…

  20. Modelling of functional systems of managerial accounting

    Directory of Open Access Journals (Sweden)

    O.V. Fomina

    2017-12-01

    The modern stage of managerial accounting development takes place under the powerful influence of managerial innovations. The article is aimed at developing an integration model of budgeting and the system of balanced indicators (the balanced scorecard) within managerial accounting that will increase the relevance of information for managerial decision making at different levels of management. As a result of the study, the author proposes a highly pragmatic integration model of budgeting and the balanced scorecard in the managerial accounting system. It is realized through the development of a system for gathering, consolidating, analysing and interpreting financial and non-financial information, and it increases the relevance of managerial decisions through the coordination and the effective, purpose-oriented use of both the strategic and the operational resources of an enterprise. The effective integration of the system components makes it possible to distribute limited resources rationally, taking into account prospective purposes and strategic initiatives, to carry

  1. Dimension invariants for groups admitting a cocompact model for proper actions

    DEFF Research Database (Denmark)

    Degrijse, Dieter Dries; Martínez-Pérez, Conchita

    2016-01-01

    Let G be a group that admits a cocompact classifying space for proper actions X. We derive a formula for the Bredon cohomological dimension for proper actions of G in terms of the relative cohomology with compact support of certain pairs of subcomplexes of X. We use this formula to compute the Br...

  2. Implementation of different turbulence model to find proper model to estimate aerodynamic properties of airfoils

    Science.gov (United States)

    Sogukpinar, Haci; Bozkurt, Ismail

    2018-02-01

    In this paper, aerodynamic calculations of the NACA 4-digit series airfoil 0012 are performed using the finite-volume method, and the obtained results are compared with experimental data to assess the numerical accuracy of the CFD approximation. Other airfoils are then simulated with the k-ε, k-ω, Spalart-Allmaras and SST turbulence models. The governing equations are the Reynolds-Averaged Navier-Stokes (RANS) equations. The performance of different airfoils (NACA 0008, 0009, 0010, 0012, 0015, 0018, 0021, 0024) at different angles of attack is investigated and compared using the turbulence models most widely used in industrial applications. According to the comparison of numerical calculations and experimental data, the k-ω and SST models are considered to be closest to the experimental results for the calculation of the lift coefficient.

  3. Media Accountability Systems: Models, proposals and outlooks

    Directory of Open Access Journals (Sweden)

    Luiz Martins da Silva

    2007-06-01

    This paper analyzes one of the basic actions of SOS-Imprensa, the mechanism to assure media accountability, with the goal of proposing a synthesis of models for the Brazilian reality. The article aims to address the possibilities of creating and improving mechanisms to stimulate the democratic press process and to delimit and ensure freedom of speech and personal rights with respect to the media. Based on the Press Social Responsibility Theory, the hypothesis is that the experiences analyzed (Communication Council, Press Council, Ombudsman and Readers Council) are alternatives for accountability, mediation and arbitration, seeking visibility, trust and public support in favor of fairer media.

  4. Fusion of expertise among accounting faculty. Towards an expertise model for academia in accounting.

    NARCIS (Netherlands)

    Njoku, Jonathan C.; van der Heijden, Beatrice; Inanga, Eno L.

    2010-01-01

    This paper aims to portray an accounting faculty expert. It is argued that neither the academic nor the professional orientation alone appears adequate in developing accounting faculty expertise. The accounting faculty expert is supposed to develop into a so-called ‘flexpert’ (Van der Heijden, 2003)

  5. A simulation model for material accounting systems

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.

    1987-01-01

    A general-purpose model that was developed to simulate the operation of a chemical processing facility for nuclear materials has been extended to describe material measurement and accounting procedures as well. The model now provides descriptors for material balance areas, a large class of measurement instrument types and their associated measurement errors for various classes of materials, the measurement instruments themselves with their individual calibration schedules, and material balance closures. Delayed receipt of measurement results (as for off-line analytical chemistry assay), with interim use of a provisional measurement value, can be accurately represented. The simulation model can be used to estimate inventory difference variances for processing areas that do not operate at steady state, to evaluate the timeliness of measurement information, to determine process impacts of measurement requirements, and to evaluate the effectiveness of diversion-detection algorithms. Such information is usually difficult to obtain by other means. Use of the measurement simulation model is illustrated by applying it to estimate inventory difference variances for two material balance area structures of a fictitious nuclear material processing line
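
    As a minimal illustration of one use named in the record, estimating an inventory-difference variance by Monte Carlo from assumed measurement errors can be sketched as follows; all magnitudes and error levels are invented.

```python
# Illustrative Monte Carlo estimate of an inventory-difference (ID) variance of the kind
# the simulation model is used for: ID = beginning inventory + receipts - shipments -
# ending inventory, with every term carrying its own measurement error.
import numpy as np

rng = np.random.default_rng(3)
true_values = {"begin": 120.0, "receipts": 300.0, "shipments": 295.0, "end": 125.0}  # kg (assumed)
rel_sigma = {"begin": 0.002, "receipts": 0.005, "shipments": 0.005, "end": 0.002}    # relative 1-sigma errors (assumed)

n_trials = 100_000
measured = {k: v * (1.0 + rel_sigma[k] * rng.standard_normal(n_trials)) for k, v in true_values.items()}
ID = measured["begin"] + measured["receipts"] - measured["shipments"] - measured["end"]

print(f"mean ID = {ID.mean():+.3f} kg, sigma_ID = {ID.std(ddof=1):.3f} kg")  # alarm thresholds follow from sigma_ID
```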

  6. Model Reduction Based on Proper Generalized Decomposition for the Stochastic Steady Incompressible Navier--Stokes Equations

    KAUST Repository

    Tamellini, L.; Le Maî tre, O.; Nouy, A.

    2014-01-01

    In this paper we consider a proper generalized decomposition method to solve the steady incompressible Navier-Stokes equations with random Reynolds number and forcing term. The aim of such a technique is to compute a low-cost reduced basis approximation of the full stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of preexisting deterministic Navier-Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis rather than M coupled problems of the full stochastic Galerkin approximation space, with m ≪ M (up to one order of magnitude for the problem at hand in this work). © 2014 Society for Industrial and Applied Mathematics.

  7. Modelling in Accounting. Theoretical and Practical Dimensions

    OpenAIRE

    Teresa Szot-Gabryś

    2010-01-01

    Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and ability of adaptation to information needs of information recipients. One of the main currents in the development of accounting theory and practice is to cover by economic...

  8. An adaptive proper orthogonal decomposition method for model order reduction of multi-disc rotor system

    Science.gov (United States)

    Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu

    2017-12-01

    The proper orthogonal decomposition (POD) method is a major and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although several modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local character of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated here on a nonlinear rotor system with 33 degrees of freedom (DOFs), a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to show the advantage of this method and the disadvantage of the ITGM method. The comparisons of the responses are used to verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method has strong robustness and high computational efficiency and accuracy over a wide parameter range.

  9. Model of accounting regulation in Lithuania

    OpenAIRE

    Rudžionienė, Kristina; Gipienė, Gailutė

    2008-01-01

    This paper analyses the regulation of accounting system in Lithuania. There are different approaches to accounting regulation. For example, the "free market" approach is against any regulation; it says that each process in the market will be in equilibrium itself. Mostly it is clear that regulation is important and useful, especially in financial accounting. It makes information in financial reports understandable and comparable in one country or different countries. There are three theories ...

  10. An object-oriented model for ex ante accounting information

    NARCIS (Netherlands)

    Verdaasdonk, P.J.A.

    2003-01-01

    Present accounting data models, such as the Resource-Event-Agent (REA) model, merely focus on the modeling of static accounting phenomena. In this paper, it is argued that these models are not able to provide relevant ex ante accounting data for operations management decisions. These decisions require
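
    For readers unfamiliar with REA, the pattern links economic resources, events, and the agents involved. The sketch below is an illustrative object model in that spirit; the class and field names are hypothetical and not taken from the paper's ex ante extension.

```python
# Minimal object sketch of the Resource-Event-Agent (REA) pattern mentioned above:
# economic events connect resources to the agents that give and receive them.
# Class and field names are illustrative only.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Resource:
    name: str
    quantity: float
    unit: str

@dataclass
class Agent:
    name: str
    role: str                      # e.g. "customer", "warehouse clerk"

@dataclass
class EconomicEvent:
    kind: str                      # e.g. "sale", "shipment"
    when: date
    resource: Resource
    provider: Agent
    receiver: Agent

@dataclass
class Ledger:
    events: List[EconomicEvent] = field(default_factory=list)

    def record(self, event: EconomicEvent) -> None:
        self.events.append(event)

ledger = Ledger()
ledger.record(EconomicEvent("shipment", date(2024, 1, 15),
                            Resource("finished goods", 40, "units"),
                            provider=Agent("plant", "warehouse clerk"),
                            receiver=Agent("ACME Ltd", "customer")))
print(len(ledger.events), "event(s) recorded")
```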

  11. Linear indices in nonlinear structural equation models : best fitting proper indices and other composites

    NARCIS (Netherlands)

    Dijkstra, T.K.; Henseler, J.

    2011-01-01

    The recent advent of nonlinear structural equation models with indices poses a new challenge to the measurement of scientific constructs. We discuss, exemplify and add to a family of statistical methods aimed at creating linear indices, and compare their suitability in a complex path model with

  12. What is a Proper Resolution of Weather Radar Precipitation Estimates for Urban Drainage Modelling?

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Rasmussen, Michael R.; Thorndahl, Søren Liedtke

    2012-01-01

    The resolution of distributed rainfall input for drainage models is the topic of this paper. The study is based on data from high resolution X-band weather radar used together with an urban drainage model of a medium size Danish village. The flow, total run-off volume and CSO volume are evaluated...

  13. THE ROLE AND THE IMPORTANCE IN CHOOSING THE PROPER MANAGERIAL ACCOUNTING CONCEPTS REGARDING THE NEED FOR INFORMATION ON THE DECISION MAKING FACTORS WITHIN THE COMPANIES

    Directory of Open Access Journals (Sweden)

    Delia David

    2014-09-01

    Both the theory and the modern practice of management accounting have adopted two general concepts regarding its organizational process, structuring accounting either as an integrated system or as a dualist one. We aim at emphasizing the characteristics, the role and the importance of these concepts with regard to the cost calculation process and the accounting entries of the costs generated by profit-seeking economic entities. Choosing one of the previously mentioned concepts must be done taking into consideration the specifics of the company in question as well as the information needed by the manager, who has to find the optimum solution in order to achieve the rehabilitation and efficiency of the activity to be carried out. The subject of this work is approached both theoretically and practically, relying on the following research methods: the comparison method, the observation method and the case study method.

  14. Prediction of Proper Temperatures for the Hot Stamping Process Based on the Kinetics Models

    Science.gov (United States)

    Samadian, P.; Parsa, M. H.; Mirzadeh, H.

    2015-02-01

    Nowadays, the application of kinetics models for predicting the microstructures of steels subjected to thermo-mechanical treatments has increased in order to minimize direct experimentation, which is costly and time consuming. In the current work, the final microstructures of AISI 4140 steel sheets after the hot stamping process were predicted using the Kirkaldy and Li kinetics models combined with new thermodynamically based models in order to determine the appropriate process temperatures. In this way, the effect of deformation during hot stamping on the Ae3, Acm, and Ae1 temperatures was considered, and then the equilibrium volume fractions of phases at different temperatures were calculated. Moreover, the ferrite transformation rate equations of the Kirkaldy and Li models were modified by a term proposed by Åkerström to consider the influence of plastic deformation. Results showed that the modified Kirkaldy model is satisfactory for determining appropriate austenitization temperatures for the hot stamping of AISI 4140 steel sheets, because its microstructure predictions agree well with the experimental observations.

  15. Key Aspects of the Proper Formulation of the Model in Numerical Analysis of the Influence of Mining Exploitation on Buildings

    Directory of Open Access Journals (Sweden)

    Florkowska Lucyna

    2015-02-01

    Numerical modelling is an important tool used to analyse various aspects of the impact of underground mining on existing and planned buildings. The interaction between the building and the soil is a complex matter, and in many cases a numerical simulation is the only way of making calculations that take into consideration the coexistence of a number of factors which have a significant influence on the solution. The complexity of the matter also makes it a difficult task to elaborate a proper mathematical model: it requires both a thorough knowledge of the geological conditions of the subsoil and of the structural characteristics of the building.

  16. Nonlinear Model of Vibrating Screen to Determine Permissible Spring Deterioration for Proper Separation

    Directory of Open Access Journals (Sweden)

    Cristian G. Rodriguez

    2016-01-01

    Springs of vibrating screens are prone to fatigue-induced failure because they operate in a heavy-duty environment, with abrasive dust and under heavy cyclic loads. If a spring breaks, the stiffness at the supporting positions changes, and therefore the amplitude of motion and the static and dynamic angular inclination of the deck motion also change. This change in the amplitude and inclination of motion produces a reduction in separation efficiency. Available models are useful for determining motion under nominal operating conditions, when angular displacement is not significant. In practice, however, there is significant angular motion during startup, during shutdown, or under off-design operating conditions. In this article, a two-dimensional, three-degree-of-freedom nonlinear model that considers significant angular motion and damping is developed. The proposed model allows the prediction of vibrating screen behavior when there is a reduction in spring stiffness. Making use of this model for an actual vibrating screen operating in industry has permitted determining a limit of spring deterioration before separation efficiency is affected. This information is of practical value for operation and maintenance staff, helping to determine whether or not it is necessary to change the springs, and hence optimizing stoppage time.

  17. Assessing Sexual Dicromatism: The Importance of Proper Parameterization in Tetrachromatic Visual Models.

    Directory of Open Access Journals (Sweden)

    Pierre-Paul Bitton

    Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research, as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in the light environment, photoreceptor sensitivities, photoreceptor densities, and the light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, the transmission properties of the oil droplets, the transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism, potentially leading to erroneous inferences. We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1
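
    The receptor-noise contrast at the heart of this analysis can be written down compactly for a tetrachromat. The sketch below follows the standard Vorobyev-Osorio form; the quantum catches, Weber fraction and cone-density ratios are placeholders, and the noise scaling is one common convention rather than the paper's exact parameterization.

```python
# Compact implementation of the tetrachromatic receptor-noise contrast (Vorobyev-Osorio
# style), the quantity whose sensitivity to parameter choices is analysed in this record.
# In practice the quantum catches come from integrating reflectance x illuminant x
# receptor sensitivity; here they are placeholders.
import numpy as np

def receptor_noise(weber, densities):
    """Noise per channel: Weber fraction scaled by relative photoreceptor density."""
    densities = np.asarray(densities, float)
    return weber * np.sqrt(densities.max() / densities)   # common convention; an assumption here

def delta_S_tetra(qA, qB, e):
    """Chromatic contrast (in JND) between two spectra for a tetrachromat."""
    f = np.log(np.asarray(qA, float) / np.asarray(qB, float))   # receptor signals Delta f_i
    e1, e2, e3, e4 = e
    f1, f2, f3, f4 = f
    num = ((e1*e2)**2*(f4-f3)**2 + (e1*e3)**2*(f4-f2)**2 + (e1*e4)**2*(f3-f2)**2 +
           (e2*e3)**2*(f4-f1)**2 + (e2*e4)**2*(f3-f1)**2 + (e3*e4)**2*(f2-f1)**2)
    den = (e1*e2*e3)**2 + (e1*e2*e4)**2 + (e1*e3*e4)**2 + (e2*e3*e4)**2
    return np.sqrt(num / den)

e = receptor_noise(weber=0.1, densities=[1.0, 2.0, 2.0, 4.0])   # SWS1, SWS2, MWS, LWS (assumed ratios)
print(f"contrast = {delta_S_tetra([0.8, 1.1, 1.4, 1.6], [0.7, 1.0, 1.5, 1.8], e):.2f} JND")
```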

  18. DMFCA Model as a Possible Way to Detect Creative Accounting and Accounting Fraud in an Enterprise

    Directory of Open Access Journals (Sweden)

    Jindřiška Kouřilová

    2013-05-01

    The quality of reported accounting data, as well as the quality and behaviour of their users, influences the efficiency of an enterprise's management, and its assessment may therefore change as well. Several methods and tools have been used to identify creative accounting and fraud. In this paper we present our proposal of the DMFCA (Detection Model based on Material Flow Cost Accounting) balance model, built on environmental accounting and on MFCA (Material Flow Cost Accounting) as its method. The following balance areas are included: material, financial and legislative. Using an analysis of the strengths and weaknesses of the model, its possible use within a production and business company was assessed. Its possible use for detecting some creative accounting techniques was also assessed. The model is developed in detail for practical use, and its theoretical aspects are also described.

  19. Establishing a Proper Model of Tobacco Dependence: Influence of Age and Tobacco Smoke Constituents

    OpenAIRE

    Gellner, Candice Ann

    2017-01-01

    Cigarette smoking is the leading preventable cause of death in the United States. Of those who smoke, 9 out of 10 report trying their first cigarette before the age of 18. Although most people who initiate tobacco use are teenagers, animal models for studying tobacco dependence have traditionally focused on how adult animals initiate, withdrawal from and relapse to cigarette smoking. Furthermore, cigarette smoke contains more than 7,000 constituents, including nicotine, yet pre-clinical resea...

  20. Proper interpretation of dissolved nitrous oxide isotopes, production pathways, and emissions requires a modelling approach.

    Science.gov (United States)

    Thuss, Simon J; Venkiteswaran, Jason J; Schiff, Sherry L

    2014-01-01

    Stable isotopes (δ15N and δ18O) of the greenhouse gas N2O provide information about the sources and processes leading to N2O production and emission from aquatic ecosystems to the atmosphere. In turn, this describes the fate of nitrogen in the aquatic environment since N2O is an obligate intermediate of denitrification and can be a by-product of nitrification. However, due to exchange with the atmosphere, the δ values at typical concentrations in aquatic ecosystems differ significantly from both the source of N2O and the N2O emitted to the atmosphere. A dynamic model, SIDNO, was developed to explore the relationship between the isotopic ratios of N2O, the N2O source, and the emitted N2O. If the N2O production rate or isotopic ratios vary, then the N2O concentration and isotopic ratios may vary or be constant, not necessarily concomitantly, depending on the synchronicity of production rate and source isotopic ratios. Thus prima facie interpretation of patterns in dissolved N2O concentrations and isotopic ratios is difficult. The dynamic model may be used to correctly interpret diel field data and allows for the estimation of the gas exchange coefficient, N2O production rate, and the production-weighted δ values of the N2O source in aquatic ecosystems. Combining field data with these modelling efforts allows this critical piece of nitrogen cycling and N2O flux to the atmosphere to be assessed.

  1. Display of the information model accounting system

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2011-12-01

    This paper presents the accounting information system in public companies, the business technology matrix and the data flow diagram. The paper describes the purpose and goals of the accounting process, the matrix of sub-processes and data classes. The data flow in the accounting process and the so-called general ledger module are described in detail. The activities of preparing the financial statements and determining the financial statements of the companies are mentioned as well. It is stated how the general ledger module should function and what characteristics it must have. Line graphs depict indicators of the company's business success, indebtedness and efficiency coefficients based on financial balance reports and profit and loss reports.

  2. A low-cost, goal-oriented ‘compact proper orthogonal decomposition’ basis for model reduction of static systems

    KAUST Repository

    Carlberg, Kevin; Farhat, Charbel

    2010-01-01

    A novel model reduction technique for static systems is presented. The method is developed using a goal-oriented framework, and it extends the concept of snapshots for proper orthogonal decomposition (POD) to include (sensitivity) derivatives of the state with respect to system input parameters. The resulting reduced-order model generates accurate approximations due to its goal-oriented construction and the explicit 'training' of the model for parameter changes. The model is less computationally expensive to construct than typical POD approaches, since efficient multiple right-hand side solvers can be used to compute the sensitivity derivatives. The effectiveness of the method is demonstrated on a parameterized aerospace structure problem. © 2010 John Wiley & Sons, Ltd.

  3. A low-cost, goal-oriented ‘compact proper orthogonal decomposition’ basis for model reduction of static systems

    KAUST Repository

    Carlberg, Kevin

    2010-12-10

    A novel model reduction technique for static systems is presented. The method is developed using a goal-oriented framework, and it extends the concept of snapshots for proper orthogonal decomposition (POD) to include (sensitivity) derivatives of the state with respect to system input parameters. The resulting reduced-order model generates accurate approximations due to its goal-oriented construction and the explicit 'training' of the model for parameter changes. The model is less computationally expensive to construct than typical POD approaches, since efficient multiple right-hand side solvers can be used to compute the sensitivity derivatives. The effectiveness of the method is demonstrated on a parameterized aerospace structure problem. © 2010 John Wiley & Sons, Ltd.

  4. Modelling Seasonal GWR of Daily PM2.5 with Proper Auxiliary Variables for the Yangtze River Delta

    Directory of Open Access Journals (Sweden)

    Man Jiang

    2017-04-01

    Over the past decades, regional haze episodes have frequently occurred in eastern China, especially in the Yangtze River Delta (YRD). Satellite-derived Aerosol Optical Depth (AOD) has been used to retrieve the spatial coverage of PM2.5 concentrations. To improve the retrieval accuracy of the daily AOD-PM2.5 model, various auxiliary variables such as meteorological or geographical factors have been adopted into the Geographically Weighted Regression (GWR) model. However, these variables are often selected arbitrarily, without deep consideration of their potentially varying temporal or spatial contributions to model performance. In this manuscript, we put forward an automatic procedure to select proper auxiliary variables from meteorological and geographical factors and obtain their optimal combinations to construct four seasonal GWR models. We employ two different schemes to comprehensively test the performance of our proposed GWR models: (1) comparison with other regular GWR models by varying the number of auxiliary variables; and (2) comparison with observed ground-level PM2.5 concentrations. The result shows that our GWR models of "AOD + 3" with three common meteorological variables generally perform better than all the other GWR models involved. Our models also show powerful prediction capabilities for PM2.5 concentrations with only slight overfitting. The determination coefficients R2 of our seasonal models are 0.8259 in spring, 0.7818 in summer, 0.8407 in autumn, and 0.7689 in winter. Also, the seasonal models in summer and autumn behave better than those in spring and winter. The comparison between seasonal and yearly models further validates the specific seasonal pattern of auxiliary variables of the GWR model in the YRD. We also stress the importance of key variables and propose a selection process for the AOD-PM2.5 model. Our work validates the significance of proper auxiliary variables in modelling the AOD-PM2.5 relationships and

  5. The Financial Accounting Model from a System Dynamics' Perspective

    OpenAIRE

    Melse, Eric

    2006-01-01

    This paper explores the foundation of the financial accounting model. We examine the properties of the accounting equation as the principal algorithm for the design and the development of a System Dynamics model. Key to the perspective is the foundational requirement that resolves the temporal conflict that resides in a stock and flow model. Through formal analysis the accounting equation is redefined as a cybernetic model by expressing the temporal and dynamic properties of its terms. Articu...
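
    A minimal stock-and-flow reading of the accounting equation discussed in this record is sketched below; the figures and postings are illustrative only.

```python
# Tiny stock-and-flow reading of the accounting equation: assets, liabilities and equity
# are stocks, and each period's transactions are flows that update them while keeping
# Assets = Liabilities + Equity. Numbers are illustrative.
from dataclasses import dataclass

@dataclass
class BalanceSheet:
    assets: float
    liabilities: float
    equity: float

    def post(self, d_assets: float = 0.0, d_liabilities: float = 0.0, d_equity: float = 0.0) -> None:
        """Apply one period's flows; the posting must keep the equation in balance."""
        assert abs(d_assets - (d_liabilities + d_equity)) < 1e-9, "unbalanced entry"
        self.assets += d_assets
        self.liabilities += d_liabilities
        self.equity += d_equity

bs = BalanceSheet(assets=100.0, liabilities=40.0, equity=60.0)
bs.post(d_assets=+10.0, d_liabilities=+10.0)     # e.g. buying inventory on credit
bs.post(d_assets=+5.0, d_equity=+5.0)            # e.g. profit retained this period
print(bs, "balanced:", abs(bs.assets - bs.liabilities - bs.equity) < 1e-9)
```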

  6. Phase field theory of proper displacive phase transformations: Structural anisotropy and directional flexibility, a vector model, and the transformation kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Rao Weifeng [Department of Materials Science and Engineering, Rutgers University, 607 Taylor Road, Piscataway, NJ 08854 (United States); Khachaturyan, Armen G., E-mail: khach@jove.rutgers.edu [Department of Materials Science and Engineering, Rutgers University, 607 Taylor Road, Piscataway, NJ 08854 (United States)

    2011-06-15

    A phase field theory of proper displacive transformations is developed to address the microstructure evolution and its response to applied fields in decomposing and martensitic systems. The theory is based on the explicit equation for the non-equilibrium free energy function of the transformation strain obtained by a consistent separation of the total strain into transformation and elastic strains. The transformation strain is considered to be a relaxing long-range order parameter evolving in accordance with the system energetics rather than as a fixed material constant used in the conventional Eshelby theory of coherent inclusions. The elastic strain is defined as a coherency strain recovering the crystal lattice compatibility. The obtained free energy function of the transformation strain leads to the concepts of structural anisotropy and directional flexibility of low symmetry phases. The formulated vector model of displacive transformation makes apparent a similarity between proper displacive transformation and ferromagnetic/ferroelectric transformation and, in particular, a similarity between the structural anisotropy and magnetic/polar anisotropy of ferromagnetic/ferroelectric materials. It even predicts the feasibility of a glass-like structural state with unlimited directional flexibility of the transformation strain that is conceptually similar to a ferromagnetic glass. The thermodynamics of the equilibrium between low symmetry phases and the thermodynamic conditions leading to the formation of adaptive states are formulated.

  7. Accountability

    Science.gov (United States)

    Fielding, Michael; Inglis, Fred

    2017-01-01

    This contribution republishes extracts from two important articles published around 2000 concerning the punitive accountability system suffered by English primary and secondary schools. The first concerns the inspection agency Ofsted, and the second managerialism. Though they do not directly address assessment, they are highly relevant to this…

  8. The financial accounting model from a system dynamics' perspective

    NARCIS (Netherlands)

    Melse, E.

    2006-01-01

    This paper explores the foundation of the financial accounting model. We examine the properties of the accounting equation as the principal algorithm for the design and the development of a System Dynamics model. Key to the perspective is the foundational requirement that resolves the temporal

  9. Leadership in Force XXI: Is the Army's Current Leadership Model and Leader Development Doctrine Properly Addressing the Challenges Brought About by the Transition to Force XXI

    National Research Council Canada - National Science Library

    Johnson, Carl

    1999-01-01

    .... The purpose of this research paper is to answer the question, Is the Army's current leadership model and leader development doctrine properly addressing the challenges brought about by the transition to Force XXI...

  10. Proper Orthogonal Decomposition of Pressure Fields in a Draft Tube Cone of the Francis (Tokke) Turbine Model

    International Nuclear Information System (INIS)

    Stefan, D; Rudolf, P

    2015-01-01

    The simulations of the high-head Francis turbine model (Tokke) are performed for three operating conditions - Part Load, Best Efficiency Point (BEP) and Full Load - using Ansys Fluent R15 and, alternatively, OpenFOAM 2.2.2. For both solvers the simulations employ the Realizable k-ε turbulence model. The unsteady pressure pulsations of the pressure signal from two monitoring points situated in the draft tube cone and one behind the guide vanes are evaluated for all three operating conditions in order to compare frequencies and amplitudes with the experimental results. The computed velocity fields are compared with the experimental ones using LDA measurements at two locations in the draft tube cone. Proper orthogonal decomposition (POD) is applied to a longitudinal slice through the draft tube cone. The unsteady static pressure fields are decomposed, and the spatio-temporal behavior of the modes is correlated with the amplitude-frequency results obtained from the pressure signal at the monitoring points. The main application of POD is to identify which modes are related to the interaction between the rotor (turbine runner) and the stator (spiral casing and guide vanes) and cause dynamic flow behavior in the draft tube. The numerically computed efficiency is correlated with the experimental one in order to verify the simulation accuracy.

  11. Modelling adversary actions against a nuclear material accounting system

    International Nuclear Information System (INIS)

    Lim, J.J.; Huebel, J.G.

    1979-01-01

    A typical nuclear material accounting system employing double-entry bookkeeping is described. A logic diagram is used to model the interactions of the accounting system and the adversary when he attempts to thwart it. Boolean equations are derived from the logic diagram; solution of these equations yields the accounts and records through which the adversary may disguise a SSNM theft and the collusion requirements needed to accomplish this feat. Some technical highlights of the logic diagram are also discussed

  12. A new DFT approach to model small polarons in oxides with proper account for long-range polarization

    Science.gov (United States)

    Kokott, Sebastian; Levchenko, Sergey V.; Scheffler, Matthias; Theory Department Team

    In this work, we address two important challenges in the DFT description of small polarons (excess charges localized within one unit cell): sensitivity to the errors in exchange-correlation (XC) treatment and finite-size effects in supercell calculations. The polaron properties are obtained using a modified neutral potential-energy surface (PES). Using the hybrid HSE functional and considering the whole range 0 […] (Deutsche Forschungsgemeinschaft).

  13. The Accounting Class as Accounting Firm: A Model Program for Developing Technical and Managerial Skills

    Science.gov (United States)

    Docherty, Gary

    1976-01-01

    One way to bring the accounting office into the classroom is to conduct the class as a "company." Such a class is aimed at developing students' technical and managerial skills, as well as their career awareness and career goals. Performance goals, a course description, and overall objectives of the course are given and might serve as a model.…

  14. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Niculae Feleaga

    2006-04-01

    Full Text Available The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made to a specific asset, group of assets or entities, or to some rendered services. Within the economic sciences, value comes from its very own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: the historical cost, the current cost, the realisable (settlement) value, and the present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The many accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information they provide, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  15. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Liliana Feleaga

    2006-06-01

    Full Text Available The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made to a specific asset, group of assets or entities, or to some rendered services. Within the economic sciences, value comes from its very own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: the historical cost, the current cost, the realisable (settlement) value, and the present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The many accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information they provide, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  16. Modelling solar cells with thermal phenomena taken into account

    International Nuclear Information System (INIS)

    Górecki, K; Górecki, P; Paduch, K

    2014-01-01

    The paper is devoted to modelling the properties of solar cells. The authors' electrothermal model of such cells is described. This model takes into account the influence of temperature on the cells' characteristics. Some results of calculations and measurements of selected solar cells are presented and discussed. Good agreement between the results of calculations and measurements was obtained, which confirms the correctness of the elaborated model.

  17. PROFESSIONAL COMPETITIVE EVOLUTION AND QUANTIFICATION MODELS IN ACCOUNTING SERVICE ELABORATION

    Directory of Open Access Journals (Sweden)

    Gheorghe FATACEAN

    2013-12-01

    Full Text Available The objective of this article is to apply an assessment framework to the elaboration of accounting services. The purpose of this model is to identify and evaluate an elite group of expert accountants in Romania who can provide solutions to the most complex matters in criminal, tax, civil or commercial cases that become the object of lawsuits.

  18. Individual Learning Accounts and Other Models of Financing Lifelong Learning

    Science.gov (United States)

    Schuetze, Hans G.

    2007-01-01

    To answer the question "Financing what?" this article distinguishes several models of lifelong learning as well as a variety of lifelong learning activities. Several financing methods are briefly reviewed, however the principal focus is on Individual Learning Accounts (ILAs) which were seen by some analysts as a promising model for…

  19. Can An Amended Standard Model Account For Cold Dark Matter?

    International Nuclear Information System (INIS)

    Goldhaber, Maurice

    2004-01-01

    It is generally believed that one has to invoke theories beyond the Standard Model to account for cold dark matter particles. However, there may be undiscovered universal interactions that, if added to the Standard Model, would lead to new members of the three generations of elementary fermions that might be candidates for cold dark matter particles

  20. The CARE model of social accountability: promoting cultural change.

    Science.gov (United States)

    Meili, Ryan; Ganem-Cuenca, Alejandra; Leung, Jannie Wing-sea; Zaleschuk, Donna

    2011-09-01

    On the 10th anniversary of Health Canada and the Association of Faculties of Medicine of Canada's publication in 2001 of Social Accountability: A Vision for Canadian Medical Schools, the authors review the progress at one Canadian medical school, the College of Medicine at the University of Saskatchewan, in developing a culture of social accountability. They review the changes that have made the medical school more socially accountable and the steps taken to make those changes possible. In response to calls for socially accountable medical schools, the College of Medicine created a Social Accountability Committee to oversee the integration of these principles into the college. The committee developed the CARE model (Clinical activity, Advocacy, Research, Education and training) as a guiding tool for social accountability initiatives toward priority health concerns and as a means of evaluation. Diverse faculty and student committees have emerged as a result and have had far-reaching impacts on the college and communities: from changes in curricula and admissions to community programming and international educational experiences. Although a systematic assessment of the CARE model is needed, early evidence shows that the most significant effects can be found in the cultural shift in the college, most notably among students. The CARE model may serve as an important example for other educational institutions in the development of health practitioners and research that is responsive to the needs of their communities.

  1. Stochastic models in risk theory and management accounting

    NARCIS (Netherlands)

    Brekelmans, R.C.M.

    2000-01-01

    This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds of the probability of ruin for the classical risk process extended with a constant interest
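
    For context (standard background, not taken from the thesis itself), the classical risk process and the ruin probability it refers to are usually written as

        U(t) = u + ct - \sum_{i=1}^{N(t)} X_i,
        \qquad
        \psi(u) = \Pr\Bigl(\inf_{t \ge 0} U(t) < 0 \,\Bigm|\, U(0) = u\Bigr),

    where u is the initial capital, c the premium rate, N(t) a Poisson claim-counting process and X_i the claim sizes; adding interest at a constant force δ replaces the drift c dt by (c + δU(t)) dt in the surplus dynamics, which is the extension whose ruin probability the thesis bounds.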

  2. Driving Strategic Risk Planning With Predictive Modelling For Managerial Accounting

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

    Currently, risk management in management/managerial accounting is treated as deterministic. Although it is well known that risk estimates are necessarily uncertain or stochastic, until recently the methodology required to handle stochastic risk-based elements appeared to be impractical and too mathematical. The ultimate purpose of this paper is to "make the risk concept procedural and analytical" and to argue that accountants should now include stochastic risk management as a standard tool. Drawing on mathematical modelling and statistics, the paper methodically develops a risk analysis approach for managerial accounting and shows how it can be used to determine the impact of different types of risk assessment input parameters on the variability of important outcome measures. The purpose is to: (i) point out the theoretical necessity of a stochastic risk framework; (ii) present a stochastic framework…

  3. Application of a predictive Bayesian model to environmental accounting.

    Science.gov (United States)

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.

  4. Stochastic scalar mixing models accounting for turbulent frequency multiscale fluctuations

    International Nuclear Information System (INIS)

    Soulard, Olivier; Sabel'nikov, Vladimir; Gorokhovski, Michael

    2004-01-01

    Two new scalar micromixing models accounting for a turbulent frequency scale distribution are investigated. These models were derived by Sabel'nikov and Gorokhovski [Second International Symposium on Turbulence and Shear Flow Phenomena, Royal Institute of Technology (KTH), Stockholm, Sweden, June 27-29, 2001] using a multiscale extension of the classical interaction by exchange with the mean (IEM) and Langevin models. They are called, respectively, the Extended IEM (EIEM) and Extended Langevin (ELM) models. The EIEM and ELM models are tested against DNS results in the case of the decay of a homogeneous scalar field in homogeneous turbulence. This comparison leads to a reformulation of the law governing the mixing frequency distribution. Finally, the asymptotic behaviour of the modeled PDF is discussed.
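
    For orientation (standard background, not the paper's exact formulation), the classical IEM model relaxes each scalar sample φ* toward the local mean at a rate set by a single mean turbulent frequency, which the extended models above instead draw from a distribution of frequencies:

        \frac{d\phi^{*}}{dt} = -\frac{C_{\phi}}{2}\,\omega\,\bigl(\phi^{*} - \langle\phi\rangle\bigr),
        \qquad
        \omega = \varepsilon / k .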

  5. The shift of accounting models and accounting quality: the case of Norwegian GAAP

    OpenAIRE

    Stenheim, Tonny; Madsen, Dag Øivind

    2017-01-01

    This is an Open Access journal available from http://www.virtusinterpress.org This paper investigates the change in accounting quality when firms shift from a revenue-oriented historical cost accounting regime such as Norwegian GAAP (NGAAP) to a balance-oriented fair value accounting regime such as International Financial Reporting Standards (IFRS). Previous studies have demonstrated mixed effects on accounting quality upon IFRS adoption. One possible reason is that the investigated domest...

  6. Accounting for Business Models: Increasing the Visibility of Stakeholders

    Directory of Open Access Journals (Sweden)

    Colin Haslam

    2015-01-01

    Full Text Available Purpose: This paper conceptualises a firm's business model employing stakeholder theory as a central organising element to help inform the purpose and objective(s) of business model financial reporting and disclosure. Framework: Firms interact with a complex network of primary and secondary stakeholders to secure the value proposition of a firm's business model. This value proposition is itself a complex amalgam of value creating, value capturing and value manipulating arrangements with stakeholders. From a financial accounting perspective, the purpose of the value proposition for a firm's business model is to sustain liquidity and solvency as a going concern. Findings: This article argues that stakeholder relations impact upon the financial viability of a firm's business model value proposition. However, current financial reporting by function of expenses and the central organising objectives of the accounting conceptual framework conceal firm-stakeholder relations and their impact on reported financials. Practical implications: The practical implication of our paper is that 'Business Model' financial reporting would require a reorientation in the accounting conceptual framework that defines the objectives and purpose of financial reporting. This reorientation would involve reporting about stakeholder relations and their impact on a firm's financials, not simply reporting financial information to 'investors'. Social Implications: Business model financial reporting has the potential to be stakeholder inclusive because the numbers and narratives reported by firms in their annual financial statements will increase the visibility of stakeholder relations and how these are being managed. What is original/value of the paper: This paper's original perspective is that a firm's business model is structured out of stakeholder relations. It presents the firm's value proposition as the product of value creating, capturing and

  7. Modeling of Accounting Doctoral Thesis with Emphasis on Solution for Financial Problems

    Directory of Open Access Journals (Sweden)

    F. Mansoori

    2015-02-01

    Full Text Available With the expansion of graduate programmes and research budgets, accounting research in Iran has moved beyond instruction into applied work, and a number of accounting projects have been implemented in practice, yielding varied experience with the implementation of accounting standards. These experiences were expected to help solve the country's financial problems, yet despite considerable research effort many financial and accounting problems remain. Doctoral (PhD) theses can be regarded as one of the important means of advancing university disciplines, including accounting: they are team efforts that are validated by supervisory committees at universities. Applied theses should, in principle, solve part of the problems in the accounting field, but in practice they rarely do. The question that arises is why the output of applied, knowledge-based projects has not dispelled these problems, and why policy makers in difficult situations prefer to rely on their own previous experience in important decisions instead of using researchers' knowledge-based recommendations. This study examines the reasons that prevent applied PhD projects from succeeding in practice, including the view that policy suggestions derived from such projects are not of sufficient quality for implementation. For this purpose, the indicators of an applied PhD project were identified and ranked by 110 experts, and applied PhD accounting projects were then compared in a comprehensive study. As a result, the shortcomings of the studied theses were identified and a proper, applied model for producing applied research was developed.

  8. Accounting for microbial habitats in modeling soil organic matter dynamics

    Science.gov (United States)

    Chenu, Claire; Garnier, Patricia; Nunan, Naoise; Pot, Valérie; Raynaud, Xavier; Vieublé, Laure; Otten, Wilfred; Falconer, Ruth; Monga, Olivier

    2017-04-01

    The extreme heterogeneity of soil constituents, architecture and inhabitants at the microscopic scale is increasingly recognized. Microbial communities exist and are active in a complex 3-D physical framework of mineral and organic particles defining pores of various sizes that are more or less inter-connected. This results in a frequent spatial disconnection between soil carbon, energy sources and the decomposer organisms, and in a variety of microhabitats that are more or less suitable for microbial growth and activity. However, current biogeochemical models account for C dynamics at the macroscale (cm, m) and consider time- and spatially averaged relationships between microbial activity and soil characteristics. Different modelling approaches have attempted to account for this microscale heterogeneity, based either on considering aggregates as surrogates for microbial habitats, or pores. Innovative modelling approaches are based on an explicit representation of soil structure at the fine scale, i.e. at µm to mm scales: the pore architecture and its saturation with water, and the localization of organic resources and of microorganisms. Three recent models are presented here that describe the heterotrophic activity of either bacteria or fungi and are based upon different strategies to represent the complex soil pore system (Mosaic, LBios and µFun). These models make it possible to rank the factors controlling microbial activity in the soil's heterogeneous architecture. The present limits of these approaches and the remaining challenges are discussed, regarding the extensive information required on soils at the microscale and the up-scaling of microbial functioning from the pore to the core scale.

  9. Accounting for small scale heterogeneity in ecohydrologic watershed models

    Science.gov (United States)

    Burke, W.; Tague, C.

    2017-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, it results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach.

  10. Accounting for household heterogeneity in general equilibrium economic growth models

    International Nuclear Information System (INIS)

    Melnikov, N.B.; O'Neill, B.C.; Dalton, M.G.

    2012-01-01

    We describe and evaluate a new method of aggregating heterogeneous households that allows for the representation of changing demographic composition in a multi-sector economic growth model. The method is based on a utility and labor supply calibration that takes into account time variations in demographic characteristics of the population. We test the method using the Population-Environment-Technology (PET) model by comparing energy and emissions projections employing the aggregate representation of households to projections representing different household types explicitly. Results show that the difference between the two approaches in terms of total demand for energy and consumption goods is negligible for a wide range of model parameters. Our approach allows the effects of population aging, urbanization, and other forms of compositional change on energy demand and CO2 emissions to be estimated and compared in a computationally manageable manner using a representative household under assumptions and functional forms that are standard in economic growth models.

  11. PropeR revisited

    NARCIS (Netherlands)

    van der Linden, Helma; Talmon, Jan; Tange, Huibert; Grimson, Jane; Hasman, Arie

    2005-01-01

    INTRODUCTION: The PropeR EHR system (PropeRWeb) is a multidisciplinary electronic health record (EHR) system for multidisciplinary use in extramural patient care for stroke patients. DESIGN: The system is built using existing open source components and is based on open standards. It is implemented

  12. Development of a Reduced-Order Model for Reacting Gas-Solids Flow using Proper Orthogonal Decomposition

    Energy Technology Data Exchange (ETDEWEB)

    McDaniel, Dwayne [Florida International Univ., Miami, FL (United States); Dulikravich, George [Florida International Univ., Miami, FL (United States); Cizmas, Paul [Florida International Univ., Miami, FL (United States)

    2017-11-27

    This report summarizes the objectives, tasks and accomplishments made during the three year duration of this research project. The report presents the results obtained by applying advanced computational techniques to develop reduced-order models (ROMs) in the case of reacting multiphase flows based on high fidelity numerical simulation of gas-solids flow structures in risers and vertical columns obtained by the Multiphase Flow with Interphase eXchanges (MFIX) software. The research includes a numerical investigation of reacting and non-reacting gas-solids flow systems and computational analysis that will involve model development to accelerate the scale-up process for the design of fluidization systems by providing accurate solutions that match the full-scale models. The computational work contributes to the development of a methodology for obtaining ROMs that is applicable to the system of gas-solid flows. Finally, the validity of the developed ROMs is evaluated by comparing the results against those obtained using the MFIX code. Additionally, the robustness of existing POD-based ROMs for multiphase flows is improved by avoiding non-physical solutions of the gas void fraction and ensuring that the reduced kinetics models used for reactive flows in fluidized beds are thermodynamically consistent.
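
    As general background on the POD-based ROM approach summarized above (not the report's specific equations), each flow field is expanded in a small number of POD modes and the governing equations are projected onto those modes, reducing the PDE system to a few ODEs for the modal coefficients:

        \phi(\mathbf{x},t) \;\approx\; \bar{\phi}(\mathbf{x}) + \sum_{k=1}^{m} a_k(t)\,\varphi_k(\mathbf{x}),
        \qquad
        \frac{d a_k}{dt} \;=\; \Bigl(\varphi_k,\; \mathcal{N}\bigl(\bar{\phi} + \textstyle\sum_{j} a_j \varphi_j\bigr)\Bigr),

    where φ could stand for the gas void fraction or a velocity component, φ_k are its POD modes, a_k(t) the time coefficients, and N(·) the discretized governing operator; keeping quantities such as the void fraction physically admissible is the kind of constraint the report's robustness improvements address.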

  13. Accounting for Trust: A Conceptual Model for the Determinants of Trust in the Australian Public Accountant – SME Client Relationship

    Directory of Open Access Journals (Sweden)

    Michael Cherry

    2016-06-01

    Full Text Available This paper investigates trust as it relates to the relationship between Australia’s public accountants and their SME clients. It describes the contribution of the accountancy profession to the SME market, as well as the key challenges faced by accountants and their SME clients. Following the review of prior scholarly studies, a working definition of trust as it relates to this important relationship is also developed and presented. A further consequence of prior academic work is the development of a comprehensive conceptual model to describe the determinants of trust in the Australian public accountant – SME client relationship, which requires testing via empirical studies.

  14. Nonlinear model-based control of the Czochralski process III: Proper choice of manipulated variables and controller parameter scheduling

    Science.gov (United States)

    Neubert, M.; Winkler, J.

    2012-12-01

    This contribution continues an article series [1,2] about the nonlinear model-based control of the Czochralski crystal growth process. The key idea of the presented approach is to use a sophisticated combination of nonlinear model-based and conventional (linear) PI controllers for tracking of both, crystal radius and growth rate. Using heater power and pulling speed as manipulated variables several controller structures are possible. The present part tries to systematize the properties of the materials to be grown in order to get unambiguous decision criteria for a most profitable choice of the controller structure. For this purpose a material specific constant M called interface mobility and a more process specific constant S called system response number are introduced. While the first one summarizes important material properties like thermal conductivity and latent heat the latter one characterizes the process by evaluating the average axial thermal gradients at the phase boundary and the actual growth rate at which the crystal is grown. Furthermore these characteristic numbers are useful for establishing a scheduling strategy for the PI controller parameters in order to improve the controller performance. Finally, both numbers give a better understanding of the general thermal system dynamics of the Czochralski technique.

  15. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    Science.gov (United States)

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks: one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics
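
    Restating the marginal densities described in the abstract in compact form (with N(x; m, 1) denoting a unit-variance normal density), the CBM marginals are

        f_{\text{non}}(x) = N(x;\,0,\,1),
        \qquad
        f_{\text{dis}}(x) = \alpha\,N(x;\,\mu,\,1) + (1-\alpha)\,N(x;\,0,\,1),

    and the bivariate extension replaces each peak by a bivariate normal with the stated correlations (ρ1 for nondiseased cases), so that the joint density for diseased cases becomes the four-component mixture enumerated in point (b).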

  16. Characterizations of proper actions

    Science.gov (United States)

    Biller, Harald

    2004-03-01

    Three kinds of proper actions of increasing strength are defined. We prove that the three definitions specialize to the definitions by Bourbaki, by Palais and by Baum, Connes and Higson in their respective settings. The third of these, which thus turns out to be the strongest, originally only concerns actions of second countable locally compact groups on metrizable spaces. In this situation, it is shown to coincide with the other two definitions if the total space locally has the Lindelöf property and the orbit space is regular.

  17. Proper Islamic Consumption

    DEFF Research Database (Denmark)

    Fischer, Johan

    … mobile, religiously committed communities to the opportunities and perils presented by modernisation. It also tells us something about the debates concerning the meanings and practices of Islam within an aggressive, globalised, secularised modernity. In Malaysia this is an especially intriguing issue… In spite of a long line of social theory analyzing the spiritual in the economic, and vice versa, very little of the recent increase in scholarship on Islam addresses its relationship with capitalism. Johan Fischer's book, Proper Islamic Consumption, begins to fill this gap. […] Fischer's detailed…

  18. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion
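
    A minimal sketch of the local-basis idea described above, with hypothetical helper names (`approx_model` for the fast forward solver, and a `dictionary` of stored parameter/error pairs) standing in for the components the authors describe: the K nearest dictionary entries supply a local model-error basis, and the residual is projected off that basis before the likelihood is evaluated.

        # Illustrative sketch only; the dictionary structure and helpers are assumptions.
        import numpy as np

        def local_error_basis(theta, dictionary, k=10):
            """Orthonormal basis for the model error near parameter vector `theta`.

            `dictionary` is a list of (theta_i, error_i) pairs, where error_i is the
            difference between the detailed and approximate forward responses.
            """
            params = np.array([p for p, _ in dictionary])
            errors = np.array([e for _, e in dictionary])
            idx = np.argsort(np.linalg.norm(params - theta, axis=1))[:k]   # K nearest neighbours
            Q, _ = np.linalg.qr(errors[idx].T)                             # orthonormalise error vectors
            return Q

        def corrected_residual(theta, d_obs, approx_model, dictionary, k=10):
            """Residual with the locally estimated model-error component projected out."""
            r = d_obs - approx_model(theta)
            Q = local_error_basis(theta, dictionary, k)
            return r - Q @ (Q.T @ r)   # remove the part explained by the local error basis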

  19. Accounting for Water Insecurity in Modeling Domestic Water Demand

    Science.gov (United States)

    Galaitsis, S. E.; Huber-lee, A. T.; Vogel, R. M.; Naumova, E.

    2013-12-01

    Water demand management uses price elasticity estimates to predict consumer demand in relation to water pricing changes, but studies have shown that many additional factors affect water consumption. Development scholars document the need for water security; however, much of the water security literature focuses on broad policies which can influence water demand. Previous domestic water demand studies have not considered how water security can affect a population's consumption behavior. This study is the first to model the influence of water insecurity on water demand. A subjective indicator scale measuring water insecurity among consumers in the Palestinian West Bank is developed and included as a variable to explore how perceptions of control, or lack thereof, impact consumption behavior and resulting estimates of price elasticity. A multivariate regression model demonstrates the significance of a water insecurity variable for data sets encompassing disparate water access. When accounting for insecurity, the R-squared value improves and the marginal price a household is willing to pay becomes a significant predictor of the household's quantity consumed. The model indicates that, with all other variables held equal, a household will buy more water when its users are more water insecure. Though the reasons behind this trend require further study, the findings suggest broad policy implications by demonstrating that water distribution practices in scarcity conditions can promote consumer welfare and efficient water use.
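
    A minimal sketch of the kind of demand regression described above (hypothetical file and column names, not the study's actual specification), in which a subjective water-insecurity index enters alongside marginal price and income:

        # Illustrative only; `households.csv` and its columns are hypothetical.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("households.csv")   # columns: quantity, marginal_price, income, insecurity, hh_size

        # Log-log demand model with the water-insecurity score as an additional regressor
        model = smf.ols(
            "np.log(quantity) ~ np.log(marginal_price) + np.log(income) + insecurity + hh_size",
            data=df,
        ).fit()
        print(model.summary())               # coefficient on np.log(marginal_price) approximates price elasticity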

  20. Accrual based accounting implementation: An approach for modelling major decisions

    OpenAIRE

    Ratno Agriyanto; Abdul Rohman; Dwi Ratmono; Imam Ghozali

    2016-01-01

    Over the last three decades, the implementation of accrual-based accounting in government institutions has been a central issue in Indonesia. Implementation of accrual-based accounting in government institutions has taken place amid debate about the usefulness of accounting information for decision-making. Empirical studies show that accrual-based accounting information in government institutions is often not used for decision-making. The research objective was to determine the impact of the implementation of the accrual...

  1. Bedrijfsrisico's van de accountant en het Audit Risk Model

    NARCIS (Netherlands)

    Wallage, Ph.; Klijnsmit, P.; Sodekamp, M.

    2003-01-01

    In recent years, the business risk of the auditing accountant has increased sharply. The accountant's business risks are increasingly becoming an obstacle to accepting engagements. This article addresses the way in which the business risks

  2. Accrual based accounting implementation: An approach for modelling major decisions

    Directory of Open Access Journals (Sweden)

    Ratno Agriyanto

    2016-12-01

    Full Text Available Over the last three decades, the implementation of accrual-based accounting in government institutions has been a central issue in Indonesia. Implementation of accrual-based accounting in government institutions has taken place amid debate about the usefulness of accounting information for decision-making. Empirical studies show that accrual-based accounting information in government institutions is often not used for decision-making. The research objective was to determine the impact of the implementation of accrual-based accounting on the use of accrual-based accounting information for decision-making. We used survey questionnaires. The data were processed by SEM using the statistical software WarpPLS. The results showed that the implementation of accrual-based accounting in the City Government of Semarang is significantly and positively associated with decision-making. Another important finding is that City Government of Semarang officials exhibit a personality trait, low tolerance of ambiguity, which has a negative effect on the relationship between the implementation of accrual-based accounting and decision-making.

  3. A generic accounting model to support operations management decisions

    NARCIS (Netherlands)

    Verdaasdonk, P.J.A.; Wouters, M.J.F.

    2001-01-01

    Information systems are generally unable to generate information about the financial consequences of operations management decisions. This is because the procedures for determining the relevant accounting information for decision support are not formalised in ways that can be implemented in

  4. MAIN COORDINATES OF ACCOUNTING PROFESSION CO-OPETITIONAL MODEL

    Directory of Open Access Journals (Sweden)

    MARIOARA AVRAM

    2012-01-01

    Full Text Available The accounting profession fulfills a vital role in the development of the modern economy, contributing to a thorough knowledge of the business environment, to the improvement of economic performance and to solving some of the many problems the post-modern society is facing. Currently, the accounting profession is characterized by the expansion of information technology, the internationalization of business and professional specialization, which has made possible the creation of several professional bodies. Against this background, it becomes urgent to discover new perspectives on strategies able to maintain and increase business success, based on the simultaneous combination of the elements of cooperation and competition, which involves a new type of relation, called in the North American literature "co-opetition".

  5. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    … to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction, classification, time-series forecasting and modelling validation. The key element of PPV is the Theory of Sampling (TOS), which allows insight … is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one-data-set re-sampling approaches for validation, especially cross-validation and leverage-corrected validation, should be terminated…

  6. Models and Rules of Evaluation in International Accounting

    OpenAIRE

    Liliana Feleaga; Niculae Feleaga

    2006-01-01

    The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made to a specific asset, group of assets or entities, or to some rendered services. Within the economic sciences, value comes from its very own deep history. In accounting, the concept of value had a late and fragile start. The term of value must not be misinterpreted as being the same thing with cost, even though value is freque...

  7. What is the proper evaluation method: Some basic considerations

    International Nuclear Information System (INIS)

    Leeb, Helmut; Schnabel, Georg; Srdinko, Thomas

    2014-01-01

    Recent developments and applications demand an extension of the energy range and the inclusion of reliable uncertainty information in nuclear data libraries. Due to the scarcity of neutron-induced reaction data beyond 20 MeV, the extension of the energy range up to at least 150 MeV is not trivial, because the corresponding nuclear data evaluations depend heavily on nuclear models and proper evaluation methods are still under discussion. Restricting attention to evaluation techniques based on Bayesian statistics, the influence of the a priori knowledge on the final result of the evaluation is considered. The study clearly indicates the need to account properly for the deficiencies of the nuclear model. Concerning the covariance matrices, it is argued that they depend not only on the model but also on the method of generation, and an additional consensus is required for the comparison of different evaluations of the same data sets. (authors)

  8. Accounting for heterogeneity of public lands in hedonic property models

    Science.gov (United States)

    Charlotte Ham; Patricia A. Champ; John B. Loomis; Robin M. Reich

    2012-01-01

    Open space lands, national forests in particular, are usually treated as homogeneous entities in hedonic price studies. Failure to account for the heterogeneous nature of public open spaces may result in inappropriate inferences about the benefits of proximate location to such lands. In this study the hedonic price method is used to estimate the marginal values for...

  9. Modelling Financial-Accounting Decisions by Means of OLAP Tools

    Directory of Open Access Journals (Sweden)

    Diana Elena CODREAN

    2011-03-01

    Full Text Available At present, a company's smooth running largely depends on the quantity and quality of the information it relies on when making decisions. The information needed to underpin decisions can be obtained thanks to a high-performing information system, which makes it possible for the data to be presented quickly, synthetically and accurately, while also providing the opportunity for complex analyses and predictions. In such circumstances, computerized accounting systems, too, have grown in complexity by means of data-analysis solutions such as OLAP and Data Mining, which help perform multidimensional analyses of financial-accounting data: potential frauds can be detected, information hidden in the data can be revealed, and trends for certain indicators can be established, thereby providing useful information for a company's decision making.
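
    As a small illustration of the OLAP-style multidimensional analysis mentioned above (hypothetical column names, not tied to any specific accounting package), a cube-like roll-up of ledger data along two dimensions can be sketched with pandas:

        import pandas as pd

        # Hypothetical ledger extract: one row per posted transaction
        ledger = pd.DataFrame({
            "period":  ["2011-01", "2011-01", "2011-02", "2011-02"],
            "account": ["Sales", "Expenses", "Sales", "Expenses"],
            "branch":  ["North", "North", "South", "South"],
            "amount":  [1200.0, -300.0, 950.0, -410.0],
        })

        # OLAP-style "slice and dice": amounts aggregated by account and period, with totals
        cube = ledger.pivot_table(index="account", columns="period",
                                  values="amount", aggfunc="sum", margins=True)
        print(cube)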

  10. Proper Motion and Secular Variations of Keplerian Orbital Elements

    Directory of Open Access Journals (Sweden)

    Alexey G. Butkevich

    2018-05-01

    Full Text Available High-precision observations require accurate modeling of secular changes in the orbital elements in order to extrapolate measurements over long time intervals, and to detect deviation from pure Keplerian motion caused, for example, by other bodies or relativistic effects. We consider the evolution of the Keplerian elements resulting from the gradual change of the apparent orbit orientation due to proper motion. We present rigorous formulae for the transformation of the orbit inclination, longitude of the ascending node and argument of the pericenter from one epoch to another, assuming uniform stellar motion and taking radial velocity into account. An approximate treatment, accurate to the second-order terms in time, is also given. The proper motion effects may be significant for long-period transiting planets. These theoretical results are applicable to the modeling of planetary transits and precise Doppler measurements as well as analysis of pulsar and eclipsing binary timing observations.

  11. Resource Allocation Models and Accountability: A Jamaican Case Study

    Science.gov (United States)

    Nkrumah-Young, Kofi K.; Powell, Philip

    2008-01-01

    Higher education institutions (HEIs) may be funded privately, by the state or by a mixture of the two. Nevertheless, any state financing of HE necessitates a mechanism to determine the level of support and the channels through which it is to be directed; that is, a resource allocation model. Public funding, through resource allocation models,…

  12. 76 FR 34712 - Medicare Program; Pioneer Accountable Care Organization Model; Extension of the Submission...

    Science.gov (United States)

    2011-06-14

    … stakeholders to develop initiatives to test innovative payment and service delivery models to reduce program … Medicare Program; Pioneer Accountable Care Organization Model; Extension of the Submission Deadlines for … of the Pioneer Accountable Care Organization Model letters of intent to June 30, 2011 and the…

  13. A novel Atoh1 "self-terminating" mouse model reveals the necessity of proper Atoh1 level and duration for hair cell differentiation and viability.

    Directory of Open Access Journals (Sweden)

    Ning Pan

    Full Text Available Atonal homolog 1 (Atoh1) is a bHLH transcription factor essential for inner ear hair cell differentiation. Targeted expression of Atoh1 at various stages in development can result in hair cell differentiation in the ear. However, the level and duration of Atoh1 expression required for proper hair cell differentiation and maintenance remain unknown. We generated an Atoh1 conditional knockout (CKO) mouse line using Tg(Atoh1-cre), in which the cre expression is driven by an Atoh1 enhancer element that is regulated by Atoh1 protein to "self-terminate" its expression. The mutant mice show transient, limited expression of Atoh1 in all hair cells in the ear. In the organ of Corti, reduction and delayed deletion of Atoh1 result in progressive loss of almost all the inner hair cells and the majority of the outer hair cells within three weeks after birth. The remaining cells express hair cell marker Myo7a and attract nerve fibers, but do not differentiate normal stereocilia bundles. Some Myo7a-positive cells persist in the cochlea into adult stages in the position of outer hair cells, flanked by a single row of pillar cells and two to three rows of disorganized Deiters cells. Gene expression analyses of Atoh1, Barhl1 and Pou4f3, genes required for survival and maturation of hair cells, reveal earlier and higher expression levels in the inner compared to the outer hair cells. Our data show that Atoh1 is crucial for hair cell mechanotransduction development, viability, and maintenance and also suggest that Atoh1 expression level and duration may play a role in inner vs. outer hair cell development. These genetically engineered Atoh1 CKO mice provide a novel model for establishing critical conditions needed to regenerate viable and functional hair cells with Atoh1 therapy.

  14. Accountability: a missing construct in models of adherence behavior and in clinical practice.

    Science.gov (United States)

    Oussedik, Elias; Foy, Capri G; Masicampo, E J; Kammrath, Lara K; Anderson, Robert E; Feldman, Steven R

    2017-01-01

    Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients' motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8-12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on the Self-Determination Theory, accountability can be considered in a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability) to patients' autonomous internal desire to please a respected health care provider (autonomous accountability), the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura's Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well as the testing and refinement of adherence interventions that make use of this critical determinant of human behavior.

  15. Behavioural Procedural Models – a multipurpose mechanistic account

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2012-05-01

    Full Text Available In this paper we outline an epistemological defence of what we call Behavioural Procedural Models (BPMs), which represent the processes of individual decisions that lead to relevant economic patterns as psychologically (rather than rationally) driven. Their general structure, and the way in which they may be incorporated into a multipurpose view of models, where the representational and interventionist goals are combined, is shown. It is argued that BPMs may provide "mechanistic-based explanations" in the sense defended by Hedström and Ylikoski (2010), which involve invariant regularities in Woodward's sense. Such mechanisms provide a causal sort of explanation of anomalous economic patterns, which allows for extra-market intervention and manipulability in order to correct and improve some key individual decisions. This capability sets the basis for the so-called libertarian paternalism (Sunstein and Thaler 2003).

  16. ACCOUNTING FUNDAMENTALS AND VARIATIONS OF STOCK PRICE: METHODOLOGICAL REFINEMENT WITH RECURSIVE SIMULTANEOUS MODEL

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki

    2015-01-01

    This study investigates the association between accounting fundamentals and variations of stock prices using a recursive simultaneous equation model. The accounting fundamentals consist of earnings yield, book value, profitability, growth opportunities and discount rate. The prior single-relationship model has been investigated by Chen and Zhang (2007), Sumiyana (2011) and Sumiyana et al. (2010). They assume that all accounting fundamentals associate direct-linearly with the stock returns. This study ...

  17. Accounting Fundamentals and Variations of Stock Price: Methodological Refinement with Recursive Simultaneous Model

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki

    2013-01-01

    This study investigates the association between accounting fundamentals and variations of stock prices using a recursive simultaneous equation model. The accounting fundamentals consist of earnings yield, book value, profitability, growth opportunities and discount rate. The prior single-relationship model has been investigated by Chen and Zhang (2007), Sumiyana (2011) and Sumiyana et al. (2010). They assume that all accounting fundamentals associate direct-linearly with the stock returns. This study ...

  18. Applying the International Medical Graduate Program Model to Alleviate the Supply Shortage of Accounting Doctoral Faculty

    Science.gov (United States)

    HassabElnaby, Hassan R.; Dobrzykowski, David D.; Tran, Oanh Thikie

    2012-01-01

    Accounting has been faced with a severe shortage in the supply of qualified doctoral faculty. Drawing upon the international mobility of foreign scholars and the spirit of the international medical graduate program, this article suggests a model to fill the demand in accounting doctoral faculty. The underlying assumption of the suggested model is…

  19. Towards accounting for dissolved iron speciation in global ocean models

    Directory of Open Access Journals (Sweden)

    A. Tagliabue

    2011-10-01

    Full Text Available The trace metal iron (Fe is now routinely included in state-of-the-art ocean general circulation and biogeochemistry models (OGCBMs because of its key role as a limiting nutrient in regions of the world ocean important for carbon cycling and air-sea CO2 exchange. However, the complexities of the seawater Fe cycle, which impact its speciation and bioavailability, are simplified in such OGCBMs due to gaps in understanding and to avoid high computational costs. In a similar fashion to inorganic carbon speciation, we outline a means by which the complex speciation of Fe can be included in global OGCBMs in a reasonably cost-effective manner. We construct an Fe speciation model based on hypothesised relationships between rate constants and environmental variables (temperature, light, oxygen, pH, salinity and assumptions regarding the binding strengths of Fe complexing organic ligands and test hypotheses regarding their distributions. As a result, we find that the global distribution of different Fe species is tightly controlled by spatio-temporal environmental variability and the distribution of Fe binding ligands. Impacts on bioavailable Fe are highly sensitive to assumptions regarding which Fe species are bioavailable and how those species vary in space and time. When forced by representations of future ocean circulation and climate we find large changes to the speciation of Fe governed by pH mediated changes to redox kinetics. We speculate that these changes may exert selective pressure on phytoplankton Fe uptake strategies in the future ocean. In future work, more information on the sources and sinks of ocean Fe ligands, their bioavailability, the cycling of colloidal Fe species and kinetics of Fe-surface coordination reactions would be invaluable. We hope our modeling approach can provide a means by which new observations of Fe speciation can be tested against hypotheses of the processes present in governing the ocean Fe cycle in an

  20. Spherical Detector Device Mathematical Modelling with Taking into Account Detector Module Symmetry

    International Nuclear Information System (INIS)

    Batyj, V.G.; Fedorchenko, D.V.; Prokopets, S.I.; Prokopets, I.M.; Kazhmuradov, M.A.

    2005-01-01

    A mathematical model for a spherical detector device that takes symmetry properties into account is considered. An exact algorithm for simulating the measurement procedure with multiple radiation sources is developed. The modelling results are shown to be in perfect agreement with calibration measurements.

  1. A mathematical model of sentimental dynamics accounting for marital dissolution.

    Science.gov (United States)

    Rey, José-Manuel

    2010-03-31

    Marital dissolution is ubiquitous in western societies. It poses major scientific and sociological problems both in theoretical and therapeutic terms. Scholars and therapists agree on the existence of a sort of second law of thermodynamics for sentimental relationships. Effort is required to sustain them. Love is not enough. Building on a simple version of the second law we use optimal control theory as a novel approach to model sentimental dynamics. Our analysis is consistent with sociological data. We show that, when both partners have similar emotional attributes, there is an optimal effort policy yielding a durable happy union. This policy is prey to structural destabilization resulting from a combination of two factors: there is an effort gap because the optimal policy always entails discomfort and there is a tendency to lower effort to non-sustaining levels due to the instability of the dynamics. These mathematical facts implied by the model unveil an underlying mechanism that may explain couple disruption in real scenarios. Within this framework the apparent paradox that a union consistently planned to last forever will probably break up is explained as a mechanistic consequence of the second law.

  2. A mathematical model of sentimental dynamics accounting for marital dissolution.

    Directory of Open Access Journals (Sweden)

    José-Manuel Rey

    Full Text Available BACKGROUND: Marital dissolution is ubiquitous in western societies. It poses major scientific and sociological problems both in theoretical and therapeutic terms. Scholars and therapists agree on the existence of a sort of second law of thermodynamics for sentimental relationships. Effort is required to sustain them. Love is not enough. METHODOLOGY/PRINCIPAL FINDINGS: Building on a simple version of the second law we use optimal control theory as a novel approach to model sentimental dynamics. Our analysis is consistent with sociological data. We show that, when both partners have similar emotional attributes, there is an optimal effort policy yielding a durable happy union. This policy is prey to structural destabilization resulting from a combination of two factors: there is an effort gap because the optimal policy always entails discomfort and there is a tendency to lower effort to non-sustaining levels due to the instability of the dynamics. CONCLUSIONS/SIGNIFICANCE: These mathematical facts implied by the model unveil an underlying mechanism that may explain couple disruption in real scenarios. Within this framework the apparent paradox that a union consistently planned to last forever will probably break up is explained as a mechanistic consequence of the second law.

  3. Nurse-directed care model in a psychiatric hospital: a model for clinical accountability.

    Science.gov (United States)

    E-Morris, Marlene; Caldwell, Barbara; Mencher, Kathleen J; Grogan, Kimberly; Judge-Gorny, Margaret; Patterson, Zelda; Christopher, Terrian; Smith, Russell C; McQuaide, Teresa

    2010-01-01

    The focus on recovery for persons with severe and persistent mental illness is leading state psychiatric hospitals to transform their method of care delivery. This article describes a quality improvement project involving a hospital's administration and multidisciplinary state-university affiliation that collaborated in the development and implementation of a nursing care delivery model in a state psychiatric hospital. The quality improvement project team instituted a new model to promote the hospital's vision of wellness and recovery through utilization of the therapeutic relationship and greater clinical accountability. Implementation of the model was accomplished in 2 phases: first, the establishment of a structure to lay the groundwork for accountability and, second, the development of a mechanism to provide a clinical supervision process for staff in their work with clients. Effectiveness of the model was assessed by surveys conducted at baseline and after implementation. Results indicated improvement in clinical practices and client living environment. As a secondary outcome, these improvements appeared to be associated with increased safety on the units evidenced by reduction in incidents of seclusion and restraint. Restructuring of the service delivery system of care so that clients are the center of clinical focus improves safety and can enhance the staff's attention to work with clients on their recovery. The role of the advanced practice nurse can influence the recovery of clients in state psychiatric hospitals. Future research should consider the impact on clients and their perceptions of the new service models.

  4. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

Full Text Available This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants, using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of oil trade balance results in worsening of other components (probably non-oil trade balance) of CA and (iii) that the positive influence of terms of trade reveals functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals high citizens' future income growth expectations, which has negative impact on the CA.
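
    For readers unfamiliar with the estimator, the sketch below illustrates the generic Jackknife (leave-one-out) Model Averaging idea: candidate models are combined with simplex-constrained weights chosen to minimize the sum of squared leave-one-out residuals of the averaged model. The synthetic data, nested regressor sets, and optimizer settings are illustrative assumptions, not the specification estimated for the Serbian current account.

```python
# Hedged sketch of Jackknife (leave-one-out) Model Averaging for linear models.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 80
X_full = rng.normal(size=(n, 4))
y = 1.0 + X_full[:, 0] - 0.5 * X_full[:, 1] + rng.normal(scale=0.5, size=n)

candidates = [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3]]  # nested regressor sets (illustrative)

def loo_residuals(cols):
    """Leave-one-out residuals of an OLS fit, via the hat-matrix shortcut."""
    X = np.column_stack([np.ones(n), X_full[:, cols]])
    H = X @ np.linalg.solve(X.T @ X, X.T)
    e = y - H @ y
    return e / (1.0 - np.diag(H))

E = np.column_stack([loo_residuals(c) for c in candidates])  # n x M matrix of LOO residuals

def jackknife_criterion(w):
    return np.sum((E @ w) ** 2)

m = len(candidates)
res = minimize(jackknife_criterion, np.full(m, 1.0 / m),
               bounds=[(0.0, 1.0)] * m,
               constraints={"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
               method="SLSQP")
print("JMA weights:", np.round(res.x, 3))
```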

  5. Larmor time and proper time

    Energy Technology Data Exchange (ETDEWEB)

    Kudaka, Shoju [Department of Physics, University of the Ryukyus, Okinawa 903-0129 (Japan); Matsumoto, Shuichi, E-mail: shuichi@edu.u-ryukyu.ac.jp [Department of Mathematics, University of the Ryukyus, Okinawa 903-0129 (Japan)

    2012-10-01

The idea of a Larmor clock is reexamined in the relativistic regime. We propose a concept of proper time for quantum theoretical particles. The Larmor clock can measure, under some relevant conditions, the proper time that passes while the particle stays in a space region. Our approach to the Larmor clock differs from those of other researchers in two aspects: our concept of the Larmor clock does not distinguish whether the particle is transmitted or reflected at the end of its stay, and the pointer of our Larmor clock is not the spin but the total angular momentum. -- Highlights: ► The idea of a Larmor clock is reexamined in the relativistic regime. ► We propose a concept of proper time for quantum theoretical particles. ► The Larmor clock measures the passage of this quantum theoretical proper time.

  6. TOWARDS PROPER CULTURAL RESOURCE MANAGEMENT ...

    African Journals Online (AJOL)

    GRACE

proper harnessing and management of cultural resources in Nigeria for sustainable development .... and knowledge) to organize the resources available to man with the aim of optimizing their use in the ... needs" (World Bank 1992). Thus, as ...

  7. A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles

    Science.gov (United States)

    Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.

    2016-12-01

Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally-weighted multi-model mean as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus, wrong estimation of uncertainty and effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and we are not fitting noise, thus providing improved observationally-constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, where all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
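
    A hedged sketch of the selection idea described above: choose the subset of ensemble members whose mean field minimizes the RMSE against an observational product, here by exhaustive search over a small synthetic ensemble; the "model" and "observation" fields are placeholders, not CMIP5 output.

```python
# Hedged sketch: pick the subset of ensemble members whose mean minimizes RMSE
# against observations (synthetic stand-ins for CMIP5 fields and an obs product).
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_models, n_grid = 8, 500
obs = rng.normal(size=n_grid)
# Each "model" = obs + a shared bias component + model-specific noise.
members = obs + 0.5 * rng.normal(size=(n_models, 1)) + 0.8 * rng.normal(size=(n_models, n_grid))

def rmse(field):
    return np.sqrt(np.mean((field - obs) ** 2))

best = None
for k in range(2, n_models + 1):
    for subset in itertools.combinations(range(n_models), k):
        err = rmse(members[list(subset)].mean(axis=0))
        if best is None or err < best[0]:
            best = (err, subset)

print(f"full-ensemble RMSE: {rmse(members.mean(axis=0)):.3f}")
print(f"best subset {best[1]} RMSE: {best[0]:.3f}")
```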

  8. MODEL OF ACCOUNTING ENGINEERING IN VIEW OF EARNINGS MANAGEMENT IN POLAND

    Directory of Open Access Journals (Sweden)

    Leszek Michalczyk

    2012-10-01

Full Text Available The article introduces the theoretical foundations of the author’s original concept of accounting engineering. We assume a theoretical premise whereby accounting engineering is understood as a system of accounting practice utilising differences in the accounting treatment of economic events that result from the use of divergent accounting methods. Unlike, for instance, creative or praxeological accounting, accounting engineering is composed only, and under all circumstances, of lawful activities and adheres to the current regulations of the balance sheet law. The aim of the article is to construct a model of accounting engineering that exploits the differences inherently present in variant accounting. These differences result in disparate financial results for identical economic events. Given that, regardless of which variant is used in accounting, all settlements are eventually equal to one another, a new class of differences emerges - the accounting engineering potential. It is transferred to subsequent reporting (balance sheet) periods. In the end, the profit “made” in a given period reduces the financial result of future periods. This effect is due to the “transfer” of costs from one period to another. Such actions may have sundry consequences and are especially dangerous whenever many individuals are concerned with the profit of a given company, e.g. on a stock exchange. The reverse may be observed when a company is privatised and its value is being intentionally reduced by a controlled recording of accounting provisions, depending on the degree to which they are justified. The reduction of a company’s goodwill in Balcerowicz’s model of no-tender privatisation makes it possible to justify the low value of the purchased company. These are only some of many manifestations of variant accounting which accounting engineering employs. A theoretical model of the latter is presented in this article.

  9. A 3D finite-strain-based constitutive model for shape memory alloys accounting for thermomechanical coupling and martensite reorientation

    Science.gov (United States)

    Wang, Jun; Moumni, Ziad; Zhang, Weihong; Xu, Yingjie; Zaki, Wael

    2017-06-01

    The paper presents a finite-strain constitutive model for shape memory alloys (SMAs) that accounts for thermomechanical coupling and martensite reorientation. The finite-strain formulation is based on a two-tier, multiplicative decomposition of the deformation gradient into thermal, elastic, and inelastic parts, where the inelastic deformation is further split into phase transformation and martensite reorientation components. A time-discrete formulation of the constitutive equations is proposed and a numerical integration algorithm is presented featuring proper symmetrization of the tensor variables and explicit formulation of the material and spatial tangent operators involved. The algorithm is used for finite element analysis of SMA components subjected to various loading conditions, including uniaxial, non-proportional, isothermal and adiabatic loading cases. The analysis is carried out using the FEA software Abaqus by means of a user-defined material subroutine, which is then utilized to simulate a SMA archwire undergoing large strains and rotations.
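
    The kinematic assumption described above can be written compactly; the grouping below is a hedged reading of the abstract (thermal, elastic and inelastic parts, with the inelastic part split into transformation and reorientation contributions), and the notation is illustrative rather than the authors' own.

```latex
% Hedged sketch of the kinematic assumption described in the abstract
% (ordering and symbols are illustrative, not the authors' notation).
\[
  \mathbf{F} \;=\; \mathbf{F}^{\theta}\,\mathbf{F}^{e}\,\mathbf{F}^{i},
  \qquad
  \mathbf{F}^{i} \;=\; \mathbf{F}^{tr}\,\mathbf{F}^{reo},
\]
% where F^theta, F^e and F^i are the thermal, elastic and inelastic parts of the
% deformation gradient, and the inelastic part is further split into a
% phase-transformation part F^tr and a martensite-reorientation part F^reo.
```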

  10. A Case Study of the Accounting Models for the Participants in an Emissions Trading Scheme

    Directory of Open Access Journals (Sweden)

    Marius Deac

    2013-10-01

Full Text Available As emissions trading schemes are becoming more popular across the world, accounting has to keep up with these new economic developments. The absence of guidance regarding the accounting for greenhouse gas (GHG) emissions, generated by the withdrawal of IFRIC 3 - Emission Rights, is the main reason why there is a diversity of accounting practices. This diversity of accounting methods makes the financial statements of companies taking part in emissions trading schemes like the EU ETS difficult to compare. The present paper uses a case study that assumes the existence of three entities that have chosen three different accounting methods: the IFRIC 3 cost model, the IFRIC 3 revaluation model and the “off balance sheet” approach. This illustrates how the choice of an accounting method regarding GHG emissions influences their interim and annual reports through the changes in the companies’ balance sheet and financial results.

  11. A new model in achieving Green Accounting at hotels in Bali

    Science.gov (United States)

    Astawa, I. P.; Ardina, C.; Yasa, I. M. S.; Parnata, I. K.

    2018-01-01

The implementation of green accounting in a company remains a subject of debate. The results of previous studies indicate that there is no standard model for its implementation to support performance. This research aims to create a green accounting model that differs from other models by using local cultural elements as the variables in building it. The research is conducted in two steps. The first step is designing the model based on theoretical studies, considering the main and supporting elements in building the concept of green accounting. The second step is testing the model at 60 five-star hotels, starting with data collection through a questionnaire and followed by data processing using descriptive statistics. The results indicate that the hotels’ owners have implemented green accounting attributes, which supports previous studies. Another result, which is a new finding, shows that the presence of local culture, government regulation, and the awareness of hotels’ owners play an important role in the development of the green accounting concept. The results of the research contribute to accounting science in terms of green reporting. Hotel management should draw on local culture in building the character of the accountants hired in the accounting department.

  12. Properly colored connectivity of graphs

    CERN Document Server

    Li, Xueliang; Qin, Zhongmei

    2018-01-01

    A comprehensive survey of proper connection of graphs is discussed in this book with real world applications in computer science and network security. Beginning with a brief introduction, comprising relevant definitions and preliminary results, this book moves on to consider a variety of properties of graphs that imply bounds on the proper connection number. Detailed proofs of significant advancements toward open problems and conjectures are presented with complete references. Researchers and graduate students with an interest in graph connectivity and colorings will find this book useful as it builds upon fundamental definitions towards modern innovations, strategies, and techniques. The detailed presentation lends to use as an introduction to proper connection of graphs for new and advanced researchers, a solid book for a graduate level topics course, or as a reference for those interested in expanding and further developing research in the area.

  13. Boltzmann babies in the proper time measure

    Energy Technology Data Exchange (ETDEWEB)

    Bousso, Raphael; Bousso, Raphael; Freivogel, Ben; Yang, I-Sheng

    2007-12-20

    After commenting briefly on the role of the typicality assumption in science, we advocate a phenomenological approach to the cosmological measure problem. Like any other theory, a measure should be simple, general, well defined, and consistent with observation. This allows us to proceed by elimination. As an example, we consider the proper time cutoff on a geodesic congruence. It predicts that typical observers are quantum fluctuations in the early universe, or Boltzmann babies. We sharpen this well-known youngness problem by taking into account the expansion and open spatial geometry of pocket universes. Moreover, we relate the youngness problem directly to the probability distribution for observables, such as the temperature of the cosmic background radiation. We consider a number of modifications of the proper time measure, but find none that would make it compatible with observation.

  14. An Integrative Model of the Strategic Management Accounting at the Enterprises of Chemical Industry

    Directory of Open Access Journals (Sweden)

    Aleksandra Vasilyevna Glushchenko

    2016-06-01

Full Text Available The information and analytical support of strategic management, which enables timely and high-quality management decisions, is currently an extremely relevant issue. Conflicting and poor-quality information, collected haphazardly from unreliable sources in the practice of large companies, hampers the effective implementation of their development strategies and carries a threat of risk, amplified by the increasing instability of the external environment. The chemical industry occupies one of the central places in Russian industry and, of course, has its own specificity in the formation of the information support system. An information system suitable for developing and implementing strategic directions changes the recognized competitive advantages of strategic management accounting. Resolving the lack of requirements for strategic accounting information, and its inconsistency resulting from simultaneous accumulation in different units using different methods of calculation and assessment of indicators, is impossible without a well-constructed model of the organization of strategic management accounting. The purpose of this study is to develop such a model, whose implementation will make it possible to achieve strategic goals by harmonizing information from the individual objects of strategic accounting, so as to increase the functional effectiveness of management decisions with a focus on strategy. The case study was based on dialectical logic and methods of system analysis, identifying causal relationships in building a model of strategic management accounting that supports forecasts of its development. The study proposes an integrative model of the organization of strategic management accounting. The phased implementation of this model defines the objects and tools of strategic management accounting. Moreover, it is determined that from the point of view of increasing the usefulness of management

  15. A Dynamic Simulation Model of the Management Accounting Information Systems (MAIS)

    Science.gov (United States)

    Konstantopoulos, Nikolaos; Bekiaris, Michail G.; Zounta, Stella

    2007-12-01

    The aim of this paper is to examine the factors which determine the problems and the advantages on the design of management accounting information systems (MAIS). A simulation is carried out with a dynamic model of the MAIS design.

  16. Illusory inferences from a disjunction of conditionals: a new mental models account.

    Science.gov (United States)

    Barrouillet, P; Lecas, J F

    2000-08-14

Johnson-Laird and Savary (Johnson-Laird, P.N., & Savary, F. (1999). Illusory inferences: a novel class of erroneous deductions. Cognition, 71, 191-229) have recently presented a mental models account, based on the so-called principle of truth, for the occurrence of inferences that are compelling but invalid. This article presents an alternative account of the illusory inferences resulting from a disjunction of conditionals. In accordance with our modified theory of mental models of the conditional, we show that the way individuals represent conditionals leads them to misinterpret the locus of the disjunction and prevents them from drawing conclusions from a false conditional, thus accounting for the compelling character of the illusory inference.

  17. Taking into account of the Pauli principle in the quasiparticle-phonon nuclear model

    International Nuclear Information System (INIS)

    Solov'ev, V.G.

    1979-01-01

    The effect of an exact account taken of the Pauli principle and correlations in ground states in calculations in the framework of the quasiparticle-phonon model of a nucleus has been studied. It is elucidated when it is possible to use the random phase approximation (RPA) and when the Pauli principle should be exactly taken into account. It has been shown that in the quasiparticle-phonon model of a nucleus one may perform calculations with a precise account of the Pauli principle. In most of the problems calculations can be carried out with RPA-phonons

  18. Mutual Calculations in Creating Accounting Models: A Demonstration of the Power of Matrix Mathematics in Accounting Education

    Science.gov (United States)

    Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg

    2016-01-01

    The aim of this paper is to describe the innovative teaching approach used in the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating the technique of teaching to solve accounting cases using mutual calculations to a worldwide audience. The approach taken in this course…
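
    The abstract does not spell out the technique, but the general idea of matrix accounting can be sketched as follows: journal entries are posted into a debit-by-credit correspondence matrix, and turnovers and balance changes fall out as row and column sums. The account names and amounts below are made up for illustration and do not reproduce the course material.

```python
# Hedged sketch of matrix accounting: E[i, j] holds the amount debited to
# account i and credited to account j; turnovers and balance changes follow
# from row and column sums.  Accounts and amounts are illustrative only.
import numpy as np

accounts = ["Cash", "Receivables", "Sales", "Expenses"]
n = len(accounts)
E = np.zeros((n, n))

def post(debit, credit, amount):
    E[accounts.index(debit), accounts.index(credit)] += amount

post("Receivables", "Sales", 1000.0)   # invoice a customer
post("Cash", "Receivables", 600.0)     # collect part of the receivable
post("Expenses", "Cash", 250.0)        # pay an expense in cash

debit_turnover = E.sum(axis=1)          # row sums
credit_turnover = E.sum(axis=0)         # column sums
balance_change = debit_turnover - credit_turnover

for name, d, c, b in zip(accounts, debit_turnover, credit_turnover, balance_change):
    print(f"{name:12s} debit {d:7.2f}  credit {c:7.2f}  net {b:+8.2f}")
```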

  19. 76 FR 29249 - Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications

    Science.gov (United States)

    2011-05-20

    ... Affordable Care Act, to test innovative payment and service delivery models that reduce spending under.... This Model will test the effectiveness of a combination of the following: Payment arrangements that...] Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications AGENCY: Centers for...

  20. Accountability: a missing construct in models of adherence behavior and in clinical practice

    Directory of Open Access Journals (Sweden)

    Oussedik E

    2017-07-01

    Full Text Available Elias Oussedik,1 Capri G Foy,2 E J Masicampo,3 Lara K Kammrath,3 Robert E Anderson,1 Steven R Feldman1,4,5 1Center for Dermatology Research, Department of Dermatology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 2Department of Social Sciences and Health Policy, Wake Forest School of Medicine, Winston-Salem, NC, USA; 3Department of Psychology, Wake Forest University, Winston-Salem, NC, USA; 4Department of Pathology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 5Department of Public Health Sciences, Wake Forest School of Medicine, Winston-Salem, NC, USA Abstract: Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients’ motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8–12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on the Self-Determination Theory, accountability can be considered in a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability to patients’ autonomous internal desire to please a respected health care provider (autonomous accountability, the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura’s Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well

  1. N-body modeling of globular clusters: detecting intermediate-mass black holes by non-equipartition in HST proper motions

    Science.gov (United States)

    Trenti, Michele

    2010-09-01

Intermediate Mass Black Holes (IMBHs) are objects of considerable astrophysical significance. They have been invoked as possible remnants of Population III stars, precursors of supermassive black holes, sources of ultra-luminous X-ray emission, and emitters of gravitational waves. The centers of globular clusters, where they may have formed through runaway collapse of massive stars, may be our best chance of detecting them. HST studies of velocity dispersions have provided tentative evidence, but the measurements are difficult and the results have been disputed. It is thus important to explore and develop additional indicators of the presence of an IMBH in these systems. In a Cycle 16 theory project we focused on the fingerprints of an IMBH derived from HST photometry. We showed that an IMBH leads to a detectable quenching of mass segregation. Analysis of HST-ACS data for NGC 2298 validated the method, and ruled out an IMBH of more than 300 solar masses. We propose here to extend the search for IMBH signatures from photometry to kinematics. The velocity dispersion of stars in collisionally relaxed stellar systems such as globular clusters scales with main sequence mass as sigma ∝ m^alpha. A value alpha = -0.5 corresponds to equipartition. Mass-dependent kinematics can now be measured from HST proper motion studies (e.g., alpha = -0.21 for Omega Cen). Preliminary analysis shows that the value of alpha can be used as an indicator of the presence of an IMBH. In fact, the quenching of mass segregation is a result of the degree of equipartition that the system attains. However, detailed numerical simulations are required to quantify this. Therefore we propose (a) to carry out a new, larger set of realistic N-body simulations of star clusters with IMBHs, primordial binaries and stellar evolution to predict in detail the expected kinematic signatures and (b) to compare these predictions to datasets that are becoming available. Considerable HST resources have been invested in
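
    As a hedged sketch of the kinematic diagnostic described above, the snippet below estimates the exponent alpha in sigma ∝ m^alpha by a linear fit in log-log space to binned velocity dispersions; the masses and dispersions are synthetic placeholders, not HST measurements.

```python
# Hedged sketch: estimate the equipartition exponent alpha in sigma ~ m**alpha
# from binned proper-motion velocity dispersions (synthetic numbers, not HST data).
import numpy as np

mass_bins = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8])  # solar masses (illustrative)
sigma = 10.0 * mass_bins ** -0.2 * (1.0 + 0.02 * np.random.default_rng(2).normal(size=6))

slope, intercept = np.polyfit(np.log(mass_bins), np.log(sigma), 1)
print(f"alpha = {slope:.2f}  (alpha = -0.5 would indicate full equipartition)")
```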

  2. Development and application of a large scale river system model for National Water Accounting in Australia

    Science.gov (United States)

    Dutta, Dushmanta; Vaze, Jai; Kim, Shaun; Hughes, Justin; Yang, Ang; Teng, Jin; Lerat, Julien

    2017-04-01

Existing global and continental scale river models, mainly designed for integrating with global climate models, are of very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing water accounts, which have become increasingly important for water resources planning and management at regional and national scales. A continental scale river system model called Australian Water Resource Assessment River System model (AWRA-R) has been developed and implemented for national water accounting in Australia using a node-link architecture. The model includes major hydrological processes, anthropogenic water utilisation and storage routing that influence the streamflow in both regulated and unregulated river systems. Two key components of the model are an irrigation model to compute water diversion for irrigation use and associated fluxes and stores and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and associated floodplain fluxes and stores. The results in the Murray-Darling Basin show highly satisfactory performance of the model with median daily Nash-Sutcliffe Efficiency (NSE) of 0.64 and median annual bias of less than 1% for the period of calibration (1970-1991) and median daily NSE of 0.69 and median annual bias of 12% for the validation period (1992-2014). The results have demonstrated that the performance of the model is less satisfactory when the key processes such as overbank flow, groundwater seepage and irrigation diversion are switched off. The AWRA-R model, which has been operationalised by the Australian Bureau of Meteorology for continental scale water accounting, has contributed to improvements in the national water account by substantially reducing the unaccounted difference volume (gain/loss).
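
    For reference, the skill scores quoted above can be computed as in the hedged sketch below; the streamflow series are synthetic placeholders, not AWRA-R output.

```python
# Hedged sketch: Nash-Sutcliffe Efficiency (NSE) and percent bias for a pair of
# simulated and observed streamflow series (synthetic numbers for illustration).
import numpy as np

rng = np.random.default_rng(3)
obs = np.abs(rng.gamma(shape=2.0, scale=5.0, size=365))
sim = obs * (1.0 + 0.1 * rng.normal(size=365)) + 0.5

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

print(f"NSE  = {nse(sim, obs):.2f}")
print(f"bias = {pbias(sim, obs):.1f} %")
```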

  3. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    OpenAIRE

    Mitchell, Lewis; Carrassi, Alberto

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are describe...

  4. A Social Accountable Model for Medical Education System in Iran: A Grounded-Theory

    Directory of Open Access Journals (Sweden)

    Mohammadreza Abdolmaleki

    2017-10-01

Full Text Available Social accountability has been increasingly discussed over the past three decades in various fields providing service to the community and has been expressed as a goal for various areas. In the medical education system, like other social accountability areas, it is considered as one of the main objectives globally. The aim of this study was to seek a social accountability theory in the medical education system that is capable of identifying all the standards, norms, and conditions within the country related to the study subject and of recognizing their relationships. In this study, a total of eight experts in the field of social accountability in the medical education system with executive or study experience were interviewed personally. After analysis of the interviews, 379 codes, 59 secondary categories, 16 subcategories, and 9 main categories were obtained. The resulting data was collected and analyzed at three levels of open coding, axial coding, and selective coding in the form of the grounded theory study “Accountability model of medical education in Iran”, which can be used in education system’s policies and planning for social accountability, given that almost all effective components of social accountability in the higher education health system with causal and facilitator associations were determined. Keywords: SOCIAL ACCOUNTABILITY, COMMUNITY–ORIENTED MEDICINE, COMMUNITY MEDICINE, EDUCATION SYSTEM, GROUNDED THEORY

  5. Toward a Useful Model for Group Mentoring in Public Accounting Firms

    Directory of Open Access Journals (Sweden)

    Steven J. Johnson

    2013-07-01

Full Text Available Today’s public accounting firms face a number of challenges in relation to their most valuable resource and primary revenue generator, human capital. Expanding regulations, technology advances, increased competition and high turnover rates are just a few of the issues confronting public accounting leaders in today’s complex business environment. In recent years, some public accounting firms have attempted to combat low retention and high burnout rates with traditional one-to-one mentoring programs, with varying degrees of success. Many firms have found that they lack the resources necessary to successfully implement and maintain such programs. In other industries, organizations have used a group mentoring approach in attempt to remove potential barriers to mentoring success. Although the research regarding group mentoring shows promise for positive organizational outcomes, no cases could be found in the literature regarding its usage in a public accounting firm. Because of the unique challenges associated with public accounting firms, this paper attempts to answer two questions: (1) Does group mentoring provide a viable alternative to traditional mentoring in a public accounting firm? (2) If so, what general model might be used for implementing such a program? In answering these questions, a review of the group mentoring literature is provided, along with a suggested model for the implementation of group mentoring in a public accounting firm.

  6. A simulation model of hospital management based on cost accounting analysis according to disease.

    Science.gov (United States)

    Tanaka, Koji; Sato, Junzo; Guo, Jinqiu; Takada, Akira; Yoshihara, Hiroyuki

    2004-12-01

    Since a little before 2000, hospital cost accounting has been increasingly performed at Japanese national university hospitals. At Kumamoto University Hospital, for instance, departmental costs have been analyzed since 2000. And, since 2003, the cost balance has been obtained according to certain diseases for the preparation of Diagnosis-Related Groups and Prospective Payment System. On the basis of these experiences, we have constructed a simulation model of hospital management. This program has worked correctly at repeated trials and with satisfactory speed. Although there has been room for improvement of detailed accounts and cost accounting engine, the basic model has proved satisfactory. We have constructed a hospital management model based on the financial data of an existing hospital. We will later improve this program from the viewpoint of construction and using more various data of hospital management. A prospective outlook may be obtained for the practical application of this hospital management model.

  7. A New Form of Nondestructive Strength-Estimating Statistical Models Accounting for Uncertainty of Model and Aging Effect of Concrete

    International Nuclear Information System (INIS)

    Hong, Kee Jeung; Kim, Jee Sang

    2009-01-01

As concrete ages, the surrounding environment is expected to have growing influences on the concrete. Since not all of these environmental impacts can be considered in the strength-estimating model of a nondestructive concrete test, the increase in concrete age leads to growing uncertainty in the strength-estimating model. Therefore, the variance of the model error increases. It is necessary to include those impacts in the probability model of concrete strength attained from the nondestructive tests so as to build a more accurate reliability model for structural performance evaluation. This paper reviews and categorizes the existing strength-estimating statistical models of nondestructive concrete tests, and suggests a new form of the strength-estimating statistical models to properly reflect the model uncertainty due to aging of the concrete. This new form of the statistical models will lay the foundation for more accurate structural performance evaluation.

  8. Facility level SSAC for model country - an introduction and material balance accounting principles

    International Nuclear Information System (INIS)

    Jones, R.J.

    1989-01-01

    A facility level State System of Accounting for and Control of Nuclear Materials (SSAC) for a model country and the principles of materials balance accounting relating to that country are described. The seven principal elements of a SSAC are examined and a facility level system based on them discussed. The seven elements are organization and management; nuclear material measurements; measurement quality; records and reports; physical inventory taking; material balance closing; containment and surveillance. 11 refs., 19 figs., 5 tabs
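
    The material balance closing element can be illustrated with the standard book-versus-physical comparison; the sketch below computes the material balance (commonly reported as MUF, material unaccounted for) from a beginning inventory, receipts, removals and an ending physical inventory. The figures are illustrative, not data for the model country.

```python
# Hedged sketch of a material balance closing for one balance period:
# MUF = (beginning inventory + receipts) - (removals + ending physical inventory).
# All quantities in kg of nuclear material; the numbers are illustrative only.
beginning_inventory = 120.40
receipts = 35.25            # additions to inventory during the period
removals = 30.10            # shipments, measured discards, etc.
ending_physical_inventory = 125.30

book_inventory = beginning_inventory + receipts - removals
muf = book_inventory - ending_physical_inventory
print(f"book inventory: {book_inventory:.2f} kg")
print(f"MUF:            {muf:+.2f} kg")
```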

  9. Assimilation of tourism satellite accounts and applied general equilibrium models to inform tourism policy analysis

    OpenAIRE

    Rossouw, Riaan; Saayman, Melville

    2011-01-01

    Historically, tourism policy analysis in South Africa has posed challenges to accurate measurement. The primary reason for this is that tourism is not designated as an 'industry' in standard economic accounts. This paper therefore demonstrates the relevance and need for applied general equilibrium (AGE) models to be completed and extended through an integration with tourism satellite accounts (TSAs) as a tool for policy makers (especially tourism policy makers) in South Africa. The paper sets...

10. Shadow Segmentation and Augmentation Using α-overlay Models that Account for Penumbra

    DEFF Research Database (Denmark)

    Nielsen, Michael; Madsen, Claus B.

    2006-01-01

that an augmented virtual object can cast an exact shadow. The penumbras (half-shadows) must be taken into account so that we can model the soft shadows. We hope to achieve this by modelling the shadow regions (umbra and penumbra alike) with a transparent overlay. This paper reviews the state-of-the-art shadow...

  11. School Board Improvement Plans in Relation to the AIP Model of Educational Accountability: A Content Analysis

    Science.gov (United States)

    van Barneveld, Christina; Stienstra, Wendy; Stewart, Sandra

    2006-01-01

    For this study we analyzed the content of school board improvement plans in relation to the Achievement-Indicators-Policy (AIP) model of educational accountability (Nagy, Demeris, & van Barneveld, 2000). We identified areas of congruence and incongruence between the plans and the model. Results suggested that the content of the improvement…

  12. Lessons learned for spatial modelling of ecosystem services in support of ecosystem accounting

    NARCIS (Netherlands)

    Schroter, M.; Remme, R.P.; Sumarga, E.; Barton, D.N.; Hein, L.G.

    2015-01-01

    Assessment of ecosystem services through spatial modelling plays a key role in ecosystem accounting. Spatial models for ecosystem services try to capture spatial heterogeneity with high accuracy. This endeavour, however, faces several practical constraints. In this article we analyse the trade-offs

  13. INTERNAL PROPER MOTIONS IN THE ESKIMO NEBULA

    International Nuclear Information System (INIS)

    García-Díaz, Ma. T.; Gutiérrez, L.; Steffen, W.; López, J. A.; Beckman, J.

    2015-01-01

We present measurements of internal proper motions at more than 500 positions of NGC 2392, the Eskimo Nebula, based on images acquired with WFPC2 on board the Hubble Space Telescope at two epochs separated by 7.695 yr. Comparisons of the two observations clearly show the expansion of the nebula. We measured the amplitude and direction of the motion of local structures in the nebula by determining their relative shift during that interval. In order to assess the potential uncertainties in the determination of proper motions in this object, in general, the measurements were performed using two different methods, used previously in the literature. We compare the results from the two methods, and to perform the scientific analysis of the results we choose one, the cross-correlation method, because it is more reliable. We go on to perform a "criss-cross" mapping analysis on the proper motion vectors, which helps in the interpretation of the velocity pattern. By combining our results of the proper motions with radial velocity measurements obtained from high resolution spectroscopic observations, and employing an existing 3D model, we estimate the distance to the nebula to be 1.3 kpc

  14. INTERNAL PROPER MOTIONS IN THE ESKIMO NEBULA

    Energy Technology Data Exchange (ETDEWEB)

    García-Díaz, Ma. T.; Gutiérrez, L.; Steffen, W.; López, J. A. [Instituto de Astronomía, Universidad Nacional Autónoma de México, Km 103 Carretera Tijuana-Ensenada, 22860 Ensenada, B.C. (Mexico); Beckman, J., E-mail: tere@astro.unam.mx, E-mail: leonel@astro.unam.mx, E-mail: wsteffen@astro.unam.mx, E-mail: jal@astro.unam.mx, E-mail: jeb@iac.es [Instituto de Astrofísica de Canarias, La Laguna, Tenerife (Spain)

    2015-01-10

We present measurements of internal proper motions at more than 500 positions of NGC 2392, the Eskimo Nebula, based on images acquired with WFPC2 on board the Hubble Space Telescope at two epochs separated by 7.695 yr. Comparisons of the two observations clearly show the expansion of the nebula. We measured the amplitude and direction of the motion of local structures in the nebula by determining their relative shift during that interval. In order to assess the potential uncertainties in the determination of proper motions in this object, in general, the measurements were performed using two different methods, used previously in the literature. We compare the results from the two methods, and to perform the scientific analysis of the results we choose one, the cross-correlation method, because it is more reliable. We go on to perform a "criss-cross" mapping analysis on the proper motion vectors, which helps in the interpretation of the velocity pattern. By combining our results of the proper motions with radial velocity measurements obtained from high resolution spectroscopic observations, and employing an existing 3D model, we estimate the distance to the nebula to be 1.3 kpc.

  15. Model Selection and Accounting for Model Uncertainty in Graphical Models Using OCCAM’s Window

    Science.gov (United States)

    1991-07-22

mental work; C, strenuous physical work; D, systolic blood pressure; E, ratio of beta and alpha lipoproteins; F, family anamnesis of coronary heart disease ... of F, family anamnesis. The models are shown in Figure 4. [Table 1: Risk factors for coronary heart disease] ... a link from smoking (A) to systolic blood pressure (D). There is decisive evidence in favour of the marginal independence of family anamnesis of

  16. The Effect of Platelet-Rich Plasma on Survival of the Composite Graft and the Proper Time of Injection in a Rabbit Ear Composite Graft Model

    Directory of Open Access Journals (Sweden)

    Hyun Nam Choi

    2014-11-01

Full Text Available Background: Administration of growth factors has been associated with increased viability of composite grafts greater than 1-cm in diameter. Platelet-rich plasma (PRP) contains many of the growth factors studied. In this study, we evaluate the effect of PRP injection on composite graft viability and the proper time for injection. Methods: A total of 24 New Zealand White rabbits were divided into four groups. Autologous PRP was injected into the recipient sites three days before grafting in group 1, on the day of grafting in group 2, and three days after grafting in group 3. Group 4 served as control without PRP administration. Auricular composite grafts of 3-cm diameter were harvested and grafted back into place after being rotated 180 degrees. Median graft viability and microvessel density were evaluated on day 21 after grafting via macroscopic photographs and immunofluorescent staining, respectively. Results: The median graft survival rate was 97.8% in group 1, 69.2% in group 2, 55.7% in group 3, and 40.8% in the control group. The median vessel counts were 34 (per ×200 HPF) in group 1, 24.5 in group 2, 19.5 in group 3, and 10.5 in the control group. Conclusions: This study demonstrates that PRP administration is associated with increased composite graft viability. All experimental groups showed a significantly higher survival rate and microvessel density, compared with the control group. Pre-administration of PRP was followed by the highest graft survival rate and revascularization. PRP treatments are minimally invasive, fast, easily applicable, and inexpensive, and offer a potential clinical pathway to larger composite grafts.

  17. Accounting for differences in dieting status: steps in the refinement of a model.

    Science.gov (United States)

    Huon, G; Hayne, A; Gunewardene, A; Strong, K; Lunn, N; Piira, T; Lim, J

    1999-12-01

    The overriding objective of this paper is to outline the steps involved in refining a structural model to explain differences in dieting status. Cross-sectional data (representing the responses of 1,644 teenage girls) derive from the preliminary testing in a 3-year longitudinal study. A battery of measures assessed social influence, vulnerability (to conformity) disposition, protective (social coping) skills, and aspects of positive familial context as core components in a model proposed to account for the initiation of dieting. Path analyses were used to establish the predictive ability of those separate components and their interrelationships in accounting for differences in dieting status. Several components of the model were found to be important predictors of dieting status. The model incorporates significant direct, indirect (or mediated), and moderating relationships. Taking all variables into account, the strongest prediction of dieting status was from peer competitiveness, using a new scale developed specifically for this study. Systematic analyses are crucial for the refinement of models to be used in large-scale multivariate studies. In the short term, the model investigated in this study has been shown to be useful in accounting for cross-sectional differences in dieting status. The refined model will be most powerfully employed in large-scale time-extended studies of the initiation of dieting to lose weight. Copyright 1999 by John Wiley & Sons, Inc.

  18. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death......, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance...... product types. This enables comparison of participating life insurance products and unit-linked insurance products, thus building a bridge between the two different ways of formalizing life insurance products. Finally, our model distinguishes itself from the existing literature by taking into account...

  19. Microscopic model accounting of 2p2p configurations in magic nuclei

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.

    1983-01-01

A model for account of the 2p2h configurations in magic nuclei is described in the framework of the Green function formalism. The model is formulated in the lowest order in the phonon production amplitude, so that the series are expansions not over pure 2p2h configurations, but over configurations of the type "1p1h+phonon". Equations are obtained for the vertex and the density matrix, as well as an expression for the transition probabilities, that are extensions of the corresponding results of the theory of finite Fermi systems, or of the random-phase approximation to the case where the "1p1h+phonon" configurations are taken into account. Corrections to the one-particle phenomenological basis which arise with account for complicated configurations are obtained. Comparison with other approaches, using phonons, has shown that they are particular cases of the described model

  20. Accounting for imperfect forward modeling in geophysical inverse problems — Exemplified for crosshole tomography

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Holm Jacobsen, Bo

    2014-01-01

    forward models, can be more than an order of magnitude larger than the measurement uncertainty. We also found that the modeling error is strongly linked to the spatial variability of the assumed velocity field, i.e., the a priori velocity model.We discovered some general tools by which the modeling error...... synthetic ground-penetrating radar crosshole tomographic inverse problems. Ignoring the modeling error can lead to severe artifacts, which erroneously appear to be well resolved in the solution of the inverse problem. Accounting for the modeling error leads to a solution of the inverse problem consistent...

  1. Modelling characteristics of photovoltaic panels with thermal phenomena taken into account

    International Nuclear Information System (INIS)

    Krac, Ewa; Górecki, Krzysztof

    2016-01-01

    In the paper a new form of the electrothermal model of photovoltaic panels is proposed. This model takes into account the optical, electrical and thermal properties of the considered panels, as well as electrical and thermal properties of the protecting circuit and thermal inertia of the considered panels. The form of this model is described and some results of measurements and calculations of mono-crystalline and poly-crystalline panels are presented
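
    As a hedged illustration of how electrical and thermal behaviour can be coupled in such a model, the sketch below evaluates a generic single-diode panel characteristic whose thermal voltage and photocurrent depend on cell temperature and irradiance; the equations and parameter values are textbook-style placeholders, not the electrothermal model proposed in the paper.

```python
# Hedged sketch: generic single-diode PV characteristic with a temperature-dependent
# thermal voltage, solved for current at a given voltage by fixed-point iteration.
# Parameter values are illustrative, not those of the paper's electrothermal model.
import math

K_B, Q = 1.380649e-23, 1.602176634e-19

def pv_current(v, t_cell=320.0, irradiance=800.0,
               i_ph_stc=8.0, i_0=1e-9, n=1.3, n_s=60, r_s=0.3, r_sh=300.0):
    """Panel current [A] at voltage v [V] and cell temperature t_cell [K]."""
    v_t = n * n_s * K_B * t_cell / Q       # modified thermal voltage of the panel
    i_ph = i_ph_stc * irradiance / 1000.0  # photocurrent scales with irradiance
    i = i_ph
    for _ in range(200):                   # fixed-point iteration on the diode equation
        i = i_ph - i_0 * (math.exp((v + i * r_s) / v_t) - 1.0) - (v + i * r_s) / r_sh
    return i

for v in (0.0, 15.0, 30.0):
    print(f"V = {v:5.1f} V  ->  I = {pv_current(v):.2f} A")
```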

  2. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
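
    As a hedged sketch of the layered structure described above, the snippet below simulates a minimal hierarchical model with a parameter layer, a process layer and a data layer; the distributions and parameter values are generic choices for illustration, not a prescription from the article.

```python
# Hedged sketch of a minimal hierarchical model for an ecological time series:
#   parameter layer: growth rate and noise scales treated as uncertain inputs
#   process layer:   latent log-abundance follows a stochastic, density-dependent update
#   data layer:      observations are noisy measurements of the latent state
import numpy as np

rng = np.random.default_rng(4)

# Parameter layer (draw from an assumed prior).
r = rng.normal(0.1, 0.02)
sigma_proc, sigma_obs = 0.1, 0.2

# Process layer: latent log-abundance z_t with a carrying capacity of 100 individuals.
T, z = 50, [np.log(20.0)]
for _ in range(T - 1):
    z.append(z[-1] + r * (1.0 - z[-1] / np.log(100.0)) + rng.normal(0.0, sigma_proc))
z = np.array(z)

# Data layer: noisy observations of the latent state.
y = z + rng.normal(0.0, sigma_obs, size=T)

print(f"latent mean log-abundance:   {z.mean():.2f}")
print(f"observed mean log-abundance: {y.mean():.2f}")
```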

  3. The Anachronism of the Local Public Accountancy Determinate by the Accrual European Model

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2009-01-01

Full Text Available Overlaying the European accrual model on the cash-based accounting model currently used in Romania at the level of the local communities makes the anachronism of the latter apparent and focuses the discussion on whether the accrual model should be included in everyday public practice. The foundations of the accrual model were first defined in the law regarding commercial societies adopted in Great Britain in 1985, which provided that all income and charges relating to the financial year “will be taken into consideration without any boundary to the reception or payment date.”1 Accrual accounting requires recording the non-cash effects of transactions and financial events in the periods in which they occur, rather than in the periods in which the related cash is received or paid. Business development drove the increasing “sophistication” of the recording of transactions and financial events, making it necessary to record the amounts owed by debtors and to creditors.

  4. Multiple imputation to account for measurement error in marginal structural models

    Science.gov (United States)

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  5. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR]: 1.2 [95% confidence interval [CI] = 0.6, 2.3]). The HR for current smoking and therapy [0.4 (95% CI = 0.2, 0.7)] was similar to the HR for no smoking and therapy (0.4; 95% CI = 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.

  6. Machine safety: proper safeguarding techniques.

    Science.gov (United States)

    Martin, K J

    1992-06-01

    1. OSHA mandates certain safeguarding of machinery to prevent accidents and protect machine operators. OSHA specifies moving parts that must be guarded and sets criteria for the guards. 2. A 1989 OSHA standard for lockout/tagout requires locking the energy source during maintenance, periodically inspecting for power transmission, and training maintenance workers. 3. In an amputation emergency, first aid for cardiopulmonary resuscitation, shock, and bleeding are the first considerations. The amputated part should be wrapped in moist gauze, placed in a sealed plastic bag, and placed in a container of 50% water and 50% ice for transport. 4. The role of the occupational health nurse in machine safety is to conduct worksite analyses to identify proper safeguarding and to communicate deficiencies to appropriate personnel; to train workers in safe work practices and observe compliance in the use of machine guards; to provide care to workers injured by machines; and to reinforce safe work practices among machine operators.

  7. Computer modelling of structures with account of the construction stages and the time dependent material properties

    Directory of Open Access Journals (Sweden)

    Traykov Alexander

    2015-01-01

Full Text Available Numerical studies are performed on computer models that take into account the stages of construction and time-dependent material properties defined in two forms. A 2D model of a three-storey, two-span frame is created. The first form deals with material defined in the usual design-practice way, without taking into account the time-dependent properties of the concrete. In the second form, creep and shrinkage of the concrete are taken into account. Displacements and internal forces in specific elements and sections are reported. The influence of the time-dependent material properties on the displacements and internal forces in the main structural elements is tracked. The results corresponding to the two forms of material definition are compared with each other as well as with the results obtained by the usual design calculations. Conclusions are drawn on the influence of concrete creep and shrinkage during construction on the structural behaviour.

  8. Proper motion survey for solar nearby stars

    International Nuclear Information System (INIS)

    Goldman, Bertrand

    2001-01-01

For its microlensing observations EROS 2 built one of the largest CCD mosaics operating since 1996. This instrument allowed us to survey a large area of the sky, to look for faint, cool compact objects in the Solar neighborhood that may contribute to the Dark Matter revealed by flat rotation curves of spiral galaxies and the Milky Way. We imaged over 400 square degrees, at least three times over four years, with a single, stable instrument. The aim of this work is the reduction, the analysis and the detection of high proper motion objects that would look like those expected in a dark halo. We selected and analyzed thousands of images taken in two bands, visible and near-infrared, and obtained a catalogue of several thousand stars with proper motion typically higher than 80 milli-arc-seconds per year. None of these candidates displays the expected properties of the halo objects: very high proper motion and faintness. The second part of our work was to put constraints on the contributions of white dwarfs and brown dwarfs to the halo. To do that, we simulated our data set and estimated our sensitivity to halo objects. We compared our results about moderately high proper motion stars with existing Galactic models, and confirmed the robustness of these models. We deduced an upper limit to the contribution of M_v = 17.5 white dwarfs to the standard halo of 10% (at the 95% confidence level), or 5% of a 14 Gyr old halo, and to the contribution of brown dwarfs of 7% (95% C.L.). Finally, among our candidates, several interesting objects that do not belong to the halo but are among the coolest and faintest known have been discovered. This systematic search for faint, nearby objects thus led us to study disk L dwarfs, as well as old white dwarfs of the disk. (author) [fr

  9. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    Directory of Open Access Journals (Sweden)

    Ninna Reitzel Jensen

    2015-06-01

    Full Text Available Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our model by conducting scenario analysis based on Monte Carlo simulation, but the model applies to scenarios in general and to worst-case and best-estimate scenarios in particular. In addition to easy computations, our model offers a common framework for the valuation of life insurance payments across product types. This enables comparison of participating life insurance products and unit-linked insurance products, thus building a bridge between the two different ways of formalizing life insurance products. Finally, our model distinguishes itself from the existing literature by taking into account the Markov model for the state of the policyholder and, hereby, facilitating event risk.
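
    As a rough illustration of the scenario-based valuation idea, the sketch below runs a Monte Carlo simulation of a stylised participating contract with a guaranteed account and a collective bonus buffer. The account dynamics, the bonus rule and every parameter value (premium, guaranteed rate, return distribution, bonus share) are illustrative assumptions, not the paper's two-account specification, and the event-risk (Markov) component is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_guaranteed_account(n_paths=50_000, n_years=20, premium=1_000.0,
                                g=0.015, mu=0.04, sigma=0.10, bonus_share=0.2):
    """Monte Carlo sketch of a participating contract with a guaranteed
    account and a collective bonus buffer (hypothetical dynamics)."""
    guaranteed = np.zeros(n_paths)   # policyholder's guaranteed account
    buffer = np.zeros(n_paths)       # undistributed surplus ("consolidation" buffer)
    for _ in range(n_years):
        guaranteed += premium                        # annual premium payment
        r = rng.normal(mu, sigma, size=n_paths)      # market return on the total reserve
        investment_income = (guaranteed + buffer) * r
        guaranteed_interest = guaranteed * g         # crediting at the technical rate
        guaranteed += guaranteed_interest
        buffer += investment_income - guaranteed_interest
        bonus = bonus_share * np.clip(buffer, 0.0, None)   # "additional benefits"
        guaranteed += bonus
        buffer -= bonus
    return guaranteed

maturity_payout = simulate_guaranteed_account()
print("expected payout:", round(maturity_payout.mean(), 2))
print("5% worst-case payout:", round(float(np.quantile(maturity_payout, 0.05)), 2))
```

    Worst-case and best-estimate scenarios can be read off the simulated payout distribution in the same way as the quantile shown above.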

  10. Supportive Accountability: A Model for Providing Human Support to Enhance Adherence to eHealth Interventions

    Science.gov (United States)

    2011-01-01

    The effectiveness of and adherence to eHealth interventions are enhanced by human support. However, human support has largely not been manualized and has usually not been guided by clear models. The objective of this paper is to develop a clear theoretical model, based on relevant empirical literature, that can guide research into human support components of eHealth interventions. A review of the literature revealed little relevant information from clinical sciences. Applicable literature was drawn primarily from organizational psychology, motivation theory, and computer-mediated communication (CMC) research. We have developed a model, referred to as “Supportive Accountability.” We argue that human support increases adherence through accountability to a coach who is seen as trustworthy, benevolent, and having expertise. Accountability should involve clear, process-oriented expectations that the patient is involved in determining. Reciprocity in the relationship, through which the patient derives clear benefits, should be explicit. The effect of accountability may be moderated by patient motivation. The more intrinsically motivated patients are, the less support they likely require. The process of support is also mediated by the communications medium (eg, telephone, instant messaging, email). Different communications media each have their own potential benefits and disadvantages. We discuss the specific components of accountability, motivation, and CMC medium in detail. The proposed model is a first step toward understanding how human support enhances adherence to eHealth interventions. Each component of the proposed model is a testable hypothesis. As we develop viable human support models, these should be manualized to facilitate dissemination. PMID:21393123

  11. Accounting for Co-Teaching: A Guide for Policymakers and Developers of Value-Added Models

    Science.gov (United States)

    Isenberg, Eric; Walsh, Elias

    2015-01-01

    We outline the options available to policymakers for addressing co-teaching in a value-added model. Building on earlier work, we propose an improvement to a method of accounting for co-teaching that treats co-teachers as teams, with each teacher receiving equal credit for co-taught students. Hock and Isenberg (2012) described a method known as the…

  12. Accounting for the influence of vegetation and landscape improves model transferability in a tropical savannah region

    NARCIS (Netherlands)

    Gao, H.; Hrachowitz, M.; Sriwongsitanon, Nutchanart; Fenicia, F.; Gharari, S.; Savenije, H.H.G.

    2016-01-01

    Understanding which catchment characteristics dominate hydrologic response and how to take them into account remains a challenge in hydrological modeling, particularly in ungauged basins. This is even more so in nontemperate and nonhumid catchments, where—due to the combination of seasonality and

  13. The Politics and Statistics of Value-Added Modeling for Accountability of Teacher Preparation Programs

    Science.gov (United States)

    Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas

    2014-01-01

    Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…

  14. Developing a Model for Identifying Students at Risk of Failure in a First Year Accounting Unit

    Science.gov (United States)

    Smith, Malcolm; Therry, Len; Whale, Jacqui

    2012-01-01

    This paper reports on the process involved in attempting to build a predictive model capable of identifying students at risk of failure in a first year accounting unit in an Australian university. Identifying attributes that contribute to students being at risk can lead to the development of appropriate intervention strategies and support…

  15. A creep rupture model accounting for cavitation at sliding grain boundaries

    NARCIS (Netherlands)

    Giessen, Erik van der; Tvergaard, Viggo

    1991-01-01

    An axisymmetric cell model analysis is used to study creep failure by grain boundary cavitation at facets normal to the maximum principal tensile stress, taking into account the influence of cavitation and sliding at adjacent inclined grain boundaries. It is found that the interaction between the

  16. Accounting assessment

    Directory of Open Access Journals (Sweden)

    Kafka S.М.

    2017-03-01

    Full Text Available The proper evaluation of accounting objects has an essential influence on the reliability of assessing the financial situation of a company, so the problem of accounting estimates is quite relevant. The works of domestic and foreign scholars on the assessment of accounting objects, together with the regulatory and legal acts of Ukraine governing accounting and the compilation of financial reporting, form the methodological basis for the research. The author uses theoretical methods of cognition (abstraction and generalization, analysis and synthesis, induction and deduction) and other methods producing conceptual knowledge to synthesize the theoretical and methodological principles of the valuation of assets, liabilities and equity in accounting. The tabular presentation and information comparison methods are used for the analytical research. The article considers modern approaches to the valuation of accounting objects and financial statement items. The expediency of keeping records at historical value is substantiated, while the items of the financial statements are to be presented according to their valuation at the reporting date. In connection with valuation, the depreciation of fixed assets is considered as a process of systematically returning into circulation the funds previously advanced for the purchase (production, improvement) of fixed assets and intangible assets by including the amount of wear in production costs. It is therefore proposed to amortize only the actual costs incurred, i.e. not to depreciate fixed assets received free of charge or the surplus from revaluation.

  17. Mass estimates from stellar proper motions: the mass of ω Centauri

    Science.gov (United States)

    D'Souza, Richard; Rix, Hans-Walter

    2013-03-01

    We lay out and apply methods to use proper motions of individual kinematic tracers for estimating the dynamical mass of star clusters. We first describe a simple projected mass estimator and then develop an approach that evaluates directly the likelihood of the discrete kinematic data given the model predictions. Those predictions may come from any dynamical modelling approach, and we implement an analytic King model, a spherical isotropic Jeans equation model and an axisymmetric, anisotropic Jeans equation model. This maximum likelihood modelling (MLM) provides a framework for a model-data comparison, and a resulting mass estimate, which accounts explicitly for the discrete nature of the data for individual stars, the varying error bars for proper motions of differing signal-to-noise ratio, and for data incompleteness. Both of these two methods are evaluated for their practicality and are shown to provide an unbiased and robust estimate of the cluster mass. We apply these approaches to the enigmatic globular cluster ω Centauri, combining the proper motion from van Leeuwen et al. with improved photometric cluster membership probabilities. We show that all mass estimates based on spherical isotropic models yield (4.55 ± 0.1) × 10^6 M⊙ [D/(5.5 ± 0.2 kpc)]^3, where our modelling allows us to show how the statistical precision of this estimate improves as more proper motion data of lower signal-to-noise ratio are included. MLM predictions, based on an anisotropic axisymmetric Jeans model, indicate for ω Cen that the inclusion of anisotropies is not important for the mass estimates, but that accounting for the flattening is: flattened models imply (4.05 ± 0.1) × 10^6 M⊙ [D/(5.5 ± 0.2 kpc)]^3, 10 per cent lower than when restricting the analysis to a spherical model. The best current distance estimates imply an additional uncertainty in the mass estimate of 12 per cent.
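
    The simple projected mass estimator mentioned in the abstract can be sketched in a few lines. The version below uses the classic form M = C/(G N) Σ v_i² R_i with plane-of-sky velocities obtained from proper motions via v = 4.74 μ[mas/yr] D[kpc]; the constant C is taken as the textbook isotropic-orbit value 16/π purely as a placeholder, and the inputs are fabricated, so this is only a schematic stand-in for the estimators and likelihood machinery actually used in the paper.

```python
import numpy as np

G = 4.30091e-3  # gravitational constant in pc (km/s)^2 / M_sun

def projected_mass_estimate(mu_x, mu_y, R_pc, distance_kpc, C=16.0 / np.pi):
    """Projected mass estimator M = C / (G N) * sum(v_i^2 R_i).
    mu_x, mu_y   : proper motions relative to the cluster mean, in mas/yr
    R_pc         : projected radii from the cluster centre, in pc
    distance_kpc : adopted cluster distance, in kpc
    C            : orbit-structure constant (16/pi is the textbook
                   isotropic-orbit value, used here only as a placeholder)."""
    v_tan = 4.74 * np.hypot(mu_x, mu_y) * distance_kpc   # km/s on the plane of the sky
    return C / (G * len(R_pc)) * np.sum(v_tan ** 2 * R_pc)

# fabricated inputs, purely to show the call
rng = np.random.default_rng(1)
mu_x, mu_y = rng.normal(0.0, 0.5, 1000), rng.normal(0.0, 0.5, 1000)
R = rng.uniform(0.5, 20.0, 1000)  # pc
print(f"M ~ {projected_mass_estimate(mu_x, mu_y, R, 5.5):.2e} M_sun")
```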

  18. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  19. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    Science.gov (United States)

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…

  20. Taking into account for the Pauli principle in particle-vibrator model

    International Nuclear Information System (INIS)

    Knyaz'kov, O.M.

    1985-01-01

    To construct the Hamiltonian of the interaction between the particle and the phonons, a semimicroscopic approach developed earlier by the author is used, in which the Pauli principle is taken into account within the local density-matrix formalism. Analytical expressions are derived that allow the problem of taking the Pauli principle into account in the particle-vibrator model to be solved in closed form. Unlike the phenomenological approach, the form factors of inelastic transitions are determined by the parameters of the effective nucleon-nucleon forces and by the central and transition densities, and contain no free parameters

  1. Accounting of inter-electron correlations in the model of mobile electron shells

    International Nuclear Information System (INIS)

    Panov, Yu.D.; Moskvin, A.S.

    2000-01-01

    The basic features of the model of mobile electron shells for a multielectron atom or cluster are studied. A variational technique for taking electron correlations into account is proposed, in which the coordinates of the centres of the single-particle atomic orbitals serve as variational parameters. This makes it possible to interpret substantial changes of the electron density distribution under an anisotropic external perturbation in terms of a limited initial basis. Specific correlated states that might make a correlation contribution to the orbital current are examined. The paper presents a generalization of the standard MO-LCAO scheme with a limited set of single-particle functions, enabling additional multipole-multipole interactions in the cluster to be taken into account [ru

  2. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
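
    The resampling idea referred to above can be illustrated with a generic nonparametric bootstrap of an incremental life-expectancy statistic. In a real application the switching adjustment (e.g. the RPSFT re-estimation) would be repeated inside every bootstrap replicate so that its uncertainty propagates into the decision model; here the outcome statistic, the data and all numbers are fabricated placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

def incremental_life_expectancy(t_control, t_treatment):
    """Toy outcome statistic: difference in mean survival time."""
    return t_treatment.mean() - t_control.mean()

def bootstrap_ci(t_control, t_treatment, n_boot=2000, alpha=0.05):
    """Nonparametric bootstrap of the incremental outcome. In practice the
    switching adjustment would be re-run inside every replicate so that its
    uncertainty is propagated as well."""
    stats = np.empty(n_boot)
    for b in range(n_boot):
        c = rng.choice(t_control, size=t_control.size, replace=True)
        t = rng.choice(t_treatment, size=t_treatment.size, replace=True)
        stats[b] = incremental_life_expectancy(c, t)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return incremental_life_expectancy(t_control, t_treatment), (lo, hi)

# fabricated survival times (years), for illustration only
control = rng.exponential(2.0, size=300)
treated = rng.exponential(2.6, size=300)
print(bootstrap_ci(control, treated))
```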

  3. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
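
    Regression calibration, one of the approaches named in the abstract, can be sketched for the simple case of a single error-prone covariate with independent replicate measurements: the noisy covariate is replaced by an estimate of E[X | W̄] before the Cox model is fitted. The variance decomposition below is the standard textbook form under a classical additive error model, not the exact likelihood-based variants of the paper, and the data are fabricated.

```python
import numpy as np

def regression_calibration(W):
    """Regression-calibration estimate of a true covariate X from replicate
    measurements W (n subjects x k replicates), under the classical additive
    error model W = X + U with independent replicates (textbook form only)."""
    n, k = W.shape
    W_bar = W.mean(axis=1)
    sigma_u2 = W.var(axis=1, ddof=1).mean()                    # within-subject (error) variance
    sigma_x2 = max(W_bar.var(ddof=1) - sigma_u2 / k, 1e-12)    # between-subject (true) variance
    shrink = sigma_x2 / (sigma_x2 + sigma_u2 / k)
    return W_bar.mean() + shrink * (W_bar - W_bar.mean())

# fabricated replicates of log(total REM) activity counts, illustration only
rng = np.random.default_rng(3)
X_true = rng.normal(5.0, 0.6, size=200)                        # unobserved truth
W = X_true[:, None] + rng.normal(0.0, 0.4, size=(200, 2))      # two noisy replicates
X_hat = regression_calibration(W)
# X_hat would replace the noisy covariate in the Cox fit (e.g. with a Cox
# implementation such as lifelines.CoxPHFitter), reducing attenuation bias.
print(round(float(np.corrcoef(X_true, X_hat)[0, 1]), 3))
```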

  4. Predicting NonInertial Effects with Algebraic Stress Models which Account for Dissipation Rate Anisotropies

    Science.gov (United States)

    Jongen, T.; Machiels, L.; Gatski, T. B.

    1997-01-01

    Three types of turbulence models which account for rotational effects in noninertial frames of reference are evaluated for the case of incompressible, fully developed rotating turbulent channel flow. The different types of models are a Coriolis-modified eddy-viscosity model, a realizable algebraic stress model, and an algebraic stress model which accounts for dissipation rate anisotropies. A direct numerical simulation of a rotating channel flow is used for the turbulent model validation. This simulation differs from previous studies in that significantly higher rotation numbers are investigated. Flows at these higher rotation numbers are characterized by a relaminarization on the cyclonic or suction side of the channel, and a linear velocity profile on the anticyclonic or pressure side of the channel. The predictive performance of the three types of models are examined in detail, and formulation deficiencies are identified which cause poor predictive performance for some of the models. Criteria are identified which allow for accurate prediction of such flows by algebraic stress models and their corresponding Reynolds stress formulations.

  5. Accounting for Local Dependence with the Rasch Model: The Paradox of Information Increase.

    Science.gov (United States)

    Andrich, David

    Test theories imply statistical, local independence. Where local independence is violated, models of modern test theory that account for it have been proposed. One violation of local independence occurs when the response to one item governs the response to a subsequent item. Expanding on a formulation of this kind of violation between two items in the dichotomous Rasch model, this paper derives three related implications. First, it formalises how the polytomous Rasch model for an item constituted by summing the scores of the dependent items absorbs the dependence in its threshold structure. Second, it shows that as a consequence the unit when the dependence is accounted for is not the same as if the items had no response dependence. Third, it explains the paradox, known, but not explained in the literature, that the greater the dependence of the constituent items the greater the apparent information in the constituted polytomous item when it should provide less information.

  6. Cost accounting models used for price-setting of health services: an international review.

    Science.gov (United States)

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Accounting for sex differences in PTSD: A multi-variable mediation model

    DEFF Research Database (Denmark)

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

    methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually....... Objective: The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. Method: The study was a quasi-prospective questionnaire survey assessing PTSD...... cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. Conclusion: The findings suggest...

  8. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.

    Science.gov (United States)

    Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark

    2011-01-01

    Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that influence results most are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.

  9. Accounting for measurement error in log regression models with applications to accelerated testing.

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    Full Text Available In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  10. Accounting for measurement error in log regression models with applications to accelerated testing.

    Science.gov (United States)

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
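
    The weighted-regression idea described in these two records can be illustrated with a generic Iteratively Re-weighted Least Squares loop for a model of the form y ≈ exp(Xβ) with additive noise. The working response and the weight function w ∝ μ² (appropriate when the error variance is constant on the original scale) are standard GLM-style choices used here for illustration; the specific approximation derived in the paper will differ, and the accelerated-test data below are fabricated.

```python
import numpy as np

def irls_log_model(X, y, n_iter=50, tol=1e-8):
    """Iteratively Re-weighted Least Squares for y ~ exp(X @ beta) with
    additive (constant-variance) noise. Working response and weights follow
    the usual GLM-style linearisation, not the paper's exact derivation."""
    beta = np.linalg.lstsq(X, np.log(np.clip(y, 1e-12, None)), rcond=None)[0]
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu        # linearised ("working") response
        w = mu ** 2                    # weights for constant additive variance
        XtW = X.T * w
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# toy accelerated-test data: log lifetime roughly linear in 1/temperature
rng = np.random.default_rng(11)
inv_T = 1.0 / rng.uniform(350.0, 450.0, size=120)
X = np.column_stack([np.ones_like(inv_T), inv_T])
y = np.exp(X @ np.array([-5.0, 4000.0])) + rng.normal(0.0, 0.05, size=120)
print(irls_log_model(X, y))        # should recover roughly (-5, 4000)
```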

  11. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

    Full Text Available On the basis of a microscopic model of self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is built and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction gives no contribution into the self-consistent field, and the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of contribution from three-body forces. The effective mass and pressure are numerically calculated for the potential of "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, the interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, and the attractive interaction raises the effective mass. The question of thermodynamic stability of the Fermi system is considered and the three-body repulsive interaction is shown to extend the region of stability of the system with the interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.

  12. Proper alignment of the microscope.

    Science.gov (United States)

    Rottenfusser, Rudi

    2013-01-01

    The light microscope is merely the first element of an imaging system in a research facility. Such a system may include high-speed and/or high-resolution image acquisition capabilities, confocal technologies, and super-resolution methods of various types. Yet more than ever, the proverb "garbage in-garbage out" remains a fact. Image manipulations may be used to conceal a suboptimal microscope setup, but an artifact-free image can only be obtained when the microscope is optimally aligned, both mechanically and optically. Something else is often overlooked in the quest to get the best image out of the microscope: Proper sample preparation! The microscope optics can only do its job when its design criteria are matched to the specimen or vice versa. The specimen itself, the mounting medium, the cover slip, and the type of immersion medium (if applicable) are all part of the total optical makeup. To get the best results out of a microscope, understanding the functions of all of its variable components is important. Only then one knows how to optimize these components for the intended application. Different approaches might be chosen to discuss all of the microscope's components. We decided to follow the light path which starts with the light source and ends at the camera or the eyepieces. To add more transparency to this sequence, the section up to the microscope stage was called the "Illuminating Section", to be followed by the "Imaging Section" which starts with the microscope objective. After understanding the various components, we can start "working with the microscope." To get the best resolution and contrast from the microscope, the practice of "Koehler Illumination" should be understood and followed by every serious microscopist. Step-by-step instructions as well as illustrations of the beam path in an upright and inverted microscope are included in this chapter. A few practical considerations are listed in Section 3. Copyright © 2013 Elsevier Inc. All rights

  13. Two-species occupancy modeling accounting for species misidentification and nondetection

    Science.gov (United States)

    Chambert, Thierry; Grant, Evan H. Campbell; Miller, David A. W.; Nichols, James; Mulder, Kevin P.; Brand, Adrianne B,

    2018-01-01

    1. In occupancy studies, species misidentification can lead to false positive detections, which can cause severe estimator biases. Currently, all models that account for false positive errors only consider omnibus sources of false detections and are limited to single species occupancy. 2. However, false detections for a given species often occur because of the misidentification with another, closely-related species. To exploit this explicit source of false positive detection error, we develop a two-species occupancy model that accounts for misidentifications between two species of interest. As with other false positive models, identifiability is greatly improved by the availability of unambiguous detections at a subset of site-occasions. Here, we consider the case where some of the field observations can be confirmed using laboratory or other independent identification methods (“confirmatory data”). 3. We performed three simulation studies to (1) assess the model’s performance under various realistic scenarios, (2) investigate the influence of the proportion of confirmatory data on estimator accuracy, and (3) compare the performance of this two-species model with that of the single-species false positive model. The model shows good performance under all scenarios, even when only small proportions of detections are confirmed (e.g., 5%). It also clearly outperforms the single-species model.

  14. Analysis of the microscopic model taking into account of the 2p2h configurations

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.; Tkachev, V.N.

    1986-01-01

    A general equation for the effective field inside the nucleus, which takes into account both 1p1h and 2p2h configurations, is derived by the Green function method. This equation is used as a starting point to derive the previously developed microscopic model for taking the 1p1h+phonon configurations into account in magic nuclei. The equations for the density matrix are analyzed in this model. It is shown that the quasiparticle number conservation law is valid. The equation for the effective field is written in the coordinate representation. As a result, the problem acquires a formulation in the 1p1h+2p2h+continuum approximation. The equation in the space of one-phonon states is derived and quantitatively analyzed

  15. Analysis of a microscopic model of taking into account 2p2h configurations

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.; Tkachev, V.N.

    1986-01-01

    The Green's-function method has been used to obtain a general equation for the effective field in a nucleus, taking into account both 1p1h and 2p2h configurations. This equation has been used as the starting point for derivation of a previously developed microscopic model of taking 1p1h+phonon configurations into account in magic nuclei. The equation for the density matrix is analyzed in this model. It is shown that the number of quasiparticles is conserved. An equation is obtained for the effective field in the coordinate representation, which provides a formulation of the problem in the 1p1h+2p2h+continuum approximation. The equation is derived and quantitatively analyzed in the space of one-phonon states

  16. A Simple Accounting-based Valuation Model for the Debt Tax Shield

    Directory of Open Access Journals (Sweden)

    Andreas Scholze

    2010-05-01

    Full Text Available This paper describes a simple way to integrate the debt tax shield into an accounting-based valuation model. The market value of equity is determined by forecasting residual operating income, which is calculated by charging operating income for the operating assets at a required return that accounts for the tax benefit that comes from borrowing to raise cash for the operations. The model assumes that the firm maintains a deterministic financial leverage ratio, which tends to converge quickly to typical steady-state levels over time. From a practical point of view, this characteristic is of particular help, because it allows a continuing value calculation at the end of a short forecast period.
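
    The valuation recipe described in the abstract (book value of net operating assets plus the present value of residual operating income, with the required return already reflecting the debt tax shield) is easy to sketch. The continuing-value rule and all input figures below are illustrative assumptions, not figures from the paper.

```python
def residual_income_value(noa_0, oi, noa, r_w):
    """Value of operations = NOA_0 + sum_t ReOI_t / (1 + r_w)^t + continuing value,
    where ReOI_t = OI_t - r_w * NOA_{t-1}. r_w is the required return on
    operations; in the paper it reflects the debt tax shield, here it is
    simply an input."""
    noa_path = [noa_0] + list(noa)
    reoi = [oi_t - r_w * noa_path[t] for t, oi_t in enumerate(oi)]
    pv_reoi = sum(x / (1 + r_w) ** (t + 1) for t, x in enumerate(reoi))
    # crude continuing value: last residual income treated as a flat perpetuity
    cv = reoi[-1] / r_w / (1 + r_w) ** len(oi)
    return noa_0 + pv_reoi + cv

# illustrative three-year forecast (monetary units arbitrary)
print(residual_income_value(noa_0=1_000.0,
                            oi=[120.0, 130.0, 138.0],
                            noa=[1_050.0, 1_100.0, 1_150.0],
                            r_w=0.08))
```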

  17. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection

    Science.gov (United States)

    Zipkin, Elise F.; Grant, Evan H. Campbell; Fagan, William F.

    2012-01-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multi-species hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions of species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation dataset. We found that wetland hydroperiod (the length of time that a wetland holds water) as well as the occurrence state in the prior year were generally the most important factors in determining occupancy. The model with only habitat covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multi-species models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for
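
    The idea of a posterior distribution of ROC AUC values can be sketched by computing one AUC per posterior draw of the predicted occupancy probabilities. The sketch below ignores the paper's treatment of false negatives in the evaluation data (which would require sampling the latent occupancy states as well) and uses fabricated posterior draws purely to show the mechanics.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def posterior_auc(psi_samples, z_observed):
    """One AUC per posterior draw of predicted occupancy probabilities,
    giving a posterior distribution of predictive performance.
    psi_samples : (n_draws, n_sites) posterior occupancy probabilities
    z_observed  : (n_sites,) 0/1 occupancy in the evaluation data"""
    return np.array([roc_auc_score(z_observed, psi) for psi in psi_samples])

# fabricated posterior draws and validation data, for illustration only
rng = np.random.default_rng(5)
true_psi = rng.uniform(0.1, 0.9, size=60)
z = rng.binomial(1, true_psi)
draws = np.clip(true_psi + rng.normal(0, 0.08, size=(500, 60)), 0.01, 0.99)
aucs = posterior_auc(draws, z)
print(np.quantile(aucs, [0.025, 0.5, 0.975]))   # credible interval for AUC
```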

  18. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect

    Science.gov (United States)

    Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt and prescribe a constant reduction for the entire melt across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to the one of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the
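
    A debris-aware temperature-index melt model of the general kind described above can be written as melt = TF(h) · max(T − T_threshold, 0), with a temperature factor TF that depends on debris thickness h. The decay form and every number in the sketch below are illustrative placeholders, not the parameterisation calibrated on Miage Glacier.

```python
import numpy as np

def deti_melt(air_temp_c, debris_thickness_m,
              tf_clean=0.006, h_star=0.05, t_threshold=1.0):
    """Toy debris temperature-index melt per time step:
    melt = TF(h) * (T - T_threshold) for T above the threshold, with a
    temperature factor that decays with debris thickness h. Decay form and
    parameter values are illustrative placeholders only."""
    tf = tf_clean * h_star / (h_star + np.asarray(debris_thickness_m))
    excess = np.clip(np.asarray(air_temp_c) - t_threshold, 0.0, None)
    return tf * excess

# melt under 0.10 m of debris versus a thin 0.02 m cover, at +6 degC
print(deti_melt(6.0, 0.10), deti_melt(6.0, 0.02))
```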

  19. Models and error analyses of measuring instruments in accountability systems in safeguards control

    International Nuclear Information System (INIS)

    Dattatreya, E.S.

    1977-05-01

    Essentially three types of measuring instruments are used in plutonium accountability systems: (1) the bubblers, for measuring the total volume of liquid in the holding tanks, (2) coulometers, titration apparatus and calorimeters, for measuring the concentration of plutonium; and (3) spectrometers, for measuring isotopic composition. These three classes of instruments are modeled and analyzed. Finally, the uncertainty in the estimation of total plutonium in the holding tank is determined

  20. Accounting outsourcing and some problems of selected software for accounting

    OpenAIRE

    Turková, Lenka

    2009-01-01

    Diploma thesis on Accounting outsourcing and key problems of selected software for accounting deals with the accounting outsourcing. Work focuses here on the question of the proper selection of an accounting firm and on the conditions of cooperation with it. In this work the reader is also acquainted with some software for accounting and with their advantages and disadvantages.

  1. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were applied to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of Pearson residuals over the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared parameter estimation to the estimations given by the Poisson regression model. Variance of the number of primary care consultations (Var[Y]=21.03) was greater than the mean (E[Y]=5.93) and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. Interpretation of estimates from two variables (using the Internet to seek health information and single parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (use of a robust estimator) and 0.45 and 0.13 (negative binomial) respectively. Different methods exist to solve the problem of underestimating the variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems to be particularly accurate because of its theoretical distribution; in addition this regression is easy to perform with ordinary statistical software packages.
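
    The overdispersion check and two of the remedies discussed above are straightforward to reproduce with statsmodels: fit the Poisson GLM, inspect the Pearson chi²/df ratio, then refit with a Pearson-scaled (quasi-Poisson-style) variance or a negative binomial family. The data below are simulated stand-ins for the consultation counts, and the covariate names and coefficients are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# fabricated data: overdispersed counts of primary care consultations
n = 500
internet_use = rng.binomial(1, 0.4, n)
single_parent = rng.binomial(1, 0.2, n)
X = sm.add_constant(np.column_stack([internet_use, single_parent]))
mu = np.exp(1.2 + 0.15 * internet_use + 0.30 * single_parent)
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))   # variance > mean

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
ratio = poisson_fit.pearson_chi2 / poisson_fit.df_resid
print("Pearson chi2 / df:", round(ratio, 2))           # values well above 1 signal overdispersion

# overdispersed (Pearson-scaled) Poisson and negative binomial refits
quasi = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()  # alpha fixed for simplicity
print(poisson_fit.bse, quasi.bse, negbin.bse, sep="\n")  # standard errors widen once overdispersion is handled
```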

  2. A database model for evaluating material accountability safeguards effectiveness against protracted theft

    International Nuclear Information System (INIS)

    Sicherman, A.; Fortney, D.S.; Patenaude, C.J.

    1993-07-01

    DOE Material Control and Accountability Order 5633.3A requires that facilities handling special nuclear material evaluate their effectiveness against protracted theft (repeated thefts of small quantities of material, typically occurring over an extended time frame, to accumulate a goal quantity). Because a protracted theft attempt can extend over time, material accountability-like (MA) safeguards may help detect a protracted theft attempt in progress. Inventory anomalies, and material not in its authorized location when requested for processing are examples of MA detection mechanisms. Crediting such detection in evaluations, however, requires taking into account potential insider subversion of MA safeguards. In this paper, the authors describe a database model for evaluating MA safeguards effectiveness against protracted theft that addresses potential subversion. The model includes a detailed yet practical structure for characterizing various types of MA activities, lists of potential insider MA defeat methods and access/authority related to MA activities, and an initial implementation of built-in MA detection probabilities. This database model, implemented in the new Protracted Insider module of ASSESS (Analytic System and Software for Evaluating Safeguards and Security), helps facilitate the systematic collection of relevant information about MA activity steps, and ''standardize'' MA safeguards evaluations

  3. Modelling Job-Related and Personality Predictors of Intention to Pursue Accounting Careers among Undergraduate Students in Ghana

    Science.gov (United States)

    Mbawuni, Joseph; Nimako, Simon Gyasi

    2015-01-01

    This study principally investigates job-related and personality factors that determine Ghanaian accounting students' intentions to pursue careers in accounting. It draws on a rich body of existing literature to develop a research model. Primary data were collected from a cross-sectional survey of 516 final year accounting students in a Ghanaian…

  4. INVESTIGATION INTO ACCOUNT OF A TIME VALUE OF MONEY IN CLASSICAL MULTITOPIC INVENTORY MODELS

    Directory of Open Access Journals (Sweden)

    Natalya A. Chernyaeva

    2013-01-01

    Full Text Available The article describes two types of models. The first is a traditional multitopic inventory model with constant demand, and the second is a model based on the average cost of inventory in optimizing the inventory management system. Taking the time value of money into account in these models, the authors study three possible schemes for the payment of costs: «prenumerando» (at the time of delivery of the current general batch order), «postnumerando» (at the time of delivery of the next general batch order), and payment of costs at mid-term. Maximization of the total intensity of revenue from the outgoing and incoming cash flows occurring in the inventory management system, which characterize the analyzed models, was adopted as the criterion for optimizing the inventory control strategy.
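
    The three payment timings compared in the article differ only in when, within each replenishment cycle, the cost is discounted. A minimal sketch of that comparison, with an arbitrary ordering cost, cycle length and discount rate, is shown below.

```python
def discounted_cycle_costs(order_cost, cycle_length_years, horizon_years,
                           annual_rate, scheme="prenumerando"):
    """Present value of a per-cycle cost paid repeatedly over the horizon,
    under the three payment timings discussed in the article: at the start
    of each cycle ("prenumerando"), at its end ("postnumerando"), or at
    mid-cycle. Cost, cycle length and rate are arbitrary illustrations."""
    offsets = {"prenumerando": 0.0, "postnumerando": 1.0, "midterm": 0.5}
    shift = offsets[scheme] * cycle_length_years
    n_cycles = int(horizon_years / cycle_length_years)
    return sum(order_cost / (1 + annual_rate) ** (k * cycle_length_years + shift)
               for k in range(n_cycles))

for s in ("prenumerando", "midterm", "postnumerando"):
    print(s, round(discounted_cycle_costs(200.0, 0.25, 5.0, 0.10, s), 2))
```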

  5. A multiscale active structural model of the arterial wall accounting for smooth muscle dynamics.

    Science.gov (United States)

    Coccarelli, Alberto; Edwards, David Hughes; Aggarwal, Ankush; Nithiarasu, Perumal; Parthimos, Dimitris

    2018-02-01

    Arterial wall dynamics arise from the synergy of passive mechano-elastic properties of the vascular tissue and the active contractile behaviour of smooth muscle cells (SMCs) that form the media layer of vessels. We have developed a computational framework that incorporates both these components to account for vascular responses to mechanical and pharmacological stimuli. To validate the proposed framework and demonstrate its potential for testing hypotheses on the pathogenesis of vascular disease, we have employed a number of pharmacological probes that modulate the arterial wall contractile machinery by selectively inhibiting a range of intracellular signalling pathways. Experimental probes used on ring segments from the rabbit central ear artery are: phenylephrine, a selective α1-adrenergic receptor agonist that induces vasoconstriction; cyclopiazonic acid (CPA), a specific inhibitor of sarcoplasmic/endoplasmic reticulum Ca2+-ATPase; and ryanodine, a diterpenoid that modulates Ca2+ release from the sarcoplasmic reticulum. These interventions were able to delineate the role of membrane versus intracellular signalling, previously identified as main factors in smooth muscle contraction and the generation of vessel tone. Each SMC was modelled by a system of nonlinear differential equations that account for intracellular ionic signalling, and in particular Ca2+ dynamics. Cytosolic Ca2+ concentrations formed the catalytic input to a cross-bridge kinetics model. Contractile output from these cellular components forms the input to the finite-element model of the arterial rings under isometric conditions that reproduces the experimental conditions. The model does not account for the role of the endothelium, as the nitric oxide production was suppressed by the action of L-NAME, and also due to the absence of shear stress on the arterial ring, as the experimental set-up did not involve flow. Simulations generated by the integrated model closely matched experimental

  6. Model of inventory replenishment in periodic review accounting for the occurrence of shortages

    Directory of Open Access Journals (Sweden)

    Stanisław Krzyżaniak

    2014-03-01

    Full Text Available Background: Despite the development of alternative concepts of goods flow management, the inventory management under conditions of random variations of demand is still an important issue, both from the point of view of inventory keeping and replenishment costs and the service level measured as the level of inventory availability. There is a number of inventory replenishment systems used in these conditions, but they are mostly developments of two basic systems: reorder point-based and periodic review-based. The paper deals with the latter system. Numerous researches indicate the need to improve the classical models describing that system, the reason being mainly the necessity to adapt the model better to the actual conditions. This allows a correct selection of parameters that control the used inventory replenishment system and - as a result - to obtain expected economic effects. Methods: This research aimed at building a model of the periodic review system to reflect the relations (observed during simulation tests between the volume of inventory shortages and the degree of accounting for so-called deferred demand, and the service level expressed as the probability of satisfying the demand in the review and the inventory replenishment cycle. The following model building and testing method has been applied: numerical simulation of inventory replenishment - detailed analysis of simulation results - construction of the model taking into account the regularities observed during the simulations - determination of principles of solving the system of relations creating the model - verification of the results obtained from the model using the results from simulation. Results: Presented are selected results of calculations based on classical formulas and using the developed model, which describe the relations between the service level and the parameters controlling the discussed inventory replenishment system. The results are compared to the simulation
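
    The simulation step of the method described above can be illustrated with a bare-bones periodic-review, order-up-to-S policy under normally distributed daily demand and full backordering of unmet ("deferred") demand. The policy parameters and demand figures are arbitrary, and the sketch only estimates the service measures by simulation rather than reproducing the analytical relations developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_periodic_review(S=120, review_period=7, lead_time=3,
                             mean_demand=10.0, sd_demand=4.0, n_days=50_000):
    """Monte Carlo sketch of a periodic-review, order-up-to-S policy with
    normal daily demand and full backordering. Returns the immediate fill
    rate and the fraction of review cycles with no shortage."""
    on_hand = float(S)
    pipeline = {}                              # arrival day -> quantity on order
    demand_total = demand_met = 0.0
    cycles = good_cycles = 0
    cycle_ok = True
    for t in range(n_days):
        on_hand += pipeline.pop(t, 0.0)        # receive any order due today
        d = max(0.0, rng.normal(mean_demand, sd_demand))
        demand_total += d
        demand_met += min(d, max(on_hand, 0.0))   # part served from stock immediately
        on_hand -= d                              # negative on-hand = backorders
        if on_hand < 0:
            cycle_ok = False
        if t % review_period == 0:                # review moment: order up to S
            position = on_hand + sum(pipeline.values())
            if S - position > 0:
                pipeline[t + lead_time] = pipeline.get(t + lead_time, 0.0) + (S - position)
            cycles += 1
            good_cycles += cycle_ok
            cycle_ok = True
    return demand_met / demand_total, good_cycles / cycles

fill_rate, cycle_service = simulate_periodic_review()
print(f"fill rate ~ {fill_rate:.3f}, cycle service level ~ {cycle_service:.3f}")
```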

  7. Testing the limits of the 'joint account' model of genetic information: a legal thought experiment.

    Science.gov (United States)

    Foster, Charles; Herring, Jonathan; Boyd, Magnus

    2015-05-01

    We examine the likely reception in the courtroom of the 'joint account' model of genetic confidentiality. We conclude that the model, as modified by Gilbar and others, is workable and reflects, better than more conventional legal approaches, both the biological and psychological realities and the obligations owed under Articles 8 and 10 of the European Convention on Human Rights (ECHR). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  8. Generation of SEEAW asset accounts based on water resources management models

    Science.gov (United States)

    Pedro-Monzonís, María; Solera, Abel; Andreu, Joaquín

    2015-04-01

    One of the main challenges of the XXI century is the sustainable use of water, since water is an essential element for the life of all who inhabit our planet. In many cases, the lack of economic valuation of water resources leads to inefficient water use. In this regard, society expects policymakers and stakeholders to maximise the profit produced per unit of natural resources. Water planning and Integrated Water Resources Management (IWRM) represent the best way to achieve this goal. The System of Environmental-Economic Accounting for Water (SEEAW) is presented as a tool for water allocation which enables the building of water balances in a river basin. The main concern of the SEEAW is to provide a standard approach which allows policymakers to compare results between different territories. But building water accounts is a complex task due to the difficulty of collecting the required data. Because the components of the hydrological cycle are difficult to gauge, the use of simulation models has become an essential tool, extensively employed in recent decades. The target of this paper is to present the building of a database that enables the combined use of hydrological models and water resources models developed with AQUATOOL DSSS to fill in the SEEAW tables. This research is framed within the Water Accounting in a Multi-Catchment District (WAMCD) project, financed by the European Union. Its main goal is the development of water accounts in the Mediterranean Andalusian River Basin District, in Spain. This research aims to contribute to the objectives of the "Blueprint to safeguard Europe's water resources". It is noteworthy that, in Spain, a large part of these methodological decisions are included in the Spanish Guideline of Water Planning with normative status, guaranteeing consistency and comparability of the results.

  9. @AACAnatomy twitter account goes live: A sustainable social media model for professional societies.

    Science.gov (United States)

    Benjamin, Hannah K; Royer, Danielle F

    2018-05-01

    Social media, with its capabilities of fast, global information sharing, provides a useful medium for professional development, connecting and collaborating with peers, and outreach. The goals of this study were to describe a new, sustainable model for Twitter use by professional societies, and analyze its impact on @AACAnatomy, the Twitter account of the American Association of Clinical Anatomists. Under supervision of an Association committee member, an anatomy graduate student developed a protocol for publishing daily tweets for @AACAnatomy. Five tweet categories were used: Research, Announcements, Replies, Engagement, and Community. Analytics from the 6-month pilot phase were used to assess the impact of the new model. @AACAnatomy had a steady average growth of 33 new followers per month, with less than 10% likely representing Association members. Research tweets, based on Clinical Anatomy articles with an abstract link, were the most shared, averaging 5,451 impressions, 31 link clicks, and nine #ClinAnat hashtag clicks per month. However, tweets from non-Research categories accounted for the highest impression and engagement metrics in four out of six months. For all tweet categories, monthly averages show consistent interaction of followers with the account. Daily tweet publication resulted in a 103% follower increase. An active Twitter account successfully facilitated regular engagement with @AACAnatomy followers and the promotion of clinical anatomy topics within a broad community. This Twitter model has the potential for implementation by other societies as a sustainable medium for outreach, networking, collaboration, and member engagement. Clin. Anat. 31:566-575, 2018. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.

  10. Directional harmonic theory: a computational Gestalt model to account for illusory contour and vertex formation.

    Science.gov (United States)

    Lehar, Steven

    2003-01-01

    Visual illusions and perceptual grouping phenomena offer an invaluable tool for probing the computational mechanism of low-level visual processing. Some illusions, like the Kanizsa figure, reveal illusory contours that form edges collinear with the inducing stimulus. This kind of illusory contour has been modeled by neural network models by way of cells equipped with elongated spatial receptive fields designed to detect and complete the collinear alignment. There are, however, other illusory groupings which are not so easy to account for in neural network terms. The Ehrenstein illusion exhibits an illusory contour that forms a contour orthogonal to the stimulus instead of collinear with it. Other perceptual grouping effects reveal illusory contours that exhibit a sharp corner or vertex, and still others take the form of vertices defined by the intersection of three, four, or more illusory contours that meet at a point. A direct extension of the collinear completion models to account for these phenomena tends towards a combinatorial explosion, because it would suggest cells with specialized receptive fields configured to perform each of those completion types, each of which would have to be replicated at every location and every orientation across the visual field. These phenomena therefore challenge the adequacy of the neural network approach to account for these diverse perceptual phenomena. I have proposed elsewhere an alternative paradigm of neurocomputation in the harmonic resonance theory (Lehar 1999, see website), whereby pattern recognition and completion are performed by spatial standing waves across the neural substrate. The standing waves perform a computational function analogous to that of the spatial receptive fields of the neural network approach, except that, unlike that paradigm, a single resonance mechanism performs a function equivalent to a whole array of spatial receptive fields of different spatial configurations and of different orientations

  11. Feeling-of-knowing for proper names.

    Science.gov (United States)

    Izaute, Marie; Chambres, Patrick; Larochelle, Serge

    2002-12-01

    The main objective of the present study was to investigate feeling-of-knowing (FOK) in proper name retrieval. Many studies show that FOK can predict performance on a subsequent criterion test. Although feeling-of-knowing studies involve questions about proper names, none makes a distinction between proper names and common names. Nevertheless, the specific character of proper names as unique labels referring to a person should allow participants to target precisely the desired verbal label. Our idea was that the unique character of proper name information should result in more accurate FOK evaluations. In the experiment, participants evaluated feeling-of-knowing for proper and common name descriptions. The study demonstrates that FOK judgments are more accurate for proper names than for common names. The implications of the findings for proper names are briefly discussed in terms of feeling-of-knowing hypotheses.

  12. A Buffer Model Account of Behavioral and ERP Patterns in the Von Restorff Paradigm

    Directory of Open Access Journals (Sweden)

    Siri-Maria Kamp

    2016-06-01

    We combined a mechanistic model of episodic encoding with theories on the functional significance of two event-related potential (ERP) components to develop an integrated account for the Von Restorff effect, which refers to the enhanced recall probability for an item that deviates in some feature from other items in its study list. The buffer model of Lehman and Malmberg (2009, 2013) can account for this effect by assuming that items encountered during encoding enter an episodic buffer where they are actively rehearsed. When a deviant item is encountered, in order to re-allocate encoding resources towards this item the buffer is emptied of its prior content, a process labeled "compartmentalization". Based on theories on their functional significance, the P300 component of the ERP may co-occur with this hypothesized compartmentalization process, while the frontal slow wave may index rehearsal. We derived predictions from this integrated model for output patterns in free recall, systematic variance in ERP components, as well as associations between the two types of measures in a dataset of 45 participants who studied and freely recalled lists of the Von Restorff type. Our major predictions were confirmed and the behavioral and physiological results were consistent with the predictions derived from the model. These findings demonstrate that constraining mechanistic models of episodic memory with brain activity patterns and generating predictions for relationships between brain activity and behavior can lead to novel insights into the relationship between the brain, the mind, and behavior.
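
    A minimal toy simulation can make the buffer/compartmentalization idea concrete. The sketch below is not the Lehman and Malmberg model itself: the list length, buffer capacity, random-displacement rule, and the equating of rehearsal with recall probability are all invented for illustration, but it reproduces the qualitative prediction that the deviant item accumulates extra rehearsal once the buffer is flushed.

        import random

        def simulate_list(items, deviant_index, buffer_size=4):
            """Toy rehearsal buffer: each studied item enters a fixed-capacity buffer
            and gains one unit of rehearsal for every step it stays there. When the
            deviant item arrives, the buffer is emptied first ("compartmentalization"),
            so the deviant briefly monopolizes encoding resources."""
            buffer, rehearsal = [], {item: 0 for item in items}
            for pos, item in enumerate(items):
                if pos == deviant_index:
                    buffer = []                                # flush prior content
                if len(buffer) == buffer_size:
                    buffer.pop(random.randrange(buffer_size))  # displace a random occupant
                buffer.append(item)
                for occupant in buffer:                        # buffered items are rehearsed
                    rehearsal[occupant] += 1
            return rehearsal

        random.seed(1)
        items = [f"word{i:02d}" for i in range(15)]
        deviant = 7                                            # mid-list deviant item
        runs = [simulate_list(items, deviant) for _ in range(2000)]
        mean = [sum(run[w] for run in runs) / len(runs) for w in items]
        others = [m for i, m in enumerate(mean) if i != deviant]
        print(f"mean rehearsal, deviant item: {mean[deviant]:.2f}")
        print(f"mean rehearsal, other items:  {sum(others) / len(others):.2f}")

    Treating accumulated rehearsal as a proxy for recall probability, the flushed buffer yields the enhanced memory for the deviant item that defines the Von Restorff effect.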

  13. Accounting for standard errors of vision-specific latent trait in regression models.

    Science.gov (United States)

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of the Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analysis performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and the multiple linear regression model for the assessment of the association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; its effectiveness was compared in our simulation study with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration the SEs of the estimated Rasch-scaled scores. The two models applied to real data differed in effect size estimations and in the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (an average 5-fold decrease in bias) with comparable power and precision in the estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Analyses of patient-reported data using Rasch techniques do not take into account the SE of the latent trait in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits.
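
    The statistical point at issue, that regressing an outcome on estimated Rasch scores while ignoring their standard errors attenuates association estimates, can be illustrated with a short simulation. This is a deliberately simplified numpy sketch (classical measurement error with a known error variance and a moment-based correction), not the authors' Hierarchical Bayesian joint model, and every numerical value in it is invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n, beta_true, se_latent = 2000, 0.5, 0.8                # invented values

        theta = rng.normal(0.0, 1.0, n)                         # true vision-specific latent trait
        theta_hat = theta + rng.normal(0.0, se_latent, n)       # Rasch estimate with known SE
        y = 1.0 + beta_true * theta + rng.normal(0.0, 1.0, n)   # outcome measured alongside it

        # Two-stage "separate analysis": regress on the estimate, ignoring its SE.
        beta_naive = np.cov(theta_hat, y)[0, 1] / np.var(theta_hat, ddof=1)

        # Moment-based correction using the known error variance (regression calibration):
        # var(theta_hat) = var(theta) + se_latent**2, so rescale the naive slope.
        reliability = (np.var(theta_hat, ddof=1) - se_latent**2) / np.var(theta_hat, ddof=1)
        beta_corrected = beta_naive / reliability

        print(f"true {beta_true:.2f}  naive {beta_naive:.2f}  corrected {beta_corrected:.2f}")

    The naive slope is biased toward zero by roughly the reliability of the estimated trait; a one-stage joint model avoids the bias by estimating the regression and the measurement model simultaneously.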

  14. 7 CFR 29.112 - Proper light.

    Science.gov (United States)

    2010-01-01

    7 CFR § 29.112 (Tobacco Inspection Regulations; Inspectors, Samplers, and Weighers), Proper light: Tobacco shall not be inspected or sampled for the purposes of the Act except when displayed in proper light for correct...

  15. Towards proper name generation : A corpus analysis

    NARCIS (Netherlands)

    Castro Ferreira, Thiago; Wubben, Sander; Krahmer, Emiel

    We introduce a corpus for the study of proper name generation. The corpus consists of proper name references to people in webpages, extracted from the Wikilinks corpus. In our analyses, we aim to identify the different ways, in terms of length and form, in which proper names are produced.

  16. An analytical model accounting for tip shape evolution during atom probe analysis of heterogeneous materials.

    Science.gov (United States)

    Rolland, N; Larson, D J; Geiser, B P; Duguay, S; Vurpillot, F; Blavette, D

    2015-12-01

    An analytical model describing the field evaporation dynamics of a tip made of a thin layer deposited on a substrate is presented in this paper. The difference in evaporation field between the materials is taken into account in this approach, in which the tip shape is modeled at a mesoscopic scale. It was found that the absence of a sharp edge on the surface is a sufficient condition for deriving the morphological evolution during successive evaporation of the layers. This modeling gives an instantaneous and smooth analytical representation of the surface that shows good agreement with finite difference simulation results, and a specific regime of evaporation was highlighted when the substrate is a low-evaporation-field phase. In addition, the model makes it possible to calculate theoretically the analyzed volume of the tip, potentially opening up new horizons for atom probe tomographic reconstruction. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Spectral Neugebauer-based color halftone prediction model accounting for paper fluorescence.

    Science.gov (United States)

    Hersch, Roger David

    2014-08-20

    We present a spectral model for predicting the fluorescent emission and the total reflectance of color halftones printed on optically brightened paper. By relying on extended Neugebauer models, the proposed model accounts for the attenuation by the ink halftones of both the incident exciting light in the UV wavelength range and the emerging fluorescent emission in the visible wavelength range. The total reflectance is predicted by adding the predicted fluorescent emission relative to the incident light and the pure reflectance predicted with an ink-spreading enhanced Yule-Nielsen modified Neugebauer reflectance prediction model. The predicted fluorescent emission spectrum as a function of the amounts of cyan, magenta, and yellow inks is very accurate. It can be useful to paper and ink manufacturers who would like to study in detail the contribution of the fluorescent brighteners and the attenuation of the fluorescent emission by ink halftones.
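
    The pure-reflectance part of such a prediction rests on the Yule-Nielsen modified spectral Neugebauer equation, R(λ) = (Σ_i a_i R_i(λ)^(1/n))^n, with Demichel area coverages a_i for the Neugebauer primaries. The sketch below shows only that generic baseline, not the author's fluorescence-aware model: ink spreading, UV attenuation, and the fluorescent emission term are omitted, and the primary spectra are random placeholders.

        import numpy as np

        def demichel_coverages(c, m, y):
            """Probabilistic (Demichel) area coverages of the 8 Neugebauer primaries
            for nominal cyan/magenta/yellow coverages c, m, y in [0, 1]."""
            return np.array([
                (1 - c) * (1 - m) * (1 - y),                                    # bare paper
                c * (1 - m) * (1 - y), (1 - c) * m * (1 - y), (1 - c) * (1 - m) * y,
                c * m * (1 - y), c * (1 - m) * y, (1 - c) * m * y,
                c * m * y,                                                      # three-ink overprint
            ])

        def yn_neugebauer(coverages, primary_reflectances, n=2.0):
            """Yule-Nielsen modified spectral Neugebauer prediction:
            R(lambda) = (sum_i a_i * R_i(lambda)**(1/n))**n, n being the Yule-Nielsen factor."""
            return (coverages[:, None] * primary_reflectances ** (1.0 / n)).sum(axis=0) ** n

        wavelengths = np.arange(380, 731, 10)                         # nm
        rng = np.random.default_rng(3)
        R_primaries = rng.uniform(0.05, 0.9, (8, wavelengths.size))   # placeholder primary spectra

        a = demichel_coverages(c=0.4, m=0.2, y=0.7)
        print(yn_neugebauer(a, R_primaries)[:5].round(3))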

  18. Regional Balance Model of Financial Flows through Sectoral Approaches System of National Accounts

    Directory of Open Access Journals (Sweden)

    Ekaterina Aleksandrovna Zaharchuk

    2017-03-01

    The main purpose of the study, the results of which are reflected in this article, is the theoretical and methodological substantiation of the possibility of building a regional balance model of financial flows consistent with the principles of construction of the System of National Accounts (SNA). The paper summarizes the international experience of building regional accounts in the SNA and reflects the advantages and disadvantages of the existing techniques for constructing a Social Accounting Matrix. The authors propose an approach to building the regional balance model of financial flows that is based on disaggregated tables of the formation, distribution and use of the added value of a territory within the institutional sectors of the SNA (corporations, public administration, households). To resolve the problem of the transition of value added from industries to sectors, the authors offer an approach to accounting for the development, distribution and use of value added within the institutional sectors of the territories. The methods of calculation are based on the publicly available information base of statistics agencies and federal services. The authors provide a scheme of the interrelations of the indicators of the regional balance model of financial flows. It allows the movement of regional resources by the sectors «corporations», «public administration» and «households» to be mutually coordinated, and the cash flows of the region to be coordinated by sector and direction of use. As a result, a single account of the formation and distribution of territorial financial resources is obtained, which is the regional balance model of financial flows. This matrix shows the distribution of financial resources by income sources and sectors, where the components of the formation (compensation, taxes and gross profit), distribution (transfers and payments) and use (final consumption, accumulation) of value added are presented.

  19. Developing a tuberculosis transmission model that accounts for changes in population health.

    Science.gov (United States)

    Oxlade, Olivia; Schwartzman, Kevin; Benedetti, Andrea; Pai, Madhukar; Heymann, Jody; Menzies, Dick

    2011-01-01

    Simulation models are useful in policy planning for tuberculosis (TB) control. To accurately assess interventions, important modifiers of the epidemic should be accounted for in evaluative models. Improvements in population health were associated with the declining TB epidemic in the pre-antibiotic era and may be relevant today. The objective of this study was to develop and validate a TB transmission model that accounted for changes in population health. We developed a deterministic TB transmission model, using reported data from the pre-antibiotic era in England. Change in adjusted life expectancy, taken as a proxy for general health, was used to determine the rate of change of key epidemiological parameters. Predicted outcomes included risk of TB infection and TB mortality. The model was validated in the setting of the Netherlands and then applied to modern Peru. The model, developed in the setting of England, predicted TB trends in the Netherlands very accurately. The R² value for the correlation between observed and predicted data was 0.97 and 0.95 for TB infection and mortality, respectively. In Peru, the predicted decline in incidence prior to the expansion of "Directly Observed Treatment Short Course" (the DOTS strategy) was 3.7% per year (observed = 3.9% per year). After DOTS expansion, the predicted decline was very similar to the observed decline of 5.8% per year. We successfully developed and validated a TB model, which uses a proxy for population health to estimate changes in key epidemiological parameters. Population health contributed significantly to the improvement in TB outcomes observed in Peru. Changing population health should be incorporated into evaluative models for global TB control.
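
    As an illustration of the kind of structure being described, the sketch below integrates a generic susceptible-latent-active compartmental model in which the progression rate from latent infection to active disease is scaled down as a population-health proxy improves over time. It is not the authors' calibrated model; the compartments, the form of the health adjustment, and all parameter values are invented.

        import numpy as np
        from scipy.integrate import odeint

        def tb_model(state, t, beta, progression, recovery, mu, health_trend):
            """Minimal S-L-I tuberculosis model; the latent-to-active progression rate
            falls as a crude population-health proxy (e.g., life expectancy) rises."""
            S, L, I = state
            health = 1.0 + health_trend * t
            infection = beta * S * I
            dS = mu - infection - mu * S
            dL = infection - (progression / health) * L - mu * L
            dI = (progression / health) * L - recovery * I - mu * I
            return [dS, dL, dI]

        t = np.linspace(0.0, 50.0, 501)                 # years
        y0 = [0.90, 0.09, 0.01]                         # population fractions (invented)
        sol = odeint(tb_model, y0, t, args=(8.0, 0.10, 0.5, 0.02, 0.02))
        print("active TB prevalence at t = 0, 25, 50 years:", sol[[0, 250, 500], 2].round(4))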

  20. Accounting for Zero Inflation of Mussel Parasite Counts Using Discrete Regression Models

    Directory of Open Access Journals (Sweden)

    Emel Çankaya

    2017-06-01

    In many ecological applications, the absence of species is inevitable, due either to detection faults in samples or to conditions uninhabitable for their existence, resulting in a high number of zero counts or abundances. The usual practice for modelling such data is regression modelling of log(abundance+1), and it is well known that the resulting model is inadequate for prediction purposes. New discrete models accounting for zero abundances, namely zero-inflated regression (ZIP and ZINB), Hurdle-Poisson (HP) and Hurdle-Negative Binomial (HNB) models, amongst others, are widely preferred to the classical regression models. Because mussels are one of the economically most important aquatic products of Turkey, the purpose of this study is to examine the performances of these four models in determining the significant biotic and abiotic factors on the occurrences of the Nematopsis legeri parasite harming the existence of Mediterranean mussels (Mytilus galloprovincialis L.). The data collected from three coastal regions of Sinop city in Turkey showed that more than 50% of parasite counts, on average, are zero-valued, and model comparisons were based on information criteria. The results showed that the probability of the occurrence of this parasite is best formulated by the ZINB or HNB models, and the influential factors of the models were found to correspond with the ecological differences of the regions.
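
    A minimal sketch of the zero-inflated approach is given below: zero-heavy counts are simulated and fitted with statsmodels' zero-inflated Poisson, next to the log(abundance+1) regression criticized above. The covariate, the coefficients, and the inflation probability are invented stand-ins for the parasite data, and the ZINB and hurdle variants mentioned in the abstract follow the same pattern.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(42)
        n = 500
        salinity = rng.normal(0.0, 1.0, n)                    # invented abiotic covariate

        # Zero-inflated counts: structural zeros with probability 0.5, else Poisson.
        lam = np.exp(0.3 + 0.6 * salinity)                    # invented coefficients
        structural_zero = rng.random(n) < 0.5
        counts = np.where(structural_zero, 0, rng.poisson(lam))

        X = sm.add_constant(salinity)
        zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1)),
                                      inflation='logit').fit(maxiter=500, disp=0)
        print(np.round(zip_fit.params, 3))                    # inflation part, then count part

        # The naive alternative criticized in the abstract: OLS on log(count + 1).
        print(np.round(sm.OLS(np.log(counts + 1), X).fit().params, 3))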

  1. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception.

  2. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work
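
    For contrast with the established models the authors test against, the sketch below simulates the standard mutual-inhibition account of bistable alternations: two percept populations with reciprocal inhibition, slow adaptation, and noise. It is a generic textbook-style rivalry model, not the predictive coding model described above, and all parameters are invented.

        import numpy as np

        rng = np.random.default_rng(7)
        dt, T = 0.01, 200.0                                   # seconds
        steps = int(T / dt)
        r = np.array([0.6, 0.4])                              # activities of the two percepts
        a = np.zeros(2)                                       # adaptation variables
        inhibition, adapt_gain, tau_a, noise = 3.0, 1.0, 10.0, 0.15   # invented parameters

        dominant = np.empty(steps, dtype=int)
        for i in range(steps):
            drive = 1.0 - inhibition * r[::-1] - adapt_gain * a       # input minus rivalry terms
            r += dt * (-r + np.clip(drive, 0, None)) + np.sqrt(dt) * noise * rng.normal(size=2)
            r = np.clip(r, 0, None)
            a += dt * (r - a) / tau_a                                 # slow adaptation
            dominant[i] = int(r[1] > r[0])

        switches = np.count_nonzero(np.diff(dominant))
        print(f"perceptual switches in {T:.0f} s: {switches}")
        print(f"approximate mean dominance duration: {T / max(switches, 1):.1f} s")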

  3. Possible Relativistic Definitions of Parallax, Proper Motion and Radial Velocity

    National Research Council Canada - National Science Library

    Klioner, S

    2000-01-01

    ... In this paper, the authors briefly describe a relativistic model of space-based optical positional observations valid at a high level of accuracy, and suggest definitions of parallax, proper motion...

  4. Accounting for and predicting the influence of spatial autocorrelation in water quality modeling

    Science.gov (United States)

    Miralha, L.; Kim, D.

    2017-12-01

    Although many studies have attempted to investigate the spatial trends of water quality, more attention is yet to be paid to the consequences of considering and ignoring the spatial autocorrelation (SAC) that exists in water quality parameters. Several studies have mentioned the importance of accounting for SAC in water quality modeling, as well as the differences in outcomes between models that account for and ignore SAC. However, the capacity to predict the magnitude of such differences is still ambiguous. In this study, we hypothesized that SAC inherently possessed by a response variable (i.e., water quality parameter) influences the outcomes of spatial modeling. We evaluated whether the level of inherent SAC is associated with changes in R-Squared, Akaike Information Criterion (AIC), and residual SAC (rSAC) after accounting for SAC during the modeling procedure. The main objective was to analyze if water quality parameters with higher Moran's I values (inherent SAC measure) undergo a greater increase in R² and a greater reduction in both AIC and rSAC. We compared a non-spatial model (OLS) to two spatial regression approaches (spatial lag and error models). Predictor variables were the principal components of topographic (elevation and slope), land cover, and hydrological soil group variables. We acquired these data from federal online sources (e.g. USGS). Ten watersheds were selected, each in a different state of the USA. Results revealed that water quality parameters with higher inherent SAC showed a substantial increase in R² and a decrease in rSAC after performing spatial regressions. However, AIC values did not show significant changes. Overall, the higher the level of inherent SAC in water quality variables, the greater the improvement in model performance. This indicates a linear and direct relationship between the spatial model outcomes (R² and rSAC) and the degree of SAC in each water quality variable. Therefore, our study suggests that the inherent level of SAC in a water quality variable indicates how much accounting for SAC can be expected to improve the spatial model.
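
    The "inherent SAC" referred to above is measured with Moran's I, which is straightforward to compute directly; the sketch below does so for a toy grid, since the watershed data and weight matrices are not available here. Fitting the spatial lag and error models themselves would typically be done with a spatial econometrics package such as PySAL's spreg, which is not shown.

        import numpy as np

        def morans_i(x, W):
            """Global Moran's I for values x and a spatial weight matrix W (zero diagonal)."""
            z = np.asarray(x, dtype=float) - np.mean(x)
            return len(z) / W.sum() * (W * np.outer(z, z)).sum() / (z ** 2).sum()

        # Toy example: a 10x10 grid with rook-contiguity weights.
        side = 10
        coords = [(i, j) for i in range(side) for j in range(side)]
        n = len(coords)
        W = np.zeros((n, n))
        for p, (i, j) in enumerate(coords):
            for q, (k, l) in enumerate(coords):
                if abs(i - k) + abs(j - l) == 1:              # rook neighbours
                    W[p, q] = 1.0

        rng = np.random.default_rng(0)
        smooth = np.array([np.sin(i / 3.0) + np.cos(j / 3.0) for i, j in coords])
        noise = rng.normal(size=n)

        print("Moran's I, spatially structured field:", round(morans_i(smooth + 0.3 * noise, W), 3))
        print("Moran's I, pure noise:                ", round(morans_i(noise, W), 3))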

  5. ACCOUNTING AND AUDIT OPERATIONS ON CURRENT ACCOUNT

    Directory of Open Access Journals (Sweden)

    Koblyanska Olena

    2018-03-01

    Introduction. The article is devoted to theoretical, methodical and practical issues of accounting and auditing of operations on the current account. The purpose of the study is to deepen and consolidate theoretical and practical knowledge of accounting and auditing of operations on the current account, to identify practical problems with the implementation of the methodology and organization of such accounting and auditing, and to develop recommendations for eliminating deficiencies and improving accounting and auditing. Results. The relevance of proper accounting and auditing of transactions on the current account in the bank is considered. Typical operations on the current account are examined through practical examples of how they are reflected in the accounts. Features of the audit of transactions on the current account are examined, the procedure for its implementation is presented, and types of abuses and violations that occur while performing operations on the current account are identified. The legal regulation of accounting, analysis and control of operations with cash on current accounts is considered. Problem issues related to organizing and conducting the audit of funds in bank accounts are analyzed, and directions for their solution are determined. Proposals for determining the sequence of the auditor's actions when checking cash flows on bank accounts are provided. Conclusions. Theoretical, methodological and practical issues of accounting and auditing of operations on the current account in the bank are addressed, based on a study of typical cash operations on the current account, the method of their reflection in the accounts, and the features of auditing cash on the account.

  6. THE PROPER MOTION OF PALOMAR 5

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, T. K.; Kallivayalil, N., E-mail: tkf4w@astro.virginia.edu [Department of Astronomy, University of Virginia, Charlottesville, 3530 McCormick Road, VA 22904-4325 (United States)

    2015-10-01

    Palomar 5 (Pal 5) is a faint halo globular cluster associated with narrow tidal tails. It is a useful system to understand the process of tidal dissolution, as well as to constrain the potential of the Milky Way. A well-determined orbit for Pal 5 would enable detailed study of these open questions. We present here the first CCD-based proper motion measurement of Pal 5 obtained using SDSS as a first epoch and new Large Binocular Telescope/Large Binocular Camera (LBC) images as a second, giving a baseline of 15 years. We perform relative astrometry, using SDSS as a distortion-free reference, and images of the cluster and also of the Pal 5 stream for the derivation of the distortion correction for LBC. The reference frame is made up of background galaxies. We correct for differential chromatic refraction using relations obtained from SDSS colors as well as from flux-calibrated spectra, finding that the correction relations for stars and for galaxies are different. We obtain μ_α = −2.296 ± 0.186 mas yr⁻¹ and μ_δ = −2.257 ± 0.181 mas yr⁻¹ for the proper motion of Pal 5. We use this motion, and the publicly available code galpy, to model the disruption of Pal 5 in different Milky Way models consisting of a bulge, a disk, and a spherical dark matter halo. Our fits to the observed stream properties (streak and radial velocity gradient) result in a preference for a relatively large Pal 5 distance of around 24 kpc. A slightly larger absolute proper motion than what we measure also results in better matches but the best solutions need a change in distance. We find that a spherical Milky Way model, with V_0 = 220 km s⁻¹ and V_{20 kpc}, i.e., approximately at the apocenter of Pal 5, of 218 km s⁻¹, can match the data well, at least for our choice of disk and bulge parametrization.
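
    To connect such a measurement to orbit modelling of the kind done with galpy, the proper motion has to be combined with a position, distance, and line-of-sight velocity and converted to Galactocentric velocities. The astropy sketch below does that conversion; only the proper motion comes from this record, while the sky position, the roughly 24 kpc distance preferred by the fits, and the radial velocity are approximate values inserted for illustration. The record quotes μ_α, and at Pal 5's declination near zero the cos(δ) factor is negligible, so it is passed as pm_ra_cosdec here.

        import astropy.units as u
        from astropy.coordinates import SkyCoord, Galactocentric

        pal5 = SkyCoord(ra=229.02 * u.deg, dec=-0.11 * u.deg,       # approximate position
                        distance=24.0 * u.kpc,                      # distance favoured by the fits
                        pm_ra_cosdec=-2.296 * u.mas / u.yr,         # measured proper motion
                        pm_dec=-2.257 * u.mas / u.yr,
                        radial_velocity=-58.0 * u.km / u.s)         # assumed line-of-sight velocity

        galcen = pal5.transform_to(Galactocentric())                # default solar parameters
        print("Galactocentric position:", galcen.x.to(u.kpc), galcen.y.to(u.kpc), galcen.z.to(u.kpc))
        print("Galactocentric velocity:", galcen.v_x.to(u.km / u.s),
              galcen.v_y.to(u.km / u.s), galcen.v_z.to(u.km / u.s))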

  7. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    Energy Technology Data Exchange (ETDEWEB)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO2 and NOx emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  8. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    International Nuclear Information System (INIS)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO2 and NOx emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  9. SDOF models for reinforced concrete beams under impulsive loads accounting for strain rate effects

    Energy Technology Data Exchange (ETDEWEB)

    Stochino, F., E-mail: fstochino@unica.it [Department of Civil and Environmental Engineering and Architecture, University of Cagliari, Via Marengo 2, 09123 Cagliari (Italy); Carta, G., E-mail: giorgio_carta@unica.it [Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, Via Marengo 2, 09123 Cagliari (Italy)

    2014-09-15

    Highlights: • Flexural failure of reinforced concrete beams under blast and impact loads is studied. • Two single degree of freedom models are formulated to predict the beam response. • Strain rate effects are taken into account for both models. • The theoretical response obtained from each model is compared with experimental data. • The two models give a good estimation of the maximum deflection at collapse. - Abstract: In this paper, reinforced concrete beams subjected to blast and impact loads are examined. Two single degree of freedom models are proposed to predict the response of the beam. The first model (denoted as “energy model”) is developed from the law of energy balance and assumes that the deformed shape of the beam is represented by its first vibration mode. In the second model (named “dynamic model”), the dynamic behavior of the beam is simulated by a spring-mass oscillator. In both formulations, the strain rate dependencies of the constitutive properties of the beams are considered by varying the parameters of the models at each time step of the computation according to the values of the strain rates of the materials (i.e. concrete and reinforcing steels). The efficiency of each model is evaluated by comparing the theoretical results with experimental data found in the literature. The comparison shows that the energy model gives a good estimation of the maximum deflection of the beam at collapse, defined as the attainment of the ultimate strain in concrete. On the other hand, the dynamic model generally provides a smaller value of the maximum displacement. However, both approaches yield reliable results, even though they are based on some approximations. Being also very simple to implement, they may serve as a useful tool in practical applications.
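
    The “dynamic model” described above reduces to a driven spring-mass oscillator, which is easy to sketch. The code below integrates an equivalent SDOF system under a triangular impulsive load and crudely stiffens the resistance with the instantaneous deformation rate as a stand-in for the strain-rate adjustment made at each time step; the mass, stiffness, load, and rate law are invented, whereas the actual models derive the resistance function from the beam section and material laws.

        import numpy as np
        from scipy.integrate import odeint

        def sdof(state, t, m, k0, c, p0, t_d, rate_gain):
            """Equivalent SDOF beam under a triangular blast pulse; the stiffness is
            amplified with the deformation rate as a crude dynamic-increase factor."""
            x, v = state
            p = p0 * max(1.0 - t / t_d, 0.0)                  # triangular impulsive load
            dif = 1.0 + rate_gain * np.log1p(abs(v))          # invented rate-dependence
            return [v, (p - c * v - dif * k0 * x) / m]

        t = np.linspace(0.0, 0.2, 4001)                       # seconds
        params = (500.0, 2.0e6, 2.0e3, 8.0e5, 0.01, 0.1)      # m[kg], k0[N/m], c, p0[N], t_d[s], gain
        sol = odeint(sdof, [0.0, 0.0], t, args=params)
        print(f"peak midspan displacement: {1000.0 * np.abs(sol[:, 0]).max():.1f} mm")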

  10. Internet accounting dictionaries

    DEFF Research Database (Denmark)

    Nielsen, Sandro; Mourier, Lise

    2005-01-01

    An examination of existing accounting dictionaries on the Internet reveals a general need for a new type of dictionary. In contrast to the dictionaries now accessible, the future accounting dictionaries should be designed as proper Internet dictionaries based on a functional approach so they can...

  11. What can Gaia proper motions tell us about Milky Way dwarf galaxies?

    NARCIS (Netherlands)

    Jin, S.; Helmi, A.; Breddels, M.

    We present a proper-motion study on models of the dwarf spheroidal galaxy Sculptor, based on the predicted proper-motion accuracy of Gaia measurements. Gaia will measure proper motions of several hundreds of stars for a Sculptor-like system. Even with an uncertainty on the proper motion of order 1.5

  12. A three-dimensional model of mammalian tyrosinase active site accounting for loss of function mutations.

    Science.gov (United States)

    Schweikardt, Thorsten; Olivares, Concepción; Solano, Francisco; Jaenicke, Elmar; García-Borrón, José Carlos; Decker, Heinz

    2007-10-01

    Tyrosinases are the first and rate-limiting enzymes in the synthesis of melanin pigments responsible for colouring hair, skin and eyes. Mutation of tyrosinases often decreases melanin production resulting in albinism, but the effects are not always understood at the molecular level. Homology modelling of mouse tyrosinase based on recently published crystal structures of non-mammalian tyrosinases provides an active site model accounting for loss-of-function mutations. According to the model, the copper-binding histidines are located in a helix bundle comprising four densely packed helices. A loop containing residues M374, S375 and V377 connects the CuA and CuB centres, with the peptide oxygens of M374 and V377 serving as hydrogen acceptors for the NH-groups of the imidazole rings of the copper-binding His367 and His180. Therefore, this loop is essential for the stability of the active site architecture. A double substitution (374)MS(375) → (374)GG(375) or a single M374G mutation leads to a local perturbation of the protein matrix at the active site affecting the orientation of the H367 side chain, which may then be unable to bind CuB reliably, resulting in loss of activity. The model also accounts for loss of function in two naturally occurring albino mutations, S380P and V393F. The hydroxyl group in S380 contributes to the correct orientation of M374, and the substitution of V393 for a bulkier phenylalanine sterically impedes correct side chain packing at the active site. Therefore, our model explains the mechanistic necessity for conservation of not only active site histidines but also adjacent amino acids in tyrosinase.

  13. Proper body mechanics from an engineering perspective.

    Science.gov (United States)

    Mohr, Edward G

    2010-04-01

    The economic viability of the manual therapy practitioner depends on the number of massages/treatments that can be given in a day or week. Fatigue or injuries can have a major impact on the income potential and could ultimately reach the point which causes the practitioner to quit the profession, and seek other, less physically demanding, employment. Manual therapy practitioners in general, and massage therapists in particular, can utilize a large variety of body postures while giving treatment to a client. The hypothesis of this paper is that there is an optimal method for applying force to the client, which maximizes the benefit to the client, and at the same time minimizes the strain and effort required by the practitioner. Two methods were used to quantifiably determine the effect of using "poor" body mechanics (Improper method) and "best" body mechanics (Proper/correct method). The first approach uses computer modeling to compare the two methods. Both postures were modeled, such that the biomechanical effects on the practitioner's elbow, shoulder, hip, knee and ankle joints could be calculated. The force applied to the client, along with the height and angle of application of the force, was held constant for the comparison. The second approach was a field study of massage practitioners (n=18) to determine their maximal force capability, again comparing methods using "Improper and Proper body mechanics". Five application methods were tested at three different application heights, using a digital palm force gauge. Results showed that there was a definite difference between the two methods, and that the use of correct body mechanics can have a large impact on the health and well being of the massage practitioner over both the short and long term. Copyright 2009 Elsevier Ltd. All rights reserved.

  14. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    Directory of Open Access Journals (Sweden)

    Czoli Christine

    2011-10-01

    Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the pediatric-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment.

  15. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  16. A Bidirectional Subsurface Remote Sensing Reflectance Model Explicitly Accounting for Particle Backscattering Shapes

    Science.gov (United States)

    He, Shuangyan; Zhang, Xiaodong; Xiong, Yuanheng; Gray, Deric

    2017-11-01

    The subsurface remote sensing reflectance (rrs, sr⁻¹), particularly its bidirectional reflectance distribution function (BRDF), depends fundamentally on the angular shape of the volume scattering functions (VSFs, m⁻¹ sr⁻¹). Recent technological advancement has greatly expanded the collection, and the knowledge of natural variability, of the VSFs of oceanic particles. This allows us to test Zaneveld's theoretical rrs model that explicitly accounts for particle VSF shapes. We parameterized the rrs model based on HydroLight simulations using 114 VSFs measured in three coastal waters around the United States and in oceanic waters of the North Atlantic Ocean. With the absorption coefficient (a), backscattering coefficient (bb), and VSF shape as inputs, the parameterized model is able to predict rrs with a root mean square relative error of ~4% for solar zenith angles from 0 to 75°, viewing zenith angles from 0 to 60°, and viewing azimuth angles from 0 to 180°. A test with the field data indicates the performance of our model, when using only a and bb as inputs and selecting the VSF shape using bb, is comparable to or slightly better than the currently used models by Morel et al. and Lee et al. Explicitly expressing VSF shapes in rrs modeling has great potential to further constrain the uncertainty in ocean color studies as our knowledge of the VSFs of natural particles continues to improve. Our study represents a first effort in this direction.
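
    The widely used baselines this model is compared against express rrs as a low-order polynomial in u = bb/(a + bb); the sketch below implements only that baseline, with commonly quoted approximate coefficients rather than the parameterization fitted in this paper, and without the VSF-shape dependence that is the paper's contribution.

        def rrs_baseline(a, bb, g0=0.089, g1=0.125):
            """Baseline subsurface remote-sensing reflectance rrs(u) in sr^-1,
            with u = bb / (a + bb); g0, g1 are approximate, commonly quoted values."""
            u = bb / (a + bb)
            return (g0 + g1 * u) * u

        # Illustrative absorption/backscattering pairs, in m^-1.
        for a, bb in [(0.05, 0.002), (0.2, 0.01), (1.0, 0.05)]:
            print(f"a={a:<4} bb={bb:<5} rrs={rrs_baseline(a, bb):.5f} sr^-1")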

  17. Palaeomagnetic dating method accounting for post-depositional remanence and its application to geomagnetic field modelling

    Science.gov (United States)

    Nilsson, A.; Suttie, N.

    2016-12-01

    Sedimentary palaeomagnetic data may exhibit some degree of smoothing of the recorded field due to the gradual processes by which the magnetic signal is 'locked in' over time. Here we present a new Bayesian method to construct age-depth models based on palaeomagnetic data, taking into account and correcting for potential lock-in delay. The age-depth model is built on the widely used "Bacon" dating software by Blaauw and Christen (2011, Bayesian Analysis 6, 457-474) and is designed to combine both radiocarbon and palaeomagnetic measurements. To our knowledge, this is the first palaeomagnetic dating method that addresses the potential problems related to post-depositional remanent magnetisation acquisition in age-depth modelling. Age-depth models, including site-specific lock-in depth and lock-in filter function, produced with this method are shown to be consistent with independent results based on radiocarbon wiggle-match dated sediment sections. Besides its primary use as a dating tool, our new method can also be used specifically to identify the most likely lock-in parameters for a specific record. We explore the potential to use these results to construct high-resolution geomagnetic field models based on sedimentary palaeomagnetic data, adjusting for smoothing induced by post-depositional remanent magnetisation acquisition. Potentially, this technique could enable reconstructions of the Holocene geomagnetic field with the same amplitude of variability observed in archaeomagnetic field models for the past three millennia.

  18. Toward a formalized account of attitudes: The Causal Attitude Network (CAN) model.

    Science.gov (United States)

    Dalege, Jonas; Borsboom, Denny; van Harreveld, Frenk; van den Berg, Helma; Conner, Mark; van der Maas, Han L J

    2016-01-01

    This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions between these reactions arise through direct causal influences (e.g., the belief that snakes are dangerous causes fear of snakes) and mechanisms that support evaluative consistency between related contents of evaluative reactions (e.g., people tend to align their belief that snakes are useful with their belief that snakes help maintain ecological balance). In the CAN model, the structure of attitude networks conforms to a small-world structure: evaluative reactions that are similar to each other form tight clusters, which are connected by a sparser set of "shortcuts" between them. We argue that the CAN model provides a realistic formalized measurement model of attitudes and therefore fills a crucial gap in the attitude literature. Furthermore, the CAN model provides testable predictions for the structure of attitudes and how they develop, remain stable, and change over time. Attitude strength is conceptualized in terms of the connectivity of attitude networks and we show that this provides a parsimonious account of the differences between strong and weak attitudes. We discuss the CAN model in relation to possible extensions, implications for the assessment of attitudes, and possibilities for further study. (c) 2015 APA, all rights reserved.
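
    The small-world claim can be made concrete with a toy network: a Watts-Strogatz graph has exactly the combination of tight clustering and short paths that the CAN model attributes to attitude networks, and its connectivity can stand in for attitude strength. The networkx sketch below is an illustration of that structural claim only, with invented sizes; it does not implement the CAN model's causal dynamics.

        import networkx as nx

        # Toy "attitude network": nodes are evaluative reactions (beliefs, feelings, behaviors).
        n_reactions, neighbours, rewire_p = 30, 4, 0.1
        G = nx.connected_watts_strogatz_graph(n_reactions, neighbours, rewire_p, seed=1)

        print("average clustering:     ", round(nx.average_clustering(G), 3))
        print("average shortest path:  ", round(nx.average_shortest_path_length(G), 3))

        # A random graph of equal density lacks the clustering (the small-world contrast).
        R = nx.gnm_random_graph(n_reactions, G.number_of_edges(), seed=1)
        print("random-graph clustering:", round(nx.average_clustering(R), 3))

        # Connectivity as a crude proxy for attitude strength (denser = "stronger" attitude).
        print("network density:        ", round(nx.density(G), 3))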

  19. Accounting for sex differences in PTSD: A multi-variable mediation model.

    Science.gov (United States)

    Christiansen, Dorte M; Hansen, Maj

    2015-01-01

    Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually. The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. The study was a quasi-prospective questionnaire survey assessing PTSD and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N=450) and 6 months after the robbery (T2, N=368; 61.1% females). Mediation was examined using an analysis designed specifically to test a multiple mediator model. Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. The findings suggest that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma-related psychiatric disorders more prevalent in females, such as depression

  20. Accounting for sex differences in PTSD: A multi-variable mediation model

    Directory of Open Access Journals (Sweden)

    Dorte M. Christiansen

    2015-01-01

    Background: Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually. Objective: The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. Method: The study was a quasi-prospective questionnaire survey assessing PTSD and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N=450) and 6 months after the robbery (T2, N=368; 61.1% females). Mediation was examined using an analysis designed specifically to test a multiple mediator model. Results: Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. Conclusions: The findings suggest that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma-related psychiatric disorders more prevalent in females, such as depression.
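
    A generic sketch of the kind of multiple-mediator analysis described, product-of-coefficients indirect effects with a bootstrap, is given below on simulated data with two stand-in mediators. The variable names, effect sizes, and sample are invented, and the code is not the specific procedure used in the study.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 400
        sex = rng.integers(0, 2, n).astype(float)             # 0 = male, 1 = female (simulated)

        # Two stand-in mediators that are more prevalent in one group (invented effects).
        anxiety_sensitivity = 0.5 * sex + rng.normal(0, 1, n)
        peritraumatic_fear = 0.7 * sex + rng.normal(0, 1, n)
        ptsd = 0.6 * anxiety_sensitivity + 0.8 * peritraumatic_fear + rng.normal(0, 1, n)

        def indirect_effects(sex, mediators, outcome):
            """Product-of-coefficients indirect effect of sex through each mediator."""
            M = np.column_stack(mediators)
            b = sm.OLS(outcome, sm.add_constant(np.column_stack([sex, M]))).fit().params[2:]
            a = [sm.OLS(med, sm.add_constant(sex)).fit().params[1] for med in mediators]
            return np.array(a) * b

        point = indirect_effects(sex, [anxiety_sensitivity, peritraumatic_fear], ptsd)
        boot = np.array([indirect_effects(sex[idx],
                                          [anxiety_sensitivity[idx], peritraumatic_fear[idx]],
                                          ptsd[idx])
                         for idx in (rng.integers(0, n, n) for _ in range(1000))])
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        print("indirect effects:", point.round(3))
        print("bootstrap 95% CIs:", np.c_[lo, hi].round(3))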

  1. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    Science.gov (United States)

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of an experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related to inaccurate measurement technique or to the validity of measurements, seem not to be well known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated to reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
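
    The attenuation problem and the multiple-indicator remedy can be illustrated without full SEM machinery: with two noisy indicators of the same latent construct, one indicator can serve as an instrument for the other, which removes the bias that afflicts the single-proxy regression. The numpy sketch below uses invented data and a moment-based estimator as a stand-in for the latent-variable SEM described above.

        import numpy as np

        rng = np.random.default_rng(1)
        n, beta_true = 5000, -0.4                       # invented effect of reproductive effort

        effort = rng.normal(0, 1, n)                    # latent lifetime reproductive effort
        parity = effort + rng.normal(0, 0.8, n)         # noisy indicator 1 (e.g., parity)
        lrs = effort + rng.normal(0, 0.8, n)            # noisy indicator 2 (e.g., reproductive success)
        survival = beta_true * effort + rng.normal(0, 1, n)   # centred post-reproductive survival

        # Single best proxy, as in the traditional regression approach: attenuated slope.
        beta_proxy = np.cov(parity, survival)[0, 1] / np.var(parity, ddof=1)

        # Two indicators: use one as an instrument for the other (errors-in-variables fix).
        beta_latent = np.cov(lrs, survival)[0, 1] / np.cov(lrs, parity)[0, 1]

        print(f"true {beta_true:.2f}  single-proxy {beta_proxy:.2f}  two-indicator {beta_latent:.2f}")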

  2. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    Science.gov (United States)

    Rozanov, V. V.; Dinter, T.; Rozanov, A. V.; Wolanin, A.; Bracher, A.; Burrows, J. P.

    2017-06-01

    SCIATRAN is a comprehensive software package which is designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18-40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean-atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which accurately takes into account inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean-atmosphere radiative transfer solver presented by Rozanov et al. [61] we have implemented the simulation of rotational Raman scattering, vibrational Raman scattering, and chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how some selected radiative transfer problems are solved using the SCIATRAN package. In addition we present selected comparisons of SCIATRAN simulations with published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package along with a detailed User's Guide is made available to scientists and students who are undertaking their own research, typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen: http://www.iup.physik.uni-bremen.de.

  3. Implementation of a cost-accounting model in a biobank: practical implications.

    Science.gov (United States)

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use these resources more efficiently is making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences on the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real/active biobank, including a comprehensive explanation on how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. The global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the level of governmental research and development policies.

  4. A Modified Model to Estimate Building Rental Multipliers Accounting for Ad Valorem Operating Expenses

    Directory of Open Access Journals (Sweden)

    Smolyak S.A.

    2016-09-01

    To develop ideas on building element valuation contained in the first article on the subject published in REMV, we propose an elaboration of the approach accounting for ad valorem expenses incidental to property management, such as land taxes, income/capital gains tax, and insurance premium costs; all such costs, being of an ad valorem nature in the first instance, cause circularity in the logic of the model, which, however, is not intractable under the proposed approach. The resulting formulas for carrying out practical estimation of building rental multipliers and, in consequence, of building values, turn out to be somewhat modified, and we demonstrate the sensitivity of the developed approach to the impact of these ad valorem factors. On the other hand, it is demonstrated that accounting for building depreciation charges, which should seemingly be included among the considered ad valorem factors, cancels out and does not have any impact on the resulting estimates. However, treating the depreciation of buildings in quantifiable economic terms as a reduction in derivable operating benefits over time (instead of mere physical indications, such as age), we also demonstrate that the approach has implications for estimating the economic service lives of buildings and can be practical when used in conjunction with the market-related approach to valuation – from which the requisite model inputs can be extracted as shown in the final part of the paper.

  5. Standardized facility record and report model system (FARMS) for material accounting and control

    International Nuclear Information System (INIS)

    Nishimura, Hideo; Ihara, Hitoshi; Hisamatsu, Yoshinori.

    1990-07-01

    A facility in which nuclear materials are handled maintains a facility system of accounting for and control of nuclear material. Such a system contains, as one of its key elements, a record and report system. This record and report information system is a rather complex one because it needs to conform to various requirements from the national or international safeguards authorities and from the plant operator who has to achieve a safe and economical operation of the plant. Therefore it is mandatory to computerize such an information system. The authors have reviewed these requirements and standardized the book-keeping and reporting procedures in line with their computerization. On the basis of this result the authors have developed a computer system, FARMS, named as an acronym of standardized facility record and report model system, mainly reflecting the requirements from the national and international safeguards authorities. The development of FARMS has also been carried out as a JASPAS - Japan Support Programme for Agency Safeguards - project since 1985 and the FARMS code was demonstrated as an accountancy tool in the regional SSAC training courses held in Japan in 1985 and 1987. This report describes the standardization of a record and report system at the facility level, its computerization as a model system and the demonstration of the developed system, FARMS. (author)

  6. MODELLING OF THERMOELASTIC TRANSIENT CONTACT INTERACTION FOR BINARY BEARING TAKING INTO ACCOUNT CONVECTION

    Directory of Open Access Journals (Sweden)

    Igor KOLESNIKOV

    2016-12-01

    Serviceability of metal-polymeric "dry-friction" sliding bearings depends on many parameters, including the rotational speed, friction coefficient, thermal and mechanical properties of the bearing system and, as a result, the value of the contact temperature. The objective of this study is to develop a computational model for the metal-polymeric bearing, to determine on the basis of this model the temperature distribution and the equivalent and contact stresses for the elements of the bearing arrangement, and to select the optimal parameters of the bearing system to achieve thermal balance. The static problem for the combined sliding bearing taking into account heat generation due to friction has been studied in [1]; the dynamic thermoelastic problem of shaft rotation in single- and double-layer bronze bearings was investigated in [2, 3].

  7. Calibration of an experimental model of tritium storage bed designed for 'in situ' accountability

    International Nuclear Information System (INIS)

    Bidica, Nicolae; Stefanescu, Ioan; Bucur, Ciprian; Bulubasa, Gheorghe; Deaconu, Mariea

    2009-01-01

    Objectives: Tritium accountancy of the storage beds in tritium facilities is an important issue for tritium inventory control. The purpose of our work was to perform calibration of an experimental model of a tritium storage bed with a special design, using electric heaters to simulate tritium decay, and to evaluate the detection limit of the accountancy method. The objective of this paper is to present the experimental method used for calibration of the storage bed and the experimental results, consisting of calibration curves and the detection limit. Our method is based on a 'self-assaying' tritium storage bed. The basic design of our storage bed consists, in principle, of a uniform distribution of the storage material on several thin copper fins (in order to obtain a uniform temperature field inside the bed), an electrical heat source to simulate the tritium decay heat, a system of thermocouples for measuring the temperature field inside the bed, and good thermal isolation of the bed from the external environment. Within this design, the tritium accounting method is based on determining the decay heat of tritium by measuring the temperature increase of the isolated storage bed. The experimental procedure consisted of measuring the temperature field inside the bed for a few values of the power injected with the aid of the electrical heat source. Data were collected for a few hours and the temperature increase rate was determined for each value of the injected power. A graphical representation of temperature rise versus injected power was obtained. This accounting method for tritium inventory stored as metal tritide is a reliable solution for in-situ tritium accountability in a tritium handling facility. Several improvements can be made to the design of the storage bed in order to improve the measurement accuracy and to obtain a lower detection limit, for instance the use of more accurate thermocouples or special...
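
    The calibration described above amounts to fitting a straight line between injected power and the rate of temperature rise, and then inverting it into a tritium inventory through the specific decay heat of tritium (about 0.324 W per gram). The sketch below shows that arithmetic; the calibration points are invented placeholders, not the experimental values.

        import numpy as np

        # Invented calibration points: injected electrical power [W] versus the
        # measured temperature-rise rate of the isolated bed [K/h].
        power_in = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
        temp_rate = np.array([0.01, 0.26, 0.52, 1.03, 2.05])

        slope, intercept = np.polyfit(power_in, temp_rate, 1)     # K/h per W, and offset

        def tritium_inventory(rate_k_per_h, decay_heat_w_per_g=0.324):
            """Convert a measured temperature-rise rate into decay power via the
            calibration line, then into grams of tritium via its specific decay heat."""
            decay_power = (rate_k_per_h - intercept) / slope
            return decay_power / decay_heat_w_per_g

        print(f"calibration: dT/dt = {slope:.3f} * P + {intercept:.3f}  [K/h]")
        print(f"estimated inventory for 0.80 K/h: {tritium_inventory(0.80):.2f} g tritium")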

  8. A model proposal concerning balance scorecard application integrated with resource consumption accounting in enterprise performance management

    Directory of Open Access Journals (Sweden)

    ORHAN ELMACI

    2014-06-01

    Full Text Available The present study investigates the Balance Scorecard (BSC) model integrated with Resource Consumption Accounting (RCA), which helps to evaluate the enterprise as a matrix structure in all its parts. It aims to measure how much the tangible and intangible values (assets) of enterprises contribute to the enterprises; in other words, it measures how effectively, actively and efficiently these values (assets) are used. In short, it aims to measure the sustainable competency of enterprises. Because expressing the effect of the tangible and intangible values (assets) of the enterprise on performance with mathematical and statistical methods alone is insufficient, the RCA method integrated with the BSC model is based on matrix structures and control models. The effects of all complex factors in the enterprise on performance (productivity and efficiency) are estimated algorithmically with a cause-and-effect diagram. The contribution of matrix structures to reaching the functional management targets of enterprises operating in an increasingly competitive market environment is discussed. Thus, in the context of modern management theories and as a contribution to the BSC approach, which is at the forefront of today's administrative science for enterprises with matrix organizational structures, a multidimensional performance evaluation model - RCA integrated with the BSC model - is proposed as a strategic planning and strategic evaluation instrument.

  9. A one-dimensional Q-machine model taking into account charge-exchange collisions

    International Nuclear Information System (INIS)

    Maier, H.; Kuhn, S.

    1992-01-01

    The Q-machine is a nontrivial bounded plasma system which is excellently suited not only for fundamental plasma physics investigations but also for the development and testing of new theoretical methods for modeling such systems. However, although Q-machines have now been around for over thirty years, it appears that there exist no comprehensive theoretical models taking into account their considerable geometrical and physical complexity with a reasonable degree of self-consistency. In the present context we are concerned with the low-density, single-emitter Q-machine, for which the most widely used model is probably the (one-dimensional) ''collisionless plane-diode model'', which has originally been developed for thermionic diodes. Although the validity of this model is restricted to certain ''axial'' phenomena, we consider it a suitable starting point for extensions of various kinds. While a generalization to two-dimensional geometry (with still collisionless plasma) is being reported elsewhere, the present work represents a first extension to collisional plasma (with still one-dimensional geometry). (author) 12 refs., 2 figs

  10. Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.

    Science.gov (United States)

    Martínez, C A; Khare, K; Rahman, S; Elzo, M A

    2017-10-01

    Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, an area that has recently experienced great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and can accommodate general structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information in the prediction process through its use in constructing the graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
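
    A minimal numerical sketch of the covariance structure that a Gaussian covariance graph model imposes is given below; it only illustrates the zero-pattern idea and is not the Bayes GCov, GCov-KR or GCov-H samplers developed in the study. The graph, variance and covariance values are hypothetical.

```python
# Minimal illustration of the covariance structure assumed in a Gaussian
# covariance graph model (GCovGM): marker effects are jointly Gaussian and
# Cov(u_i, u_j) = 0 whenever markers i and j are NOT joined by an edge in G.
import numpy as np

n_markers = 5
# Undirected graph G as an edge list (hypothetical adjacency between markers).
edges = [(0, 1), (1, 2), (3, 4)]

sigma2 = 0.5                      # marker-effect variance (assumed)
rho = 0.3                         # covariance assigned to linked markers (assumed)
cov = np.eye(n_markers) * sigma2
for i, j in edges:
    cov[i, j] = cov[j, i] = rho   # non-zero entries only where G has an edge

# Check positive definiteness before sampling.
assert np.all(np.linalg.eigvalsh(cov) > 0)

# Draw correlated marker allele substitution effects u ~ N(0, cov).
rng = np.random.default_rng(1)
u = rng.multivariate_normal(np.zeros(n_markers), cov, size=1000)
print(np.round(np.cov(u, rowvar=False), 2))  # empirical covariance ~ cov
```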

  11. A Model for Urban Environment and Resource Planning Based on Green GDP Accounting System

    Directory of Open Access Journals (Sweden)

    Linyu Xu

    2013-01-01

    Full Text Available The urban environment and resources are currently on a course that is unsustainable in the long run due to excessive human pursuit of economic goals. Thus, it is very important to develop a model to analyse the relationship between urban economic development and environmental resource protection during the process of rapid urbanisation. This paper proposes a model to identify the key factors in urban environment and resource regulation based on a green GDP accounting system, which consists of four parts: economy, society, resource, and environment. In this model, the analytic hierarchy process (AHP) method and a modified Pearl curve model were combined to allow for dynamic evaluation, with a higher green GDP value as the planning target. The model was applied to the environmental and resource planning problem of Wuyishan City, and the results showed that energy use was a key factor that influenced the urban environment and resource development. Biodiversity and air quality were the most sensitive factors that influenced the value of green GDP in the city. According to the analysis, urban environment and resource planning could be improved to promote sustainable development in Wuyishan City.
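
    The AHP weighting step mentioned above can be illustrated with a short sketch: criterion weights are taken as the normalized principal eigenvector of a pairwise-comparison matrix, with a consistency check. The comparison matrix below is hypothetical and not the one used for Wuyishan City.

```python
# Sketch of the AHP step used to weight green-GDP criteria.
import numpy as np

criteria = ["economy", "society", "resource", "environment"]
# A[i, j] = relative importance of criterion i over criterion j (Saaty scale).
A = np.array([
    [1.0, 2.0, 3.0, 1.0],
    [0.5, 1.0, 2.0, 0.5],
    [1/3, 0.5, 1.0, 1/3],
    [1.0, 2.0, 3.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalized priority weights

# Consistency ratio (CR < 0.1 is the usual acceptance threshold).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
ri = 0.90                                   # random index for n = 4
print(dict(zip(criteria, np.round(w, 3))), "CR =", round(ci / ri, 3))
```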

  12. An agent-based simulation model to study accountable care organizations.

    Science.gov (United States)

    Liu, Pai; Wu, Shinyi

    2016-03-01

    Creating accountable care organizations (ACOs) has been widely discussed as a strategy to control rapidly rising healthcare costs and improve quality of care; however, building an effective ACO is a complex process involving multiple stakeholders (payers, providers, patients) with their own interests. Also, implementation of an ACO is costly in terms of time and money. Immature design could cause safety hazards. Therefore, there is a need for analytical model-based decision-support tools that can predict the outcomes of different strategies to facilitate ACO design and implementation. In this study, an agent-based simulation model was developed to study ACOs that considers payers, healthcare providers, and patients as agents under the shared saving payment model of care for congestive heart failure (CHF), one of the most expensive causes of sometimes preventable hospitalizations. The agent-based simulation model has identified the critical determinants for the payment model design that can motivate provider behavior changes to achieve maximum financial and quality outcomes of an ACO. The results show nonlinear provider behavior change patterns corresponding to changes in payment model designs. The outcomes vary by providers with different quality or financial priorities, and are most sensitive to the cost-effectiveness of CHF interventions that an ACO implements. This study demonstrates an increasingly important method to construct a healthcare system analytics model that can help inform health policy and healthcare management decisions. The study also points out that the likely success of an ACO is interdependent with payment model design, provider characteristics, and cost and effectiveness of healthcare interventions.
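
    The kind of provider behavior the simulation explores can be illustrated with a deliberately simplified, non-agent-level sketch: a provider chooses a CHF care-management effort level that maximizes its expected payoff under a shared-savings rate. All dollar values, effectiveness figures and the payoff rule are assumptions for illustration, not parameters of the published model.

```python
# Toy sketch of provider behavior under a shared-savings ACO payment design.
BASELINE_COST = 10_000      # expected annual CHF cost per patient (assumed)
INTERVENTION_COST = 600     # per-patient cost of the CHF program (assumed)
COST_REDUCTION = 0.15       # effectiveness of the intervention (assumed)

def provider_profit(effort, shared_savings_rate, n_patients=100):
    """Expected provider profit for a given care-management effort in [0, 1]."""
    savings = BASELINE_COST * COST_REDUCTION * effort * n_patients
    program_cost = INTERVENTION_COST * effort * n_patients
    return shared_savings_rate * savings - program_cost

def best_response(shared_savings_rate):
    """Provider picks the effort level that maximizes its expected profit."""
    efforts = [i / 10 for i in range(11)]
    return max(efforts, key=lambda e: provider_profit(e, shared_savings_rate))

for rate in (0.3, 0.5, 0.7):
    print(f"shared savings rate {rate:.1f} -> provider effort {best_response(rate)}")
```

    Even this linear toy shows the threshold-like response to payment design that the full agent-based model explores in far richer detail.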

  13. Air quality modeling for accountability research: Operational, dynamic, and diagnostic evaluation

    Science.gov (United States)

    Henneman, Lucas R. F.; Liu, Cong; Hu, Yongtao; Mulholland, James A.; Russell, Armistead G.

    2017-10-01

    Photochemical grid models play a central role in air quality regulatory frameworks, including in air pollution accountability research, which seeks to demonstrate the extent to which regulations causally impacted emissions, air quality, and public health. There is a need, however, to develop and demonstrate appropriate practices for model application and evaluation in an accountability framework. We employ a combination of traditional and novel evaluation techniques to assess four years (2001-02, 2011-12) of simulated pollutant concentrations across a decade of major emissions reductions using the Community Multiscale Air Quality (CMAQ) model. We have grouped our assessments into three categories: Operational evaluation investigates how well CMAQ captures absolute concentrations; dynamic evaluation investigates how well CMAQ captures changes in concentrations across the decade of changing emissions; diagnostic evaluation investigates how CMAQ attributes variability in concentrations and sensitivities to emissions between meteorology and emissions, and how well this attribution compares to empirical statistical models. In this application, CMAQ captures O3 and PM2.5 concentrations and change over the decade in the Eastern United States similarly to past CMAQ applications and in line with model evaluation guidance; however, some PM2.5 species (EC, OC, and sulfate in particular) exhibit high biases in various months. CMAQ-simulated PM2.5 has a high bias in winter months and low bias in the summer, mainly due to a high bias in OC during the cold months and low bias in OC and sulfate during the summer. Simulated O3 and PM2.5 changes across the decade have normalized mean bias of less than 2.5% and 17%, respectively. Detailed comparisons suggest biased EC emissions, negative wintertime SO4^2- sensitivities to mobile source emissions, and incomplete capture of OC chemistry in the summer and winter. Photochemical grid model-simulated O3 and PM2.5 responses to emissions and
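
    For the operational-evaluation step, the bias metrics quoted above (e.g., normalized mean bias) can be computed as in the short sketch below; the observation and model arrays are placeholders rather than CMAQ output.

```python
# Normalized mean bias (NMB) and normalized mean error (NME) of modelled
# versus observed concentrations.
import numpy as np

obs = np.array([35.0, 42.0, 28.0, 55.0, 40.0])   # observed PM2.5, ug/m3 (placeholder)
mod = np.array([35.0, 36.0, 31.0, 49.0, 45.0])   # modelled PM2.5, ug/m3 (placeholder)

def nmb(model, observed):
    return (model - observed).sum() / observed.sum()

def nme(model, observed):
    return np.abs(model - observed).sum() / observed.sum()

print(f"NMB = {100 * nmb(mod, obs):.1f}%, NME = {100 * nme(mod, obs):.1f}%")
```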

  14. A margin model to account for respiration-induced tumour motion and its variability

    International Nuclear Information System (INIS)

    Coolens, Catherine; Webb, Steve; Evans, Phil M; Shirato, H; Nishioka, K

    2008-01-01

    In order to reduce the sensitivity of radiotherapy treatments to organ motion, compensation methods are being investigated such as gating of treatment delivery, tracking of tumour position, 4D scanning and planning of the treatment, etc. An outstanding problem that would occur with all these methods is the assumption that breathing motion is reproducible throughout the planning and delivery process of treatment. This is obviously not a realistic assumption and is one that will introduce errors. A dynamic internal margin model (DIM) is presented that is designed to follow the tumour trajectory and account for the variability in respiratory motion. The model statistically describes the variation of the breathing cycle over time, i.e. the uncertainty in motion amplitude and phase reproducibility, in a polar coordinate system from which margins can be derived. This allows accounting for an additional gating window parameter for gated treatment delivery as well as minimizing the area of normal tissue irradiated. The model was illustrated with abdominal motion for a patient with liver cancer and tested with internal 3D lung tumour trajectories. The results confirm that the respiratory phases around exhale are most reproducible and have the smallest variation in motion amplitude and phase (approximately 2 mm). More importantly, the margin area covering normal tissue is significantly reduced by using trajectory-specific margins (as opposed to conventional margins) as the angular component is by far the largest contributor to the margin area. The statistical approach to margin calculation, in addition, offers the possibility for advanced online verification and updating of breathing variation as more data become available

  15. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    International Nuclear Information System (INIS)

    Rozanov, V.V.; Dinter, T.; Rozanov, A.V.; Wolanin, A.; Bracher, A.; Burrows, J.P.

    2017-01-01

    SCIATRAN is a comprehensive software package which is designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18–40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean–atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which takes into account accurately inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean–atmosphere radiative transfer solver presented by Rozanov et al. we have implemented the simulation of the rotational Raman scattering, vibrational Raman scattering, chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how some selected radiative transfer problems are solved using the SCIATRAN package. In addition we present selected comparisons of SCIATRAN simulations with those published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package along with a detailed User's Guide is made available for scientists and students, who are undertaking their own research typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen: (http://www.iup.physik.uni-bremen.de). - Highlights: • A new version of the software package SCIATRAN is presented. • Inelastic scattering in water and atmosphere is implemented in SCIATRAN. • Raman scattering and fluorescence can be included in radiative transfer calculations. • Comparisons to other radiative transfer models show excellent agreement. • Comparisons to observations show consistent results.

  16. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Science.gov (United States)

    Mastin, Larry G.; Van Eaton, Alexa; Durant, A.J.

    2016-01-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine-ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16–17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m−3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between  ∼  2.3 and 2.7φ (0.20–0.15 mm), despite large variations in erupted mass (0.25–50 Tg), plume height (8.5–25 km), mass fraction of fine ( discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
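
    The ad hoc aggregation treatment described above can be sketched as re-binning fine ash into a log-normal aggregate size distribution specified in phi units, where the diameter in millimetres is 2 to the power of minus phi. The values of mu_agg, sigma_agg and rho_agg below are illustrative choices in the range discussed in the abstract, not the study's inputs.

```python
# Log-normal aggregate size distribution in phi units (d_mm = 2**-phi).
import numpy as np
from scipy.stats import norm

mu_agg = 2.5        # median aggregate size, phi units (~0.18 mm)
sigma_agg = 0.2     # standard deviation in phi units (assumed)
rho_agg = 600.0     # aggregate density, kg/m3 (held fixed in the simulations)

phi_edges = np.arange(0.0, 6.5, 0.5)            # size-bin edges in phi
cdf = norm.cdf(phi_edges, loc=mu_agg, scale=sigma_agg)
mass_fraction = np.diff(cdf)                    # mass fraction per phi bin

d_mm = 2.0 ** -(0.5 * (phi_edges[:-1] + phi_edges[1:]))   # bin-centre diameter
for d, f in zip(d_mm, mass_fraction):
    if f > 1e-3:
        print(f"d = {d:.3f} mm, mass fraction = {f:.3f}")
```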

  17. Modelling of gas-metal arc welding taking into account metal vapour

    Energy Technology Data Exchange (ETDEWEB)

    Schnick, M; Fuessel, U; Hertel, M; Haessler, M [Institute of Surface and Manufacturing Technology, Technische Universitaet Dresden, D-01062 Dresden (Germany); Spille-Kohoff, A [CFX Berlin Software GmbH, Karl-Marx-Allee 90, 10243 Berlin (Germany); Murphy, A B [CSIRO Materials Science and Engineering, PO Box 218, Lindfield NSW 2070 (Australia)

    2010-11-03

    The most advanced numerical models of gas-metal arc welding (GMAW) neglect vaporization of metal, and assume an argon atmosphere for the arc region, as is also common practice for models of gas-tungsten arc welding (GTAW). These models predict temperatures above 20 000 K and a temperature distribution similar to GTAW arcs. However, spectroscopic temperature measurements in GMAW arcs demonstrate much lower arc temperatures. In contrast to measurements of GTAW arcs, they have shown the presence of a central local minimum of the radial temperature distribution. This paper presents a GMAW model that takes into account metal vapour and that is able to predict the local central minimum in the radial distributions of temperature and electric current density. The influence of different values for the net radiative emission coefficient of iron vapour, which vary by up to a factor of one hundred, is examined. It is shown that these net emission coefficients cause differences in the magnitudes, but not in the overall trends, of the radial distribution of temperature and current density. Further, the influence of the metal vaporization rate is investigated. We present evidence that, for higher vaporization rates, the central flow velocity inside the arc is decreased and can even change direction so that it is directed from the workpiece towards the wire, although the outer plasma flow is still directed towards the workpiece. In support of this thesis, we have attempted to reproduce the measurements of Zielinska et al for spray-transfer mode GMAW numerically, and have obtained reasonable agreement.

  18. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    Science.gov (United States)

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

    Collecting and analyzing high-frequency emission measurements has become very common during the past decade, as significantly more information with respect to formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements with engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application which uses data collected from a diesel bus under real-world driving conditions.
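
    The core idea of a transport delay that varies smoothly with exhaust flow rate can be sketched with a logistic transition between a long low-flow delay and a short high-flow delay; the functional form and coefficient values below are illustrative, not the estimated smooth transition regression of the paper.

```python
# Transport delay as a smooth (logistic) function of exhaust flow rate.
import numpy as np

def transport_delay(flow, d_low=8.0, d_high=2.0, gamma=0.05, c=40.0):
    """Delay (s) as a smooth function of exhaust flow rate (arbitrary units).

    d_low / d_high : delays at the low- and high-flow extremes (assumed)
    gamma, c       : slope and location of the smooth transition (assumed)
    """
    g = 1.0 / (1.0 + np.exp(-gamma * (flow - c)))   # logistic transition in [0, 1]
    return d_low + (d_high - d_low) * g

for q in (10, 40, 100):
    print(f"flow = {q:4d}  ->  delay = {transport_delay(q):.2f} s")
```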

  19. Material Protection, Accounting, and Control Technologies (MPACT): Modeling and Simulation Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dunn, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Durbin, Samual [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Durkee, Joe W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); England, Jeff [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, Robert [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Ketusky, Edward [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Li, Shelly [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lindgren, Eric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meier, David [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Miller, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Osburn, Laura Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pereira, Candido [Argonne National Lab. (ANL), Argonne, IL (United States); Rauch, Eric Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Scaglione, John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Scherer, Carolynn P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sprinkle, James K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yoo, Tae-Sic [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-05

    The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy’s (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal. This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility, distributed test bed, that connects the individual tools being developed at National Laboratories and university research establishments, is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling. To aid in framing its long-term goal, during FY16, a modeling and simulation roadmap is being developed for three major areas of investigation: (1) radiation transport and sensors, (2) process and chemical models, and (3) shock physics and assessments. For each area, current modeling approaches are described, and gaps and needs are identified.

  20. Integrated Approach Model of Risk, Control and Auditing of Accounting Information Systems

    Directory of Open Access Journals (Sweden)

    Claudiu BRANDAS

    2013-01-01

    Full Text Available The use of IT in the financial and accounting processes is growing fast and this leads to an increase in the research and professional concerns about the risks, control and audit of Accounting Information Systems (AIS). In this context, the risk and control of AIS approach is a central component of processes for IT audit, financial audit and IT Governance. Recent studies in the literature on the concepts of risk, control and auditing of AIS outline two approaches: (1) a professional approach in which we can fit ISA, COBIT, IT Risk, COSO and SOX, and (2) a research-oriented approach in which we emphasize research on continuous auditing and fraud using information technology. Starting from the limits of existing approaches, our study aims to develop and test an Integrated Approach Model of Risk, Control and Auditing of AIS on three cycles of business processes: the purchases cycle, the sales cycle and the cash cycle, in order to improve the efficiency of IT Governance, as well as ensuring the integrity, reality, accuracy and availability of financial statements.

  1. Antecedents and Consequences of Individual Performance Analysis of Turnover Intention Model (Empirical Study of Public Accountants in Indonesia)

    OpenAIRE

    Raza, Hendra; Maksum, Azhar; Erlina; Lumban Raja, Prihatin

    2014-01-01

    This study aims to examine empirically the antecedents of individual performance and its consequences for turnover intention in public accounting firms. Eight variables are measured, consisting of auditors' empowerment, innovation, professionalism, role ambiguity, role conflict, organizational commitment, individual performance and turnover intention. Data analysis is based on 163 public accountants using Structural Equation Modeling assisted with an appli...

  2. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

    Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated

  3. Accounting for treatment use when validating a prognostic model: a simulation study

    Directory of Open Access Journals (Sweden)

    Romin Pajouheshnia

    2017-07-01

    Full Text Available Abstract Background Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. Methods We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Results Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. Conclusions When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and
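
    A compact sketch of the IPW idea discussed in both records above: estimate each individual's probability of remaining untreated, weight the untreated subjects by its inverse, and recompute a calibration measure (here the observed:expected ratio). The simulated data, propensity model and variable names are illustrative only, not the authors' implementation.

```python
# IPW correction for treatment use when validating a model of untreated risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                                   # a risk factor
risk = 1 / (1 + np.exp(-(-1.0 + x)))                     # true untreated risk
treated = rng.binomial(1, 1 / (1 + np.exp(-(x - 0.5))))  # sicker -> more treatment
outcome = rng.binomial(1, np.where(treated == 1, 0.6 * risk, risk))
predicted = risk                                         # model predicts untreated risk

# Propensity of remaining untreated, estimated from covariates.
ps_untreated = LogisticRegression().fit(x.reshape(-1, 1), 1 - treated)
w = 1.0 / ps_untreated.predict_proba(x.reshape(-1, 1))[:, 1]

untreated = treated == 0
naive_oe = outcome.mean() / predicted.mean()             # ignores treatment use
ipw_oe = (np.average(outcome[untreated], weights=w[untreated])
          / np.average(predicted[untreated], weights=w[untreated]))
print(f"naive O:E = {naive_oe:.2f}, IPW O:E among untreated = {ipw_oe:.2f}")
```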

  4. Large proper motions in the Orion nebula

    International Nuclear Information System (INIS)

    Cudworth, K.M.; Stone, R.C.

    1977-01-01

    Several nebular features, as well as one faint star, with large proper motions were identified within the Orion nebula. The measured proper motions correspond to tangential velocities of up to approximately 70 km s⁻¹. One new probable variable star was also found.

  5. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    Science.gov (United States)

    Czoli, Christine; Da Silva, Michael; Shaul, Randi Zlotnik; d'Agincourt-Canning, Lori; Simpson, Christy; Boydell, Katherine; Rashkovan, Natalie; Vanin, Sharon

    2011-10-05

    Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories.These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed.Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the pediatric-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment. © 2011 Czoli et al; licensee BioMed Central Ltd.

  6. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

    Full Text Available Imperfect detection (i.e., failure to detect a species when the species is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability in the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2 visits) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on the time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large-scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth and to a lesser extent by stream width, and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. These species presented a widespread distribution with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provides a practical framework to incorporate variations in the detection rate in fish distribution models.
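
    The time-to-first-detection idea can be written down compactly: a site is occupied with probability psi and, if occupied, the first detection time is exponential with rate lambda; surveys that end without a detection contribute a mixture term. The maximum-likelihood sketch below is a simplified, non-Bayesian illustration with invented survey data and no covariates.

```python
# Simplified time-to-first-detection occupancy likelihood.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, data):
    psi = 1 / (1 + np.exp(-params[0]))       # occupancy probability
    lam = np.exp(params[1])                  # detection rate per unit time
    ll = 0.0
    for t, detected in data:
        if detected:                         # present and first detected at time t
            ll += np.log(psi) + np.log(lam) - lam * t
        else:                                # absent, or present but missed until t
            ll += np.log(psi * np.exp(-lam * t) + (1 - psi))
    return -ll

# Hypothetical surveys: (time to first detection or total survey length, detected?)
data = [(2.0, 1), (5.5, 1), (10.0, 0), (1.2, 1), (10.0, 0), (7.8, 1), (10.0, 0)]
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(data,))
psi_hat = 1 / (1 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])
print(f"psi = {psi_hat:.2f}, lambda = {lam_hat:.2f} detections per unit time")
```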

  7. Proper motions and distances of quasars

    International Nuclear Information System (INIS)

    Varshni, Y.P.

    1982-01-01

    The author's theory that quasars are stars raises the question of their proper motions. From the evidence presented in a previous paper, it is hypothesised that planetary nuclei and quasars are related objects and that their distributions in the galaxy are not very different. Proper motions of 30 quasars, calculated from existing measurements, are discussed. It is shown that three of these, namely PHL 1033, LB 8956 and LB 8991, have proper motions comparable to the largest proper motion known amongst the planetary nuclei. From this it is estimated that these three quasars lie within a few hundred parsecs from the sun. The evidence presented in a previous paper and the present one clearly supports the theory that quasars are stars. The possibility of using the interstellar K and H lines as distance indicators of quasars is discussed and the available evidence summarised. The desirability of determining more accurate values of the proper motions of quasars is emphasised. (Auth.)

  8. Model of medicines sales forecasting taking into account factors of influence

    Science.gov (United States)

    Kravets, A. G.; Al-Gunaid, M. A.; Loshmanov, V. I.; Rasulov, S. S.; Lempert, L. B.

    2018-05-01

    The article describes a method for forecasting sales of medicines under conditions in which the data sample is insufficient for building a model based on historical data alone. The developed method is applicable mainly to new drugs that are already licensed and released for sale but do not yet have stable sales performance in the market. The purpose of this study is to demonstrate the effectiveness of the suggested method for forecasting drug sales, taking into account the selected factors of influence revealed during the review of existing solutions and the analysis of the specificity of the area under study. Three experiments were performed on samples of different volumes, which showed an improvement in the accuracy of sales forecasting on small samples.

  9. A Thermodamage Strength Theoretical Model of Ceramic Materials Taking into Account the Effect of Residual Stress

    Directory of Open Access Journals (Sweden)

    Weiguo Li

    2012-01-01

    Full Text Available A thermodamage strength theoretical model taking into account the effect of residual stress was established and applied to each temperature phase, based on a study of the effects of various physical mechanisms on the fracture strength of ultrahigh-temperature ceramics. The effects of SiC particle size, crack size, and SiC particle volume fraction on strength at different temperatures were studied in detail. This study showed that when the flaw size is not large, a bigger SiC particle size results in a greater effect of tensile residual stress in the matrix grains on strength reduction, and this prediction coincides with experimental results. The residual stress and the combined effect of particle size and crack size play important roles in controlling material strength.

  10. REGRESSION MODEL FOR RISK REPORTING IN FINANCIAL STATEMENTS OF ACCOUNTING SERVICES ENTITIES

    Directory of Open Access Journals (Sweden)

    Mirela NICHITA

    2015-06-01

    Full Text Available The purpose of financial reports is to provide useful information to users; the utility of information is defined through the qualitative characteristics (fundamental and enhancing). The financial crisis emphasized the limits of financial reporting, which was unable to warn investors about the risks they were facing. Due to the current changes in the business environment, managers have been highly motivated to rethink and improve the risk governance philosophy, processes and methodologies. The lack of quality, timely data and adequate systems to capture, report and measure the right information across the organization is a fundamental challenge for implementing and sustaining all aspects of effective risk management. Starting with the 1980s, investors have been more interested in narratives (notes to the financial statements) than in the primary reports (financial position and performance). The research applies a regression model for the assessment of risk reporting by professional (accounting and taxation) services providers for major companies from Romania during the period 2009-2013.

  11. Asteroid proper elements from an analytical second order theory

    International Nuclear Information System (INIS)

    Knezevic, Z.; Milani, A.

    1989-01-01

    The authors have computed, by a fully analytical method, a new set of proper elements for 3322 numbered main-belt asteroids. They are presented in the following format: asteroid number, proper semimajor axis (AU), proper eccentricity, sine of proper inclination and quality code (see below). This new set is significantly more accurate than all previous ones at low to moderate eccentricities and inclinations, and especially near the main mean-motion resonances (e.g., the Themis region). This is because the short-periodic perturbations are rigorously removed, and the main second-order effects (containing the square of the ratio of the mass of Jupiter to the mass of the Sun) are accounted for. Effects arising from terms in the Hamiltonian of degree up to four in the eccentricity and inclination of both the asteroid and Jupiter are taken into account, and the fundamental frequencies g (for the perihelion) and s (for the node) of the asteroid are computed with an iterative algorithm consistent with the basic results of modern dynamics (e.g., Kolmogorov-Arnold-Moser theory)

  12. MEASURING THE UNDETECTABLE: PROPER MOTIONS AND PARALLAXES OF VERY FAINT SOURCES

    International Nuclear Information System (INIS)

    Lang, Dustin; Hogg, David W.; Jester, Sebastian; Rix, Hans-Walter

    2009-01-01

    The near future of astrophysics involves many large solid-angle, multi-epoch, multiband imaging surveys. These surveys will, at their faint limits, have data on a large number of sources that are too faint to be detected at any individual epoch. Here, we show that it is possible to measure in multi-epoch data not only the fluxes and positions, but also the parallaxes and proper motions of sources that are too faint to be detected at any individual epoch. The method involves fitting a model of a moving point source simultaneously to all imaging, taking account of the noise and point-spread function (PSF) in each image. By this method it is possible to measure the proper motion of a point source with an uncertainty close to the minimum possible uncertainty given the information in the data, which is limited by the PSF, the distribution of observation times (epochs), and the total signal-to-noise in the combined data. We demonstrate our technique on multi-epoch Sloan Digital Sky Survey (SDSS) imaging of the SDSS Southern Stripe (SDSSSS). We show that with our new technique we can use proper motions to distinguish very red brown dwarfs from very high-redshift quasars in these SDSS data, for objects that are inaccessible to traditional techniques, and with better fidelity than by multiband imaging alone. We rediscover all 10 known brown dwarfs in our sample and present nine new candidate brown dwarfs, identified on the basis of significant proper motion.
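
    A heavily simplified one-dimensional sketch of the technique is given below: a single moving point source (flux, position, proper motion) is fitted simultaneously to the pixel data of all epochs, rather than being detected epoch by epoch. The Gaussian PSF, noise level and synthetic data are assumptions for illustration; the real method works on two-dimensional images and also fits parallax.

```python
# 1-D toy fit of a moving point source to multi-epoch pixel data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
pixels = np.arange(-20.0, 21.0)              # 1-D pixel grid
epochs = np.array([0.0, 1.0, 2.0, 3.0, 4.0]) # observation times (years)
psf_sigma = 2.0                              # PSF width in pixels (assumed)
noise = 1.0                                  # per-pixel noise (assumed)

def model(_x, flux, x0, mu):
    """Predicted counts in every pixel of every epoch for a moving source."""
    centers = x0 + mu * epochs               # source position at each epoch
    img = flux * np.exp(-0.5 * ((pixels[None, :] - centers[:, None]) / psf_sigma) ** 2)
    return img.ravel()

# Simulate a source that is only marginal in any single epoch.
true = dict(flux=1.5, x0=-2.0, mu=1.0)
data = model(None, **true) + rng.normal(0, noise, size=pixels.size * epochs.size)

popt, pcov = curve_fit(model, np.arange(data.size), data,
                       p0=[1.0, 0.0, 0.5],
                       sigma=np.full(data.size, noise), absolute_sigma=True)
for name, val, err in zip(("flux", "x0", "mu"), popt, np.sqrt(np.diag(pcov))):
    print(f"{name} = {val:.2f} +/- {err:.2f}   (true {true[name]})")
```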

  13. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    Science.gov (United States)

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-03-21

    Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range, color, filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance, Gaussian, signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased to 10-15 times the estimate of the internal noise level for a 3-fold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model accounts for both the appearance and discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, the results have implications for comparing perceptual and neural response functions.
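
    Because MLDS rests on an equal-variance Gaussian signal detection model, a perceptual scale expressed in internal-noise units directly predicts paired-comparison performance. The sketch below shows that link with invented scale values; it is not the authors' data.

```python
# From a perceptual scale (in internal-noise units) to predicted discrimination.
import numpy as np
from scipy.stats import norm

# Hypothetical perceptual scale of WCE strength vs. luminance ratio.
lum_ratio = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
scale = np.array([0.0, 4.0, 7.5, 10.5, 13.0])

def percent_correct(i, j):
    d_prime = abs(scale[j] - scale[i])          # difference of scale values = d'
    return norm.cdf(d_prime / np.sqrt(2))       # 2AFC paired-comparison prediction

print(f"P(correct) for ratios 1.5 vs 2.0: {percent_correct(1, 2):.3f}")
```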

  14. An Improved Car-Following Model Accounting for Impact of Strong Wind

    Directory of Open Access Journals (Sweden)

    Dawei Liu

    2017-01-01

    Full Text Available In order to investigate the effect of strong wind on the dynamic characteristics of traffic flow, an improved car-following model based on the full velocity difference model is developed in this paper. Wind force is introduced as an influence factor of car-following behavior. Among the three components of wind force, the lift force and side force are taken into account. The linear stability analysis is carried out and the stability condition of the newly developed model is derived. Numerical analysis is made to explore the effect of strong wind on the spatial-time evolution of a small perturbation. The results show that strong wind can significantly affect the stability of traffic flow. Driving safety in strong wind is also studied by comparing the lateral force under different wind speeds with the side friction of vehicles. Finally, the fuel consumption of a vehicle in strong wind conditions is explored, and the results show that fuel consumption decreases with increasing wind speed.
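
    The modelling idea can be sketched numerically with the full velocity difference update plus an extra wind term in the acceleration; the optimal-velocity function, parameter values and the way the wind enters below are illustrative assumptions, not the paper's derived lift and side force expressions.

```python
# Full velocity difference (FVD) car-following update with an assumed wind term.
import numpy as np

def optimal_velocity(gap, v_max=30.0, d_safe=25.0):
    """A common optimal-velocity function of the headway (gap, m)."""
    return 0.5 * v_max * (np.tanh(0.1 * (gap - d_safe)) + np.tanh(0.1 * d_safe))

def fvd_acceleration(gap, v, dv, kappa=0.4, lam=0.5, wind_term=0.0):
    """FVD model: a = kappa*[V(gap) - v] + lam*dv, minus an assumed wind drag."""
    return kappa * (optimal_velocity(gap) - v) + lam * dv - wind_term

# One follower behind a leader driving at constant 20 m/s, with and without wind.
dt, steps = 0.1, 600
for wind in (0.0, 0.3):                        # wind_term in m/s^2 (assumed)
    gap, v = 40.0, 15.0                        # initial headway and speed
    for _ in range(steps):
        a = fvd_acceleration(gap, v, 20.0 - v, wind_term=wind)
        v = max(0.0, v + a * dt)
        gap += (20.0 - v) * dt
    print(f"wind_term={wind}: speed ~ {v:.1f} m/s, headway ~ {gap:.1f} m")
```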

  15. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    Science.gov (United States)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two week long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Both the main shock and the Mw=7.7 aftershock did not rupture to the trench and left most of the seismic gap unbroken, leaving the possibility of a future large earthquake in the region.

  16. WASTE MANAGEMENT PERFORMANCE OF PROPER AND NON-PROPER PARTICIPATING HOTELS IN BADUNG REGENCY, BALI PROVINCE

    Directory of Open Access Journals (Sweden)

    Putri Nilakandi Perdanawati Pitoyo

    2016-07-01

    Full Text Available Bali tourism development can lead to positive and negative impacts that threaten environmental sustainability. This research evaluates the waste-management performance of hotels, covering the management of wastewater, emissions, hazardous waste and solid waste, for hotels that participate in PROPER and for non-PROPER hotels. The research uses a qualitative descriptive method. Not all non-PROPER hotels test wastewater quality or chimney emission quality, keep an inventory of hazardous waste, or sort solid waste. The wastewater discharge of PROPER hotels ranged from 290.9 to 571.8 m3/day and that of non-PROPER hotels from 8.4 to 98.1 m3/day, with NH3 parameter values that exceed the quality standards. The quality of chimney emissions was still below the quality standard. The volume of hazardous waste from PROPER hotels ranged from 66.1 to 181.9 kg/month and from non-PROPER hotels from 5.003 to 103.42 kg/month. Hazardous waste from the PROPER hotels has been stored in the hazardous-waste TPS. The volume of solid waste from PROPER hotels ranged from 342.34 to 684.54 kg/day and from non-PROPER hotels from 4.83 to 181.51 kg/day. Neither the PROPER nor the non-PROPER hotels sort their solid waste. Hotel performance in terms of wastewater, emission, hazardous-waste and solid-waste management is better at the PROPER hotels than at the non-PROPER participants.

  17. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) simultaneously estimates land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forest trees over France using climate and forest inventory databases at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong misestimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  18. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based

  19. The potential to reduce the risk of manipulation of financial statements using the identification models of creative accounting

    Directory of Open Access Journals (Sweden)

    Zita Drábková

    2013-01-01

    Full Text Available The explanatory power of accounting information is a key issue for the decision-making of users of financial statements. A whole range of economic indicators is available to users of financial statements to measure firm productivity. When the accounting statements (and the applied methods) are manipulated, the economic indicators may reveal clearly different results. Users of financial statements should be able to assess the risk of manipulation of accounting statements in time, considering the potential risk of accounting fraud. The aim of this paper was, based on a synthesis of knowledge from the literature review, the CFEBT model analysis and the Beneish model, to propose a convenient model for identifying risks of manipulation of financial statements. The paper summarizes the outcomes, possibilities and limits of identifying manipulated financial statements. The hypothesis tested is whether there is a close relation between a loss and an increase in cash flow over a period of 3-5 years, and whether the sums of these amounts over 3-5 years reveal the same results. The hypothesis was verified on the accounting statements of the accounting entities of prepared case studies, respecting the true and fair view of accounting based on Czech accounting standards.
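
    Of the two detection models mentioned, the Beneish model lends itself to a compact sketch using its commonly published eight-variable coefficients and the frequently cited -1.78 screening threshold; the index values below are invented, and the CFEBT test relating losses to cash-flow changes over 3-5 years is not reproduced here.

```python
# Sketch of the Beneish M-score with the commonly published coefficients.
def beneish_m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
    return (-4.84 + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi
            + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * lvgi)

# Hypothetical year-on-year indices for one reporting entity.
m = beneish_m_score(dsri=1.3, gmi=1.1, aqi=1.0, sgi=1.4, depi=1.0,
                    sgai=0.9, tata=0.05, lvgi=1.0)
flag = "red flag" if m > -1.78 else "no red flag"
print(f"M-score = {m:.2f}  ({flag}, using the commonly cited -1.78 threshold)")
```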

  20. Modelling of a mecanum wheel taking into account the geometry of road rollers

    Science.gov (United States)

    Hryniewicz, P.; Gwiazda, A.; Banaś, W.; Sękala, A.; Foit, K.

    2017-08-01

    During process planning in a company, one of the basic factors associated with production costs is the operation time for particular technological jobs. The operation time consists of time units associated with the machining tasks of a workpiece as well as the time associated with loading, unloading and the transport operations of this workpiece between machining stands. Full automation of manufacturing in industrial companies tends towards a maximal reduction in machine downtimes, thereby simultaneously decreasing fixed costs. A new construction of wheeled vehicles, using Mecanum wheels, reduces the transport time of materials and workpieces between machining stands. These vehicles have the ability to move simultaneously along two axes and thus to position themselves more rapidly relative to the machining stand. The Mecanum wheel construction implies placing free rollers around the wheel, mounted at an angle of 45°, which allow movement of the vehicle not only along its axis but also perpendicular to it. Improper selection of the rollers can cause unwanted vertical movement of the vehicle, which may cause difficulty in positioning the vehicle relative to the machining stand and the need for stabilisation. Hence the proper design of the free rollers is essential in designing the whole Mecanum wheel construction; it allows the disadvantageous and unwanted vertical vibrations of a vehicle with these wheels to be avoided. The article presents the process of modelling the free rollers in order to obtain the desired shape of an unchanging, horizontal trajectory of the vehicle. This shape depends on the desired diameter of the whole Mecanum wheel, together with the road rollers, and the width of the drive wheel. Another factor related to the curvature of the trajectory is the length of the road roller and its diameter, which decreases depending on the position with respect to its centre. The additional factor, limiting construction of

  1. The cyclicality of loan loss provisions under three different accounting models: the United Kingdom, Spain, and Brazil

    Directory of Open Access Journals (Sweden)

    Antônio Maria Henri Beyle de Araújo

    2017-11-01

    Full Text Available ABSTRACT A controversy involving loan loss provisions in banks concerns their relationship with the business cycle. While international accounting standards for recognizing provisions (the incurred loss model) would presumably be pro-cyclical, accentuating the effects of the current economic cycle, an alternative model, the expected loss model, has countercyclical characteristics, acting as a buffer against economic imbalances caused by expansionary or contractionary phases in the economy. In Brazil, a mixed accounting model exists, whose behavior is not known to be pro-cyclical or countercyclical. The aim of this research is to analyze the behavior of these accounting models in relation to the business cycle, using an econometric model consisting of financial and macroeconomic variables. The study allowed us to identify the impact of credit risk behavior, earnings management, capital management, Gross Domestic Product (GDP) behavior, and the behavior of the unemployment rate on provisions in countries that use different accounting models. Data from commercial banks in the United Kingdom (incurred loss), in Spain (expected loss), and in Brazil (mixed model) were used, covering the period from 2001 to 2012. Despite the accounting models of the three countries being formed by very different rules regarding possible effects on business cycles, the results revealed pro-cyclical behavior of provisions in each country, indicating that when GDP grows, provisions tend to fall and vice versa. The results also revealed other factors influencing the behavior of loan loss provisions, such as earnings management.

  2. Modelling the range expansion of the Tiger mosquito in a Mediterranean Island accounting for imperfect detection.

    Science.gov (United States)

    Tavecchia, Giacomo; Miranda, Miguel-Angel; Borrás, David; Bengoa, Mikel; Barceló, Carlos; Paredes-Esquivel, Claudia; Schwarz, Carl

    2017-01-01

    Aedes albopictus (Diptera; Culicidae) is a highly invasive mosquito species and a competent vector of several arboviral diseases that have spread rapidly throughout the world. Prevalence and patterns of dispersal of the mosquito are of central importance for effective control of the species. We used site-occupancy models accounting for false negative detections to estimate the prevalence, the turnover, the movement pattern and the growth rate in the number of sites occupied by the mosquito in 17 localities throughout Mallorca Island. Site-occupancy probability increased from 0.35 in 2012, the year of first reported observation of the species, to 0.89 in 2015. Despite a steady increase in mosquito presence, the extinction probability was generally high, indicating a high turnover in the occupied sites. We considered two site-dependent covariates, namely the distance from the point of first observation and the estimated yearly occupancy rate in the neighborhood, as predicted by diffusion models. Results suggested that the mosquito distribution during the first year was consistent with that predicted by simple diffusion models, but was not consistent with the diffusion model in subsequent years, when it was more similar to that expected from leapfrog dispersal events. Assuming a single initial colonization event, the spread of Ae. albopictus in Mallorca followed two distinct phases, an early one consistent with diffusion movements and a second consistent with long distance, 'leapfrog', movements. The colonization of the island was fast, with ~90% of the sites estimated to be occupied 3 years after the colonization. The fast spread was likely to have occurred through vectors related to human mobility such as cars or other vehicles. Surveillance and management actions near the introduction point would only be effective during the early steps of the colonization.
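
    The multi-season dynamics behind these estimates can be sketched with the standard colonization-extinction update for occupancy, together with a detection probability below one that explains why raw detections understate occupancy. Parameter values are illustrative, chosen only to produce a rise from roughly 0.35 towards high occupancy over a few years.

```python
# Colonization-extinction projection of site occupancy with imperfect detection.
def project_occupancy(psi0, gamma, epsilon, p, years):
    psi = psi0
    for year in range(years):
        print(f"year {year}: occupancy = {psi:.2f}, "
              f"expected naive detection rate = {psi * p:.2f}")
        psi = psi * (1 - epsilon) + (1 - psi) * gamma   # colonization-extinction update

# Illustrative parameters: colonization gamma, extinction epsilon, detection p.
project_occupancy(psi0=0.35, gamma=0.75, epsilon=0.15, p=0.7, years=4)
```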

  3. A mass-density model can account for the size-weight illusion

    Science.gov (United States)

    Bergmann Tiest, Wouter M.; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object’s mass, and the other from the object’s density, with estimates’ weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects’ density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object’s density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.

  4. A mass-density model can account for the size-weight illusion.

    Science.gov (United States)

    Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object's mass, and the other from the object's density, with estimates' weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects' density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
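
    The weighting scheme described above is standard reliability-weighted (inverse-variance) cue combination. The short sketch below is our illustration with made-up numbers, ignoring the correlated-noise term of the full model; it shows why better volume information (a more reliable density estimate) pulls the heaviness percept toward the object's density.

        # Reliability-weighted average of a mass estimate and a density estimate.
        def combined_heaviness(mass_est, density_est, var_mass, var_density):
            w_mass = (1.0 / var_mass) / (1.0 / var_mass + 1.0 / var_density)
            return w_mass * mass_est + (1.0 - w_mass) * density_est

        # Two objects of equal mass; the smaller one is denser. With reliable volume
        # information (low density variance) the denser object is judged heavier.
        print(combined_heaviness(mass_est=1.0, density_est=1.4, var_mass=0.2, var_density=0.1))
        print(combined_heaviness(mass_est=1.0, density_est=0.7, var_mass=0.2, var_density=0.1))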

  5. Carbon accounting of forest bioenergy: from model calibrations to policy options (Invited)

    Science.gov (United States)

    Lamers, P.

    2013-12-01

    knowledge in the field by comparing different state-of-the-art temporal forest carbon modeling efforts, and discusses whether or to what extent a deterministic 'carbon debt' accounting is possible and appropriate. It concludes upon the possible scientific and eventually political choices in temporal carbon accounting for regulatory frameworks including alternative options to address unintentional carbon losses within forest ecosystems/bioenergy systems.

  6. Long-term fiscal implications of funding assisted reproduction: a generational accounting model for Spain

    Directory of Open Access Journals (Sweden)

    R. Matorras

    2015-12-01

    Full Text Available The aim of this study was to assess the lifetime economic benefits of assisted reproduction in Spain by calculating the return on this investment. We developed a generational accounting model that simulates the flow of taxes paid by the individual, minus direct government transfers received over the individual’s lifetime. The difference between discounted transfers and taxes minus the cost of either IVF or artificial insemination (AI) equals the net fiscal contribution (NFC) of a child conceived through assisted reproduction. We conducted sensitivity analysis to test the robustness of our results under various macroeconomic scenarios. A child conceived through assisted reproduction would contribute €370,482 in net taxes to the Spanish Treasury and would receive €275,972 in transfers over their lifetime. Taking into account that only 75% of assisted reproduction pregnancies are successful, the NFC was estimated at €66,709 for IVF-conceived children and €67,253 for AI-conceived children. The return on investment for each euro invested was €15.98 for IVF and €18.53 for AI. The long-term NFC of a child conceived through assisted reproduction could range from €466,379 to €-9,529 (IVF) and from €466,923 to €-8,985 (AI). The return on investment would vary between €-2.28 and €111.75 (IVF), and €-2.48 and €128.66 (AI), for each euro invested. The break-even point at which the financial position would begin to favour the Spanish Treasury ranges between 29 and 41 years of age. Investment in assisted reproductive techniques may lead to positive discounted future fiscal revenue, notwithstanding its beneficial psychological effect for infertile couples in Spain.
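
    The arithmetic behind the headline figures can be reconstructed roughly as follows; the per-treatment cost is not stated in the record and is backed out here from the reported IVF numbers, so it is an assumption of this illustration.

        # Rough reconstruction of the generational-accounting arithmetic (EUR).
        taxes, transfers = 370_482.0, 275_972.0    # discounted lifetime taxes and transfers
        success_rate = 0.75                        # share of successful assisted-reproduction pregnancies
        gross = success_rate * (taxes - transfers) # expected net contribution before treatment cost
        nfc_ivf = 66_709.0                         # reported net fiscal contribution for IVF
        cost_ivf = gross - nfc_ivf                 # implied IVF cost, roughly EUR 4,170
        print(f"implied IVF cost: {cost_ivf:,.0f} EUR; return per euro: {nfc_ivf / cost_ivf:.2f}")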

  7. ACCOUNTING HARMONIZATION AND HISTORICAL COST ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Valentin Gabriel CRISTEA

    2017-05-01

    Full Text Available There is considerable interest in accounting harmonization and historical cost accounting, and in what they offer. In this article, different valuation models are discussed. Although a movement from historical cost accounting towards fair value accounting can be observed, each has its advantages.

  8. Computation of Asteroid Proper Elements: Recent Advances

    Science.gov (United States)

    Knežević, Z.

    2017-12-01

    The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in computation and stability assessment of proper elements, these advances can still be considered as important improvements offering solutions to some practical problems encountered in the past. The problem of getting unrealistic values of perihelion frequency for very low eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of Astraea asteroid family. The preliminary assessment of stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and advocates in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of more comprehensive and reliable direct estimate of their individual and sample average deviations from constancy.

  9. On the Determination of Proper Time

    OpenAIRE

    Hurl, Bing; Zhang, Zhi-Yong; Wang, Hai-Dong

    1998-01-01

    Through the analysis of the definition of the duration of proper time of a particle given by the length of its world line, we show that there is no transitivity of the coordinate time function derived from the definition, so there exists an ambiguity in the determination of the duration of the proper time for the particle. Its physical consequence is illustrated with quantum measurement effect.

  10. Accounting for Forest Harvest and Wildfire in a Spatially-distributed Carbon Cycle Process Model

    Science.gov (United States)

    Turner, D. P.; Ritts, W.; Kennedy, R. E.; Yang, Z.; Law, B. E.

    2009-12-01

    Forests are subject to natural disturbances in the form of wildfire, as well as management-related disturbances in the form of timber harvest. These disturbance events have strong impacts on local and regional carbon budgets, but quantifying the associated carbon fluxes remains challenging. The ORCA Project aims to quantify regional net ecosystem production (NEP) and net biome production (NBP) in Oregon, California, and Washington, and we have adopted an integrated approach based on Landsat imagery and ecosystem modeling. To account for stand-level carbon fluxes, the Biome-BGC model has been adapted to simulate multiple severities of fire and harvest. New variables include snags, direct fire emissions, and harvest removals. New parameters include fire-intensity-specific combustion factors for each carbon pool (based on field measurements) and proportional removal rates for harvest events. To quantify regional fluxes, the model is applied in a spatially-distributed mode over the domain of interest, with disturbance history derived from a time series of Landsat images. In stand-level simulations, the post-disturbance transition from negative (source) to positive (sink) NEP is delayed approximately a decade in the case of high-severity fire compared to harvest. Simulated direct pyrogenic emissions range from 11 to 25 % of total non-soil ecosystem carbon. In spatial mode application over Oregon and California, the sum of annual pyrogenic emissions and harvest removals was generally less than half of total NEP, resulting in significant carbon sequestration on the land base. Spatially and temporally explicit simulation of disturbance-related carbon fluxes will contribute to our ability to evaluate effects of management on regional carbon flux, and to our ability to assess potential biospheric feedbacks to climate change mediated by changing disturbance regimes.

  11. The Charitable Trust Model: An Alternative Approach For Department Of Defense Accounting

    Science.gov (United States)

    2016-12-01

    Author: Gerald V. Weers Jr., Naval Postgraduate School. ...prohibits the incurrence of costs until budget authority is provided, reversing the conditionality of the matching-principle accounting logic. In summary...the Board did not believe applying depreciation accounting for these assets would contribute to measuring the cost of outputs produced, or to

  12. A near-real-time material accountancy model and its preliminary demonstration in the Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ikawa, K.; Ihara, H.; Nishimura, H.; Tsutsumi, M.; Sawahata, T.

    1983-01-01

    The study of a near-real-time (n.r.t.) material accountancy system as applied to small or medium-sized spent fuel reprocessing facilities has been carried out since 1978 under the TASTEX programme. In this study, a model of the n.r.t. accountancy system, called the ten-day-detection-time model, was developed and demonstrated in the actual operating plant. The programme was closed in May 1981, but the study has been extended. The effectiveness of the proposed n.r.t. accountancy model was evaluated by means of simulation techniques. The results showed that weekly material balances covering the entire process MBA could provide sufficient information to satisfy the IAEA guidelines for small or medium-sized facilities. The applicability of the model to the actual plant has been evaluated by a series of field tests which covered four campaigns. In addition to the material accountancy data, many valuable operational data with regard to additional locations for an in-process inventory, the time needed for an in-process inventory, etc., have been obtained. A CUMUF (cumulative MUF) chart of the resulting MUF data in the C-1 and C-2 campaigns clearly showed that there had been a measurement bias across the process MBA. This chart gave a dramatic picture of the power of the n.r.t. accountancy concept by showing the nature of this bias, which was not clearly shown in the conventional material accountancy data. (author)

  13. Modeling and analyses for an extended car-following model accounting for drivers' situation awareness from cyber physical perspective

    Science.gov (United States)

    Chen, Dong; Sun, Dihua; Zhao, Min; Zhou, Tong; Cheng, Senlin

    2018-07-01

    The driving process is a typical cyber-physical process which tightly couples the cyber factor of traffic information with the physical components of the vehicles. Meanwhile, drivers exhibit situation awareness while driving: their behaviour reflects not only the current traffic state but also an extrapolation of its changing trend. In this paper, an extended car-following model is proposed to account for drivers' situation awareness. The stability criterion of the proposed model is derived via linear stability analysis. The results show that the stable region of the proposed model is enlarged on the phase diagram compared with previous models. By employing the reductive perturbation method, the modified Korteweg-de Vries (mKdV) equation is obtained. The kink-antikink soliton of the mKdV equation theoretically reveals the evolution of traffic jams. Numerical simulations are conducted to verify the analytical results. Two typical traffic scenarios are investigated. The simulation results demonstrate that drivers' situation awareness plays a key role in traffic flow oscillations and the congestion transition.
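
    For orientation, the sketch below integrates a classic optimal-velocity car-following model on a ring road, the baseline family that such extended models build on; it does not reproduce the paper's situation-awareness term, and all parameter values are illustrative.

        # Optimal-velocity car-following model (Bando-type) on a ring road.
        import numpy as np

        def step(x, v, dt=0.1, a=1.0, L=200.0):
            """x, v: positions and speeds of the cars; L: ring-road length."""
            headway = (np.roll(x, -1) - x) % L                 # distance to the car ahead
            v_opt = np.tanh(headway - 2.0) + np.tanh(2.0)      # optimal-velocity function
            return (x + v * dt) % L, v + a * (v_opt - v) * dt  # relax toward the optimal velocity

        N = 100
        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 200.0, N, endpoint=False) + 0.1 * rng.standard_normal(N)
        v = np.full(N, np.tanh(2.0))                           # uniform-flow speed at headway 2
        for _ in range(5000):                                  # 500 s of simulated time
            x, v = step(x, v)
        print("speed spread after 500 s:", v.max() - v.min())  # grows when uniform flow is unstable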

  14. Associative account of self-cognition: extended forward model and multi-layer structure

    Directory of Open Access Journals (Sweden)

    Motoaki eSugiura

    2013-08-01

    Full Text Available The neural correlates of self identified by neuroimaging studies differ depending on which aspects of self are addressed. Here, three categories of self are proposed based on neuroimaging findings and an evaluation of the likely underlying cognitive processes. The physical self, representing self-agency of action, body ownership, and bodily self-recognition, is supported by the sensory and motor association cortices located primarily in the right hemisphere. The interpersonal self, representing the attention or intentions of others directed at the self, is supported by several amodal association cortices in the dorsomedial frontal and lateral posterior cortices. The social self, representing the self as a collection of context-dependent social values, is supported by the ventral aspect of the medial prefrontal cortex and the posterior cingulate cortex. Despite differences in the underlying cognitive processes and neural substrates, all three categories of self are likely to share the computational characteristics of the forward model, which is underpinned by internal schema or learned associations between one’s behavioral output and the consequential input. Additionally, these three categories exist within a hierarchical layer structure based on developmental processes that updates the schema through the attribution of prediction error. In this account, most of the association cortices critically contribute to some aspect of the self through associative learning while the primary regions involved shift from the lateral to the medial cortices in a sequence from the physical to the interpersonal to the social self.

  15. Working memory load and the retro-cue effect: A diffusion model account.

    Science.gov (United States)

    Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S

    2018-02-01

    Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process-especially for visual stimuli-and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
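
    As a rough illustration of the mechanism described (not the authors' hierarchical fitting code), the sketch below simulates a two-boundary diffusion process in which a retro-cue is assumed to raise the drift rate (better evidence quality) and shorten non-decision time; all parameter values are invented.

        # Two-boundary diffusion (drift-diffusion) process simulated by Euler steps.
        import numpy as np

        def simulate_trial(drift, bound=1.0, ndt=0.3, dt=0.001, noise=1.0, rng=np.random):
            x, t = 0.0, 0.0
            while abs(x) < bound:
                x += drift * dt + noise * np.sqrt(dt) * rng.randn()
                t += dt
            return ndt + t, x > 0                      # (response time, correct response?)

        rng = np.random.RandomState(0)
        cued   = [simulate_trial(drift=2.0, ndt=0.25, rng=rng) for _ in range(500)]
        uncued = [simulate_trial(drift=1.0, ndt=0.35, rng=rng) for _ in range(500)]
        for name, trials in [("retro-cue", cued), ("no cue", uncued)]:
            rts, acc = zip(*trials)
            print(f"{name}: mean RT {np.mean(rts):.3f} s, accuracy {np.mean(acc):.2f}")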

  16. Development of accounting quality management system

    Directory of Open Access Journals (Sweden)

    Plakhtii T.F.

    2017-08-01

    Full Text Available Accounting organization, as one of the types of practical activity at the enterprise, involves organizing the implementation of various kinds of accounting procedures to meet the needs of the users of accounting information. Therefore, to improve its quality, an owner should use tools, methods and procedures that improve the quality of implementation of accounting methods and technology. The paper substantiates the necessity of using a quality management system for the improvement of accounting organization at the enterprise. A system of accounting quality management is developed and grounded in the context of ISO 9001:2015; it includes the processes of the accounting system, leadership, planning, and evaluation. On the basis of a specified and justified set of universal requirements (content requirements, formal requirements), a model of the environment of demands for high-quality organization of the computerized accounting system is developed, which improves the process of preparing high-quality financial statements. To improve the system of accounting quality management and to justify the main objectives of its further development, namely the elimination of unnecessary characteristics of accounting information and the meeting of needs of users of accounting information that have not yet been satisfied, the differences between the current level of accounting information quality and its perfect level are considered. The ways of demonstrating leadership in the system of accounting quality management at the enterprise are substantiated. The relationship between the current level of accounting information quality and its perfect level is considered. The possible types of measures aimed at improving the system of accounting quality management are identified. The paper grounds the need to include the principle of proper management in the current set of accounting principles.

  17. The BUMP model of response planning: intermittent predictive control accounts for 10 Hz physiological tremor.

    Science.gov (United States)

    Bye, Robin T; Neilson, Peter D

    2010-10-01

    Physiological tremor during movement is characterized by ∼10 Hz oscillation observed both in the electromyogram activity and in the velocity profile. We propose that this particular rhythm occurs as the direct consequence of a movement response planning system that acts as an intermittent predictive controller operating at discrete intervals of ∼100 ms. The BUMP model of response planning describes such a system. It forms the kernel of Adaptive Model Theory which defines, in computational terms, a basic unit of motor production or BUMP. Each BUMP consists of three processes: (1) analyzing sensory information, (2) planning a desired optimal response, and (3) execution of that response. These processes operate in parallel across successive sequential BUMPs. The response planning process requires a discrete-time interval in which to generate a minimum acceleration trajectory to connect the actual response with the predicted future state of the target and compensate for executional error. We have shown previously that a response planning time of 100 ms accounts for the intermittency observed experimentally in visual tracking studies and for the psychological refractory period observed in double stimulation reaction time studies. We have also shown that simulations of aimed movement, using this same planning interval, reproduce experimentally observed speed-accuracy tradeoffs and movement velocity profiles. Here we show, by means of a simulation study of constant velocity tracking movements, that employing a 100 ms planning interval closely reproduces the measurement discontinuities and power spectra of electromyograms, joint-angles, and angular velocities of physiological tremor reported experimentally. We conclude that intermittent predictive control through sequential operation of BUMPs is a fundamental mechanism of 10 Hz physiological tremor in movement. Copyright © 2010 Elsevier B.V. All rights reserved.

  18. Design of a Competency-Based Assessment Model in the Field of Accounting

    Science.gov (United States)

    Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús

    2012-01-01

    This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…

  19. Cost accounting at GKSS

    International Nuclear Information System (INIS)

    Hinz, R.

    1979-01-01

    The GKSS has a cost accounting system comprising cost type, cost centre and cost unit accounting which permits a comprehensive and detailed supervision of the accrual of costs and the use of funds, makes price setting for outside orders possible and provides the necessary data for decision-making and planning. It fulfills the requirement for an ordered accounting system; it is therefore guaranteed that there is a proper demarcation and transition between the financial accounts department and cost accounting, that costs are accounted for fully, only on the basis of vouchers and only once, that evaluation and distribution are unified, and that the principle of causation is observed. Two employees are engaged in costs and services accounting. Although we strive to effect adaptations as swiftly as possible, and constantly to adapt refinements and supplementary processes for the improvement of the system, this can only occur within the scope of, and with the exactitude necessary for, the required information. (author)

  20. Materials measurement and accounting in an operating plutonium conversion and purification process. Phase I. Process modeling and simulation

    International Nuclear Information System (INIS)

    Thomas, C.C. Jr.; Ostenak, C.A.; Gutmacher, R.G.; Dayem, H.A.; Kern, E.A.

    1981-04-01

    A model of an operating conversion and purification process for the production of reactor-grade plutonium dioxide was developed as the first component in the design and evaluation of a nuclear materials measurement and accountability system. The model accurately simulates process operation and can be used to identify process problems and to predict the effect of process modifications

  1. 25 CFR 87.12 - Insuring the proper performance of approved plans.

    Science.gov (United States)

    2010-04-01

    Title 25 of the Code of Federal Regulations (Indians), vol. 1, revised as of 2010-04-01: DISTRIBUTION OF INDIAN JUDGMENT FUNDS, § 87.12 Insuring the proper performance of approved plans. A timetable... regarding the maintenance of the timetable, a full accounting of any per capita distribution, and the...

  2. Accounting for water management issues within hydrological simulation: Alternative modelling options and a network optimization approach

    Science.gov (United States)

    Efstratiadis, Andreas; Nalbantis, Ioannis; Rozos, Evangelos; Koutsoyiannis, Demetris

    2010-05-01

    In mixed natural and artificialized river basins, many complexities arise due to anthropogenic interventions in the hydrological cycle, including abstractions from surface water bodies, groundwater pumping or recharge and water returns through drainage systems. Typical engineering approaches adopt a multi-stage modelling procedure, with the aim of handling the complexity of process interactions and the lack of measured abstractions. In such a context, the entire hydrosystem is separated into natural and artificial sub-systems or components; the natural ones are modelled individually, and their predictions (i.e. hydrological fluxes) are transferred to the artificial components as inputs to a water management scheme. To account for the interactions between the various components, an iterative procedure is essential, whereby the outputs of the artificial sub-systems (i.e. abstractions) become inputs to the natural ones. However, this strategy suffers from multiple shortcomings, since it presupposes that pure natural sub-systems can be located and that sufficient information is available for each sub-system modelled, including suitable, i.e. "unmodified", data for calibrating the hydrological component. In addition, implementing such a strategy is ineffective when the entire scheme runs in stochastic simulation mode. To cope with the above drawbacks, we developed a generalized modelling framework, following a network optimization approach. This originates from graph theory, which has been successfully implemented within some advanced computer packages for water resource systems analysis. The user formulates a unified system which comprises the hydrographical network and the typical components of a water management network (aqueducts, pumps, junctions, demand nodes etc.). Input data for the latter include hydraulic properties, constraints, targets, priorities and operation costs. The real-world system is described through a conceptual graph, whose dummy properties

  3. Process Accounting

    OpenAIRE

    Gilbertson, Keith

    2002-01-01

    Standard utilities can help you collect and interpret your Linux system's process accounting data. Describes the uses of process accounting, standard process accounting commands, and example code that makes use of process accounting utilities.

  4. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards

    International Nuclear Information System (INIS)

    Xavier, Roberto Salles

    2014-01-01

    The subject of this research is the regimes of accountability, the organizations of global governance, and the institutional arrangements of global governance of nuclear non-proliferation and of the Brazilian-Argentine model of Mutual Vigilance of Nuclear Safeguards. The starting point is the importance of the institutional model of global governance for the effective control of non-proliferation of nuclear weapons. In this context, the research investigates how the current arrangements of international nuclear non-proliferation are structured and how the Brazilian-Argentine model of Mutual Vigilance of Nuclear Safeguards performs in relation to the accountability regimes of global governance. To that end, the current literature on three theoretical dimensions was surveyed: accountability, global governance and global governance organizations. The research method was the case study, with content analysis as the data treatment technique. The results allowed us to establish an evaluation model based on accountability mechanisms, to assess how the Brazilian-Argentine model of Mutual Vigilance of Nuclear Safeguards behaves in relation to the proposed accountability regime, and to measure the degree to which regional arrangements that work with systems of global governance can strengthen these international systems. (author)

  5. Account of the Pauli principle in the quasiparticle-phonon nuclear model

    International Nuclear Information System (INIS)

    Molina, Kh.L.

    1980-01-01

    The effects of correlations in the ground states of even-even deformed nuclei on their one- and two-phonon states are studied in terms of the semimicroscopic nuclear theory. A secular equation for one-phonon excitations is derived which takes into account, on average, the exact commutation relations between quasiparticle operators. It is demonstrated that taking the ground-state correlations into account can significantly influence the values of the two-phonon components of the wave function.

  6. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements.

    Science.gov (United States)

    Heil, Peter; Matysiak, Artur; Neubauer, Heinrich

    2017-09-01

    Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC
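
    As a rough illustration of the Poisson-counting idea described above (our code, not the authors' model), the sketch below simulates an observer who, on each 3I-3AFC trial, picks the interval containing the most sensory events; events occur at a spontaneous rate r0 in every interval plus a stimulus-driven rate r_s in the signal interval, and all rates and durations are arbitrary.

        # Percent correct in a 3I-3AFC detection task with Poisson sensory events.
        import numpy as np

        def percent_correct(r0, r_s, duration, n_trials=20000, seed=0):
            rng = np.random.default_rng(seed)
            counts = rng.poisson(r0 * duration, size=(n_trials, 3))          # spontaneous events
            counts[:, 0] += rng.poisson(r_s * duration, size=n_trials)       # signal in interval 0
            jitter = rng.random((n_trials, 3)) * 1e-6                        # break ties at random
            return np.mean(np.argmax(counts + jitter, axis=1) == 0)

        for dur in (0.02, 0.05, 0.1, 0.2):      # longer tones -> more events -> better detection
            print(dur, percent_correct(r0=20.0, r_s=60.0, duration=dur))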

  7. A semiconductor device thermal model taking into account non-linearity and multhipathing of the cooling system

    International Nuclear Information System (INIS)

    Górecki, K; Zarȩbski, J

    2014-01-01

    The paper is devoted to modelling the thermal properties of semiconductor devices in the steady state. A dc thermal model of a semiconductor device that takes multipath heat flow into account is proposed. Some results of calculations and measurements of the thermal resistance of a power MOSFET operating under different cooling conditions are presented. The calculated results fit the measurements, which supports the correctness of the proposed model.
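
    The multipath idea can be illustrated with an elementary steady-state resistance network (our sketch, not the paper's model, which additionally treats the non-linearity, i.e. the temperature dependence of the resistances); the numbers are hypothetical.

        # Junction temperature with several parallel cooling paths (thermal resistances in K/W).
        def junction_temperature(p_diss, t_amb, r_jc, parallel_paths):
            r_ca = 1.0 / sum(1.0 / r for r in parallel_paths)   # parallel combination, case to ambient
            return t_amb + p_diss * (r_jc + r_ca)

        # 5 W dissipation, junction-to-case 1.2 K/W, three cooling paths of 8, 25 and 60 K/W
        print(junction_temperature(5.0, 25.0, 1.2, [8.0, 25.0, 60.0]))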

  8. Proper generalized decompositions an introduction to computer implementation with Matlab

    CERN Document Server

    Cueto, Elías; Alfaro, Icíar

    2016-01-01

    This book is intended to help researchers overcome the entrance barrier to Proper Generalized Decomposition (PGD), by providing a valuable tool to begin the programming task. Detailed Matlab Codes are included for every chapter in the book, in which the theory previously described is translated into practice. Examples include parametric problems, non-linear model order reduction and real-time simulation, among others. Proper Generalized Decomposition (PGD) is a method for numerical simulation in many fields of applied science and engineering. As a generalization of Proper Orthogonal Decomposition or Principal Component Analysis to an arbitrary number of dimensions, PGD is able to provide the analyst with very accurate solutions for problems defined in high dimensional spaces, parametric problems and even real-time simulation.

  9. Accounting for Framing-Effects - an informational approach to intensionality in the Bolker-Jeffrey decision model

    OpenAIRE

    Bourgeois-Gironde , Sacha; Giraud , Raphaël

    2005-01-01

    We subscribe to an account of framing-effects in decision theory in terms of an inference to background information by the hearer when a speaker uses a certain frame while other equivalent frames were also available. This account was sketched by Craig McKenzie. We embed it in the Bolker-Jeffrey decision model (or logic of action); one main reason for this is that the latter model makes preferences bear on propositions. We can deduce a given anomaly or cognitive bias (namely framing-effects) in...

  10. Dynamical systems of proper characteristic 0

    International Nuclear Information System (INIS)

    Ahmad, K.H.; Hamoui, A.

    1991-07-01

    Flows with orbits of proper characteristic 0 exhibit recurrent behaviour, a feature of basic importance in the description of their dynamics. Here, we analyze flows with such orbits, relating them to recurrent flows and to flows that exhibit orbital, Poisson or Lagrange stability. (author). 11 refs

  11. Improved Industrial Development In Nigeria Through Proper ...

    African Journals Online (AJOL)

    The paper notes that most industrial development strategies in Nigeria did not give attention to technology education, and that even where technology education was recognized by a few of the strategies, it was neither properly articulated for the tertiary institutions nor properly implemented. Therefore, to put technology and thus ...

  12. Archetypes: the PropeR way

    NARCIS (Netherlands)

    van der Linden, Helma; Grimson, Jane; Tange, Huibert; Talmon, Jan; Hasman, Arie

    2004-01-01

    The PropeR project studies the effect of Decision Support in an Electronic Health Record system (EHR) on the quality of care. One of the applications supports a multidisciplinary primary care team rehabilitating stroke patients in their home environment. This project required an EHR system that

  13. Strategy Guideline: Proper Water Heater Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, M. [Alliance for Residential Building Innovation, Davis, CA (United States); Springer, D. [Alliance for Residential Building Innovation, Davis, CA (United States); German, A. [Alliance for Residential Building Innovation, Davis, CA (United States); Staller, J. [Alliance for Residential Building Innovation, Davis, CA (United States); Zhang, Y. [Alliance for Residential Building Innovation, Davis, CA (United States)

    2015-04-01

    This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.

  14. Strategy Guideline. Proper Water Heater Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, M. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Springer, D. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); German, A. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Staller, J. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Zhang, Y. [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States)

    2015-04-09

    This Strategy Guideline on proper water heater selection was developed by the Building America team Alliance for Residential Building Innovation to provide step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads.

  15. The Essentials of Proper Wine Service.

    Science.gov (United States)

    Manago, Gary H.

    This instructional unit was designed to assist the food services instructor and/or the restaurant manager in training students and/or staff in the proper procedure for serving wines to guests. The lesson plans included in this unit focus on: (1) the different types of wine glasses and their uses; (2) the parts of a wine glass; (3) the proper…

  16. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    Science.gov (United States)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term that accounts for interactions between resolved scales and unresolved scales, and the Reynolds term that accounts for interactions between unresolved scales. Both terms provide forward- (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type, SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models, which better conform to the physics of the true SGS stress while remaining stable, are thus desirable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term: in terms of dissipation spectra, and in terms of probability density function (pdf) of dissipation in physical space: positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross term part which is built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also does backscatter. It also has an eddy-viscosity multiscale model part that
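
    A generic mixed closure of the kind described, written here as a sketch assuming a standard Bardina-type scale-similarity term plus an eddy-viscosity term (the paper's specific cross-term, Galilean-invariance correction and multiscale eddy-viscosity forms may differ), reads:

        % Mixed model sketch: scale-similarity (cross) part + eddy-viscosity part
        \[
        \tau_{ij} \;\approx\;
        \underbrace{\overline{\bar u_i \bar u_j} - \bar{\bar u}_i \, \bar{\bar u}_j}_{\text{similarity part, allows backscatter}}
        \;-\;
        \underbrace{2\,\nu_t\,\bar S_{ij}}_{\text{eddy-viscosity part, purely dissipative}},
        \qquad
        \bar S_{ij} = \tfrac{1}{2}\left(\partial_j \bar u_i + \partial_i \bar u_j\right).
        \]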

  17. Basic equations of quasiparticle-phonon model of nucleus with account of Pauli principle and phonons interactions in ground state

    International Nuclear Information System (INIS)

    Voronov, V.V.; Dang, N.D.

    1984-01-01

    The system of equations enabling calculation of the energy and the structure of excited states described by a wave function containing one- and two-phonon components was obtained in the framework of the quasiparticle-phonon model. The requirements of the Pauli principle for two-phonon components and phonon correlations in the nuclear ground state are taken into account.

  18. Closing the Gaps : Taking into Account the Effects of Heat stress and Fatique Modeling in an Operational Analysis

    NARCIS (Netherlands)

    Woodill, G.; Barbier, R.R.; Fiamingo, C.

    2010-01-01

    Traditional, combat model based analysis of Dismounted Combatant Operations (DCO) has focused on the ‘lethal’ aspects in an engagement, and to a limited extent the environment in which the engagement takes place. These are however only two of the factors that should be taken into account when

  19. Current-account effects of a devaluation in an optimizing model with capital accumulation

    DEFF Research Database (Denmark)

    Nielsen, Søren Bo

    1991-01-01

    short, the devaluation is bound to improve the current account on impact, whereas this will deteriorate in the case of a long contract period, and the more so the smaller are adjustment costs in investment. In addition, we study the consequences for the terms of trade and for the stocks of foreign...

  20. Accounting Department Chairpersons' Perceptions of Business School Performance Using a Market Orientation Model

    Science.gov (United States)

    Webster, Robert L.; Hammond, Kevin L.; Rothwell, James C.

    2013-01-01

    This manuscript is part of a stream of continuing research examining market orientation within higher education and its potential impact on organizational performance. The organizations researched are business schools and the data collected came from chairpersons of accounting departments of AACSB member business schools. We use a reworded Narver…

  1. Secular Extragalactic Parallax and Geometric Distances with Gaia Proper Motions

    Science.gov (United States)

    Paine, Jennie; Darling, Jeremiah K.

    2018-06-01

    The motion of the Solar System with respect to the cosmic microwave background (CMB) rest frame creates a well measured dipole in the CMB, which corresponds to a linear solar velocity of about 78 AU/yr. This motion causes relatively nearby extragalactic objects to appear to move compared to more distant objects, an effect that can be measured in the proper motions of nearby galaxies. An object at 1 Mpc and perpendicular to the CMB apex will exhibit a secular parallax, observed as a proper motion, of 78 µas/yr. The relatively large peculiar motions of galaxies make the detection of secular parallax challenging for individual objects. Instead, a statistical parallax measurement can be made for a sample of objects with proper motions, where the global parallax signal is modeled as an E-mode dipole that diminishes linearly with distance. We present preliminary results of applying this model to a sample of nearby galaxies with Gaia proper motions to detect the statistical secular parallax signal. The statistical measurement can be used to calibrate the canonical cosmological “distance ladder.”
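
    The quoted numbers follow from elementary geometry: 1 AU subtends 1 arcsec at 1 pc, hence 1 µas at 1 Mpc. The short sketch below is our worked check (not the authors' pipeline) of the ~78 µas/yr figure and its fall-off with distance.

        # Secular parallax (apparent proper motion) from the Sun's motion w.r.t. the CMB.
        KM_PER_AU = 1.495978707e8
        SEC_PER_YR = 3.15576e7

        def secular_parallax_uas_per_yr(v_kms, d_mpc, sin_beta=1.0):
            baseline_au = v_kms * SEC_PER_YR / KM_PER_AU   # AU traversed by the Sun per year
            return baseline_au / d_mpc * sin_beta          # 1 AU at 1 Mpc subtends 1 microarcsec

        print(secular_parallax_uas_per_yr(369.0, 1.0))     # ~78 uas/yr at 1 Mpc, perpendicular to the apex
        print(secular_parallax_uas_per_yr(369.0, 10.0))    # ten times smaller at 10 Mpc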

  2. Globular Clusters: Absolute Proper Motions and Galactic Orbits

    Science.gov (United States)

    Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.

    2018-04-01

    We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr-1. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters with a typical formal error of about 0.4 mas yr-1 are computed by averaging the proper motions of selected members. The inferred absolute proper motions of clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of Galactic potential models based on a Miyamoto-Nagai disk, a Hernquist spheroid, and a modified isothermal dark-matter halo (axisymmetric model without a bar) and the same model + rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors of computed transversal velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find the bar to affect substantially the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.
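
    For orientation, the sketch below evaluates the circular speed of two of the axisymmetric components named above (a Miyamoto-Nagai disk plus a Hernquist spheroid); the halo and the rotating Ferrers bar used in the paper are omitted, and the masses and scale lengths are placeholder values, not those adopted by the authors.

        # Circular-speed contributions of a Miyamoto-Nagai disk and a Hernquist spheroid.
        import numpy as np

        G = 4.3009e-6   # gravitational constant in kpc (km/s)^2 / Msun

        def vcirc_mn_disk(R, M=1.0e11, a=3.0, b=0.28, z=0.0):
            s = a + np.sqrt(z**2 + b**2)
            return np.sqrt(G * M * R**2 / (R**2 + s**2) ** 1.5)

        def vcirc_hernquist(r, M=2.0e10, a=0.7):
            return np.sqrt(G * M * r) / (r + a)

        R = np.array([1.0, 4.0, 8.0, 15.0])   # Galactocentric radii in kpc
        print(np.sqrt(vcirc_mn_disk(R)**2 + vcirc_hernquist(R)**2))   # combined circular speed, km/s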

  3. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology.

    Science.gov (United States)

    Tompkins, Adrian M; Ermert, Volker

    2013-02-18

    The relative roles of climate variability and population related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Model simulations of entomological inoculation rate and circumsporozoite protein rate compare well to data from field studies from a wide range of locations in West Africa that encompass both seasonal endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine spatial resolution regional integrations for Eastern Africa reproduce the malaria atlas project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model grossly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities probably due to the neglect of population migration. A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions.

  4. Physical and Theoretical Models of Heat Pollution Applied to Cramped Conditions Welding Taking into Account the Different Types of Heat

    Science.gov (United States)

    Bulygin, Y. I.; Koronchik, D. A.; Legkonogikh, A. N.; Zharkova, M. G.; Azimova, N. N.

    2017-05-01

    The standard k-epsilon turbulence model, adapted for welding workshops equipped with fixed workstations and sources of pollution, took into account only the convective component of heat transfer, which is quite reasonable for large-volume rooms with a low density distribution of pollution sources; indeed, the results of model calculations accounting only for the convective component correlated well with experimental data. For the purposes of this study, however, we are dealing with a small confined space in which bodies heated to high temperature by welding and located next to each other act as additional sources of heat, so radiative heat exchange can no longer be neglected. The task is to investigate experimentally the various types of heat transfer in a limited closed welding space and the behavior of a mathematical model describing the contribution of the various components of heat exchange, including radiation, to the formation of the fields of concentration, temperature, air movement and thermal stress in the test environment. Field experiments on a model cubic body were conducted, allowing the model of heat and mass transfer processes to be configured and debugged with the help of the developed approaches; comparing the measured air flow velocities and temperatures with the calculated data showed qualitative and quantitative agreement between process parameters, which is an indicator of the adequacy of the heat and mass transfer model.
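
    A back-of-the-envelope comparison (ours, not the paper's model) illustrates why radiation cannot be neglected for bodies heated to welding-related temperatures: the radiative flux overtakes natural convection rapidly as the surface temperature rises. The convection coefficient and emissivity below are assumed values.

        # Convective vs. radiative heat flux from a hot surface to room-temperature surroundings.
        SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

        def fluxes(t_surface_c, t_air_c=25.0, h=10.0, emissivity=0.8):
            ts, ta = t_surface_c + 273.15, t_air_c + 273.15
            q_conv = h * (ts - ta)                            # natural convection, assumed h
            q_rad = emissivity * SIGMA * (ts**4 - ta**4)      # grey-body radiation
            return q_conv, q_rad

        for t in (100.0, 400.0, 800.0):                       # surface temperatures, deg C
            qc, qr = fluxes(t)
            print(f"{t:5.0f} C: convection {qc:8.0f} W/m2, radiation {qr:8.0f} W/m2")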

  5. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Oxberry, Geoffrey M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kostova-Vassilevska, Tanya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Arrighi, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chand, Kyle [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
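
    For context, the sketch below shows the standard batch POD that the adaptive, limited-memory approach above is designed to improve upon: collect snapshots as columns, take an SVD, and truncate by an energy criterion. It is our illustration with a random stand-in for real snapshot data, not the implementation described in the record.

        # Batch proper orthogonal decomposition with energy-based truncation.
        import numpy as np

        def pod_basis(snapshots, energy_tol=1e-4):
            """snapshots: (n_dof, n_snapshots) array; returns the reduced basis and singular values."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            energy = np.cumsum(s**2) / np.sum(s**2)
            r = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1   # smallest r capturing the energy
            return U[:, :r], s

        X = np.random.rand(500, 40)               # stand-in snapshot matrix
        basis, sv = pod_basis(X)
        X_approx = basis @ (basis.T @ X)          # projection of the data onto the reduced basis
        print(basis.shape, np.linalg.norm(X - X_approx) / np.linalg.norm(X))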

  6. Study of the application of a near-real-time materials accountancy system for a model plutonium conversion plants

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Ikawa, Koji

    1986-11-01

    An assessment was made of the potential capability of a near-real-time (NRTA) materials accountancy system for a model plutonium conversion plant. To this end, a computer simulation system, DYSAS-C, was developed and evaluated through this assessment study. The study showed that the NRTA system could be used not only as a good operator's accounting system but also as a useful inspectorate's system for detecting an abrupt diversion. It also showed, however, that more elaborate NRTA systems, which have not yet been evaluated in this study, should be considered if the detection of protracted diversion is to be improved. (author)
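
    The MUF and CUMUF bookkeeping mentioned in these records can be sketched as follows (our illustration with hypothetical weekly balances, not the TASTEX or DYSAS-C software); a roughly linear drift of the cumulative sum is the signature of a measurement bias across the process MBA.

        # Material unaccounted for (MUF) per balance period and its cumulative sum (CUMUF).
        import numpy as np

        def muf(begin_inv, receipts, removals, end_inv):
            return begin_inv + receipts - removals - end_inv

        rng = np.random.default_rng(1)
        weeks = 10
        begin = np.full(weeks, 50.0)                        # hypothetical inventories, kg
        receipts = rng.normal(20.0, 0.1, weeks)
        removals = rng.normal(20.0, 0.1, weeks)
        end = begin + receipts - removals - 0.2 + rng.normal(0.0, 0.05, weeks)   # +0.2 kg bias per week

        muf_series = muf(begin, receipts, removals, end)
        print(np.round(muf_series, 2))
        print(np.round(np.cumsum(muf_series), 2))           # CUMUF: near-linear growth betrays the bias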

  7. Chemically Dissected Rotation Curves of the Galactic Bulge from Main-sequence Proper Motions

    Science.gov (United States)

    Clarkson, William I.; Calamida, Annalisa; Sahu, Kailash C.; Brown, Thomas M.; Gennaro, Mario; Avila, Roberto J.; Valenti, Jeff; Debattista, Victor P.; Rich, R. Michael; Minniti, Dante; Zoccali, Manuela; Aufdemberge, Emily R.

    2018-05-01

    We report results from an exploratory study implementing a new probe of Galactic evolution using archival Hubble Space Telescope imaging observations. Precise proper motions are combined with photometric relative metallicity and temperature indices, to produce the proper-motion rotation curves of the Galactic bulge separately for metal-poor and metal-rich main-sequence samples. This provides a “pencil-beam” complement to large-scale wide-field surveys, which to date have focused on the more traditional bright giant branch tracers. We find strong evidence that the Galactic bulge rotation curves drawn from “metal-rich” and “metal-poor” samples are indeed discrepant. The “metal-rich” sample shows greater rotation amplitude and a steeper gradient against line-of-sight distance, as well as possibly a stronger central concentration along the line of sight. This may represent a new detection of differing orbital anisotropy between metal-rich and metal-poor bulge objects. We also investigate selection effects that would be implied for the longitudinal proper-motion cut often used to isolate a “pure-bulge” sample. Extensive investigation of synthetic stellar populations suggests that instrumental and observational artifacts are unlikely to account for the observed rotation curve differences. Thus, proper-motion-based rotation curves can be used to probe chemodynamical correlations for main-sequence tracer stars, which are orders of magnitude more numerous in the Galactic bulge than the bright giant branch tracers. We discuss briefly the prospect of using this new tool to constrain detailed models of Galactic formation and evolution. Based on observations made with the NASA/ESA Hubble Space Telescope and obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  8. Modeling of normal contact of elastic bodies with surface relief taken into account

    Science.gov (United States)

    Goryacheva, I. G.; Tsukanov, I. Yu

    2018-04-01

    An approach to accounting for surface relief in normal contact problems for rough bodies, based on an additional displacement function for asperities, is considered. The method and analytic expressions for calculating the additional displacement function for one-scale and two-scale wavy relief are presented. The influence of the microrelief geometric parameters, including the number of scales and the asperity density, on the additional displacements of the rough layer is analyzed.

  9. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    Science.gov (United States)

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…
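
    As a rough sketch of the logistic-regression family mentioned above (our illustration, not the authors' code), the Additive Factors Model predicts the probability of a correct response from a student proficiency term plus, for each knowledge component involved in the item, a difficulty term and a learning-rate term scaled by prior practice opportunities; all parameter values below are invented.

        # Additive Factors Model prediction for a single student-item observation.
        import numpy as np

        def afm_p_correct(theta, betas, gammas, kc_indicator, opportunities):
            """kc_indicator: 0/1 vector over KCs; opportunities: prior practice counts per KC."""
            logit = theta + np.dot(kc_indicator, betas + gammas * opportunities)
            return 1.0 / (1.0 + np.exp(-logit))

        betas = np.array([-0.5, 0.2, -1.0])      # KC easiness
        gammas = np.array([0.15, 0.10, 0.30])    # learning rate per practice opportunity
        print(afm_p_correct(theta=0.1, betas=betas, gammas=gammas,
                            kc_indicator=np.array([1, 0, 1]),
                            opportunities=np.array([3, 0, 5])))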

  10. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    Science.gov (United States)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

    In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae to connect the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large scale underreporting of the disease must occur.
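
    A minimal sketch of the mechanism described (our illustration, not the authors' calibrated model): a Kermack-McKendrick-type SIR system in which births replenish the susceptible pool of the unvaccinated community and a small import rate seeds new outbreaks once enough susceptibles have accumulated. All parameter values are assumptions chosen only to show the recurrent-outbreak behaviour.

        # SIR with births and a small infection import rate; yearly incidence shows recurrent outbreaks.
        import numpy as np

        def yearly_incidence(beta=60.0, gamma=365/14.0, mu=0.02, iota=1e-6, years=60, dt=1/3650.0):
            s, i = 0.30, 0.0                        # susceptible and infectious fractions
            incidence = np.zeros(years)
            for k in range(int(years / dt)):
                new_inf = (beta * s * i + iota) * dt
                s += mu * (1.0 - s) * dt - new_inf
                i += new_inf - (gamma + mu) * i * dt
                incidence[min(int(k * dt), years - 1)] += new_inf
            return incidence

        inc = yearly_incidence()
        print(np.round(inc * 1e3, 2))   # new infections per 1000 community members, per year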

  11. Alternative biosphere modeling for safety assessment of HLW disposal taking account of geosphere-biosphere interface of marine environment

    International Nuclear Information System (INIS)

    Kato, Tomoko; Ishiguro, Katsuhiko; Naito, Morimasa; Ikeda, Takao; Little, Richard

    2001-03-01

    In the safety assessment of a high-level radioactive waste (HLW) disposal system, it is required to estimate radiological impacts on future human beings arising from potential radionuclide releases from a deep repository into the surface environment. In order to estimate the impacts, a biosphere model is developed by reasonably assuming radionuclide migration processes in the surface environment and relevant human lifestyles. It is important to modify the present biosphere models, or to develop alternative biosphere models, according to the quality and quantity of the information acquired through the siting process for constructing the repository. In this study, alternative biosphere models were developed taking the geosphere-biosphere interface of the marine environment into account. Moreover, the flux-to-dose conversion factors calculated by these alternative biosphere models were compared with those from the present basic biosphere models. (author)

  12. Listening to food workers: Factors that impact proper health and hygiene practice in food service.

    Science.gov (United States)

    Clayton, Megan L; Clegg Smith, Katherine; Neff, Roni A; Pollack, Keshia M; Ensminger, Margaret

    2015-01-01

    Foodborne disease is a significant problem worldwide. Research exploring sources of outbreaks indicates a pronounced role for food workers' improper health and hygiene practice. The objective was to investigate food workers' perceptions of factors that impact proper food safety practice, through interviews with food service workers in Baltimore, MD, USA discussing food safety practices and the factors that impact their implementation in the workplace. A social ecological model organizes multiple levels of influence on health and hygiene behavior. Issues raised by interviewees include factors across the five levels of the social ecological model and confirm findings from previous work. Interviews also reveal many factors not highlighted in prior work, including issues with food service policies and procedures, working conditions (e.g., pay and benefits), community resources, and state and federal policies. Food safety interventions should adopt an ecological orientation that accounts for factors at multiple levels, including workers' social and structural context, that impact food safety practice.

  13. Internet accounting

    NARCIS (Netherlands)

    Pras, Aiko; van Beijnum, Bernhard J.F.; Sprenkels, Ron; Parhonyi, R.

    2001-01-01

    This article provides an introduction to Internet accounting and discusses the status of related work within the IETF and IRTF, as well as certain research projects. Internet accounting is different from accounting in POTS. To understand Internet accounting, it is important to answer questions like

  14. Account of the effect of nuclear collision cascades in model of radiation damage of RPV steels

    International Nuclear Information System (INIS)

    Kevorkyan, Yu.R.; Nikolaev, Yu.A.

    1997-01-01

    A kinetic model is proposed for describing the effect of collision cascades in a model of radiation damage of reactor pressure vessel steels. This is a closed system of equations which, in the general case, can be solved only by numerical methods.

  15. Factors accounting for youth suicide attempt in Hong Kong: a model building.

    Science.gov (United States)

    Wan, Gloria W Y; Leung, Patrick W L

    2010-10-01

    This study aimed at proposing and testing a conceptual model of youth suicide attempt. We proposed a model that began with family factors such as a history of physical abuse and parental divorce/separation. Family relationship, presence of psychopathology, life stressors, and suicide ideation were postulated as mediators, leading to youth suicide attempt. The stepwise entry of the risk factors into a logistic regression model defined their proximity as related to suicide attempt. Path analysis further refined our proposed model of youth suicide attempt. Our originally proposed model was largely confirmed. The main revision was dropping parental divorce/separation as a risk factor in the model due to its lack of significant contribution when examined alongside other risk factors. This model was cross-validated by gender. This study moved research on youth suicide from the identification of individual risk factors to model building, integrating separate findings of past studies.

  16. Account of External Cooling Medium Temperature while Modeling Thermal Processes in Power Oil-Immersed Transformers

    OpenAIRE

    Yu. A. Rounov; O. G. Shirokov; D. I. Zalizny; D. M. Los

    2004-01-01

    The paper proposes a thermal model of a power oil-immersed transformer as a system of four homogeneous bodies: winding, oil, core and cooling medium. On the basis of experimental data it is shown that such a model describes the actual thermal processes taking place in a transformer more precisely than the thermal model accepted in GOST 14209-85.

  17. Account of External Cooling Medium Temperature while Modeling Thermal Processes in Power Oil-Immersed Transformers

    Directory of Open Access Journals (Sweden)

    Yu. A. Rounov

    2004-01-01

    The paper proposes a thermal model of a power oil-immersed transformer as a system of four homogeneous bodies: winding, oil, core and cooling medium. On the basis of experimental data it is shown that such a model describes the actual thermal processes taking place in a transformer more precisely than the thermal model accepted in GOST 14209-85.

  18. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...

  19. Modelling of the application of near real time accountancy and process monitoring to plants

    International Nuclear Information System (INIS)

    Huddleston, J.; Stockwell, M.K.

    1983-09-01

    Many statistical tests have been proposed for the analysis of accountancy data from nuclear fuel reprocessing plants. The purpose of this programme was to assess the performance of these tests by applying them to data streams which simulate the information that would be available from a real plant. In addition, the problems of pre-processing the raw data from a plant were considered. A suite of programs to analyse the data has been written, which includes colour graphical output to allow effective interpretation of the results. The commercial software package VisiCalc has been evaluated and found to be effective for the rapid production of material balances from plant data. (author)

  20. Towards the Proper Integration of Extra-Functional Requirements

    OpenAIRE

    Elke Hochmuller

    1999-01-01

    In spite of the many achievements in software engineering, proper treatment of extra-functional requirements (also known as non-functional requirements) within the software development process is still a challenge to our discipline. The application of functionality-biased software development methodologies can lead to major contradictions in the joint modelling of functional and extra-functional requirements. Based on a thorough discussion on the nature of extra-functional requirements as wel...

  1. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown that it is superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to the data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three dimensionality.

  2. THE CURRENT ACCOUNT DEFICIT AND THE FIXED EXCHANGE RATE. ADJUSTING MECHANISMS AND MODELS.

    Directory of Open Access Journals (Sweden)

    HATEGAN D.B. Anca

    2010-07-01

    The main purpose of the paper is to explain what measures can be taken in order to fix the trade deficit, and the pressure that is put upon a country by imposing such measures. The international and the national supply and demand conditions change rapidly, and if a country does not succeed in keeping tight control over its deficit, many factors will affect its wellbeing. In order to reduce the external trade deficit, the government needs to resort to several techniques. The desired result is to have a balanced current account, and therefore the government is free to use measures such as fixing its exchange rate, reducing government spending, etc. We have shown that all these measures will have a certain impact upon an economy, by allowing its exports to thrive and eliminating the danger from excessive imports, or vice-versa. The main conclusion of our paper is that government intervention is allowed in order to maintain the balance of the current account.

  3. Accounting for spatial effects in land use regression for urban air pollution modeling.

    Science.gov (United States)

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty, and statistical methods for resolving these effects (e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models) may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R² values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Assigned value improves memory of proper names.

    Science.gov (United States)

    Festini, Sara B; Hartley, Alan A; Tauber, Sarah K; Rhodes, Matthew G

    2013-01-01

    Names are more difficult to remember than other personal information such as occupations. The current research examined the influence of assigned point value on memory and metamemory judgements for names and occupations to determine whether incentive can improve recall of proper names. In Experiment 1 participants studied face-name and face-occupation pairs assigned 1 or 10 points, made judgements of learning, and were given a cued recall test. High-value names were recalled more often than low-value names. However, recall of occupations was not influenced by value. In Experiment 2 meaningless nonwords were used for both names and occupations. The name difficulty disappeared, and value influenced recall of both names and occupations. Thus value similarly influenced names and occupations when meaningfulness was held constant. In Experiment 3 participants were required to use overt rote rehearsal for all items. Value did not boost recall of high-value names, suggesting that differential processing could not be implemented to improve memory. Thus incentives may improve memory for proper names by motivating people to engage in selective rehearsal and effortful elaborative processing.

  5. An extended two-lane car-following model accounting for inter-vehicle communication

    Science.gov (United States)

    Ou, Hui; Tang, Tie-Qiao

    2018-04-01

    In this paper, we develop a novel car-following model with inter-vehicle communication to explore each vehicle's movement in a two-lane traffic system when an incident occurs on one lane. The numerical results show that the proposed model can perfectly describe each vehicle's motion when an incident occurs, i.e., no collision occurs, whereas the classical full velocity difference (FVD) model produces collisions on each lane; this shows that the proposed model is more reasonable. The above results can help drivers to reasonably adjust their driving behaviors when an incident occurs in a two-lane traffic system.
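
    For reference, the classical FVD model that this record uses as a baseline updates each vehicle's acceleration from its headway and the speed difference to its leader. The sketch below shows one such update; the optimal-velocity function and parameter values are common textbook choices, assumed here for illustration rather than taken from the paper.

    ```python
    # A minimal sketch of a full velocity difference (FVD) acceleration update;
    # the optimal-velocity function and parameters are assumed, not the authors'.
    import numpy as np

    def optimal_velocity(gap, v_max=30.0, gap_c=25.0):
        # Monotone speed-headway relation saturating at v_max.
        return 0.5 * v_max * (np.tanh(gap - gap_c) + np.tanh(gap_c))

    def fvd_acceleration(gap, v, dv, kappa=0.41, lam=0.5):
        # kappa weights the optimal-velocity mismatch, lam the relative speed dv.
        return kappa * (optimal_velocity(gap) - v) + lam * dv

    print(fvd_acceleration(gap=20.0, v=15.0, dv=-2.0))  # negative: decelerate toward a slower leader
    ```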

  6. Modelling reverse characteristics of power LEDs with thermal phenomena taken into account

    International Nuclear Information System (INIS)

    Ptak, Przemysław; Górecki, Krzysztof

    2016-01-01

    This paper addresses modelling the characteristics of power LEDs, with a particular reference to thermal phenomena. Special attention is paid to modelling the characteristics of the circuit protecting the considered device against excessive values of the reverse voltage, and to the description of the influence of temperature on optical power. The network form of the developed model is presented, and some results of experimental verification of this model for selected diodes operating under different cooling conditions are described. Very good agreement between the calculated and measured characteristics is obtained

  7. Accounting for correlated observations in an age-based state-space stock assessment model

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders

    2016-01-01

    Fish stock assessment models often rely on size- or age-specific observations that are assumed to be statistically independent of each other. In reality, these observations are not raw observations, but rather they are estimates from a catch-standardization model or similar summary statistics base... the independence assumption is rejected. Less fluctuating estimates of the fishing mortality are obtained due to a reduced process error. The improved model does not suffer from correlated residuals, unlike the independent model, and the variance of forecasts is decreased....

  8. Mathematical Model Taking into Account Nonlocal Effects of Plasmonic Structures on the Basis of the Discrete Source Method

    Science.gov (United States)

    Eremin, Yu. A.; Sveshnikov, A. G.

    2018-04-01

    The discrete source method is used to develop and implement a mathematical model for solving the problem of scattering of electromagnetic waves by a three-dimensional plasmonic scatterer with nonlocal effects taken into account. Numerical results are presented that demonstrate the features of the scattering properties of plasmonic particles, with allowance for nonlocal effects, depending on the direction and polarization of the incident wave.

  9. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    OpenAIRE

    Degeling, Koen; IJzerman, Maarten J.; Koopman, Miriam; Koffijberg, Hendrik

    2017-01-01

    Background: Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by ...

  10. Mathematical model of rod oscillations with account of material relaxation behaviour

    Science.gov (United States)

    Kudinov, I. V.; Kudinov, V. A.; Eremin, A. V.; Zhukov, V. V.

    2018-03-01

    Taking into account the bounded velocity of propagation of strains and deformations in the formula given by Hooke's law, the authors have obtained the differential equation of damped rod oscillations that includes the first and the third time derivatives of displacement as well as the mixed derivative (with respect to space and time variables). Study of its exact analytical solution, found by means of separation of variables, has shown that rod recovery after being disturbed is accompanied by low-amplitude damped oscillations that occur at the start time and only within the range of positive displacement values. The oscillation amplitude decreases with an increase of the relaxation factor. The rod recovers virtually without an oscillating process both in the limit and at any sufficiently high values of the relaxation factor.

  11. A Social Audit Model for Agro-biotechnology Initiatives in Developing Countries: Accounting for Ethical, Social, Cultural, and Commercialization Issues

    Directory of Open Access Journals (Sweden)

    Obidimma Ezezika

    2009-10-01

    There is skepticism and resistance to innovations associated with agro-biotechnology projects, leading to the possibility of failure. The source of the skepticism is complex, but partly traceable to how local communities view genetically engineered crops, public perception of the technology's implications, and views on the role of the private sector in public health and agriculture, especially in the developing world. We posit that a governance and management model in which ethical, social, cultural, and commercialization issues are accounted for and addressed is important in mitigating the risk of project failure and improving the appropriate adoption of agro-biotechnology in sub-Saharan Africa. We introduce a social audit model, which we term Ethical, Social, Cultural and Commercialization (ESC2) auditing, and which we developed based on feedback from a number of stakeholders. We lay the foundation for its importance in agro-biotechnology development projects and show how the model can be applied to projects run by Public Private Partnerships. We argue that the implementation of the audit model can help to build public trust by facilitating project accountability and transparency. The model also provides evidence on how ESC2 issues are perceived by various stakeholders, which enables project managers to effectively monitor and improve project performance. Although this model was specifically designed for agro-biotechnology initiatives, we show how it can also be applied to other development projects.

  12. Assessing and accounting for time heterogeneity in stochastic actor oriented models

    NARCIS (Netherlands)

    Lospinoso, Joshua A.; Schweinberger, Michael; Snijders, Tom A. B.; Ripley, Ruth M.

    This paper explores time heterogeneity in stochastic actor oriented models (SAOM) proposed by Snijders (Sociological methodology. Blackwell, Boston, pp 361-395, 2001) which are meant to study the evolution of networks. SAOMs model social networks as directed graphs with nodes representing people,

  13. Bioeconomic Modelling of Wetlands and Waterfowl in Western Canada: Accounting for Amenity Values

    NARCIS (Netherlands)

    Kooten, van G.C.; Whitey, P.; Wong, L.

    2011-01-01

    This study reexamines and updates an original bioeconomic model of optimal duck harvest and wetland retention by Hammack and Brown (1974, Waterfowl and Wetlands: Toward Bioeconomic Analysis. Washington, DC: Resources for the Future). It then extends the model to include the nonmarket (in situ) value

  14. Supportive Accountability: A model for providing human support for internet and ehealth interventions

    NARCIS (Netherlands)

    Mohr, D.C.; Cuijpers, P.; Lehman, K.A.

    2011-01-01

    The effectiveness of and adherence to eHealth interventions is enhanced by human support. However, human support has largely not been manualized and has usually not been guided by clear models. The objective of this paper is to develop a clear theoretical model, based on relevant empirical

  15. An individual-based model of Zebrafish population dynamics accounting for energy dynamics

    DEFF Research Database (Denmark)

    Beaudouin, Remy; Goussen, Benoit; Piccini, Benjamin

    2015-01-01

    Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget for individual zebrafish (DEB model...

  16. THE DISTRIBUTION MODELING OF IMPURITIES IN THE ATMOSPHERE WITH TAKING INTO ACCOUNT OF TERRAIN

    Directory of Open Access Journals (Sweden)

    P. B. Mashyhina

    2009-03-01

    A 2D numerical model to simulate pollutant dispersion over complex terrain is proposed. The model is based on the equation of potential flow and the equation of admixture transfer. Results of a numerical experiment are presented.

  17. Accounting for perception in random regret choice models: Weberian and generalized Weberian specifications

    NARCIS (Netherlands)

    Jang, S.; Rasouli, S.; Timmermans, H.J.P.

    2016-01-01

    Recently, regret-based choice models have been introduced in the travel behavior research community as an alternative to expected/random utility models. The fundamental proposition underlying regret theory is that individuals minimize the amount of regret they (are expected to) experience when

  18. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility, leading to a higher amount of traffic-related activity on a global scale. ...

  19. A constitutive model accounting for strain ageing effects on work-hardening. Application to a C-Mn steel

    Science.gov (United States)

    Ren, Sicong; Mazière, Matthieu; Forest, Samuel; Morgeneyer, Thilo F.; Rousselier, Gilles

    2017-12-01

    One of the most successful models for describing the Portevin-Le Chatelier effect in engineering applications is the Kubin-Estrin-McCormick model (KEMC). In the present work, the influence of dynamic strain ageing on dynamic recovery due to dislocation annihilation is introduced in order to improve the KEMC model. This modification accounts for additional strain hardening rate due to limited dislocation annihilation by the diffusion of solute atoms and dislocation pinning at low strain rate and/or high temperature. The parameters associated with this novel formulation are identified based on tensile tests for a C-Mn steel at seven temperatures ranging from 20 °C to 350 °C. The validity of the model and the improvement compared to existing models are tested using 2D and 3D finite element simulations of the Portevin-Le Chatelier effect in tension.

  20. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    International Nuclear Information System (INIS)

    Branlard, E; Gaunaa, M; Machefaux, E

    2014-01-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum (BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw-models. The model and its implementation are presented. Data from the MEXICO experiment are used as a basis for validation. Three tools using the same 2D airfoil coefficient data are compared: a BEM code, an Actuator-Line code and a vortex code. The vortex code is further used to validate the results from the newly implemented BEM yaw-model. Significant improvements are obtained for the prediction of loads and induced velocities. Further relaxation of the main assumptions of the model is briefly presented and discussed.

  1. 7 CFR 1735.92 - Accounting considerations.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Accounting considerations. 1735.92 Section 1735.92... All Acquisitions and Mergers § 1735.92 Accounting considerations. (a) Proper accounting shall be... in the absence of such a commission, as required by RUS based on Generally Accepted Accounting...

  2. An improved car-following model accounting for the preceding car's taillight

    Science.gov (United States)

    Zhang, Jian; Tang, Tie-Qiao; Yu, Shao-Wei

    2018-02-01

    During the deceleration process, the preceding car's taillight may influence the following car's driving behavior. In this paper, we propose an extended car-following model that takes the preceding car's taillight into consideration. Two typical situations are used to simulate each car's movement and to study the effects of the preceding car's taillight on driving behavior. Meanwhile, a sensitivity analysis of the model parameter is discussed in detail. The numerical results show that the proposed model can improve the stability of traffic flow, and that traffic safety can be enhanced without a decrease in efficiency, especially when cars pass through a signalized intersection.

  3. A Nonlinear Transmission Line Model of the Cochlea With Temporal Integration Accounts for Duration Effects in Threshold Fine Structure

    DEFF Research Database (Denmark)

    Verhey, Jesko L.; Mauermann, Manfred; Epp, Bastian

    2017-01-01

    For normal-hearing listeners, auditory pure-tone thresholds in quiet often show quasi-periodic fluctuations when measured with a high frequency resolution, referred to as threshold fine structure. Threshold fine structure is dependent on the stimulus duration, with smaller fluctuations for short... than for long signals. The present study demonstrates how this effect can be captured by a nonlinear and active model of the cochlea in combination with a temporal integration stage. Since this cochlear model also accounts for fine structure and connected level-dependent effects, it is superior...

  4. Modelling of L-valine Repeated Fed-batch Fermentation Process Taking into Account the Dissolved Oxygen Tension

    Directory of Open Access Journals (Sweden)

    Tzanko Georgiev

    2009-03-01

    This article deals with the synthesis of a dynamic unstructured model of a variable-volume fed-batch fermentation process with intensive droppings for L-valine production. The presented approach to the investigation includes the following main procedures: description of the process by generalized stoichiometric equations; preliminary data processing and calculation of specific rates for the main kinetic variables; identification of the specific rates taking into account the dissolved oxygen tension; establishment and optimisation of a dynamic model of the process; and simulation research. MATLAB is used as the research environment.

  5. Precedent Proper Names in Informal Oikonymy

    Directory of Open Access Journals (Sweden)

    Maria V. Akhmetova

    2013-06-01

    The paper deals with Russian-language informal city names (oikonyms) motivated by other toponyms (with reference to Russia and the CIS). The author shows that the motivating proper name can replace the city name (e. g. Глазго < Glasgow ‘Glazov’) or contaminate with it (e. g. Экибостон < Ekibastuz + Boston), the “alien” onym being attracted to construct an informal oikonym due to its phonetic similarity or, on occasion, due to an affinity, either real or imaginary, between the two settlements. The author argues that the phonetic motivation is more characteristic of the modern urban tradition than of popular dialects.

  6. Accounting for scattering in the Landauer-Datta-Lundstrom transport model

    Directory of Open Access Journals (Sweden)

    Юрій Олексійович Кругляк

    2015-03-01

    Full Text Available Scattering of carriers in the LDL transport model during the changes of the scattering times in the collision processes is considered qualitatively. The basic relationship between the transmission coefficient T and the average mean free path  is derived for 1D conductor. As an example, the experimental data for Si MOSFET are analyzed with the use of various models of reliability.

  7. Loading Processes Dynamics Modelling Taking into Account the Bucket-Soil Interaction

    Directory of Open Access Journals (Sweden)

    Carmen Debeleac

    2007-10-01

    The author proposes three dynamic models specialized for the analysis of the vibrations and resistive forces that appear during the loading process with different construction equipment, such as front loaders and excavators. The models highlight the components of digging: penetration, cutting, and loading. The conclusions of this study identify the dynamic overloads that appear in the working state and that induce self-oscillations in the equipment structure.

  8. A multiphase constitutive model of reinforced soils accounting for soil-inclusion interaction behaviour

    OpenAIRE

    BENNIS, M; DE BUHAN, P

    2003-01-01

    A two-phase continuum description of reinforced soil structures is proposed in which the soil mass and the reinforcement network are treated as mutually interacting superposed media. The equations governing such a model are developed in the context of elastoplasticity, with special emphasis put on the soil/reinforcement interaction constitutive law. As shown in an illustrative example, such a model paves the way for numerically efficient design methods of reinforced soil structures.

  9. Mathematical modeling of pigment dispersion taking into account the full agglomerate particle size distribution

    DEFF Research Database (Denmark)

    Kiil, Søren

    2017-01-01

    The purpose of this work is to develop a mathematical model that can quantify the dispersion of pigments, with a focus on the mechanical breakage of pigment agglomerates. The underlying physical mechanism was assumed to be surface erosion of spherical pigment agglomerates. The full agglomerate particle size distribution was taken into account... e.g., in the development of novel dispersion principles and for analysis of dispersion failures. The general applicability of the model, beyond the three pigments considered, needs to be confirmed....

  10. Management Accounting

    OpenAIRE

    John Burns; Martin Quinn; Liz Warren; João Oliveira

    2013-01-01

    Overview of the Book: The textbook comprises six sections which together represent a comprehensive insight into management accounting - its technical attributes, changeable wider context, and the multiple roles of management accountants. The sections cover: (1) an introduction to management accounting, (2) how organizations account for their costs, (3) the importance of tools and techniques which assist organizational planning and control, (4) the various dimensions of making business decisions...

  11. An extended continuum model accounting for the driver's timid and aggressive attributions

    International Nuclear Information System (INIS)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-01-01

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. By applying linear stability theory, we present an analysis of the new model's linear stability. Through nonlinear analysis, the KdV–Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving is better than timid driving because the aggressive driver adjusts his speed in a timely manner according to the leading car's speed. The key improvement of this new model is that it captures how timid driving deteriorates traffic stability while aggressive driving enhances traffic stability. The relationship of energy consumption between aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption. - Highlights: • A new continuum model is developed with the consideration of the driver's timid and aggressive behaviors simultaneously. • Applying the linear stability theory, the new model's linear stability is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.

  12. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    Science.gov (United States)

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-03

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined (US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel), 95% of the results occurred within ±20 g CO2e MJ⁻¹ of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
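
    The parametric uncertainty quantification described above can be pictured with a toy Monte Carlo propagation: sample uncertain parameters from assumed distributions, push each draw through the emissions calculation, and summarize the spread. The relation and distributions below are placeholders, not the study's coupled equilibrium and emissions models.

    ```python
    # A toy Monte Carlo propagation of parameter uncertainty to an emissions
    # intensity; the functional form and distributions are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000
    yield_response = rng.normal(0.25, 0.05, n)              # assumed share absorbed by yields
    land_emissions = rng.lognormal(np.log(25.0), 0.3, n)    # assumed gCO2e/MJ from land conversion
    iluc_intensity = (1.0 - yield_response) * land_emissions

    lo, hi = np.percentile(iluc_intensity, [2.5, 97.5])
    print(f"mean {iluc_intensity.mean():.1f} gCO2e/MJ, 95% interval ({lo:.1f}, {hi:.1f})")
    ```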

  13. An extended continuum model accounting for the driver's timid and aggressive attributions

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Rongjun; Ge, Hongxia [Faculty of Maritime and Transportation, Ningbo University, Ningbo 315211 (China); Jiangsu Province Collaborative Innovation Center for Modern Urban Traffic Technologies, Nanjing 210096 (China); National Traffic Management Engineering and Technology Research Centre Ningbo University Sub-centre, Ningbo 315211 (China); Wang, Jufeng, E-mail: wjf@nit.zju.edu.cn [Ningbo Institute of Technology, Zhejiang University, Ningbo 315100 (China)

    2017-04-18

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forwarded in this paper. By applying the linear stability theory, we presented the analysis of new model's linear stability. Through nonlinear analysis, the KdV–Burgers equation is derived to describe density wave near the neutral stability line. Numerical results verify that aggressive driving is better than timid act because the aggressive driver will adjust his speed timely according to the leading car's speed. The key improvement of this new model is that the timid driving deteriorates traffic stability while the aggressive driving will enhance traffic stability. The relationship of energy consumption between the aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress the traffic congestion but also reduce the energy consumption. - Highlights: • A new continuum model is developed with the consideration of the driver's timid and aggressive behaviors simultaneously. • Applying the linear stability theory, the new model's linear stability is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.

  14. Modelling the long term alteration of concretes: taking carbonation into account

    International Nuclear Information System (INIS)

    Badouix, Franck

    1999-01-01

    After an introduction on the storage and warehousing of wastes from the nuclear industry (principles and objectives, general historic context, classification of radioactive wastes), an overview of studies performed within the CEA on wastes (activities related to the fuel cycle, research on warehousing and storage materials), and an introduction to the development of a general code for simulating the degradation of cement-matrix materials and of a model of concrete carbonation under water, this research thesis reports a bibliographical study on the following topics: the case of a non-altered hydrated concrete, expertise performed on altered materials from industrial sites, and the alteration of CPA-CEM I paste (alteration by demineralized water, carbonation). Based on these observations, a simplified model is developed for the cross diffusion of calcium and carbonates in a semi-infinite inert porous matrix of portlandite. This model is used to simulate degradations performed in the laboratory on a CPA-CEM I paste. The model proves to be insufficient as far as carbonation is concerned. Tests are performed to study the influence of aggregates on a concrete (from an industrial site or elaborated in the laboratory with a known composition) in water with low mineral content. A model is developed to understand the behaviour of paste-aggregate interfaces. Then, concretes are leached in carbonated water, and by using the previous results and the simplified modelling of carbonation, simulations are performed and compared with experimental results [fr]

  15. An agent-based simulation model of patient choice of health care providers in accountable care organizations.

    Science.gov (United States)

    Alibrahim, Abdullah; Wu, Shinyi

    2018-03-01

    Accountable care organizations (ACOs) in the United States show promise in controlling health care costs while preserving patients' choice of providers. Understanding the effects of patient choice is critical in novel payment and delivery models like the ACO that depend on continuity of care and accountability. The financial, utilization, and behavioral implications associated with a patient's decision to forego local health care providers for more distant ones to access higher quality care remain unknown. To study this question, we used an agent-based simulation model of a health care market composed of providers able to form ACOs serving patients, and embedded in it a conditional logit decision model to examine patients capable of choosing their care providers. This simulation focuses on Medicare beneficiaries and their congestive heart failure (CHF) outcomes. We place the patient agents in an ACO delivery system model in which provider agents decide whether they remain in an ACO and perform a quality-improving CHF disease management intervention. Illustrative results show that allowing patients to choose their providers reduces the yearly payment per CHF patient by $320, reduces mortality rates by 0.12 percentage points and hospitalization rates by 0.44 percentage points, and marginally increases provider participation in ACOs. This study demonstrates a model capable of quantifying the effects of patient choice in a theoretical ACO system and provides a potential tool for policymakers to understand the implications of patient choice and assess potential policy controls.
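
    The conditional logit decision rule referred to above assigns each candidate provider a choice probability proportional to exp(utility). The sketch below shows that rule for one patient choosing among three hypothetical providers; the attributes and taste weights are assumptions for illustration, not the calibrated model.

    ```python
    # A minimal conditional-logit provider choice; attributes and weights are assumed.
    import numpy as np

    def choice_probabilities(utilities):
        expu = np.exp(utilities - utilities.max())   # numerically stabilized softmax
        return expu / expu.sum()

    # Each row is one candidate provider: (negative travel distance, quality score).
    attributes = np.array([[-2.0, 0.7],
                           [-10.0, 0.9],
                           [-5.0, 0.8]])
    weights = np.array([0.3, 4.0])                    # assumed taste parameters
    print(choice_probabilities(attributes @ weights)) # probabilities sum to 1
    ```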

  16. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    Science.gov (United States)

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. However, goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive-control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
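
    To make the model-free (habitual) component concrete, the sketch below runs a tabular Q-learning agent on a toy two-action probabilistic task; the task structure and parameters are illustrative assumptions, not the paper's simulation of the probabilistic learning task.

    ```python
    # A minimal model-free (habitual) learner: epsilon-greedy tabular Q-learning
    # on a two-armed probabilistic task; all parameters are assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    q = np.zeros(2)                        # action values
    alpha, epsilon = 0.1, 0.1              # learning rate, exploration rate
    reward_prob = np.array([0.8, 0.2])     # action 0 pays off more often

    for _ in range(2000):
        a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(q))
        r = float(rng.random() < reward_prob[a])
        q[a] += alpha * (r - q[a])         # trial-and-error value update

    print(q)                               # values should approach the reward probabilities
    ```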

  17. Using a chemistry transport model to account for the spatial variability of exposure concentrations in epidemiologic air pollution studies.

    Science.gov (United States)

    Valari, Myrto; Menut, Laurent; Chatignoux, Edouard

    2011-02-01

    Environmental epidemiology, and more specifically time-series analysis, has traditionally used area-averaged pollutant concentrations measured at central monitors as exposure surrogates to associate health outcomes with air pollution. However, spatial aggregation has been shown to contribute to the overall bias in the estimation of the exposure-response functions. This paper presents the benefit of adding features of the spatial variability of exposure by using concentration fields modeled with a chemistry transport model instead of monitor data and accounting for human activity patterns. On the basis of county-level census data for the city of Paris, France, and a Monte Carlo simulation, a simple activity model was developed accounting for the temporal variability between working and evening hours as well as during transit. By combining activity data with modeled concentrations, the downtown, suburban, and rural spatial patterns in exposure to nitrogen dioxide, ozone, and PM2.5 (particulate matter [PM]) were derived, and the effects of pollution on total nonaccidental mortality were assessed for the 4-yr period from 2001 to 2004. It was shown that the time series of the exposure surrogates developed here are less correlated across co-pollutants than in the case of the area-averaged monitor data. This led to less biased exposure-response functions when all three co-pollutants were inserted simultaneously in the same regression model. This finding yields insight into pollutant-specific health effects that are otherwise masked by the high correlation among co-pollutants.

  18. A Proper Perspective on the Twin Deficits

    Science.gov (United States)

    1989-05-01

    deficit twins, the relation between them, and their consanguine parentage. The trade deficit or, to be more accurate, the current account deficit, is... In general, there is a small negative, but statistically significant, relationship between the size of the federal deficit in one year and the

  19. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards has witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed

  20. Accounting outsourcing

    OpenAIRE

    Linhartová, Lucie

    2012-01-01

    This thesis gives a comprehensive view of accounting outsourcing, dealing with the outsourcing process from its beginning (conditions of collaboration, making of the contract), through collaboration, to its possible ending. The work defines outsourcing and indicates the main advantages, disadvantages and arguments for its use. The main focus of the thesis is the practical side of accounting outsourcing and the provision of high-quality accounting services.

  1. Mechanistic Physiologically Based Pharmacokinetic (PBPK) Model of the Heart Accounting for Inter-Individual Variability: Development and Performance Verification.

    Science.gov (United States)

    Tylutki, Zofia; Mendyk, Aleksander; Polak, Sebastian

    2018-04-01

    Modern model-based approaches to cardiac safety and efficacy assessment require accurate establishment of the drug concentration-effect relationship. Thus, knowledge of the active concentration of drugs in heart tissue is desirable, along with estimation of the influence of inter-subject variability. To that end, we developed a mechanistic physiologically based pharmacokinetic model of the heart. The models were described with literature-derived parameters and written in R, v.3.4.0. Five parameters were estimated. The model was fitted to amitriptyline and nortriptyline concentrations after an intravenous infusion of amitriptyline. The cardiac model consisted of 5 compartments representing the pericardial fluid, heart extracellular water, and epicardial intracellular, midmyocardial intracellular, and endocardial intracellular fluids. Drug cardiac metabolism, passive diffusion, active efflux, and uptake were included in the model as mechanisms involved in the drug disposition within the heart. The model accounted for inter-individual variability. The estimates of optimized parameters were within physiological ranges. The model performance was verified by simulating 5 clinical studies of amitriptyline intravenous infusion, and the simulated pharmacokinetic profiles agreed with clinical data. The results support the model's feasibility. The proposed structure can be tested with the goal of improving patient-specific model-based cardiac safety assessment and offers a framework for predicting cardiac concentrations of various xenobiotics. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  2. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
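
    One way to picture the finite mixture approach described above: duplicate each record under both possible exposure values, weight the copies by posterior membership probabilities, and alternate between fitting a weighted logistic outcome model (M-step) and updating the weights (E-step). The sketch below does this on simulated data; the data-generating values and the use of scikit-learn are assumptions for illustration, not the authors' implementation.

    ```python
    # An illustrative EM sketch for a logistic outcome model with an unobserved
    # binary exposure known only through a predicted probability p; simulated data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    x = rng.binomial(1, 0.4, n)                                       # true, unobserved exposure
    p = np.clip(0.7 * x + 0.15 + rng.normal(0, 0.05, n), 0.01, 0.99)  # predicted P(X=1)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * x))))      # outcome, true log-odds 1.5

    X_aug = np.r_[np.ones(n), np.zeros(n)].reshape(-1, 1)             # each record duplicated for X=1/0
    y_aug = np.r_[y, y]
    w = np.r_[p, 1.0 - p]                                             # initial membership weights

    for _ in range(20):                                               # EM iterations
        fit = LogisticRegression().fit(X_aug, y_aug, sample_weight=w)     # M-step: weighted GLM
        f1 = fit.predict_proba(np.ones((n, 1)))[:, 1]
        f0 = fit.predict_proba(np.zeros((n, 1)))[:, 1]
        like1 = np.where(y == 1, f1, 1.0 - f1) * p
        like0 = np.where(y == 1, f0, 1.0 - f0) * (1.0 - p)
        post = like1 / (like1 + like0)                                # E-step: posterior P(X=1 | y, p)
        w = np.r_[post, 1.0 - post]

    print(fit.coef_, fit.intercept_)   # should land near the simulated values (1.5, -1.0)
    ```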

  3. Translating Institutional Templates: A Historical Account of the Consequences of Importing Policing Models into Argentina

    Directory of Open Access Journals (Sweden)

    Matías Dewey

    2017-01-01

    This article focuses on the translation of the French and English law enforcement models into Argentina and analyzes its consequences in terms of social order. Whereas in the former two models the judiciary and police institutions originated in large-scale processes of historical consolidation, in the latter these institutions were implanted without the antecedents present in their countries of origin. The empirical references are Argentine police institutions, particularly the police of the Buenos Aires Province, observed at two moments in which the institutional import was particularly intense: towards the end of the nineteenth and beginning of the twentieth centuries, and at the end of the twentieth century. By way of tracing these processes of police constitution and reform, we show how new models of law enforcement and policing interacted with indigenous political structures and cultural frames, as well as how this constellation produced a social order in which legality and illegality are closely interwoven. The article is an attempt to go beyond the common observations regarding how an imported model failed; instead, it dissects the effects the translation actually produced and how the translated models transform into resources that reshape the new social order. A crucial element, the article shows, is that these resources can be instrumentalized according to »idiosyncrasies«, interests, and quotas of power.

  4. Unsupervised machine learning account of magnetic transitions in the Hubbard model

    Science.gov (United States)

    Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan

    2018-01-01

    We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
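
    As a small illustration of the dimensionality-reduction step, the sketch below applies t-SNE to toy spin configurations drawn from an "ordered" and a "disordered" ensemble; these random proxies stand in for the Monte Carlo samples of the Ising and Hubbard models used in the paper.

    ```python
    # t-SNE on toy spin configurations; the two ensembles are crude random proxies
    # for low- and high-temperature Monte Carlo samples, assumed for illustration.
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)
    ordered = np.sign(rng.normal(0.8, 0.5, size=(100, 64)))    # mostly aligned spins
    disordered = rng.choice([-1.0, 1.0], size=(100, 64))       # random spins
    X = np.vstack([ordered, disordered])

    embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
    print(embedding.shape)   # (200, 2); the two ensembles typically separate in this plane
    ```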

  5. An extension of a high temperature creep model to account for fuel sheath oxidation

    International Nuclear Information System (INIS)

    Boccolini, G.; Valli, G.

    1983-01-01

    Starting from NIRVANA, the high-temperature creep model for Zircaloy fuel sheathing developed by AECL, a multilayer model is proposed in this paper: it includes the outer oxide plus alpha retained layers and the inner core of beta or alpha-plus-beta material, all constrained to deform at the same creep rate. The model has been incorporated into the SPARA fuel computer code developed for the transient analysis of fuel rod behaviour in the CIRENE prototype reactor, but it is in principle valid for all Zircaloy fuel sheathings. Its predictions are compared with experimental results from burst tests on BWR- and PWR-type sheaths; the tests were carried out at CNEN under two research contracts with Ansaldo Meccanico Nucleare and Sigen-Sopren, respectively

  6. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    Energy Technology Data Exchange (ETDEWEB)

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axi-symmetric model simulates the temperature and pressure dependent exchange reaction kinetics, pressure and isotope dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  7. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jones-Smith, Jessica C; Igusa, Takeru

    2017-01-01

    Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.

  8. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Directory of Open Access Journals (Sweden)

    Rahmatollah Beheshti

    Full Text Available Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.

  9. Improved signal model for confocal sensors accounting for object depending artifacts.

    Science.gov (United States)

    Mauch, Florian; Lyda, Wolfram; Gronle, Marc; Osten, Wolfgang

    2012-08-27

    The conventional signal model of confocal sensors is well established and has proven to be exceptionally robust especially when measuring rough surfaces. Its physical derivation however is explicitly based on plane surfaces or point like objects, respectively. Here we show experimental results of a confocal point sensor measurement of a surface standard. The results illustrate the rise of severe artifacts when measuring curved surfaces. On this basis, we present a systematic extension of the conventional signal model that is proven to be capable of qualitatively explaining these artifacts.

  10. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    OpenAIRE

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts, allowing the inclusion of a broader set of ecosystem service types such as regulating services and cultural services. Consistent with the principles of national accounts, ecosystem accounting focuses on asse...

  11. A comparison of land use change accounting methods: seeking common grounds for key modeling choices in biofuel assessments

    DEFF Research Database (Denmark)

    de Bikuna Salinas, Koldo Saez; Hamelin, Lorie; Hauschild, Michael Zwicky

    2018-01-01

    Five currently used methods to account for the global warming (GW) impact of the induced land-use change (LUC) greenhouse gas (GHG) emissions have been applied to four biofuel case studies. Two of the investigated methods attempt to avoid the need of considering a definite occupation -thus...... amortization period by considering ongoing LUC trends as a dynamic baseline. This leads to the accounting of a small fraction (0.8%) of the related emissions from the assessed LUC, thus their validity is disputed. The comparison of methods and contrasting case studies illustrated the need of clearly...... distinguishing between the different time horizons involved in life cycle assessments (LCA) of land-demanding products like biofuels. Absent in ISO standards, and giving rise to several confusions, definitions for the following time horizons have been proposed: technological scope, inventory model, impact...

  12. Behavioral health and health care reform models: patient-centered medical home, health home, and accountable care organization.

    Science.gov (United States)

    Bao, Yuhua; Casalino, Lawrence P; Pincus, Harold Alan

    2013-01-01

    Discussions of health care delivery and payment reforms have largely been silent about how behavioral health could be incorporated into reform initiatives. This paper draws attention to four patient populations defined by the severity of their behavioral health conditions and insurance status. It discusses the potentials and limitations of three prominent models promoted by the Affordable Care Act to serve populations with behavioral health conditions: the Patient-Centered Medical Home, the Health Home initiative within Medicaid, and the Accountable Care Organization. To incorporate behavioral health into health reform, policymakers and practitioners may consider embedding in the reform efforts explicit tools-accountability measures and payment designs-to improve access to and quality of care for patients with behavioral health needs.

  13. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia

    Directory of Open Access Journals (Sweden)

    Supriyati

    2015-12-01

    Full Text Available Bank Indonesia demands that the national banks should improve their transparency of financial condition and performance for the public in line with the development of their products and activities. Furthermore, the banks’ financial statements of Bank Indonesia have become the basis for determining the status of their soundness. In fact, they tend to practice earnings management in order that they can meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earning management has a positive impact on the performance of management. However, for the users of financial statements, it may differ, for example for the value of the company, the length of time of the financial audit, and other aspects of tax evasion by the banks. This study tries to find out 1) the effect of GCG on Earnings Management, 2) the effect of earning management on Company value, the Audit Report Lag, and Taxation, and 3) the effect of Audit Report Lag on Corporate Value and Taxation. This is a quantitative research with the data collected from the bank financial statements, GCG implementation report, and the banks’ annual reports of 2003-2013. There were 41 banks taken using purposive sampling, as listed on the Indonesia Stock Exchange. The results showed that the implementation of GCG affects the occurrence of earning management. Accounting policy flexibility through earning management is expected to affect the length of the audit process and the accuracy of the financial statements presentation on the public side. This research is expected to provide managerial implications in order to consider the possibility of earnings management practices in the banking industry. In the long term, earning management is expected to improve the banks’ competitiveness through an increase in the value of the company. Explicitly, earning management also affects tax avoidance; therefore, the banks intend to pay lower taxes without breaking the existing taxation legislation.

  14. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia (P.197-207

    Directory of Open Access Journals (Sweden)

    Supriyati Supriyati

    2017-01-01

    Full Text Available Bank Indonesia demands that the national banks should improve their transparency of financial condition and performance for the public in line with the development of their products and activities. Furthermore, the banks’ financial statements of Bank Indonesia have become the basis for determining the status of their soundness. In fact, they tend to practice earnings management in order that they can meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earning management has a positive impact on the performance of management. However, for the users of financial statements, it may differ, for example for the value of the company, the length of time of the financial audit, and other aspects of tax evasion by the banks. This study tries to find out 1) the effect of GCG on Earnings Management, 2) the effect of earning management on Company value, the Audit Report Lag, and Taxation, and 3) the effect of Audit Report Lag on Corporate Value and Taxation. This is a quantitative research with the data collected from the bank financial statements, GCG implementation report, and the banks’ annual reports of 2003-2013. There were 41 banks taken using purposive sampling, as listed on the Indonesia Stock Exchange. The results showed that the implementation of GCG affects the occurrence of earning management. Accounting policy flexibility through earning management is expected to affect the length of the audit process and the accuracy of the financial statements presentation on the public side. This research is expected to provide managerial implications in order to consider the possibility of earnings management practices in the banking industry. In the long term, earning management is expected to improve the banks’ competitiveness through an increase in the value of the company. Explicitly, earning management also affects tax avoidance; therefore, the banks intend to pay lower taxes without breaking the existing taxation legislation.

  15. Small strain multiphase-field model accounting for configurational forces and mechanical jump conditions

    Science.gov (United States)

    Schneider, Daniel; Schoof, Ephraim; Tschukin, Oleg; Reiter, Andreas; Herrmann, Christoph; Schwab, Felix; Selzer, Michael; Nestler, Britta

    2018-03-01

    Computational models based on the phase-field method have become an essential tool in material science and physics in order to investigate materials with complex microstructures. The models typically operate on a mesoscopic length scale resolving structural changes of the material and provide valuable information about the evolution of microstructures and mechanical property relations. For many interesting and important phenomena, such as martensitic phase transformation, mechanical driving forces play an important role in the evolution of microstructures. In order to investigate such physical processes, an accurate calculation of the stresses and the strain energy in the transition region is indispensable. We recall a multiphase-field elasticity model based on the force balance and the Hadamard jump condition at the interface. We show the quantitative characteristics of the model by comparing the stresses, strains and configurational forces with theoretical predictions in two-phase cases and with results from sharp interface calculations in a multiphase case. As an application, we choose the martensitic phase transformation process in multigrain systems and demonstrate the influence of the local homogenization scheme within the transition regions on the resulting microstructures.
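
    For orientation, the two interface conditions referred to above can be written in generic small-strain form as the following LaTeX sketch (α and β label the two phases, n is the interface normal and a is an arbitrary vector; this is the textbook statement, not necessarily the paper's exact notation):

      \[
        [\![\boldsymbol{\sigma}]\!]\,\mathbf{n} \;=\; \bigl(\boldsymbol{\sigma}^{\beta}-\boldsymbol{\sigma}^{\alpha}\bigr)\mathbf{n} \;=\; \mathbf{0}
        \qquad \text{(force balance / traction continuity)}
      \]
      \[
        [\![\boldsymbol{\varepsilon}]\!] \;=\; \boldsymbol{\varepsilon}^{\beta}-\boldsymbol{\varepsilon}^{\alpha} \;=\; \tfrac{1}{2}\bigl(\mathbf{a}\otimes\mathbf{n}+\mathbf{n}\otimes\mathbf{a}\bigr)
        \qquad \text{(Hadamard compatibility of the strain jump)}
      \]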

  16. An analytical model for CDMA downlink rate optimization taking into account uplink coverage restriction

    NARCIS (Netherlands)

    Endrayanto, A.I.; van den Berg, Hans Leo; Boucherie, Richardus J.

    2003-01-01

    This paper models and analyzes downlink and uplink power assignment in Code Division Multiple Access (CDMA) mobile networks. By discretizing the area into small segments, the power requirements are characterized via a matrix representation that separates user and system characteristics. We obtain a

  17. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    Science.gov (United States)

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  18. Accounting for false-positive acoustic detections of bats using occupancy models

    Science.gov (United States)

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    1. Acoustic surveys have become a common survey method for bats and other vocal taxa. Previous work shows that bat echolocation may be misidentified, but common analytic methods, such as occupancy models, assume that misidentifications do not occur. Unless rare, such misidentifications could lead to incorrect inferences with significant management implications.

  19. Accounting for change of support in spatial accuracy assessment of modelled soil mineral phosphorous concentration

    NARCIS (Netherlands)

    Leopold, U.; Heuvelink, G.B.M.; Tiktak, A.; Finke, P.A.; Schoumans, O.F.

    2006-01-01

    Agricultural activities in the Netherlands cause high nitrogen and phosphorous fluxes from soil to ground- and surface water. A model chain (STONE) has been developed to study and predict the magnitude of the resulting ground- and surface water pollution under different environmental conditions.

  20. Accounting for heterogeneity in travel episode satisfaction using a random parameters panel effects regression model

    NARCIS (Netherlands)

    Rasouli, Soora; Timmermans, Harry

    2014-01-01

    Rasouli & Timmermans1 suggested a model of travel episode satisfaction that includes the degree and nature of multitasking, activity envelope, transport mode, travel party, duration and a set of contextual and socio-economic variables. In this sequel, the focus of attention shifts to the analysis of

  1. Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper

    Science.gov (United States)

    Hock, Heinrich; Isenberg, Eric

    2012-01-01

    Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…

  2. Practical Model for First Hyperpolarizability Dispersion Accounting for Both Homogeneous and Inhomogeneous Broadening Effects.

    Science.gov (United States)

    Campo, Jochen; Wenseleers, Wim; Hales, Joel M; Makarov, Nikolay S; Perry, Joseph W

    2012-08-16

    A practical yet accurate dispersion model for the molecular first hyperpolarizability β is presented, incorporating both homogeneous and inhomogeneous line broadening because these affect the β dispersion differently, even if they are indistinguishable in linear absorption. Consequently, combining the absorption spectrum with one free shape-determining parameter Γ_inhom, the inhomogeneous line width, turns out to be necessary and sufficient to obtain a reliable description of the β dispersion, requiring no information on the homogeneous (including vibronic) and inhomogeneous line broadening mechanisms involved, providing an ideal model for practical use in extrapolating experimental nonlinear optical (NLO) data. The model is applied to the efficient NLO chromophore picolinium quinodimethane, yielding an excellent fit of the two-photon resonant wavelength-dependent data and a dependable static value β0 = 316 × 10⁻³⁰ esu. Furthermore, we show that including a second electronic excited state in the model does yield an improved description of the NLO data at shorter wavelengths but has only limited influence on β0.

  3. Accounting for religious sensibilities in social intervention: A supplement to Donkers’ models of change

    Directory of Open Access Journals (Sweden)

    Timothy Schilling

    2006-06-01

    Full Text Available This contribution examines the extent to which Donkers' three models of change are adequate for positioning social interventions that are inspired by faith. Drawing on the work of the Brothers of the Christians Schools in Manhattan, the author argues that the social-technological model, the person-oriented model and the society-critical model are insufficient to situate this kind of work. None of these models leaves room for a horizon of experience that reaches beyond the person in relation to society, whereas for a believer a horizon of eternity, or the concept of God, plays an important role in the motivation, purpose and understanding of a social intervention. Through prayer, God is involved in the relationship between worker and client and in the process of change. The author therefore proposes a fourth, faith-based model which, he hopes, can provide a sound interpretive framework for social interventions undertaken from a religious background. In this way, more insight may be gained into how and why faith is at work in the commitment to social change: not only the Christian faith, to which his case study relates, but possibly other religions as well.

  4. Summary of model to account for inhibition of CAM corrosion by porous ceramic coating

    Energy Technology Data Exchange (ETDEWEB)

    Hopper, R., LLNL

    1998-03-31

    Corrosion occurs during five characteristic periods or regimes. These are summarized below. For more detailed discussion, see the attached Memorandum by Robert Hopper entitled "Ceramic Barrier Performance Model, Version 1.0, Description of Initial PA Input" and dated March 30, 1998.

  5. Evaluation of alternative surface runoff accounting procedures using the SWAT model

    Science.gov (United States)

    For surface runoff estimation in the Soil and Water Assessment Tool (SWAT) model, the curve number (CN) procedure is commonly adopted to calculate surface runoff by utilizing antecedent soil moisture condition (SCSI) in field. In the recent version of SWAT (SWAT2005), an alternative approach is ava...

  6. An analytical model for CDMA downlink rate optimization taking into account uplink coverage restrictions

    NARCIS (Netherlands)

    Endrayanto, A.I.; van den Berg, Hans Leo; Boucherie, Richardus J.

    2005-01-01

    This paper models and analyzes downlink and uplink power assignment in code division multiple access (CDMA) mobile networks. By discretizing the area into small segments, the power requirements are characterized via a matrix representation that separates user and system characteristics. We obtain a

  7. Revisiting Kappa to account for change in the accuracy assessment of land-use models

    NARCIS (Netherlands)

    Vliet, van J.; Bregt, A.K.; Hagen-Zanker, A.

    2011-01-01

    Land-use change models are typically calibrated to reproduce known historic changes. Calibration results can then be assessed by comparing two datasets: the simulated land-use map and the actual land-use map at the same time. A common method for this is the Kappa statistic, which expresses the

  8. Demographic Accounting and Model-Building. Education and Development Technical Reports.

    Science.gov (United States)

    Stone, Richard

    This report describes and develops a model for coordinating a variety of demographic and social statistics within a single framework. The framework proposed, together with its associated methods of analysis, serves both general and specific functions. The general aim of these functions is to give numerical definition to the pattern of society and…

  9. Models of language: towards a practice-based account of information in natural language

    NARCIS (Netherlands)

    Andrade-Lotero, E.J.

    2012-01-01

    Edgar Andrade-Lotero investigated two models of linguistic information. He focuses in particular on the philosophical presuppositions of these models. One of these models originates from formal semantics; the other model is based on a specific investigation into the role of signs in

  10. Using state-and-transition modeling to account for imperfect detection in invasive species management

    Science.gov (United States)

    Frid, Leonardo; Holcombe, Tracy; Morisette, Jeffrey T.; Olsson, Aaryn D.; Brigham, Lindy; Bean, Travis M.; Betancourt, Julio L.; Bryan, Katherine

    2013-01-01

    Buffelgrass, a highly competitive and flammable African bunchgrass, is spreading rapidly across both urban and natural areas in the Sonoran Desert of southern and central Arizona. Damages include increased fire risk, losses in biodiversity, and diminished revenues and quality of life. Feasibility of sustained and successful mitigation will depend heavily on rates of spread, treatment capacity, and cost–benefit analysis. We created a decision support model for the wildland–urban interface north of Tucson, AZ, using a spatial state-and-transition simulation modeling framework, the Tool for Exploratory Landscape Scenario Analyses. We addressed the issues of undetected invasions, identifying potentially suitable habitat and calibrating spread rates, while answering questions about how to allocate resources among inventory, treatment, and maintenance. Inputs to the model include a state-and-transition simulation model to describe the succession and control of buffelgrass, a habitat suitability model, management planning zones, spread vectors, estimated dispersal kernels for buffelgrass, and maps of current distribution. Our spatial simulations showed that without treatment, buffelgrass infestations that started with as little as 80 ha (198 ac) could grow to more than 6,000 ha by the year 2060. In contrast, applying unlimited management resources could limit 2060 infestation levels to approximately 50 ha. The application of sufficient resources toward inventory is important because undetected patches of buffelgrass will tend to grow exponentially. In our simulations, areas affected by buffelgrass may increase substantially over the next 50 yr, but a large, upfront investment in buffelgrass control could reduce the infested area and overall management costs.

  11. A single-trace dual-process model of episodic memory: a novel computational account of familiarity and recollection.

    Science.gov (United States)

    Greve, Andrea; Donaldson, David I; van Rossum, Mark C W

    2010-02-01

    Dual-process theories of episodic memory state that retrieval is contingent on two independent processes: familiarity (providing a sense of oldness) and recollection (recovering events and their context). A variety of studies have reported distinct neural signatures for familiarity and recollection, supporting dual-process theory. One outstanding question is whether these signatures reflect the activation of distinct memory traces or the operation of different retrieval mechanisms on a single memory trace. We present a computational model that uses a single neuronal network to store memory traces, but two distinct and independent retrieval processes access the memory. The model is capable of performing familiarity and recollection-based discrimination between old and new patterns, demonstrating that dual-process models need not rely on multiple independent memory traces, but can use a single trace. Importantly, our putative familiarity and recollection processes exhibit distinct characteristics analogous to those found in empirical data; they diverge in capacity and sensitivity to sparse and correlated patterns, exhibit distinct ROC curves, and account for performance on both item and associative recognition tests. The demonstration that a single-trace, dual-process model can account for a range of empirical findings highlights the importance of distinguishing between neuronal processes and the neuronal representations on which they operate.

  12. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    Directory of Open Access Journals (Sweden)

    Koen Degeling

    2017-12-01

    Full Text Available Abstract Background Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Methods Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case-study. Results Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similar for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Conclusions Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.

  13. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    Science.gov (United States)

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case-study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similar for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
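
    As an illustration of the first (bootstrap) approach described above, the following minimal Python sketch refits a parametric time-to-event distribution to resampled patient-level data; the exponential data, sample size and parameter values are hypothetical and stand in for the correlated multi-parameter distributions used in practice.

      import numpy as np

      rng = np.random.default_rng(0)
      patient_times = rng.exponential(scale=12.0, size=200)       # hypothetical individual patient data (months)

      n_boot = 1000
      boot_scales = np.empty(n_boot)
      for b in range(n_boot):
          resample = rng.choice(patient_times, size=patient_times.size, replace=True)
          boot_scales[b] = resample.mean()                         # MLE of the exponential scale on each resample

      # Each bootstrap replicate defines one plausible version of the stochastic-uncertainty
      # distribution; drawing event times from a randomly chosen replicate propagates
      # parameter uncertainty into the patient-level simulation.
      event_times = rng.exponential(scale=rng.choice(boot_scales), size=1000)
      print(boot_scales.mean(), np.percentile(boot_scales, [2.5, 97.5]))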

  14. The effects of drugs on human models of emotional processing: an account of antidepressant drug treatment.

    Science.gov (United States)

    Pringle, Abbie; Harmer, Catherine J

    2015-12-01

    Human models of emotional processing suggest that the direct effect of successful antidepressant drug treatment may be to modify biases in the processing of emotional information. Negative biases in emotional processing are documented in depression, and single or short-term dosing with conventional antidepressant drugs reverses these biases in depressed patients prior to any subjective change in mood. Antidepressant drug treatments also modulate emotional processing in healthy volunteers, which allows the consideration of the psychological effects of these drugs without the confound of changes in mood. As such, human models of emotional processing may prove to be useful for testing the efficacy of novel treatments and for matching treatments to individual patients or subgroups of patients.

  15. A communication model of shared decision making: accounting for cancer treatment decisions.

    Science.gov (United States)

    Siminoff, Laura A; Step, Mary M

    2005-07-01

    The authors present a communication model of shared decision making (CMSDM) that explicitly identifies the communication process as the vehicle for decision making in cancer treatment. In this view, decision making is necessarily a sociocommunicative process whereby people enter into a relationship, exchange information, establish preferences, and choose a course of action. The model derives from contemporary notions of behavioral decision making and ethical conceptions of the doctor-patient relationship. This article briefly reviews the theoretical approaches to decision making, notes deficiencies, and embeds a more socially based process into the dynamics of the physician-patient relationship, focusing on cancer treatment decisions. In the CMSDM, decisions depend on (a) antecedent factors that have potential to influence communication, (b) jointly constructed communication climate, and (c) treatment preferences established by the physician and the patient.

  16. Why does placing the question before an arithmetic word problem improve performance? A situation model account.

    Science.gov (United States)

    Thevenot, Catherine; Devidal, Michel; Barrouillet, Pierre; Fayol, Michel

    2007-01-01

    The aim of this paper is to investigate the controversial issue of the nature of the representation constructed by individuals to solve arithmetic word problems. More precisely, we consider the relevance of two different theories: the situation or mental model theory (Johnson-Laird, 1983; Reusser, 1989) and the schema theory (Kintsch & Greeno, 1985; Riley, Greeno, & Heller, 1983). Fourth-graders who differed in their mathematical skills were presented with problems that varied in difficulty and with the question either before or after the text. We obtained the classic effect of the position of the question, with better performance when the question was presented prior to the text. In addition, this effect was more marked in the case of children who had poorer mathematical skills and in the case of more difficult problems. We argue that this pattern of results is compatible only with the situation or mental model theory, and not with the schema theory.

  17. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including even determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect this data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
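
    To make the refraction contribution concrete, the following Python sketch evaluates one commonly used refraction estimate (Bennett's formula, scaled for pressure and temperature) at the horizon; this is an illustrative choice of refraction model, not necessarily the one adopted by the authors.

      import math

      def bennett_refraction_deg(apparent_alt_deg, pressure_hpa=1010.0, temp_c=10.0):
          """Approximate atmospheric refraction (degrees) via Bennett's formula,
          scaled for local pressure and temperature (illustrative assumption)."""
          h = apparent_alt_deg
          r_arcmin = 1.0 / math.tan(math.radians(h + 7.31 / (h + 4.4)))
          r_arcmin *= (pressure_hpa / 1010.0) * (283.0 / (273.0 + temp_c))
          return r_arcmin / 60.0

      # Altitude of the Sun's center at rise/set: ~16' semidiameter plus refraction at the horizon.
      semidiameter_deg = 16.0 / 60.0
      h0 = -(semidiameter_deg + bennett_refraction_deg(0.0))
      print(f"rise/set altitude h0 = {h0:.3f} deg")   # near the conventional -0.833 deg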

  18. Model application of Murabahah financing acknowledgement statement of Sharia accounting standard No 59 Year 2002

    Science.gov (United States)

    Muda, Iskandar; Panjaitan, Rohdearni; Erlina; Ginting, Syafruddin; Maksum, Azhar; Abubakar

    2018-03-01

    The purpose of this research is to observe a murabahah financing implementation model. Observations were made at one of the sharia banks going public in Indonesia. Implementation takes the form of financing granted with defined facilities and a maximum financing amount, so the provision of financing should be adjusted to the type, business conditions and business plans of the prospective mudharib. If the financing provided is too low, the mudharib's requirement does not reach the target and the financing is not refundable.

  19. Accounting for Model Uncertainties Using Reliability Methods - Application to Carbon Dioxide Geologic Sequestration System. Final Report

    International Nuclear Information System (INIS)

    Mok, Chin Man; Doughty, Christine; Zhang, Keni; Pruess, Karsten; Kiureghian, Armen; Zhang, Miao; Kaback, Dawn

    2010-01-01

    A new computer code, CALRELTOUGH, which uses reliability methods to incorporate parameter sensitivity and uncertainty analysis into subsurface flow and transport models, was developed by Geomatrix Consultants, Inc. in collaboration with Lawrence Berkeley National Laboratory and the University of California at Berkeley. The CALREL reliability code was developed at the University of California at Berkeley for geotechnical applications, and the TOUGH family of codes was developed at Lawrence Berkeley National Laboratory for subsurface flow and transport applications. The integration of the two codes provides a new approach to dealing with uncertainties in flow and transport modeling of the subsurface, such as those associated with hydrogeology parameters, boundary conditions, and initial conditions of subsurface flow and transport, using data from site characterization and monitoring for conditioning. The new code enables computation of the reliability of a system and of the components that make up the system, instead of calculating the complete probability distributions of model predictions at all locations at all times. The new CALRELTOUGH code has tremendous potential to advance subsurface understanding for a variety of applications including subsurface energy storage, nuclear waste disposal, carbon sequestration, extraction of natural resources, and environmental remediation. The new code was tested on a carbon sequestration problem as part of the Phase I project. Phase II was not awarded.
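
    The report does not reproduce its limit-state functions, so the following Python sketch only illustrates the general reliability idea that such a coupled code automates: propagate uncertain inputs through a (here purely hypothetical) limit-state function and summarize the result as a failure probability and a generalized reliability index.

      import numpy as np
      from statistics import NormalDist

      rng = np.random.default_rng(1)
      n = 100_000

      # Hypothetical uncertain inputs for a CO2 storage scenario.
      log_k = rng.normal(loc=-13.0, scale=0.5, size=n)    # log10 permeability [m^2]
      p_inj = rng.normal(loc=15.0, scale=1.0, size=n)     # injection overpressure [MPa]

      # Hypothetical limit-state function: g <= 0 is treated as "failure" (e.g., a leakage criterion).
      g = 18.0 - p_inj + 2.0 * (log_k + 13.0)

      pf = np.mean(g <= 0.0)                              # Monte Carlo failure probability
      beta = -NormalDist().inv_cdf(pf)                    # generalized reliability index
      print(f"P_f = {pf:.4f}, beta = {beta:.2f}")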

  20. An extended heterogeneous car-following model accounting for anticipation driving behavior and mixed maximum speeds

    Science.gov (United States)

    Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia

    2018-02-01

    The optimal driving speeds of different vehicles may be different for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed vmax is an important parameter determining the optimal driving speed. A vehicle with a higher maximum speed is more willing to drive faster than one with a lower maximum speed in a similar situation. By incorporating the anticipation driving behavior of relative velocity and mixed maximum speeds of different percentages into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the cooperation between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results all demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, and that increasing the lowest value in the mixed maximum speeds will result in more instability, but increasing the value or proportion of the part that already has the higher maximum speed will cause different stabilities at high or low traffic densities.
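
    A minimal Python sketch of this kind of extended optimal-velocity update is given below; the tanh-shaped OV function, the sensitivity a, the anticipation coefficient λ (lam) and the two vehicle classes are illustrative assumptions, not the calibrated values of the paper.

      import numpy as np

      def optimal_velocity(headway, v_max, h_c=4.0):
          """OV function of tanh form commonly used in OV-type models (illustrative)."""
          return 0.5 * v_max * (np.tanh(headway - h_c) + np.tanh(h_c))

      def step(x, v, v_max, a=1.0, lam=0.3, dt=0.1, road_length=400.0):
          """One Euler step on a ring road: dv/dt = a[V(headway) - v] + lam * (relative velocity)."""
          headway = (np.roll(x, -1) - x) % road_length          # distance to the vehicle ahead
          dv = np.roll(v, -1) - v                               # relative velocity (anticipation term)
          acc = a * (optimal_velocity(headway, v_max) - v) + lam * dv
          return (x + v * dt) % road_length, v + acc * dt

      n = 50
      x = np.linspace(0.0, 400.0, n, endpoint=False)
      v = np.full(n, 1.0)
      v_max = np.where(np.arange(n) % 2 == 0, 2.0, 3.0)         # mixed maximum speeds (two classes)
      for _ in range(2000):
          x, v = step(x, v, v_max)
      print(v.min(), v.max())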

  1. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    NARCIS (Netherlands)

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts,

  2. Tree biomass in the Swiss landscape: nationwide modelling for improved accounting for forest and non-forest trees.

    Science.gov (United States)

    Price, B; Gomez, A; Mathys, L; Gardi, O; Schellenberger, A; Ginzler, C; Thürig, E

    2017-03-01

    Trees outside forest (TOF) can perform a variety of social, economic and ecological functions including carbon sequestration. However, detailed quantification of tree biomass is usually limited to forest areas. Taking advantage of structural information available from stereo aerial imagery and airborne laser scanning (ALS), this research models tree biomass using national forest inventory data and linear least-square regression and applies the model both inside and outside of forest to create a nationwide model for tree biomass (above ground and below ground). Validation of the tree biomass model against TOF data within settlement areas shows relatively low model performance (R² of 0.44) but still a considerable improvement on current biomass estimates used for greenhouse gas inventory and carbon accounting. We demonstrate an efficient and easily implementable approach to modelling tree biomass across a large heterogeneous nationwide area. The model offers significant opportunity for improved estimates on land use combination categories (CC) where tree biomass has either not been included or only roughly estimated until now. The ALS biomass model also offers the advantage of providing greater spatial resolution and greater within-CC spatial variability compared to the current nationwide estimates.
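
    The regression step described above amounts to an ordinary least-squares fit of plot biomass on remotely sensed canopy metrics; the short Python sketch below uses synthetic data and hypothetical predictors (ALS mean canopy height and canopy cover) purely to illustrate the mechanics.

      import numpy as np

      # Hypothetical training data: plot biomass (t/ha) and ALS canopy metrics per plot.
      rng = np.random.default_rng(42)
      mean_height = rng.uniform(2.0, 30.0, 300)        # ALS mean canopy height [m]
      cover = rng.uniform(0.1, 1.0, 300)               # canopy cover fraction
      biomass = 8.0 * mean_height + 60.0 * cover + rng.normal(0.0, 25.0, 300)

      X = np.column_stack([np.ones_like(mean_height), mean_height, cover])
      coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)

      pred = X @ coef
      ss_res = np.sum((biomass - pred) ** 2)
      ss_tot = np.sum((biomass - biomass.mean()) ** 2)
      print("coefficients:", coef, " R^2:", 1.0 - ss_res / ss_tot)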

  3. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    Science.gov (United States)

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6  μm in diameter) mounted in a sample with RI variation. In the investigated study, improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used over restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with a 42% improved accuracy over the current PSF model prediction.

  4. Covariance-based synaptic plasticity in an attractor network model accounts for fast adaptation in free operant learning.

    Science.gov (United States)

    Neiman, Tal; Loewenstein, Yonatan

    2013-01-23

    In free operant experiments, subjects alternate at will between targets that yield rewards stochastically. Behavior in these experiments is typically characterized by (1) an exponential distribution of stay durations, (2) matching of the relative time spent at a target to its relative share of the total number of rewards, and (3) adaptation after a change in the reward rates that can be very fast. The neural mechanism underlying these regularities is largely unknown. Moreover, current decision-making neural network models typically aim at explaining behavior in discrete-time experiments in which a single decision is made once in every trial, making these models hard to extend to the more natural case of free operant decisions. Here we show that a model based on attractor dynamics, in which transitions are induced by noise and preference is formed via covariance-based synaptic plasticity, can account for the characteristics of behavior in free operant experiments. We compare a specific instance of such a model, in which two recurrently excited populations of neurons compete for higher activity, to the behavior of rats responding on two levers for rewarding brain stimulation on a concurrent variable interval reward schedule (Gallistel et al., 2001). We show that the model is consistent with the rats' behavior, and in particular, with the observed fast adaptation to matching behavior. Further, we show that the neural model can be reduced to a behavioral model, and we use this model to deduce a novel "conservation law," which is consistent with the behavior of the rats.
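
    As a caricature of the covariance-based learning rule at the behavioral level (not the full attractor network), the Python sketch below lets two competing "populations" bias choices and updates their weights in proportion to the joint deviation of activity and reward from baseline values; all constants and reward probabilities are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      n_trials, eta = 5000, 0.01
      w = np.zeros(2)                                   # weights biasing the two competing populations

      for _ in range(n_trials):
          drive = w + rng.normal(0.0, 1.0, 2)           # noisy activity; the more active population wins
          choice = int(drive[1] > drive[0])
          reward = float(rng.random() < (0.6 if choice == 0 else 0.2))   # illustrative reward probabilities
          activity = np.zeros(2)
          activity[choice] = 1.0
          # Covariance-style update: weight change proportional to the product of
          # (reward - baseline reward) and (activity - baseline activity); baselines fixed for simplicity.
          w += eta * (reward - 0.4) * (activity - 0.5)

      print("weights:", w)      # the richer target (index 0) should end up with the larger weight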

  5. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards; Accountability e regime de nao proliferacao nuclear: uma avaliacao do modelo de vigilancia mutua brasileiro-argentina de salvaguardas nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Xavier, Roberto Salles

    2014-08-01

    Accountability regimes, global governance organizations, and the institutional arrangements for the global governance of nuclear non-proliferation and of the Brazilian-Argentine model of Mutual Vigilance of Nuclear Safeguards are the subject of this research. The starting point is the importance of the institutional model of global governance for the effective control of non-proliferation of nuclear weapons. In this context, the research investigates how the current arrangements for international nuclear non-proliferation are structured and how the Brazilian-Argentine model of Mutual Vigilance of Nuclear Safeguards performs in relation to accountability regimes of global governance. To that end, the current literature on three theoretical dimensions was reviewed: accountability, global governance and global governance organizations. The research method used was the case study, and the data were analyzed through content analysis. The results made it possible to establish an evaluation model based on accountability mechanisms; to assess how the Brazilian-Argentine model of Mutual Vigilance of Nuclear Safeguards behaves against the proposed accountability regime; and to measure the degree to which regional arrangements that work with systems of global governance can strengthen these international systems. (author)

  6. Securing radioactive sources through a proper management

    International Nuclear Information System (INIS)

    Mourao, Rogerio Pimenta

    2009-01-01

    The safety and security of radioactive sources have become a hot issue for the nuclear community in the last two decades. The Goiania accident in Brazil and the September 11th attack alerted governments and nuclear agencies around the world to the vulnerability of the thousands of disused radioactive sources ill-stored or misplaced in a myriad of ways, especially in countries with less developed infrastructure. Once the threat of environmental contamination or malevolent use of these sources became clear, the International Atomic Energy Agency and the American Government spawned initiatives to reduce this risk, basically stimulating the proper conditioning of the sources and, whenever possible, seeking their repatriation to the countries of origin. Since 1996 Brazil has been participating actively in this effort, having carried out hands-on operations to condition old radium sources in Latin American and Caribbean countries and also repatriated its own neutron sources to the United States. A new operation is presently being organized: the reconditioning of the high-activity sources contained in teletherapy units stored in the country using a mobile hot cell developed in South Africa. Also an agreement is being negotiated between the US National Nuclear Security Agency and the Brazilian CNEN to repatriate hundreds of radioactive gauges presently stored at CNEN's source storage buildings. (author)

  7. Modelling and experimental validation for off-design performance of the helical heat exchanger with LMTD correction taken into account

    Energy Technology Data Exchange (ETDEWEB)

    Phu, Nguyen Minh; Trinh, Nguyen Thi Minh [Vietnam National University, Ho Chi Minh City (Viet Nam)

    2016-07-15

    Today the helical coil heat exchanger is being employed widely due to its dominant advantages. In this study, a mathematical model was established to predict off-design performance of the helical heat exchanger. The model was based on the LMTD and ε-NTU methods, where an LMTD correction factor was taken into account to increase accuracy. An experimental apparatus was set up to validate the model. Results showed that errors of thermal duty, outlet hot fluid temperature, outlet cold fluid temperature, shell-side pressure drop, and tube-side pressure drop were respectively ±5%, ±1%, ±1%, ±5% and ±2%. Diagrams of dimensionless operating parameters and a regression function were also presented as design maps, a fast calculator for use in design and operation of the exchanger. The study is expected to be a good tool to estimate off-design conditions of single-phase helical heat exchangers.
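
    The rating calculation behind such a model is the corrected-LMTD relation Q = U·A·F·ΔT_lm; a minimal Python sketch follows, where the correction factor F and the operating temperatures are assumed values, since how F behaves off-design for the helical geometry is precisely what the paper develops.

      import math

      def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
          """Log-mean temperature difference for a counter-flow arrangement."""
          dt1 = t_hot_in - t_cold_out
          dt2 = t_hot_out - t_cold_in
          if abs(dt1 - dt2) < 1e-9:
              return dt1
          return (dt1 - dt2) / math.log(dt1 / dt2)

      # Illustrative operating point (temperatures in deg C, U in W/m2K, A in m2).
      lmtd = lmtd_counterflow(80.0, 50.0, 20.0, 40.0)
      U, A, F = 850.0, 2.5, 0.93        # F: assumed LMTD correction factor
      Q = U * A * F * lmtd              # thermal duty [W]
      print(f"LMTD = {lmtd:.2f} K, duty = {Q/1000:.1f} kW")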

  8. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Science.gov (United States)

    Liang, Peixin; Chai, Feng; Bi, Yunlong; Pei, Yulong; Cheng, Shukang

    2016-11-01

    Based on subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of magnetic material, the magnetic bridges are equivalent to fan-shaped saturation regions. For getting standard boundary conditions, a lumped parameter magnetic circuit model and iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression of each subdomain is derived. The analytical results of the magnetic field distribution, Back-EMF and torque are verified by finite element method, which confirms the validity of the proposed model for facilitating the motor design and optimization.

  9. What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman

    2018-01-01

    Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken from Mount Wilson Observatory in California, University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data using their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.

  10. Optimization model of energy mix taking into account the environmental impact

    International Nuclear Information System (INIS)

    Gruenwald, O.; Oprea, D.

    2012-01-01

    At present, the energy system in the Czech Republic needs to resolve some important issues regarding limited fossil resources, greater efficiency in producing electrical energy, and reducing emission levels of pollutants. These problems can be addressed only by formulating and implementing an energy mix that is rational, reliable, sustainable and competitive. The aim of this article is to find a new way of determining an optimal mix for the energy system in the Czech Republic. To achieve this aim, a linear optimization model comprising several economic, environmental and technical aspects will be applied. (Authors)
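
    A minimal Python sketch of such a linear optimization is shown below, using scipy.optimize.linprog with entirely hypothetical technologies, costs, capacities and an emissions cap; it only illustrates the structure of the problem, not the Czech data.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical technologies: coal, gas, nuclear, renewables.
      cost = np.array([30.0, 55.0, 45.0, 70.0])             # cost per unit of electricity produced
      co2 = np.array([0.95, 0.40, 0.0, 0.0])                # emissions per unit produced
      demand = 80.0                                          # total production required
      bounds = [(0, 40.0), (0, 30.0), (0, 35.0), (0, 25.0)]  # capacity limits per technology

      res = linprog(
          c=cost,                                            # minimize total production cost
          A_ub=[co2], b_ub=[25.0],                           # emissions cap
          A_eq=[[1.0, 1.0, 1.0, 1.0]], b_eq=[demand],        # meet demand exactly
          bounds=bounds,
          method="highs",
      )
      print("mix:", res.x, "cost:", res.fun)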

  11. Making Collaborative Innovation Accountable

    DEFF Research Database (Denmark)

    Sørensen, Eva

    The public sector is increasingly expected to be innovative, but the price for a more innovative public sector might be that it becomes difficult to hold public authorities to account for their actions. The article explores the tensions between innovative and accountable governance, describes...... the foundation for these tensions in different accountability models, and suggests directions to take in analyzing the accountability of collaborative innovation processes....

  12. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    Science.gov (United States)

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
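
    To make the detection-probability point concrete, the short Python sketch below shows how per-visit detection probabilities (the values here are illustrative, not the study's estimates) translate into the chance that a species present at a site is never detected across a season of surveys.

      import numpy as np

      def prob_never_detected(p, n_visits):
          """Probability that a species present at a site is missed on every one of n_visits surveys."""
          return (1.0 - p) ** n_visits

      # Illustrative per-visit detection probabilities.
      p_spotted_alone = 0.45          # spotted owl detection when the barred owl is absent
      p_spotted_with_barred = 0.30    # lower detection when the barred owl is present (congener effect)
      p_barred = 0.15                 # barred owl detection (lower than the spotted owl)

      for n in (3, 5, 8):
          print(n,
                round(prob_never_detected(p_spotted_with_barred, n), 3),
                round(prob_never_detected(p_barred, n), 3))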

  13. Singing with yourself: evidence for an inverse modeling account of poor-pitch singing.

    Science.gov (United States)

    Pfordresher, Peter Q; Mantell, James T

    2014-05-01

    Singing is a ubiquitous and culturally significant activity that humans engage in from an early age. Nevertheless, some individuals - termed poor-pitch singers - are unable to match target pitches within a musical semitone while singing. In the experiments reported here, we tested whether poor-pitch singing deficits would be reduced when individuals imitate recordings of themselves as opposed to recordings of other individuals. This prediction was based on the hypothesis that poor-pitch singers have not developed an abstract "inverse model" of the auditory-vocal system and instead must rely on sensorimotor associations that they have experienced directly, which is true for sequences an individual has already produced. In three experiments, participants, both accurate and poor-pitch singers, were better able to imitate sung recordings of themselves than sung recordings of other singers. However, this self-advantage was enhanced for poor-pitch singers. These effects were not a byproduct of self-recognition (Experiment 1), vocal timbre (Experiment 2), or the absolute pitch of target recordings (i.e., the advantage remains when recordings are transposed, Experiment 3). Results support the conceptualization of poor-pitch singing as an imitative deficit resulting from a deficient inverse model of the auditory-vocal system with respect to pitch. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Accounting emergy flows to determine the best production model of a coffee plantation

    International Nuclear Information System (INIS)

    Giannetti, B.F.; Ogura, Y.; Bonilla, S.H.; Almeida, C.M.V.B.

    2011-01-01

    Cerrado, a savannah region, is Brazil's second largest ecosystem after the Amazon rainforest and is also threatened with imminent destruction. In the present study emergy synthesis was applied to assess the environmental performance of a coffee farm located in Coromandel, Minas Gerais, in the Brazilian Cerrado. The effects of land use on sustainability were evaluated by comparing the emergy indices along ten years in order to assess the energy flows driving the production process, and to determine the best production model combining productivity and environmental performance. The emergy indices are presented as a function of the annual crop. Results show that Santo Inacio farm should produce approximately 20 bags of green coffee per hectare to accomplish its best performance regarding both the production efficiency and the environment. The evaluation of coffee trade complements those obtained by contrasting productivity and environmental performance, and despite of the market prices variation, the optimum interval for Santo Inacio's farm is between 10 and 25 coffee bags/ha. - Highlights: → Emergy synthesis is used to assess the environmental performance of a coffee farm in Brazil. → The effects of land use on sustainability were evaluated along ten years. → The energy flows driving the production process were assessed. → The best production model combining productivity and environmental performance was determined.
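
    For reference, the emergy indices mentioned above are typically computed from three aggregate flows, as in the Python sketch below; the flow values are hypothetical and the index definitions follow the usual emergy-accounting conventions, which may differ in detail from the paper's tables.

      # Aggregate emergy flows in solar emjoules per hectare per year (hypothetical values).
      R = 2.0e15   # local renewable inputs
      N = 0.3e15   # local non-renewable inputs (e.g., soil loss)
      F = 1.2e15   # purchased inputs (fertilizer, fuel, labor, services)

      Y = R + N + F                    # total emergy supporting the yield
      EYR = Y / F                      # emergy yield ratio
      ELR = (N + F) / R                # environmental loading ratio
      ESI = EYR / ELR                  # emergy sustainability index
      print(f"EYR={EYR:.2f}  ELR={ELR:.2f}  ESI={ESI:.2f}")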

  15. Accounting emergy flows to determine the best production model of a coffee plantation

    Energy Technology Data Exchange (ETDEWEB)

    Giannetti, B.F.; Ogura, Y.; Bonilla, S.H. [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil); Almeida, C.M.V.B., E-mail: cmvbag@terra.com.br [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil)

    2011-11-15

    Cerrado, a savannah region, is Brazil's second largest ecosystem after the Amazon rainforest and is also threatened with imminent destruction. In the present study emergy synthesis was applied to assess the environmental performance of a coffee farm located in Coromandel, Minas Gerais, in the Brazilian Cerrado. The effects of land use on sustainability were evaluated by comparing the emergy indices over ten years in order to assess the energy flows driving the production process, and to determine the best production model combining productivity and environmental performance. The emergy indices are presented as a function of the annual crop. Results show that Santo Inacio farm should produce approximately 20 bags of green coffee per hectare to accomplish its best performance regarding both production efficiency and the environment. The evaluation of the coffee trade complements the results obtained by contrasting productivity and environmental performance, and despite the variation in market prices, the optimum interval for Santo Inacio's farm is between 10 and 25 coffee bags/ha. - Highlights: > Emergy synthesis is used to assess the environmental performance of a coffee farm in Brazil. > The effects of land use on sustainability were evaluated over ten years. > The energy flows driving the production process were assessed. > The best production model combining productivity and environmental performance was determined.

  16. The Influence of Feedback on Task-Switching Performance: A Drift Diffusion Modeling Account

    Directory of Open Access Journals (Sweden)

    Russell Cohen Hoffing

    2018-02-01

    Full Text Available Task-switching is an important cognitive skill that facilitates our ability to choose appropriate behavior in a varied and changing environment. Task-switching training studies have sought to improve this ability by practicing switching between multiple tasks. However, an efficacious training paradigm has been difficult to develop in part due to findings that small differences in task parameters influence switching behavior in a non-trivial manner. Here, for the first time we employ the Drift Diffusion Model (DDM) to understand the influence of feedback on task-switching and investigate how drift diffusion parameters change over the course of task switch training. We trained 316 participants on a simple task where they alternated sorting stimuli by color or by shape. Feedback differed in six different ways between subjects groups, ranging from No Feedback (NFB) to a variety of manipulations addressing trial-wise vs. Block Feedback (BFB), rewards vs. punishments, payment bonuses and different payouts depending upon the trial type (switch/non-switch). While overall performance was found to be affected by feedback, no effect of feedback was found on task-switching learning. Drift Diffusion Modeling revealed that the reductions in reaction time (RT) switch cost over the course of training were driven by a continually decreasing decision boundary. Furthermore, feedback effects on RT switch cost were also driven by differences in decision boundary, but not in drift rate. These results reveal that participants systematically modified their task-switching performance without yielding an overall gain in performance.

  17. The Influence of Feedback on Task-Switching Performance: A Drift Diffusion Modeling Account.

    Science.gov (United States)

    Cohen Hoffing, Russell; Karvelis, Povilas; Rupprechter, Samuel; Seriès, Peggy; Seitz, Aaron R

    2018-01-01

    Task-switching is an important cognitive skill that facilitates our ability to choose appropriate behavior in a varied and changing environment. Task-switching training studies have sought to improve this ability by practicing switching between multiple tasks. However, an efficacious training paradigm has been difficult to develop in part due to findings that small differences in task parameters influence switching behavior in a non-trivial manner. Here, for the first time we employ the Drift Diffusion Model (DDM) to understand the influence of feedback on task-switching and investigate how drift diffusion parameters change over the course of task switch training. We trained 316 participants on a simple task where they alternated sorting stimuli by color or by shape. Feedback differed in six different ways between subjects groups, ranging from No Feedback (NFB) to a variety of manipulations addressing trial-wise vs. Block Feedback (BFB), rewards vs. punishments, payment bonuses and different payouts depending upon the trial type (switch/non-switch). While overall performance was found to be affected by feedback, no effect of feedback was found on task-switching learning. Drift Diffusion Modeling revealed that the reductions in reaction time (RT) switch cost over the course of training were driven by a continually decreasing decision boundary. Furthermore, feedback effects on RT switch cost were also driven by differences in decision boundary, but not in drift rate. These results reveal that participants systematically modified their task-switching performance without yielding an overall gain in performance.
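    To make the decision-boundary interpretation concrete, the sketch below simulates a two-boundary drift diffusion process with Euler-Maruyama steps; the drift, boundary and non-decision-time values are hypothetical and are not the fitted parameters from this study. Lowering `boundary` shortens the simulated response times at the cost of accuracy, which is the trade-off invoked to explain the shrinking RT switch cost.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt=0.3, noise=1.0, dt=0.001, rng=None):
    """Simulate one trial of a two-boundary drift diffusion process.
    Evidence starts midway between 0 and `boundary`; returns (RT, correct)."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = boundary / 2.0, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t, x >= boundary

rng = np.random.default_rng(0)
trials = [simulate_ddm(drift=1.5, boundary=1.2, rng=rng) for _ in range(2000)]
rts, correct = zip(*trials)
print(f"mean RT = {np.mean(rts):.3f} s, accuracy = {np.mean(correct):.2f}")
```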

  18. One-dimensional model of oxygen transport impedance accounting for convection perpendicular to the electrode

    Energy Technology Data Exchange (ETDEWEB)

    Mainka, J. [Laboratorio Nacional de Computacao Cientifica (LNCC), CMC 6097, Av. Getulio Vargas 333, 25651-075 Petropolis, RJ, Caixa Postal 95113 (Brazil); Maranzana, G.; Thomas, A.; Dillet, J.; Didierjean, S.; Lottin, O. [Laboratoire d' Energetique et de Mecanique Theorique et Appliquee (LEMTA), Universite de Lorraine, 2, avenue de la Foret de Haye, 54504 Vandoeuvre-les-Nancy (France); LEMTA, CNRS, 2, avenue de la Foret de Haye, 54504 Vandoeuvre-les-Nancy (France)

    2012-10-15

    A one-dimensional (1D) model of oxygen transport in the diffusion media of proton exchange membrane fuel cells (PEMFC) is presented, which considers convection perpendicular to the electrode in addition to diffusion. The resulting analytical expression of the convecto-diffusive impedance is obtained using a convection-diffusion equation instead of a diffusion equation as in the case of the classical Warburg impedance. The main hypothesis of the model is that the convective flux is generated by the evacuation of water produced at the cathode, which flows through the porous media in the vapor phase. This allows the convective flux velocity to be expressed as a function of the current density and of the water transport coefficient α (the fraction of water being evacuated at the cathode outlet). The resulting 1D oxygen transport impedance neglects processes occurring in the direction parallel to the electrode that could have a significant impact on the cell impedance, like gas consumption or concentration oscillations induced by the measuring signal. However, it enables us to estimate the impact of convection perpendicular to the electrode on PEMFC impedance spectra and to determine under which conditions the approximation of a purely diffusive oxygen transport is valid. Experimental observations confirm the numerical results. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
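    For reference, the purely diffusive limit that the convecto-diffusive expression generalizes is the classical finite-length Warburg impedance. The sketch below evaluates it for a transmissive boundary; `R_d` and `tau_d` are hypothetical values, and the convective correction described in the record is not included.

```python
import numpy as np

def finite_length_warburg(freq_hz, R_d, tau_d):
    """Classical finite-length (transmissive) Warburg impedance,
    Z(w) = R_d * tanh(sqrt(j*w*tau_d)) / sqrt(j*w*tau_d),
    with R_d the diffusion resistance and tau_d = L**2 / D_eff."""
    s = np.sqrt(1j * 2.0 * np.pi * freq_hz * tau_d)
    return R_d * np.tanh(s) / s

freqs = np.logspace(-1, 3, 5)          # Hz
print(finite_length_warburg(freqs, R_d=0.15, tau_d=0.05))
```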

  19. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    Energy Technology Data Exchange (ETDEWEB)

    Beckon, William N., E-mail: William_Beckon@fws.gov

    2016-07-15

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
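    The lag-selection idea can be illustrated with a brute-force grid search: shift the exposure series by every candidate lag, regress tissue concentration on the shifted series, and keep the lag with the best r². The synthetic series and the true lag of 9 steps below are invented for illustration.

```python
import numpy as np

def best_lag(exposure, tissue, max_lag):
    """Grid-search the lag (in samples) that maximizes the r**2 of a simple
    linear regression of tissue concentration on lagged water concentration."""
    results = []
    for lag in range(max_lag + 1):
        x = exposure[: len(exposure) - lag] if lag else exposure
        y = tissue[lag:]
        r = np.corrcoef(x, y)[0, 1]
        results.append((lag, r * r))
    return max(results, key=lambda t: t[1])

rng = np.random.default_rng(1)
water = rng.lognormal(size=120)                            # synthetic exposure series
fish = np.roll(water, 9) * 2.0 + rng.normal(0, 0.1, 120)   # built with a true lag of 9 steps
print(best_lag(water, fish, max_lag=24))                   # -> lag 9 with r**2 close to 1
```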

  20. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    International Nuclear Information System (INIS)

    Beckon, William N.

    2016-01-01

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).

  1. Where's the problem? Considering Laing and Esterson's account of schizophrenia, social models of disability, and extended mental disorder.

    Science.gov (United States)

    Cooper, Rachel

    2017-08-01

    In this article, I compare and evaluate R. D. Laing and A. Esterson's account of schizophrenia as developed in Sanity, Madness and the Family (1964), social models of disability, and accounts of extended mental disorder. These accounts claim that some putative disorders (schizophrenia, disability, certain mental disorders) should not be thought of as reflecting biological or psychological dysfunction within the afflicted individual, but instead as external problems (to be located in the family, or in the material and social environment). In this article, I consider the grounds on which such claims might be supported. I argue that problems should not be located within an individual putative patient in cases where there is some acceptable test environment in which there is no problem. A number of cases where such an argument can show that there is no internal disorder are discussed. I argue, however, that Laing and Esterson's argument-that schizophrenia is not within diagnosed patients-does not work. The problem with their argument is that they fail to show that the diagnosed women in their study function adequately in any environment.

  2. Modeling liquid-vapor equilibria with an equation of state taking into account dipolar interactions and association by hydrogen bonding

    International Nuclear Information System (INIS)

    Perfetti, E.

    2006-11-01

    Modelling fluid-rock interactions as well as mixing and unmixing phenomena in geological processes requires robust equations of state (EOS) which must be applicable to systems containing water and gases over a broad range of temperatures and pressures. Cubic equations of state based on the Van der Waals theory (e. g. Soave-Redlich-Kwong or Peng-Robinson) allow simple modelling from the critical parameters of the studied fluid components. However, the accuracy of such equations becomes poor when water is a major component of the fluid, since neither association through hydrogen bonding nor dipolar interactions are accounted for. The Helmholtz energy of a fluid may be written as the sum of different energetic contributions by factorization of the partition function. The model developed in this thesis for pure H2O and H2S considers three contributions. The first contribution represents the reference Van der Waals fluid, which is modelled by the SRK cubic EOS. The second contribution accounts for association through hydrogen bonding and is modelled by a term derived from the Cubic Plus Association (CPA) theory. The third contribution corresponds to the dipolar interactions and is modelled by the Mean Spherical Approximation (MSA) theory. The resulting CPA-MSA equation has six adjustable parameters, of which three represent physical terms whose values are close to their experimental counterparts. This equation results in a better reproduction of the thermodynamic properties of pure water along the vapour-liquid equilibrium than the classical CPA equation. In addition, extrapolation to higher temperatures and pressures is satisfactory. Similarly, taking into account dipolar interactions together with the SRK cubic equation of state for calculating the molar volume of H2S as a function of pressure and temperature results in a significant improvement compared to the SRK equation alone. Simple mixing rules between dipolar molecules are proposed to model the H2O-H2S
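    The reference contribution mentioned above is the standard SRK cubic EOS, which can be written in terms of the compressibility factor as Z³ − Z² + (A − B − B²)Z − AB = 0. The sketch below solves that cubic for given critical constants; the water-vapour conditions in the example are illustrative, and the association and dipolar terms of the CPA-MSA model are not included.

```python
import numpy as np

R_GAS = 8.314  # J mol-1 K-1

def srk_z_factors(T, P, Tc, Pc, omega):
    """Real, positive compressibility factors from the Soave-Redlich-Kwong EOS:
    Z**3 - Z**2 + (A - B - B**2) Z - A*B = 0."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1.0 + m * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.42748 * R_GAS ** 2 * Tc ** 2 / Pc * alpha
    b = 0.08664 * R_GAS * Tc / Pc
    A = a * P / (R_GAS * T) ** 2
    B = b * P / (R_GAS * T)
    roots = np.roots([1.0, -1.0, A - B - B ** 2, -A * B])
    return sorted(z.real for z in roots if abs(z.imag) < 1e-10 and z.real > 0)

# Water vapour at 500 K and 1 MPa (Tc = 647.1 K, Pc = 22.064 MPa, omega = 0.345).
print(srk_z_factors(T=500.0, P=1.0e6, Tc=647.1, Pc=22.064e6, omega=0.345))
```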

  3. Modelling the behaviour of uranium-series radionuclides in soils and plants taking into account seasonal variations in soil hydrology

    International Nuclear Information System (INIS)

    Pérez-Sánchez, D.; Thorne, M.C.

    2014-01-01

    In a previous paper, a mathematical model for the behaviour of Se-79 in soils and plants was described. Subsequently, a review has been published relating to the behaviour of U-238 series radionuclides in soils and plants. Here, we bring together those two strands of work to describe a new mathematical model of the behaviour of U-238 series radionuclides entering soils in solution and their uptake by plants. Initial studies with the model that are reported here demonstrate that it is a powerful tool for exploring the behaviour of this decay chain or subcomponents of it in soil-plant systems under different hydrological regimes. In particular, it permits studies of the degree to which secular equilibrium assumptions are appropriate when modelling this decay chain. Further studies will be undertaken and reported separately examining sensitivities of model results to input parameter values and also applying the model to sites contaminated with U-238 series radionuclides. - Highlights: • Kinetic model of radionuclide transport in soils and uptake by plants. • Takes soil hydrology and redox conditions into account. • Applicable to the whole U-238 chain, including Rn-222, Pb-210 and Po-210. • Demonstrates intra-season and inter-season variability on timescales up to thousands of years
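    The secular-equilibrium question can be examined with the chain-decay kinetics alone. The sketch below integrates a three-member sub-chain (U-234 → Th-230 → Ra-226, using published half-lives) with SciPy; the soil-hydrology, redox and plant-uptake terms of the full model are not represented.

```python
import numpy as np
from scipy.integrate import solve_ivp

def decay_chain_rhs(t, N, lambdas):
    """Bateman-type kinetics for a linear decay chain:
    dN1/dt = -l1*N1,  dNi/dt = l(i-1)*N(i-1) - li*Ni."""
    dN = -lambdas * N
    dN[1:] += lambdas[:-1] * N[:-1]
    return dN

# Three-member sub-chain U-234 -> Th-230 -> Ra-226 (half-lives in years).
half_lives = np.array([2.455e5, 7.538e4, 1600.0])
lambdas = np.log(2.0) / half_lives
N0 = np.array([1.0, 0.0, 0.0])          # start with the parent only

sol = solve_ivp(decay_chain_rhs, (0.0, 1.0e6), N0, args=(lambdas,),
                t_eval=[1.0e4, 1.0e5, 1.0e6], rtol=1e-8)
print(sol.y[:, -1])   # inventories after 1 Myr; activity ratios approach secular equilibrium
```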

  4. Photoproduction of pions on nucleons in the chiral bag model with account of recoil nucleon motion effects

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Kanokov, Z.; Musakhanov, M.M.; Rakhimov, A.M.

    1989-01-01

    Pion production on a nucleon is studied in the chiral bag model (CBM). A CBM version is investigated in which the pions penetrate into the bag and interact with quarks in a pseudovector way throughout the entire volume. Charged pion photoproduction amplitudes are found taking into account the recoil nucleon motion effects. Angular and energy distributions of charged pions, polarization of the recoil nucleon, and multipoles are calculated. The recoil effects are shown to give an additional contribution to the static approximation of the order of 10-20%. At a bag radius value of R = 1 fm the calculations are consistent with the experimental data

  5. A two-phase moisture transport model accounting for sorption hysteresis in layered porous building constructions

    DEFF Research Database (Denmark)

    Johannesson, Björn; Janz, Mårten

    2009-01-01

    Building constructions most commonly consist of layered porous materials, such as masonry on bricks. The moisture distribution and its variations due to changes in the surrounding environment are of special interest in such layered constructions, since the different materials adsorb different amounts of water. The model is developed by carefully examining the mass balance postulates for the two considered constituents together with appropriate and suitable constitutive assumptions. A test example is solved by using an implemented implicit finite element code which uses a modified Newton-Raphson scheme to tackle

  6. Simple model for taking into account the effects of plasma screening in thermonuclear reactions

    International Nuclear Information System (INIS)

    Shalybkov, D.A.; Yakovlev, D.G.

    1988-01-01

    In the Thomas-Fermi model of high-density matter, an analytic calculation is made of the factor by which the rate of thermonuclear reactions is enhanced by the effects of plasma screening in a degenerate, weakly non-ideal electron gas and a strongly non-ideal two-component ion liquid with large ion charge. The regions of densities and temperatures in which screening due to the compressibility of the electron gas plays an important part are found. It is noted that the screening due to this compressibility may be influenced by strong magnetic fields B ≈ 10^12-10^13 G, which quantize the motion of the electrons and change the electron charge screening length in the plasma. The results can be used for the degenerate cores of white dwarfs and the shells of neutron stars

  7. Model of investment appraisal of high-rise construction with account of cost of land resources

    Science.gov (United States)

    Okolelova, Ella; Shibaeva, Marina; Trukhina, Natalya

    2018-03-01

    The article considers the problems and potential of high-rise construction as part of global urbanization. The results of theoretical and practical studies on the appraisal of investments in high-rise construction are provided. High-rise construction has a number of apparent upsides under modern conditions of megapolis development, and primarily it is economically efficient. Amid a serious lack of construction sites, skyscrapers successfully meet the need for manufacturing, office and living premises. Nevertheless, there are plenty of issues related to high-rise construction, and only thorough scrutiny of them allows the real economic efficiency of this branch to be estimated. The article focuses on the question of the economic efficiency of high-rise construction. The suggested model allows the parameters of a facility under construction to be adjusted, setting the market value as well as the coefficient for appreciation of the construction net cost, which depends on the number of storeys, in the form of a function or discrete values.

  8. Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

    KAUST Repository

    De La Garza Martinez, Pablo

    2016-05-01

    Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility that fluids remain trapped inside the pore space. In this work I proposed a trapping rule which uses information from neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not degrade the accuracy of the results. Additionally, I included spatial correlation to generate the pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values for irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm from Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
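    A neighbour-based trapping check of the kind described can be sketched as below: a defending-phase pore is flagged as trapped as soon as every immediate neighbour is invaded or already trapped, avoiding a global connectivity search to the outlet. This is a minimal sketch in the spirit of such a rule, using networkx and a toy lattice, not the thesis's exact implementation.

```python
import networkx as nx

def update_trapping(network, invaded):
    """Mark a defending-phase pore as trapped when every immediate neighbour
    is either invaded or already trapped, i.e. no local escape route remains."""
    for pore in network.nodes:
        if pore in invaded or network.nodes[pore].get("trapped", False):
            continue
        neighbours = list(network.neighbors(pore))
        if neighbours and all(
            n in invaded or network.nodes[n].get("trapped", False)
            for n in neighbours
        ):
            network.nodes[pore]["trapped"] = True

# Tiny 3x3 lattice example: invading the two neighbours of the corner pore (0, 0)
# cuts it off from the rest of the network.
g = nx.grid_2d_graph(3, 3)
invaded = {(0, 1), (1, 0)}
update_trapping(g, invaded)
print([p for p in g.nodes if g.nodes[p].get("trapped")])   # -> [(0, 0)]
```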

  9. Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

    KAUST Repository

    De La Garza Martinez, Pablo

    2016-01-01

    Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility that fluids remain trapped inside the pore space. In this work I proposed a trapping rule which uses information from neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not degrade the accuracy of the results. Additionally, I included spatial correlation to generate the pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values for irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm from Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.

  10. On the influence of debris in glacier melt modelling: a new temperature-index model accounting for the debris thickness feedback

    Science.gov (United States)

    Carenzo, Marco; Mabillard, Johan; Pellicciotti, Francesca; Reid, Tim; Brock, Ben; Burlando, Paolo

    2013-04-01

    The increase of rockfalls from the surrounding slopes and of englacial melt-out material has led to an increase of the debris cover extent on Alpine glaciers. In recent years, distributed debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya. Some of the input data, such as wind or temperature, are also difficult to extrapolate from station measurements. Due to their lower data requirement, empirical models have been used in glacier melt modelling. However, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of debris thickness on melt. In this paper, we present a new temperature-index model accounting for the debris thickness feedback in the computation of melt rates at the debris-ice interface. The empirical parameters (temperature factor, shortwave radiation factor, and lag factor accounting for the energy transfer through the debris layer) are optimized at the point scale for several debris thicknesses against melt rates simulated by a physically-based debris energy balance model. The latter has been validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. The new model is developed on Miage Glacier, Italy, a debris-covered glacier whose ablation area is mantled in a near-continuous layer of rock. Subsequently, its transferability is tested on Haut Glacier d'Arolla, Switzerland, where debris is thinner and its extent has been observed to expand in recent decades. The results show that the performance of the new debris temperature-index model (DETI) in simulating the glacier melt rate at the point scale
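    The structure of such a debris-aware temperature-index model can be sketched as melt = TF(d)·T + SRF(d)·I above a temperature threshold, with the two factors and a lag interpolated from the debris thickness d. All parameter values in the table below are placeholders, not the calibrated DETI parameters.

```python
import numpy as np

def debris_ti_melt(temp_c, sw_in, debris_m, params, t_thresh=1.0):
    """Debris-aware temperature-index melt (mm w.e. per time step):
    M = TF(d)*T + SRF(d)*I when T > threshold, else 0, with TF, SRF and a
    lag factor interpolated from debris thickness d (placeholder tables)."""
    d_grid = params["thickness_m"]
    tf = np.interp(debris_m, d_grid, params["TF"])
    srf = np.interp(debris_m, d_grid, params["SRF"])
    lag = int(round(np.interp(debris_m, d_grid, params["lag_steps"])))
    sw_lagged = np.roll(sw_in, lag)      # crude lag of the radiation input
    sw_lagged[:lag] = sw_in[0]
    return np.where(temp_c > t_thresh, tf * temp_c + srf * sw_lagged, 0.0)

params = {"thickness_m": [0.05, 0.10, 0.20, 0.40],        # hypothetical table
          "TF":          [0.06, 0.04, 0.02, 0.01],        # mm w.e. h-1 C-1
          "SRF":         [0.0012, 0.0008, 0.0004, 0.0002],
          "lag_steps":   [1, 2, 4, 6]}                     # hours
hours = np.arange(24)
temp = 5.0 + 5.0 * np.sin((hours - 6) / 24 * 2 * np.pi)
sw = np.clip(800.0 * np.sin((hours - 6) / 12 * np.pi), 0, None)
print(debris_ti_melt(temp, sw, debris_m=0.15, params=params).sum())  # daily melt, mm
```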

  11. Accounting for disturbance history in models: using remote sensing to constrain carbon and nitrogen pool spin-up.

    Science.gov (United States)

    Hanan, Erin J; Tague, Christina; Choate, Janet; Liu, Mingliang; Kolden, Crystal; Adam, Jennifer

    2018-03-24

    Disturbances such as wildfire, insect outbreaks, and forest clearing play an important role in regulating carbon, nitrogen, and hydrologic fluxes in terrestrial watersheds. Evaluating how watersheds respond to disturbance requires understanding mechanisms that interact over multiple spatial and temporal scales. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by uncertainties in the initial state of plant carbon and nitrogen stores. Watershed models typically use one of two methods to initialize these stores: spin-up to steady state or remote sensing with allometric relationships. Spin-up involves running a model until vegetation reaches equilibrium based on climate. This approach assumes that vegetation across the watershed has reached maturity and is of uniform age, which fails to account for landscape heterogeneity and non-steady-state conditions. By contrast, remote sensing can provide data for initializing such conditions. However, methods for assimilating remote sensing into model simulations can also be problematic. They often rely on empirical allometric relationships between a single vegetation variable and modeled carbon and nitrogen stores. Because allometric relationships are species- and region-specific, they do not account for the effects of local resource limitation, which can influence carbon allocation (to leaves, stems, roots, etc.). To address this problem, we developed a new initialization approach using the catchment-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spin-up with the spatial fidelity of remote sensing. It uses remote sensing to define spatially explicit targets for one or several vegetation state variables, such as leaf area index, across a watershed. The model then simulates the growth of carbon and nitrogen stores until the defined targets are met for all locations. We evaluated this approach in a mixed pine-dominated watershed in
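    The target-driven initialization can be sketched generically: advance a growth model year by year and stop once the simulated state (here leaf area index) reaches the remotely sensed target. The growth and carbon-to-LAI functions below are toy placeholders, not RHESSys process representations.

```python
def spin_up_to_target(target_lai, growth_step, lai_of_carbon, c0=0.01,
                      max_years=1000, tol=0.01):
    """Grow a carbon store year by year until the simulated LAI reaches the
    remotely sensed target (within `tol`), instead of spinning up to a
    climate-driven steady state."""
    carbon = c0
    for year in range(max_years):
        if lai_of_carbon(carbon) >= target_lai - tol:
            return carbon, year
        carbon = growth_step(carbon)
    raise RuntimeError("target LAI not reached within max_years")

# Toy illustration: logistic carbon growth and a saturating carbon-to-LAI map.
grow = lambda c: c + 0.3 * c * (1.0 - c / 12.0)   # kgC m-2 per year, hypothetical
to_lai = lambda c: 6.0 * c / (c + 4.0)            # hypothetical allometry
print(spin_up_to_target(target_lai=3.5, growth_step=grow, lai_of_carbon=to_lai))
```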

  12. Editorial: The proper place for knowledge

    Directory of Open Access Journals (Sweden)

    Yngve Nordkvelle

    2009-11-01

    Full Text Available Knowledge is an interesting word, which never goes out of fashion. In the political context, knowledge is something everyone hails and cherishes. An example is that the Socialist Government in Norway renamed its "Ministry of Education and Research" to the "Ministry of Knowledge". It would probably be politically wrong to defy the word "knowledge". The word "knowledge" stirs, however, different sentiments in people. In modern education, the word signifies something notable, discernable, visual or at least possible to distinguish from what it is not. In learning in higher education, knowledge is most often considered as the raw material for learning, with the little extra that distinguishes it from "information". Knowledge is information with a direction, a purpose and meaning, but without the implied cultivation of a teaching and learning process. Given knowledge is used for educational purposes, the processing of knowledge from its basic concepts to embodied and reflected knowledge, properly understood and reconceptualised by the learner, transforms not only the learner, but also the knowledge. In a peripatetic tradition, one likes to think of knowledge as foundation elements for constructions of ethical wisdom as its highest reflective level. Probably we will never see a "Ministry of Wisdom" established, because what is "wisdom" is probably so much more politically charged than "Knowledge". One can only wonder why anyone would degrade a ministry for education to something less. In Europe a rewriting of university curricula is underway all over the continent, because "knowledge" is a key concept in the writing of "learning outcomes". It appears every college is absorbed in sorting out what knowledge is and how knowledge can be classified in categories and levels, and then composed to readable descriptions of syllabi, course descriptions and schemes. Let us hope they are more able than what has been the case. Professor Ronald Barnett of the

  13. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
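    The broken-line linear (BLL) ascending form can be written as a response that rises linearly up to a breakpoint and is flat beyond it. The sketch below fits that curve with SciPy to synthetic G:F data; it ignores the random block effects and heteroskedastic error variances handled by the mixed-model analysis in the record, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line_linear(x, plateau, slope, breakpoint):
    """Ascending broken-line: rises linearly up to the breakpoint, flat after."""
    return plateau - slope * np.maximum(breakpoint - x, 0.0)

# Synthetic G:F response to the SID Trp:Lys ratio (%); values are illustrative only.
ratio = np.array([14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0])
noise = np.random.default_rng(2).normal(0.0, 0.004, ratio.size)
gf = broken_line_linear(ratio, 0.68, 0.03, 16.5) + noise

popt, _ = curve_fit(broken_line_linear, ratio, gf, p0=[0.65, 0.02, 16.0])
print(dict(zip(["plateau", "slope", "breakpoint"], np.round(popt, 3))))
```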

  14. Probabilistic modelling in urban drainage – two approaches that explicitly account for temporal variation of model errors

    DEFF Research Database (Denmark)

    Löwe, Roland; Del Giudice, Dario; Mikkelsen, Peter Steen

    of input uncertainties observed in the models. The explicit inclusion of such variations in the modelling process will lead to a better fulfilment of the assumptions made in formal statistical frameworks, thus reducing the need to resort to informal methods. The two approaches presented here

  15. The Situated Inference Model: An Integrative Account of the Effects of Primes on Perception, Behavior, and Motivation.

    Science.gov (United States)

    Loersch, Chris; Payne, B Keith

    2011-05-01

    The downstream consequences of a priming induction range from changes in the perception of objects in the environment to the initiation of prime-related behavior and goal striving. Although each of these outcomes has been accounted for by separate mechanisms, we argue that a single process could produce all three priming effects. In this article, we introduce the situated inference model of priming, discuss its potential to account for these divergent outcomes with one mechanism, and demonstrate its ability to organize the priming literatures surrounding these effects. According to the model, primes often do not cause direct effects, instead altering only the accessibility of prime-related mental content. This information produces downstream effects on judgment, behavior, or motivation when it is mistakenly viewed as originating from one's own internal thought processes. When this misattribution occurs, the prime-related mental content becomes a possible source of information for solving whatever problems are afforded by the current situation. Because different situations afford very different questions and concerns, the inferred meaning of this prime-related content can vary greatly. The use of this information to answer qualitatively different questions can lead a single prime to produce varied effects on judgment, behavior, and motivation. © The Author(s) 2011.

  16. A general model for likelihood computations of genetic marker data accounting for linkage, linkage disequilibrium, and mutations.

    Science.gov (United States)

    Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter

    2015-09-01

    Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand, where using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, providing a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se ). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software has the possibility to perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers.

  17. Tritium accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.; Spannagel, G.

    1995-01-01

    Conventional accountancy means that for a given material balance area and a given interval of time the tritium balance is established so that at the end of that interval of time the book inventory is compared with the measured inventory. In this way, an optimal effectiveness of accountancy is achieved. However, there are still further objectives of accountancy, namely the timely detection of anomalies as well as the localization of anomalies in a major system. It can be shown that each of these objectives can be optimized only at the expense of the others. Recently, Near-Real-Time Accountancy procedures have been studied; their methodological background as well as their merits will be discussed. (orig.)
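    The balance described can be made concrete with standard material-balance bookkeeping: the closing book inventory is the opening inventory plus receipts minus shipments, and its difference from the measured physical inventory is the material unaccounted for (MUF). The figures below are hypothetical.

```python
def tritium_balance(book_opening, receipts, shipments, measured_closing):
    """Material balance for one accounting period: compare the book inventory
    at the end of the period with the measured physical inventory; the
    difference is the material unaccounted for (MUF)."""
    book_closing = book_opening + receipts - shipments
    muf = book_closing - measured_closing
    return book_closing, muf

# Hypothetical figures in grams of tritium for one balance period.
print(tritium_balance(book_opening=120.0, receipts=15.0,
                      shipments=10.0, measured_closing=124.6))
# -> (125.0, 0.4); the 0.4 g MUF would be judged against measurement uncertainty.
```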

  18. Accounting rigid support at the border in a mixed model the finite element method in problems of ice cover destruction

    Directory of Open Access Journals (Sweden)

    V. V. Knyazkov

    2014-01-01

    Full Text Available Evaluating the force required to damage ice covers is necessary for estimating the icebreaking capability of vessels, as well as the hull strength of icebreakers and the navigation of ships in ice conditions. On the other hand, the use of ice cover support to arrange construction works from the ice is also of practical interest. By now a great deal of investigation of ice cover deformation has been carried out, usually resulting in approximate calculation formulas obtained after making a variety of assumptions. Nevertheless, we believe that it is possible to improve the calculations further. The application of numerical methods, for example FEM, makes it possible to avoid numerous drawbacks of analytical methods in dealing with complex boundaries, load application areas and other problem peculiarities. The article considers an application of mixed models of FEM for investigating ice cover deformation. A simple flexural triangular element of mixed type was taken to solve this problem. The vector of generalized coordinates of the element contains the deflections at its apices and the normal bending moments at the midpoints of its sides. Compared to other elements, mixed-type elements easily satisfy compatibility requirements on the boundary of adjacent elements and do not require numerical differentiation of displacements to define bending moments, because the bending moments are included in the vector of generalized coordinates of the element. A method of accounting for rigid support along the plate boundary is proposed. The resulting relation, taking the "stiffening" into account, reduces the size of the resolving system of equations by the number of elements on the plate contour. To further evaluate the numerical realization of the ice cover stress-strain problem, it is necessary to check whether the calculation results correspond to the exact solution. Using the example of a circular plate, the convergence of the numerical solutions to the analytical solution is shown. The article

  19. Model cortical association fields account for the time course and dependence on target complexity of human contour perception.

    Directory of Open Access Journals (Sweden)

    Vadas Gintautas

    2011-10-01

    Full Text Available Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least [Formula: see text] ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.

  20. IS IT IMPORTANT THE ACCOUNTING MODEL USED BY THE ECONOMIC ENTITY IN MAKING DECISIONS BY THE USERS OF THE INFORMATION? POINTS OF VIEW.

    Directory of Open Access Journals (Sweden)

    Luminita Rus

    2014-07-01

    Full Text Available Nowadays it is vital to stay informed. But why is information so important? The following will present the role and the importance of accounting information in decision-making regarding an economic entity. Is accounting information a want or a need? Can accounting information be interpreted correctly regardless of the accounting model used? Would the decisions made by the consumers of accounting information be the same whether the "cash" or the "accrual" method is used? Based on these questions, this paper presents the data as a whole, from which each consumer of information can extract only the part that is of interest and use to them. The paper follows the interests of users of accounting information with regard to their need for information, the decisions they may take as a result of the information received, and the accounting model used by the entity from which the information is expected. This paper does not include all users of accounting information with an interest in the economic entity, nor does it take into account all the information affected by the economic entity's use of one accounting model or the other, but we can conclude that there are situations in which users' decisions are influenced by the accounting model used, and others where there is no influence. We cannot rule that either model is the best; a model is good if it provides useful information showing the true reality of the economic entity.
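    The cash-versus-accrual contrast the paper asks about can be shown on a toy transaction set: a sale invoiced in one period but collected in the next is recognized in different periods under the two bases, so the reported period income differs. The example is illustrative only.

```python
def period_income(transactions, basis):
    """Income per period under the cash or accrual basis.  Each transaction is
    (amount, period_earned_or_incurred, period_cash_moves); positive amounts
    are revenues, negative amounts are expenses."""
    income = {}
    for amount, period_accrual, period_cash in transactions:
        period = period_accrual if basis == "accrual" else period_cash
        income[period] = income.get(period, 0) + amount
    return dict(sorted(income.items()))

# A sale invoiced in period 1 but collected in period 2, plus a cash expense.
txns = [(+1000, 1, 2),    # revenue earned in P1, cash received in P2
        (-400,  1, 1)]    # expense incurred and paid in P1
print("accrual:", period_income(txns, "accrual"))  # {1: 600}
print("cash:   ", period_income(txns, "cash"))     # {1: -400, 2: 1000}
```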

  1. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
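    The within-cluster resampling idea underlying RCIC can be sketched by brute force: repeatedly draw one observation per cluster (removing the effect of informative cluster size), fit the candidate marginal model to that independent subsample, and average a model-selection criterion over draws. The RCIC itself avoids performing the actual resampling; the function names, the OLS/BIC choice and the data below are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def resampled_criterion(df, formula, cluster_col, n_resamples=200, seed=0):
    """Average a model-selection criterion (here BIC) over within-cluster
    subsamples: one randomly drawn observation per cluster per replicate."""
    rng = np.random.default_rng(seed)
    crits = []
    for _ in range(n_resamples):
        sub = df.groupby(cluster_col, group_keys=False).apply(
            lambda g: g.sample(1, random_state=int(rng.integers(1 << 31))))
        crits.append(sm.OLS.from_formula(formula, data=sub).fit().bic)
    return float(np.mean(crits))

# Tiny illustration: 30 clusters whose sizes vary from 1 to 5.
rng = np.random.default_rng(1)
rows = []
for cid in range(30):
    x = rng.normal(size=rng.integers(1, 6))
    y = 1.0 + 0.5 * x + rng.normal(0.0, 0.3, x.size)
    rows += [{"cluster": cid, "x": xi, "y": yi} for xi, yi in zip(x, y)]
frame = pd.DataFrame(rows)
print(resampled_criterion(frame, "y ~ x", "cluster", n_resamples=50))
```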

  2. Ensemble-based flash-flood modelling: Taking into account hydrodynamic parameters and initial soil moisture uncertainties

    Science.gov (United States)

    Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique

    2018-05-01

    Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainties, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty comes from the rainfall forcing, and convective-scale meteorological ensemble prediction systems can manage it for forecasting purposes. But other sources are related to the hydrological modelling part of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and the initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulations of Mediterranean fast-responding rivers, the ISBA-TOP coupled system. The first step consists of identifying the parameters that have the strongest influence on FF simulations by assuming perfect precipitation. A sensitivity study is carried out first using a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as the initial soil moisture allow the design of an ensemble-based version of ISBA-TOP. The first results of this system on some real events are presented. The direct perspective of this work will be to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system to design a complete HEPS for FF forecasting.

  3. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Peixin; Chai, Feng [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Bi, Yunlong [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Pei, Yulong, E-mail: peiyulong1@163.com [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Cheng, Shukang [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China)

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of magnetic material, the magnetic bridges are equivalent to fan-shaped saturation regions. For getting standard boundary conditions, a lumped parameter magnetic circuit model and iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression of each subdomain is derived. The analytical results of the magnetic field distribution, back-EMF and torque are verified by finite element method, which confirms the validity of the proposed model for facilitating the motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is first calculated by an analytical method. • The magnetic circuit model and iterative method are employed to calculate the permeability. • The analytical expression of each subdomain is derived. • The proposed method can effectively reduce the predesign stages duration.

  4. Serviceability limit state related to excessive lateral deformations to account for infill walls in the structural model

    Directory of Open Access Journals (Sweden)

    G. M. S. ALVA

    Full Text Available Brazilian Codes NBR 6118 and NBR 15575 provide practical values for interstory drift limits applied to conventional modeling in order to prevent negative effects in masonry infill walls caused by excessive lateral deformability; however, these codes do not account for infill walls in the structural model. The inclusion of infill walls in the proposed model allows for a quantitative evaluation of structural stresses in these walls and an assessment of cracking in these elements (sliding shear, diagonal tension and diagonal compression cracking). This paper presents the results of simulations of single-story one-bay infilled R/C frames. The main objective is to show how to check the serviceability limit states under lateral loads when the infill walls are included in the modeling. The results of the numerical simulations allowed for an evaluation of stresses and the probable cracking pattern in infill walls. The results also allowed an identification of some advantages and limitations of the NBR 6118 practical procedure based on interstory drift limits.
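    The drift-limit check itself is straightforward: compute the relative displacement between consecutive floors, divide by the storey height and compare with the code limit. The default limit ratio of h/850 below is an assumption used for illustration; the applicable value should be taken from NBR 6118 / NBR 15575, and the displacements are hypothetical.

```python
def interstory_drift_check(displacements_mm, storey_heights_mm, limit_ratio=1 / 850):
    """Check interstory drift ratios against a serviceability limit ratio.
    Returns (storey index, drift ratio, within-limit flag) for each storey."""
    results = []
    for i in range(1, len(displacements_mm)):
        drift = displacements_mm[i] - displacements_mm[i - 1]
        ratio = drift / storey_heights_mm[i - 1]
        results.append((i, ratio, ratio <= limit_ratio))
    return results

# Lateral displacements at each floor (mm) under a service wind combination (hypothetical).
disp = [0.0, 2.1, 4.6, 6.7]
heights = [3000.0, 3000.0, 3000.0]     # storey heights in mm
for storey, ratio, ok in interstory_drift_check(disp, heights):
    print(f"storey {storey}: drift ratio 1/{1 / ratio:.0f} -> {'OK' if ok else 'exceeds limit'}")
```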

  5. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    International Nuclear Information System (INIS)

    Liang, Peixin; Chai, Feng; Bi, Yunlong; Pei, Yulong; Cheng, Shukang

    2016-01-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of magnetic material, the magnetic bridges are equivalent to fan-shaped saturation regions. For getting standard boundary conditions, a lumped parameter magnetic circuit model and iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression of each subdomain is derived. The analytical results of the magnetic field distribution, back-EMF and torque are verified by finite element method, which confirms the validity of the proposed model for facilitating the motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is first calculated by an analytical method. • The magnetic circuit model and iterative method are employed to calculate the permeability. • The analytical expression of each subdomain is derived. • The proposed method can effectively reduce the predesign stages duration.

  6. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution and the spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
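    A common approximation to Bayesian model averaging weights each candidate model by exp(−ΔBIC/2) and combines the effect estimates with a variance that adds the between-model spread to the within-model variance. The sketch below applies that approximation to invented numbers; the paper itself averages full posterior distributions rather than BIC-based point summaries.

```python
import numpy as np

def bic_model_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    w_m proportional to exp(-0.5 * delta_BIC_m)."""
    bics = np.asarray(bics, dtype=float)
    rel = np.exp(-0.5 * (bics - bics.min()))
    return rel / rel.sum()

def model_averaged_effect(estimates, ses, bics):
    """BIC-weighted average of an effect estimate, with a variance that adds
    the between-model spread to the within-model variance."""
    w = bic_model_weights(bics)
    estimates = np.asarray(estimates, dtype=float)
    est = np.average(estimates, weights=w)
    var = np.average(np.square(ses) + (estimates - est) ** 2, weights=w)
    return est, np.sqrt(var)

# Three candidate deprivation specifications (illustrative numbers only).
print(model_averaged_effect(estimates=[0.021, 0.015, 0.028],
                            ses=[0.006, 0.007, 0.005],
                            bics=[4210.3, 4212.9, 4208.1]))
```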

  7. MODEL OF DISTRIBUTION OF THE BUDGET OF THE PORTFOLIO OF IT PROJECTS TAKING IN-TO ACCOUNT THEIR PRIORITY

    Directory of Open Access Journals (Sweden)

    Anita V. Sotnikova

    2015-01-01

    Full Text Available The article is devoted to the problem of effective distribution of the overall budget of a portfolio between the IT projects that make it up, taking into account their priority. The problem is topical in view of the poor results achieved by consulting companies in the information technology sphere. To determine the priority of IT projects, the analytic network process developed by T. Saaty is used. For the purpose of applying this method, a system of criteria (indicators) is developed that reflects the influence of the portfolio's IT projects on the most significant goals of their implementation. The key performance indicators defined when developing the Balanced Scorecard, which meet the above-mentioned requirements, are used as this system of criteria. The essence of the analytic network process consists of pairwise comparison of the key performance indicators with respect to the purpose of realizing the portfolio and the IT projects that are part of the portfolio. The result of using the analytic network process is a priority coefficient for each IT project of the portfolio. The obtained priority coefficients are used in the proposed model for distributing the portfolio budget between IT projects. Thus, the budget of a portfolio of IT projects is distributed between them taking into account not only the income from implementing each IT project, but also other criteria important for the IT company, for example: the degree of compliance of the IT project with the strategic objectives of the IT company, which defines the expediency of implementing the IT project; and the term of implementation of the IT project determined by the customer. The developed model of distributing the portfolio budget between IT projects is tested on the example of distributing the budget of a portfolio consisting of three IT projects. Taking into account the received
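    Priority coefficients of the kind described are commonly obtained as the principal eigenvector of a pairwise comparison matrix on the Saaty 1-9 scale, and the budget can then be split proportionally to those weights. The sketch below is a simplified single-matrix (AHP-style) illustration with invented judgements, not the full analytic network process of the record.

```python
import numpy as np

def priority_vector(pairwise, iters=100, tol=1e-10):
    """Principal eigenvector of a pairwise comparison matrix (Saaty scale),
    normalized to sum to one, obtained by power iteration."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w_new = A @ w
        w_new /= w_new.sum()
        if np.allclose(w, w_new, atol=tol):
            break
        w = w_new
    return w_new

# Hypothetical pairwise judgements for three IT projects against one criterion.
pairwise = [[1.0,   3.0, 5.0],
            [1/3.0, 1.0, 2.0],
            [1/5.0, 1/2.0, 1.0]]
weights = priority_vector(pairwise)
budget = 1_000_000.0
print({f"project {i + 1}": round(float(budget * w)) for i, w in enumerate(weights)})
```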

  8. Proper Tools Helping Sustainability in Logistics Practice

    NARCIS (Netherlands)

    Alrik Stelling; Nico Lamers; Gerard Vos; Reinder Pieters; Stef Weijers; Erik Koekebakker

    2009-01-01

    Proliferation on sustainability is a must, for quite a lot of companies. Logisticians could use models in attaining sustainability, or at least in understanding its potentials. A sustainable business plan must be based on a clear vision and must be underpinned thoroughly, in order to get the board

  9. On the Faddeev-Yacubovsky model of four nucleon scattering problem with account of spin and isospin

    International Nuclear Information System (INIS)

    Sharma, V.K.

    1976-01-01

    The Faddeev-Yacubovsky model of four nucleons, taking into account their spin and isospin within the two-channel resonating group approximation, is considered. In this approximation, one employs a completely antisymmetric wave function which can be written as the clustering of the d+d and n+³He (or p+³H) systems with antisymmetric spin-isospin states. The two-nucleon interactions used are of the separable Yamaguchi form in the ³S₁ and ¹S₀ states. The equations for the states with quantum numbers S = 0, 1, 2 and T = 0 are obtained. It is shown that a separable representation of the two-particle t-matrix reduces the equations to a set of one-dimensional coupled integral equations. (author)
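
    For reference, the standard Yamaguchi separable interaction has the generic form sketched below; the particular strengths and ranges fitted in the paper are not reproduced here. It is this separable structure of the two-body t-matrix that allows the reduction to one-dimensional coupled integral equations.

        % Generic Yamaguchi separable form (illustrative; parameters not from the paper)
        V(p,p') = -\lambda\, g(p)\, g(p'), \qquad g(p) = \frac{1}{p^{2} + \beta^{2}},
        \qquad
        t(p,p';E) = g(p)\,\tau(E)\,g(p')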

  10. Estimation of the detection limit of an experimental model of tritium storage bed designed for 'in-situ' accountability

    International Nuclear Information System (INIS)

    Bulubasa, Gheorghe; Bidica, Nicolae; Stefanescu, Ioan; Bucur, Ciprian; Deaconu, Mariea

    2009-01-01

    During the water detritiation process most of the tritium inventory is transferred from water into the gaseous phase, where it is further enriched and finally extracted and safely stored. The control of the tritium inventory is an acute issue from several points of view: - Financial - tritium is an expensive material; - Safeguards - tritium is considered a nuclear material of strategic importance; - Safety - tritium is a radioactive material: requirements for a documented safety analysis report (to ensure strict limits on the total tritium allowed) and for evaluation of the accident consequences associated with that inventory. Large amounts of tritium can be stored very safely as metal tritides. A bench-scale experiment of a tritium storage bed with an integrated system for in-situ tritium inventory accountancy was designed and developed at ICSI Rm. Valcea. The calibration curve and the detection limit for this experimental model of tritium storage bed were determined. The experimental results are presented in this paper. (authors)
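
    As a purely illustrative sketch of how a calibration curve and a detection limit of this kind are typically obtained, the snippet below fits a straight line to hypothetical bed-response data and applies the common three-sigma-of-the-blank criterion; it is not the procedure or the data reported by the authors.

        # Illustrative only: linear calibration fit plus a 3-sigma detection limit.
        import numpy as np

        inventory = np.array([0.0, 5.0, 10.0, 20.0, 40.0])    # reference tritium loadings (a.u.)
        signal    = np.array([0.02, 0.55, 1.08, 2.11, 4.20])  # measured bed response (a.u.)

        slope, intercept = np.polyfit(inventory, signal, 1)

        blank_sd = np.std([0.02, 0.03, 0.01, 0.02, 0.03], ddof=1)  # repeated blank readings
        detection_limit = 3.0 * blank_sd / slope

        print(f"calibration: signal = {slope:.3f} * inventory + {intercept:.3f}")
        print(f"estimated detection limit: {detection_limit:.2f} (inventory units)")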

  11. Towards the Proper Integration of Extra-Functional Requirements

    Directory of Open Access Journals (Sweden)

    Elke Hochmuller

    1999-05-01

    Full Text Available In spite of the many achievements in software engineering, proper treatment of extra-functional requirements (also known as non-functional requirements) within the software development process is still a challenge to our discipline. The application of functionality-biased software development methodologies can lead to major contradictions in the joint modelling of functional and extra-functional requirements. Based on a thorough discussion of the nature of extra-functional requirements as well as of open issues in coping with them, this paper emphasizes the role of extra-functional requirements in the software development process. In particular, a framework supporting the explicit integration of extra-functional requirements into a conventional phase-driven process model is proposed and outlined.

  12. Aeroelastic System Development Using Proper Orthogonal Decomposition and Volterra Theory

    Science.gov (United States)

    Lucia, David J.; Beran, Philip S.; Silva, Walter A.

    2003-01-01

    This research combines Volterra theory and proper orthogonal decomposition (POD) into a hybrid methodology for reduced-order modeling of aeroelastic systems. The outcome of the method is a set of linear ordinary differential equations (ODEs) describing the modal amplitudes associated with both the structural modes and the POD basis functions for the fluid. For this research, the structural modes are sine waves of varying frequency, and the Volterra-POD approach is applied to the fluid dynamics equations. The structural modes are treated as forcing terms which are impulsed as part of the fluid model realization. Using this approach, structural and fluid operators are coupled into a single aeroelastic operator. This coupling converts a free boundary fluid problem into an initial value problem, while preserving the parameter (or parameters) of interest for sensitivity analysis. The approach is applied to an elastic panel in supersonic cross flow. The hybrid Volterra-POD approach provides a low-order fluid model in state-space form. The linear fluid model is tightly coupled with a nonlinear panel model using an implicit integration scheme. The resulting aeroelastic model provides correct limit-cycle oscillation prediction over a wide range of panel dynamic pressure values. Time integration of the reduced-order aeroelastic model is four orders of magnitude faster than the high-order solution procedure developed for this research using traditional fluid and structural solvers.
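
    A minimal sketch of the POD step, assuming a snapshot matrix of flow-field samples; the reduced basis is obtained from an SVD and retains the modes carrying most of the snapshot energy. This is generic POD on placeholder data, not the authors' coupled Volterra-POD aeroelastic code.

        # Generic POD via SVD of a snapshot matrix (placeholder random data).
        import numpy as np

        rng = np.random.default_rng(0)
        n_dof, n_snap = 500, 120
        snapshots = rng.standard_normal((n_dof, n_snap))   # stand-in for flow-field snapshots

        X = snapshots - snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(X, full_matrices=False)

        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.99)) + 1         # modes capturing 99% of the energy
        Phi = U[:, :r]                                     # reduced-order basis

        a = Phi.T @ X                                      # modal amplitudes of the snapshots
        print(f"retained {r} POD modes out of {n_snap} snapshots")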

  13. The (Proper) Microfoundations of Routines and Capabilities

    DEFF Research Database (Denmark)

    Felin, Teppo; Foss, Nicolai Juul

    2012-01-01

    Sidney Winter (2011), Brian Pentland (2011), and Geoffrey Hodgson and Thorbjørn Knudsen (2011) take issue with the arguments in Teppo Felin and Nicolai J. Foss (2011), along with more generally critiquing the ‘microfoundations project’ related to routines and capabilities. In this rejoinder we ar...... chauvinism; (3) models of mind and man; (4) levels of analysis; (5) agency and uncaused causes; and then further discuss (6) a rationalist alternative....

  14. Explosion Source Model Development in Support of Seismic Monitoring Technologies: New Models Accounting for Shock-Induced Tensile Failure

    Science.gov (United States)

    2008-09-01

    values for nuclear explosions at the Semipalatinsk Test Site (STS) will be inferred in the same way they were for NTS. Comparisons between K values...K > ~3 in Poisson media. Most Nevada Test Site (NTS) observations support ~1 < K < 3, and as such the new model predicts lower Ms compared to the...explosions at the two test sites and for two different containment rules are summarized in Table 1 below. F1 is found to be positive for NTS, as we

  15. Accounting for pH heterogeneity and variability in modelling human health risks from cadmium in contaminated land

    International Nuclear Information System (INIS)

    Gay, J. Rebecca; Korre, Anna

    2009-01-01

    The authors have previously published a methodology which combines quantitative probabilistic human health risk assessment and spatial statistical methods (geostatistics) to produce an assessment, incorporating uncertainty, of risks to human health from exposure to contaminated land. The model assumes a constant soil-to-plant concentration factor (CFveg) when calculating intake of contaminants. This model is modified here to enhance its use in a situation where CFveg varies according to soil pH, as is the case for cadmium. The original methodology uses sequential indicator simulation (SIS) to map soil concentration estimates for one contaminant across a site. A real, age-stratified population is mapped across the contaminated area, and intake of soil contaminants by individuals is calculated probabilistically using an adaptation of the Contaminated Land Exposure Assessment (CLEA) model. The proposed improvement involves not only the geostatistical estimation of the contaminant concentration, but also that of soil pH, which in turn leads to a variable CFveg estimate which influences the human intake results. The results presented demonstrate that taking pH into account can influence the outcome of the risk assessment greatly. It is proposed that a similar adaptation could be used for other combinations of soil variables which influence CFveg.
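
    The effect of letting CFveg vary with pH can be illustrated with a toy Monte Carlo intake calculation; the pH dependence, distributions and intake rate below are invented for illustration and are not the CLEA parameterisation used by the authors.

        # Toy probabilistic intake via the vegetable pathway with a pH-dependent CFveg.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000

        soil_cd = rng.lognormal(mean=0.5, sigma=0.4, size=n)   # simulated soil Cd (mg/kg)
        soil_ph = rng.normal(loc=6.0, scale=0.7, size=n)       # co-simulated soil pH

        # Hypothetical relationship: Cd uptake by plants decreases as pH rises
        cf_veg = 10 ** (0.6 - 0.25 * (soil_ph - 5.0))

        veg_intake_rate = 0.3                                  # kg/day of home-grown produce
        intake = soil_cd * cf_veg * veg_intake_rate            # mg/day

        print(f"median intake {np.median(intake):.2f} mg/day, "
              f"95th percentile {np.percentile(intake, 95):.2f} mg/day")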

  16. Proper Elements and Secular Resonances for Irregular Satellites

    Science.gov (United States)

    Beaugé, C.; Nesvorný, D.

    2007-06-01

    We present results of an analytical study of proper elements and secular resonances for the irregular satellites of the outer planets. In the case of the Jovian system we identify three satellite families, two of them previously known (Carme and Ananke), plus a new agglomeration of four bodies that includes Pasiphae as its largest member. While the distribution of proper elements for Saturn's moons seems to be more random, a small cluster was found for the direct moons formed by Albiorix, Erriapo, and 2004 S1, slightly different from the so-called Gaulish cluster. No significant families are detected in the present study for the Uranian or Neptunian satellite systems. For each satellite system we determine the location of several secular resonances in the proper element space. Apart from the well-known resonance locks of Pasiphae, Sinope, and Siarnaq, a comparison between the resonance locations and proper elements shows that Saturn's satellite Narvi also exhibits temporary librations in the ϖ - ϖ_solar resonance. However, unlike the resonant Jovian moons that are located in the same configuration, Narvi's critical argument librates alternately around values near 90° and 270°. Neither the Uranian nor Neptunian systems seem to have resonant moons. The resonant dynamics of the real satellites in the vicinity of ϖ˙ - ϖ˙_solar = 0 is studied with a simple model for secular resonances based on the restricted three-body problem. Depending on the initial conditions, we show the existence of one or two modes of libration that can occur at different values of the critical angle, showing a good correspondence with the observed behavior of all the resonant moons. Finally, we discuss the global distribution of the real satellites with respect to the secular resonances, as compared with synthetic populations of bodies drawn solely from stability conditions. For Saturn, we find that the present satellite population appears compatible with simple random distributions. Although

  17. Goals and Psychological Accounting

    DEFF Research Database (Denmark)

    Koch, Alexander Karl; Nafziger, Julia

    We model how people formulate and evaluate goals to overcome self-control problems. People often attempt to regulate their behavior by evaluating goal-related outcomes separately (in narrow psychological accounts) rather than jointly (in a broad account). To explain this evidence, our theory of endogenous narrow or broad psychological accounts combines insights from the literatures on goals and mental accounting with models of expectations-based reference-dependent preferences. By formulating goals the individual creates expectations that induce reference points for task outcomes. These goal-induced reference points make substandard performance psychologically painful and motivate the individual to stick to his goals. How strong the commitment to goals is depends on the type of psychological account. We provide conditions when it is optimal to evaluate goals in narrow accounts. The key intuition...

  18. An empirical test of Birkett’s competency model for management accountants : A confirmative study conducted in the Netherlands

    NARCIS (Netherlands)

    Bots, J.M.; Groenland, E.A.G.; Swagerman, D.

    2009-01-01

    In 2002, the Accountants-in-Business section of the International Federation of Accountants (IFAC) issued the Competency Profiles for Management Accounting Practice and Practitioners report. This “Birkett Report” presents a framework for competency development during the careers of management

  19. Modelling of the physico-chemical behaviour of clay minerals with a thermo-kinetic model taking into account particles morphology in compacted material.

    Science.gov (United States)

    Sali, D.; Fritz, B.; Clément, C.; Michau, N.

    2003-04-01

    Modelling of fluid-mineral interactions is widely used in Earth Sciences studies to better understand the physicochemical processes involved and their long-term effect on the behaviour of materials. Numerical models simplify the processes but try to preserve their main characteristics. The modelling results therefore depend strongly on the quality of the data describing the initial physicochemical conditions for rock materials, fluids and gases, and on how realistically the processes are represented. Current geochemical models do not properly take into account rock porosity and permeability or the particle morphology of clay minerals. In compacted materials like those considered as barriers in waste repositories, low-permeability rocks like mudstones or compacted powders will be used: they contain mainly fine particles, and the geochemical models used for predicting their interactions with fluids tend to misjudge their surface areas, which are fundamental parameters in kinetic modelling. The purpose of this study was to improve the way particle morphology is taken into account in the thermo-kinetic code KINDIS and the reactive transport code KIRMAT. A new function was integrated in these codes, treating the reaction surface area as a volume-dependent parameter, and the calculated evolution of the mass balance in the system was coupled with the evolution of reactive surface areas. We carried out application exercises for numerical validation of these new versions of the codes, and the results were compared with those of the pre-existing thermo-kinetic code KINDIS. Several points are highlighted. Taking into account reactive surface area evolution during simulation modifies the predicted mass transfers related to fluid-mineral interactions. Different secondary mineral phases are also observed during modelling. The evolution of the reactive surface parameter helps to solve the competition effects between different phases present in the system which are all able to fix the chemical
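
    The kind of coupling described above can be sketched with a simple kinetic rate law in which the reactive surface area is recomputed from the remaining mineral volume at every step, instead of being held constant; the rate constant, equilibrium constant and geometry below are arbitrary and do not correspond to the KINDIS/KIRMAT implementation.

        # Schematic dissolution kinetics with a volume-dependent reactive surface area.
        import numpy as np

        k, K_eq, Q = 1.0e-10, 1.0e-4, 1.0e-6   # rate constant (mol m-2 s-1), equilibrium constant, ion activity product
        V_molar = 1.0e-4                        # molar volume of the mineral (m3/mol)

        n = 10.0                                # moles of mineral
        A0, V0 = 2.0, n * V_molar               # initial reactive surface (m2) and volume (m3)

        dt = 3600.0
        for _ in range(365 * 24):               # one year of hourly steps
            V = n * V_molar
            A = A0 * (V / V0) ** (2.0 / 3.0)    # surface area rescaled with volume^(2/3)
            rate = k * A * (1.0 - Q / K_eq)     # mol/s dissolved (far from equilibrium here)
            n = max(n - rate * dt, 0.0)

        print(f"mineral remaining after one year: {n:.6f} mol")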

  20. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    International Nuclear Information System (INIS)

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-01-01

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
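
    For readers unfamiliar with the two quantities being compared, the cause-specific hazard for event k conditions on subjects still free of any event, whereas the subdistribution hazard keeps subjects who have already experienced a competing event in the risk set and is linked directly to the cumulative incidence function F_k. The definitions below are the standard ones, not specific to this trial.

        % Cause-specific hazard for event type k
        \lambda_k(t) = \lim_{\Delta t \to 0}
          \frac{P(t \le T < t+\Delta t,\; K = k \mid T \ge t)}{\Delta t}
        % Subdistribution (Fine-Gray) hazard, tied to the cumulative incidence F_k(t)
        \tilde{\lambda}_k(t) = -\frac{d}{dt}\,\log\bigl(1 - F_k(t)\bigr)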

  1. Comprehensive Revenue and Expense Data Collection Methodology for Teaching Health Centers: A Model for Accountable Graduate Medical Education Financing.

    Science.gov (United States)

    Regenstein, Marsha; Snyder, John E; Jewers, Mariellen Malloy; Nocella, Kiki; Mullan, Fitzhugh

    2018-04-01

    Despite considerable federal investment, graduate medical education financing is neither transparent for estimating residency training costs nor accountable for effectively producing a physician workforce that matches the nation's health care needs. The Teaching Health Center Graduate Medical Education (THCGME) program's authorization in 2010 provided an opportunity to establish a more transparent financing mechanism. We developed a standardized methodology for quantifying the necessary investment to train primary care physicians in high-need communities. The THCGME Costing Instrument was designed utilizing guidance from site visits, financial documentation, and expert review. It collects educational outlays, patient service expenses and revenues from residents' ambulatory and inpatient care, and payer mix. The instrument was fielded from April to November 2015 in 43 THCGME-funded residency programs of varying specialties and organizational structures. Of the 43 programs, 36 programs (84%) submitted THCGME Costing Instruments. The THCGME Costing Instrument collected standardized, detailed cost data on residency labor (n = 36), administration and educational outlays (n = 33), ambulatory care visits and payer mix (n = 30), patient service expenses (n =  26), and revenues generated by residents (n = 26), in contrast to Medicare cost reports, which include only costs incurred by residency programs. The THCGME Costing Instrument provides a model for calculating evidence-based costs and revenues of community-based residency programs, and it enhances accountability by offering an approach that estimates residency costs and revenues in a range of settings. The instrument may have feasibility and utility for application in other residency training settings.

  2. ABACC - Brazil-Argentina Agency for Accounting and Control of Nuclear Materials, a model of integration and transparence

    International Nuclear Information System (INIS)

    Oliveira, Antonio A.; Do Canto, Odilon Marcusso

    2013-01-01

    Argentina and Brazil began their activities in the nuclear area at about the same time, in the 1950s. The existence of an international nuclear non-proliferation treaty (NPT), seen by Brazil and Argentina as discriminatory and prejudicial to the interests of countries without nuclear weapons, led to the need for a common system of control of nuclear material between the two countries, to somehow provide assurances to the international community of the exclusively peaceful purpose of their nuclear programs. The creation of a common system assured the establishment of uniform procedures to implement safeguards in Argentina and Brazil, so that the same requirements and safeguards procedures took effect in both countries, and the operators of nuclear facilities began to follow the same rules of control of nuclear materials and were subjected to the same type of verification and control. On July 18, 1991, the Bilateral Agreement for the Exclusively Peaceful Use of Nuclear Energy created a binational body, the Argentina-Brazil Agency for Accounting and Control of Nuclear Materials (ABACC), to implement the so-called Common System of Accounting and Control of Nuclear Materials (SCCC). The agreement provided a permanent and clear commitment to use exclusively for peaceful purposes all material and nuclear facilities under the jurisdiction or control of the two countries. The Quadripartite Agreement, signed in December of that year between the two countries, ABACC and the IAEA, completed the legal framework for the implementation of a comprehensive safeguards system. The 'ABACC model' now represents a paradigmatic framework in the long process of economic, political, technological and cultural integration of the two countries. Argentina and Brazil were able to establish a guarantee system that is unique in the world today and that, consolidated and matured over more than twenty years, has earned the respect of the international community.

  3. Can rational models be good accounts of developmental change? The case of language development at two time scales.

    Science.gov (United States)

    Dawson, Colin R; Gerken, LouAnn

    2012-01-01

    Rational models of human perception and cognition have allowed researchers new ways to look at learning and the ability to make inferences from data. But how good are such models at accounting for developmental change? In this chapter, we address this question in the domain of language development, focusing on the speed with which developmental change takes place, and classifying different types of language development as either fast or slow. From the pattern of fast and slow development observed, we hypothesize that rational learning processes are generally well suited for handling fast processes over small amounts of input data. In contrast, we suggest that associative learning processes are generally better suited to slow development, in which learners accumulate information about what is typical of their language over time. Finally, although one system may be dominant for a particular component of language learning, we speculate that both systems frequently interact, with the associative system providing a source of emergent hypotheses to be evaluated by the rational system and the rational system serving to highlight which aspects of the learner's input need to be processed in greater depth by the associative system.

  4. Basis of accountability system

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    The first part of this presentation describes in an introductory manner the accountability design approach which is used for the Model Plant in order to meet US safeguards requirements. The general requirements for the US national system are first presented. Next, the approach taken to meet each general requirement is described. The general concepts and principles of the accountability system are introduced. The second part of this presentation describes some basic concepts and techniques used in the model plant accounting system and relates them to US safeguards requirements. The specifics and mechanics of the model plant accounting system are presented in the third part. The purpose of this session is to enable participants to: (1) understand how the accounting system is designed to meet safeguards criteria for both IAEA and State Systems; (2) understand the principles of materials accounting used to account for element and isotope in the model plant; (3) understand how the computer-based accounting system operates to meet the above objectives
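
    The element and isotope bookkeeping referred to above ultimately rests on the standard materials-balance identity, in which any shortfall over an accounting period appears as material unaccounted for (MUF). This is the generic form of the balance, not necessarily the model plant's exact bookkeeping.

        % Material balance over one accounting period
        \mathrm{MUF} = (\,\mathrm{PB} + \mathrm{R}\,) - (\,\mathrm{PE} + \mathrm{S}\,)
        % PB: beginning physical inventory, R: receipts,
        % PE: ending physical inventory, S: shipments and other removals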

  5. Towards ecosystem accounting

    NARCIS (Netherlands)

    Duku, C.; Rathjens, H.; Zwart, S.J.; Hein, L.

    2015-01-01

    Ecosystem accounting is an emerging field that aims to provide a consistent approach to analysing environment-economy interactions. One of the specific features of ecosystem accounting is the distinction between the capacity and the flow of ecosystem services. Ecohydrological modelling to support

  6. The proper generalized decomposition for advanced numerical simulations a primer

    CERN Document Server

    Chinesta, Francisco; Leygue, Adrien

    2014-01-01

    Many problems in scientific computing are intractable with classical numerical techniques. These fail, for example, in the solution of high-dimensional models due to the exponential increase of the number of degrees of freedom. Recently, the authors of this book and their collaborators have developed a novel technique, called Proper Generalized Decomposition (PGD) that has proven to be a significant step forward. The PGD builds by means of a successive enrichment strategy a numerical approximation of the unknown fields in a separated form. Although first introduced and successfully demonstrated in the context of high-dimensional problems, the PGD allows for a completely new approach for addressing more standard problems in science and engineering. Indeed, many challenging problems can be efficiently cast into a multi-dimensional framework, thus opening entirely new solution strategies in the PGD framework. For instance, the material parameters and boundary conditions appearing in a particular mathematical mod...
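
    The separated representation at the heart of the PGD can be written compactly as below: the unknown field over d coordinates (space, time, material parameters, boundary conditions, and so on) is approximated by a finite sum of products of one-coordinate functions, which is what defeats the exponential growth in degrees of freedom.

        u(x_1, \dots, x_d) \;\approx\; \sum_{i=1}^{N} \prod_{j=1}^{d} F_j^{i}(x_j)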

  7. AMERICAN ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Mihaela Onica

    2005-01-01

    Full Text Available The International Accounting Standards already contribute to the generation of better and more easily comparable financial information on an international level, thus supporting a more effective allocation of investment resources in the world. Under these circumstances, a consistent application of the standards on a global level becomes necessary. The financial statements are part of the financial reporting process. A complete set of financial statements usually includes a balance sheet, a profit and loss account, a statement of changes in financial position (which can be presented in various ways, for example as a statement of cash flows or of funds flows), as well as the notes and other explanatory statements and materials which are part of the financial statements.

  8. Payroll accounting

    OpenAIRE

    Hodžová, Markéta

    2009-01-01

    Abstract: The main topic of my thesis is payroll accounting. The work summarizes most of the areas related to this topic and the knowledge necessary for calculating the final determination of wages. At the beginning, the thesis covers specific chapters of the Labour Code which explain the start, changes, and termination of the employment contract, followed by a more detailed description of agreements performed outside the employment contract, and then working hours and mini...

  9. Measuring the safeguards value of material accountability

    International Nuclear Information System (INIS)

    Sicherman, A.

    1988-01-01

    Material accountability (MA) activities focus on providing after-the-fact indication of diversion or theft of special nuclear material (SNM). MA activities include maintaining records for tracking nuclear material and conducting periodic inventories and audits to ensure that loss has not occurred. This paper presents a value model concept for assessing the safeguards benefits of MA activities and for comparing these benefits to those provided by physical protection (PP) and material control (MC) components. The model considers various benefits of MA, which include: 1) providing information to assist in recovery of missing material, 2) providing assurance that physical protection and material control systems have been working, 3) defeating protracted theft attempts, and 4) properly resolving causes of and responding appropriately to anomalies of missing material and external alarms (e.g., hoax). Such a value model can aid decision-makers in allocating safeguards resources among PP, MC, and MA systems

  10. S1P transporter SPNS2 regulates proper postnatal retinal morphogenesis.

    Science.gov (United States)

    Fang, Chao; Bian, Ganlan; Ren, Pan; Xiang, Jie; Song, Jun; Yu, Caiyong; Zhang, Qian; Liu, Ling; Chen, Kun; Liu, Fangfang; Zhang, Kun; Wu, Chunfeng; Sun, Ruixia; Hu, Dan; Ju, Gong; Wang, Jian

    2018-02-08

    Spinster homolog 2 (SPNS2) is the membrane transporter of sphingosine-1-phosphate (S1P), and it participates in several physiologic processes by activating different S1P receptors (S1PRs). However, its functions in the nervous system remain largely unclear. We explored the important role of SPNS2 in the process of retinal morphogenesis using a spns2-deficient rat model. In the absence of the functional SPNS2 transporter, we observed progressively aggravating laminar disorganization of the epithelium at the postnatal stage of retinal development. Disrupted cell polarity, delayed cell-cycle exit of retinal progenitor cells, and insufficient migration of newborn neurons were proposed in this study as potential mechanisms accounting for this structural disorder. In addition, we analyzed the expression profiles of spns2 and s1prs, and proposed that SPNS2 regulated retinal morphogenesis by establishing the S1P level in the eye and activating S1PR3 signaling. These data indicate that SPNS2 is indispensable for normal retinal morphogenesis and provide new insights on the role of S1P in the developing retina using an established in vivo model.-Fang, C., Bian, G., Ren, P., Xiang, J., Song, J., Yu, C., Zhang, Q., Liu, L., Chen, K., Liu, F., Zhang, K., Wu, C., Sun, R., Hu, D., Ju, G., Wang, J. S1P transporter SPNS2 regulates proper postnatal retinal morphogenesis.

  11. Dorsoventral and Proximodistal Hippocampal Processing Account for the Influences of Sleep and Context on Memory (Re)consolidation: A Connectionist Model

    Directory of Open Access Journals (Sweden)

    Justin Lines

    2017-01-01

    Full Text Available The context in which learning occurs is sufficient to reconsolidate stored memories and neuronal reactivation may be crucial to memory consolidation during sleep. The mechanisms of context-dependent and sleep-dependent memory (re)consolidation are unknown but involve the hippocampus. We simulated memory (re)consolidation using a connectionist model of the hippocampus that explicitly accounted for its dorsoventral organization and for CA1 proximodistal processing. Replicating human and rodent (re)consolidation studies yielded the following results. (1) Semantic overlap between memory items and extraneous learning was necessary to explain experimental data and depended crucially on the recurrent networks of dorsal but not ventral CA3. (2) Stimulus-free, sleep-induced internal reactivations of memory patterns produced heterogeneous recruitment of memory items and protected memories from subsequent interference. These simulations further suggested that the decrease in memory resilience when subjects were not allowed to sleep following learning was primarily due to extraneous learning. (3) Partial exposure to the learning context during simulated sleep (i.e., targeted memory reactivation) uniformly increased memory item reactivation and enhanced subsequent recall. Altogether, these results show that the dorsoventral and proximodistal organization of the hippocampus may be important components of the neural mechanisms for context-based and sleep-based memory (re)consolidations.

  12. Analytical modeling of demagnetizing effect in magnetoelectric ferrite/PZT/ferrite trilayers taking into account a mechanical coupling

    Science.gov (United States)

    Loyau, V.; Aubert, A.; LoBue, M.; Mazaleyrat, F.

    2017-03-01

    In this paper, we investigate the demagnetizing effect in ferrite/PZT/ferrite magnetoelectric (ME) trilayer composites consisting of commercial PZT discs bonded by epoxy layers to Ni-Co-Zn ferrite discs made by a reactive Spark Plasma Sintering (SPS) technique. ME voltage coefficients (transversal mode) were measured on ferrite/PZT/ferrite trilayer ME samples with different thicknesses or phase volume ratio in order to highlight the influence of the magnetic field penetration governed by these geometrical parameters. Experimental ME coefficients and voltages were compared to analytical calculations using a quasi-static model. Theoretical demagnetizing factors of two magnetic discs that interact together in parallel magnetic structures were derived from an analytical calculation based on a superposition method. These factors were introduced in ME voltage calculations which take account of the demagnetizing effect. To fit the experimental results, a mechanical coupling factor was also introduced in the theoretical formula. This reflects the differential strain that exists in the ferrite and PZT layers due to shear effects near the edge of the ME samples and within the bonding epoxy layers. From this study, an optimization in magnitude of the ME voltage is obtained. Lastly, an analytical calculation of demagnetizing effect was conducted for layered ME composites containing higher numbers of alternated layers (n ≥ 5). The advantage of such a structure is then discussed.
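
    The demagnetizing correction at the core of the analytical model can be summarised as follows: the field actually seen by each ferrite layer is the applied field reduced by a term proportional to its magnetization, so the measured ME response follows the internal rather than the applied field. The relation below is the generic one; the interacting-disc demagnetizing factors N derived in the paper by the superposition method are not reproduced here.

        H_{\mathrm{int}} = H_{\mathrm{appl}} - N\,M(H_{\mathrm{int}}),
        \qquad
        \alpha_{\mathrm{ME,meas}} = \frac{dE}{dH_{\mathrm{appl}}}
        = \alpha_{\mathrm{ME,int}}\,\frac{dH_{\mathrm{int}}}{dH_{\mathrm{appl}}}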

  13. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    Science.gov (United States)

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Self-consistent modeling of induced magnetic field in Titan's atmosphere accounting for the generation of Schumann resonance

    Science.gov (United States)

    Béghin, Christian

    2015-02-01

    This model is worked out in the frame of physical mechanisms proposed in previous studies accounting for the generation and the observation of an atypical Schumann Resonance (SR) during the descent of the Huygens Probe in the Titan's atmosphere on 14 January 2005. While Titan is staying inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an Extremely Low Frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced at the interface region between the impacting magnetospheric plasma and Titan's ionosphere. The stronger induced magnetic field components are focused within field-aligned arcs-like structures hanging down the current sheets, with minimum amplitude of about 0.3 nT throughout the ramside hemisphere from the ionopause down to the Moon surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field in the atmosphere is thought to be allowed thanks to the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. However, there is a first cause of diffusion of the ELF magnetic components, probably due to feeding one, or eventually several SR eigenmodes. A second leakage source is ascribed to a system of eddy-Foucault currents assumed to be induced through the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found fully consistent with the measurements of the Huygens wave-field strength. Waiting for expected future in-situ exploration of Titan's lower atmosphere and the surface, the Huygens data are the only experimental means available to date for constraining the proposed model.

  15. Developing a Global Model of Accounting Education and Examining IES Compliance in Australia, Japan, and Sri Lanka

    Science.gov (United States)

    Watty, Kim; Sugahara, Satoshi; Abayadeera, Nadana; Perera, Luckmika

    2013-01-01

    The introduction of International Education Standards (IES) signals a clear move by the International Accounting Education Standards Board (IAESB) to ensure high quality standards in professional accounting education at a global level. This study investigated how IES are perceived and valued by member bodies and academics in three countries:…

  16. Emerging accounting trends accounting for leases.

    Science.gov (United States)

    Valletta, Robert; Huggins, Brian

    2010-12-01

    A new model for lease accounting can have a significant impact on hospitals and healthcare organizations. The new approach proposes a "right-of-use" model that involves complex estimates and significant administrative burden. Hospitals and health systems that draw heavily on lease arrangements should start preparing for the new approach now even though guidance and a final rule are not expected until mid-2011. This article highlights a number of considerations from the lessee point of view.

  17. Large-scale determinants of diversity across Spanish forest habitats: accounting for model uncertainty in compositional and structural indicators

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Quller, E.; Torras, O.; Alberdi, I.; Solana, J.; Saura, S.

    2011-07-01

    An integral understanding of forest biodiversity requires the exploration of the many aspects it comprises and of the numerous potential determinants of their distribution. The landscape ecological approach provides a necessary complement to conventional local studies that focus on individual plots or forest ownerships. However, most previous landscape studies used equally-sized cells as units of analysis to identify the factors affecting forest biodiversity distribution. Stratification of the analysis by habitats with a relatively homogeneous forest composition might be more adequate to capture the underlying patterns associated to the formation and development of a particular ensemble of interacting forest species. Here we used a landscape perspective in order to improve our understanding of the influence of large-scale explanatory factors on forest biodiversity indicators in Spanish habitats, covering a wide latitudinal and altitudinal range. We considered six forest biodiversity indicators estimated from more than 30,000 field plots in the Spanish national forest inventory, distributed in 213 forest habitats over 16 Spanish provinces. We explored biodiversity response to various environmental (climate and topography) and landscape configuration (fragmentation and shape complexity) variables through multiple linear regression models (built and assessed through the Akaike Information Criterion). In particular, we took into account the inherent model uncertainty when dealing with a complex and large set of variables, and considered different plausible models and their probability of being the best candidate for the observed data. Our results showed that compositional indicators (species richness and diversity) were mostly explained by environmental factors. Models for structural indicators (standing deadwood and stand complexity) had the worst fits and selection uncertainties, but did show significant associations with some configuration metrics. In general
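
    The model-uncertainty treatment mentioned above typically boils down to Akaike weights, i.e. the relative likelihood of each candidate model given the data; the sketch below uses invented AIC values purely to show the calculation.

        # Akaike weights for a set of candidate regression models (hypothetical AICs).
        import numpy as np

        aic = np.array([412.3, 413.1, 415.8, 420.4])
        delta = aic - aic.min()
        weights = np.exp(-0.5 * delta) / np.sum(np.exp(-0.5 * delta))
        print(np.round(weights, 3))   # probability of each model being the best in the set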

  18. Infrastrukturel Accountability

    DEFF Research Database (Denmark)

    Ubbesen, Morten Bonde

    How does one credibly account for something as diffuse as an entire nation's greenhouse gas emissions? This dissertation investigates that question in an ethnographic study of how Denmark's greenhouse gas inventory is compiled, reported and verified. The study draws on concepts and understandings from Science & Technology Studies, and with the notion of 'infrastructural accountability' it contributes new ways of understanding and thinking about the work by which highly specialised practices document and account for the quality of their work.

  19. The material control and accounting system model development in the Radiochemical plant of Siberian Chemical Combine (SChC)

    International Nuclear Information System (INIS)

    Kozyrev, A.S.; Purygin, V.Ya.; Skuratov, V.A.; Lapotkov, A.A.

    1999-01-01

    The nuclear material (NM) control and accounting computerized system is designed to automatically account for NM reception, movement and storage at the Radiochemical Plant. The objective of this system development is to provide constant surveillance over process material movement, to improve material accountability and administrative work, to upgrade the plant protection against possible NM theft and diversion, to rule out accidental operator errors, and to improve the timeliness and significance (reliability) of information about nuclear materials. The NM control and accounting system at the Radiochemical Plant should be based on a computerized network. It must keep track of all material movements in each Material Balance Area: material receipt from other plants; local material movement within the plant; material shipment to other plants; and generation of the required documents on NM movements and accounting [ru]

  20. Molecular weight/branching distribution modeling of low-density-polyethylene accounting for topological scission and combination termination in continuous stirred tank reactor

    NARCIS (Netherlands)

    Yaghini, N.; Iedema, P.D.

    2014-01-01

    We present a comprehensive model to predict the molecular weight distribution (MWD) and branching distribution of low-density polyethylene (ldPE) for a free radical polymerization system in a continuous stirred tank reactor (CSTR). The model accounts for branching, by branching moment or

  1. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    Science.gov (United States)

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; Davis, J. A.

    2018-01-01

    The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect). A series of U(VI) - Na-montmorillonite batch adsorption experiments was conducted as a function of pH, with variable U(VI), Ca, and dissolved carbonate concentrations. Based on the experimental data, a new type of surface complexation model (SCM) was developed for montmorillonite, that specifically accounts for the spillover effect using the edge surface speciation model by Tournassat et al. (2016a). The SCM allows for a prediction of U(VI) adsorption under varying chemical conditions with a minimum number of fitting parameters, not only for our own experimental results, but also for a number of published data sets. The model agreed well with many of these datasets without introducing a second site type or including the formation of ternary U(VI)-carbonato surface complexes. The model predictions were greatly impacted by utilizing analytical measurements of dissolved inorganic carbon (DIC) concentrations in individual sample solutions rather than assuming solution equilibration with a specific partial pressure of CO2, even when the gas phase was

  2. Comparison of Methods of Teaching Children Proper Lifting ...

    African Journals Online (AJOL)

    Objective: This study was designed to determine the effects of three teaching methods on children's ability to demonstrate and recall their mastery of proper lifting techniques. Method: Ninety-three primary five and six public school children who had no knowledge of proper lifting technique were assigned into three equal ...

  3. 5 CFR 2635.205 - Proper disposition of prohibited gifts.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel: Proper disposition of prohibited gifts... STANDARDS OF ETHICAL CONDUCT FOR EMPLOYEES OF THE EXECUTIVE BRANCH, Gifts From Outside Sources, § 2635.205 Proper disposition of prohibited gifts. (a) An employee who has received a gift that cannot be accepted...

  4. The proper name as starting point for basic reading skills

    NARCIS (Netherlands)

    Both-De Vries, Anna C.; Bus, Adriana G

    Does alphabetic-phonetic writing start with the proper name and how does the name affect reading and writing skills? Sixty 4- to 5½-year-old children from middle SES families with Dutch as their first language wrote their proper name and named letters. For each child we created unique sets of

  5. Energy accountancy

    International Nuclear Information System (INIS)

    Boer, G.A. de.

    1981-01-01

    G.A. de Boer reacts to recently published criticism of his contribution to a report entitled 'Commentaar op het boek 'Tussen Kernenergie en Kolen. Een Analyse' van ir. J.W. Storm van Leeuwen' (Commentary on the book 'Nuclear Energy versus Coal. An Analysis by ir. J.W. Storm van Leeuwen), published by the Dutch Ministry of Economic Affairs. The contribution (Appendix B) deals with energy analyses. He justifies his arguments for using energy accountancy for assessing different methods of producing electricity, and explains that it is simply an alternative to purely economic methods. The energy conversion yield (ratio of energy produced to energy required) is tabulated for different sources. De Boer emphasises that his article purposely discusses among other things, definitions, forms of energy, the limits of the systems, the conversion of money into energy and the definition of the energy yield at length, in order to prevent misunderstandings. (C.F.)

  6. Design Accountability

    DEFF Research Database (Denmark)

    Koskinen, Ilpo; Krogh, Peter

    2015-01-01

    When design research builds on design practice, it may contribute to both theory and practice of design in ways richer than research that treats design as a topic. Such research, however, faces several tensions that it has to negotiate successfully in order not to lose its character as research. This paper looks at constructive design research, which takes the entanglement of theory and practice as its hallmark, and uses it as a test case in exploring how design researchers can work with theory, methodology, and practice without losing their identity as design researchers. The crux of practice-based design research is that where classical research is interested in singling out a particular aspect and exploring it in depth, design practice is characterized by balancing numerous concerns in a heterogeneous and occasionally paradoxical product. It is on this basis the notion of design accountability

  7. Determination of proper motions in the Pleiades cluster

    Science.gov (United States)

    Schilbach, E.

    1991-04-01

    For 458 stars in the Pleiades field from the catalog of Eichhorn et al. (1970) proper motions were derived on Tautenburg and CERGA Schmidt telescope plates measured with the automated measuring machine MAMA in Paris. The catalog positions were considered as first epoch coordinates with an epoch difference of ca. 33 years to the observations. The results show good coincidence of proper motions derived with both Schmidt telescopes within the error bars. Comparison with proper motions determined by Vasilevskis et al. (1979) displays some significant differences but no systematic effects depending on plate coordinates or magnitudes could be found. An accuracy of 0.3 arcsec/100a for one proper motion component was estimated. According to the criterion of common proper motion 34 new cluster members were identified.

  8. The Southern Proper Motion Program. IV. The SPM4 Catalog

    Science.gov (United States)

    Girard, Terrence M.; van Altena, William F.; Zacharias, Norbert; Vieira, Katherine; Casetti-Dinescu, Dana I.; Castillo, Danilo; Herrera, David; Lee, Young Sun; Beers, Timothy C.; Monet, David G.; López, Carlos E.

    2011-07-01

    We present the fourth installment of the Yale/San Juan Southern Proper Motion Catalog, SPM4. The SPM4 contains absolute proper motions, celestial coordinates, and B, V photometry for over 103 million stars and galaxies between the south celestial pole and -20° declination. The catalog is roughly complete to V = 17.5 and is based on photographic and CCD observations taken with the Yale Southern Observatory's double astrograph at Cesco Observatory in El Leoncito, Argentina. The proper-motion precision, for well-measured stars, is estimated to be 2-3 mas yr-1, depending on the type of second-epoch material. At the bright end, proper motions are on the International Celestial Reference System by way of Hipparcos Catalog stars, while the faint end is anchored to the inertial system using external galaxies. Systematic uncertainties in the absolute proper motions are on the order of 1 mas yr-1.

  9. Equation model on continuing professional development and career advancement: Evidences among certified public accountants in Davao City, Philippines

    Directory of Open Access Journals (Sweden)

    Joel B. Tan

    2016-12-01

    Full Text Available This paper determined the effectiveness of Continuing Professional Development (CPD) on the career advancement of Certified Public Accountants (CPAs) when analysed by profile. Respondents were 100 CPAs from Davao City, fairly distributed as to age, sex, sector, years of working, and credit units earned. The study covered the period 2010-2013. The paper employed a descriptive-quantitative design and used a validated, self-constructed questionnaire as instrument. The sampling technique employed was stratified sampling. Data were gathered through survey and personal interview. The statistical treatments used were frequency, mean, ANOVA, and logistic regression. The critical alpha was set at the .05 level of significance. Results revealed that the majority of active CPD participants were young, female CPAs. The extent of participation was found to be minimal. The overall level of contribution of CPD to career advancement was held negligible, although CPD showed the strongest impact on improvement of financial income and the weakest on enhancement of personal competencies. Further, the study found that no significant difference exists between the level of CPD contribution and the career advancement of CPAs when grouped according to profile. This suggests that demographics such as age, sex, sector, and years of working have no statistical impact on the level of CPD contribution. The predictor variable, CPD credit units, showed a statistical influence on career advancement. The strength of association is given by the model CPD = -2.67 + 0.43 x units_earnedCPD. Thus, CPAs must capitalize on best-practice participation and meaningful engagements with CPD as a springboard to personal and professional success.

  10. Accounting for multimorbidity in pay for performance: a modelling study using UK Quality and Outcomes Framework data.

    Science.gov (United States)

    Ruscitto, Andrea; Mercer, Stewart W; Morales, Daniel; Guthrie, Bruce

    2016-08-01

    The UK Quality and Outcomes Framework (QOF) offers financial incentives to deliver high-quality care for individual diseases, but the single-disease focus takes no account of multimorbidity. To examine variation in QOF payments for two indicators incentivised in ≥1 disease domain. Modelling study using cross-sectional data from 314 general practices in Scotland. Maximum payments that practices could receive under existing financial incentives were calculated for blood pressure (BP) control and influenza immunisation according to the number of coexisting clinical conditions. Payments were recalculated assuming a single new indicator. Payment varied by condition (£4.71-£11.08 for one BP control and £2.09-£5.78 for one influenza immunisation). Practices earned more for delivering the same action in patients with multimorbidity: in patients with 2, 3, and ≥4 conditions mean payments were £13.95, £21.92, and £29.72 for BP control, and £7.48, £11.21, and £15.14 for influenza immunisation, respectively. Practices in deprived areas had more multiple incentivised patients. When recalculated so that each incentivised action was only paid for once, all practices received less for BP control: affluent practices received more and deprived practices received less for influenza immunisation. For patients with single conditions, existing QOF payment methods have more than twofold variation in payment for delivering the same process. Multiple payments were common in patients with multimorbidity. A payment method is required that ensures fairness of rewards while maintaining adequate funding for practices based on actual workload. © British Journal of General Practice 2016.
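
    The difference between the existing rules and the recalculated pay-once-per-action scheme can be shown with a toy example for blood-pressure control in a patient with three incentivised conditions; the per-condition amounts are taken loosely from the range reported above and are otherwise illustrative.

        # Toy comparison of multiple-payment vs single-payment rules for one action.
        bp_payment_by_condition = {"CHD": 11.08, "stroke": 4.71, "diabetes": 7.50}  # pounds; diabetes value hypothetical

        multiple_payment = sum(bp_payment_by_condition.values())   # paid again for each coexisting condition
        pay_once = max(bp_payment_by_condition.values())           # paid once for the single action

        print(f"existing rules: £{multiple_payment:.2f}")
        print(f"pay-once rule:  £{pay_once:.2f}")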

  11. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    NARCIS (Netherlands)

    Degeling, Koen; Ijzerman, Maarten J.; Koopman, Miriam; Koffijberg, Hendrik

    2017-01-01

    Background: Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive

  13. Democratic Model of Public Policy Accountability. Case Study on Implementation of Street Vendors Empowerment Policy in Makassar City

    Directory of Open Access Journals (Sweden)

    Rulinawaty Kasmadsi

    2015-08-01

    Full Text Available Policy accountability is a form of manifestation of public officials' responsibility to the people. One form of policy accountability discussed here is street vendors policy accountability, because street vendors are a group of citizens who carry out economic activities in public spaces. Despite the existence of this policy, however, the number of street vendors in Makassar City increases from year to year. Therefore, this study seeks to uncover and explain democratic policy accountability through the street vendors' responses and expectations to the implementation of the street vendors empowerment policy in Makassar City, and to uncover and explain democratic policy accountability through the stakeholders' responses and expectations to the implementation of that policy. To achieve these objectives, the study uses democracy theory, in which this theory focuses on togetherness in discussing solutions to the various problems of street vendors and in the policy implementation as well. This study used a qualitative design and a case study strategy. Data collection techniques used were observation, interview, and documentation. Data were analyzed with case descriptions of their settings. The results of this study pointed out that the interests and needs of the street vendors are not met through the empowerment policy. This is caused by the absence of an accountability forum as a place of togetherness for all street vendors empowerment stakeholders. The street vendors empowerment policy in Makassar City is designed based on a top-down approach, so the vendors are considered as objects which must accept all government programs aimed at them.

  14. Proper laboratory notebook practices: protecting your intellectual property.

    Science.gov (United States)

    Nickla, Jason T; Boehm, Matthew B

    2011-03-01

    A laboratory notebook contains a wealth of knowledge that can be critical for establishing evidence in support of intellectual property rights and for refuting claims of research misconduct. The proper type, organization, use, maintenance, and storage of laboratory notebooks should be a priority for everyone at research institutions. Failure to properly document research activities can lead to serious problems, including the loss of valuable patent rights. Consequences of improper laboratory notebook practices can be harsh; numerous examples are described in court cases and journal articles, indicating a need for research institutions to develop strict policies on the proper use and storage of research documentation.

  15. The Perception of the Accounting Students on the Image of the Accountant and the Accounting Profession

    Directory of Open Access Journals (Sweden)

    Lucian Cernuşca

    2015-01-01

    Full Text Available This study aims to present the perception of accounting students on the image of the accountant and the accounting profession, thus contributing to a better understanding of the option for the field of accounting and the motivations for choosing this profession. The paper consists of the following parts: introduction, literature review, research methodology, research findings, conclusions, and bibliography. The accounting profession must be aligned with the current conditions the Romanian accounting system is going through: harmonization with the IFRS and European regulations, the development of information technologies, and the transition to the digital era. The role of the accountant is changing from that of a simple operator of figures to a modern one, who is part of the managerial team and provides strategic and financial advice and effective solutions for the proper functioning of the organization; the modern stereotype involves creativity in accounting activities. The research aims at understanding the role of the accounting profession as a social identity and as a social phenomenon, and the implications for academia and professional bodies.

  16. Determination of a cohesive law for delamination modelling - Accounting for variation in crack opening and stress state across the test specimen width

    DEFF Research Database (Denmark)

    Joki, R. K.; Grytten, F.; Hayman, Brian

    2016-01-01

    The cohesive law for Mode I delamination in glass fibre Non-Crimped Fabric reinforced vinylester is determined for use in finite element models. The cohesive law is derived from a delamination test based on DCB specimens loaded with pure bending moments, taking into account the presence of large … The procedure is: 1) derive the bridging law by differentiating the fracture resistance with respect to opening displacement at the initial location of the crack tip, measured at the specimen edge; 2) extend the bridging law to a cohesive law by accounting for crack tip fracture energy; 3) fine-tune the cohesive law through an iterative modelling approach so that the changing state of stress and deformation across the width of the test specimen is taken into account. The changing state of stress and deformation across the specimen width is shown to be significant for small openings (small fracture process zone size). This will also be important for the initial part …
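    As a rough illustration of step 1) above, the following sketch (Python with NumPy; the fracture-resistance curve and all numerical values are hypothetical, not the authors' data) obtains a bridging law by numerically differentiating a fracture resistance curve with respect to the opening displacement, keeping the crack-tip fracture energy as a separate term in the spirit of step 2).

    # Minimal sketch (hypothetical data, assumed functional form): derive a
    # bridging law sigma(delta) = dJ_R/ddelta by numerically differentiating
    # measured fracture resistance J_R with respect to the end-opening
    # displacement delta at the initial crack tip position.
    import numpy as np

    # Hypothetical DCB measurements: opening displacement [mm] and fracture
    # resistance J_R [kJ/m^2], rising from initiation toward a steady-state value.
    delta = np.linspace(0.0, 3.0, 61)
    J0, J_ss, delta_0 = 0.3, 1.2, 1.0   # assumed crack-tip energy, steady-state J, length scale
    J_R = J0 + (J_ss - J0) * (1.0 - np.exp(-delta / delta_0))

    # Bridging law: sigma(delta) = dJ_R/ddelta (central differences).
    sigma_bridging = np.gradient(J_R, delta)   # [MPa] when J is in kJ/m^2 and delta in mm

    # The crack-tip fracture energy J0 is kept separate; extending the bridging
    # law to a full cohesive law (step 2) assigns J0 to a high-stress,
    # small-opening part whose detailed shape is model-specific.
    print(f"Peak bridging stress ~ {sigma_bridging.max():.2f} MPa, "
          f"work of the bridging law ~ {np.trapz(sigma_bridging, delta):.2f} kJ/m^2")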

  17. Low-level and narm radioactive wastes. Model documentation: accounting model for PRESTO-EPA-POP, PRESTO-EPA-DEEP, and PRESTO-EPA-BRC. Methodology and users manual. Final report

    International Nuclear Information System (INIS)

    Rogers, V.; Hung, C.

    1987-12-01

    The accounting model was used as a utility model for assessing the cumulative health effects on the general population residing in the downstream regional water basin as a result of the disposal of LLW when a unit response analysis method is used. The utility model is specifically designed to assess the cumulative population health effects in conjunction with the PRESTO-EPA-POP, PRESTO-EPA-BRC, or PRESTO-EPA-DEEP model, simply for the purpose of reducing the cost of analysis. The assessment of the cumulative population health effects may therefore also be conducted with one of the above models alone, without the aid of this accounting model.

  18. Cataclysmic variables in the SUPERBLINK proper motion survey

    Energy Technology Data Exchange (ETDEWEB)

    Skinner, Julie N.; Thorstensen, John R. [Department of Physics and Astronomy, 6127 Wilder Laboratory, Dartmouth College, Hanover, NH 03755-3528 (United States); Lépine, Sébastien, E-mail: jns@dartmouth.edu [Department of Physics and Astronomy, Georgia State University, 25 Park Place NE, Atlanta, GA 30303 (United States)

    2014-12-01

    We have discovered a new high proper motion cataclysmic variable (CV) in the SUPERBLINK proper motion survey, which is sensitive to stars with proper motions greater than 40 mas yr⁻¹. This CV was selected for follow-up observations as part of a larger search for CVs chosen on the basis of their proper motions and their near-UV−V and V−K_s colors. We present spectroscopic observations from the 2.4 m Hiltner Telescope at MDM Observatory. The new CV's orbital period is near 96 minutes, its spectrum shows the double-peaked Balmer emission lines characteristic of quiescent dwarf novae, and its V magnitude is near 18.2. Additionally, we present a full list of known CVs in the SUPERBLINK catalog.
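    As an illustration of this kind of proper-motion plus colour selection, the sketch below (Python; the field names, colour cuts, and catalogue entries are assumptions for illustration, not the survey's actual criteria beyond the 40 mas yr⁻¹ sensitivity floor) flags candidate CVs in a small hypothetical catalogue.

    # Minimal sketch: select CV candidates by combining a proper-motion floor
    # with UV-optical and optical-IR colour cuts (all thresholds assumed).
    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str
        pm_mas_yr: float    # total proper motion [mas/yr]
        nuv_minus_v: float  # near-UV - V colour [mag]
        v_minus_ks: float   # V - K_s colour [mag]

    def is_cv_candidate(s: Source,
                        pm_min: float = 40.0,
                        nuv_v_max: float = 2.5,
                        v_ks_max: float = 3.0) -> bool:
        """Flag blue, UV-bright, high proper motion sources as CV candidates."""
        return (s.pm_mas_yr > pm_min
                and s.nuv_minus_v < nuv_v_max
                and s.v_minus_ks < v_ks_max)

    catalogue = [
        Source("J0001+0001", 55.0, 1.2, 2.1),   # hypothetical entries
        Source("J0002-0030", 42.0, 4.0, 4.5),
    ]
    candidates = [s.name for s in catalogue if is_cv_candidate(s)]
    print(candidates)  # -> ['J0001+0001']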

  19. Proper Use of Audio-Visual Aids: Essential for Educators.

    Science.gov (United States)

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  20. Proper time axis of a closed relativistic system

    International Nuclear Information System (INIS)

    Chernikov, N.A.; Fadeev, N.G.; Shavokhina, N.S.

    1997-01-01

    The definition of the proper time axis of a closed relativistic system of colliding particles is given, and the solution of the proper time axis problem is presented. If the light velocity c is set equal to the imaginary unit i, then in the case of plane motion of the system the proper time axis problem turns out to be equivalent to the problem, known in engineering mechanics, of reducing an arbitrary system of forces applied to a rigid body to a dynamic screw. In the general case, with c = i, the proper time axis problem is equivalent to the reduction to a dynamic screw of a system of forces applied to a rigid body in four-dimensional Euclidean space.
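    For readers unfamiliar with the mechanics analogue invoked here, the sketch below (Python with NumPy; the forces and application points are arbitrary illustrative values) carries out the classical three-dimensional reduction of a force system to a dynamic screw (wrench), giving the resultant force, a point on the central axis, and the pitch. The paper's four-dimensional construction with c = i is not reproduced.

    # Minimal sketch: reduce a system of forces on a rigid body to a dynamic
    # screw (wrench) - resultant force, central axis, and pitch.
    import numpy as np

    forces = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])   # hypothetical forces
    points = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0]])   # their application points

    F = forces.sum(axis=0)                                   # resultant force
    M = np.cross(points, forces).sum(axis=0)                 # resultant moment about the origin
    F2 = np.dot(F, F)

    r0 = np.cross(F, M) / F2     # a point on the central (screw) axis
    pitch = np.dot(F, M) / F2    # moment along the axis per unit force
    M_axis = pitch * F           # moment component parallel to the resultant

    print("axis point:", r0, "direction:", F / np.sqrt(F2), "pitch:", pitch)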