WorldWideScience

Sample records for models rarely account

  1. How to model rare events?

    Science.gov (United States)

    Grieser, J.; Jewson, S.

    2009-04-01

    The risk of extreme meteorological events is often estimated using extreme value theory (EVT). However, EVT cannot be expected to work well in all cases. Two examples are (a) very rare events which are not adequately captured in short observational records and (b) nonstationary situations where observations alone cannot provide risk estimates for the future. For these reasons Risk Management Solutions (RMS) develops models of extreme weather risks that are based on a combination of physics and statistics, rather than statistics alone. One example is the RMS TC-Rain model. In addition to wind and storm surge, tropical cyclones (TCs) can lead to torrential rain that may cause widespread flooding and landslides. The most prominent recent historical example is Tropical Storm Allison (2001), which inundated Houston and caused roughly US$5bn of damage. Since Allison was only a tropical storm, rather than a hurricane, no damage due to wind and storm surge was expected and no serious warnings were issued. RMS has now developed a TC-Rain Model based on a combination of observations, experience and physical parameterizations. It is an example of how the use of physical principles helps to estimate the risk of rare and devastating events. Based on an event set of TC tracks, it allows the calculation of several hundred thousand TC rain footprints, which can then be used to estimate flood levels and their return periods via a complex dynamical hydrological model. The TC-Rain Model takes a number of physical mechanisms into account, including (a) the effect of surface roughness change at landfall, (b) orographic rain enhancement, (c) drift of rain due to strong horizontal winds, (d) asymmetry, (e) outer rain bands and (f) the dependence on sea surface temperature. It is calibrated using 35 US-landfalling tropical cyclones from 1998 to 2008, and verified against all US-landfalling TCs since 1948.
The model is not designed as a forecasting tool, but rather a
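
As a point of contrast with the physics-statistics blend described above, the purely statistical EVT baseline can be sketched as a peaks-over-threshold fit. Everything below (the synthetic rainfall record, the threshold choice, the exceedance rate) is an invented illustration, not part of the RMS model:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# Synthetic 30-year daily rainfall record (stand-in for real observations).
daily_rain = rng.gamma(shape=2.0, scale=10.0, size=30 * 365)

threshold = np.quantile(daily_rain, 0.99)          # peaks-over-threshold
excesses = daily_rain[daily_rain > threshold] - threshold

# Fit a generalized Pareto distribution to the excesses (location fixed at 0).
shape, loc, scale = genpareto.fit(excesses, floc=0.0)

def return_level(T_years, exceedances_per_year=3.65):
    """Rainfall level exceeded on average once every T_years."""
    p = 1.0 / (T_years * exceedances_per_year)     # exceedance prob. per peak
    return threshold + genpareto.ppf(1.0 - p, shape, loc=0.0, scale=scale)

rl100, rl1000 = return_level(100), return_level(1000)
```

The 1000-year level extrapolates far beyond the 30-year record, which is exactly the regime the abstract argues pure statistics handles poorly.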

  2. Mouse Models of Rare Craniofacial Disorders.

    Science.gov (United States)

    Achilleos, Annita; Trainor, Paul A

    2015-01-01

    A rare disease is defined as a condition that affects fewer than 1 in 2000 individuals. Currently more than 7000 rare diseases have been documented, and most are thought to be of genetic origin. Rare diseases primarily affect children, and congenital craniofacial syndromes and disorders constitute a significant proportion of rare diseases, with over 700 having been described to date. Modeling craniofacial disorders in animal models has been instrumental in uncovering the etiology and pathogenesis of numerous conditions and in some cases has even led to potential therapeutic avenues for their prevention. In this chapter, we focus primarily on two general classes of rare disorders, ribosomopathies and ciliopathies, and the surprising finding that the disruption of fundamental, global processes can result in tissue-specific craniofacial defects. In addition, we discuss recent advances in understanding the pathogenesis of an extremely rare and specific craniofacial condition known as syngnathia, based on the first mouse models for this condition. Approximately 1% of all babies are born with a minor or major developmental anomaly, and individuals suffering from rare diseases deserve the same quality of treatment, care and attention to their disease as other patients. © 2015 Elsevier Inc. All rights reserved.

  3. Modelling in Accounting. Theoretical and Practical Dimensions

    Directory of Open Access Journals (Sweden)

    Teresa Szot-Gabryś

    2010-10-01

    Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for the measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and its ability to adapt to the information needs of its recipients. One of the main currents in the development of accounting theory and practice is to extend economic measurement to areas which have hitherto not been covered by any accounting system (for example, small businesses, agricultural farms, or human capital), which requires the development of an appropriate theoretical and practical model. The article illustrates the issue of modelling in accounting based on the example of an accounting model developed for small businesses, i.e. economic entities which are not obliged by law to keep accounting records.

  4. Implementing a trustworthy cost-accounting model.

    Science.gov (United States)

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  5. Questioning Stakeholder Legitimacy: A Philanthropic Accountability Model.

    Science.gov (United States)

    Kraeger, Patsy; Robichau, Robbie

    2017-01-01

    Philanthropic organizations contribute to important work that solves complex problems to strengthen communities. Many of these organizations are moving toward engaging in public policy work, in addition to funding programs. This paper raises questions of legitimacy for foundations, as well as issues of transparency and accountability in a pluralistic democracy. Measures of civic health also inform how philanthropic organizations can be accountable to stakeholders. We propose a holistic model for philanthropic accountability that combines elements of transparency and performance accountability, as well as practices associated with the American pluralistic model for democratic accountability. We argue that philanthropic institutions should seek stakeholder and public input when shaping any public policy agenda. This paper suggests a new paradigm, called philanthropic accountability, that can be used for legitimacy and democratic governance of private foundations engaged in policy work. The Philanthropic Accountability Model can be empirically tested and used as a governance tool.

  6. Understanding financial crisis through accounting models

    NARCIS (Netherlands)

    Bezemer, D.J.

    2010-01-01

    This paper presents evidence that accounting (or flow-of-funds) macroeconomic models helped anticipate the credit crisis and economic recession. Equilibrium models ubiquitous in mainstream policy and research did not. This study traces the intellectual pedigrees of the accounting approach as an

  7. Modelling of functional systems of managerial accounting

    Directory of Open Access Journals (Sweden)

    O.V. Fomina

    2017-12-01

    The modern stage of managerial accounting development takes place under the powerful influence of managerial innovations. The article aims to develop an integration model of budgeting and the system of balanced indicators (balanced scorecard) within the managerial accounting system, in order to increase the relevance of information for managerial decision making by managers at different levels of management. As a result of the study, the author proposes a highly pragmatic integration model of budgeting and the balanced scorecard in the managerial accounting system, realized through a system for gathering, consolidating, analysing and interpreting financial and non-financial information. It increases the relevance of information for managerial decisions by coordinating and purposefully orienting both the strategic and operative resources of an enterprise. An effective integration of the system components makes it possible to allocate limited resources rationally, taking into account prospective purposes and strategic initiatives, to carry

  8. Modeling habitat dynamics accounting for possible misclassification

    Science.gov (United States)

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology, as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimating land cover change is more challenging because classification error can be confounded with true change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities, using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel), and when true and observed states are obtained at one level of resolution but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling-up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics, and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
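
The turnover bias described above, and a simple confusion-matrix adjustment, can be sketched in a two-state simulation. The transition matrix, confusion matrix and sample size are invented for illustration; this matrix-inversion correction is a simpler stand-in for the authors' state-uncertainty models:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.95, 0.05],    # true habitat transition probabilities
              [0.10, 0.90]])
C = np.array([[0.90, 0.10],    # C[i, j] = Pr(observe j | true state i)
              [0.15, 0.85]])

n = 200_000
s1 = rng.integers(0, 2, size=n)                   # true state at time 1
s2 = (rng.random(n) < P[s1, 1]).astype(int)       # true state at time 2
o1 = (rng.random(n) < C[s1, 1]).astype(int)       # misclassified maps
o2 = (rng.random(n) < C[s2, 1]).astype(int)

# Naive estimate: tabulate observed transitions directly.
J_obs = np.zeros((2, 2))
np.add.at(J_obs, (o1, o2), 1.0)
J_obs /= n
naive = J_obs / J_obs.sum(axis=1, keepdims=True)

# Adjusted estimate: since J_obs ~ C.T @ J_true @ C, invert the
# confusion matrix at both time steps, then renormalize rows.
J_hat = np.linalg.inv(C.T) @ J_obs @ np.linalg.inv(C)
corrected = J_hat / J_hat.sum(axis=1, keepdims=True)
```

The naive off-diagonal entries come out several times larger than the true turnover rate, while the adjusted estimate recovers it closely.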

  9. Media Accountability Systems: Models, proposals and outlooks

    Directory of Open Access Journals (Sweden)

    Fernando O. Paulino

    2007-06-01

    This paper analyzes one of the basic actions of SOS-Imprensa, the mechanism to assure Media Accountability with the goal of proposing a synthesis of models for the Brazilian reality. The article aims to address the possibilities of creating and improving mechanisms to stimulate the democratic press process and to mark out and assure freedom of speech and personal rights with respect to the media. Based on the Press Social Responsibility Theory, the hypothesis is that the experiences analyzed (Communication Council, Press Council, Ombudsman and Readers Council are alternatives for accountability, mediation and arbitration, seeking visibility, trust and public support in favor of fairer media.

  10. The OntoREA Accounting Model: Ontology-based Modeling of the Accounting Domain

    Directory of Open Access Journals (Sweden)

    Christian Fischer-Pauzenberger

    2017-07-01

    McCarthy developed a framework for modeling the economic rationale of different business transactions along the enterprise value chain, described in his seminal article “The REA Accounting Model – A Generalized Framework for Accounting Systems in a Shared Data Environment”. Originally, the REA accounting model was specified in the entity-relationship (ER) language. Later on, other languages – especially generic data models and UML class models – were used. Recently, the OntoUML language was developed by Guizzardi and used by Gailly et al. for a metaphysical reengineering of the REA enterprise ontology. Although the REA accounting model originally addressed the accounting domain, it is most successfully applied as a reference framework for the conceptual modeling of enterprise systems. The primary research objective of this article is to anchor REA-based models more deeply in the accounting domain. In order to achieve this objective, essential primitives of the REA model are identified and conceptualized in the OntoUML language within the Asset-Liability-Equity (ALE) context of the traditional ALE accounting domain.
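
The REA primitives named above (resources, events, agents, and the duality that pairs give-events with take-events) can be rendered as a minimal sketch. This is an illustrative Python data model, not the article's OntoUML conceptualization, and all names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:            # something of economic value (cash, goods, ...)
    name: str

@dataclass
class Agent:               # a party participating in an exchange
    name: str

@dataclass
class EconomicEvent:       # an increment or decrement of a resource
    resource: Resource
    amount: float          # positive = inflow, negative = outflow
    provider: Agent
    receiver: Agent

@dataclass
class Duality:             # REA duality: give-events paired with take-events
    give: list = field(default_factory=list)
    take: list = field(default_factory=list)

    def balanced(self) -> bool:
        # A completed exchange gives something and receives something back.
        return bool(self.give) and bool(self.take)

cash, goods = Resource("cash"), Resource("goods")
us, them = Agent("enterprise"), Agent("customer")
sale = Duality(
    give=[EconomicEvent(goods, -100.0, us, them)],
    take=[EconomicEvent(cash, +120.0, them, us)],
)
```

The duality check is the structural core that ALE-style accounting then maps onto debits and credits.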

  11. Fusion of expertise among accounting faculty. Towards an expertise model for academia in accounting.

    NARCIS (Netherlands)

    Njoku, Jonathan C.; van der Heijden, Beatrice; Inanga, Eno L.

    2010-01-01

    This paper aims to portray an accounting faculty expert. It is argued that neither the academic nor the professional orientation alone appears adequate in developing accounting faculty expertise. The accounting faculty expert is supposed to develop into a so-called ‘flexpert’ (Van der Heijden, 2003)

  12. The two bands model for the high temperature conductivity of the binary rare earth alloys

    International Nuclear Information System (INIS)

    Borgiel, W.

    1983-09-01

    The formula for the high-temperature spin-disorder resistivity of concentrated A_(1-x)B_(x)C alloys, where A and B are rare earth (RE) elements, is determined on the basis of a two-band model and the coherent potential approximation (CPA). The conductivity carried by the 5d bands originating from the RE compounds has been taken into account.

  13. A simulation model for material accounting systems

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.

    1987-01-01

    A general-purpose model that was developed to simulate the operation of a chemical processing facility for nuclear materials has been extended to describe material measurement and accounting procedures as well. The model now provides descriptors for material balance areas, a large class of measurement instrument types and their associated measurement errors for various classes of materials, the measurement instruments themselves with their individual calibration schedules, and material balance closures. Delayed receipt of measurement results (as for off-line analytical chemistry assay), with interim use of a provisional measurement value, can be accurately represented. The simulation model can be used to estimate inventory difference variances for processing areas that do not operate at steady state, to evaluate the timeliness of measurement information, to determine process impacts of measurement requirements, and to evaluate the effectiveness of diversion-detection algorithms. Such information is usually difficult to obtain by other means. Use of the measurement simulation model is illustrated by applying it to estimate inventory difference variances for two material balance area structures of a fictitious nuclear material processing line
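
The inventory-difference variance estimation mentioned above can be sketched for a single balance period. The balance components, measurement error levels and Monte Carlo size are invented; the point is only that independent measurement errors add in variance across the material balance:

```python
import numpy as np

rng = np.random.default_rng(2)

# True (unknown) values for one balance period, in kg (figures invented).
beginning, receipts, removals, ending = 40.0, 100.0, 95.0, 45.0
rsd = 0.005, 0.002, 0.002, 0.005   # relative std. dev. of each measurement

def measure(true_value, rel_sd, n=100_000):
    """Simulate n independent measurements of one balance term."""
    return true_value + rng.normal(0.0, rel_sd * true_value, size=n)

# Inventory difference: beginning + receipts - removals - ending.
ID = (measure(beginning, rsd[0]) + measure(receipts, rsd[1])
      - measure(removals, rsd[2]) - measure(ending, rsd[3]))

# Analytic check: independent measurement errors, so variances add.
terms = [(beginning, rsd[0]), (receipts, rsd[1]),
         (removals, rsd[2]), (ending, rsd[3])]
analytic_var = sum((t * r) ** 2 for t, r in terms)
```

A non-steady-state process, as in the simulation model described, would replace these fixed true values with time-varying ones, but the variance bookkeeping is the same.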

  14. Rare top quark decays in extended models

    International Nuclear Information System (INIS)

    Gaitan, R.; Miranda, O. G.; Cabral-Rosetti, L. G.

    2006-01-01

    Flavor changing neutral current (FCNC) decays t → H0 + c, t → Z + c, and H0 → t + c-bar are discussed in the context of Alternative Left-Right symmetric Models (ALRM) with extra isosinglet heavy fermions, where FCNC decays may take place at tree level and are only suppressed by the mixing between ordinary top and charm quarks, which is poorly constrained by current experimental values. The non-manifest case is also briefly discussed

  16. Display of the information model accounting system

    OpenAIRE

    Matija Varga

    2011-01-01

    This paper presents the accounting information system in public companies, business technology matrix and data flow diagram. The paper describes the purpose and goals of the accounting process, matrix sub-process and data class. Data flow in the accounting process and the so-called general ledger module are described in detail. Activities of the financial statements and determining the financial statements of the companies are mentioned as well. It is stated how the general ledger...

  17. DMFCA Model as a Possible Way to Detect Creative Accounting and Accounting Fraud in an Enterprise

    Directory of Open Access Journals (Sweden)

    Jindřiška Kouřilová

    2013-05-01

    The quality of reported accounting data, as well as the quality and behaviour of their users, influences the efficiency of an enterprise’s management, and its assessment may therefore change as well. Several methods and tools have been used to identify creative accounting and fraud. In this paper we present our proposal of the DMFCA (Detection Material Flow Cost Accounting) balance model, based on environmental accounting and on MFCA (Material Flow Cost Accounting) as its method. The following balance areas are included: material, financial and legislative. Using an analysis of the model’s strengths and weaknesses, its possible use within a production and business company was assessed, as was its possible use in detecting some creative accounting techniques. The model is developed in detail for practical use, and its theoretical aspects are described.

  18. Display of the information model accounting system

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2011-12-01

    This paper presents the accounting information system in public companies, the business technology matrix and the data flow diagram. The paper describes the purpose and goals of the accounting process, matrix sub-processes and data classes. Data flow in the accounting process and the so-called general ledger module are described in detail. Activities around preparing and determining the financial statements of the companies are mentioned as well. It is stated how the general ledger module should function and what characteristics it must have. Line graphs depict indicators of the company’s business success, indebtedness and efficiency coefficients, based on balance sheet reports and the profit and loss report.

  19. Accountability: a missing construct in models of adherence behavior and in clinical practice.

    Science.gov (United States)

    Oussedik, Elias; Foy, Capri G; Masicampo, E J; Kammrath, Lara K; Anderson, Robert E; Feldman, Steven R

    2017-01-01

    Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients' motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8-12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on the Self-Determination Theory, accountability can be considered in a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability) to patients' autonomous internal desire to please a respected health care provider (autonomous accountability), the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura's Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well as the testing and refinement of adherence interventions that make use of this critical determinant of human behavior.

  20. Modeling of diode pumped metastable rare gas lasers.

    Science.gov (United States)

    Yang, Zining; Yu, Guangqi; Wang, Hongyan; Lu, Qisheng; Xu, Xiaojun

    2015-06-01

    As a new kind of optically pumped gas laser, diode-pumped metastable rare gas lasers (OPRGLs) show potential in high-power operation. In this paper, a multi-level rate-equation-based model of an OPRGL is established. A qualitative agreement between the simulation and Rawlins et al.'s experimental result shows the validity of the model. The key parameters' influences and energy distribution characteristics are theoretically studied, which is useful for the optimized design of highly efficient OPRGLs.
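
The rate-equation approach mentioned above can be sketched with a generic three-level system integrated by forward Euler. The level scheme and every rate constant below are hypothetical placeholders, not the paper's OPRGL parameters:

```python
# Generic three-level rate-equation sketch (all rates invented):
# pumping 0 -> 2, fast collisional relaxation 2 -> 1, emission 1 -> 0.
W_p, g21, g10 = 1.0e5, 1.0e7, 1.0e6     # rates in s^-1 (assumed)
N0, N1, N2 = 1.0, 0.0, 0.0              # fractional level populations

dt = 1.0e-9                              # forward-Euler time step, s
for _ in range(200_000):                 # integrate to ~200 us
    pump, relax, emit = W_p * N0, g21 * N2, g10 * N1
    N0 += dt * (emit - pump)             # metastable lower level
    N1 += dt * (relax - emit)            # upper laser level
    N2 += dt * (pump - relax)            # pumped level
```

At steady state the pump flux balances the emission flux, and the closed system conserves total population by construction.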

  1. A Study of the Technology Acceptance Model (TAM) among Novice Accountants

    OpenAIRE

    Rustiana, Rustiana

    2006-01-01

    This study investigates the adoption and application of information technology acceptance behavior. Davis' Technology Acceptance Model is employed to explain perceived usefulness, perceived ease of use, and intention to use in information systems. The respondents were 228 accounting students in management information systems. Data were collected by questionnaire and then analyzed using linear regression analysis and an independent t-test. The results are in line with most of the hypotheses, only hypo...

  2. Understanding rare disease pathogenesis: a grand challenge for model organisms.

    Science.gov (United States)

    Hieter, Philip; Boycott, Kym M

    2014-10-01

    In this commentary, Philip Hieter and Kym Boycott discuss the importance of model organisms for understanding pathogenesis of rare human genetic diseases, and highlight the work of Brooks et al., "Dysfunction of 60S ribosomal protein L10 (RPL10) disrupts neurodevelopment and causes X-linked microcephaly in humans," published in this issue of GENETICS. Copyright © 2014 by the Genetics Society of America.

  3. Accountability

    Science.gov (United States)

    Fielding, Michael; Inglis, Fred

    2017-01-01

    This contribution republishes extracts from two important articles published around 2000 concerning the punitive accountability system suffered by English primary and secondary schools. The first concerns the inspection agency Ofsted, and the second managerialism. Though they do not directly address assessment, they are highly relevant to this…

  4. Uncertainty in Discount Models and Environmental Accounting

    Directory of Open Access Journals (Sweden)

    Donald Ludwig

    2005-12-01

    Cost-benefit analysis (CBA is controversial for environmental issues, but is nevertheless employed by many governments and private organizations for making environmental decisions. Controversy centers on the practice of economic discounting in CBA for decisions that have substantial long-term consequences, as do most environmental decisions. Customarily, economic discounting has been calculated at a constant exponential rate, a practice that weights the present heavily in comparison with the future. Recent analyses of economic data show that the assumption of constant exponential discounting should be modified to take into account large uncertainties in long-term discount rates. A proper treatment of this uncertainty requires that we consider returns over a plausible range of assumptions about future discounting rates. When returns are averaged in this way, the schemes with the most severe discounting have a negligible effect on the average after a long period of time has elapsed. This re-examination of economic uncertainty provides support for policies that prevent or mitigate environmental damage. We examine these effects for three examples: a stylized renewable resource, management of a long-lived species (Atlantic Right Whales, and lake eutrophication.
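
The averaging argument above can be sketched numerically: averaging discount *factors* over uncertain rates makes the effective discount rate decline toward the lowest scenario at long horizons. The rate scenarios below are invented for illustration:

```python
import numpy as np

# Equally likely constant-rate scenarios (figures invented).
rates = np.array([0.01, 0.02, 0.04, 0.07])

def effective_rate(t):
    """Certainty-equivalent rate implied by E[e^{-rt}] at horizon t."""
    factor = np.mean(np.exp(-rates * t))
    return -np.log(factor) / t

r_short, r_long = effective_rate(10.0), effective_rate(500.0)
```

At short horizons the effective rate sits near the mean scenario; at 500 years the severe-discounting scenarios have become negligible and the rate approaches the lowest one, which is what lends more weight to distant environmental benefits.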

  5. The financial accounting model from a system dynamics' perspective

    NARCIS (Netherlands)

    Melse, E.

    2006-01-01

    This paper explores the foundation of the financial accounting model. We examine the properties of the accounting equation as the principal algorithm for the design and the development of a System Dynamics model. Key to the perspective is the foundational requirement that resolves the temporal
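
The accounting equation that the abstract treats as the model's principal algorithm can be sketched as a stock-flow invariant: every double-entry posting changes two balances, leaving Assets = Liabilities + Equity unchanged. The ledger below is a minimal illustration, not the paper's System Dynamics model:

```python
# Minimal double-entry ledger: the accounting equation is an invariant.
balances = {"assets": 0.0, "liabilities": 0.0, "equity": 0.0}

def post(debit_to, credit_to, amount):
    """Record one transaction; signs follow each account's normal balance."""
    sign = {"assets": +1, "liabilities": -1, "equity": -1}
    balances[debit_to] += sign[debit_to] * amount    # debit side
    balances[credit_to] -= sign[credit_to] * amount  # credit side

post("assets", "equity", 1000.0)        # owner invests cash
post("assets", "liabilities", 500.0)    # take out a loan

gap = balances["assets"] - balances["liabilities"] - balances["equity"]
```

Because each posting's debit and credit cancel in the equation, `gap` stays at zero no matter how many transactions flow through, which is the conservation property a stock-flow model exploits.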

  6. Accountancy Modeling on Intangible Fixed Assets in Terms of the Main Provisions of International Accounting Standards

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2014-12-01

    Intangible fixed assets are of great importance for the progress of economic units. In recent years, new approaches and additions to old standards have been developed, so that intangible assets have gained a reputation both in the economic environment and in academia. We intend to develop a practical study of the main accounting approaches to the modeling of intangibles, with an impact on brand development at the research company PRORESEARCH SRL.

  7. Discovering rare behaviours in stochastic differential equations using decision procedures: applications to a minimal cell cycle model.

    Science.gov (United States)

    Ghosh, Arup Kumar; Hussain, Faraz; Jha, Susmit; Langmead, Christopher J; Jha, Sumit Kumar

    2014-01-01

    Stochastic Differential Equation (SDE) models are used to describe the dynamics of complex systems with inherent randomness. The primary purpose of these models is to study rare but interesting or important behaviours, such as the formation of a tumour. Stochastic simulations are the most common means for estimating (or bounding) the probability of rare behaviours, but the cost of simulations increases with the rarity of events. To address this problem, we introduce a new algorithm specifically designed to quantify the likelihood of rare behaviours in SDE models. Our approach relies on temporal logics for specifying rare behaviours of interest, and on the ability of bit-vector decision procedures to reason exhaustively about fixed-precision arithmetic. We apply our algorithm to a minimal parameterised model of the cell cycle, and take Brownian noise into account while investigating the likelihood of irregularities in cell size and time between cell divisions.
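
The standard Monte Carlo baseline that the abstract says becomes expensive for rarer events can be sketched with Euler-Maruyama on a toy mean-reverting "cell size" SDE. The dynamics, threshold and path count are invented; this is the brute-force contrast to the paper's decision-procedure approach, not its algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama for  dX = k * (mu - X) dt + sigma dW  (values invented),
# estimating the chance that X ever exceeds a threshold within [0, T].
k, mu, sigma = 1.0, 1.0, 0.1
dt, T, n_paths = 0.01, 10.0, 20_000
threshold = 1.25

X = np.full(n_paths, mu)
hit = np.zeros(n_paths, dtype=bool)
for _ in range(int(T / dt)):
    X += k * (mu - X) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    hit |= X > threshold
p_hat = hit.mean()      # plain Monte Carlo estimate of the rare behaviour
```

Pushing the threshold a little higher drives `p_hat` toward zero and the number of paths needed for a usable estimate grows rapidly, which is the cost-of-rarity problem the paper targets.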

  8. Modelling adversary actions against a nuclear material accounting system

    International Nuclear Information System (INIS)

    Lim, J.J.; Huebel, J.G.

    1979-01-01

    A typical nuclear material accounting system employing double-entry bookkeeping is described. A logic diagram is used to model the interactions of the accounting system and the adversary when he attempts to thwart it. Boolean equations are derived from the logic diagram; solution of these equations yields the accounts and records through which the adversary may disguise a SSNM theft and the collusion requirements needed to accomplish this feat. Some technical highlights of the logic diagram are also discussed
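
The logic-diagram analysis above (Boolean equations whose solutions give the records an adversary must falsify) can be sketched by brute force on a toy system. The record names and the detection logic are hypothetical, standing in for the Boolean equations derived from a real diagram:

```python
from itertools import combinations

# Hypothetical records in a double-entry material accounting system.
records = ["ledger", "shipper_receipt", "inventory_log", "measurement_file"]

def undetected(falsified):
    """Toy Boolean condition: theft is hidden only if a matched pair of
    records is falsified together (double-entry bookkeeping)."""
    return ({"ledger", "inventory_log"} <= falsified or
            {"shipper_receipt", "measurement_file"} <= falsified)

# Enumerate minimal collusion sets: smallest record sets hiding a theft.
minimal = []
for r in range(1, len(records) + 1):
    for combo in combinations(records, r):
        s = set(combo)
        if undetected(s) and not any(m <= s for m in minimal):
            minimal.append(s)
```

Each set in `minimal` corresponds to one solution of the Boolean equations, i.e. one collusion requirement for disguising a theft.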

  9. Project MAP: Model Accounting Plan for Special Education. Final Report.

    Science.gov (United States)

    Rossi, Robert J.

    The Model Accounting Plan (MAP) is a demographic accounting system designed to meet three major goals related to improving planning, evaluation, and monitoring of special education programs. First, MAP provides local-level data for administrators and parents to monitor the progress, transition patterns, expected attainments, and associated costs…

  10. Improved single particle potential for transport model simulations of nuclear reactions induced by rare isotope beams

    International Nuclear Information System (INIS)

    Xu Chang; Li Baoan

    2010-01-01

    Taking into account more accurately the isospin dependence of nucleon-nucleon interactions in the in-medium many-body force term of the Gogny effective interaction, new expressions for the single-nucleon potential and the symmetry energy are derived. Effects of both the spin (isospin) and the density dependence of nuclear effective interactions on the symmetry potential and the symmetry energy are examined. It is shown that they both play a crucial role in determining the symmetry potential and the symmetry energy at suprasaturation densities. The improved single-nucleon potential will be useful for more accurate simulation of nuclear reactions induced by rare-isotope beams within transport models.

  11. The Relevance of the CIPP Evaluation Model for Educational Accountability.

    Science.gov (United States)

    Stufflebeam, Daniel L.

    The CIPP Evaluation Model was originally developed to provide timely information in a systematic way for decision making, which is a proactive application of evaluation. This article examines whether the CIPP model also serves the retroactive purpose of providing information for accountability. Specifically, can the CIPP Model adequately assist…

  12. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Niculae Feleaga

    2006-04-01

    Accounting procedures cannot be analyzed without a prior evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be mistaken for cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: historical cost, current cost, realisable (settlement) value, and present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The various accounting evaluation models differ from one another through the degrees of relevance and reliability of the accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.
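
Two of the evaluation bases named in the framework can be contrasted with a small worked example. All figures (acquisition cost, cash flows, discount rate) are invented for illustration:

```python
# The same asset under two measurement bases from the framework:
# historical cost vs. present value of expected future cash flows.
historical_cost = 10_000.0               # what was paid at acquisition
cash_flows = [3_000.0] * 5               # expected annual inflows (assumed)
r = 0.08                                 # discount rate assumption

present_value = sum(cf / (1 + r) ** t
                    for t, cf in enumerate(cash_flows, start=1))
```

Here present value (about 11,978) exceeds historical cost, so the choice of basis alone changes the reported figure, which is exactly why the basis decision shapes the evaluation model.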

  13. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Liliana Feleaga

    2006-06-01

    Accounting procedures cannot be analyzed without a prior evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be mistaken for cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: historical cost, current cost, realisable (settlement) value, and present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The various accounting evaluation models differ from one another through the degrees of relevance and reliability of the accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  14. Can An Amended Standard Model Account For Cold Dark Matter?

    International Nuclear Information System (INIS)

    Goldhaber, Maurice

    2004-01-01

    It is generally believed that one has to invoke theories beyond the Standard Model to account for cold dark matter particles. However, there may be undiscovered universal interactions that, if added to the Standard Model, would lead to new members of the three generations of elementary fermions that might be candidates for cold dark matter particles.

  15. Closed-Loop Supply Chain Planning Model of Rare Metals

    Directory of Open Access Journals (Sweden)

    Dongmin Son

    2018-04-01

    Full Text Available Rare metals (RMs) are becoming increasingly important in high-tech industries associated with the Fourth Industrial Revolution, such as the electric vehicle (EV) and 3D printer industries. As the growth of these industries accelerates in the near future, manufacturers will also face greater RM supply risks. For this reason, many countries are putting considerable effort into securing the RM supply. For example, countries including Japan, Korea, and the USA have adopted two major policies: the stockpile system and Extended Producer Responsibility (EPR). It is therefore necessary for manufacturers that use RMs to establish a suitable supply chain plan that reflects this situation. In this study, an RM classification matrix is created based on the stockpile and recycling levels in Korea. Accordingly, three different types of supply chain are designed in order to develop the closed-loop supply chain (CLSC) planning model of RMs, and the CLSC planning models are validated through experimental analysis. The results show that stockpiling and the EPR recycling obligation increase the amount of recycled flow and reduce the total cost of part manufacturing, meaning that these two factors are significant for the sustainability of the RM CLSC. In addition, the government needs to set an appropriate sharing cost to promote manufacturers' recycling. Also, from the manufacturer's perspective, it is better to increase the return rate by contracting with collectors to guarantee the collection of used products.

  16. Accountability: a missing construct in models of adherence behavior and in clinical practice

    Directory of Open Access Journals (Sweden)

    Oussedik E

    2017-07-01

    Full Text Available Elias Oussedik,1 Capri G Foy,2 E J Masicampo,3 Lara K Kammrath,3 Robert E Anderson,1 Steven R Feldman1,4,5 1Center for Dermatology Research, Department of Dermatology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 2Department of Social Sciences and Health Policy, Wake Forest School of Medicine, Winston-Salem, NC, USA; 3Department of Psychology, Wake Forest University, Winston-Salem, NC, USA; 4Department of Pathology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 5Department of Public Health Sciences, Wake Forest School of Medicine, Winston-Salem, NC, USA Abstract: Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients’ motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8–12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on the Self-Determination Theory, accountability can be considered in a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability to patients’ autonomous internal desire to please a respected health care provider (autonomous accountability, the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura’s Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well

  17. Stochastic models in risk theory and management accounting

    NARCIS (Netherlands)

    Brekelmans, R.C.M.

    2000-01-01

    This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds of the probability of ruin for the classical risk process extended with a constant interest

  18. Driving Strategic Risk Planning With Predictive Modelling For Managerial Accounting

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

    Currently, risk management in management/managerial accounting is treated as deterministic. Although it is well known that risk estimates are necessarily uncertain or stochastic, until recently the methodology required to handle stochastic risk-based elements appeared to be impractical and too mathematical. The ultimate purpose of this paper is to “make the risk concept procedural and analytical” and to argue that accountants should now include stochastic risk management as a standard tool. Drawing on mathematical modelling and statistics, this paper methodically develops a risk analysis approach for managerial accounting and shows how it can be used to determine the impact of different types of risk assessment input parameters on the variability of important outcome measures. The purpose is to: (i) point out the theoretical necessity of a stochastic risk framework; (ii) present a stochastic framework...

  19. Accounting for Business Models: Increasing the Visibility of Stakeholders

    Directory of Open Access Journals (Sweden)

    Colin Haslam

    2015-01-01

    Full Text Available Purpose: This paper conceptualises a firm’s business model employing stakeholder theory as a central organising element to help inform the purpose and objective(s) of business model financial reporting and disclosure. Framework: Firms interact with a complex network of primary and secondary stakeholders to secure the value proposition of a firm’s business model. This value proposition is itself a complex amalgam of value creating, value capturing and value manipulating arrangements with stakeholders. From a financial accounting perspective, the purpose of the value proposition for a firm’s business model is to sustain liquidity and solvency as a going concern. Findings: This article argues that stakeholder relations impact upon the financial viability of a firm’s business model value proposition. However, current financial reporting by function of expenses and the central organising objectives of the accounting conceptual framework conceal firm-stakeholder relations and their impact on reported financials. Practical implications: The practical implication of our paper is that ‘business model’ financial reporting would require a reorientation in the accounting conceptual framework that defines the objectives and purpose of financial reporting. This reorientation would involve reporting about stakeholder relations and their impact on a firm’s financials, not simply reporting financial information to ‘investors’. Social implications: Business model financial reporting has the potential to be stakeholder inclusive because the numbers and narratives reported by firms in their annual financial statements will increase the visibility of stakeholder relations and how these are being managed. Originality/value: This paper’s original perspective is that it argues that a firm’s business model is structured out of stakeholder relations. It presents the firm’s value proposition as the product of value creating, capturing and

  20. Accounting for small scale heterogeneity in ecohydrologic watershed models

    Science.gov (United States)

    Burke, W.; Tague, C.

    2017-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach.
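
As a schematic illustration of the two-level idea (aspatial heterogeneity nested inside spatially explicit, laterally connected units), the following sketch uses hypothetical numbers and structure, not actual RHESSys code:

```python
# Sketch of two-level aggregation: each spatially explicit unit holds an
# aspatial distribution of patch "families" (e.g. tree-size classes); fluxes
# are computed per family and aggregated, then routed downslope along the
# explicit flow network. All values below are illustrative assumptions.

# Three hillslope units ordered upslope -> downslope.
units = [
    {"area": 1.0e4, "families": [(0.6, 0.30), (0.4, 0.10)]},  # (area frac, ET coeff)
    {"area": 2.0e4, "families": [(0.5, 0.25), (0.5, 0.15)]},
    {"area": 1.5e4, "families": [(1.0, 0.20)]},
]
precip = 0.01  # m of precipitation per time step

lateral_in = 0.0
for u in units:
    water = precip * u["area"] + lateral_in          # local input + upslope inflow
    # Aggregate evapotranspiration over the aspatial families of this unit.
    et_coeff = sum(frac * c for frac, c in u["families"])
    et = et_coeff * water
    lateral_in = water - et                          # remainder drains downslope
print(f"outflow at watershed outlet: {lateral_in:.1f} m^3")
```

The point of the sketch is that the family distribution inside each unit can be refined without adding new spatially explicit units, so the flow network stays small while fine-scale heterogeneity is still represented.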

  1. Accounting for household heterogeneity in general equilibrium economic growth models

    International Nuclear Information System (INIS)

    Melnikov, N.B.; O'Neill, B.C.; Dalton, M.G.

    2012-01-01

    We describe and evaluate a new method of aggregating heterogeneous households that allows for the representation of changing demographic composition in a multi-sector economic growth model. The method is based on a utility and labor supply calibration that takes into account time variations in demographic characteristics of the population. We test the method using the Population-Environment-Technology (PET) model by comparing energy and emissions projections employing the aggregate representation of households to projections representing different household types explicitly. Results show that the difference between the two approaches in terms of total demand for energy and consumption goods is negligible for a wide range of model parameters. Our approach allows the effects of population aging, urbanization, and other forms of compositional change on energy demand and CO2 emissions to be estimated and compared in a computationally manageable manner using a representative household under assumptions and functional forms that are standard in economic growth models.

  2. ACCOUNTING MODELS FOR OUTWARD PROCESSING TRANSACTIONS OF GOODS

    Directory of Open Access Journals (Sweden)

    Lucia PALIU-POPA

    2010-09-01

    Full Text Available In modern international trade, a significant expansion is experienced by commercial operations, including outward processing transactions of goods. The motivations for expanding these international economic affairs, which take place in a complex legal framework, consist of: capitalization of the production capacity for some partners and of the brand for others, leading to a significant commercial profit and thus increasing the currency contribution, without excluding the high and complex nature of risks, both trading and extra-trading. Starting from the content of processing transactions of goods as part of combined commercial operations, and after clarifying the tax matters which affect their entry in the accounts, we present models for reflecting, in the accounting of an entity established in Romania, the operations of outward processing of goods when the provider of such operations belongs to the extra-Community or Community area.

  3. Accommodating environmental variation in population models: metaphysiological biomass loss accounting.

    Science.gov (United States)

    Owen-Smith, Norman

    2011-07-01

    1. There is a pressing need for population models that can reliably predict responses to changing environmental conditions and diagnose the causes of variation in abundance in space as well as through time. In this 'how to' article, it is outlined how standard population models can be modified to accommodate environmental variation in a heuristically conducive way. This approach is based on metaphysiological modelling concepts linking populations within food web contexts and underlying behaviour governing resource selection. Using population biomass as the currency, population changes can be considered at fine temporal scales taking into account seasonal variation. Density feedbacks are generated through the seasonal depression of resources even in the absence of interference competition. 2. Examples described include (i) metaphysiological modifications of Lotka-Volterra equations for coupled consumer-resource dynamics, accommodating seasonal variation in resource quality as well as availability, resource-dependent mortality and additive predation, (ii) spatial variation in habitat suitability evident from the population abundance attained, taking into account resource heterogeneity and consumer choice using empirical data, (iii) accommodating population structure through the variable sensitivity of life-history stages to resource deficiencies, affecting susceptibility to oscillatory dynamics and (iv) expansion of density-dependent equations to accommodate various biomass losses reducing population growth rate below its potential, including reductions in reproductive outputs. Supporting computational code and parameter values are provided. 3. The essential features of metaphysiological population models include (i) the biomass currency enabling within-year dynamics to be represented appropriately, (ii) distinguishing various processes reducing population growth below its potential, (iii) structural consistency in the representation of interacting populations and

  4. Micromapping of very rare anemias: the model of CDA

    Directory of Open Access Journals (Sweden)

    Hermann Heimpel

    2013-03-01

    Full Text Available The congenital dyserythropoietic anemias (CDAs), first described in 1966 and classified into four distinct types in 1968, are still very rare. However, many cases were described in recent years, mainly in European countries. The detection of mutations of the CDAN1 gene (Tamary et al. from Israel) in all cases of CDA I, and of the SEC23B gene in almost all cases of CDA II (Schwarz, Iolascon, Heimpel et al. from Germany and Italy; Zanella, Bianchi et al. from Italy), stimulated greater awareness of the CDAs and resulted in many recent reports from all continents. Period prevalences for the last 50 years in the European countries collated in the German Registry on CDAs were calculated in 2009 and published in 2010. The cumulative incidence of both types combined varied widely between European regions, with minimal values of 0.08 cases/million in Scandinavia and 2.60 cases/million in Italy...

  5. Photo darkening in Rare earth doped silica: Model and Experiment

    DEFF Research Database (Denmark)

    Mattsson, Kent Erik

    2011-01-01

    A model for photo darkening based on chemical bond formation is presented. The formation process, color center spectral response, and bleaching are discussed, and model predictions are found to follow high-power fiber laser operation...

  6. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    Science.gov (United States)

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
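
A minimal sketch of the general idea, averaging rare-event probability estimates from several candidate logistic models rather than committing to one, might look like the following. The AIC-weight scheme and all data here are illustrative assumptions; the article's actual procedure uses data perturbation and an approximately unbiased Kullback-Leibler loss estimator to choose among the averaged estimates:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson fit of a logistic regression with an intercept.
    Returns (coefficients, log-likelihood)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(Xd.shape[1])
    for _ in range(iters):
        eta = np.clip(Xd @ b, -30, 30)
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1.0 - p)
        H = Xd.T @ (Xd * W[:, None]) + 1e-6 * np.eye(Xd.shape[1])  # ridge for stability
        b = b + np.linalg.solve(H, Xd.T @ (y - p))
    eta = np.clip(Xd @ b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-eta))
    ll = float(np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1.0 - p + 1e-12)))
    return b, ll

rng = np.random.default_rng(42)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
# Rare outcome: the -4 intercept keeps events to a few percent of observations.
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-4.0 + 1.5 * x1))))

# Candidate models (different feature subsets): average their predictions.
candidates = [np.column_stack([x1]), np.column_stack([x1, x2])]
x_new = np.array([2.0, 0.0])        # query point: x1 = 2, x2 = 0

aics, preds = [], []
for X in candidates:
    b, ll = fit_logistic(X, y)
    k = X.shape[1] + 1
    aics.append(2 * k - 2 * ll)
    xq = np.concatenate([[1.0], x_new[: X.shape[1]]])
    preds.append(1.0 / (1.0 + np.exp(-xq @ b)))

# Akaike weights (subtract the minimum AIC before exponentiating).
d = np.array(aics) - min(aics)
w = np.exp(-0.5 * d)
w /= w.sum()
p_avg = float(w @ np.array(preds))
print(f"model weights = {w.round(3)}, averaged P(event) = {p_avg:.4f}")
```

Averaging hedges against the instability that single-model selection suffers when events are rare and the fitted coefficients are biased.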

  7. Rare muon and tau decays in A4 Models

    CERN Document Server

    Feruglio, Ferruccio

    2010-01-01

    We analyze the most general dimension six effective Lagrangian, invariant under the flavour symmetry A4 x Z3 x U(1) proposed to reproduce the near tri-bimaximal lepton mixing observed in neutrino oscillations. The effective Lagrangian includes four-lepton operators that violate the individual lepton numbers in the limit of exact flavor symmetry and allow unsuppressed processes satisfying the rule |Delta L_e x Delta L_mu x Delta L_tau| = 2. The most stringent bounds on the strength of the new interactions come from the observed universality of leptonic muon and tau decays, from the agreement between the Fermi constant measured in the muon decay and that extracted from the mW/mZ ratio, and from the limits on the rare decays tau^- -> mu^+ e^- e^- and tau^- -> e^+ mu^- mu^-. We also investigate these effects in a specific supersymmetric (SUSY) realization of the flavour symmetry and we find large suppression factors for all the processes allowed by the selection rule. We explain why this rule is violated in the S...

  8. Rare B_s -> nu nu-bar gamma Decay beyond the Standard Model

    CERN Document Server

    Cakir, O

    2003-01-01

    Using the most general model-independent form of the effective Hamiltonian, the rare decay B_s -> nu nu-bar gamma is studied. The sensitivity of the photon energy distribution and branching ratio to new Wilson coefficients is investigated.

  9. Large animal models of rare genetic disorders: sheep as phenotypically relevant models of human genetic disease.

    Science.gov (United States)

    Pinnapureddy, Ashish R; Stayner, Cherie; McEwan, John; Baddeley, Olivia; Forman, John; Eccles, Michael R

    2015-09-02

    Animals that accurately model human disease are invaluable in medical research, allowing a critical understanding of disease mechanisms, and the opportunity to evaluate the effect of therapeutic compounds in pre-clinical studies. Many types of animal models are used world-wide, with the most common being small laboratory animals, such as mice. However, rodents often do not faithfully replicate human disease, despite their predominant use in research. This discordancy is due in part to physiological differences, such as body size and longevity. In contrast, large animal models, including sheep, provide an alternative to mice for biomedical research due to their greater physiological parallels with humans. Completion of the full genome sequences of many species, and the advent of Next Generation Sequencing (NGS) technologies, means it is now feasible to screen large populations of domesticated animals for genetic variants that resemble human genetic diseases, and generate models that more accurately model rare human pathologies. In this review, we discuss the notion of using sheep as large animal models, and their advantages in modelling human genetic disease. We exemplify several existing naturally occurring ovine variants in genes that are orthologous to human disease genes, such as the Cln6 sheep model for Batten disease. These, and other sheep models, have contributed significantly to our understanding of the relevant human disease process, in addition to providing opportunities to trial new therapies in animals with similar body and organ size to humans. Therefore sheep are a significant species with respect to the modelling of rare genetic human disease, which we summarize in this review.

  10. Rare earth-doped integrated glass components: modeling and optimization

    DEFF Research Database (Denmark)

    Lumholt, Ole; Bjarklev, Anders Overgaard; Rasmussen, Thomas

    1995-01-01

    For the integrated optic erbium-doped phosphate silica-amplifier, a comprehensive model is presented which includes high-concentration dissipative ion-ion interactions. Based on actual waveguide parameters, the model is seen to reproduce measured gains closely. A rigorous design optimization is p...

  11. Accounting for Water Insecurity in Modeling Domestic Water Demand

    Science.gov (United States)

    Galaitsis, S. E.; Huber-lee, A. T.; Vogel, R. M.; Naumova, E.

    2013-12-01

    Water demand management uses price elasticity estimates to predict consumer demand in relation to water pricing changes, but studies have shown that many additional factors affect water consumption. Development scholars document the need for water security; however, much of the water security literature focuses on broad policies which can influence water demand. Previous domestic water demand studies have not considered how water security can affect a population's consumption behavior. This study is the first to model the influence of water insecurity on water demand. A subjective indicator scale measuring water insecurity among consumers in the Palestinian West Bank is developed and included as a variable to explore how perceptions of control, or lack thereof, impact consumption behavior and resulting estimates of price elasticity. A multivariate regression model demonstrates the significance of a water insecurity variable for data sets encompassing disparate water access. When accounting for insecurity, the R-squared value improves and the marginal price a household is willing to pay becomes a significant predictor of household quantity consumption. The model denotes that, with all other variables held equal, a household will buy more water when the users are more water insecure. Though the reasons behind this trend require further study, the findings suggest broad policy implications by demonstrating that water distribution practices in scarcity conditions can promote consumer welfare and efficient water use.
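
The core pattern, adding an insecurity indicator to a demand regression and observing the improved fit, can be illustrated with a toy regression; the synthetic data, units, and coefficients below are assumptions for illustration, not the study's data:

```python
import numpy as np

# Hypothetical household water demand that depends on marginal price AND a
# subjective water-insecurity score, fit with and without the insecurity term.
rng = np.random.default_rng(7)
n = 500
price = rng.uniform(0.5, 3.0, n)        # marginal price (hypothetical units)
insecurity = rng.uniform(0.0, 1.0, n)   # subjective insecurity scale [0, 1]
# More insecure households buy more water, all else equal (assumed effect).
demand = 20.0 - 3.0 * price + 6.0 * insecurity + rng.normal(0.0, 1.5, n)

def r_squared(X, y):
    """Ordinary least squares fit with intercept; returns R^2."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(price[:, None], demand)
r2_full = r_squared(np.column_stack([price, insecurity]), demand)
print(f"R^2 price only: {r2_base:.3f}; with insecurity: {r2_full:.3f}")
```

Because the insecurity term explains demand variation the price term cannot, the full model's R-squared exceeds the price-only model's, mirroring the improvement the abstract reports.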

  12. Mantle rare gas relative abundances in a steady-state mass transport model

    Science.gov (United States)

    Porcelli, D.; Wasserburg, G. J.

    1994-01-01

    A model for He and Xe was presented previously which incorporates mass transfer of rare gases from an undegassed lower mantle (P) and the atmosphere into a degassed upper mantle (D). We extend the model to include Ne and Ar. Model constraints on rare gas relative abundances within P are derived. Discussions of terrestrial volatile acquisition have focused on the rare gas abundance pattern of the atmosphere relative to meteoritic components, and the pattern of rare gases still trapped in the Earth is important in identifying volatile capture and loss processes operating during Earth formation. The assumptions and principles of the model are discussed in Wasserburg and Porcelli (this volume). The concentrations in P of the decay/nuclear products 4He, 21Ne, 40Ar, and 136Xe can be calculated from the concentrations of the parent elements U, Th, K, and Pu. The total concentration of the daughter element in P is proportional to the isotopic shifts in P. For Ar, (40Ar/36Ar)_P - (40Ar/36Ar)_O = Delta40_P = 40C_P/36C_P, where iC_j is the concentration of isotope i in reservoir j. In D, isotope compositions are the result of mixing rare gases from P, decay/nuclear products generated in the upper mantle, and subducted rare gases (for Ar and Xe).
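
In display form, the argon mass balance for reservoir P reads (reconstructed from the abstract's flattened notation):

```latex
\left(\frac{^{40}\mathrm{Ar}}{^{36}\mathrm{Ar}}\right)_{P}
- \left(\frac{^{40}\mathrm{Ar}}{^{36}\mathrm{Ar}}\right)_{O}
\;=\; \Delta^{40}_{P}
\;=\; \frac{^{40}C_{P}}{^{36}C_{P}},
```

where ${}^{i}C_{j}$ denotes the concentration of isotope $i$ in reservoir $j$, so the radiogenic isotopic shift in P is proportional to the radiogenic 40Ar concentration divided by the primordial 36Ar concentration.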

  13. The origins of the Spanish railroad accounting model: a qualitative study of the MZA's operating account (1856-1874)

    Directory of Open Access Journals (Sweden)

    Beatriz Santos

    2014-12-01

    Full Text Available The lack of external regulation of the form and substance of the financial statements that railroad companies had to report during the implementation phase of the Spanish railway meant that each company developed its own accounting model. In this study we have described, analysed and interpreted the most relevant changes in the accounting information in relation to the business result. Using the analysis of a historical case, we developed an ad-hoc research tool for recording all the changes of the operating account. The results of the study show that MZA's operating account reflected the particularities of the railway business, although subject to limitations, and that the reported information improved during the study period in terms of relevance and reliability.

  14. Accrual based accounting implementation: An approach for modelling major decisions

    Directory of Open Access Journals (Sweden)

    Ratno Agriyanto

    2016-12-01

    Full Text Available Over the last three decades, the implementation of accrual based accounting in government institutions has been a main issue in Indonesia. Implementation of accrual based accounting in government institutions has taken place amid debate about the usefulness of accounting information for decision-making. Empirical studies show that accrual based accounting information in government institutions is often not used for decision making. The research objective was to determine the impact of the implementation of accrual based accounting on the use of accrual basis accounting information for decision-making. We used survey questionnaires, and the data were processed by SEM using the statistical software WarpPLS. The results showed that the implementation of accrual based accounting in the City Government of Semarang is significantly and positively associated with decision-making. Another important finding is that City Government of Semarang officials' low tolerance of ambiguity has a negative effect on the relationship between the implementation of accrual based accounting and decision making.

  15. Bedrijfsrisico's van de accountant en het Audit Risk Model [The accountant's business risks and the Audit Risk Model]

    NARCIS (Netherlands)

    Wallage, Ph.; Klijnsmit, P.; Sodekamp, M.

    2003-01-01

    In recent years, the business risk of the auditing accountant has increased sharply. The accountant's business risks are increasingly becoming an obstacle to accepting engagements. This article pays attention to the way in which the business risks

  16. Modeling Rare and Unique Documents: Using FRBROO/CIDOC CRM

    Science.gov (United States)

    Le Boeuf, Patrick

    2012-01-01

    Both the library and the museum communities have developed conceptual models for the information they produce about the collections they hold: FRBR (Functional Requirements for Bibliographic Records) and CIDOC CRM (Conceptual Reference Model). But neither proves perfectly adequate when it comes to some specific types of rare and unique materials:…

  17. Creative Accounting and Financial Reporting: Model Development and Empirical Testing

    OpenAIRE

    Fizza Tassadaq; Qaisar Ali Malik

    2015-01-01

    This paper empirically and critically investigates the issue of creative accounting in financial reporting. It not only analyzes the ethical responsibility of creative accounting but also focuses on other factors which influence the financial reporting like role of auditors, role of government regulations or international standards, impact of manipulative behaviors and impact of ethical values of an individual. Data has been collected through structured questionnaire from industrial sector. D...

  18. Creative Accounting & Financial Reporting: Model Development & Empirical Testing

    OpenAIRE

    Tassadaq, Fizza; Malik, Qaisar Ali

    2015-01-01

    This paper empirically and critically investigates the issue of creative accounting in financial reporting. It not only analyzes the ethical responsibility of creative accounting but also focuses on other factors which influence the financial reporting like role of auditors, role of govt. regulations or international standards, impact of manipulative behaviors and impact of ethical values of an individual. Data has been collected through structured questionnaire from industrial sector. Descri...

  19. A Model Driven Approach to domain standard specifications examplified by Finance Accounts receivable/ Accounts payable

    OpenAIRE

    Khan, Bahadar

    2005-01-01

    This thesis was written as part of a master's degree at the University of Oslo. The thesis work was conducted at SINTEF and carried out between November 2002 and April 2005. The thesis may be of interest to anyone interested in a Domain Standard Specification Language developed using the MDA approach to software development. The Model Driven Architecture (MDA) makes it possible to separate the system functionality specification from its implementation on any specific technolo...

  20. Models and Rules of Evaluation in International Accounting

    OpenAIRE

    Liliana Feleaga; Niculae Feleaga

    2006-01-01

    The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made to a specific asset, group of assets or entities, or to some rendered services. Within the economic sciences, value comes from its very own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is freque...

  1. Accounting for heterogeneity of public lands in hedonic property models

    Science.gov (United States)

    Charlotte Ham; Patricia A. Champ; John B. Loomis; Robin M. Reich

    2012-01-01

    Open space lands, national forests in particular, are usually treated as homogeneous entities in hedonic price studies. Failure to account for the heterogeneous nature of public open spaces may result in inappropriate inferences about the benefits of proximate location to such lands. In this study the hedonic price method is used to estimate the marginal values for...

  2. Tissue Chips to aid drug development and modeling for rare diseases.

    Science.gov (United States)

    Low, Lucie A; Tagle, Danilo A

    2016-01-01

    The technologies used to design, create and use microphysiological systems (MPS, "tissue chips" or "organs-on-chips") have progressed rapidly in the last 5 years, and validation studies of the functional relevance of these platforms to human physiology, and response to drugs for individual model organ systems, are well underway. These studies are paving the way for integrated multi-organ systems that can model diseases and predict drug efficacy and toxicology of multiple organs in real-time, improving the potential for diagnostics and development of novel treatments of rare diseases in the future. This review will briefly summarize the current state of tissue chip research and highlight model systems where these microfabricated (or bioengineered) devices are already being used to screen therapeutics, model disease states, and provide potential treatments in addition to helping elucidate the basic molecular and cellular phenotypes of rare diseases. Microphysiological systems hold great promise for modeling rare disorders, as well as for enhancing the predictive power of new drug therapeutics and potentially increasing the statistical power of clinical trials while removing the inherent risks of these trials in rare disease populations.

  3. Modelling Financial-Accounting Decisions by Means of OLAP Tools

    Directory of Open Access Journals (Sweden)

    Diana Elena CODREAN

    2011-03-01

At present, a company's sound operation largely depends on the quantity and quality of the information it relies on when making decisions. The information needed to underpin decisions can only be obtained through a high-performing information system that presents data quickly, synthetically and accurately, while also providing the opportunity for complex analyses and predictions. In such circumstances, computerized accounting systems have likewise grown in complexity through data-analysis solutions such as OLAP and Data Mining, which make it possible to perform multidimensional analysis of financial-accounting data, detect potential fraud, reveal information hidden in the data, and establish trends for certain indicators, thereby providing useful information for a company's decision making.
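The kind of multidimensional roll-up an OLAP tool performs on financial-accounting data can be sketched with a plain fact table and dictionary aggregation; the dimensions and figures below are hypothetical:

```python
from collections import defaultdict

# Hypothetical fact table: (year, quarter, department, account, amount).
facts = [
    (2010, "Q1", "Sales",   "Revenue", 120.0),
    (2010, "Q1", "Sales",   "Expense",  70.0),
    (2010, "Q2", "Sales",   "Revenue", 135.0),
    (2010, "Q2", "Service", "Revenue",  60.0),
    (2011, "Q1", "Sales",   "Revenue", 150.0),
    (2011, "Q1", "Service", "Expense",  40.0),
]

def roll_up(facts, dims):
    """Aggregate the amount measure over the chosen dimension columns.

    dims is a tuple of column indices into the fact tuple: (0,) rolls up
    to year level, (0, 3) to year x account, and so on.
    """
    cube = defaultdict(float)
    for row in facts:
        key = tuple(row[i] for i in dims)
        cube[key] += row[-1]
    return dict(cube)

# Roll-up to year x account: the kind of view a decision maker drills into.
by_year_account = roll_up(facts, (0, 3))
print(by_year_account)
```

Slicing (fixing one dimension) and drilling down (adding one) are just different `dims` tuples over the same fact table.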

  4. Accounting for seasonal isotopic patterns of forest canopy intercepted precipitation in streamflow modeling

    Science.gov (United States)

    Stockinger, Michael P.; Lücke, Andreas; Vereecken, Harry; Bogena, Heye R.

    2017-12-01

    Forest canopy interception alters the isotopic tracer signal of precipitation leading to significant isotopic differences between open precipitation (δOP) and throughfall (δTF). This has important consequences for the tracer-based modeling of streamwater transit times. Some studies have suggested using a simple static correction to δOP by uniformly increasing it because δTF is rarely available for hydrological modeling. Here, we used data from a 38.5 ha spruce forested headwater catchment where three years of δOP and δTF were available to develop a data driven method that accounts for canopy effects on δOP. Changes in isotopic composition, defined as the difference δTF-δOP, varied seasonally with higher values during winter and lower values during summer. We used this pattern to derive a corrected δOP time series and analyzed the impact of using (1) δOP, (2) reference throughfall data (δTFref) and (3) the corrected δOP time series (δOPSine) in estimating the fraction of young water (Fyw), i.e., the percentage of streamflow younger than two to three months. We found that Fyw derived from δOPSine came closer to δTFref in comparison to δOP. Thus, a seasonally-varying correction for δOP can be successfully used to infer δTF where it is not available and is superior to the method of using a fixed correction factor. Seasonal isotopic enrichment patterns should be accounted for when estimating Fyw and more generally in catchment hydrology studies using other tracer methods to reduce uncertainty.
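The seasonally varying correction described above amounts to fitting a periodic function to the throughfall-minus-open-precipitation difference and adding it back to δOP. A sketch of that idea with synthetic monthly data; the amplitude, offset and noise level are illustrative, not the catchment's values:

```python
import math, random

random.seed(1)

# Synthetic monthly difference dTF - dOP (per mil): higher in winter,
# lower in summer, as reported in the study; values are illustrative.
months = list(range(48))  # four years of monthly data
signal = [0.6 + 0.4 * math.cos(2 * math.pi * m / 12) for m in months]
obs = [v + random.gauss(0, 0.1) for v in signal]

# Fit diff(m) = c + a*cos(w m) + b*sin(w m) by linear least squares
# (normal equations for the three coefficients, solved by Cramer's rule).
w = 2 * math.pi / 12
X = [[1.0, math.cos(w * m), math.sin(w * m)] for m in months]

def lstsq3(X, y):
    k = 3
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    rhs = [sum(X[n][i] * y[n] for n in range(len(y))) for i in range(k)]
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det3(A)
    out = []
    for i in range(k):
        Ai = [row[:] for row in A]
        for r in range(k):
            Ai[r][i] = rhs[r]
        out.append(det3(Ai) / D)
    return out

c, a, bcoef = lstsq3(X, obs)

def corrected_dOP(dOP, month):
    """Shift an open-precipitation isotope value toward throughfall."""
    return dOP + c + a * math.cos(w * month) + bcoef * math.sin(w * month)

print(round(c, 2), round(a, 2))
```

A fixed correction factor corresponds to keeping only `c`; the fitted sine terms carry the seasonal pattern the study found important.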

  5. The Charitable Trust Model: An Alternative Approach For Department Of Defense Accounting

    Science.gov (United States)

    2016-12-01

    unqualified opinion creates accountability issues that extend beyond the agency by making an audit of the U.S. consolidated financial statements challenging ...the foundation of contemporary reporting. The chapter then discusses the establishment and purpose of the Federal Accounting Standards Advisory...TRUST MODEL: AN ALTERNATIVE APPROACH FOR DEPARTMENT OF DEFENSE ACCOUNTING by Gerald V. Weers Jr. December 2016 Thesis Advisor: Philip J

  6. Resource Allocation Models and Accountability: A Jamaican Case Study

    Science.gov (United States)

    Nkrumah-Young, Kofi K.; Powell, Philip

    2008-01-01

    Higher education institutions (HEIs) may be funded privately, by the state or by a mixture of the two. Nevertheless, any state financing of HE necessitates a mechanism to determine the level of support and the channels through which it is to be directed; that is, a resource allocation model. Public funding, through resource allocation models,…

  7. Modeling Antibiotic Tolerance in Biofilms by Accounting for Nutrient Limitation

    OpenAIRE

    Roberts, Mark E.; Stewart, Philip S.

    2004-01-01

    A mathematical model of biofilm dynamics was used to investigate the protection from antibiotic killing that can be afforded to microorganisms in biofilms based on a mechanism of localized nutrient limitation and slow growth. The model assumed that the rate of killing by the antibiotic was directly proportional to the local growth rate. Growth rates in the biofilm were calculated by using the local concentration of a single growth-limiting substrate with Monod kinetics. The concentration prof...
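The model's central assumption, killing proportional to the local Monod growth rate, can be illustrated numerically. Parameter values here are illustrative, not taken from the paper:

```python
import math

# Monod growth kinetics: mu = mu_max * S / (Ks + S).
# Killing assumed proportional to the local growth rate (the model's
# key assumption): dN/dt = -k * mu(S) * N  =>  N(t) = N0 * exp(-k mu t).
MU_MAX = 1.0     # 1/h, maximum specific growth rate (illustrative)
KS = 0.2         # g/L, half-saturation constant (illustrative)
K_KILL = 3.0     # dimensionless killing factor (illustrative)
N0 = 1e8         # initial cell count

def mu(S):
    return MU_MAX * S / (KS + S)

def survivors(S, t_hours):
    return N0 * math.exp(-K_KILL * mu(S) * t_hours)

# Nutrient is depleted deep in the biofilm: compare surface vs interior.
surface = survivors(S=2.0, t_hours=8)    # nutrient-rich, fast growth
interior = survivors(S=0.01, t_hours=8)  # nutrient-limited, slow growth

print(f"surface survivors:  {surface:.3g}")
print(f"interior survivors: {interior:.3g}")
```

Slow-growing interior cells survive orders of magnitude better, which is the tolerance mechanism the model investigates.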

  8. Accounting Fundamentals and Variations of Stock Price: Methodological Refinement with Recursive Simultaneous Model

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki

    2013-01-01

This study investigates the association between accounting fundamentals and variations of stock prices using a recursive simultaneous equation model. The accounting fundamentals consist of earnings yield, book value, profitability, growth opportunities and discount rate. The prior single-relationship model has been investigated by Chen and Zhang (2007), Sumiyana (2011) and Sumiyana et al. (2010). They assume that all accounting fundamentals are directly and linearly associated with stock returns. This study ...

  9. ACCOUNTING FUNDAMENTALS AND VARIATIONS OF STOCK PRICE: METHODOLOGICAL REFINEMENT WITH RECURSIVE SIMULTANEOUS MODEL

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki

    2015-01-01

This study investigates the association between accounting fundamentals and variations of stock prices using a recursive simultaneous equation model. The accounting fundamentals consist of earnings yield, book value, profitability, growth opportunities and discount rate. The prior single-relationship model has been investigated by Chen and Zhang (2007), Sumiyana (2011) and Sumiyana et al. (2010). They assume that all accounting fundamentals are directly and linearly associated with stock returns. This study ...

  10. Searching for extensions to the standard model in rare kaon decays

    International Nuclear Information System (INIS)

    Sanders, G.H.

    1989-01-01

    Small effects that are beyond the current standard models of physics are often signatures for new physics, revealing fields and mass scales far removed from contemporary experimental capabilities. This perspective motivates sensitive searches for rare decays of the kaon. The current status of these searches is reviewed, new results are presented, and progress in the near future is discussed. Opportunities for exciting physics research at a hadron facility are noted. 5 refs., 8 figs., 1 tab

  11. Spectral statistics of rare-earth nuclei: Investigation of shell model configuration effect

    Energy Technology Data Exchange (ETDEWEB)

    Sabri, H., E-mail: h-sabri@tabrizu.ac.ir

    2015-09-15

    The spectral statistics of even–even rare-earth nuclei are investigated by using all the available empirical data for Ba, Ce, Nd, Sm, Gd, Dy, Er, Yb and Hf isotopes. The Berry–Robnik distribution and Maximum Likelihood estimation technique are used for analyses. An obvious deviation from GOE is observed for considered nuclei and there are some suggestions about the effect due to mass, deformation parameter and shell model configurations.
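The GOE benchmark the analysis deviates from is characterised by the Wigner surmise for nearest-neighbour level spacings, P(s) = (π/2) s exp(−π s²/4); the Berry–Robnik distribution interpolates between this and the Poisson limit of regular dynamics. A sketch of the two limiting spacing distributions by inverse-CDF sampling (a didactic illustration, not the paper's analysis):

```python
import math, random

random.seed(2)

# Nearest-neighbour level spacings: Poisson (regular dynamics) versus the
# GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2 / 4) (chaotic dynamics).
# Both limits sampled by inverting the cumulative distribution function.
N = 50_000

def wigner_spacing(u):
    # Wigner surmise CDF: F(s) = 1 - exp(-pi s^2 / 4)
    return math.sqrt(-4.0 * math.log(1.0 - u) / math.pi)

def poisson_spacing(u):
    # Poisson CDF: F(s) = 1 - exp(-s)
    return -math.log(1.0 - u)

goe = [wigner_spacing(random.random()) for _ in range(N)]
poi = [poisson_spacing(random.random()) for _ in range(N)]

# Level repulsion: very small spacings are suppressed under GOE.
frac_small_goe = sum(s < 0.1 for s in goe) / N
frac_small_poi = sum(s < 0.1 for s in poi) / N
print(frac_small_goe, frac_small_poi)
```

Empirical spacing histograms lying between these two curves are what the Berry–Robnik fit quantifies.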

  12. Spectral statistics of rare-earth nuclei: Investigation of shell model configuration effect

    International Nuclear Information System (INIS)

    Sabri, H.

    2015-01-01

    The spectral statistics of even–even rare-earth nuclei are investigated by using all the available empirical data for Ba, Ce, Nd, Sm, Gd, Dy, Er, Yb and Hf isotopes. The Berry–Robnik distribution and Maximum Likelihood estimation technique are used for analyses. An obvious deviation from GOE is observed for considered nuclei and there are some suggestions about the effect due to mass, deformation parameter and shell model configurations

  13. Applying the International Medical Graduate Program Model to Alleviate the Supply Shortage of Accounting Doctoral Faculty

    Science.gov (United States)

    HassabElnaby, Hassan R.; Dobrzykowski, David D.; Tran, Oanh Thikie

    2012-01-01

    Accounting has been faced with a severe shortage in the supply of qualified doctoral faculty. Drawing upon the international mobility of foreign scholars and the spirit of the international medical graduate program, this article suggests a model to fill the demand in accounting doctoral faculty. The underlying assumption of the suggested model is…

  14. A cellular automation model accounting for bicycle's group behavior

    Science.gov (United States)

    Tang, Tie-Qiao; Rui, Ying-Xu; Zhang, Jian; Shang, Hua-Yan

    2018-02-01

Recently, the bicycle has again become an important mode of transport in China. Owing to the bicycle's merits, group behavior is widespread in urban traffic systems. However, little effort has been made to explore the impact of group behavior on bicycle flow. In this paper, we propose a CA (cellular automaton) model with group behavior to explore the complex traffic phenomena caused by shoulder group behavior and following group behavior on an open road. The numerical results illustrate that the proposed model can qualitatively describe the impacts of the two kinds of group behavior on bicycle flow, and that the effects are related to the mode and size of the groups. The results can help us better understand the impacts of bicycle group behavior on urban traffic systems and effectively control it.
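A CA model of this general family can be sketched in a few lines. The following is a simplified Nagel–Schreckenberg-style stand-in, not the paper's model: a "group" is mimicked by giving a block of riders a lower random-slowdown probability, and all parameter values are assumptions:

```python
import random

random.seed(3)

# Minimal CA for bicycle flow on a ring road. Each rider accelerates by 1
# up to VMAX, brakes to avoid the rider ahead, and randomly slows down.
L = 100          # road length in cells
VMAX = 2         # maximum speed (cells per step)
P_SOLO = 0.4     # random-slowdown probability, solo riders
P_GROUP = 0.1    # group riders coordinate, so they dawdle less

def step(pos, vel, p_slow):
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    newp, newv = pos[:], vel[:]
    for idx, i in enumerate(order):
        ahead = pos[order[(idx + 1) % len(order)]]
        gap = (ahead - pos[i]) % L          # distance to next rider
        v = min(vel[i] + 1, VMAX, gap - 1)  # accelerate, then brake
        if v > 0 and random.random() < p_slow[i]:
            v -= 1                          # random slowdown
        newv[i] = max(v, 0)
        newp[i] = (pos[i] + newv[i]) % L
    return newp, newv

n = 30
pos = random.sample(range(L), n)
vel = [0] * n
p_slow = [P_GROUP if i < 10 else P_SOLO for i in range(n)]  # 10 form a group

flow = 0
for t in range(500):
    pos, vel = step(pos, vel, p_slow)
    if t >= 100:                 # discard the transient
        flow += sum(vel)

mean_speed = flow / (400 * n)    # mean speed, a proxy for bicycle flow
print(mean_speed)
```

Varying the size of the low-`p_slow` block is the crude analogue of varying group size in the paper's experiments.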

  15. Accounting for latent classes in movie box office modeling

    OpenAIRE

    Antipov, Evgeny; Pokryshevskaya, Elena

    2010-01-01

    This paper addresses the issue of unobserved heterogeneity in film characteristics influence on box-office. We argue that the analysis of pooled samples, most common among researchers, does not shed light on underlying segmentations and leads to significantly different estimates obtained by researchers running similar regressions for movie success modeling. For instance, it may be expected that a restrictive MPAA rating is a box office poison for a family comedy, while it insignificantly infl...

  16. Biblical Scriptures impact on six ethical models influencing accounting practices

    OpenAIRE

    Rodgers, Waymond; Gago Rodríguez, Susana

    2006-01-01

The recent frauds in organizations have been a point of reflection among researchers and practitioners regarding the lack of morality in certain decision-making. We argue for a modification of the decision-making models that have been accepted in organizations, with stronger links to ethics and morality. To this end we propose a return to the base values of Christianity, supported by Bible scriptures, underlying six dominant ethical approaches that drive practices in organizations.

  17. Spherical Detector Device Mathematical Modelling with Taking into Account Detector Module Symmetry

    International Nuclear Information System (INIS)

    Batyj, V.G.; Fedorchenko, D.V.; Prokopets, S.I.; Prokopets, I.M.; Kazhmuradov, M.A.

    2005-01-01

A mathematical model of a spherical detector device that takes detector module symmetry into account is considered. An exact algorithm for simulating the measurement procedure with multiple radiation sources is developed. The modelling results show excellent agreement with calibration measurements.

  18. Computer-Based Resource Accounting Model for Automobile Technology Impact Assessment

    Science.gov (United States)

    1976-10-01

    A computer-implemented resource accounting model has been developed for assessing resource impacts of future automobile technology options. The resources tracked are materials, energy, capital, and labor. The model has been used in support of the Int...

  19. Financial Organization Information Security System Development using Modeling, IT assets and Accounts Classification Processes

    Directory of Open Access Journals (Sweden)

    Anton Sergeevich Zaytsev

    2013-12-01

This article deals with the processes of modeling and of IT asset and account classification. The key principles for configuring these processes are pointed out. A model of the organization of the Russian Federation's banking system is also developed.

  20. Nurse-directed care model in a psychiatric hospital: a model for clinical accountability.

    Science.gov (United States)

    E-Morris, Marlene; Caldwell, Barbara; Mencher, Kathleen J; Grogan, Kimberly; Judge-Gorny, Margaret; Patterson, Zelda; Christopher, Terrian; Smith, Russell C; McQuaide, Teresa

    2010-01-01

    The focus on recovery for persons with severe and persistent mental illness is leading state psychiatric hospitals to transform their method of care delivery. This article describes a quality improvement project involving a hospital's administration and multidisciplinary state-university affiliation that collaborated in the development and implementation of a nursing care delivery model in a state psychiatric hospital. The quality improvement project team instituted a new model to promote the hospital's vision of wellness and recovery through utilization of the therapeutic relationship and greater clinical accountability. Implementation of the model was accomplished in 2 phases: first, the establishment of a structure to lay the groundwork for accountability and, second, the development of a mechanism to provide a clinical supervision process for staff in their work with clients. Effectiveness of the model was assessed by surveys conducted at baseline and after implementation. Results indicated improvement in clinical practices and client living environment. As a secondary outcome, these improvements appeared to be associated with increased safety on the units evidenced by reduction in incidents of seclusion and restraint. Restructuring of the service delivery system of care so that clients are the center of clinical focus improves safety and can enhance the staff's attention to work with clients on their recovery. The role of the advanced practice nurse can influence the recovery of clients in state psychiatric hospitals. Future research should consider the impact on clients and their perceptions of the new service models.

  1. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants, using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of oil trade balance results in worsening of other components (probably non-oil trade balance) of CA and (iii) that the positive influence of terms of trade reveals functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals high citizens' future income growth expectations, which has negative impact on the CA.
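Jackknife Model Averaging selects weights over candidate models by minimising a leave-one-out cross-validation criterion. A toy sketch with two candidate models and a grid search over the weight; the data and models are illustrative, far simpler than the study's 48-model setup:

```python
import random

random.seed(4)

# Jackknife (leave-one-out) model averaging in miniature: combine two
# candidate models with the weight that minimises LOO prediction error.
n = 40
x = [random.uniform(0, 10) for _ in range(n)]
y = [1.0 + 0.5 * xi + random.gauss(0, 1.0) for xi in x]

def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda _: m

def fit_line(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return lambda v: b0 + b1 * v

def loo_predictions(fitter):
    """Refit without each point in turn and predict the held-out point."""
    preds = []
    for i in range(n):
        xs = x[:i] + x[i + 1:]
        ys = y[:i] + y[i + 1:]
        preds.append(fitter(xs, ys)(x[i]))
    return preds

p1 = loo_predictions(fit_mean)   # model 1: intercept only
p2 = loo_predictions(fit_line)   # model 2: linear in x

# Choose the averaging weight on a grid to minimise LOO squared error.
best_w, best_err = 0.0, float("inf")
for step in range(101):
    w = step / 100
    err = sum((w * a + (1 - w) * b - yi) ** 2
              for a, b, yi in zip(p1, p2, y))
    if err < best_err:
        best_w, best_err = w, err

print(best_w)   # weight on the intercept-only model
```

With a genuine linear signal the LOO criterion pushes the weight toward the linear model, which is the averaging logic scaled up in the study.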

  2. Towards accounting for dissolved iron speciation in global ocean models

    Directory of Open Access Journals (Sweden)

    A. Tagliabue

    2011-10-01

The trace metal iron (Fe) is now routinely included in state-of-the-art ocean general circulation and biogeochemistry models (OGCBMs) because of its key role as a limiting nutrient in regions of the world ocean important for carbon cycling and air-sea CO2 exchange. However, the complexities of the seawater Fe cycle, which impact its speciation and bioavailability, are simplified in such OGCBMs due to gaps in understanding and to avoid high computational costs. In a similar fashion to inorganic carbon speciation, we outline a means by which the complex speciation of Fe can be included in global OGCBMs in a reasonably cost-effective manner. We construct an Fe speciation model based on hypothesised relationships between rate constants and environmental variables (temperature, light, oxygen, pH, salinity) and assumptions regarding the binding strengths of Fe-complexing organic ligands, and test hypotheses regarding their distributions. As a result, we find that the global distribution of different Fe species is tightly controlled by spatio-temporal environmental variability and the distribution of Fe-binding ligands. Impacts on bioavailable Fe are highly sensitive to assumptions regarding which Fe species are bioavailable and how those species vary in space and time. When forced by representations of future ocean circulation and climate, we find large changes to the speciation of Fe governed by pH-mediated changes to redox kinetics. We speculate that these changes may exert selective pressure on phytoplankton Fe uptake strategies in the future ocean. In future work, more information on the sources and sinks of ocean Fe ligands, their bioavailability, the cycling of colloidal Fe species and the kinetics of Fe-surface coordination reactions would be invaluable. We hope our modeling approach can provide a means by which new observations of Fe speciation can be tested against hypotheses of the processes present in governing the ocean Fe cycle in an

  3. An extended lattice model accounting for traffic jerk

    Science.gov (United States)

    Redhu, Poonam; Siwach, Vikash

    2018-02-01

    In this paper, a flux difference lattice hydrodynamics model is extended by considering the traffic jerk effect which comes due to vehicular motion of non-motor automobiles. The effect of traffic jerk has been examined through linear stability analysis and shown that it can significantly enlarge the unstable region on the phase diagram. To describe the phase transition of traffic flow, mKdV equation near the critical point is derived through nonlinear stability analysis. The theoretical findings have been verified using numerical simulation which confirms that the jerk parameter plays an important role in stabilizing the traffic jam efficiently in sensing the flux difference of leading sites.

  4. MODEL OF ACCOUNTING ENGINEERING IN VIEW OF EARNINGS MANAGEMENT IN POLAND

    Directory of Open Access Journals (Sweden)

    Leszek Michalczyk

    2012-10-01

The article introduces the theoretical foundations of the author's original concept of accounting engineering. We proceed from the theoretical premise that accounting engineering is a system of accounting practice utilising differences in the recorded outcomes of economic events that result from the use of divergent accounting methods. Unlike, for instance, creative or praxeological accounting, accounting engineering is composed only, and under all circumstances, of lawful activities and adheres to the current regulations of balance sheet law. The aim of the article is to construct a model of accounting engineering that exploits the differences inherently present in variant accounting. These differences result in disparate financial results for identical economic events. Given that, regardless of which accounting variant is used, all settlements are eventually equal to one another, a new class of differences emerges - the accounting engineering potential. It is transferred to subsequent reporting (balance sheet) periods. In the end, the profit "made" in a given period reduces the financial result of future periods. This effect is due to the "transfer" of costs from one period to another. Such actions may have sundry consequences and are especially dangerous whenever many individuals are concerned with the profit of a given company, e.g. on a stock exchange. The reverse may be observed when a company is privatised and its value is intentionally reduced by a controlled recording of accounting provisions, depending on the degree to which they are justified. The reduction of a company's goodwill in Balcerowicz's model of no-tender privatisation allows the low value of the purchased company to be justified. These are only some of many manifestations of variant accounting which accounting engineering employs. A theoretical model of the latter is presented in this article.

  5. Predicting habitat suitability for rare plants at local spatial scales using a species distribution model.

    Science.gov (United States)

    Gogol-Prokurat, Melanie

    2011-01-01

    If species distribution models (SDMs) can rank habitat suitability at a local scale, they may be a valuable conservation planning tool for rare, patchily distributed species. This study assessed the ability of Maxent, an SDM reported to be appropriate for modeling rare species, to rank habitat suitability at a local scale for four edaphic endemic rare plants of gabbroic soils in El Dorado County, California, and examined the effects of grain size, spatial extent, and fine-grain environmental predictors on local-scale model accuracy. Models were developed using species occurrence data mapped on public lands and were evaluated using an independent data set of presence and absence locations on surrounding lands, mimicking a typical conservation-planning scenario that prioritizes potential habitat on unsurveyed lands surrounding known occurrences. Maxent produced models that were successful at discriminating between suitable and unsuitable habitat at the local scale for all four species, and predicted habitat suitability values were proportional to likelihood of occurrence or population abundance for three of four species. Unfortunately, models with the best discrimination (i.e., AUC) were not always the most useful for ranking habitat suitability. The use of independent test data showed metrics that were valuable for evaluating which variables and model choices (e.g., grain, extent) to use in guiding habitat prioritization for conservation of these species. A goodness-of-fit test was used to determine whether habitat suitability values ranked habitat suitability on a continuous scale. If they did not, a minimum acceptable error predicted area criterion was used to determine the threshold for classifying habitat as suitable or unsuitable. 
I found a trade-off between model extent and the use of fine-grain environmental variables: goodness of fit was improved at larger extents, and fine-grain environmental variables improved local-scale accuracy, but fine-grain variables
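Two of the evaluation quantities used in this kind of study, discrimination (AUC) against independent presence/absence data and a sensitivity-based threshold for classifying habitat as suitable or unsuitable, can be computed directly from ranked suitability scores. The scores below are synthetic stand-ins for Maxent output, and the 90% sensitivity target is an assumption:

```python
import random

random.seed(5)

# Synthetic suitability scores at independently surveyed sites:
# presences tend to score high, absences low.
presence = [random.betavariate(4, 2) for _ in range(60)]
absence = [random.betavariate(2, 4) for _ in range(90)]

def auc(pos, neg):
    """Probability a presence site outranks an absence site (ties count 0.5)."""
    wins = sum((p > a) + 0.5 * (p == a) for p in pos for a in neg)
    return wins / (len(pos) * len(neg))

def threshold_for_sensitivity(pos, target=0.90):
    """Lowest cutoff that still classifies `target` of presences as suitable."""
    ranked = sorted(pos, reverse=True)
    return ranked[round(target * len(ranked)) - 1]

a = auc(presence, absence)
thr = threshold_for_sensitivity(presence)
omission = sum(p < thr for p in presence) / len(presence)
print(round(a, 3), round(thr, 3), round(omission, 3))
```

A high AUC with a poorly calibrated continuous ranking is exactly the mismatch the study's goodness-of-fit test was designed to catch.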

  6. A new model in achieving Green Accounting at hotels in Bali

    Science.gov (United States)

    Astawa, I. P.; Ardina, C.; Yasa, I. M. S.; Parnata, I. K.

    2018-01-01

The concept of green accounting is debated in terms of its implementation in a company. Previous studies indicate that there is no standard model for implementing it in support of performance. This research aims to create a green accounting model that differs from existing models by using local cultural elements as the variables that build it. The research was conducted in two steps. The first step was designing the model based on theoretical studies, considering the main and supporting elements in building the concept of green accounting. The second step was testing the model at 60 five-star hotels, starting with data collection through questionnaires, followed by data processing using descriptive statistics. The results indicate that hotel owners have implemented green accounting attributes, which supports previous studies. Another result, a new finding, shows that local culture, government regulation, and the awareness of hotel owners play an important role in the development of the green accounting concept. The results of the research contribute to accounting science in the area of green reporting. Hotel management should draw on local culture in building the character of the accountants hired in the accounting department.

  7. An Integrative Model of the Strategic Management Accounting at the Enterprises of Chemical Industry

    Directory of Open Access Journals (Sweden)

    Aleksandra Vasilyevna Glushchenko

    2016-06-01

The issues of information and analytical support for strategic management, enabling timely, high-quality management decisions, are currently extremely relevant. Conflicting, low-quality information, collected haphazardly from unreliable sources, as occurs in the practice of large companies, hampers the effective implementation of their development strategies and carries the threat of risk under the increasing instability of the external environment. The chemical industry occupies a central place in Russian industry and naturally has its own specificity in the formation of an information-support system. An information system suitable for developing and implementing strategic directions changes the recognized competitive advantages of strategic management accounting. Resolving the lack of requirements for strategic accounting information, and its inconsistency resulting from simultaneous accumulation in different units using different methods of calculating and assessing indicators, is impossible without a well-constructed model for organizing strategic management accounting. The purpose of this study is to develop such a model, whose implementation will make it possible to achieve strategic goals by harmonizing information from the individual objects of strategic accounting, increasing the functional effectiveness of management decisions with a focus on strategy. The case study was based on dialectical logic, methods of system analysis, and the identification of causal relationships in building a model of strategic management accounting that contributes to forecasts of its development. The study proposes the implementation of an integrative model for organizing strategic management accounting. The purpose of a phased implementation of this model defines the objects and tools of strategic management accounting.
Moreover, it is determined that from the point of view of increasing the usefulness of management

  8. A model for genetic and epigenetic regulatory networks identifies rare pathways for transcription factor induced pluripotency

    Science.gov (United States)

    Artyomov, Maxim; Meissner, Alex; Chakraborty, Arup

    2010-03-01

    Most cells in an organism have the same DNA. Yet, different cell types express different proteins and carry out different functions. This is because of epigenetic differences; i.e., DNA in different cell types is packaged distinctly, making it hard to express certain genes while facilitating the expression of others. During development, upon receipt of appropriate cues, pluripotent embryonic stem cells differentiate into diverse cell types that make up the organism (e.g., a human). There has long been an effort to make this process go backward -- i.e., reprogram a differentiated cell (e.g., a skin cell) to pluripotent status. Recently, this has been achieved by transfecting certain transcription factors into differentiated cells. This method does not use embryonic material and promises the development of patient-specific regenerative medicine, but it is inefficient. The mechanisms that make reprogramming rare, or even possible, are poorly understood. We have developed the first computational model of transcription factor-induced reprogramming. Results obtained from the model are consistent with diverse observations, and identify the rare pathways that allow reprogramming to occur. If validated, our model could be further developed to design optimal strategies for reprogramming and shed light on basic questions in biology.

  9. Accounting for false-positive acoustic detections of bats using occupancy models

    Science.gov (United States)

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    1. Acoustic surveys have become a common survey method for bats and other vocal taxa. Previous work shows that bat echolocation may be misidentified, but common analytic methods, such as occupancy models, assume that misidentifications do not occur. Unless rare, such misidentifications could lead to incorrect inferences with significant management implications.
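The mixture structure of an occupancy model that allows false positives can be written down directly: a site yields detections either because it is occupied (per-survey detection probability p11) or through misidentification at an unoccupied site (p10). A sketch with illustrative parameter values, not estimates from the bat study:

```python
from math import comb

# Site-occupancy mixture with false positives. All values illustrative.
PSI = 0.6    # occupancy probability
P11 = 0.5    # per-survey detection probability, occupied site
P10 = 0.05   # per-survey false-positive probability, unoccupied site
K = 4        # surveys per site

def binom_pmf(y, k, p):
    return comb(k, y) * p ** y * (1 - p) ** (k - y)

def pr_detections(y):
    """Marginal probability of y detections in K surveys at a site."""
    return PSI * binom_pmf(y, K, P11) + (1 - PSI) * binom_pmf(y, K, P10)

def pr_occupied_given(y):
    """Posterior occupancy probability after observing y detections."""
    return PSI * binom_pmf(y, K, P11) / pr_detections(y)

# Under a model that ignores false positives, any detection would
# "prove" occupancy; with p10 > 0, one detection is weaker evidence.
print(round(pr_occupied_given(1), 3))
```

Setting `P10 = 0` recovers the standard occupancy model the abstract warns about, in which a single misidentification forces the site to be called occupied.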

  10. Chinese Basic Pension Substitution Rate: A Monte Carlo Demonstration of the Individual Account Model

    OpenAIRE

    Dong, Bei; Zhang, Ling; Lu, Xuan

    2008-01-01

At the end of 2005, the State Council of China passed “The Decision on adjusting the Individual Account of Basic Pension System”, which adjusted the individual account in the 1997 basic pension system. In this essay, we will analyze the adjustment above, and use Life Annuity Actuarial Theory to establish the basic pension substitution rate model. Monte Carlo simulation is also used to prove the rationality of the model. Some suggestions are put forward associated with the substitution rate ac...
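The substitution rate in an individual-account scheme is the first pension payment divided by the final wage, and Monte Carlo simulation propagates return uncertainty into it. A bare-bones sketch; the contribution rate, return distribution, wage growth and annuity factor are all assumptions, not the paper's calibration:

```python
import random, statistics

random.seed(6)

# Monte Carlo sketch of an individual-account substitution rate:
# contributions accumulate with random returns, the final balance buys
# a fixed annuity, and the first pension is compared with the final wage.
YEARS = 35           # contribution years
CONTRIB = 0.08       # contribution rate (share of wage)
WAGE_GROWTH = 0.05   # annual wage growth
ANNUITY_FACTOR = 15  # balance units per unit of annual pension
SIMS = 2000

def one_career():
    wage, balance = 1.0, 0.0
    for _ in range(YEARS):
        # annual return drawn from an illustrative normal distribution
        balance = balance * (1 + random.gauss(0.04, 0.08)) + CONTRIB * wage
        wage *= 1 + WAGE_GROWTH
    pension = balance / ANNUITY_FACTOR
    return pension / wage      # substitution rate vs final wage

rates = [one_career() for _ in range(SIMS)]
mean_rate = statistics.mean(rates)
print(round(mean_rate, 3), round(statistics.stdev(rates), 3))
```

In a proper actuarial version the fixed `ANNUITY_FACTOR` would be replaced by a life-annuity present value from mortality tables, which is what the paper's life annuity theory supplies.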

  11. Fitting a code-red virus spread model: An account of putting theory into practice

    NARCIS (Netherlands)

    Kolesnichenko, A.V.; Haverkort, Boudewijn R.H.M.; Remke, Anne Katharina Ingrid; de Boer, Pieter-Tjerk

    This paper is about fitting a model for the spreading of a computer virus to measured data, contributing not only the fitted model, but equally important, an account of the process of getting there. Over the last years, there has been an increased interest in epidemic models to study the speed of

  12. Estimating effects of rare haplotypes on failure time using a penalized Cox proportional hazards regression model

    Directory of Open Access Journals (Sweden)

    Tanck Michael WT

    2008-01-01

Background: This paper describes a likelihood approach to model the relation between failure time and haplotypes in studies with unrelated individuals where haplotype phase is unknown, while dealing with the problem of unstable estimates due to rare haplotypes by considering a penalized log-likelihood. Results: The Cox model presented here incorporates the uncertainty related to the unknown phase of multiple heterozygous individuals as weights. Estimation is performed with an EM algorithm. In the E-step the weights are estimated, and in the M-step the parameters are estimated by maximizing the expectation of the joint log-likelihood, and the baseline hazard function and haplotype frequencies are calculated. These steps are iterated until the parameter estimates converge. Two penalty functions are considered, namely the ridge penalty and a difference penalty, the latter based on the assumption that similar haplotypes show similar effects. Simulations were conducted to investigate the properties of the method, and the association between IL10 haplotypes and the risk of target vessel revascularization was investigated in 2653 patients from the GENDER study. Conclusion: Results from simulations and real data show that the penalized log-likelihood approach produces valid results, indicating that this method is of interest when studying the association between rare haplotypes and failure time in studies of unrelated individuals.
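The effect of the ridge penalty can be seen even in a one-covariate Cox model: with a rare binary covariate (carrier of a rare haplotype), the penalised estimate is pulled toward zero relative to the unpenalised one. A bare-bones sketch using grid optimisation of the partial likelihood, not the paper's EM algorithm; all data and parameter values are synthetic:

```python
import math, random

random.seed(7)

# Ridge-penalised Cox partial likelihood in one dimension: a rare binary
# covariate gives an unstable estimate, and lambda * beta^2 shrinks it.
n = 200
carrier = [1 if random.random() < 0.03 else 0 for _ in range(n)]  # rare
# Exponential event times with hazard ratio exp(0.5) for carriers.
time = [random.expovariate(math.exp(0.5 * c)) for c in carrier]
order = sorted(range(n), key=lambda i: time[i])   # event order, no censoring

def neg_pen_loglik(beta, lam):
    """Negative Cox partial log-likelihood plus ridge penalty."""
    nll = 0.0
    risk = sum(math.exp(beta * c) for c in carrier)   # full risk set
    for i in order:
        nll -= beta * carrier[i] - math.log(risk)
        risk -= math.exp(beta * carrier[i])           # leave the risk set
    return nll + lam * beta * beta

def argmin(lam):
    grid = [b / 100 for b in range(-300, 301)]
    return min(grid, key=lambda b: neg_pen_loglik(b, lam))

b_unpen = argmin(0.0)   # ordinary partial-likelihood estimate
b_ridge = argmin(5.0)   # ridge-penalised estimate, shrunk toward zero
print(b_unpen, b_ridge)
```

The paper's difference penalty works the same way but shrinks coefficients of similar haplotypes toward each other rather than toward zero.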

  13. MODELING SPATIAL DISTRIBUTION OF A RARE AND ENDANGERED PLANT SPECIES (Brainea insignis IN CENTRAL TAIWAN

    Directory of Open Access Journals (Sweden)

    W.-C. Wang

    2012-07-01

    With the rate of species extinction increasing, we should choose appropriate conservation methods, sustainable on the basis of sound science and human needs, to conserve ecosystems and rare species. Species distribution modeling (SDM) combines 3S technology with statistics and has become increasingly important in ecology. Brainea insignis (cycad-fern, CF) is categorized as a rare and endangered plant species, and was therefore chosen as the target for this study. Five sampling schemes were created from different combinations of CF samples collected at three sites in the Huisun forest station and one site 10 km farther north. Four models, MAXENT, GARP, generalized linear models (GLM), and discriminant analysis (DA), were developed from topographic variables and evaluated under the five sampling schemes. MAXENT was the most accurate, followed by GLM and GARP, with DA the least accurate. More importantly, the models can narrow the potential habitat to less than 10% of the study area in the first round of SDM, thereby prioritizing either field-survey areas where microclimatic, edaphic or biotic data can be collected to refine predictions of potential habitat in later rounds of SDM, or search areas for the discovery of new populations. However, predictive models based merely on topographic variables proved unable to extrapolate the spatial patterns of CF to a widely separated area or to a larger area. Follow-up studies will attempt to incorporate proxy indicators that can be extracted from hyperspectral images or LIDAR DEMs, substituting for direct parameters, to make the predictive models applicable on a broader scale.
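
    For presence-only data of this kind, Maxent is closely related to a discriminative fit of presence cells against background cells. The sketch below is a generic illustration under that simplification, with hypothetical elevation/slope features rather than the study's data or software: a hand-rolled logistic regression scores habitat suitability per cell.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical topographic features (elevation m, slope deg) per 1x1 km cell:
presence = rng.normal(loc=[1200.0, 20.0], scale=[100.0, 5.0], size=(60, 2))
background = rng.uniform(low=[0.0, 0.0], high=[3000.0, 60.0], size=(1000, 2))

X = np.vstack([presence, background])
y = np.concatenate([np.ones(60), np.zeros(1000)])
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize the features
X = np.hstack([X, np.ones((len(X), 1))])       # intercept column

w = np.zeros(3)
for _ in range(3000):                          # plain logistic regression
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - p) / len(y)

suitability = 1.0 / (1.0 + np.exp(-(X @ w)))   # habitat suitability score
print(suitability[:60].mean() > suitability[60:].mean())
```

    Thresholding `suitability` at the lowest score among presence cells mirrors the "lowest presence threshold" approach used to delimit potential habitat.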

  14. Mutual Calculations in Creating Accounting Models: A Demonstration of the Power of Matrix Mathematics in Accounting Education

    Science.gov (United States)

    Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg

    2016-01-01

    The aim of this paper is to describe the innovative teaching approach used at the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating this technique of teaching students to solve accounting cases using mutual calculations to a worldwide audience. The approach taken in this course…
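
    The matrix idea behind such a course can be sketched in a few lines (the chart of accounts and figures below are hypothetical, not the course material): record the period's transactions in a correspondence matrix whose entry (d, c) is the amount debited to account d and credited to account c, then recover each account's net movement from row and column sums.

```python
import numpy as np

# Hypothetical chart of accounts: 0 Cash, 1 Inventory, 2 Equity, 3 Revenue
accounts = ["Cash", "Inventory", "Equity", "Revenue"]
n = len(accounts)

# Correspondence matrix M: M[d, c] = total amount debited to account d
# and credited to account c over the period.
M = np.zeros((n, n))
M[0, 2] += 1000.0   # owner invests cash:      debit Cash,      credit Equity
M[1, 0] += 400.0    # buy inventory for cash:  debit Inventory, credit Cash
M[0, 3] += 250.0    # cash sale:               debit Cash,      credit Revenue

# Net movement per account = total debits (row sum) - total credits (col sum)
net = M.sum(axis=1) - M.sum(axis=0)
for name, value in zip(accounts, net):
    print(f"{name:>9}: {value:+.2f}")
print("balanced:", abs(net.sum()) < 1e-9)   # double entry => net sums to zero
```

    Because every transaction appears once as a debit (row) and once as a credit (column), the balance check holds by construction, which is what makes the matrix form attractive for teaching.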

  15. Review of searches for rare processes and physics beyond the Standard Model at HERA

    International Nuclear Information System (INIS)

    South, David M.; Turcato, Monica

    2016-01-01

    The electron-proton collisions collected by the H1 and ZEUS experiments at HERA comprise a unique particle physics data set, and a comprehensive range of measurements has been performed to provide new insight into the structure of the proton. The high centre-of-mass energy at HERA has also allowed rare processes to be studied, including the production of W and Z⁰ bosons and events with multiple leptons in the final state. The data have also opened up a new domain to searches for physics beyond the Standard Model, including contact interactions, leptoquarks, excited fermions and a number of supersymmetric models. This review presents a summary of such results, where the analyses reported correspond to an integrated luminosity of up to 1 fb⁻¹, representing the complete data set recorded by the H1 and ZEUS experiments. (orig.)

  16. Modeling the Distribution of Rare or Cryptic Bird Species of Taiwan

    Directory of Open Access Journals (Sweden)

    Tsai-Yu Wu

    2012-12-01

    For the study of the macroecology and conservation of Taiwan's birds, there was an urgent need to develop distribution models for bird species whose distributions had never before been modeled. We therefore model the distributions of 27 mostly rare and cryptic breeding bird species using a statistical approach shown to be especially reliable for species with small samples of presence localities, namely the maximum entropy (Maxent) modeling technique. For this purpose, we began with a dedicated attempt to collate as much high-quality distributional data as possible, assembling databases from several scientific reports, contacting individual data recorders, and searching publicly accessible databases, the internet and the available literature. This effort yielded 2022 grid cells of 1 × 1 km associated with a presence record for one of the 27 species. These records and 10 pre-selected environmental variables were then used to model each species' probability distribution, shown here with all grid cells below the lowest presence threshold converted to zero. We then discuss in detail the interpretation and applicability of these distributions, paying close attention to habitat requirements, the intactness and fragmentation of habitat, the general detectability of each species, and data reliability. This study is part of an ongoing series highlighting the usefulness of large electronic databases and modern analytical methods in the monitoring and assessment of Taiwan's bird species.

  17. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    Science.gov (United States)

    Guerrier, C.; Holcman, D.

    2017-07-01

    The main difficulty in simulating diffusion processes at the molecular level in cell microdomains is the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they explore a large portion of the space to bind small targets, such as buffers or active sites. Bridging the small and large spatial scales hinges on rare events, in which a Brownian particle finds a small target, characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult, due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for fast computation of diffusing fluxes in microdomains. The first is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory, which coarse-grains the geometry of the domain into Poissonian rates. The main application is diffusion in cell biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.
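
    Once narrow-escape theory has coarse-grained the geometry into a Poissonian binding rate per free particle, the Gillespie simulation is a short loop. The sketch below uses a hypothetical rate constant, not the authors' parameters: it draws successive binding times for N particles and checks the mean first-arrival time against its analytic value 1/(N·k).

```python
import random

def gillespie_arrivals(n_particles, k, rng):
    """Successive binding times when each free particle binds the small
    target with Poissonian rate k (narrow-escape coarse-graining)."""
    t, free, times = 0.0, n_particles, []
    while free > 0:
        t += rng.expovariate(free * k)   # exponential waiting time; total
        free -= 1                        # propensity = (free particles) * k
        times.append(t)
    return times

rng = random.Random(42)
k = 0.5                                  # hypothetical per-particle rate (1/s)
first = [gillespie_arrivals(10, k, rng)[0] for _ in range(20000)]
mean_first = sum(first) / len(first)
print(abs(mean_first - 1.0 / (10 * k)) < 0.01)  # mean first arrival = 1/(N*k)
```

    This is the sense in which rare binding events stop being a bottleneck: each event costs one exponential draw instead of a long Brownian trajectory.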

  18. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    Energy Technology Data Exchange (ETDEWEB)

    Guerrier, C. [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Holcman, D., E-mail: david.holcman@ens.fr [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Mathematical Institute, Oxford OX2 6GG, Newton Institute (United Kingdom)

    2017-07-01

    The main difficulty in simulating diffusion processes at the molecular level in cell microdomains is the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they explore a large portion of the space to bind small targets, such as buffers or active sites. Bridging the small and large spatial scales hinges on rare events, in which a Brownian particle finds a small target, characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult, due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for fast computation of diffusing fluxes in microdomains. The first is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory, which coarse-grains the geometry of the domain into Poissonian rates. The main application is diffusion in cell biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.

  19. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    International Nuclear Information System (INIS)

    Guerrier, C.; Holcman, D.

    2017-01-01

    The main difficulty in simulating diffusion processes at the molecular level in cell microdomains is the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they explore a large portion of the space to bind small targets, such as buffers or active sites. Bridging the small and large spatial scales hinges on rare events, in which a Brownian particle finds a small target, characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult, due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for fast computation of diffusing fluxes in microdomains. The first is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory, which coarse-grains the geometry of the domain into Poissonian rates. The main application is diffusion in cell biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.

  20. A Social Accountable Model for Medical Education System in Iran: A Grounded-Theory

    Directory of Open Access Journals (Sweden)

    Mohammadreza Abdolmaleki

    2017-10-01

    Social accountability has been increasingly discussed over the past three decades in various fields providing service to the community and has been expressed as a goal for various areas. In the medical education system, as in other social accountability areas, it is considered one of the main objectives globally. The aim of this study was to develop a social accountability theory for the medical education system capable of identifying all the standards, norms, and conditions within the country related to the study subject and recognizing their relationships. A total of eight experts in the field of social accountability in medical education with executive or research experience were interviewed personally. After analysis of the interviews, 379 codes, 59 secondary categories, 16 subcategories, and 9 main categories were obtained. The data were analyzed at three levels, open coding, axial coding, and selective coding, in the form of the grounded-theory study "Accountability model of medical education in Iran", which can be used in the education system's policies and planning for social accountability, given that almost all effective components of social accountability in the higher-education health system, with their causal and facilitating associations, were determined. Keywords: SOCIAL ACCOUNTABILITY, COMMUNITY-ORIENTED MEDICINE, COMMUNITY MEDICINE, EDUCATION SYSTEM, GROUNDED THEORY

  1. Toward a Useful Model for Group Mentoring in Public Accounting Firms

    Directory of Open Access Journals (Sweden)

    Steven J. Johnson

    2013-07-01

    Today’s public accounting firms face a number of challenges in relation to their most valuable resource and primary revenue generator, human capital. Expanding regulations, technology advances, increased competition and high turnover rates are just a few of the issues confronting public accounting leaders in today’s complex business environment. In recent years, some public accounting firms have attempted to combat low retention and high burnout rates with traditional one-to-one mentoring programs, with varying degrees of success. Many firms have found that they lack the resources necessary to successfully implement and maintain such programs. In other industries, organizations have used a group mentoring approach in an attempt to remove potential barriers to mentoring success. Although the research on group mentoring shows promise for positive organizational outcomes, no cases could be found in the literature regarding its use in a public accounting firm. Because of the unique challenges associated with public accounting firms, this paper attempts to answer two questions: (1) Does group mentoring provide a viable alternative to traditional mentoring in a public accounting firm? (2) If so, what general model might be used for implementing such a program? In answering these questions, a review of the group mentoring literature is provided, along with a suggested model for the implementation of group mentoring in a public accounting firm.

  2. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death…, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance… and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our…

  3. A simulation model of hospital management based on cost accounting analysis according to disease.

    Science.gov (United States)

    Tanaka, Koji; Sato, Junzo; Guo, Jinqiu; Takada, Akira; Yoshihara, Hiroyuki

    2004-12-01

    Since shortly before 2000, hospital cost accounting has been increasingly performed at Japanese national university hospitals. At Kumamoto University Hospital, for instance, departmental costs have been analyzed since 2000, and since 2003 the cost balance has been obtained by disease in preparation for the Diagnosis-Related Group and Prospective Payment System. On the basis of these experiences, we have constructed a simulation model of hospital management. In repeated trials the program has worked correctly and at satisfactory speed. Although there is room for improvement in the detailed accounts and the cost-accounting engine, the basic model has proved satisfactory. We have constructed a hospital management model based on the financial data of an existing hospital; we plan to improve the program's design and to incorporate a wider variety of hospital management data, offering a prospective outlook for the practical application of this hospital management model.

  4. Modelling the winter distribution of a rare and endangered migrant, the Aquatic Warbler Acrocephalus paludicola

    DEFF Research Database (Denmark)

    Walther, Bruno A; Schäffer, Norbert; van Niekerk, Adriaan

    2007-01-01

    … Such model predictions may be useful guidelines to focus further field research on the Aquatic Warbler. Given the excellent model predictions in this study, this novel technique may prove useful to model the distribution of other rare and endangered species, thus providing a means to guide future survey…

  5. Facility level SSAC for model country - an introduction and material balance accounting principles

    International Nuclear Information System (INIS)

    Jones, R.J.

    1989-01-01

    A facility-level State System of Accounting for and Control of Nuclear Materials (SSAC) for a model country, and the principles of material balance accounting relating to that country, are described. The seven principal elements of an SSAC are examined and a facility-level system based on them is discussed. The seven elements are: organization and management; nuclear material measurements; measurement quality; records and reports; physical inventory taking; material balance closing; and containment and surveillance. 11 refs., 19 figs., 5 tabs

  6. Assimilation of tourism satellite accounts and applied general equilibrium models to inform tourism policy analysis

    OpenAIRE

    Rossouw, Riaan; Saayman, Melville

    2011-01-01

    Historically, tourism policy analysis in South Africa has posed challenges to accurate measurement, primarily because tourism is not designated as an 'industry' in standard economic accounts. This paper therefore demonstrates the relevance of, and need for, applied general equilibrium (AGE) models to be completed and extended through integration with tourism satellite accounts (TSAs) as a tool for policy makers (especially tourism policy makers) in South Africa. The paper sets…

  7. Analysis Social Security System Model in South Sulawesi Province: On Accounting Perspective

    OpenAIRE

    Mediaty,; Said, Darwis; Syahrir,; Indrijawati, Aini

    2015-01-01

    This research aims to analyze poverty, education, and health in a social security system model from an accounting perspective, using an empirical study of South Sulawesi Province. Law No. 40 of 2004 on the National Social Security System is one expression of government attention to social welfare. Accounting, as a social science, is well placed to inform social security mechanisms, and one crucial mechanism is the social security system. This research is a grounded exploratory study w…

  8. School Board Improvement Plans in Relation to the AIP Model of Educational Accountability: A Content Analysis

    Science.gov (United States)

    van Barneveld, Christina; Stienstra, Wendy; Stewart, Sandra

    2006-01-01

    For this study we analyzed the content of school board improvement plans in relation to the Achievement-Indicators-Policy (AIP) model of educational accountability (Nagy, Demeris, & van Barneveld, 2000). We identified areas of congruence and incongruence between the plans and the model. Results suggested that the content of the improvement…

  9. Lessons learned for spatial modelling of ecosystem services in support of ecosystem accounting

    NARCIS (Netherlands)

    Schroter, M.; Remme, R.P.; Sumarga, E.; Barton, D.N.; Hein, L.G.

    2015-01-01

    Assessment of ecosystem services through spatial modelling plays a key role in ecosystem accounting. Spatial models for ecosystem services try to capture spatial heterogeneity with high accuracy. This endeavour, however, faces several practical constraints. In this article we analyse the trade-offs

  10. Drift diffusion model of reward and punishment learning in rare alpha-synuclein gene carriers.

    Science.gov (United States)

    Moustafa, Ahmed A; Kéri, Szabolcs; Polner, Bertalan; White, Corey

    To understand the cognitive effects of alpha-synuclein polymorphism, we employed a drift diffusion model (DDM) to analyze data from a reward- and punishment-guided probabilistic learning task completed by participants with the rare alpha-synuclein gene duplication and by age- and education-matched controls. Overall, the DDM analysis showed that, relative to controls, asymptomatic alpha-synuclein gene duplication carriers had significantly increased learning from negative feedback, while they tended to show impaired learning from positive feedback. No significant differences were found in response caution, response bias, or motor/encoding time. We discuss the implications of these computational findings for understanding the neural mechanisms of alpha-synuclein gene duplication.
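
    A DDM of the kind fitted here can be simulated directly with the Euler-Maruyama scheme: noisy evidence accumulates until it crosses an upper or lower decision boundary. The sketch below uses generic parameter values, not the study's fitted estimates, and omits the non-decision (motor/encoding) time component.

```python
import random

def ddm_trial(drift, bound, start, dt=0.001, noise=1.0, rng=random):
    """One Euler-Maruyama drift-diffusion trial: accumulate evidence x until
    it crosses 0 (lower) or `bound` (upper). Returns (choice, decision time)."""
    x, t = start, 0.0
    while 0.0 < x < bound:
        x += drift * dt + noise * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= bound else 0), t

rng = random.Random(7)
trials = [ddm_trial(drift=1.5, bound=2.0, start=1.0, rng=rng)
          for _ in range(1000)]
p_upper = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(p_upper > 0.5)     # positive drift favors the upper boundary
```

    In fitting, the drift rate captures learning from feedback, the boundary separation captures response caution, and the starting point captures response bias, which is how the parameters reported above map onto distinct cognitive processes.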

  11. Accounting for differences in dieting status: steps in the refinement of a model.

    Science.gov (United States)

    Huon, G; Hayne, A; Gunewardene, A; Strong, K; Lunn, N; Piira, T; Lim, J

    1999-12-01

    The overriding objective of this paper is to outline the steps involved in refining a structural model to explain differences in dieting status. Cross-sectional data (representing the responses of 1,644 teenage girls) derive from the preliminary testing in a 3-year longitudinal study. A battery of measures assessed social influence, vulnerability (to conformity) disposition, protective (social coping) skills, and aspects of positive familial context as core components in a model proposed to account for the initiation of dieting. Path analyses were used to establish the predictive ability of those separate components and their interrelationships in accounting for differences in dieting status. Several components of the model were found to be important predictors of dieting status. The model incorporates significant direct, indirect (or mediated), and moderating relationships. Taking all variables into account, the strongest prediction of dieting status was from peer competitiveness, using a new scale developed specifically for this study. Systematic analyses are crucial for the refinement of models to be used in large-scale multivariate studies. In the short term, the model investigated in this study has been shown to be useful in accounting for cross-sectional differences in dieting status. The refined model will be most powerfully employed in large-scale time-extended studies of the initiation of dieting to lose weight. Copyright 1999 by John Wiley & Sons, Inc.

  12. Modelling characteristics of photovoltaic panels with thermal phenomena taken into account

    International Nuclear Information System (INIS)

    Krac, Ewa; Górecki, Krzysztof

    2016-01-01

    In this paper a new form of the electrothermal model of photovoltaic panels is proposed. The model takes into account the optical, electrical and thermal properties of the considered panels, as well as the electrical and thermal properties of the protecting circuit and the thermal inertia of the panels. The form of this model is described, and some results of measurements and calculations for mono-crystalline and poly-crystalline panels are presented.

  13. Estimating the probabilities of rare arrhythmic events in multiscale computational models of cardiac cells and tissue.

    Directory of Open Access Journals (Sweden)

    Mark A Walker

    2017-11-01

    Ectopic heartbeats can trigger reentrant arrhythmias, leading to ventricular fibrillation and sudden cardiac death. Such events have been attributed to perturbed Ca2+ handling in cardiac myocytes, leading to spontaneous Ca2+ release and delayed afterdepolarizations (DADs). However, the ways in which perturbation of specific molecular mechanisms alters the probability of ectopic beats are not understood. We present a multiscale model of cardiac tissue incorporating a biophysically detailed three-dimensional model of the ventricular myocyte. This model reproduces realistic Ca2+ waves and DADs driven by stochastic Ca2+ release channel (RyR) gating, and is used to study mechanisms of DAD variability. In agreement with previous experimental and modeling studies, key factors influencing the distribution of DAD amplitude and timing include cytosolic and sarcoplasmic reticulum Ca2+ concentrations, inwardly rectifying potassium current (IK1) density, and gap junction conductance. The cardiac tissue model is used to investigate how random RyR gating gives rise to probabilistic triggered activity in a one-dimensional myocyte tissue model. A novel spatial-average filtering method is presented for estimating the probability of extreme (i.e. rare, high-amplitude) stochastic events from a limited set of spontaneous Ca2+ release profiles. These events occur when randomly organized clusters of cells exhibit synchronized, high-amplitude Ca2+ release flux. It is shown how reduced IK1 density and gap junction coupling, as observed in heart failure, increase the probability of extreme DADs by multiple orders of magnitude. This method enables prediction of arrhythmia likelihood and its modulation by alterations of other cellular mechanisms.

  14. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.

  15. Localized-itinerant magnetism: a simple model with applications to intermetallic of heavy rare-earths

    International Nuclear Information System (INIS)

    Ranke Perlingueiro, P.J. von.

    1986-01-01

    We have investigated various magnetic quantities of a system consisting of conduction electrons coupled to localized spins. In obtaining the magnetic state equations (which relate the ionic and electronic magnetisations to temperature and the model parameters) we have adopted the molecular field approximation. This simple model is of interest for the magnetism of the heavy rare-earth intermetallics. For these systems the localized spin is that of the 4f shell; it is described by the parameters g (the Landé factor) and J (the total angular momentum of the 4f electrons in the ground state). We derive an analytical linear relation between the critical temperature and the de Gennes factor (g-1)²J(J+1), which is experimentally observed for RAl₂. A fit between the experimental points and the theoretical prediction gives for the exchange parameter the value J₀ = 48.6 meV. We have also performed a parametric study of the model, using a rectangular energy density of states. The results are shown in tables and diagrams. (author)
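
    For concreteness, the de Gennes factor that the critical temperature scales with can be computed directly from Hund's-rule quantum numbers. The sketch below evaluates it for a few heavy rare-earth ions using standard textbook ground terms; these are not results from the paper.

```python
def lande_g(S, L, J):
    """Lande g-factor for a 4f ion with quantum numbers S, L, J."""
    return 1.0 + (J*(J+1) + S*(S+1) - L*(L+1)) / (2.0 * J*(J+1))

def de_gennes(S, L, J):
    """de Gennes factor (g-1)^2 J(J+1); in an isostructural heavy rare-earth
    series the magnetic ordering temperature scales roughly linearly with it."""
    g = lande_g(S, L, J)
    return (g - 1.0) ** 2 * J * (J + 1)

# Hund's-rule ground terms (S, L, J) of some heavy rare-earth ions
ions = {"Gd3+": (3.5, 0, 3.5), "Tb3+": (3, 3, 6), "Dy3+": (2.5, 5, 7.5)}
for name, (S, L, J) in ions.items():
    print(f"{name}: g = {lande_g(S, L, J):.3f}, dG = {de_gennes(S, L, J):.2f}")
```

    Gd3+ (L = 0, g = 2) gives the maximum value 15.75, which is why Gd compounds typically have the highest ordering temperatures in such a series.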

  16. PREDICTIVE CAPACITY OF INSOLVENCY MODELS BASED ON ACCOUNTING NUMBERS AND DESCRIPTIVE DATA

    Directory of Open Access Journals (Sweden)

    Rony Petson Santana de Souza

    2012-09-01

    In Brazil, research into models to predict insolvency started in the 1970s, with most authors using discriminant analysis as the statistical tool in their models. In more recent years, authors have increasingly tried to verify whether it is possible to forecast insolvency using descriptive data contained in firms’ reports. This study examines the capacity of several insolvency models to predict the failure of Brazilian companies that have gone bankrupt. The study is descriptive in nature with a quantitative approach, based on documentary research. The sample is composed of 13 companies that were declared bankrupt between 1997 and 2003. The results indicate that the majority of the insolvency prediction models tested showed high rates of correct forecasts, and that the models relying on descriptive reports were on average more likely to succeed than those based on accounting figures. These findings demonstrate that, although some studies indicate a lack of validity for predictive models created in different business settings, some of these models have good capacity to forecast insolvency in Brazil. We conclude that both models based on accounting numbers and those relying on descriptive reports can predict the failure of firms, and that the majority of bankruptcy prediction models that make use of accounting numbers can succeed in predicting firm failure.
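
    The abstract does not list the specific models tested; as a classic example of the discriminant-analysis, accounting-numbers family it refers to, Altman's (1968) five-ratio Z-score can be sketched as follows. The firm figures below are hypothetical.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Altman's (1968) Z-score: a discriminant-analysis insolvency model
    built from five accounting ratios. With the original cut-offs,
    Z < 1.81 signals distress and Z > 2.99 signals safety."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2*x1 + 1.4*x2 + 3.3*x3 + 0.6*x4 + 1.0*x5

# Hypothetical healthy vs distressed firm (figures in millions)
healthy = altman_z(30, 50, 25, 120, 200, total_assets=150, total_liabilities=60)
distressed = altman_z(-10, -20, -5, 10, 80, total_assets=150, total_liabilities=140)
print(healthy > 2.99, distressed < 1.81)
```

    The Brazilian models evaluated in the paper follow the same pattern, a linear score over accounting ratios with a classification cut-off, though with coefficients estimated on local data.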

  17. Hierarchical modeling for rare event detection and cell subset alignment across flow cytometry samples.

    Science.gov (United States)

    Cron, Andrew; Gouttefangeas, Cécile; Frelinger, Jacob; Lin, Lin; Singh, Satwinder K; Britten, Cedrik M; Welters, Marij J P; van der Burg, Sjoerd H; West, Mike; Chan, Cliburn

    2013-01-01

    Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet Process Gaussian Mixture Model (DPGMM) approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM) naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC) samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open source software that can take advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a consistent labeling…
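
    The Dirichlet process underlying the DPGMM and HDPGMM is most easily pictured through its stick-breaking construction; the sketch below is a generic illustration, not the authors' GPU software. Mixture weights are carved from a unit stick by repeated Beta(1, α) draws, so a few components dominate while infinitely many small ones remain available, which is what lets the model represent extremely low frequency subsets.

```python
import random

def stick_breaking(alpha, n_sticks, rng):
    """Truncated stick-breaking construction of Dirichlet process weights:
    v_k ~ Beta(1, alpha),  w_k = v_k * prod_{j<k} (1 - v_j)."""
    weights, remaining = [], 1.0
    for _ in range(n_sticks):
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

rng = random.Random(3)
w = stick_breaking(alpha=2.0, n_sticks=50, rng=rng)
print(all(x > 0 for x in w), sum(w) <= 1.0)   # weights of a near-complete stick
```

    Larger `alpha` spreads mass over more components; the hierarchical extension of the paper additionally ties the component weights across samples so that subsets align.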

  18. Hierarchical modeling for rare event detection and cell subset alignment across flow cytometry samples.

    Directory of Open Access Journals (Sweden)

    Andrew Cron

    Full Text Available Flow cytometry is the prototypical assay for multi-parameter single cell analysis, and is essential in vaccine and biomarker research for the enumeration of antigen-specific lymphocytes that are often found in extremely low frequencies (0.1% or less). Standard analysis of flow cytometry data relies on visual identification of cell subsets by experts, a process that is subjective and often difficult to reproduce. An alternative and more objective approach is the use of statistical models to identify cell subsets of interest in an automated fashion. Two specific challenges for automated analysis are to detect extremely low frequency event subsets without biasing the estimate by pre-processing enrichment, and the ability to align cell subsets across multiple data samples for comparative analysis. In this manuscript, we develop hierarchical modeling extensions to the Dirichlet Process Gaussian Mixture Model (DPGMM) approach we have previously described for cell subset identification, and show that the hierarchical DPGMM (HDPGMM) naturally generates an aligned data model that captures both commonalities and variations across multiple samples. HDPGMM also increases the sensitivity to extremely low frequency events by sharing information across multiple samples analyzed simultaneously. We validate the accuracy and reproducibility of HDPGMM estimates of antigen-specific T cells on clinically relevant reference peripheral blood mononuclear cell (PBMC) samples with known frequencies of antigen-specific T cells. These cell samples take advantage of retrovirally TCR-transduced T cells spiked into autologous PBMC samples to give a defined number of antigen-specific T cells detectable by HLA-peptide multimer binding. We provide open source software that can take advantage of both multiple processors and GPU-acceleration to perform the numerically-demanding computations. We show that hierarchical modeling is a useful probabilistic approach that can provide a consistent labeling of cell subsets across multiple samples.
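
    The mixture-model idea behind this record can be illustrated with a deliberately simplified sketch: a two-component, one-dimensional Gaussian mixture fitted by EM to events containing a small "spiked-in" subpopulation. This is a toy stand-in, not the authors' HDPGMM (which is a hierarchical Dirichlet-process mixture shared across many samples); all data, the initialisation scheme, and the 2% spike fraction are invented for illustration.

```python
import math
import random

def em_two_gaussians(data, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM (toy illustration)."""
    data = sorted(data)
    n = len(data)
    # Crude deterministic initialisation: one component at the median,
    # one at the maximum, so a small right-hand cluster can be captured.
    mu = [data[n // 2], data[-1]]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each event.
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-((x - mu[k]) ** 2) / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: update mixing weights, means and variances.
        for k in range(2):
            rk = sum(r[k] for r in resp)
            w[k] = rk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / rk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / rk, 1e-6)
    return w, mu, var

random.seed(0)
# 98% background events near 0, a 2% "antigen-specific" subset near 4.
events = ([random.gauss(0.0, 1.0) for _ in range(4900)]
          + [random.gauss(4.0, 0.5) for _ in range(100)])
w, mu, var = em_two_gaussians(events)
rare = 0 if w[0] < w[1] else 1
print(round(w[rare], 3), round(mu[rare], 2))
```

    The recovered weight of the minor component estimates the rare subset frequency directly; the hierarchical extension in the paper improves exactly this estimate by pooling evidence for the rare component across samples.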

  19. The Anachronism of the Local Public Accountancy Determinate by the Accrual European Model

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2009-01-01

    Full Text Available Superimposing the European accrual model on the cash accounting model currently used in Romania at the level of local communities makes the anachronism of the existing model apparent, focusing the discussion on whether the accrual model should be incorporated into everyday public practice. The foundations of the accrual model were first defined in the law regarding commercial companies adopted in Great Britain in 1985, which determined that all income and charges relating to the financial year "will be taken into consideration without any regard to the date of receipt or payment." Accrual accounting requires recording the non-cash effects of transactions or financial events in the periods in which they occur, rather than in the periods in which cash is generated, received or paid. The development of business was the basis for the "sophistication" of the recording of transactions and financial events, being a prerequisite for recording debtors' and creditors' sums.

  20. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method by estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR] = 1.2; 95% confidence interval [CI] = 0.6, 2.3). The HR for current smoking and therapy (HR = 0.4; 95% CI = 0.2, 0.7) was similar to the HR for no smoking and therapy (HR = 0.4; 95% CI = 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
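
    The imputation strategy this abstract describes can be sketched in simplified form: use a validation subgroup to estimate the distribution of the true exposure given the error-prone report and the outcome, impute repeatedly, and pool with Rubin's rules. The sketch below is a hypothetical risk-difference toy with simulated data, not the paper's inverse-probability-weighted Cox model; every parameter (cohort size, 20% misclassification, 0.10 risk increase) is invented.

```python
import random

rng = random.Random(42)

# Simulated cohort: true smoking S, error-prone report R (20% misclassified
# in both directions), binary outcome Y with risk 0.10 higher for smokers.
N, N_VAL = 20000, 4000
S = [1 if rng.random() < 0.3 else 0 for _ in range(N)]
R = [s if rng.random() < 0.8 else 1 - s for s in S]
Y = [1 if rng.random() < 0.10 + 0.10 * s else 0 for s in S]

# Naive analysis using the misclassified report directly (attenuated).
nr1 = sum(R)
naive_rd = (sum(y for y, r in zip(Y, R) if r) / nr1
            - sum(y for y, r in zip(Y, R) if not r) / (N - nr1))

# Validation subgroup with true status known: estimate P(S = 1 | R, Y).
# The imputation model must condition on the outcome Y as well as R,
# otherwise the exposure-outcome association is biased toward the null.
counts = {}
for i in range(N_VAL):
    ones, tot = counts.get((R[i], Y[i]), (0, 0))
    counts[(R[i], Y[i])] = (ones + S[i], tot + 1)
p_true = {key: ones / tot for key, (ones, tot) in counts.items()}

# Multiple imputation with Rubin's rules for a simple risk difference.
M = 25
estimates, variances = [], []
for _ in range(M):
    s_imp = [S[i] if i < N_VAL else
             (1 if rng.random() < p_true[(R[i], Y[i])] else 0)
             for i in range(N)]
    n1 = sum(s_imp)
    n0 = N - n1
    r1 = sum(y for y, s in zip(Y, s_imp) if s) / n1
    r0 = sum(y for y, s in zip(Y, s_imp) if not s) / n0
    estimates.append(r1 - r0)
    variances.append(r1 * (1 - r1) / n1 + r0 * (1 - r0) / n0)

q_bar = sum(estimates) / M                       # pooled point estimate
u_bar = sum(variances) / M                       # within-imputation variance
b_var = sum((q - q_bar) ** 2 for q in estimates) / (M - 1)
total_var = u_bar + (1 + 1 / M) * b_var          # Rubin's total variance
print(round(naive_rd, 3), round(q_bar, 3))
```

    As in the abstract, the naive estimate is attenuated toward the null, while the imputation-corrected pooled estimate recovers the simulated effect, at the cost of a wider (more honest) variance.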

  1. Model Application of Accounting Information Systems of Spare Parts Sales and Purchase on Car Service Company

    Directory of Open Access Journals (Sweden)

    Lianawati Christian

    2015-09-01

    Full Text Available The purpose of this research is to analyze accounting information systems for sales and purchases of spare parts in general car service companies and to identify the problems encountered and the information needs involved. This research used a literature study to collect data, a field study with observation, and design using UML (Unified Modeling Language), covering activity diagrams, class diagrams, use case diagrams, database design, form design, display design, and report drafts. The result achieved is an application model of accounting information systems for sales and purchases of spare parts in general car service companies. In conclusion, the accounting information systems for sales and purchases make it easy for management to obtain information quickly, as well as to present reports rapidly and accurately.

  2. The application of release models to the interpretation of rare gas coolant activities

    International Nuclear Information System (INIS)

    Wise, C.

    1985-01-01

    Much research is carried out into the release of fission products from UO₂ fuel and from failed pins. A significant application of these data is to define models of release which can be used to interpret measured coolant activities of rare gas isotopes. Such interpretation is necessary to extract operationally relevant parameters, such as the number and size of failures in the core and the ¹³¹I that might be released during depressurization faults. The latter figure forms part of the safety case for all operating CAGRs. This paper describes and justifies the models which are used in the ANAGRAM program to interpret CAGR coolant activities, highlighting any remaining uncertainties. The various methods by which the program can extract relevant information from the measurements are outlined, and examples are given of the analysis of coolant data. These analyses point to a generally well understood picture of fission gas release from low temperature failures. Areas of higher temperature release are identified where further research would be beneficial to coolant activity analysis. (author)

  3. Rare decays of the Z and the standard model, 4th generation, and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Weiler, T.J.

    1989-01-01

    Several issues in rare decays of the Z are addressed. The rate for flavor-changing Z decay grows as the fourth power of the fermion masses internal to the quantum loop, and so offers a window to the existence of ultraheavy (m > M_W) fermions. In the standard model, with three generations, BR(Z → bs) < 10⁻⁷ and BR(Z → tc) < 10⁻¹³. With four generations, BR(Z → bb₄) may be as large as 10⁻⁵ if m_b4 < M_Z; and similarly for BR(Z → N₄ν), where N₄ is the possibly heavy fourth-generation neutrino. In supersymmetric and other two-Higgs-doublet models, BR(Z → tc) may be as large as 5 × 10⁻⁶ in the three-generation scheme. With minimal supersymmetry, the reaction Z → Hγ is guaranteed to go, with a parameter-dependent branching ratio of 10^(-6±3). With mirror fermions or exotic E₆ fermions, the branching ratios for Z → ct (70 GeV), Z → μτ, and Z → bb₄ (70 GeV) are typically 10⁻⁴, 10⁻⁴, and 10⁻³ respectively, clearly measurable at LEP. Depending on unknown quark masses, the Z may mix with vector (b₄b̄₄) and the W may mix with vector (tb̄) or (ts̄). CP-violating asymmetries in flavor-changing Z decay are immeasurably small in the standard model, but may be large in supersymmetric and other nonstandard models. 28 refs.

  4. Rare decays of the Z and the standard model, 4th generation, and beyond

    International Nuclear Information System (INIS)

    Weiler, T.J.

    1989-01-01

    Several issues in rare decays of the Z are addressed. The rate for flavor-changing Z decay grows as the fourth power of the fermion masses internal to the quantum loop, and so offers a window to the existence of ultraheavy (m > M_W) fermions. In the standard model, with three generations, BR(Z → bs) < 10⁻⁷ and BR(Z → tc) < 10⁻¹³. With four generations, BR(Z → bb₄) may be as large as 10⁻⁵ if m_b4 < M_Z; and similarly for BR(Z → N₄ν), where N₄ is the possibly heavy fourth-generation neutrino. In supersymmetric and other two-Higgs-doublet models, BR(Z → tc) may be as large as 5 × 10⁻⁶ in the three-generation scheme. With minimal supersymmetry, the reaction Z → Hγ is guaranteed to go, with a parameter-dependent branching ratio of 10^(-6±3). With mirror fermions or exotic E₆ fermions, the branching ratios for Z → ct (70 GeV), Z → μτ, and Z → bb₄ (70 GeV) are typically 10⁻⁴, 10⁻⁴, and 10⁻³ respectively, clearly measurable at LEP. Depending on unknown quark masses, the Z may mix with vector (b₄b̄₄) and the W may mix with vector (tb̄) or (ts̄). CP-violating asymmetries in flavor-changing Z decay are immeasurably small in the standard model, but may be large in supersymmetric and other nonstandard models. 28 refs.

  5. The importance of data quality for generating reliable distribution models for rare, elusive, and cryptic species.

    Directory of Open Access Journals (Sweden)

    Keith B Aubry

    Full Text Available The availability of spatially referenced environmental data and species occurrence records in online databases enables practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated for rare, elusive, and cryptic species that are prone to misidentification in the field. We investigated this question for the fisher (Pekania pennanti), a forest carnivore of conservation concern in the Pacific States that is often confused with the more common Pacific marten (Martes caurina). Fisher occurrence records supported by physical evidence (verifiable records) were available from a limited area, whereas occurrence records of unknown quality (unscreened records) were available from throughout the fisher's historical range. We reserved 20% of the verifiable records to use as a test sample for both models and generated SDMs with each dataset using Maxent. The verifiable model performed substantially better than the unscreened model based on multiple metrics, including AUC_test values (0.78 and 0.62, respectively), evaluation of training and test gains, and statistical tests of how well each model predicted test localities. In addition, the verifiable model was consistent with our knowledge of the fisher's habitat relations and potential distribution, whereas the unscreened model indicated a much broader area of high-quality habitat (indices > 0.5) that included large expanses of high-elevation habitat that fishers do not occupy. Because Pacific martens remain relatively common in upper-elevation habitats in the Cascade Range and Sierra Nevada, the SDM based on unscreened records likely reflects primarily a conflation of marten and fisher habitat. Consequently, accurate identifications are far more important than the spatial extent of occurrence records for generating reliable SDMs.
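
    The AUC_test comparison reported in this record can be reproduced in miniature with the rank-based (Mann-Whitney) definition of AUC. The scores below are invented placeholders standing in for habitat-suitability predictions at held-out presence localities and background points, not data from the study.

```python
def auc(presence_scores, background_scores):
    """Rank-based AUC (Mann-Whitney): the probability that a random presence
    locality scores higher than a random background point; ties count 1/2."""
    wins = 0.0
    for p in presence_scores:
        for b in background_scores:
            if p > b:
                wins += 1.0
            elif p == b:
                wins += 0.5
    return wins / (len(presence_scores) * len(background_scores))

# Invented suitability scores at the same held-out presence localities under
# two hypothetical models, scored against shared background points.
verifiable_model = [0.9, 0.85, 0.8, 0.7, 0.6]
unscreened_model = [0.6, 0.4, 0.7, 0.5, 0.3]
background = [0.2, 0.5, 0.3, 0.6, 0.1, 0.4]

print(round(auc(verifiable_model, background), 2),
      round(auc(unscreened_model, background), 2))
```

    A higher AUC_test means the model ranks true presence localities above background more reliably, which is the sense in which 0.78 beats 0.62 in the abstract.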

  6. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    Science.gov (United States)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the updating structural parameter considered, with its mean and variance modeled as functions of temperature and excitation amplitude. The identified modal parameters over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response predictions. Finally, the calibrated model is used for damage identification of the footbridge.

  7. Accounting for Co-Teaching: A Guide for Policymakers and Developers of Value-Added Models

    Science.gov (United States)

    Isenberg, Eric; Walsh, Elias

    2015-01-01

    We outline the options available to policymakers for addressing co-teaching in a value-added model. Building on earlier work, we propose an improvement to a method of accounting for co-teaching that treats co-teachers as teams, with each teacher receiving equal credit for co-taught students. Hock and Isenberg (2012) described a method known as the…

  8. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test, aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance, and the protection of investors.

  9. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    Science.gov (United States)

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…

  10. Theoretical analysis of a hybrid traffic model accounting for safe velocity

    Science.gov (United States)

    Wang, Yu-Qing; Zhou, Chao-Fan; Yan, Bo-Wen; Zhang, De-Chen; Wang, Ji-Xin; Jia, Bin; Gao, Zi-You; Wu, Qing-Song

    2017-04-01

    A hybrid traffic-flow model [the Wang-Zhou-Yan (WZY) model] is proposed in this paper. In the WZY model, the global equilibrium velocity is replaced by the local equilibrium one, which emphasizes that the modification of vehicle velocity is based on safe driving rather than on global deployment. From the safe-driving viewpoint, the effect of drivers' estimation is taken into account. Moreover, a linear stability analysis of the traffic model has been performed. Furthermore, in order to test the robustness of the system, the evolution of the density wave and the velocity wave of the traffic flow has been numerically calculated.

  11. Improved Model-Independent Analysis of Semileptonic and Radiative Rare B Decays

    Energy Technology Data Exchange (ETDEWEB)

    Hiller, Gudrun

    2001-12-21

    We update the branching ratios for the inclusive decays B → X_s ℓ⁺ℓ⁻ and the exclusive decays B → (K, K*) ℓ⁺ℓ⁻, with ℓ = e, μ, in the standard model by including the explicit O(α_s) and Λ_QCD/m_b corrections. This framework is used in conjunction with the current measurements of the branching ratios for B → X_s γ and B → K ℓ⁺ℓ⁻ decays and upper limits on the branching ratios for the decays B → (K*, X_s) ℓ⁺ℓ⁻ to work out bounds on the Wilson coefficients C₇, C₈, C₉ and C₁₀ appearing in the effective Hamiltonian formalism. The resulting bounds are found to be consistent with the predictions of the standard model and some variants of supersymmetric theories. We illustrate the constraints on supersymmetric parameters that the current data on rare B decays imply in the context of the minimal flavour-violating model and in more general scenarios admitting additional flavour-changing mechanisms. Precise measurements of the dilepton invariant mass distributions in the decays B → (X_s, K*, K) ℓ⁺ℓ⁻, in particular in the lower dilepton mass region, and the forward-backward asymmetry in the decays B → (X_s, K*) ℓ⁺ℓ⁻, will greatly help in discriminating among the SM and various supersymmetric theories.

  12. Accounting of inter-electron correlations in the model of mobile electron shells

    International Nuclear Information System (INIS)

    Panov, Yu.D.; Moskvin, A.S.

    2000-01-01

    The basic features of the mobile electron shell model for a multielectron atom or cluster are studied. A variational technique is proposed to take account of the electron correlations, in which the coordinates of the center of a single-particle atomic orbital serve as variational parameters. This makes it possible to interpret dramatic variations of the electron density distribution under an anisotropic external influence in terms of a limited initial basis. Specific correlated states that might make a correlation contribution to the orbital current are studied. The paper presents a generalization of the typical MO-LCAO scheme with a limited set of single-particle functions, enabling additional multipole-multipole interactions in the cluster to be taken into account.

  13. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
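
    The resampling bootstrap this abstract compares against can be sketched minimally: resample patients with replacement, recompute the summary statistic (here a mean restricted life expectancy), and read off percentile limits. The survival times below are simulated placeholders, not trial data, and the simple mean stands in for the decision model's output.

```python
import random

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=3):
    """Nonparametric percentile bootstrap interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                   for _ in range(n_boot))
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[min(n_boot - 1, int((1 - alpha / 2) * n_boot))]
    return lo, hi

rng = random.Random(11)
# Invented restricted survival times (years), capped at a 10-year horizon.
times = [min(10.0, rng.expovariate(1 / 3.0)) for _ in range(400)]
mean_le = sum(times) / len(times)
lo, hi = percentile_bootstrap_ci(times, lambda d: sum(d) / len(d))
print(round(mean_le, 2), round(lo, 2), round(hi, 2))
```

    Propagating the whole adjustment pipeline through each resample is what widens the interval relative to the no-adjustment analysis criticized in the abstract.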

  14. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
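
    Regression calibration, one of the methods named in this record, can be sketched for the classical linear-regression case as a stand-in for the Cox setting: replicate measurements identify the error variance, and the naive slope is rescaled by the estimated reliability ratio. All values below are simulated; the true slope of 2.0 and the error standard deviation of 0.7 are arbitrary illustration choices.

```python
import random

rng = random.Random(7)

N = 20000
TRUE_BETA = 2.0
x = [rng.gauss(0, 1) for _ in range(N)]          # true covariate
# Two replicate error-prone measurements per subject (like replicate
# log-REM counts), classical additive error.
w1 = [xi + rng.gauss(0, 0.7) for xi in x]
w2 = [xi + rng.gauss(0, 0.7) for xi in x]
y = [TRUE_BETA * xi + rng.gauss(0, 1) for xi in x]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((vi - m) ** 2 for vi in v) / (len(v) - 1)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

# Naive slope: regress y on the error-prone w1; attenuated toward zero.
beta_naive = cov(w1, y) / var(w1)

# Regression calibration: the replicate disagreement identifies the error
# variance, Var(w1 - w2) = 2 * Var(U); rescale by the reliability ratio
# lambda = Var(X) / Var(W).
var_u = var([a - b for a, b in zip(w1, w2)]) / 2
lam = (var(w1) - var_u) / var(w1)
beta_cal = beta_naive / lam
print(round(beta_naive, 2), round(beta_cal, 2))
```

    This also illustrates the abstract's point about replicates: using both measurements pins down the error variance, which is what makes the bias correction possible without inflating the standard error unduly.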

  15. Model-Based Multifactor Dimensionality Reduction for Rare Variant Association Analysis.

    Science.gov (United States)

    Fouladi, Ramouna; Bessonov, Kyrylo; Van Lishout, François; Van Steen, Kristel

    2015-01-01

    Genome-wide association studies have revealed a vast number of common loci associated with human complex diseases. Still, a large proportion of heritability remains unexplained. The extent to which rare genetic variants (RVs) are able to explain a relevant portion of the genetic heritability for complex traits leaves room for several debates and paves the way to the collection of RV databases and the development of novel analytic tools to analyze these. To date, several statistical methods have been proposed to uncover the association of RVs with complex diseases, but none of them is the clear winner in all possible scenarios of study design and assumed underlying disease model. The latter may involve differences in the distributions of effect sizes, proportions of causal variants, and ratios of protective to deleterious variants at distinct regions throughout the genome. Therefore, there is a need for robust, scalable methods with acceptable overall performance in terms of power and type I error under various realistic scenarios. In this paper, we propose a novel RV association analysis strategy, which satisfies several of the desired properties that an RV analysis tool should exhibit. © 2015 S. Karger AG, Basel.

  16. Using a Rasch Model to Account for Guessing as a Source of Low Discrimination.

    Science.gov (United States)

    Humphry, Stephen

    2015-01-01

    The most common approach to modelling item discrimination and guessing for multiple-choice questions is the three parameter logistic (3PL) model. However, proponents of Rasch models generally avoid using the 3PL model because to model guessing entails sacrificing the distinctive property and advantages of Rasch models. One approach to dealing with guessing based on the application of Rasch models is to omit responses in which guessing appears to play a significant role. However, this approach entails loss of information and it does not account for variable item discrimination. It has been shown, though, that provided specific constraints are met, it is possible to parameterize discrimination while preserving the distinctive property of Rasch models. This article proposes an approach that uses Rasch models to account for guessing on standard multiple-choice items simply by treating it as a source of low item discrimination. Technical considerations are noted although a detailed examination of such considerations is beyond the scope of this article.
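
    The reframing of guessing as low discrimination can be made concrete by comparing item response functions. In the 3PL model, a lower asymptote c compresses the curve into the band [c, 1], reducing its maximum slope from 1/4 (the Rasch value) to a(1 - c)/4, so guessing looks empirically like reduced discrimination. The formulas are the standard ones; the parameter values are illustrative.

```python
import math

def p_rasch(theta, b):
    """Rasch model: P(correct) for ability theta, item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def p_3pl(theta, a, b, c):
    """3PL model: discrimination a, difficulty b, lower asymptote (guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Numerical slope at the inflection point theta = b: the Rasch curve has
# slope 1/4 there, while a guessing floor c scales the slope by (1 - c).
b, c = 0.0, 0.25
eps = 1e-6
slope_rasch = (p_rasch(b + eps, b) - p_rasch(b - eps, b)) / (2 * eps)
slope_guess = (p_3pl(b + eps, 1.0, b, c) - p_3pl(b - eps, 1.0, b, c)) / (2 * eps)
print(round(slope_rasch, 3), round(slope_guess, 3))
```

    Treating the flattened curve as a lower-discrimination item, as the article proposes, avoids estimating c explicitly and so preserves the Rasch framework.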

  17. Accounting for Local Dependence with the Rasch Model: The Paradox of Information Increase.

    Science.gov (United States)

    Andrich, David

    Test theories imply statistical local independence. Where local independence is violated, models of modern test theory that account for it have been proposed. One violation of local independence occurs when the response to one item governs the response to a subsequent item. Expanding on a formulation of this kind of violation between two items in the dichotomous Rasch model, this paper derives three related implications. First, it formalises how the polytomous Rasch model for an item constituted by summing the scores of the dependent items absorbs the dependence into its threshold structure. Second, it shows that, as a consequence, the unit when the dependence is accounted for is not the same as if the items had no response dependence. Third, it explains the paradox, known but not explained in the literature, that the greater the dependence of the constituent items, the greater the apparent information in the constituted polytomous item, when it should provide less information.
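
    For reference, the dichotomous and polytomous Rasch models discussed in this record are conventionally written as follows (standard notation, not taken from the paper itself): person ability β_n, item difficulty δ_i, and item thresholds τ_ik, with the empty sum for x = 0 taken as zero.

```latex
% Dichotomous Rasch model
P(X_{ni}=1) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}

% Polytomous Rasch model with thresholds \tau_{ik}, scores x = 0, \dots, m
P(X_{ni}=x) = \frac{\exp\!\left( x\beta_n - \sum_{k=1}^{x} \tau_{ik} \right)}
                   {\sum_{j=0}^{m} \exp\!\left( j\beta_n - \sum_{k=1}^{j} \tau_{ik} \right)}
```

    When a polytomous item is formed by summing dependent dichotomous items, the dependence appears as a change in the spacing of the thresholds τ_ik, which is the absorption mechanism the abstract refers to.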

  18. Cost accounting models used for price-setting of health services: an international review.

    Science.gov (United States)

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.

    Science.gov (United States)

    Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark

    2011-01-01

    Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that influence results most are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.

  20. Modeling of Accounting Doctoral Thesis with Emphasis on Solution for Financial Problems

    Directory of Open Access Journals (Sweden)

    F. Mansoori

    2015-02-01

    Full Text Available With the growth of graduate programs and research budgets, accounting knowledge in Iran has entered the research arena, and a number of accounting projects have been implemented in practice, yielding varied experience in applying accounting standards. These experiences were expected to help solve the country's financial problems; yet, despite considerable research effort, many financial and accounting problems remain. PhD theses could be considered one of the important means of advancing university subjects, including accounting. They are team efforts, legitimated by supervisory teams in universities. Applied theses should solve part of the problems in the accounting field, but unfortunately this has not happened in practice. The question that arises is why the output of applied, knowledge-based theses has not clarified these problems, and why policymakers in difficult situations prefer to rely on their own previous experience in important decisions rather than on knowledge-based recommendations. This research studies the reasons that prevent applied PhD theses from succeeding in practice, including the view that the policy suggestions produced by knowledge-based theses are not qualified enough for implementation. For this purpose, the indicators of an applied PhD thesis were identified and ranked by 110 experts, and a comprehensive set of applied PhD accounting theses were compared against them. As a result, the shortcomings of the studied theses were identified and a practical model for producing applied research was developed.

  1. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

    Full Text Available On the basis of a microscopic model of self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is built and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction gives no contribution into the self-consistent field, and the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of contribution from three-body forces. The effective mass and pressure are numerically calculated for the potential of "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, the interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, and the attractive interaction raises the effective mass. The question of thermodynamic stability of the Fermi system is considered and the three-body repulsive interaction is shown to extend the region of stability of the system with the interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.

  2. Modeling and Visualizing the Particle Beam in the Rare Isotope Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Rosenthal, Christopher [Argonne National Lab., IL (United States); Erdelyi, Bela [Argonne National Lab., IL (United States); Northern Illinois Univ. (United States)

    2006-01-01

    Argonne National Laboratory is actively pursuing research and design for a Rare Isotope Accelerator (RIA) facility that will aid basic research in nuclear physics by creating beams of unstable isotopes. Such a facility has been labeled as a high priority by the joint Department of Energy and National Science Foundation Nuclear Science Advisory Committee because it will allow more study on the nature of nucleonic matter, the origin of the elements, the Standard Model, and nuclear medicine. An important part of this research is computer simulations that model the behavior of the particle beam, specifically in the Fragment Separator. The Fragment Separator selects isotopes based on their trajectory in electromagnetic fields and then uses absorbers to separate particles with a certain mass and charge from the rest of the beam. This project focused on the development of a multivariate, correlated Gaussian distribution to model the distribution of particles in the beam as well as visualizations and analysis to view how this distribution changed when passing through an absorber. The distribution was developed in the COSY INFINITY programming language. The user inputs a covariance matrix and a vector of means for the six phase space variables, and the program outputs a vector of correlated, Gaussian random variables. A variety of random test cases were conducted in two, three and six variables. In each case, the expectation values, variances and covariances were calculated and they converged to the input values. The output of the absorber code is a large data set that stores all of the variables for each particle in the distribution. It is impossible to analyze such a large data set by hand, so visualizations and summary statistics had to be developed. The first visualization is a three-dimensional graph that shows the number of each isotope present after each slice of the absorber. A second graph plots any of the six phase space variables against any of the others to see
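
    The sampling scheme described above — a user-supplied covariance matrix and mean vector mapped to correlated Gaussian phase-space coordinates — is classically implemented with a Cholesky factorization. A minimal Python sketch (not the COSY INFINITY code; the function name and the example covariance are illustrative):

```python
import numpy as np

def correlated_gaussian(mean, cov, n, rng):
    """Draw n samples from N(mean, cov) via Cholesky factorization."""
    L = np.linalg.cholesky(cov)                 # cov = L @ L.T
    z = rng.standard_normal((n, len(mean)))     # independent N(0, 1) draws
    return mean + z @ L.T                       # correlate and shift

rng = np.random.default_rng(0)
# Six phase-space variables with an illustrative positive-definite covariance.
mean = np.zeros(6)
cov = 0.2 * np.ones((6, 6)) + 0.8 * np.eye(6)
samples = correlated_gaussian(mean, cov, 200_000, rng)

# As in the test cases described above, the sample moments converge
# to the requested means and covariances as n grows.
print(np.abs(samples.mean(axis=0) - mean).max())
print(np.abs(np.cov(samples.T) - cov).max())
```

    The same construction works for any number of variables, provided the covariance matrix is positive definite.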

  3. Analysis of a microscopic model of taking into account 2p2h configurations

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.; Tkachev, V.N.

    1986-01-01

    The Green's-function method has been used to obtain a general equation for the effective field in a nucleus, taking into account both 1p1h and 2p2h configurations. This equation has been used as the starting point for derivation of a previously developed microscopic model of taking 1p1h+phonon configurations into account in magic nuclei. The equation for the density matrix is analyzed in this model. It is shown that the number of quasiparticles is conserved. An equation is obtained for the effective field in the coordinate representation, which provides a formulation of the problem in the 1p1h+2p2h+continuum approximation. The equation is derived and quantitatively analyzed in the space of one-phonon states

  4. A Simple Accounting-based Valuation Model for the Debt Tax Shield

    Directory of Open Access Journals (Sweden)

    Andreas Scholze

    2010-05-01

    Full Text Available This paper describes a simple way to integrate the debt tax shield into an accounting-based valuation model. The market value of equity is determined by forecasting residual operating income, which is calculated by charging operating income for the operating assets at a required return that accounts for the tax benefit that comes from borrowing to raise cash for the operations. The model assumes that the firm maintains a deterministic financial leverage ratio, which tends to converge quickly to typical steady-state levels over time. From a practical point of view, this characteristic is of particular help, because it allows a continuing value calculation at the end of a short forecast period.
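
    The valuation mechanics can be illustrated with toy numbers (all figures hypothetical; following the abstract, the required return charged on operating assets is assumed to already embed the debt tax shield):

```python
# Toy residual-operating-income valuation (all inputs hypothetical).
required_return = 0.08            # charge on operating assets, tax benefit included
assets = 1000.0                   # opening book value of operating assets
net_debt = 300.0
op_income = [95.0, 102.0, 110.0]  # forecast operating income, years 1-3
growth = 0.02                     # steady-state growth after year 3

value = assets - net_debt         # book value of equity as the anchor
for t, oi in enumerate(op_income, start=1):
    roi = oi - required_return * assets        # residual operating income
    value += roi / (1 + required_return) ** t  # discount to today
    assets *= 1 + growth                       # simple asset forecast

# Continuing value at the end of the short forecast period:
# year-3 residual income growing at g forever, discounted to today.
roi_next = roi * (1 + growth)
value += (roi_next / (required_return - growth)) / (1 + required_return) ** 3
print(round(value, 2))
```

    The short explicit horizon plus a continuing value is exactly the practical advantage the abstract highlights.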

  5. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect

    Science.gov (United States)

    Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt and prescribe a constant reduction for the entire melt across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to the one of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the
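
    A heavily simplified sketch of a thickness-dependent temperature-index melt model in the spirit of DETI (all coefficients invented, not the calibrated Miage values; note that the real parameterization also reproduces melt enhancement under very thin debris, which this toy exponential damping does not):

```python
import numpy as np

def debris_ti_melt(temp_c, swrad, debris_m, tf0=0.05, srf0=0.0094, d_star=0.1):
    """Toy debris temperature-index melt rate (arbitrary units).

    Empirical factors decay with debris thickness to mimic the damping of the
    surface signal at the debris-ice interface; coefficients are illustrative.
    """
    damping = np.exp(-debris_m / d_star)  # thicker debris -> less melt
    tf = tf0 * damping                    # temperature factor
    srf = srf0 * damping                  # shortwave radiation factor
    return np.maximum(0.0, tf * temp_c + srf * swrad)

thicknesses = np.array([0.02, 0.10, 0.30])  # m of debris
melt = debris_ti_melt(temp_c=5.0, swrad=600.0, debris_m=thicknesses)
print(melt)  # melt decreases nonlinearly with debris thickness
```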

  6. Integrative analysis of functional genomic annotations and sequencing data to identify rare causal variants via hierarchical modeling

    Directory of Open Access Journals (Sweden)

    Marinela eCapanu

    2015-05-01

    Full Text Available Identifying the small number of rare causal variants contributing to disease has been a major focus of investigation in recent years, but represents a formidable statistical challenge due to the rare frequencies with which these variants are observed. In this commentary we draw attention to a formal statistical framework, namely hierarchical modeling, to combine functional genomic annotations with sequencing data with the objective of enhancing our ability to identify rare causal variants. Using simulations we show that in all configurations studied, the hierarchical modeling approach has superior discriminatory ability compared to a recently proposed aggregate measure of deleteriousness, the Combined Annotation-Dependent Depletion (CADD) score, supporting our premise that aggregate functional genomic measures can more accurately identify causal variants when used in conjunction with sequencing data through a hierarchical modeling approach

  7. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were performed to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals over the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared parameter estimation to the estimations given by the Poisson regression model. Variance of the number of primary care consultations (Var[Y]=21.03) was greater than the mean (E[Y]=5.93) and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. Interpretation of estimates from two variables (using the Internet to seek health information and single parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (use of a robust estimator) and 0.45 and 0.13 (negative binomial) respectively. Different methods exist to solve the problem of underestimating variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems to be particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
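
    The chi2/df diagnostic used in this study is easy to reproduce on simulated data; a self-contained numpy sketch (the simulated counts are illustrative, not the study data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate overdispersed counts (negative binomial, mean about 6),
# standing in for "number of primary care consultations".
n_obs, disp, mu = 500, 2.0, 6.0
y = rng.negative_binomial(disp, disp / (disp + mu), n_obs)

# Intercept-only Poisson fit: the MLE of the rate is the sample mean.
mu_hat = y.mean()

# Pearson chi2 over degrees of freedom; values well above 1 flag
# overdispersion and hence understated Poisson standard errors.
pearson = ((y - mu_hat) ** 2 / mu_hat).sum()
dispersion = pearson / (n_obs - 1)
print(y.mean(), y.var(), dispersion)
```

    A negative binomial fit would then model the extra-Poisson variance explicitly, as the authors recommend.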

  8. Modeling Laterally Loaded Single Piles Accounting for Nonlinear Soil-Pile Interactions

    Directory of Open Access Journals (Sweden)

    Maryam Mardfekri

    2013-01-01

    Full Text Available The nonlinear behavior of a laterally loaded monopile foundation is studied using the finite element method (FEM to account for soil-pile interactions. Three-dimensional (3D finite element modeling is a convenient and reliable approach to account for the continuity of the soil mass and the nonlinearity of the soil-pile interactions. Existing simple methods for predicting the deflection of laterally loaded single piles in sand and clay (e.g., beam on elastic foundation, p-y method, and SALLOP are assessed using linear and nonlinear finite element analyses. The results indicate that for the specific case considered here the p-y method provides a reasonable accuracy, in spite of its simplicity, in predicting the lateral deflection of single piles. A simplified linear finite element (FE analysis of piles, often used in the literature, is also investigated and the influence of accounting for the pile diameter in the simplified linear FE model is evaluated. It is shown that modeling the pile as a line with beam-column elements results in a reduced contribution of the surrounding soil to the lateral stiffness of the pile and an increase of up to 200% in the predicted maximum lateral displacement of the pile head.
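
    The "beam on elastic foundation" baseline assessed above has a classical closed-form solution for a long pile under a lateral point load at its head (Hetényi's semi-infinite Winkler beam); a sketch with illustrative pile and soil values, not those of the paper:

```python
import numpy as np

# Semi-infinite beam on a Winkler (elastic) foundation with a lateral point
# load P at its end: y(x) = (2*P*beta/k) * exp(-beta*x) * cos(beta*x),
# where beta = (k / (4*E*I))**0.25. All input values are illustrative.
E = 30e9                 # Pa, concrete
d = 1.0                  # m, pile diameter
I = np.pi * d**4 / 64    # m^4, circular cross-section
k = 20e6                 # N/m^2, subgrade reaction per unit length
P = 500e3                # N, lateral load at the head

beta = (k / (4 * E * I)) ** 0.25
x = np.linspace(0.0, 10.0, 201)                      # depth along the pile, m
y = (2 * P * beta / k) * np.exp(-beta * x) * np.cos(beta * x)

print(f"beta = {beta:.3f} 1/m, head deflection = {y[0] * 1000:.1f} mm")
```

    The deflection decays exponentially with depth, which is why only the upper few pile diameters of soil control the lateral stiffness, consistent with the reduced-soil-contribution effect discussed in the abstract.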

  9. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE

    Directory of Open Access Journals (Sweden)

    Brentani Helena

    2004-08-01

    Full Text Available Abstract Background An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE, "Digital Northern" or Massively Parallel Signature Sequencing (MPSS, is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results We introduce a Bayesian model that accounts for the within-class variability by means of mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries" and the Beta-Binomial model, are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression to a more reliable one. Our method is freely available, under GPL/GNU copyleft, through a user friendly web-based on-line tool or as R language scripts at supplemental web-site.

  10. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE).

    Science.gov (United States)

    Vêncio, Ricardo Z N; Brentani, Helena; Patrão, Diogo F C; Pereira, Carlos A B

    2004-08-31

    An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern" or Massively Parallel Signature Sequencing (MPSS), is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. We introduce a Bayesian model that accounts for the within-class variability by means of mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model, are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression to a more reliable one. Our method is freely available, under GPL/GNU copyleft, through a user friendly web-based on-line tool or as R language scripts at supplemental web-site.
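
    The within-class variability the model targets can be mimicked with the Beta-Binomial mixture the abstract mentions: each library draws its own tag frequency before binomial (technical) sampling. A small numpy illustration (all counts and frequencies invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n_libraries, tags_per_library = 2000, 50_000
p_mean, p_sd = 2e-4, 1e-4   # tag frequency varies across individuals

# Technical sampling error only: one shared tag frequency.
counts_binom = rng.binomial(tags_per_library, p_mean, n_libraries)

# Within-class biological variability: each library has its own frequency,
# drawn from a Beta with the same mean (a Beta-Binomial mixture).
common = p_mean * (1 - p_mean) / p_sd**2 - 1  # method-of-moments factor
a, b = p_mean * common, (1 - p_mean) * common
p_i = rng.beta(a, b, n_libraries)
counts_mix = rng.binomial(tags_per_library, p_i)

# The mixture inflates the count variance well beyond binomial sampling
# error, which is why ignoring it overstates differential expression.
print(counts_binom.var(), counts_mix.var())
```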

  11. A multiscale active structural model of the arterial wall accounting for smooth muscle dynamics.

    Science.gov (United States)

    Coccarelli, Alberto; Edwards, David Hughes; Aggarwal, Ankush; Nithiarasu, Perumal; Parthimos, Dimitris

    2018-02-01

    Arterial wall dynamics arise from the synergy of passive mechano-elastic properties of the vascular tissue and the active contractile behaviour of smooth muscle cells (SMCs) that form the media layer of vessels. We have developed a computational framework that incorporates both these components to account for vascular responses to mechanical and pharmacological stimuli. To validate the proposed framework and demonstrate its potential for testing hypotheses on the pathogenesis of vascular disease, we have employed a number of pharmacological probes that modulate the arterial wall contractile machinery by selectively inhibiting a range of intracellular signalling pathways. Experimental probes used on ring segments from the rabbit central ear artery are: phenylephrine, a selective α 1-adrenergic receptor agonist that induces vasoconstriction; cyclopiazonic acid (CPA), a specific inhibitor of sarcoplasmic/endoplasmic reticulum Ca 2+ -ATPase; and ryanodine, a diterpenoid that modulates Ca 2+ release from the sarcoplasmic reticulum. These interventions were able to delineate the role of membrane versus intracellular signalling, previously identified as main factors in smooth muscle contraction and the generation of vessel tone. Each SMC was modelled by a system of nonlinear differential equations that account for intracellular ionic signalling, and in particular Ca 2+ dynamics. Cytosolic Ca 2+ concentrations formed the catalytic input to a cross-bridge kinetics model. Contractile output from these cellular components forms the input to the finite-element model of the arterial rings under isometric conditions that reproduces the experimental conditions. The model does not account for the role of the endothelium, as the nitric oxide production was suppressed by the action of L-NAME, and also due to the absence of shear stress on the arterial ring, as the experimental set-up did not involve flow. Simulations generated by the integrated model closely matched experimental

  12. Modeling 2-alternative forced-choice tasks: Accounting for both magnitude and difference effects.

    Science.gov (United States)

    Ratcliff, Roger; Voskuilen, Chelsea; Teodorescu, Andrei

    2018-03-01

    We present a model-based analysis of two-alternative forced-choice tasks in which two stimuli are presented side by side and subjects must make a comparative judgment (e.g., which stimulus is brighter). Stimuli can vary on two dimensions, the difference in strength of the two stimuli and the magnitude of each stimulus. Differences between the two stimuli produce typical RT and accuracy effects (i.e., subjects respond more quickly and more accurately when there is a larger difference between the two). However, the overall magnitude of the pair of stimuli also affects RT and accuracy. In the more common two-choice task, a single stimulus is presented and the stimulus varies on only one dimension. In this two-stimulus task, if the standard diffusion decision model is fit to the data with only drift rate (evidence accumulation rate) differing among conditions, the model cannot fit the data. However, if either of two variability parameters is allowed to change with stimulus magnitude, the model can fit the data. This results in two models that are extremely constrained, with about one tenth as many parameters as data points, while at the same time accounting for accuracy and for correct and error RT distributions. While both of these versions of the diffusion model can account for the observed data, the model that allows across-trial variability in drift to vary might be preferred for theoretical reasons. The diffusion model fits are compared to the leaky competing accumulator model, which did not perform as well. Copyright © 2018 Elsevier Inc. All rights reserved.
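
    The mechanism described — across-trial drift variability growing with stimulus magnitude — can be illustrated with a bare-bones two-boundary diffusion simulation (parameter values are illustrative, not the fitted ones):

```python
import numpy as np

def simulate_ddm(drift, eta, n_trials, rng, a=0.1, z=0.05, s=0.1, dt=1e-3):
    """Two-boundary diffusion; returns (accuracy, mean decision time).

    drift: mean evidence accumulation rate; eta: SD of across-trial drift
    variability; a: boundary separation; z: start point; s: diffusion SD.
    """
    n_correct, rts = 0, []
    for _ in range(n_trials):
        v = rng.normal(drift, eta)   # drift varies from trial to trial
        x, t = z, 0.0
        while 0.0 < x < a:           # accumulate until a boundary is hit
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        n_correct += x >= a
        rts.append(t)
    return n_correct / n_trials, float(np.mean(rts))

rng = np.random.default_rng(1)
acc_lo, rt_lo = simulate_ddm(drift=0.2, eta=0.05, n_trials=400, rng=rng)
acc_hi, rt_hi = simulate_ddm(drift=0.2, eta=0.25, n_trials=400, rng=rng)
# Larger across-trial drift variability lowers accuracy at the same mean drift,
# which is how a variability parameter can absorb a stimulus-magnitude effect.
print(acc_lo, acc_hi)
```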

  13. Model of inventory replenishment in periodic review accounting for the occurrence of shortages

    Directory of Open Access Journals (Sweden)

    Stanisław Krzyżaniak

    2014-03-01

    Full Text Available Background: Despite the development of alternative concepts of goods flow management, the inventory management under conditions of random variations of demand is still an important issue, both from the point of view of inventory keeping and replenishment costs and the service level measured as the level of inventory availability. There is a number of inventory replenishment systems used in these conditions, but they are mostly developments of two basic systems: reorder point-based and periodic review-based. The paper deals with the latter system. Numerous researches indicate the need to improve the classical models describing that system, the reason being mainly the necessity to adapt the model better to the actual conditions. This allows a correct selection of parameters that control the used inventory replenishment system and - as a result - to obtain expected economic effects. Methods: This research aimed at building a model of the periodic review system to reflect the relations (observed during simulation tests between the volume of inventory shortages and the degree of accounting for so-called deferred demand, and the service level expressed as the probability of satisfying the demand in the review and the inventory replenishment cycle. The following model building and testing method has been applied: numerical simulation of inventory replenishment - detailed analysis of simulation results - construction of the model taking into account the regularities observed during the simulations - determination of principles of solving the system of relations creating the model - verification of the results obtained from the model using the results from simulation. Results: Presented are selected results of calculations based on classical formulas and using the developed model, which describe the relations between the service level and the parameters controlling the discussed inventory replenishment system. The results are compared to the simulation
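
    The periodic review, order-up-to-S system under study is straightforward to simulate; a sketch that estimates the cycle service level with deferred (backordered) demand (all parameters illustrative):

```python
import numpy as np

def simulate_periodic_review(S, review, lead, demand_mu, n_cycles, rng):
    """Order-up-to-S policy with backorders; returns the cycle service level,
    i.e. the fraction of review points with no outstanding shortage."""
    on_hand, pipeline = S, []       # start from the target level, nothing ordered
    cycles_ok = 0
    demands = rng.poisson(demand_mu, n_cycles * review)
    for t in range(n_cycles * review):
        pipeline = [(due - 1, q) for due, q in pipeline]  # orders age one day
        on_hand += sum(q for due, q in pipeline if due <= 0)
        pipeline = [(due, q) for due, q in pipeline if due > 0]
        on_hand -= demands[t]       # unmet demand is deferred (backordered)
        if t % review == review - 1:              # review point: order up to S
            position = on_hand + sum(q for _, q in pipeline)
            pipeline.append((lead, S - position))
            cycles_ok += on_hand >= 0
    return cycles_ok / n_cycles

rng = np.random.default_rng(3)
for S in (60, 80, 100):  # service level rises with the order-up-to level
    print(S, simulate_periodic_review(S, review=7, lead=2, demand_mu=10,
                                      n_cycles=2000, rng=rng))
```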

  14. A two-phase moisture transport model accounting for sorption hysteresis in layered porous building constructions

    DEFF Research Database (Denmark)

    Johannesson, Björn; Janz, Mårten

    2009-01-01

    and exhibits different transport properties. A successful model of such a case may shed light on the performance of different constructions with regard to, for example, mould growth and freeze-thaw damage. For this purpose a model has been developed which is based on a two-phase flow, vapor and liquid water…, with account also of sorption hysteresis. The different materials in the considered layered construction are assigned different properties, i.e. vapor and liquid water diffusivities and boundary (wetting and drying) sorption curves. Further, the scanning behavior between wetting and drying boundary curves…

  15. @AACAnatomy twitter account goes live: A sustainable social media model for professional societies.

    Science.gov (United States)

    Benjamin, Hannah K; Royer, Danielle F

    2018-05-01

    Social media, with its capabilities of fast, global information sharing, provides a useful medium for professional development, connecting and collaborating with peers, and outreach. The goals of this study were to describe a new, sustainable model for Twitter use by professional societies, and analyze its impact on @AACAnatomy, the Twitter account of the American Association of Clinical Anatomists. Under supervision of an Association committee member, an anatomy graduate student developed a protocol for publishing daily tweets for @AACAnatomy. Five tweet categories were used: Research, Announcements, Replies, Engagement, and Community. Analytics from the 6-month pilot phase were used to assess the impact of the new model. @AACAnatomy had a steady average growth of 33 new followers per month, with less than 10% likely representing Association members. Research tweets, based on Clinical Anatomy articles with an abstract link, were the most shared, averaging 5,451 impressions, 31 link clicks, and nine #ClinAnat hashtag clicks per month. However, tweets from non-Research categories accounted for the highest impression and engagement metrics in four out of six months. For all tweet categories, monthly averages show consistent interaction of followers with the account. Daily tweet publication resulted in a 103% follower increase. An active Twitter account successfully facilitated regular engagement with @AACAnatomy followers and the promotion of clinical anatomy topics within a broad community. This Twitter model has the potential for implementation by other societies as a sustainable medium for outreach, networking, collaboration, and member engagement. Clin. Anat. 31:566-575, 2018. © 2017 Wiley Periodicals, Inc.

  16. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    Science.gov (United States)

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled Individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  17. Accounting for misclassified outcomes in binary regression models using multiple imputation with internal validation data.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Troester, Melissa A; Richardson, David B

    2013-05-01

    Outcome misclassification is widespread in epidemiology, but methods to account for it are rarely used. We describe the use of multiple imputation to reduce bias when validation data are available for a subgroup of study participants. This approach is illustrated using data from 308 participants in the multicenter Herpetic Eye Disease Study between 1992 and 1998 (48% female; 85% white; median age, 49 years). The odds ratio comparing the acyclovir group with the placebo group on the gold-standard outcome (physician-diagnosed herpes simplex virus recurrence) was 0.62 (95% confidence interval (CI): 0.35, 1.09). We masked ourselves to physician diagnosis except for a 30% validation subgroup used to compare methods. Multiple imputation (odds ratio (OR) = 0.60; 95% CI: 0.24, 1.51) was compared with naive analysis using self-reported outcomes (OR = 0.90; 95% CI: 0.47, 1.73), analysis restricted to the validation subgroup (OR = 0.57; 95% CI: 0.20, 1.59), and direct maximum likelihood (OR = 0.62; 95% CI: 0.26, 1.53). In simulations, multiple imputation and direct maximum likelihood had greater statistical power than did analysis restricted to the validation subgroup, yet all 3 provided unbiased estimates of the odds ratio. The multiple-imputation approach was extended to estimate risk ratios using log-binomial regression. Multiple imputation has advantages regarding flexibility and ease of implementation for epidemiologists familiar with missing data methods.
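
    The pooling step of the multiple-imputation approach uses Rubin's rules; a minimal sketch on hypothetical log-odds-ratio estimates from m = 5 completed datasets (numbers invented, not the study's):

```python
import math

def rubins_rules(estimates, variances):
    """Pool m point estimates and within-imputation variances (Rubin's rules)."""
    m = len(estimates)
    q_bar = sum(estimates) / m                              # pooled estimate
    w_bar = sum(variances) / m                              # within-imputation var
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation var
    total_var = w_bar + (1 + 1 / m) * b
    return q_bar, total_var

# Hypothetical log(OR) estimates and variances from m = 5 imputed datasets.
ests = [-0.52, -0.47, -0.55, -0.44, -0.50]
vars_ = [0.21, 0.22, 0.20, 0.23, 0.21]
q, v = rubins_rules(ests, vars_)
lo, hi = q - 1.96 * math.sqrt(v), q + 1.96 * math.sqrt(v)
print(f"pooled log(OR) = {q:.3f}, 95% CI on OR: "
      f"({math.exp(lo):.2f}, {math.exp(hi):.2f})")
```

    The between-imputation term is what propagates the uncertainty about the unobserved gold-standard outcome into the final interval.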

  18. Accounting for standard errors of vision-specific latent trait in regression models.

    Science.gov (United States)

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of a Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analysis performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and multiple linear regression model for the assessment of the association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; its effectiveness was compared in our simulation study with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration the SEs of the estimated Rasch-scaled scores. The two models on real data differed for effect size estimations and the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (average of 5-fold decrease in bias) with comparable power and precision in estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Patient-reported data, using Rasch analysis techniques, do not take into account the SE of the latent trait in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits.

  19. Comparing dark matter models, modified Newtonian dynamics and modified gravity in accounting for galaxy rotation curves

    Science.gov (United States)

    Li, Xin; Tang, Li; Lin, Hai-Nan

    2017-05-01

    We compare six models (including the baryonic model, two dark matter models, two modified Newtonian dynamics models and one modified gravity model) in accounting for galaxy rotation curves. For the dark matter models, we assume NFW profile and core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom’s MOND theory with two different interpolation functions, the standard and the simple interpolation functions. For the modified gravity, we focus on Moffat’s MSTG theory. We fit these models to the observed rotation curves of 9 high-surface brightness and 9 low-surface brightness galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies can be best fitted by the baryonic model without involving nonluminous dark matter. MOND can fit the largest number of galaxies, and only one galaxy can be best fitted by the MSTG model. Core-modified model fits about half the LSB galaxies well, but no HSB galaxies, while the NFW model fits only a small fraction of HSB galaxies but no LSB galaxies. This may imply that the oversimplified NFW and core-modified profiles cannot model the postulated dark matter haloes well. Supported by Fundamental Research Funds for the Central Universities (106112016CDJCR301206), National Natural Science Fund of China (11305181, 11547305 and 11603005), and Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)
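
    For the dark-halo fits, the NFW rotation curve follows directly from the enclosed mass; a short sketch (halo parameters illustrative):

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ_nfw(r_kpc, rho0, rs):
    """Circular velocity of an NFW halo: v = sqrt(G * M(<r) / r).

    rho0: characteristic density (Msun/kpc^3); rs: scale radius (kpc).
    """
    x = r_kpc / rs
    m_enc = 4 * np.pi * rho0 * rs**3 * (np.log(1 + x) - x / (1 + x))
    return np.sqrt(G * m_enc / r_kpc)

# Illustrative halo; a fit would adjust rho0 and rs to match observed curves.
v2 = v_circ_nfw(2.0, rho0=1.0e7, rs=10.0)
v20 = v_circ_nfw(20.0, rho0=1.0e7, rs=10.0)
print(f"v(2 kpc) ~ {v2:.0f} km/s, v(20 kpc) ~ {v20:.0f} km/s")
```

    The NFW curve keeps rising out to roughly twice the scale radius, which is one reason the cuspy NFW profile struggles with the slowly rising curves of LSB galaxies noted above.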

  20. An analytical model accounting for tip shape evolution during atom probe analysis of heterogeneous materials.

    Science.gov (United States)

    Rolland, N; Larson, D J; Geiser, B P; Duguay, S; Vurpillot, F; Blavette, D

    2015-12-01

    An analytical model describing the field evaporation dynamics of a tip made of a thin layer deposited on a substrate is presented in this paper. The difference in evaporation field between the materials is taken into account in this approach in which the tip shape is modeled at a mesoscopic scale. It was found that the non-existence of sharp edge on the surface is a sufficient condition to derive the morphological evolution during successive evaporation of the layers. This modeling gives an instantaneous and smooth analytical representation of the surface that shows good agreement with finite difference simulations results, and a specific regime of evaporation was highlighted when the substrate is a low evaporation field phase. In addition, the model makes it possible to calculate theoretically the tip analyzed volume, potentially opening up new horizons for atom probe tomographic reconstruction. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Extremely rare collapse and build-up of turbulence in stochastic models of transitional wall flows

    Science.gov (United States)

    Rolland, Joran

    2018-02-01

    This paper presents a numerical and theoretical study of multistability in two stochastic models of transitional wall flows. An algorithm dedicated to the computation of rare events is adapted to these two stochastic models. The main focus is placed on a stochastic partial differential equation model proposed by Barkley. Three types of events are computed in a systematic and reproducible manner: (i) the collapse of isolated puffs and of domains initially containing their steady turbulent fraction; (ii) puff splitting; (iii) the build-up of turbulence from the laminar base flow under a noise perturbation of vanishing variance. For build-up events, an extreme realization of the vanishing-variance noise pushes the state from the laminar base flow to the most probable germ of turbulence, which in turn develops into a full-blown puff. For collapse events, the Reynolds number and length ranges of the two regimes of collapse of laminar-turbulent pipes, independent collapse or global collapse of puffs, are determined. The mean first passage time before each event is then systematically computed as a function of the Reynolds number r and pipe length L in the laminar-turbulent coexistence range of Reynolds number. In the case of isolated puffs, the faster-than-linear growth with Reynolds number of the logarithm of the mean first passage time T before collapse is separated into two terms: one finds that ln(T) = A_p r - B_p, with A_p and B_p positive. Moreover, A_p and B_p are affine in the spatial integral of the turbulence intensity of the puff, with the same slope. In the case of pipes initially containing the steady turbulent fraction, the length L and Reynolds number r dependence of the mean first passage time T before collapse is also separated: the author finds that T ≍ exp[L(Ar - B)], with A and B positive. The length and Reynolds number dependence of T are then discussed in view of large-deviations theoretical approaches to the study of mean first passage times and multistability.
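
    The two scaling laws quoted above can be made concrete in a short numerical sketch; the coefficients A, B, A_p and B_p used here are illustrative placeholders, not values fitted in the paper:

```python
import math

def mfpt_isolated_puff(r, a_p=2.0, b_p=1.0):
    """Mean first passage time before collapse of an isolated puff,
    following ln(T) = A_p * r - B_p (illustrative coefficients)."""
    return math.exp(a_p * r - b_p)

def mfpt_turbulent_pipe(r, length, a=0.5, b=0.2):
    """Mean first passage time before collapse of a pipe initially at
    its steady turbulent fraction, following T ~ exp[L * (A*r - B)]."""
    return math.exp(length * (a * r - b))

# The exponential dependence on L means that doubling the pipe length
# squares the passage time (up to prefactors), consistent with collapse
# requiring a rare, near-simultaneous event over the whole domain.
t_short = mfpt_turbulent_pipe(r=1.0, length=10.0)
t_long = mfpt_turbulent_pipe(r=1.0, length=20.0)
```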

  2. Influential factors of red-light running at signalized intersection and prediction using a rare events logistic regression model.

    Science.gov (United States)

    Ren, Yilong; Wang, Yunpeng; Wu, Xinkai; Yu, Guizhen; Ding, Chuan

    2016-10-01

    Red light running (RLR) has become a major safety concern at signalized intersections. To prevent RLR-related crashes, it is critical to identify the factors that significantly impact drivers' RLR behavior, and to predict potential RLR in real time. In this research, nine months of RLR events extracted from high-resolution traffic data collected by loop detectors at three signalized intersections were used to identify the factors that significantly affect RLR behavior. The data analysis indicated that occupancy time, time gap, used yellow time, time left to yellow start, whether the preceding vehicle runs through the intersection during yellow, and whether there is a vehicle passing through the intersection on the adjacent lane were significant factors for RLR behavior. Furthermore, due to the rare-events nature of RLR, a modified rare events logistic regression model was developed for RLR prediction. The rare events logistic regression method has been applied in many fields of rare events study and shows impressive performance, but so far no previous research has applied this method to RLR. The results showed that the rare events logistic regression model performed significantly better than the standard logistic regression model. More importantly, the proposed RLR prediction method is purely based on loop detector data collected from a single advance loop detector located 400 feet away from the stop bar. This brings great potential for future field applications of the proposed method, since loops have been widely implemented at many intersections and can collect data in real time. This research is expected to contribute significantly to the improvement of intersection safety. Copyright © 2016 Elsevier Ltd. All rights reserved.
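
    For context, a standard ingredient of rare-events logistic regression (King and Zeng's prior correction) adjusts the fitted intercept when events are oversampled relative to their population frequency. The abstract does not detail the authors' specific modification, so the sketch below shows only this generic correction step; `tau`, `ybar` and the detector feature `x` are assumed inputs:

```python
import math

def corrected_intercept(beta0, tau, ybar):
    """King-Zeng prior correction: shift a logistic intercept fitted on
    an events-oversampled estimation set (event fraction ybar) back to
    the population event fraction tau."""
    return beta0 - math.log(((1.0 - tau) / tau) * (ybar / (1.0 - ybar)))

def event_probability(beta0, beta1, x):
    """Logistic probability of an RLR event given a single
    (hypothetical) detector feature x."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

# Events make up 1% of the population but 50% of the estimation sample:
# the intercept must be shifted down by ln(99).
b0 = corrected_intercept(0.0, tau=0.01, ybar=0.5)
```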

  3. Regional Balance Model of Financial Flows through Sectoral Approaches System of National Accounts

    Directory of Open Access Journals (Sweden)

    Ekaterina Aleksandrovna Zaharchuk

    2017-03-01

    Full Text Available The main purpose of the study, the results of which are reflected in this article, is the theoretical and methodological substantiation of the possibility of building a regional balance model of financial flows consistent with the principles of construction of the System of National Accounts (SNA). The paper summarizes the international experience of building regional accounts in the SNA and reflects the advantages and disadvantages of existing techniques for constructing a Social Accounting Matrix. The authors propose an approach to building the regional balance model of financial flows, based on disaggregated tables of the formation, distribution and use of the added value of a territory within the institutional sectors of the SNA (corporations, public administration, households). To resolve the problem of the transition of value added from industries to sectors, the authors offer an approach to the accounting of the development, distribution and use of value added within the institutional sectors of the territories. The methods of calculation are based on the publicly available information base of statistics agencies and federal services. The authors provide a scheme of the interrelations of the indicators of the regional balance model of financial flows. It allows the movement of regional resources by the sectors of «corporation», «public administration» and «households», and the cash flows of the region by sectors and directions of use, to be mutually coordinated. As a result, a single account of the formation and distribution of territorial financial resources is formed, which is a regional balance model of financial flows. This matrix shows the distribution of financial resources by income sources and sectors, where the components of the formation (compensation, taxes and gross profit), distribution (transfers and payments) and use (final consumption, accumulation) of value added are

  4. Accounting for Zero Inflation of Mussel Parasite Counts Using Discrete Regression Models

    Directory of Open Access Journals (Sweden)

    Emel Çankaya

    2017-06-01

    Full Text Available In many ecological applications, the absences of species are inevitable due to either detection faults in samples or uninhabitable conditions for their existence, resulting in a high number of zero counts or abundances. The usual practice for modelling such data is regression modelling of log(abundance + 1), and it is well known that the resulting model is inadequate for prediction purposes. Newer discrete models accounting for zero abundances, namely zero-inflated Poisson and negative binomial regression (ZIP and ZINB), Hurdle-Poisson (HP) and Hurdle-Negative Binomial (HNB), amongst others, are widely preferred to the classical regression models. Because mussels are one of the economically most important aquatic products of Turkey, the purpose of this study is to examine the performance of these four models in determining the significant biotic and abiotic factors for the occurrence of the Nematopsis legeri parasite harming the Mediterranean mussel (Mytilus galloprovincialis L.). The data collected from three coastal regions of the city of Sinop in Turkey showed that more than 50% of parasite counts, on average, are zero-valued, and model comparisons were based on information criteria. The results showed that the probability of occurrence of this parasite is best formulated by the ZINB or HNB models, and the influential factors of the models were found to correspond to ecological differences between the regions.
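
    The zero-inflated Poisson model referred to above mixes a point mass at zero with an ordinary Poisson count; a minimal sketch of its probability mass function (the parameter values are illustrative, not fitted to the mussel data):

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability pi the observation is a
    structural zero; otherwise it is an ordinary Poisson(lam) count."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

# With pi = 0.5 and lam = 2, well over half of all counts are zero,
# mimicking the >50% zero-valued parasite counts in the data.
p_zero = zip_pmf(0, lam=2.0, pi=0.5)
total_mass = sum(zip_pmf(k, 2.0, 0.5) for k in range(60))
```

Hurdle models differ only in that the count part is a truncated-at-zero distribution, so all zeros come from the hurdle.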

  5. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception, based on mutual inhibition and adaptation, noise, or a combination of adaptation and noise, was used to validate the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated the superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception.
Taken together, our current work
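
    The core idea, that residual evidence for the suppressed percept accumulates as prediction error until it forces a switch, can be caricatured in a few lines. This is a deliberately stylized, noise-free toy with made-up parameters, not the Bayesian model fitted in the study:

```python
def simulate_bistable(steps, gain=0.12, threshold=1.0):
    """Toy predictive-coding dynamics: at each time step the suppressed
    percept generates prediction error; once the accumulated error
    crosses a threshold, perception switches and the error resets."""
    percept, error, switch_times = 0, 0.0, []
    for t in range(steps):
        error += gain              # residual evidence for the other percept
        if error >= threshold:     # endogenous perceptual transition
            percept = 1 - percept
            error = 0.0
            switch_times.append(t)
    return switch_times

switch_times = simulate_bistable(100)
```

In this noise-free limit the dominance durations are constant; adding noise to `gain` would produce the skewed duration distributions characteristic of real bistable perception.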

  6. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception, based on mutual inhibition and adaptation, noise, or a combination of adaptation and noise, was used to validate the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated the superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception.
Taken together

  7. Accounting for and predicting the influence of spatial autocorrelation in water quality modeling

    Science.gov (United States)

    Miralha, L.; Kim, D.

    2017-12-01

    Although many studies have attempted to investigate spatial trends in water quality, more attention has yet to be paid to the consequences of considering or ignoring the spatial autocorrelation (SAC) that exists in water quality parameters. Several studies have mentioned the importance of accounting for SAC in water quality modeling, as well as the differences in outcomes between models that account for and ignore SAC. However, the capacity to predict the magnitude of such differences is still ambiguous. In this study, we hypothesized that the SAC inherently possessed by a response variable (i.e., a water quality parameter) influences the outcomes of spatial modeling. We evaluated whether the level of inherent SAC is associated with changes in R², the Akaike Information Criterion (AIC), and residual SAC (rSAC) after accounting for SAC in the modeling procedure. The main objective was to analyze whether water quality parameters with higher Moran's I values (a measure of inherent SAC) undergo a greater increase in R² and a greater reduction in both AIC and rSAC. We compared a non-spatial model (OLS) to two spatial regression approaches (spatial lag and spatial error models). Predictor variables were the principal components of topographic (elevation and slope), land cover, and hydrological soil group variables. We acquired these data from federal online sources (e.g., USGS). Ten watersheds were selected, each in a different state of the USA. Results revealed that water quality parameters with higher inherent SAC showed a substantial increase in R² and decrease in rSAC after performing spatial regressions. However, AIC values did not show significant changes. Overall, the higher the level of inherent SAC in water quality variables, the greater the improvement in model performance. This indicates a linear and direct relationship between the spatial model outcomes (R² and rSAC) and the degree of SAC in each water quality variable. Therefore, our study suggests that the inherent level of
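
    Moran's I, the inherent-SAC measure used in the study, can be computed directly from a vector of observations and a spatial weights matrix; a minimal sketch on a toy four-site example (the weights and values are illustrative, not the study's data):

```python
def morans_i(values, weights):
    """Global Moran's I for observations and a symmetric spatial
    weights matrix (list of lists). Values near 0 suggest no spatial
    autocorrelation; positive values, clustering of like values."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four sites on a line with rook contiguity; a smooth gradient is
# positively spatially autocorrelated.
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
i_grad = morans_i([1.0, 2.0, 3.0, 4.0], w)
```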

  8. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov chain Monte Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution, or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest-neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
    The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion.
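
    The dictionary lookup at the heart of the approach, finding the K nearest stored parameter sets and collecting their approximate-versus-detailed discrepancies as a local error basis, can be sketched as follows. The dictionary entries and parameter values are hypothetical, and the subsequent projection of the residual onto this basis inside the MCMC likelihood is not reproduced here:

```python
def knn_error_basis(dictionary, theta, k=2):
    """Return the model-error vectors (detailed minus approximate
    forward response) of the k dictionary entries whose parameter
    vectors are closest to theta, forming a local error basis."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(dictionary, key=lambda e: dist(e["theta"], theta))[:k]
    return [[d - a for d, a in zip(e["detailed"], e["approx"])]
            for e in nearest]

# Hypothetical dictionary of (parameters, approximate run, detailed run).
dictionary = [
    {"theta": [0.0], "approx": [1.0, 1.0], "detailed": [1.1, 0.9]},
    {"theta": [1.0], "approx": [2.0, 2.0], "detailed": [2.3, 1.8]},
    {"theta": [5.0], "approx": [6.0, 6.0], "detailed": [6.0, 6.0]},
]
basis = knn_error_basis(dictionary, theta=[0.4], k=2)
```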

  9. Rare earth industries: Strategies for Malaysia

    International Nuclear Information System (INIS)

    2011-01-01

    Evidently, many reports cite Malaysia as having reasonably substantial amounts of rare earth elements. In fact, based on the rare earths found in the residual tin deposits alone, Malaysia has about 30,000 tonnes. This does not take into account unmapped deposits, which experts believe may offer more tonnages of rare earths. Brazil, which is reported to have about 48,000 tonnes, has announced plans to invest aggressively in the rare earths business. China has on record the largest reserves, with about 36 million tonnes. This explains why China has invested heavily in the entire value chain of the rare earths business. China's committed investment in rare earths started many years ago, when the country's foremost leaders proclaimed the strategic position of rare earths in the world economy. That forecast is now a reality, where the rise of the green high-tech economy is seen driving global demand for rare earths in a big way. Malaysia needs to discover and venture into new economic growth areas. This will help fuel the country's drive to achieve high-income status by 2020, as articulated in the New Economic Model (NEM) and the many supporting Economic Transformation Plans that the Government has recently launched. Rare earths may be the new growth area for Malaysia. However, the business opportunities should not be confined to the mining, extraction and production of rare earth elements alone if Malaysia is to maximise the benefits from this industry. The industry's gold mine is in downstream products. This is also the sector that China wants to expand. Japan, which now controls about 50% of the global market for downstream rare-earths-based high-tech components, is desperately looking for partners to grow its stake in the business. Malaysia needs to embark on the right strategies in order to build the rare earths industry in the country. What are the strategies? (author)

  10. A synthesis of literature on evaluation of models for policy applications, with implications for forest carbon accounting

    Science.gov (United States)

    Stephen P. Prisley; Michael J. Mortimer

    2004-01-01

    Forest modeling has moved beyond the realm of scientific discovery into the policy arena. The example that motivates this review is the application of models for forest carbon accounting. As negotiations determine the terms under which forest carbon will be accounted, reported, and potentially traded, guidelines and standards are being developed to ensure consistency,...

  11. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    International Nuclear Information System (INIS)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between theoretical and practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO2 and NOx emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  12. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    Energy Technology Data Exchange (ETDEWEB)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between theoretical and practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO2 and NOx emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  13. Traditional Market Accounting: Management or Financial Accounting?

    OpenAIRE

    Wiyarni, Wiyarni

    2017-01-01

    The purpose of this study is to explore the area of accounting in traditional markets. There are two areas of accounting: management accounting and financial accounting. Some traditional market traders have prepared financial notes, whereas others have not. Their financial notes usually consist of receivables, payables, customer orders, inventories, sales and cost prices, and salary expenses. The purpose of these financial notes is usually decision making. It is very rare for the traditional ma...

  14. Mathematical modeling of halophytic plant productivity taking into account the temperature factor and soil salinity level

    Science.gov (United States)

    Natalia, Slyusar; Pisman, Tamara; Pechurkin, Nikolai S.

    Among the most challenging tasks faced by contemporary ecology is the modeling of the biological production process in different plant communities. The difficulty of the task is determined by the complexity of the study material. Models showing the influence of climate and climate change on plant growth, which would also involve soil site parameters, could be of both practical and theoretical interest. In this work a mathematical model has been constructed to describe the growth dynamics of different plant communities of halophytic meadows as dependent upon the temperature factor and the soil salinity level, which could further be used to predict the yields of these plant communities. The study was performed on plants of halophytic meadows in the coastal area of a lake of the Republic of Khakasia in 2004-2006. Every plant community grew on soil with a different level of salinity, measured as the amount of the solid residue of the saline soil aqueous extract. The mathematical model was analyzed using field data of 2004 and 2006, years of contrasting air temperatures. Results of the model investigations show that there is a correlation between plant growth and air temperature for plant communities growing on soils containing the lowest salt content (0.1 …). Thus, results of our study, in which we used a mathematical model describing the development of plant communities of halophytic meadows together with field measurements, suggest that both climate conditions (temperature) and ecological factors of the plants' habitat (soil salinity level) should be taken into account when constructing models for predicting crop yields.

  15. The Dynamics of the Accounting Models and Their Impact upon the Financial Risk Evaluation

    Directory of Open Access Journals (Sweden)

    Victor Munteanu

    2015-03-01

    Full Text Available All the companies are exposed to risks and circumstances can take an unexpected turn at some point in time. What the company can control is how these risks are managed and firstly the steps to be taken to avoid them. The way how the scientific expertise, data and the advice on devising the risk strategies are understood, represented and incorporated into a structured system has visibly evolved since the 19th century until present, along with the accounting models and the main factors that triggered a higher concern in this sector.

  16. A model of gettering effects of rare-earth elements in III-V compounds

    Czech Academy of Sciences Publication Activity Database

    Šrobár, Fedor; Procházková, Olga

    2006-01-01

    Vol. 100, No. 8 (2006), p. 643. ISSN 0009-2770. [58th Congress of Chemical Societies (Sjezd chemických společností /58./). Ústí nad Labem, 04.09.2006-08.09.2006] R&D Projects: GA ČR(CZ) GA102/06/0153 Institutional research plan: CEZ:AV0Z20670512 Keywords: semiconductor technology * rare earth metals * getters Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 0.431, year: 2006

  17. A three-dimensional model of mammalian tyrosinase active site accounting for loss of function mutations.

    Science.gov (United States)

    Schweikardt, Thorsten; Olivares, Concepción; Solano, Francisco; Jaenicke, Elmar; García-Borrón, José Carlos; Decker, Heinz

    2007-10-01

    Tyrosinases are the first and rate-limiting enzymes in the synthesis of the melanin pigments responsible for colouring hair, skin and eyes. Mutation of tyrosinases often decreases melanin production, resulting in albinism, but the effects are not always understood at the molecular level. Homology modelling of mouse tyrosinase based on recently published crystal structures of non-mammalian tyrosinases provides an active site model accounting for loss-of-function mutations. According to the model, the copper-binding histidines are located in a helix bundle comprising four densely packed helices. A loop containing residues M374, S375 and V377 connects the CuA and CuB centres, with the peptide oxygens of M374 and V377 serving as hydrogen-bond acceptors for the NH groups of the imidazole rings of the copper-binding His367 and His180. This loop is therefore essential for the stability of the active site architecture. A double substitution 374MS375 → 374GG375 or a single M374G mutation leads to a local perturbation of the protein matrix at the active site affecting the orientation of the H367 side chain, which may be unable to bind CuB reliably, resulting in loss of activity. The model also accounts for the loss of function in two naturally occurring albino mutations, S380P and V393F. The hydroxyl group of S380 contributes to the correct orientation of M374, and the substitution of V393 by a bulkier phenylalanine sterically impedes correct side-chain packing at the active site. Therefore, our model explains the mechanistic necessity for conservation not only of the active site histidines but also of adjacent amino acids in tyrosinase.

  18. Palaeomagnetic dating method accounting for post-depositional remanence and its application to geomagnetic field modelling

    Science.gov (United States)

    Nilsson, A.; Suttie, N.

    2016-12-01

    Sedimentary palaeomagnetic data may exhibit some degree of smoothing of the recorded field due to the gradual processes by which the magnetic signal is 'locked in' over time. Here we present a new Bayesian method to construct age-depth models based on palaeomagnetic data, taking into account and correcting for the potential lock-in delay. The age-depth model is built on the widely used "Bacon" dating software by Blaauw and Christen (2011, Bayesian Analysis 6, 457-474) and is designed to combine both radiocarbon and palaeomagnetic measurements. To our knowledge, this is the first palaeomagnetic dating method that addresses the potential problems related to post-depositional remanent magnetisation acquisition in age-depth modelling. Age-depth models produced with this method, including a site-specific lock-in depth and lock-in filter function, are shown to be consistent with independent results based on radiocarbon wiggle-match dated sediment sections. Besides its primary use as a dating tool, our new method can also be used specifically to identify the most likely lock-in parameters for a specific record. We explore the potential of using these results to construct high-resolution geomagnetic field models based on sedimentary palaeomagnetic data, adjusting for the smoothing induced by post-depositional remanent magnetisation acquisition. Potentially, this technique could enable reconstructions of the Holocene geomagnetic field with the same amplitude of variability observed in archaeomagnetic field models for the past three millennia.
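
    The lock-in effect described above is, in essence, a convolution of the true field variation with a depth-dependent filter; a minimal sketch (the boxcar filter and step signal are illustrative, not the filter functions estimated by the method):

```python
def lock_in_smooth(signal, filt):
    """Convolve a true field signal (ordered down-core) with a
    normalized lock-in filter: each recorded value mixes the field over
    the depths across which magnetization was progressively fixed."""
    total = sum(filt)
    filt = [f / total for f in filt]
    out = []
    for i in range(len(signal) - len(filt) + 1):
        out.append(sum(signal[i + j] * filt[j] for j in range(len(filt))))
    return out

# A step change in the field is recorded as a gradual, displaced ramp:
# the smoothing and delay that the dating method corrects for.
true_field = [0.0] * 5 + [1.0] * 5
recorded = lock_in_smooth(true_field, filt=[1, 1, 1])
```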

  19. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    Directory of Open Access Journals (Sweden)

    Czoli Christine

    2011-10-01

    Full Text Available Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent of these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which the existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the pediatric physician-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles, or that the models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment.

  20. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-02-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  1. A Modified Model to Estimate Building Rental Multipliers Accounting for Ad Valorem Operating Expenses

    Directory of Open Access Journals (Sweden)

    Smolyak S.A.

    2016-09-01

    Full Text Available To develop the ideas on building element valuation contained in the first article on the subject published in REMV, we propose an elaboration of the approach accounting for ad valorem expenses incidental to property management, such as land taxes, income/capital gains tax, and insurance premium costs. All such costs, being of an ad valorem nature in the first instance, introduce circularity into the logic of the model, which, however, is not intractable under the proposed approach. The resulting formulas for the practical estimation of building rental multipliers and, in consequence, of building values turn out to be somewhat modified, and we demonstrate the sensitivity of the developed approach to the impact of these ad valorem factors. On the other hand, it is demonstrated that building depreciation charges, which should seemingly be included among the considered ad valorem factors, cancel out and do not have any impact on the resulting estimates. However, treating the depreciation of buildings in quantifiable economic terms as a reduction in derivable operating benefits over time (instead of relying on mere physical indications, such as age), we also demonstrate that the approach has implications for estimating the economic service lives of buildings and can be practical when used in conjunction with the market-related approach to valuation, from which the requisite model inputs can be extracted as shown in the final part of the paper.
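The circularity described in the abstract, where ad valorem costs depend on the very value being estimated, can be sketched as a fixed-point computation. The linear capitalization relation and all rates below are hypothetical stand-ins, not the formulas of the paper:

```python
def building_value(gross_rent, cap_rate, advalorem_rate, tol=1e-10, max_iter=1000):
    """Value V satisfies V = (gross_rent - advalorem_rate * V) / cap_rate:
    the ad valorem costs are proportional to the value being sought, hence
    the circularity. Solved here by fixed-point iteration; rates illustrative."""
    v = gross_rent / cap_rate  # initial guess ignoring ad valorem costs
    for _ in range(max_iter):
        v_new = (gross_rent - advalorem_rate * v) / cap_rate
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v

v = building_value(gross_rent=100.0, cap_rate=0.08, advalorem_rate=0.02)
# For this linear toy case the circularity has the closed form V = R / (y + a)
v_closed = 100.0 / (0.08 + 0.02)
```

The iteration and the closed form agree, illustrating why the circularity is "not intractable": for proportional ad valorem costs it merely shifts the effective capitalization rate.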

  2. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    Science.gov (United States)

    Rozanov, V. V.; Dinter, T.; Rozanov, A. V.; Wolanin, A.; Bracher, A.; Burrows, J. P.

    2017-06-01

    SCIATRAN is a comprehensive software package which is designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18-40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean-atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which takes into account accurately inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean-atmosphere radiative transfer solver presented by Rozanov et al. [61] we have implemented the simulation of the rotational Raman scattering, vibrational Raman scattering, chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how some selected radiative transfer problems are solved using the SCIATRAN package. In addition we present selected comparisons of SCIATRAN simulations with those published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package along with a detailed User's Guide is made available for scientists and students, who are undertaking their own research typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen: http://www.iup.physik.uni-bremen.de.

  3. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    Science.gov (United States)

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
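The attenuation effect motivating the SEM approach can be demonstrated in a few lines. The simulation below uses a simple composite of multiple indicators as a stand-in for a full latent-variable model; the effect size, error variances, and sample size are illustrative assumptions:

```python
import random

random.seed(42)

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical latent reproductive effort with a causal effect on survival
beta = -0.5
n = 20000
effort = [random.gauss(0, 1) for _ in range(n)]
survival = [beta * e + random.gauss(0, 0.5) for e in effort]

# Three noisy indicators of the latent effort (measurement error sd = 1)
indicators = [[e + random.gauss(0, 1) for e in effort] for _ in range(3)]

slope_single = ols_slope(indicators[0], survival)  # attenuated toward zero
composite = [sum(ind[i] for ind in indicators) / 3 for i in range(n)]
slope_composite = ols_slope(composite, survival)   # less attenuated
```

With one noisy proxy the estimated effect is roughly halved (reliability 0.5 here); averaging three indicators reduces the error variance and recovers more of the true effect, the same logic a multiple-indicator latent variable exploits more formally.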

  4. Implementation of a cost-accounting model in a biobank: practical implications.

    Science.gov (United States)

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use these resources more efficiently is making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences on the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real/active biobank, including a comprehensive explanation on how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. Global analysis and discussion provides valuable information for internal biobank management and even for strategic decisions at the research and development governmental policies level.

  5. Accounting for rainfall evaporation using dual-polarization radar and mesoscale model data

    Science.gov (United States)

    Pallardy, Quinn; Fox, Neil I.

    2018-02-01

    Implementation of dual-polarization radar should allow for improvements in quantitative precipitation estimates, since dual-polarization capability allows for the retrieval of the second moment of the gamma drop size distribution. Knowledge of the shape of the DSD can then be used in combination with mesoscale model data to estimate the motion and evaporation of each size of drop falling from the height at which precipitation is observed by the radar to the surface. Using data from central Missouri at a range of 130-140 km from the operational National Weather Service radar, a raindrop tracing scheme was developed to account for the effects of evaporation, in which individual raindrops hitting the ground were traced to the point in space and time where they interacted with the radar beam. The results indicated that evaporation played a significant role in radar rainfall estimation in situations where the atmosphere was relatively dry. Improvements in radar-estimated rainfall were also found in these situations by accounting for evaporation. The conclusion was made that the effects of raindrop evaporation were significant enough to warrant further research into the inclusion of high-resolution model data in the radar rainfall estimation process for appropriate locations.

  6. An extended car-following model accounting for the average headway effect in intelligent transportation system

    Science.gov (United States)

    Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming

    2017-04-01

    In this paper, an extended car-following model is proposed to simulate traffic flow by considering the average headway of the preceding vehicle group in an intelligent transportation systems environment. The stability condition of this model is obtained by using linear stability analysis. The phase diagram can be divided into three regions, classified as stable, metastable and unstable. The theoretical result shows that the average headway plays an important role in improving the stabilization of the traffic system. The mKdV equation near the critical point is derived to describe the evolution properties of traffic density waves by applying the reductive perturbation method. Furthermore, through simulation of the space-time evolution of the vehicle headway, it is shown that traffic jams can be suppressed efficiently by taking the average headway effect into account, and the analytical result is consistent with the simulation.
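The kind of extension described, driving each vehicle by the average headway of several preceding vehicles, can be sketched on top of a standard optimal-velocity car-following rule. The OV function form and all parameter values below are illustrative, not the calibrated model of the paper:

```python
import math

def optimal_velocity(headway, v_max=2.0, hc=4.0):
    """Standard optimal-velocity function (Bando-type form); parameters illustrative."""
    return v_max * (math.tanh(headway - hc) + math.tanh(hc)) / 2.0

def acceleration(velocities, positions, n, kappa=0.4, m=3):
    """Acceleration of vehicle n driven by the *average* headway of the m
    preceding vehicles, sketching the extended model (parameters assumed)."""
    headways = [positions[n + j + 1] - positions[n + j] for j in range(m)]
    avg_headway = sum(headways) / m
    return kappa * (optimal_velocity(avg_headway) - velocities[n])

# A local compression ahead of vehicle 0 (short headway to its leader)
positions = [0.0, 3.0, 7.0, 11.0, 15.0]
velocities = [1.0, 1.0, 1.0, 1.0, 1.0]
acc = acceleration(velocities, positions, 0)
```

Averaging over several leaders smooths the stimulus a follower reacts to, which is the intuitive reason the average-headway term enlarges the stable region in the linear stability analysis.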

  7. MODELLING OF THERMOELASTIC TRANSIENT CONTACT INTERACTION FOR BINARY BEARING TAKING INTO ACCOUNT CONVECTION

    Directory of Open Access Journals (Sweden)

    Igor KOLESNIKOV

    2016-12-01

    Full Text Available The serviceability of metal-polymer "dry-friction" sliding bearings depends on many parameters, including the rotational speed, the friction coefficient, the thermal and mechanical properties of the bearing system and, as a result, the value of the contact temperature. The objective of this study is to develop a computational model for the metal-polymer bearing, to determine, on the basis of this model, the temperature distribution and the equivalent and contact stresses for the elements of the bearing arrangement, and to select the optimal parameters of the bearing system to achieve thermal balance. The static problem for the combined sliding bearing with account taken of heat generation due to friction has been studied in [1]; the dynamic thermoelastic problems of shaft rotation in single- and double-layer bronze bearings were investigated in [2, 3].

  8. Accounting for genetic interactions improves modeling of individual quantitative trait phenotypes in yeast.

    Science.gov (United States)

    Forsberg, Simon K G; Bloom, Joshua S; Sadhu, Meru J; Kruglyak, Leonid; Carlborg, Örjan

    2017-04-01

    Experiments in model organisms report abundant genetic interactions underlying biologically important traits, whereas quantitative genetics theory predicts, and data support, the notion that most genetic variance in populations is additive. Here we describe networks of capacitating genetic interactions that contribute to quantitative trait variation in a large yeast intercross population. The additive variance explained by individual loci in a network is highly dependent on the allele frequencies of the interacting loci. Modeling of phenotypes for multilocus genotype classes in the epistatic networks is often improved by accounting for the interactions. We discuss the implications of these results for attempts to dissect genetic architectures and to predict individual phenotypes and long-term responses to selection.
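The central point, that the additive variance a locus explains depends on the allele frequencies of its interacting partners, can be shown with a minimal haploid two-locus toy model. The capacitating interaction and effect size below are illustrative assumptions:

```python
def genotype_value(a, b):
    """Phenotype for a haploid two-locus genotype (a, b in {0, 1}) with a
    capacitating interaction: locus A has an effect only when B = 1.
    The effect size is illustrative."""
    return 1.0 * a * b

def additive_effect_A(freq_b):
    """Average (marginal) effect of substituting allele A=1 for A=0,
    given the population frequency of allele B=1."""
    e_a1 = freq_b * genotype_value(1, 1) + (1 - freq_b) * genotype_value(1, 0)
    e_a0 = freq_b * genotype_value(0, 1) + (1 - freq_b) * genotype_value(0, 0)
    return e_a1 - e_a0

low = additive_effect_A(0.1)   # B rare: A looks nearly neutral
high = additive_effect_A(0.9)  # B common: A shows a large additive effect
```

The same locus thus contributes very different additive variance in populations differing only in the frequency of its interacting partner, which is why modeling the interaction improves phenotype prediction for multilocus genotype classes.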

  9. Calibration of an experimental model of tritium storage bed designed for 'in situ' accountability

    International Nuclear Information System (INIS)

    Bidica, Nicolae; Stefanescu, Ioan; Bucur, Ciprian; Bulubasa, Gheorghe; Deaconu, Mariea

    2009-01-01

    Full text: Objectives: Tritium accountancy of the storage beds in tritium facilities is an important issue for tritium inventory control. The purpose of our work was to perform calibration of an experimental model of a tritium storage bed with a special design, using electric heaters to simulate tritium decay, and to evaluate the detection limit of the accountancy method. The objective of this paper is to present the experimental method used for calibration of the storage bed and the experimental results, consisting of calibration curves and the detection limit. Our method is based on a 'self-assaying' tritium storage bed. The basic characteristics of the design of our storage bed consist, in principle, of a uniform distribution of the storage material on several thin copper fins (in order to obtain a uniform temperature field inside the bed), an electrical heat source to simulate the tritium decay heat, a system of thermocouples for measuring the temperature field inside the bed, and good thermal isolation of the bed from the external environment. With this design of the tritium storage bed, the tritium accounting method is based on determining the decay heat of tritium by measuring the temperature increase of the isolated storage bed. The experimental procedure consisted of measuring the temperature field inside the bed for a few values of the power injected with the electrical heat source. Data were collected for a few hours and the temperature increase rate was determined for each value of the injected power. A graphical representation of temperature rise versus injected power was obtained. This accounting method for tritium inventory stored as metal tritide is a reliable solution for in-situ tritium accountability in a tritium handling facility. Several improvements can be made to the design of the storage bed in order to improve the measurement accuracy and to obtain a lower detection limit, for instance the use of more accurate thermocouples or special
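The calibration described, relating the temperature rise rate to the injected heater power and then inverting it to infer decay heat, can be sketched as a least-squares fit. The calibration points below are invented for illustration; only the specific decay heat of tritium (about 0.324 W/g) is a physical constant:

```python
def fit_line(powers, rise_rates):
    """Least-squares calibration line: temperature rise rate (K/h) vs
    injected heater power (W). Returns (slope, intercept)."""
    n = len(powers)
    mx, my = sum(powers) / n, sum(rise_rates) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(powers, rise_rates))
    sxx = sum((x - mx) ** 2 for x in powers)
    slope = sxy / sxx
    return slope, my - slope * mx

# Illustrative calibration points (injected power in W, rise rate in K/h)
powers = [0.1, 0.2, 0.4, 0.8]
rates = [0.05, 0.10, 0.20, 0.40]
slope, intercept = fit_line(powers, rates)

def inferred_tritium_grams(measured_rate, watts_per_gram=0.324):
    """Convert a measured rise rate back to decay power via the calibration,
    then to tritium mass using tritium's specific decay heat (~0.324 W/g)."""
    decay_power = (measured_rate - intercept) / slope
    return decay_power / watts_per_gram
```

The detection limit of the method then follows from the smallest rise rate distinguishable from the thermal noise of the isolated bed.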

  10. Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.

    Science.gov (United States)

    Martínez, C A; Khare, K; Rahman, S; Elzo, M A

    2017-10-01

    Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, an area that has recently experienced great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in the correlation between phenotypes and predicted breeding values and in the accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit accommodation of general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow the incorporation of biological information into the prediction process through its use when constructing the graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.

  11. A New Evapotranspiration Model Accounting for Advection and Its Validation during SMEX02

    Directory of Open Access Journals (Sweden)

    Yongmin Yang

    2013-01-01

    Full Text Available Based on the crop water stress index (CWSI concept, a new model was proposed to account for advection in estimating evapotranspiration. Both a local scale evaluation against site observations and a regional scale evaluation against a remotely sensed dataset from Landsat 7 ETM+ were carried out to assess the performance of this model. The local scale evaluation indicates that this newly developed model can effectively characterize the daily variations of evapotranspiration and the predicted results show good agreement with the site observations. For all 6 corn sites, the coefficient of determination (R2 is 0.90 and the root mean square difference (RMSD is 58.52 W/m2. For all 6 soybean sites, the R2 and RMSD are 0.85 and 49.46 W/m2, respectively. The regional scale evaluation shows that the model can capture the spatial variations of evapotranspiration at the Landsat-based scale. Clear spatial patterns were observed at the Landsat-based scale and are closely related to the dominant land covers, corn and soybean. Furthermore, the surface resistance derived from the instantaneous CWSI was applied to the Penman-Monteith equation to estimate daily evapotranspiration. Overall, the results indicate that this newly developed model is capable of estimating reliable surface heat fluxes using remotely sensed data.
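The CWSI concept underlying the model can be sketched in scalar form: canopy temperature is scaled between wet (unstressed) and dry (fully stressed) reference temperatures, and latent heat flux shrinks with stress. This omits the advection enhancement and the Penman-Monteith resistance formulation of the actual model; all temperatures and energy values are illustrative:

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop water stress index: 0 for an unstressed (wet) canopy,
    1 for a fully stressed (dry) canopy."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

def latent_heat_flux(t_canopy, t_wet, t_dry, available_energy):
    """Simple CWSI-based partition of available energy (W/m2): an unstressed
    canopy converts all of it to latent heat, a fully stressed one none.
    A scalar sketch only; the paper's model also accounts for advection."""
    return (1.0 - cwsi(t_canopy, t_wet, t_dry)) * available_energy

le = latent_heat_flux(t_canopy=27.0, t_wet=24.0, t_dry=34.0, available_energy=400.0)
```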

  12. A model proposal concerning balance scorecard application integrated with resource consumption accounting in enterprise performance management

    Directory of Open Access Journals (Sweden)

    ORHAN ELMACI

    2014-06-01

    Full Text Available The present study investigates a Balance Scorecard (BSC) model integrated with Resource Consumption Accounting (RCA), which helps to evaluate the enterprise as a matrix structure in all its parts. It aims to measure how much the tangible and intangible values (assets) of an enterprise contribute to it; in other words, how effectively, actively, and efficiently these assets are used. In short, it aims to measure the sustainable competency of the enterprise. As mathematical and statistical methods alone are insufficient to express the effect of tangible and intangible assets on performance, the RCA method integrated with the BSC model is based on matrix structures and control models. The effects of all the complex factors in the enterprise on performance (productivity and efficiency) are estimated algorithmically with a cause-and-effect diagram. The contribution of matrix structures to reaching the functional management targets of enterprises operating in a market environment of daily increasing competition is discussed. Thus, in the context of modern management theories, and as a contribution to the BSC approach prominent in today's administrative science for enterprises with matrix organizational structures, a multidimensional performance evaluation model, RCA integrated with the BSC model, is presented as a strategic planning and strategic evaluation instrument.

  13. A Model for Urban Environment and Resource Planning Based on Green GDP Accounting System

    Directory of Open Access Journals (Sweden)

    Linyu Xu

    2013-01-01

    Full Text Available The urban environment and resources are currently on a course that is unsustainable in the long run due to excessive human pursuit of economic goals. Thus, it is very important to develop a model to analyse the relationship between urban economic development and environmental resource protection during the process of rapid urbanisation. This paper proposed a model to identify the key factors in urban environment and resource regulation based on a green GDP accounting system, which consisted of four parts: economy, society, resource, and environment. In this model, the analytic hierarchy process (AHP method and a modified Pearl curve model were combined to allow for dynamic evaluation, with a higher green GDP value as the planning target. The model was applied to the environmental and resource planning problem of Wuyishan City, and the results showed that energy use was a key factor that influenced the urban environment and resource development. Biodiversity and air quality were the most sensitive factors that influenced the value of green GDP in the city. According to the analysis, urban environment and resource planning could be improved to promote sustainable development in Wuyishan City.
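The AHP step used in the model derives criterion weights from a pairwise comparison matrix, conventionally via its principal eigenvector. A minimal power-iteration sketch follows; the comparison judgments are invented for illustration, not taken from the Wuyishan City application:

```python
def ahp_weights(matrix, iters=100):
    """Principal-eigenvector weights of an AHP pairwise comparison matrix,
    computed by power iteration with normalization at each step."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]
    return w

# Hypothetical pairwise judgments for three criteria, e.g. environment,
# resource, economy (environment judged 3x as important as economy)
m = [[1.0, 2.0, 3.0],
     [0.5, 1.0, 2.0],
     [1.0 / 3.0, 0.5, 1.0]]
weights = ahp_weights(m)
```

The resulting weights sum to one and rank the criteria consistently with the pairwise judgments; in the paper's model they would then feed the green GDP aggregation.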

  14. Air quality modeling for accountability research: Operational, dynamic, and diagnostic evaluation

    Science.gov (United States)

    Henneman, Lucas R. F.; Liu, Cong; Hu, Yongtao; Mulholland, James A.; Russell, Armistead G.

    2017-10-01

    Photochemical grid models play a central role in air quality regulatory frameworks, including in air pollution accountability research, which seeks to demonstrate the extent to which regulations causally impacted emissions, air quality, and public health. There is a need, however, to develop and demonstrate appropriate practices for model application and evaluation in an accountability framework. We employ a combination of traditional and novel evaluation techniques to assess four years (2001-02, 2011-12) of simulated pollutant concentrations across a decade of major emissions reductions using the Community Multiscale Air Quality (CMAQ) model. We have grouped our assessments in three categories: operational evaluation investigates how well CMAQ captures absolute concentrations; dynamic evaluation investigates how well CMAQ captures changes in concentrations across the decade of changing emissions; diagnostic evaluation investigates how CMAQ attributes variability in concentrations and sensitivities to emissions between meteorology and emissions, and how well this attribution compares to empirical statistical models. In this application, CMAQ captures O3 and PM2.5 concentrations and change over the decade in the Eastern United States similarly to past CMAQ applications and in line with model evaluation guidance; however, some PM2.5 species (EC, OC, and sulfate in particular) exhibit high biases in various months. CMAQ-simulated PM2.5 has a high bias in winter months and a low bias in the summer, mainly due to a high bias in OC during the cold months and low biases in OC and sulfate during the summer. Simulated O3 and PM2.5 changes across the decade have normalized mean biases of less than 2.5% and 17%, respectively. Detailed comparisons suggest biased EC emissions, negative wintertime SO42- sensitivities to mobile source emissions, and incomplete capture of OC chemistry in the summer and winter. Photochemical grid model-simulated O3 and PM2.5 responses to emissions and
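The normalized mean bias quoted in the abstract is a standard operational-evaluation metric, computed as the summed model-observation difference divided by the summed observations. A minimal sketch with invented concentration values:

```python
def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs): a standard operational-evaluation
    metric for photochemical grid model output against observations."""
    return sum(m - o for m, o in zip(model, obs)) / sum(obs)

# Illustrative simulated vs observed daily PM2.5 concentrations (ug/m3)
model = [12.0, 15.0, 9.0, 11.0]
obs = [10.0, 14.0, 10.0, 12.0]
nmb = normalized_mean_bias(model, obs)
```

Applied to decade-apart simulation pairs (dynamic evaluation), the same metric is computed on concentration *changes* rather than absolute values.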

  15. Modeling cystic fibrosis disease progression in patients with the rare CFTR mutation P67L.

    Science.gov (United States)

    MacKenzie, Isobel E R; Paquette, Valerie; Gosse, Frances; George, Sheenagh; Chappe, Frederic; Chappe, Valerie

    2017-05-01

    The progression of cystic fibrosis (CF) in patients with the rare mutation P67L was examined to determine if it induced a milder form of CF compared to the common severe ΔF508 mutation. Parameters of lung function, level of bacterial infection, nutritional status and hospitalization were used to represent CF progression. Age at diagnosis and pancreatic status were used to assess CF presentation. Analysis of data from the CF Canada Registry collected over a 15-year period included 266 ΔF508/ΔF508 homozygote patients from CF clinics in Atlantic Canada and 26 compound heterozygote patients with the rare P67L mutation from clinics across Canada. Late age at diagnosis, high incidence of pancreatic sufficiency, maintained Body Mass Index (BMI) with age, delayed life-threatening bacterial infection, and fewer days in hospital were observed for P67L heterozygote patients included in this study. Although the decline of lung function did not differ from ΔF508 homozygotes, the fact that a greater proportion of P67L heterozygotes live to an older age suggests that lung function is not the primary factor determining CF progression for P67L heterozygote patients. The P67L mutation is associated with a mild disease, even when combined with the severe ΔF508 mutation. Copyright © 2017 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  16. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    International Nuclear Information System (INIS)

    Rozanov, V.V.; Dinter, T.; Rozanov, A.V.; Wolanin, A.; Bracher, A.; Burrows, J.P.

    2017-01-01

    SCIATRAN is a comprehensive software package which is designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18–40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean–atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which takes into account accurately inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean–atmosphere radiative transfer solver presented by Rozanov et al. we have implemented the simulation of the rotational Raman scattering, vibrational Raman scattering, chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how some selected radiative transfer problems are solved using the SCIATRAN package. In addition we present selected comparisons of SCIATRAN simulations with those published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package along with a detailed User's Guide is made available for scientists and students, who are undertaking their own research typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen: (http://www.iup.physik.uni-bremen.de). - Highlights: • A new version of the software package SCIATRAN is presented. • Inelastic scattering in water and atmosphere is implemented in SCIATRAN. • Raman scattering and fluorescence can be included in radiative transfer calculations. • Comparisons to other radiative transfer models show excellent agreement. • Comparisons to observations show consistent results.

  17. Modeling of ethylbenzene dehydrogenation kinetics process taking into account deactivation of catalyst bed of the reactor

    Directory of Open Access Journals (Sweden)

    V. K. Bityukov

    2017-01-01

    Full Text Available The styrene synthesis process occurring in a two-stage continuous adiabatic reactor is a complex chemical engineering system. It is characterized by indeterminacy and nonstationarity, and it proceeds under permanent uncontrolled disturbances. Therefore, the task of developing a predictive control system for the concentration of the main product of the dehydrogenation reaction, styrene, that maintains this value within a predetermined range throughout the period of operation is important. Such a solution is impossible without a model of the process based on a revised kinetic scheme that takes into account the drop in the activity of the reactor's catalyst bed due to coke formation on its surface. The article justifies and proposes a dependence describing the drop in catalyst bed activity as a function of the operating time of the reactor block, together with an improved model of the chemical reaction kinetics. The synthesized mathematical model of the process is a system of ordinary differential equations and makes it possible to calculate the concentration profiles of the reaction mixture components during the passage of the charge through an adiabatic reactor stage, and to determine the contact gas composition at the outlet of the reactor stages throughout the cycle of the catalytic system, taking into account temperature changes and the drop in catalyst bed activity. The decrease in catalyst bed activity is compensated by raising the temperature in the reactor block over the duration of operation. The estimation of the values of the chemical reaction rate constants, as well as the calculation and analysis of the concentrations of the main and by-products of the dehydrogenation reactions at the outlet of the reactor plant, is carried out. Simulation results show that changing the reactor temperature according to an exponential law that accounts for deactivation of the catalyst bed keeps the yield within the range given by the technological regulations throughout the operating cycle of the reactor block.
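The compensation strategy, raising temperature to offset the deactivating catalyst, can be sketched with a first-order Arrhenius toy model. The pre-exponential factor, activation energy, residence time, and activity values below are illustrative assumptions, not the fitted parameters of the paper:

```python
import math

def rate_constant(temp_k, k0=1.0e5, ea=80000.0, r=8.314):
    """Arrhenius rate constant k = k0 * exp(-Ea / (R*T)); parameters illustrative."""
    return k0 * math.exp(-ea / (r * temp_k))

def conversion(temp_k, activity, residence_time=1.0):
    """First-order plug-flow conversion X = 1 - exp(-k * a * tau), where the
    catalyst activity factor a (0..1) scales the effective rate constant."""
    return 1.0 - math.exp(-rate_constant(temp_k) * activity * residence_time)

x_fresh = conversion(850.0, 1.0)   # fresh catalyst
x_deact = conversion(850.0, 0.6)   # coked catalyst, same temperature
x_comp = conversion(880.0, 0.6)    # deactivated, but temperature raised
```

Because the rate constant grows exponentially with temperature, a modest temperature increase can recover much of the conversion lost to coking, which is the logic behind the exponential temperature schedule in the paper.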

  18. A margin model to account for respiration-induced tumour motion and its variability

    International Nuclear Information System (INIS)

    Coolens, Catherine; Webb, Steve; Evans, Phil M; Shirato, H; Nishioka, K

    2008-01-01

    In order to reduce the sensitivity of radiotherapy treatments to organ motion, compensation methods are being investigated such as gating of treatment delivery, tracking of tumour position, 4D scanning and planning of the treatment, etc. An outstanding problem that would occur with all these methods is the assumption that breathing motion is reproducible throughout the planning and delivery process of treatment. This is obviously not a realistic assumption and is one that will introduce errors. A dynamic internal margin model (DIM) is presented that is designed to follow the tumour trajectory and account for the variability in respiratory motion. The model statistically describes the variation of the breathing cycle over time, i.e. the uncertainty in motion amplitude and phase reproducibility, in a polar coordinate system from which margins can be derived. This allows accounting for an additional gating window parameter for gated treatment delivery as well as minimizing the area of normal tissue irradiated. The model was illustrated with abdominal motion for a patient with liver cancer and tested with internal 3D lung tumour trajectories. The results confirm that the respiratory phases around exhale are most reproducible and have the smallest variation in motion amplitude and phase (approximately 2 mm). More importantly, the margin area covering normal tissue is significantly reduced by using trajectory-specific margins (as opposed to conventional margins) as the angular component is by far the largest contributor to the margin area. The statistical approach to margin calculation, in addition, offers the possibility for advanced online verification and updating of breathing variation as more data become available.

  19. FPLUME-1.0: An integral volcanic plume model accounting for ash aggregation

    Science.gov (United States)

    Folch, Arnau; Costa, Antonio; Macedonio, Giovanni

    2016-04-01

    Eruption Source Parameters (ESP) characterizing volcanic eruption plumes are crucial inputs for atmospheric tephra dispersal models, used for hazard assessment and risk mitigation. We present FPLUME-1.0, a steady-state 1D cross-section averaged eruption column model based on the Buoyant Plume Theory (BPT). The model accounts for plume bending by wind, entrainment of ambient moisture, effects of water phase changes, particle fallout and re-entrainment, a new parameterization for the air entrainment coefficients and a model for wet aggregation of ash particles in the presence of liquid water or ice. When wet aggregation occurs, the model predicts an "effective" grain size distribution depleted in fines with respect to that erupted at the vent. Given a wind profile, the model can be used to determine the column height from the eruption mass flow rate or vice-versa. The ultimate goal is to improve ash cloud dispersal forecasts by better constraining the ESP (column height, eruption rate and vertical distribution of mass) and the "effective" particle grain size distribution resulting from eventual wet aggregation within the plume. As test cases we apply the model to eruptive phase-B of the 4 April 1982 El Chichón volcano eruption (México) and the 6 May 2010 Eyjafjallajökull eruption phase (Iceland). The modular structure of the code facilitates the implementation, in future code versions, of more quantitative ash aggregation parameterizations as further observational and experimental data become available to better constrain ash aggregation processes.

  20. A Unifying Modeling of Plant Shoot Gravitropism With an Explicit Account of the Effects of Growth

    Directory of Open Access Journals (Sweden)

    Renaud eBastien

    2014-04-01

    Full Text Available Gravitropism, the slow reorientation of plant growth in response to gravity, is a major determinant of the form and posture of land plants. Recently a universal model of shoot gravitropism, the AC model, was presented, in which the dynamics of the tropic movement are determined only by the contradictory controls of (i) graviception, which tends to curve the plant towards the vertical, and (ii) proprioception, which tends to keep the stem straight. This model was found to be valid over a large range of species and over two orders of magnitude in organ size. However, the motor of the movement, elongation, was neglected in the AC model. Taking explicit growth effects into account, however, requires consideration of the material derivative, i.e. the rate of change of curvature bound to expanding and convected organ elements. Here we show that it is possible to rewrite the material equation of curvature in a compact simplified form that expresses the curvature variation directly as a function of the median elongation and of the distribution of the differential growth. Through this extended model, called the ACE model, two main destabilizing effects of growth on the tropic movement are identified: (i) the passive orientation drift, which occurs when a curved element elongates without differential growth, and (ii) the fixed curvature, which occurs when an element leaves the elongation zone and is no longer able to change its curvature actively. By comparing the AC and ACE models to experiments, these two effects were found to be negligible, revealing a probable selection for rapid convergence to the steady-state shape during the tropic movement so as to escape the destabilizing effects of growth, involving in particular a selection on proprioceptive sensitivity. The simplified AC model can therefore be used to analyze gravitropism and posture control in actively elongating plant organs without significant loss of information.
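The graviceptive/proprioceptive competition at the heart of the AC model can be simulated directly. The sketch below uses illustrative parameter values and a naive Euler discretization, not the paper's calibrated fits:

```python
import numpy as np

# Minimal sketch of AC-model dynamics: graviception bends each organ
# element towards the vertical, proprioception damps its curvature.
# beta, gamma and the discretization are illustrative assumptions.
def simulate_ac(n=50, length=1.0, a0=np.pi / 2, beta=2.0, gamma=1.0,
                dt=0.01, steps=2000):
    ds = length / n
    curv = np.zeros(n)                    # curvature C(s) along the organ
    for _ in range(steps):
        # local inclination A(s): clamped base angle + integrated curvature
        incl = a0 + np.cumsum(curv) * ds
        # graviception bends towards vertical (A = 0),
        # proprioception straightens (damps C)
        curv += dt * (-beta * np.sin(incl) - gamma * curv)
    return incl

tip_angle = simulate_ac()[-1]   # steady-state inclination at the tip
```

At steady state dA/ds = -(beta/gamma) sin A, so the tip of a horizontal-started organ relaxes towards, but not exactly onto, the vertical; the ratio beta*L/gamma controls how close it gets.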

  1. Material Protection, Accounting, and Control Technologies (MPACT): Modeling and Simulation Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dunn, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Durbin, Samual [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Durkee, Joe W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); England, Jeff [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, Robert [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Ketusky, Edward [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Li, Shelly [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lindgren, Eric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meier, David [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Miller, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Osburn, Laura Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pereira, Candido [Argonne National Lab. (ANL), Argonne, IL (United States); Rauch, Eric Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Scaglione, John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Scherer, Carolynn P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sprinkle, James K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yoo, Tae-Sic [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-05

    The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy's (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal. This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility, a distributed test bed that connects the individual tools being developed at national laboratories and university research establishments, is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling. To aid in framing its long-term goal, a modeling and simulation roadmap is being developed during FY16 for three major areas of investigation: (1) radiation transport and sensors, (2) process and chemical models, and (3) shock physics and assessments. For each area, current modeling approaches are described, and gaps and needs are identified.

  2. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    Science.gov (United States)

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

    Collecting and analyzing high-frequency emission measurements has become increasingly common over the past decade, as significantly more information with respect to formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application using data collected from a diesel bus under real-world driving conditions.
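The idea of a flow-dependent transport delay estimated by least squares can be illustrated on synthetic data. This sketch assumes a hypothetical delay law `delay = a + b / flow` and recovers its two coefficients by grid search; it is not the authors' exact smooth transition estimator:

```python
import numpy as np

# Illustrative sketch: the tailpipe signal y is the engine signal x
# shifted by a flow-dependent delay a + b / flow; (a, b) are recovered
# by least-squares grid search. All signals and values are synthetic.
rng = np.random.default_rng(0)
t = np.arange(600).astype(float)
x = np.sin(t / 20.0) + 0.5 * np.sin(t / 7.0)       # engine covariate
flow = 1.5 + np.cos(t / 50.0)                      # exhaust flow rate
true_a, true_b = 3.0, 4.0
y = np.interp(t - (true_a + true_b / flow), t, x)  # delayed tailpipe signal
y += rng.normal(0, 0.02, t.size)                   # measurement noise

def sse(a, b):
    pred = np.interp(t - (a + b / flow), t, x)
    return np.sum((y - pred) ** 2)

grid = [(a, b) for a in np.arange(0, 6.1, 0.5) for b in np.arange(0, 6.1, 0.5)]
a_hat, b_hat = min(grid, key=lambda ab: sse(*ab))
```

Because the flow rate varies over the record, the constant part `a` and the flow-scaled part `b` of the delay are separately identifiable from a single drive cycle.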

  3. Comprehensive impedance model of cobalt deposition in sulfate solutions accounting for homogeneous reactions and adsorptive effects

    International Nuclear Information System (INIS)

    Vazquez-Arenas, Jorge; Pritzker, Mark

    2011-01-01

    A comprehensive physicochemical model for cobalt deposition onto a cobalt rotating disk electrode in sulfate-borate (pH 3) solutions is derived and statistically fit to experimental EIS spectra obtained over a range of CoSO4 concentrations, overpotentials and rotation speeds. The model accounts for H+ and water reduction, homogeneous reactions and mass transport within the boundary layer. Based on a thermodynamic analysis, the species CoSO4(aq), B(OH)3(aq), B3O3(OH)4-, H+ and OH- and two homogeneous reactions (B(OH)3(aq) hydrolysis and water dissociation) are included in the model. Kinetic and transport parameters are estimated by minimizing the sum-of-squares error between the model and experimental measurements using a simplex method. The electrode response is affected most strongly by parameters associated with the first step of Co(II) reduction, reflecting its control of the rate of Co deposition, and is moderately sensitive to the parameters for H+ reduction and the Co(II) diffusion coefficient. Water reduction is found not to occur to any significant extent under the conditions studied. These trends are consistent with those obtained by fitting equivalent electrical circuits to the experimental spectra. The simplest circuit that best fits the data consists of two RQ elements (resistor-constant phase element) in parallel or series with the solution resistance.
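The equivalent circuit named at the end of the abstract, a solution resistance in series with two RQ (resistor in parallel with constant-phase element) units, can be written down directly. The parameter values below are illustrative only, not the paper's fitted values:

```python
import numpy as np

# Impedance of Rs in series with two RQ elements; Z_CPE = 1/(Q (jw)^n).
def z_rq(omega, r, q, n):
    # resistor r in parallel with a constant-phase element (q, n)
    return r / (1 + r * q * (1j * omega) ** n)

def z_circuit(omega, rs=5.0, r1=50.0, q1=1e-4, n1=0.9,
              r2=200.0, q2=1e-3, n2=0.8):
    return rs + z_rq(omega, r1, q1, n1) + z_rq(omega, r2, q2, n2)

freqs = np.logspace(-2, 5, 200)          # Hz, typical EIS sweep
z = z_circuit(2 * np.pi * freqs)         # complex impedance spectrum
```

As a sanity check, the real part approaches Rs + R1 + R2 at low frequency and collapses to Rs at high frequency, while the imaginary part stays capacitive (negative) throughout.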

  4. Modelling of gas-metal arc welding taking into account metal vapour

    Energy Technology Data Exchange (ETDEWEB)

    Schnick, M; Fuessel, U; Hertel, M; Haessler, M [Institute of Surface and Manufacturing Technology, Technische Universitaet Dresden, D-01062 Dresden (Germany); Spille-Kohoff, A [CFX Berlin Software GmbH, Karl-Marx-Allee 90, 10243 Berlin (Germany); Murphy, A B [CSIRO Materials Science and Engineering, PO Box 218, Lindfield NSW 2070 (Australia)

    2010-11-03

    The most advanced numerical models of gas-metal arc welding (GMAW) neglect vaporization of metal, and assume an argon atmosphere for the arc region, as is also common practice for models of gas-tungsten arc welding (GTAW). These models predict temperatures above 20 000 K and a temperature distribution similar to GTAW arcs. However, spectroscopic temperature measurements in GMAW arcs demonstrate much lower arc temperatures. In contrast to measurements of GTAW arcs, they have shown the presence of a central local minimum of the radial temperature distribution. This paper presents a GMAW model that takes metal vapour into account and that is able to predict the local central minimum in the radial distributions of temperature and electric current density. The influence of different values of the net radiative emission coefficient of iron vapour, which vary by up to a factor of one hundred, is examined. It is shown that these net emission coefficients cause differences in the magnitudes, but not in the overall trends, of the radial distributions of temperature and current density. Further, the influence of the metal vaporization rate is investigated. We present evidence that, for higher vaporization rates, the central flow velocity inside the arc is decreased and can even change direction so that it is directed from the workpiece towards the wire, although the outer plasma flow is still directed towards the workpiece. In support of this thesis, we have attempted to reproduce the measurements of Zielinska et al. for spray-transfer mode GMAW numerically, and have obtained reasonable agreement.

  5. Rare earths

    International Nuclear Information System (INIS)

    Cranstone, D.A.

    1980-01-01

    There has been no Canadian production of the rare earth oxides since 1977. World production in 1978, the last year for which figures are available, is estimated to have been about 41 000 tonnes, mostly from Australia and the United States. The United States Bureau of Mines estimates that world reserves contain about 7 million tonnes of rare earth oxides and 35 million tonnes of yttrium. The largest yttrium reserves are in India, while China is believed to have the world's largest reserves of rare earth oxides. World consumption of rare earths increased slightly in 1980, but is still only a small fraction of known reserves. Rare earths are used mainly in high-strength magnets, automobile exhaust systems, fluorescent tube and television screen phosphors, metallurgical applications, petroleum cracking catalysts, and glass polishing.

  6. Integrated Approach Model of Risk, Control and Auditing of Accounting Information Systems

    Directory of Open Access Journals (Sweden)

    Claudiu BRANDAS

    2013-01-01

    Full Text Available The use of IT in financial and accounting processes is growing fast, and this leads to an increase in research and professional concern about the risks, control and audit of Accounting Information Systems (AIS). In this context, the risk and control of AIS is a central component of processes for IT audit, financial audit and IT governance. Recent studies in the literature on the concepts of risk, control and auditing of AIS outline two approaches: (1) a professional approach, comprising ISA, COBIT, IT Risk, COSO and SOX; and (2) a research-oriented approach, emphasizing research on continuous auditing and on fraud using information technology. Starting from the limits of existing approaches, our study aims to develop and test an Integrated Approach Model of Risk, Control and Auditing of AIS across three cycles of business processes (the purchases, sales and cash cycles) in order to improve the efficiency of IT governance, as well as ensuring the integrity, reality, accuracy and availability of financial statements.

  7. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

    Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated

  8. Accounting for treatment use when validating a prognostic model: a simulation study

    Directory of Open Access Journals (Sweden)

    Romin Pajouheshnia

    2017-07-01

    Full Text Available Abstract Background Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. Methods We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and of the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Results Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. Conclusions When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and
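The IPW-then-exclude strategy discussed in this abstract can be sketched on toy data. Everything below (the data-generating process, effect sizes, and weighted c-index helper) is an illustrative assumption, not the paper's simulation design:

```python
import numpy as np

# Toy sketch: a model predicts untreated risk; treated patients are
# dropped and untreated ones reweighted by 1 / P(untreated | x) so
# they stand in for the full target population.
rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)                          # single prognostic factor
lin = 1.2 * x                                   # "true" linear predictor
p_treat = 1 / (1 + np.exp(-(lin - 0.5)))        # high-risk treated more often
treated = rng.random(n) < p_treat
risk = 1 / (1 + np.exp(-(lin - 1.0 * treated))) # treatment lowers risk
y = rng.random(n) < risk
pred = 1 / (1 + np.exp(-lin))                   # model's untreated-risk prediction

keep = ~treated
w = 1.0 / (1 - p_treat[keep])                   # inverse probability weights

def weighted_cindex(pred, y, w):
    # weighted fraction of concordant case/control pairs
    num = den = 0.0
    for i in np.where(y)[0]:
        for j in np.where(~y)[0]:
            pair_w = w[i] * w[j]
            den += pair_w
            num += pair_w * (pred[i] > pred[j])
    return num / den

c = weighted_cindex(pred[keep], y[keep], w)
```

The weights upweight the few untreated high-risk patients, countering the selective removal of high-risk individuals that plain exclusion would cause.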

  9. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

    Full Text Available Imperfect detection (i.e., failure to detect a species when the species is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability into the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2 visits) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to controlling for detectability based on the time to first detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling the factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large-scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth and to a lesser extent by stream width, and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. These species presented a widespread distribution with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provides a practical framework to incorporate variation in the detection rate into fish distribution models.
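The time-to-first-detection likelihood can be written compactly: a site is occupied with probability psi, and at occupied sites the first detection time is a censored survival time. The sketch below fits it by maximum likelihood on a grid rather than the paper's Bayesian machinery, with an assumed exponential detection-time distribution and illustrative parameter values:

```python
import numpy as np

# Occupancy + time-to-first-detection sketch: each site is occupied
# with probability psi; at occupied sites the first detection time is
# Exponential(lam), right-censored at t_max. Values are illustrative.
rng = np.random.default_rng(2)
n_sites, t_max = 500, 10.0
true_psi, true_lam = 0.6, 0.4
occupied = rng.random(n_sites) < true_psi
times = np.where(occupied, rng.exponential(1 / true_lam, n_sites), np.inf)
detected = times <= t_max

def nll(psi, lam):
    # detected sites: occupied, and Exp(lam) density at the observed time
    ll = np.sum(np.log(psi * lam) - lam * times[detected])
    # undetected sites: unoccupied, or occupied but censored at t_max
    ll += np.sum(~detected) * np.log((1 - psi) + psi * np.exp(-lam * t_max))
    return -ll

grid = [(p, l) for p in np.arange(0.05, 1.0, 0.05)
        for l in np.arange(0.05, 2.0, 0.05)]
psi_hat, lam_hat = min(grid, key=lambda pl: nll(*pl))
```

The censoring term is what separates "absent" from "present but missed", which is exactly the confound that single-visit presence/absence data cannot resolve on their own.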

  10. Constraints on the rare tau decays from μ → eγ in the supersymmetric see-saw model

    Energy Technology Data Exchange (ETDEWEB)

    Ibarra, A. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Simonetto, C. [Technische Univ., Muenchen (Germany). Physik-Department

    2008-02-15

    It is now a firmly established fact that all family lepton numbers are violated in Nature. In this paper we discuss the implications of this observation for future searches for rare tau decays in the supersymmetric see-saw model. Using the two-loop renormalization group evolution of the soft terms and the Yukawa couplings we show that there exists a lower bound on the rate of the rare process μ → eγ of the form BR(μ → eγ) ≳ C × BR(τ → μγ) BR(τ → eγ), where C is a constant that depends on supersymmetric parameters. Our only assumption is the absence of cancellations among the high-energy see-saw parameters. We also discuss the implications of this bound for future searches for rare tau decays. In particular, for large regions of the mSUGRA parameter space, we show that present B-factories could discover either τ → μγ or τ → eγ, but not both. (orig.)

  11. Beta-Binomial Model for the Detection of Rare Mutations in Pooled Next-Generation Sequencing Experiments.

    Science.gov (United States)

    Jakaitiene, Audrone; Avino, Mariano; Guarracino, Mario Rosario

    2017-04-01

    Despite diminishing costs, next-generation sequencing (NGS) remains expensive for studies with a large number of individuals. As a cost saving, the genomes of pools containing multiple samples may be sequenced. Currently, much software is available for the detection of single-nucleotide polymorphisms (SNPs); sensitivity and specificity depend on the model used and the data analyzed, indicating that all of this software has room for improvement. We use a beta-binomial model to detect rare mutations in untagged pooled NGS experiments. We propose a multireference framework for pooled data that remains specific with up to two patients affected by neuromuscular disorders (NMD) per pool. We assessed the results in comparison with The Genome Analysis Toolkit (GATK), CRISP, SNVer, and FreeBayes. Our results show that the multireference approach applying the beta-binomial model is accurate in predicting rare mutations at a 0.01 fraction. Finally, we explored the concordance of mutations between the model and the other software, checking their involvement in any NMD-related gene. We detected seven novel SNPs, for which the functional analysis produced enriched terms related to locomotion and musculature.
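The core statistical move, modeling alt-allele read counts under sequencing error as beta-binomial (overdispersed relative to a plain binomial) and flagging sites in the upper tail, can be sketched with the standard library. The read depth, error-model parameters and thresholds below are illustrative assumptions, not the paper's fitted values:

```python
import math

# Beta-binomial tail test for a rare variant in a pooled sample:
# under error alone, alt counts ~ BetaBinomial(n, a, b); a site is
# flagged when P(X >= observed) is small. Parameters are illustrative.
def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    # C(n, k) * B(k + a, n - k + b) / B(a, b)
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1)
                    + log_beta(k + a, n - k + b) - log_beta(a, b))

def tail_p(k, n, a, b):
    # P(X >= k) under the error-only model
    return sum(betabinom_pmf(i, n, a, b) for i in range(k, n + 1))

n_reads = 1000
a_err, b_err = 2.0, 998.0     # mean error rate ~0.002, overdispersed
p_background = tail_p(8, n_reads, a_err, b_err)   # 8 alt reads: ambiguous
p_variant = tail_p(40, n_reads, a_err, b_err)     # ~1/25 allele fraction
```

The overdispersion (small `a_err`) keeps the test from over-calling noisy sites that a plain binomial error model would flag.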

  12. A Thermodamage Strength Theoretical Model of Ceramic Materials Taking into Account the Effect of Residual Stress

    Directory of Open Access Journals (Sweden)

    Weiguo Li

    2012-01-01

    Full Text Available A thermodamage strength theoretical model taking into account the effect of residual stress was established and applied to each temperature phase, based on a study of the effects of various physical mechanisms on the fracture strength of ultrahigh-temperature ceramics. The effects of SiC particle size, crack size, and SiC particle volume fraction on strength at different temperatures were studied in detail. This study showed that when the flaw size is not large, a bigger SiC particle size results in a greater effect of the tensile residual stress in the matrix grains on strength reduction, a prediction that coincides with experimental results; the residual stress and the combined effect of particle size and crack size play important roles in controlling material strength.

  13. A hybrid mode choice model to account for the dynamic effect of inertia over time

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Börjesson, Maria; Bierlaire, Michel

    The influence of habits, giving rise to inertia effect, in the choice process has been intensely debated in the literature. Typically inertia is accounted for by letting the indirect utility functions of the alternatives of the choice situation at time t depend on the outcome of the choice made...... gathered over a continuous period of time, six weeks, to study both inertia and the influence of habits. Tendency to stick with the same alternative is measured through lagged variables that link the current choice with the previous trip made with the same purpose, mode and time of day. However, the lagged...... effect of the previous trips is not constant but it depends on the individual propensity to undertake habitual trips which is captured by the individual specific latent variable. And the frequency of the trips in the previous week is used as an indicator of the habitual behavior. The model estimation...

  14. REGRESSION MODEL FOR RISK REPORTING IN FINANCIAL STATEMENTS OF ACCOUNTING SERVICES ENTITIES

    Directory of Open Access Journals (Sweden)

    Mirela NICHITA

    2015-06-01

    Full Text Available The purpose of financial reports is to provide useful information to users; the utility of information is defined through its qualitative characteristics (fundamental and enhancing). The financial crisis emphasized the limits of financial reporting, which was unable to warn investors about the risks they were facing. Due to the current changes in the business environment, managers have been highly motivated to rethink and improve the risk governance philosophy, processes and methodologies. The lack of quality, timely data and of adequate systems to capture, report and measure the right information across the organization is a fundamental challenge for implementing and sustaining all aspects of effective risk management. Since the 1980s, investors have been more interested in narratives (the notes to the financial statements) than in the primary reports (financial position and performance). The research applies a regression model for the assessment of risk reporting by professional (accounting and taxation) services providers for major companies from Romania during the period 2009–2013.

  15. Modeling Pair Distribution Functions of Rare-Earth Phosphate Glasses Using Principal Component Analysis.

    Science.gov (United States)

    Cole, Jacqueline M; Cheng, Xie; Payne, Michael C

    2016-11-07

    The use of principal component analysis (PCA) to statistically infer features of local structure from experimental pair distribution function (PDF) data is assessed on a case study of rare-earth phosphate glasses (REPGs). Such glasses, codoped with two rare-earth ions (R and R') of different sizes and optical properties, are of interest to the laser industry. The determination of structure-property relationships in these materials is an important aspect of their technological development. Yet, realizing the local structure of codoped REPGs presents significant challenges relative to their singly doped counterparts; specifically, R and R' are difficult to distinguish in terms of establishing relative material compositions, identifying atomic pairwise correlation profiles in a PDF that are associated with each ion, and resolving peak overlap of such profiles in PDFs. This study demonstrates that PCA can be employed to help overcome these structural complications, by statistically inferring trends in PDFs that exist for a restricted set of experimental data on REPGs, and using these as training data to predict material compositions and PDF profiles in unknown codoped REPGs. The application of these PCA methods to resolve individual atomic pairwise correlations in t(r) signatures is also presented. The training methods developed for these structural predictions are prevalidated by testing their ability to reproduce known physical phenomena, such as the lanthanide contraction, on PDF signatures of the structurally simpler singly doped REPGs. The intrinsic limitations of applying PCA to analyze PDFs relative to the quality control of source data, data processing, and sample definition, are also considered. While this case study is limited to lanthanide-doped REPGs, this type of statistical inference may easily be extended to other inorganic solid-state materials and be exploited in large-scale data-mining efforts that probe many t(r) functions.
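The statistical idea, decomposing a family of composition-varying profiles into principal components whose scores track composition, can be sketched with SVD-based PCA. The curves below are synthetic Gaussian peaks standing in for pairwise correlations, with assumed positions and widths, not measured PDFs:

```python
import numpy as np

# PCA sketch on synthetic PDF-like profiles: each sample is a
# composition-weighted mixture of two "pure" pairwise peaks (standing
# in for R-O and R'-O correlations). All parameters are assumptions.
rng = np.random.default_rng(3)
r = np.linspace(1.5, 4.0, 300)                    # r grid (angstroms)

def peak(mu):
    return np.exp(-0.5 * ((r - mu) / 0.08) ** 2)

frac = rng.random(40)                             # R fraction per sample
profiles = (np.outer(frac, peak(2.3))             # "R-O" peak
            + np.outer(1 - frac, peak(2.5))       # "R'-O" peak
            + rng.normal(0, 0.01, (40, r.size)))  # measurement noise

centred = profiles - profiles.mean(axis=0)
# PCA via SVD: rows of vt are components, u * s are per-sample scores
u, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = u[:, 0] * s[0]
explained = s[0] ** 2 / np.sum(s ** 2)            # variance in PC1
corr = abs(np.corrcoef(scores, frac)[0, 1])       # scores track frac
```

Because the only systematic variation is the R/R' balance, one component captures almost all the variance and its scores serve as a composition predictor for an unknown sample, which is the training-data role described in the abstract.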

  16. Evaluating rare amino acid substitutions (RGC_CAMs) in a yeast model clade.

    Directory of Open Access Journals (Sweden)

    Kenneth Polzin

    Full Text Available When inferring phylogenetic relationships, not all sites in a sequence alignment are equally informative. One recently proposed approach that takes advantage of this inequality relies on sites that contain amino acids whose replacement requires multiple substitutions. Identifying these so-called RGC_CAM substitutions (after Rare Genomic Changes as Conserved Amino acids-Multiple substitutions) requires that, first, at any given site in the amino acid sequence alignment, there must be a minimum of two different amino acids; second, each amino acid must be present in at least two taxa; and third, the amino acids must require a minimum of two nucleotide substitutions to replace each other. Although theory suggests that RGC_CAM substitutions are expected to be rare and less likely to be homoplastic, the informativeness of RGC_CAM substitutions has not been extensively evaluated in biological data sets. We investigated the quality of RGC_CAM substitutions by examining their degree of homoplasy and internode certainty in nearly 2.7 million aligned amino acid sites from 5,261 proteins from five species belonging to the yeast Saccharomyces sensu stricto clade, whose phylogeny is well established. We identified 2,647 sites containing RGC_CAM substitutions, a number that contrasts sharply with the 100,887 sites containing RGC_non-CAM substitutions (i.e., changes between amino acids that require only a single nucleotide substitution). We found that RGC_CAM substitutions had significantly lower homoplasy than RGC_non-CAM ones; specifically, RGC_CAM substitutions showed a per-site average homoplasy index of 0.100, whereas RGC_non-CAM substitutions had a homoplasy index of 0.215. Internode certainty values were also higher for sites containing RGC_CAM substitutions than for RGC_non-CAM ones. These results suggest that RGC_CAM substitutions possess a strong phylogenetic signal and are useful markers for phylogenetic inference despite their rarity.
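The three criteria in the abstract translate directly into code: at least two amino acids at the site, each in at least two taxa, and every pair separated by at least two nucleotide substitutions under the genetic code. The sketch below uses the standard genetic code and toy alignment columns; how the original study handled ties, gaps or alternative codes is not specified here:

```python
from collections import Counter
from itertools import combinations, product

# Standard genetic code, TCAG ordering: codon b1 b2 b3 -> AA_STRING index.
BASES = "TCAG"
AA_STRING = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = {}
for i, b1 in enumerate(BASES):
    for j, b2 in enumerate(BASES):
        for k, b3 in enumerate(BASES):
            CODONS.setdefault(AA_STRING[16 * i + 4 * j + k], []).append(b1 + b2 + b3)

def min_substitutions(aa1, aa2):
    # fewest nucleotide changes turning any codon of aa1 into one of aa2
    return min(sum(x != y for x, y in zip(c1, c2))
               for c1, c2 in product(CODONS[aa1], CODONS[aa2]))

def is_rgc_cam(column):
    # column: one alignment site as a string of amino acids, one per taxon
    counts = Counter(column)
    shared = [aa for aa, c in counts.items() if c >= 2]   # in >= 2 taxa
    if len(shared) < 2 or len(shared) != len(counts):
        return False                      # a singleton amino acid disqualifies
    return all(min_substitutions(a, b) >= 2
               for a, b in combinations(shared, 2))
```

For example, a site split between phenylalanine and lysine qualifies (their codons differ at all three positions), whereas asparagine versus lysine does not (AAC to AAA is a single change).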

  17. Hydrodynamic modeling of urban flooding taking into account detailed data about city infrastructure

    Science.gov (United States)

    Belikov, Vitaly; Norin, Sergey; Aleksyuk, Andrey; Krylenko, Inna; Borisova, Natalya; Rumyantsev, Alexey

    2017-04-01

    Flood waves moving across urban areas have specific features. Linear infrastructure objects (such as embankments, roads and dams) can change the direction of flow or block the movement of water. On the contrary, paved avenues and wide streets in cities contribute to the concentration of flood waters. Buildings create an additional resistance to the movement of water, which depends on the urban density and the type of construction; this effect cannot be completely described by Manning's resistance law. In addition, the part of the earth surface occupied by buildings is excluded from the flooded area, which results in a substantial (relative to undeveloped areas) increase in the depth of flooding, especially under unsteady flow conditions. An approach to the numerical simulation of urban-area flooding is proposed that consists in directly allocating all buildings and structures on the computational grid. This can be done almost fully automatically with modern software, and the real geometry of all infrastructure objects can be taken into account on the basis of highly detailed digital maps and satellite images. The calculations are based on the two-dimensional Saint-Venant equations on irregular adaptive computational meshes, which can contain millions of cells and take into account tens of thousands of buildings and other infrastructure objects. Flood maps received as a result of the modeling are the basis for damage and risk assessment for urban areas. The main advantage of the developed method is high-precision calculation, realistic modeling results and appropriate graphical display of the flood dynamics and dam-break wave propagation in urban areas. The method has been verified against experimental data and simulations of real events, including the catastrophic flooding of the city of Krymsk in 2012.

  18. Biological parametric mapping accounting for random regressors with regression calibration and model II regression.

    Science.gov (United States)

    Yang, Xue; Lauzon, Carolyn B; Crainiceanu, Ciprian; Caffo, Brian; Resnick, Susan M; Landman, Bennett A

    2012-09-01

    Massively univariate regression and inference in the form of statistical parametric mapping have transformed the way in which multi-dimensional imaging data are studied. In functional and structural neuroimaging, the de facto standard "design matrix"-based general linear regression model and its multi-level cousins have enabled investigation of the biological basis of the human brain. With modern study designs, it is possible to acquire multi-modal three-dimensional assessments of the same individuals--e.g., structural, functional and quantitative magnetic resonance imaging, alongside functional and ligand binding maps with positron emission tomography. Largely, current statistical methods in the imaging community assume that the regressors are non-random. For more realistic multi-parametric assessment (e.g., voxel-wise modeling), distributional consideration of all observations is appropriate. Herein, we discuss two unified regression and inference approaches, model II regression and regression calibration, for use in massively univariate inference with imaging data. These methods use the design matrix paradigm and account for both random and non-random imaging regressors. We characterize these methods in simulation and illustrate their use on an empirical dataset. Both methods have been made readily available as a toolbox plug-in for the SPM software. Copyright © 2012 Elsevier Inc. All rights reserved.
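The attenuation problem the abstract targets can be shown on a single simulated "voxel": ordinary least squares treats the regressor as fixed and shrinks the slope when the regressor is noisy, while a model II method recovers it. The sketch below uses Deming regression with an assumed error-variance ratio of one and simulated numbers; it is not the SPM toolbox implementation:

```python
import numpy as np

# OLS vs. Deming (model II) regression with a noisy regressor.
rng = np.random.default_rng(4)
true_slope = 0.8
x_true = rng.normal(0, 2.0, 5000)
x_obs = x_true + rng.normal(0, 1.0, 5000)       # regressor measured with error
y_obs = true_slope * x_true + rng.normal(0, 1.0, 5000)

# OLS slope: attenuated by the factor var(x_true) / var(x_obs)
ols = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs, ddof=1)

# Deming slope, error-variance ratio delta = 1 (equal x and y noise)
sxx = np.var(x_obs, ddof=1)
syy = np.var(y_obs, ddof=1)
sxy = np.cov(x_obs, y_obs)[0, 1]
deming = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
```

Here the attenuation factor is 4/5, so OLS converges to about 0.64 while the Deming estimate stays near the true 0.8; regression calibration attacks the same bias by first estimating the regressor's measurement-error variance.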

  19. Context-Specific Proportion Congruency Effects: An Episodic Learning Account and Computational Model.

    Science.gov (United States)

    Schmidt, James R

    2016-01-01

    In the Stroop task, participants identify the print color of color words. The congruency effect is the observation that response times and errors are increased when the word and color are incongruent (e.g., the word "red" in green ink) relative to when they are congruent (e.g., "red" in red). The proportion congruent (PC) effect is the finding that congruency effects are reduced when trials are mostly incongruent rather than mostly congruent. This PC effect can be context-specific. For instance, if trials are mostly incongruent when presented in one location and mostly congruent when presented in another location, the congruency effect is smaller for the former location. Typically, PC effects are interpreted in terms of strategic control of attention in response to conflict, termed conflict adaptation or conflict monitoring. In the present manuscript, however, an episodic learning account is presented for context-specific proportion congruent (CSPC) effects. In particular, it is argued that context-specific contingency learning can explain part of the effect, and context-specific rhythmic responding can explain the rest. Both contingency-based and temporal-based learning can parsimoniously be conceptualized within an episodic learning framework. An adaptation of the Parallel Episodic Processing model is presented. This model successfully simulates CSPC effects, both for contingency-biased and contingency-unbiased (transfer) items. The same fixed-parameter model can explain a range of other findings from the learning, timing, binding, practice, and attentional control domains.

  20. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    Science.gov (United States)

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both. © 2015 Society for Risk Analysis.
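As a schematic illustration of such a projection (with made-up strata and prevalences, not the WISMUT data), smoking status can be resampled per stratum from a case-control sample, and the projection repeated many times to gauge the stability of downstream parameter estimates:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical case-control smoking indicators by age band (illustrative only)
cc_smoking = {"<50": [1, 1, 0, 1, 0, 1, 1, 0], ">=50": [1, 0, 0, 1, 0, 0, 1, 0]}

# Cohort members with unknown smoking status, keyed by the same strata
cohort_age_band = ["<50"] * 6 + [">=50"] * 4

def project_smoking(cohort, cc, rng):
    """One Monte Carlo projection: resample a smoking status per stratum."""
    return [rng.choice(cc[band]) for band in cohort]

# Repeat the projection; downstream, each projected cohort would be fed to
# the carcinogenesis model and the spread of fitted parameters inspected
projections = [project_smoking(cohort_age_band, cc_smoking, rng)
               for _ in range(200)]
mean_prev = float(np.mean(projections))   # average projected prevalence
```

In the study, 200 such independent projections yielded narrow parameter distributions, indicating results insensitive to any single projection.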

  1. The Models of Distance Forms of Learning in National Academy of Statistics, Accounting and Audit

    Directory of Open Access Journals (Sweden)

    L. V.

    2017-03-01

    Full Text Available Finding solutions to the problems faced by the Ukrainian education system requires an adequate organizational structure for the education system, enabling the transition to the principle of life-long education. The best option for this is distance learning systems (DLS), which are considered by leading Ukrainian universities as high-performance information technologies in modern education, envisaged by the National Informatization Program within the goals of reforming higher education in Ukraine in the context of joining the European educational area. The experience of implementing the DLS "Prometheus" and Moodle and the main directions of distance learning development at the National Academy of Statistics, Accounting and Audit (NASAA) are analyzed and summarized. Emphasis is placed on the need to improve the skills of teachers using open distance courses and to gradually prepare students for the learning process in the new conditions. The structure of distance courses for different forms of education (full-time, part-time, and blended) is built. The forms of blended learning (face-to-face driver, rotation model, flex model, etc.) are analyzed. A dynamic version of implementing blended learning models at NASAA using the DLS "Prometheus" and Moodle is presented. It is concluded that the experience of NASAA shows that the blended form of distance learning based on the Moodle platform is the most adequate to the requirements of Ukraine's development within the framework of European education.

  2. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    Science.gov (United States)

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-03-21

    Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range, color, filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance, Gaussian, signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased to 10-15 times the estimate of the internal noise level for a 3-fold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model thus accounts for both the appearance and discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, the results have implications for comparing perceptual and neural response functions.

  3. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems, the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forest trees over France, using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong misestimation of the modelled actual and future probability of presence. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  4. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based

  5. The cyclicality of loan loss provisions under three different accounting models: the United Kingdom, Spain, and Brazil

    Directory of Open Access Journals (Sweden)

    Antônio Maria Henri Beyle de Araújo

    2017-11-01

    Full Text Available ABSTRACT A controversy involving loan loss provisions in banks concerns their relationship with the business cycle. While international accounting standards for recognizing provisions (the incurred loss model) would presumably be pro-cyclical, accentuating the effects of the current economic cycle, an alternative model, the expected loss model, has countercyclical characteristics, acting as a buffer against economic imbalances caused by expansionary or contractionary phases in the economy. In Brazil, a mixed accounting model exists, whose behavior is not known to be pro-cyclical or countercyclical. The aim of this research is to analyze the behavior of these accounting models in relation to the business cycle, using an econometric model consisting of financial and macroeconomic variables. The study allowed us to identify the impact of credit risk behavior, earnings management, capital management, Gross Domestic Product (GDP) behavior, and the behavior of the unemployment rate on provisions in countries that use different accounting models. Data from commercial banks in the United Kingdom (incurred loss), Spain (expected loss), and Brazil (mixed model) were used, covering the period from 2001 to 2012. Despite the accounting models of the three countries being formed by very different rules regarding possible effects on the business cycles, the results revealed a pro-cyclical behavior of provisions in each country, indicating that when GDP grows, provisions tend to fall, and vice versa. The results also revealed other factors influencing the behavior of loan loss provisions, such as earnings management.

  6. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    DEFF Research Database (Denmark)

    Branlard, Emmanuel; Gaunaa, Mac; Machefaux, Ewan

    2014-01-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum(BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw-models. The model and its implementation are presented. Data...

  7. A hill-type muscle model expansion accounting for effects of varying transverse muscle load.

    Science.gov (United States)

    Siebert, Tobias; Stutzig, Norman; Rode, Christian

    2018-01-03

    Recent studies demonstrated that uniaxial transverse loading (F_G) of a rat gastrocnemius medialis muscle resulted in a considerable reduction of the maximum isometric muscle force (ΔF_im). A Hill-type muscle model assuming an identical gearing G between ΔF_im and F_G, as well as between the lifting height of the load (Δh) and longitudinal muscle shortening (Δl_CC), reproduced experimental data for a single load. Here we tested if this model is able to reproduce experimental changes in ΔF_im and Δh for increasing transverse loads (0.64 N, 1.13 N, 1.62 N, 2.11 N, 2.60 N). Three different gearing ratios were tested: (I) a constant G_c representing the idea of a muscle-specific gearing parameter (e.g. predefined by the muscle geometry), (II) G_exp determined in experiments with varying transverse load, and (III) G_f that reproduced the experimental ΔF_im for each transverse load. Simulations using G_c overestimated ΔF_im (by up to 59%) and Δh (by up to 136%) for increasing load. Although the model assumption (equal G for forces and length changes) held for the three lower loads using G_exp and G_f, simulations resulted in underestimation of ΔF_im by 38% and overestimation of Δh by 58% for the largest load, respectively. To simultaneously reproduce the experimental ΔF_im and Δh for the two larger loads, it was necessary to reduce F_im by 1.9% and 4.6%, respectively. The model seems applicable to account for effects of muscle deformation within a range of transverse loading when using a linear load-dependent function for G. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. A mass-density model can account for the size-weight illusion

    Science.gov (United States)

    Bergmann Tiest, Wouter M.; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: one estimate derived from the object's mass, and the other from the object's density, with the estimates' weights based on their relative reliabilities. While information about mass can be perceived directly, information about density will in some cases first have to be derived from mass and volume. However, according to our model, at the crucial perceptual level heaviness judgments will be biased by the object's density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: objects of the same density were perceived as more similar, and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced-choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness
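The reliability weighting at the heart of such a model can be written down compactly. The sketch below (illustrative numbers, using the standard optimal-weight formula for two correlated Gaussian cues; the function name and values are assumptions, not the paper's code) shows how the combined heaviness estimate shifts toward the more reliable cue:

```python
# Reliability-weighted combination of two heaviness estimates (mass, density).
# With rho = 0 this reduces to classic inverse-variance cue combination;
# a nonzero rho models the correlated noise discussed in the abstract.
def combined_heaviness(h_mass, h_density, sd_mass, sd_density, rho=0.0):
    var_m, var_d = sd_mass**2, sd_density**2
    cov = rho * sd_mass * sd_density
    # Optimal (minimum-variance) weight on the mass estimate
    w_m = (var_d - cov) / (var_m + var_d - 2 * cov)
    w_d = 1.0 - w_m
    return w_m * h_mass + w_d * h_density

# Equally reliable cues -> simple average of the two estimates
print(combined_heaviness(1.0, 2.0, sd_mass=0.2, sd_density=0.2))  # 1.5
```

Making the density estimate more reliable (smaller sd_density, as with better volume information) pulls the combined judgment toward the density-based estimate, mirroring the experimental bias.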

  9. Modeling the dynamic behavior of railway track taking into account the occurrence of defects in the system wheel-rail

    OpenAIRE

    Loktev Alexey; Sychev Vyacheslav; Gluzberg Boris; Gridasova Ekaterina

    2017-01-01

    This paper investigates the influence of wheel defects on the development of rail defects, up to a state where prompt rail replacement becomes necessary, taking into account different models of the dynamic contact between a wheel and a rail: in particular, the quasistatic Hertz model, the linear elastic model, and the elastoplastic Aleksandrov-Kadomtsev model. Based on the model of the wheel-rail contact, the maximum stresses are determined which arise in the rail in the presence of wheel de...

  10. A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models

    Science.gov (United States)

    Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele

    2017-11-01

    Extreme and rare events are a challenging topic in the field of turbulence. Investigating such instances with traditional numerical tools proves to be a notoriously difficult task, as these tools fail to systematically sample the fluctuations around them. We propose instead that an importance-sampling Monte Carlo method can selectively highlight extreme events in remote areas of phase space and induce their occurrence. We present a new computational approach, based on the path-integral formulation of stochastic dynamics, and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers equation, subjected to random noise that is white in time and power-law correlated in Fourier space, we prove our concept and benchmark our results against standard CFD methods. Furthermore, we present our first results of constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.
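The HMC machinery referred to here can be illustrated on a toy target. The sketch below is a standard HMC step for the quadratic action S(q) = q²/2 (a 1D stand-in for the discretized Burgers path-integral action; all parameters are illustrative): leapfrog integration of Hamilton's equations followed by a Metropolis accept/reject step.

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(q, eps=0.1, n_leap=20):
    """One HMC update for the target exp(-S(q)) with S(q) = q**2 / 2."""
    p = rng.normal()                       # refresh momentum
    q_new, p_new = q, p
    # Leapfrog integration of H = p**2/2 + S(q); here dS/dq = q
    p_new -= 0.5 * eps * q_new             # initial half kick
    for _ in range(n_leap - 1):
        q_new += eps * p_new               # drift
        p_new -= eps * q_new               # full kick
    q_new += eps * p_new
    p_new -= 0.5 * eps * q_new             # final half kick
    # Metropolis correction keeps the target distribution exact
    dH = 0.5 * (p_new**2 - p**2) + 0.5 * (q_new**2 - q**2)
    return q_new if rng.random() < np.exp(-dH) else q

samples = []
q = 0.0
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
# samples approximate a standard normal draw from exp(-q**2 / 2)
```

In the path-integral setting, q becomes the whole discretized noise/velocity field and S the Onsager-Machlup action, but the step structure is the same.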

  11. Historical Account to the State of the Art in Debris Flow Modeling

    Science.gov (United States)

    Pudasaini, Shiva P.

    2013-04-01

    In this contribution, I present a historical account of debris flow modelling leading to the state of the art in simulations and applications. A generalized two-phase model is presented that unifies existing avalanche and debris flow theories. The new model (Pudasaini, 2012) covers both the single-phase and two-phase scenarios and includes many essential and observable physical phenomena. In this model, the solid-phase stress is closed by Mohr-Coulomb plasticity, while the fluid stress is modeled as a non-Newtonian viscous stress that is enhanced by the solid-volume-fraction gradient. A generalized interfacial momentum transfer includes viscous drag, buoyancy and virtual mass forces, and a new generalized drag force is introduced to cover both solid-like and fluid-like drags. Strong couplings between solid and fluid momentum transfer are observed. The two-phase model is further extended to describe the dynamics of rock-ice avalanches with new mechanical models. This model explains dynamic strength weakening and includes internal fluidization, basal lubrication, and exchanges of mass and momentum. The advantages of the two-phase model over classical (effectively single-phase) models are discussed. Advection and diffusion of the fluid through the solid are associated with non-linear fluxes. Several exact solutions are constructed, including the non-linear advection-diffusion of fluid, kinematic waves of debris flow front and deposition, phase-wave speeds, and velocity distribution through the flow depth and through the channel length. The new model is employed to study two-phase subaerial and submarine debris flows, the tsunami generated by the debris impact at lakes/oceans, and rock-ice avalanches. Simulation results show that buoyancy enhances flow mobility. The virtual mass force alters flow dynamics by increasing the kinetic energy of the fluid. Newtonian viscous stress substantially reduces flow deformation, whereas non-Newtonian viscous stress may change the

  12. ACCOUNTING HARMONIZATION AND HISTORICAL COST ACCOUNTING

    OpenAIRE

    Valentin Gabriel CRISTEA

    2017-01-01

    There is a huge interest in accounting harmonization and historical costs accounting, in what they offer us. In this article, different valuation models are discussed. Although one notices the movement from historical cost accounting to fair value accounting, each one has its advantages.

  13. ACCOUNTING HARMONIZATION AND HISTORICAL COST ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Valentin Gabriel CRISTEA

    2017-05-01

    Full Text Available There is a huge interest in accounting harmonization and historical costs accounting, in what they offer us. In this article, different valuation models are discussed. Although one notices the movement from historical cost accounting to fair value accounting, each one has its advantages.

  14. Development of the Mathematical Model of Diesel Fuel Catalytic Dewaxing Process Taking into Account Factors of Nonstationarity

    Directory of Open Access Journals (Sweden)

    Frantsina Evgeniya

    2016-01-01

    Full Text Available The paper describes the results of mathematical modelling of the diesel fuel catalytic dewaxing process, performed taking into account the factors of process nonstationarity driven by changes in process technological parameters, feedstock composition, and catalyst deactivation. The error of the hydrocarbon content calculated with the developed model does not exceed 1.6 wt.%. This makes it possible to apply the model to optimization and forecasting problems occurring in catalytic systems under industrial conditions. Model calculations showed that the temperature in the dewaxing reactor without catalyst deactivation would be 19 °C lower than the actual temperature, and that the degree of catalyst deactivation amounts to 32%.

  15. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    Science.gov (United States)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied at different spatial resolutions, including 1° to 5° grids as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to smaller errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS
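The effect of localization on a noisy ensemble covariance can be sketched in a few lines. This toy example (a Gaussian taper on a 1D grid, standing in for the full LA/SQRA machinery of the study; all sizes and the taper choice are illustrative) damps spurious long-range covariances while leaving the variances untouched:

```python
import numpy as np

rng = np.random.default_rng(1)

# Small ensemble on a 1D grid: the sample covariance is noisy and shows
# spurious long-range correlations due to the limited ensemble size
n_grid, n_ens = 20, 10
ensemble = rng.normal(size=(n_ens, n_grid))
anom = ensemble - ensemble.mean(axis=0)
P = anom.T @ anom / (n_ens - 1)            # raw sample covariance

# Covariance localization: Schur (element-wise) product with a
# distance-based taper; Gaspari-Cohn is a common alternative to this
# simple Gaussian form
dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
L = 5.0                                     # localization radius (grid units)
taper = np.exp(-(dist / L) ** 2)
P_loc = P * taper                           # localized covariance

# Long-range entries are damped toward zero; diagonal variances unchanged
```

In the assimilation itself, the localized covariance (or a local analysis window per grid cell) restricts each observation's influence to its neighborhood, which is what stabilizes the inversion.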

  16. Mathematical model of quasi-equilibrium counter-flow processes of rare earth metal separation by solvent extraction when varying the composition of initial raw materials

    International Nuclear Information System (INIS)

    Pyartman, A.K.; Puzikov, E.A.

    1994-01-01

    A mathematical model is suggested for describing the distribution of rare earth metals(III), depending on the number of contact steps, for quasi-equilibrium counter-flow processes of rare earth separation by solvent extraction when varying the composition of the initial raw material. An algorithm for computer calculation is provided. The mathematical model has been employed to choose the optimal conditions for didymium concentrate separation. 7 refs.; 3 figs.; 1 tab

  17. A mathematical model of quasi-equilibrium counter-flow processes of rare earth metal separation by solvent extraction when varying the composition of raw material

    International Nuclear Information System (INIS)

    Pyartman, A.K.; Puzikov, E.A.; Kopyrin, A.A.

    1994-01-01

    A mathematical model is suggested for describing the distribution of rare earth metals(III), depending on the number of contact steps, for quasi-equilibrium counter-flow processes of rare earth metal separation by solvent extraction when varying the composition of the initial raw material. An algorithm for computer calculation is provided. The mathematical model is employed for selecting the optimal conditions of didymium concentrate separation. 7 refs., 3 figs., 1 tab

  18. Short-run analysis of fiscal policy and the current account in a finite horizon model

    OpenAIRE

    Heng-fu Zou

    1995-01-01

    This paper utilizes a technique developed by Judd to quantify the short-run effects of fiscal policies and income shocks on the current account in a small open economy. It is found that: (1) a future increase in government spending improves the short-run current account; (2) a future tax increase worsens the short-run current account; (3) a present increase in government spending worsens the short-run current account dollar for dollar, while a present increase in income improves the cu...

  19. A near-real-time material accountancy model and its preliminary demonstration in the Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ikawa, K.; Ihara, H.; Nishimura, H.; Tsutsumi, M.; Sawahata, T.

    1983-01-01

    The study of a near-real-time (n.r.t.) material accountancy system as applied to small or medium-sized spent fuel reprocessing facilities has been carried out since 1978 under the TASTEX programme. In this study, a model of the n.r.t. accountancy system, called the ten-day-detection-time model, was developed and demonstrated in the actual operating plant. The programme was closed in May 1981, but the study has been extended. The effectiveness of the proposed n.r.t. accountancy model was evaluated by means of simulation techniques. The results showed that weekly material balances covering the entire process MBA could provide sufficient information to satisfy the IAEA guidelines for small or medium-sized facilities. The applicability of the model to the actual plant has been evaluated by a series of field tests covering four campaigns. In addition to the material accountancy data, many valuable operational data with regard to additional locations for an in-process inventory, the time needed for an in-process inventory, etc., have been obtained. A CUMUF (cumulative MUF) chart of the resulting MUF data in the C-1 and C-2 campaigns clearly showed that there had been a measurement bias across the process MBA. This chart gave a dramatic picture of the power of the n.r.t. accountancy concept by showing the nature of this bias, which was not clearly shown in the conventional material accountancy data. (author)

  20. Highly sensitive measurements of disease progression in rare disorders: Developing and validating a multimodal model of retinal degeneration in Stargardt disease.

    Science.gov (United States)

    Lambertus, Stanley; Bax, Nathalie M; Fakin, Ana; Groenewoud, Joannes M M; Klevering, B Jeroen; Moore, Anthony T; Michaelides, Michel; Webster, Andrew R; van der Wilt, Gert Jan; Hoyng, Carel B

    2017-01-01

    Each inherited retinal disorder is rare, but together, they affect millions of people worldwide. No treatment is currently available for these blinding diseases, but promising new options, including gene therapy, are emerging. Arguably, the most prevalent retinal dystrophy is Stargardt disease. In each case, the specific combination of ABCA4 variants (> 900 identified to date) and modifying factors is virtually unique. It accounts for the vast phenotypic heterogeneity including variable rates of functional and structural progression, thereby potentially limiting the ability of phase I/II clinical trials to assess efficacy of novel therapies with few patients. To accommodate this problem, we developed and validated a sensitive and reliable composite clinical trial endpoint for disease progression based on structural measurements of retinal degeneration. We used longitudinal data from early-onset Stargardt patients from the Netherlands (development cohort, n = 14) and the United Kingdom (external validation cohort, n = 18). The composite endpoint was derived from best-corrected visual acuity, fundus autofluorescence, and spectral-domain optical coherence tomography. Weighting optimization techniques excluded visual acuity from the composite endpoint. After optimization, the endpoint outperformed each univariable outcome, and showed an average progression of 0.41° retinal eccentricity per year (95% confidence interval, 0.30-0.52). Compared with actual longitudinal values, the model accurately predicted progression (R2, 0.904). These properties were largely preserved in the validation cohort (0.43°/year [0.33-0.53]; prediction: R2, 0.872). We subsequently ran a two-year trial simulation with the composite endpoint, which detected a 25% decrease in disease progression with 80% statistical power using only 14 patients. These results suggest that a multimodal endpoint, reflecting structural macular changes, provides a sensitive measurement of disease progression in

  1. Highly sensitive measurements of disease progression in rare disorders: Developing and validating a multimodal model of retinal degeneration in Stargardt disease.

    Directory of Open Access Journals (Sweden)

    Stanley Lambertus

    Full Text Available Each inherited retinal disorder is rare, but together, they affect millions of people worldwide. No treatment is currently available for these blinding diseases, but promising new options, including gene therapy, are emerging. Arguably, the most prevalent retinal dystrophy is Stargardt disease. In each case, the specific combination of ABCA4 variants (> 900 identified to date) and modifying factors is virtually unique. It accounts for the vast phenotypic heterogeneity including variable rates of functional and structural progression, thereby potentially limiting the ability of phase I/II clinical trials to assess efficacy of novel therapies with few patients. To accommodate this problem, we developed and validated a sensitive and reliable composite clinical trial endpoint for disease progression based on structural measurements of retinal degeneration. We used longitudinal data from early-onset Stargardt patients from the Netherlands (development cohort, n = 14) and the United Kingdom (external validation cohort, n = 18). The composite endpoint was derived from best-corrected visual acuity, fundus autofluorescence, and spectral-domain optical coherence tomography. Weighting optimization techniques excluded visual acuity from the composite endpoint. After optimization, the endpoint outperformed each univariable outcome, and showed an average progression of 0.41° retinal eccentricity per year (95% confidence interval, 0.30-0.52). Compared with actual longitudinal values, the model accurately predicted progression (R2, 0.904). These properties were largely preserved in the validation cohort (0.43°/year [0.33-0.53]; prediction: R2, 0.872). We subsequently ran a two-year trial simulation with the composite endpoint, which detected a 25% decrease in disease progression with 80% statistical power using only 14 patients. These results suggest that a multimodal endpoint, reflecting structural macular changes, provides a sensitive measurement of disease

  2. Top quark rare decays via loop-induced FCNC interactions in extended mirror fermion model

    Science.gov (United States)

    Hung, P. Q.; Lin, Yu-Xiang; Nugroho, Chrisna Setyo; Yuan, Tzu-Chiang

    2018-02-01

    Flavor changing neutral current (FCNC) interactions in which a top quark t decays into Xq, where X represents a neutral gauge or Higgs boson and q an up or charm quark, are highly suppressed in the Standard Model (SM) due to the Glashow-Iliopoulos-Maiani mechanism. While current limits on the branching ratios of these processes have been established at the order of 10⁻⁴ by the Large Hadron Collider experiments, SM predictions are at least nine orders of magnitude below. In this work, we study some of these FCNC processes in the context of an extended mirror fermion model, originally proposed to implement the electroweak-scale seesaw mechanism for non-sterile right-handed neutrinos. We show that one can probe the process t → Zc for a wide range of parameter space, with branching ratios varying from 10⁻⁶ to 10⁻⁸, comparable with various new physics models including the general two-Higgs-doublet model with or without flavor violation at tree level, the minimal supersymmetric standard model with or without R-parity, and extra-dimension models.

  3. A mathematical multiscale model of bone remodeling, accounting for pore space-specific mechanosensation.

    Science.gov (United States)

    Pastrama, Maria-Ioana; Scheiner, Stefan; Pivonka, Peter; Hellmich, Christian

    2018-02-01

    While bone tissue is a hierarchically organized material, mathematical formulations of bone remodeling are often defined on the level of a millimeter-sized representative volume element (RVE), "smeared" over all types of bone microstructures seen at lower observation scales. Thus, there is no explicit consideration of the fact that the biological cells and biochemical factors driving bone remodeling are actually located in differently sized pore spaces: active osteoblasts and osteoclasts can be found in the vascular pores, whereas the lacunar pores host osteocytes, bone cells originating from former osteoblasts that were "buried" in newly deposited extracellular bone matrix. We here propose a mathematical description that considers the size and shape of the pore spaces where the biological and biochemical events take place. In particular, a previously published systems biology formulation, accounting for biochemical regulatory mechanisms such as the RANK-RANKL-OPG pathway, is cast into a multiscale framework coupled to a poromicromechanical model. The latter gives access to the vascular and lacunar pore pressures arising from macroscopic loading. Extensive experimental data on the biological consequences of this loading strongly suggest that the aforementioned pore pressures, together with the loading frequency, are essential drivers of bone remodeling. The novel approach presented here allows for satisfactory simulation of the evolution of bone tissue under various loading conditions and for different species, including scenarios such as mechanical disuse and overuse of murine and human bone, or remodeling in osteocyte-free bone. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Modeling Occupancy of Hosts by Mistletoe Seeds after Accounting for Imperfect Detectability

    Science.gov (United States)

    Fadini, Rodrigo F.; Cintra, Renato

    2015-01-01

    The detection of an organism at a given site is widely used as a state variable in many metapopulation and epidemiological studies. However, failure to detect the species does not necessarily mean that it is absent. Assessing detectability is important for occupancy (presence-absence) surveys, and identifying the factors that reduce detectability may help improve survey precision and efficiency. A method was used to estimate the occupancy status of host trees colonized by mistletoe seeds of Psittacanthus plagiophyllus as a function of host covariates: host size and presence of mistletoe infections on the same or on the nearest neighboring host (the cashew tree Anacardium occidentale). The technique also evaluated the effect of taking detectability into account when estimating host occupancy by mistletoe seeds. Individual host trees were surveyed for the presence of mistletoe seeds with the aid of two or three observers to estimate detectability and occupancy. Detectability was, on average, 17% higher in focal-host trees with infected neighbors, while it decreased by about 23 to 50% from the smallest to the largest hosts. The presence of mistletoe plants in the sample tree had a negligible effect on detectability. Failure to detect hosts as occupied decreased occupancy by 2.5% on average, with a maximum of 10% for large and isolated hosts. The method presented in this study has potential for use in metapopulation studies of mistletoes, especially those focusing on the seed stage, and for improving the accuracy of occupancy estimates often used in studies of the metapopulation dynamics of tree-dwelling plants in general. PMID:25973754
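    The correction described above can be illustrated with a minimal two-parameter occupancy likelihood: constant occupancy psi and constant per-survey detection p, in a simplified MacKenzie-style formulation. The simulated numbers and the crude grid-search fit below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def occupancy_loglik(histories, psi, p):
    """Log-likelihood of a constant-psi, constant-p occupancy model.
    Each history is (x, k): x detections out of k independent surveys."""
    ll = 0.0
    for x, k in histories:
        if x > 0:
            # At least one detection: the site must be occupied.
            ll += math.log(psi) + x * math.log(p) + (k - x) * math.log(1 - p)
        else:
            # No detections: occupied-but-missed, or truly unoccupied.
            ll += math.log(psi * (1 - p) ** k + (1 - psi))
    return ll

def fit_grid(histories, step=0.02):
    """Crude maximum-likelihood fit over a (psi, p) parameter grid."""
    best = (None, None, float("-inf"))
    grid = [i * step for i in range(1, int(1 / step))]
    for psi in grid:
        for p in grid:
            ll = occupancy_loglik(histories, psi, p)
            if ll > best[2]:
                best = (psi, p, ll)
    return best[:2]

# Simulate 200 host trees, each checked by two observers
# (true occupancy 0.7, true per-observer detection 0.6).
random.seed(1)
true_psi, true_p, k = 0.7, 0.6, 2
histories = []
for _ in range(200):
    occupied = random.random() < true_psi
    x = sum(random.random() < true_p for _ in range(k)) if occupied else 0
    histories.append((x, k))

psi_hat, p_hat = fit_grid(histories)
```

    The naive occupancy estimate (the share of sites with at least one detection) systematically underestimates psi whenever p < 1; the joint likelihood recovers both parameters from the same survey data.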

  5. Associative account of self-cognition: extended forward model and multi-layer structure

    Directory of Open Access Journals (Sweden)

    Motoaki Sugiura

    2013-08-01

    The neural correlates of the self identified by neuroimaging studies differ depending on which aspects of the self are addressed. Here, three categories of self are proposed based on neuroimaging findings and an evaluation of the likely underlying cognitive processes. The physical self, representing self-agency of action, body ownership, and bodily self-recognition, is supported by the sensory and motor association cortices located primarily in the right hemisphere. The interpersonal self, representing the attention or intentions of others directed at the self, is supported by several amodal association cortices in the dorsomedial frontal and lateral posterior cortices. The social self, representing the self as a collection of context-dependent social values, is supported by the ventral aspect of the medial prefrontal cortex and the posterior cingulate cortex. Despite differences in the underlying cognitive processes and neural substrates, all three categories of self are likely to share the computational characteristics of the forward model, which is underpinned by an internal schema, or learned associations between one's behavioral output and the consequential input. Additionally, these three categories exist within a hierarchical layer structure, based on developmental processes, that updates the schema through the attribution of prediction error. In this account, most of the association cortices critically contribute to some aspect of the self through associative learning, while the primary regions involved shift from the lateral to the medial cortices in a sequence from the physical to the interpersonal to the social self.

  6. Taking into account hydrological modelling uncertainty in Mediterranean flash-floods forecasting

    Science.gov (United States)

    Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique

    2015-04-01

    Mediterranean intense weather events often lead to devastating flash floods (FF). Increasing the lead time of FF forecasts would permit better anticipation of their catastrophic consequences. These events are one part of the Mediterranean hydrological cycle. HyMeX (HYdrological cycle in the Mediterranean EXperiment) aims at a better understanding and quantification of the hydrological cycle and related processes in the Mediterranean. Extensive measurement campaigns were conducted to collect data. The first special observing period (SOP1) of these campaigns served as a test-bed for a real-time hydrological ensemble prediction system (HEPS) dedicated to FF forecasting. It produced an ensemble of quantitative discharge forecasts (QDF) using the ISBA-TOP system. ISBA-TOP is a coupling between the surface scheme ISBA and a version of TOPMODEL dedicated to Mediterranean fast-responding rivers. ISBA-TOP was driven with several quantitative precipitation forecast (QPF) ensembles based on the AROME atmospheric convection-permitting model. This made it possible to take into account the uncertainty that affects QPF and that propagates up to the QDF. This uncertainty is major for discharge forecasting, especially in the case of Mediterranean flash floods. But other sources of uncertainty need to be sampled in HEPS systems. One of them is inherent to the hydrological modelling. The ISBA-TOP coupled system has been improved since the initial version that was used, for instance, during the HyMeX SOP1. The initial ISBA-TOP consisted in coupling a TOPMODEL approach with ISBA-3L, which represented the soil stratification with 3 layers. The new version consists in coupling the same TOPMODEL approach with a version of ISBA where more than ten layers describe the soil vertical

  7. Investigation of the shape of the imaginary part of the optical-model potential for electron scattering by rare gases

    International Nuclear Information System (INIS)

    Staszewska, G.; Schwenke, D.W.; Truhlar, D.G.

    1984-01-01

    We present a comparative study of several empirical and nonempirical models for the absorption potential, which is the imaginary part of an optical-model potential, for electron scattering by rare gases. We show that the elastic differential cross section is most sensitive to the absorption potential at high impact energies and large scattering angles. We compare differential cross sections calculated with several models for the absorption potential and with several arbitrary modifications of these model potentials. We are able to associate the effect of the absorption potential on the elastic differential cross section with its form at small electron-atom distances r, and we are able to deduce various qualitative features that the absorption potential must possess at small and large r in order to predict both accurate differential cross sections and accurate absorption cross sections. Based on these observations, the Pauli blocking conditions of the quasifree scattering model for the absorption potential are modified empirically, producing a more accurate model that may be applied to other systems, e.g., electron-molecule scattering, with no adjustable parameters

  8. Design of a Competency-Based Assessment Model in the Field of Accounting

    Science.gov (United States)

    Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús

    2012-01-01

    This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…

  9. Improving the assessment and reporting on rare and endangered species through species distribution models

    Directory of Open Access Journals (Sweden)

    Rita Sousa-Silva

    2014-12-01

    The objective was to highlight the potential of SDMs for the assessment of threatened species within the periodical reporting on their conservation status. We used a spatially explicit modeling approach that predicts species distributions by spatially combining two SDMs: one fitted with climate data alone and the other fitted solely with landscape variables. A comparison between the modeled distribution and the range obtained by classical methods (minimum convex polygon and Range Tool) is also presented. Our results show that while data-based approaches only consider the species' known distribution, model-based methods allow a more complete evaluation of species distributions and their dynamics, as well as of the underlying pressures. This will ultimately improve the accuracy and usefulness of assessments in the context of EU reporting obligations.
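    The spatial combination of two SDMs can be sketched as a cell-wise operation on two suitability surfaces. The rule used below (taking the minimum, so that a cell is only as suitable as its most limiting factor) and the toy grids are assumptions for illustration; the record does not state the exact combination rule:

```python
def combine_suitability(climate, landscape, rule=min):
    """Combine a climate-only and a landscape-only suitability surface
    (row-major grids of values in [0, 1]) cell by cell."""
    if len(climate) != len(landscape) or any(
        len(r1) != len(r2) for r1, r2 in zip(climate, landscape)
    ):
        raise ValueError("grids must have identical shape")
    return [
        [rule(c, l) for c, l in zip(crow, lrow)]
        for crow, lrow in zip(climate, landscape)
    ]

# Hypothetical 2x2 suitability grids
climate = [[0.9, 0.2], [0.6, 0.8]]
landscape = [[0.4, 0.9], [0.7, 0.1]]
combined = combine_suitability(climate, landscape)

# A binary predicted range then follows from thresholding the combined map.
predicted = [[v >= 0.5 for v in row] for row in combined]
```

    With the minimum rule, only cells favourable on both the climate and the landscape axes remain in the predicted range; a product rule (`rule=lambda c, l: c * l`) would instead penalize cells gradually.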

  10. Photoemission in the Anderson model for rare earth compounds with floating valences

    International Nuclear Information System (INIS)

    Frota, H.O. da.

    1985-01-01

    X-ray photoemission spectra (XPS) are calculated for the spin-degenerate Anderson model of valence fluctuation compounds. Based on the renormalization-group technique originally introduced by Wilson to calculate the magnetic susceptibility for the Kondo model, the numerical calculation has uniform accuracy over the entire parameter space of the Anderson model; at any given photoelectron energy, a maximum error of 4% is estimated for the calculated photoemission current. The calculated spectra display two peaks associated with the two possible x-ray induced transitions between the n_f = 0, 1, or 2 occupations of the f-orbital: a first ionization peak corresponding to the n_f = 2 → n_f = 1 transition and a second ionization peak due to the n_f = 1 → n_f = 0 transition. (author)

  11. Pharmacokinetic Modeling of Manganese III. Physiological Approaches Accounting for Background and Tracer Kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Teeguarden, Justin G.; Gearhart, Jeffrey; Clewell, III, H. J.; Covington, Tammie R.; Nong, Andy; Anderson, Melvin E.

    2007-01-01

    assessments (Dixit et al., 2003). With most exogenous compounds, there is often no background exposure, and body concentrations are not under active control by homeostatic processes as occurs with essential nutrients. Any complete Mn PBPK model would include the homeostatic regulation of Mn as an essential nutritional element and the additional exposure route by inhalation. Two companion papers discuss the kinetic complexities of the quantitative dose-dependent alterations in hepatic and intestinal processes that control uptake and elimination of Mn (Teeguarden et al., 2006a, b). Radioactive 54Mn has been used to investigate the behavior of the more common 55Mn isotope in the body, because the distribution and elimination of tracer doses reflect the overall distributional characteristics of Mn. In this paper, we take the first steps in developing a multi-route PBPK model for Mn. Here we develop a PBPK model to account for tissue concentrations and tracer kinetics of Mn under normal dietary intake. This model for normal levels of Mn will serve as the starting point for more complete model descriptions that include dose-dependencies in both oral uptake and biliary excretion. Materials and Methods. Experimental Data. Two studies using 54Mn tracer were employed in model development (Furchner et al. 1966; Wieczorek and Oberdorster 1989). In Furchner et al. (1966), male Sprague-Dawley rats received an ip injection of carrier-free 54MnCl2 while maintained on standard rodent feed containing ~45 ppm Mn. Tissue radioactivity of 54Mn was measured by liquid scintillation counting between post-injection days 1 and 89 and reported as percent of administered dose per kg tissue. 54Mn time courses were reported for liver, kidney, bone, brain, muscle, blood, lung, and whole body. Because ip uptake is via the portal circulation to the liver, this data set had information on the distribution and clearance behaviors of Mn entering the systemic circulation from the liver.

  12. Materials measurement and accounting in an operating plutonium conversion and purification process. Phase I. Process modeling and simulation

    International Nuclear Information System (INIS)

    Thomas, C.C. Jr.; Ostenak, C.A.; Gutmacher, R.G.; Dayem, H.A.; Kern, E.A.

    1981-04-01

    A model of an operating conversion and purification process for the production of reactor-grade plutonium dioxide was developed as the first component in the design and evaluation of a nuclear materials measurement and accountability system. The model accurately simulates process operation and can be used to identify process problems and to predict the effect of process modifications

  13. The importance of data quality for generating reliable distribution models for rare, elusive, and cryptic species

    Science.gov (United States)

    Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey

    2017-01-01

    The availability of spatially referenced environmental data and species occurrence records in online databases enables practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...

  14. Modeling Rare Species Distribution at the Edge: The Case for the Vulnerable Endemic Pyrenean Desman in France

    Directory of Open Access Journals (Sweden)

    M. Williams-Tripp

    2012-01-01

    The endemic Pyrenean Desman (Galemys pyrenaicus) is an elusive, rare, and vulnerable species declining over its entire and narrow range (Spain, Portugal, France, and Andorra). The principal set of conservation measures in France is a 5-year National Action Plan based on 25 conservation actions. Priority is given to updating its present distribution and developing tools for predictive distribution models. We aim at building the first species distribution model and map for the northern edge of the range of the Desman, and at confronting the outputs of the model to target conservation efforts in the context of environmental change. In contrast to comparable earlier studies, we derive a simpler model emphasizing the importance of factors linked to precipitation rather than temperature. Although temperature is one of the key climate change factors, the predicted shrinkage in Desman distribution could be lower or null at the northern (French) edge, suggesting a major role for this northern population in terms of conservation of the species. Finally, we question the applied issue of temporal and spatial transferability of such environmental favourability models when they are built at the edge of the distribution range.

  15. Fracture criteria under creep with strain history taken into account, and long-term strength modelling

    Science.gov (United States)

    Khokhlov, A. V.

    2009-08-01

    In the present paper, we continue to study the nonlinear constitutive relation (CR) between stress and strain proposed in [1] to describe one-dimensional isothermal rheological processes in the case of monotone variation of the strain (in particular, relaxation, creep, plasticity, and superplasticity). We show that this CR together with the strain fracture criterion (FC) leads to theoretical long-term strength curves (LSC) with the same qualitative properties as the typical experimental LSC of viscoelastoplastic materials. We propose two parametric families of fracture criteria in the case of monotone uniaxial strain, which are related to the strain fracture criterion (SFC) but take into account the strain increase history and the dependence of the critical strain on the stress. Instead of the current strain, they use other measures of damage, related to the strain history by time-dependent integral operators. For any values of the material parameters, analytic studies of these criteria allowed us to establish several useful properties, which confirm that they can be used to describe the creep fracture of different materials. In particular, we prove that, together with the proposed constitutive relations, these FC lead to theoretical long-term strength curves (TLSC) with the same qualitative properties as the experimental LSC. It is important that each of the constructed families of FC forms a monotone and continuous scale of criteria (depending monotonically and continuously on a real parameter) that contains the SFC as a limit case. Moreover, the criteria in the first family always give a fracture time greater than that given by the SFC, the criteria in the second family always give a smaller fracture time, and the difference can be made arbitrarily small by choosing the value of the control parameter near the scale end. This property is very useful for a more accurate adjustment of the model to the existing experimental data describing the

  16. Busy period analysis, rare events and transient behavior in fluid flow models

    Directory of Open Access Journals (Sweden)

    Søren Asmussen

    1994-01-01

    We consider a process {(Jt, Vt)}t≥0 on E×[0,∞), such that {Jt} is a Markov process with finite state space E, and {Vt} has a linear drift ri on intervals where Jt = i, with reflection at 0. Such a process arises as a fluid flow model of current interest in telecommunications engineering for the purpose of modeling ATM technology. We compute the mean of the busy period and related first passage times, show that the probability of buffer overflow within a busy cycle is approximately exponential, and give conditioned limit theorems for the busy cycle with implications for quick simulation. Further, various inequalities and approximations for transient behavior are given, and explicit expressions for the Laplace transform of the busy period are found. Mathematically, the key tools are first passage probabilities and exponential change of measure for Markov additive processes.

  17. hiPSC Disease Modeling of Rare Hereditary Cerebellar Ataxias: Opportunities and Future Challenges

    Czech Academy of Sciences Publication Activity Database

    Lukovic, D.; Moreno-Manzano, V.; Rodríguez-Jiménez, F.J.; Vilches, A.; Syková, Eva; Jendelová, Pavla; Stojkovic, M.; Erceg, Slaven

    2017-01-01

    Vol. 23, No. 5 (2017), pp. 554-566. ISSN 1073-8584 R&D Projects: GA ČR(CZ) GBP304/12/G069; GA MŠk(CZ) LO1309; GA MŠk(CZ) ED1.1.00/02.0109 Institutional support: RVO:68378041 Keywords: 3D organoids * ataxia * disease modelling Subject RIV: EB - Genetics; Molecular Biology OBOR OECD: Developmental biology Impact factor: 7.391, year: 2016

  18. Rare top quark decays in Alternative Left-Right Symmetric Models

    International Nuclear Information System (INIS)

    Gaitan, R.; Miranda, O. G.; Cabral-Rosetti, L. G.

    2007-01-01

    We evaluate the flavor changing neutral current (FCNC) decay t → H0 + c in the context of Alternative Left-Right symmetric Models (ALRM) with extra isosinglet heavy fermions; the FCNC decays may take place at tree level and are suppressed only by the mixing between ordinary top and charm quarks. We also comment on the decay process t → c + γ, which involves radiative corrections

  19. Accounting for water management issues within hydrological simulation: Alternative modelling options and a network optimization approach

    Science.gov (United States)

    Efstratiadis, Andreas; Nalbantis, Ioannis; Rozos, Evangelos; Koutsoyiannis, Demetris

    2010-05-01

    In mixed natural and artificialized river basins, many complexities arise due to anthropogenic interventions in the hydrological cycle, including abstractions from surface water bodies, groundwater pumping or recharge, and water returns through drainage systems. Typical engineering approaches adopt a multi-stage modelling procedure, with the aim of handling the complexity of process interactions and the lack of measured abstractions. In such a context, the entire hydrosystem is separated into natural and artificial sub-systems or components; the natural ones are modelled individually, and their predictions (i.e. hydrological fluxes) are transferred to the artificial components as inputs to a water management scheme. To account for the interactions between the various components, an iterative procedure is essential, whereby the outputs of the artificial sub-systems (i.e. abstractions) become inputs to the natural ones. However, this strategy suffers from multiple shortcomings, since it presupposes that purely natural sub-systems can be located and that sufficient information is available for each sub-system modelled, including suitable, i.e. "unmodified", data for calibrating the hydrological component. In addition, implementing such a strategy is ineffective when the entire scheme runs in stochastic simulation mode. To cope with the above drawbacks, we developed a generalized modelling framework following a network optimization approach. This originates from graph theory, which has been successfully implemented within some advanced computer packages for water resource systems analysis. The user formulates a unified system comprising the hydrographical network and the typical components of a water management network (aqueducts, pumps, junctions, demand nodes, etc.). Input data for the latter include hydraulic properties, constraints, targets, priorities, and operation costs. The real-world system is described through a conceptual graph, whose dummy properties

  20. Process Accounting

    OpenAIRE

    Gilbertson, Keith

    2002-01-01

    Standard utilities can help you collect and interpret your Linux system's process accounting data. This article describes the uses of process accounting, the standard process accounting commands, and example code that makes use of process accounting utilities.

  1. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards

    International Nuclear Information System (INIS)

    Xavier, Roberto Salles

    2014-01-01

    The subjects of this research are accountability regimes, organizations of global governance, and the institutional arrangements of the global governance of nuclear non-proliferation and of the Brazilian-Argentine model of mutual vigilance for nuclear safeguards. The starting point is the importance of the institutional model of global governance for the effective control of the non-proliferation of nuclear weapons. In this context, the research investigates how the current arrangements of international nuclear non-proliferation are structured and how the Brazilian-Argentine model of mutual vigilance for nuclear safeguards performs in relation to the accountability regimes of global governance. To this end, the current literature on three theoretical dimensions was reviewed: accountability, global governance, and global governance organizations. The research method was the case study, with content analysis as the data treatment technique. The results made it possible to establish an evaluation model based on accountability mechanisms; to assess how the Brazilian-Argentine model of mutual vigilance for nuclear safeguards behaves with respect to the proposed accountability regime; and to measure the degree to which regional arrangements that work with systems of global governance can strengthen these international systems. (author)

  2. Microwave-Assisted Adsorptive Desulfurization of Model Diesel Fuel Using Synthesized Microporous Rare Earth Metal-Doped Zeolite Y

    Directory of Open Access Journals (Sweden)

    N. Salahudeen

    2015-06-01

    The microwave-assisted adsorptive desulfurization of a model fuel (thiophene in n-heptane) was investigated using a synthesized rare earth metal-doped zeolite Y (RE-Y). The crystallinity of the synthesized zeolite was 89.5%, the silicon/aluminium (Si/Al) molar ratio was 5.2, the Brunauer-Emmett-Teller (BET) surface area was 980.9 m²/g, and the pore volume and diameter were 0.3494 cm³/g and 1.425 nm, respectively. The results showed that the microwave reactor could be used to enhance the adsorptive desulfurization process, with a best efficiency of 75% at reaction conditions of 100 °C and 15 minutes. The high desulfurization effect was likely due to the greater efficiency of microwave energy in promoting the interaction between the sulfur in thiophene and HO-La(OSiAl.

  3. [Optimization of ecological footprint model based on environmental pollution accounts: a case study in Pearl River Delta urban agglomeration].

    Science.gov (United States)

    Bai, Yu; Zeng, Hui; Wei, Jian-bing; Zhang, Wen-juan; Zhao, Hong-wei

    2008-08-01

    To address the omission of environmental pollution from the accounts of the traditional ecological footprint model, this paper put forward an optimized ecological footprint (EF) model that takes the pollution footprint into account. The calculation of environmental capacity was likewise added to the ecological capacity accounts, and the optimized model was then used for an ecological assessment of the Pearl River Delta urban agglomeration in 2005. The results showed close agreement between the ecological footprint and the region's development characteristics and spatial pattern, and illustrated that the optimized EF model could better capture environmental pollution within the system and more fully explain the environmental effects of human activity. The optimized ecological footprint model had better completeness and objectivity than traditional models.
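    The optimization described here amounts to adding a pollution-absorption area to the classical consumption-based footprint. A minimal sketch follows, in which all category names, yields, absorption capacities, and equivalence factors are hypothetical illustration values, not figures from the study:

```python
def ecological_footprint(consumption, yields, eq_factors,
                         emissions=None, absorption=None, eq_pollution=1.0):
    """Per-capita ecological footprint in global hectares (gha).

    consumption: {category: annual consumption, tonnes}
    yields:      {category: world-average yield, tonnes/ha}
    eq_factors:  {category: equivalence factor, gha/ha}
    emissions / absorption: optional pollution account -
        tonnes of each pollutant emitted, and tonnes one hectare
        can absorb per year (converted with eq_pollution).
    """
    # Classical biologically productive area demanded by consumption
    ef = sum(consumption[c] / yields[c] * eq_factors[c] for c in consumption)
    # Optimized model: add area needed to absorb pollutant emissions
    if emissions is not None:
        ef += sum(emissions[p] / absorption[p] for p in emissions) * eq_pollution
    return ef

# Hypothetical per-capita numbers
consumption = {"grain": 0.4}          # t/yr
yields_ = {"grain": 2.0}              # t/ha
eq = {"grain": 2.5}                   # gha/ha
emissions = {"SO2": 0.02}             # t/yr
absorption = {"SO2": 0.04}            # t/(ha*yr)

ef_bio = ecological_footprint(consumption, yields_, eq)
ef_total = ecological_footprint(consumption, yields_, eq, emissions, absorption)
```

    Here the pollution account adds 0.5 gha to the 0.5 gha consumption footprint, doubling the total; ignoring it, as the traditional model does, would understate the footprint accordingly.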

  4. Unbiased Rare Event Sampling in Spatial Stochastic Systems Biology Models Using a Weighted Ensemble of Trajectories.

    Directory of Open Access Journals (Sweden)

    Rory M Donovan

    2016-02-01

    The long-term goal of connecting scales in biological simulation can be facilitated by scale-agnostic methods. We demonstrate that the weighted ensemble (WE) strategy, initially developed for molecular simulations, applies effectively to spatially resolved cell-scale simulations. The WE approach runs an ensemble of parallel trajectories with assigned weights and uses a statistical resampling strategy of replicating and pruning trajectories to focus computational effort on difficult-to-sample regions. The method can also generate unbiased estimates of non-equilibrium and equilibrium observables, sometimes with significantly less aggregate computing time than would be possible using standard parallelization. Here, we use WE to orchestrate particle-based kinetic Monte Carlo simulations, which include spatial geometry (e.g., of organelles and the plasma membrane) and biochemical interactions among mobile molecular species. We study a series of models exhibiting spatial, temporal, and biochemical complexity and show that although WE has important limitations, it can achieve performance significantly exceeding that of standard parallel simulation, by orders of magnitude for some observables.
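    The replicate-and-prune resampling at the heart of WE can be sketched for a single bin: heavy trajectories are split and light ones merged until a target count is reached, with total weight conserved exactly and merge survivors chosen in proportion to weight so that estimates remain unbiased. This is a generic sketch of the WE idea, not the authors' code:

```python
import random

def we_resample(walkers, m_target, rng=random):
    """One weighted-ensemble resampling step within a single bin.

    walkers: list of (state, weight) pairs. Splits the heaviest
    walkers and merges the lightest pairs until exactly m_target
    walkers remain. Total weight is conserved; a merged walker's
    surviving state is drawn in proportion to weight (unbiased).
    """
    walkers = list(walkers)
    # Merge while there are too many walkers: combine the two lightest.
    while len(walkers) > m_target:
        walkers.sort(key=lambda w: w[1])
        (s1, w1), (s2, w2) = walkers[0], walkers[1]
        keep = s1 if rng.random() < w1 / (w1 + w2) else s2
        walkers = [(keep, w1 + w2)] + walkers[2:]
    # Split while there are too few: halve the heaviest walker's weight.
    while len(walkers) < m_target:
        walkers.sort(key=lambda w: -w[1])
        s, w = walkers[0]
        walkers = [(s, w / 2), (s, w / 2)] + walkers[1:]
    return walkers

random.seed(0)
ensemble = list(enumerate([0.5, 0.25, 0.125, 0.0625, 0.0625]))
resampled = we_resample(ensemble, 3)
total = sum(w for _, w in resampled)
```

    In a full WE run this step is applied per bin after every fixed-length dynamics segment, so rare-region bins keep a fixed number of low-weight walkers instead of going empty.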

  5. Closing the Gaps: Taking into Account the Effects of Heat Stress and Fatigue Modeling in an Operational Analysis

    NARCIS (Netherlands)

    Woodill, G.; Barbier, R.R.; Fiamingo, C.

    2010-01-01

    Traditional, combat model-based analysis of Dismounted Combatant Operations (DCO) has focused on the 'lethal' aspects of an engagement and, to a limited extent, the environment in which the engagement takes place. These are, however, only two of the factors that should be taken into account when

  6. Accounting for Test Variability through Sizing Local Domains in Sequential Design Optimization with Concurrent Calibration-Based Model Validation

    Science.gov (United States)

    2013-08-01

    Proceedings of IDETC/CIE 2013, ASME 2013 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference...in Sequential Design Optimization with Concurrent Calibration-Based Model Validation. Dorin Drignei, Mathematics and Statistics Department...insufficient to achieve the desired validity level. In this paper, we introduce a technique to determine the number of tests required to account for their

  7. Structural equation models using partial least squares: an example of the application of SmartPLS® in accounting research

    Directory of Open Access Journals (Sweden)

    João Carlos Hipólito Bernardes do Nascimento

    2016-08-01

    In view of the Accounting academy's increasing interest in the investigation of latent phenomena, researchers have used robust multivariate techniques. Although Structural Equation Models are frequently used in the international literature, the Accounting academy has made little use of the variant based on Partial Least Squares (PLS-SEM), mostly due to lack of knowledge of the applicability and benefits of its use for Accounting research. Although the PLS-SEM approach is regularly used in surveys, the method is also appropriate for modeling complex relations with multiple relationships of dependence and independence between latent variables, which makes it very useful for application to experiments and archival data. Accordingly, a literature review is presented of Accounting studies that used the PLS-SEM technique. Next, as no specific publications were observed that exemplify the application of the technique in Accounting, a PLS-SEM application is developed to encourage exploratory research by means of the software SmartPLS®, being particularly useful to graduate students. The main contribution of this article is therefore methodological, given its objective of clearly identifying guidelines for the appropriate use of PLS. By presenting an example of how to conduct exploratory research using PLS-SEM, the intention is to contribute to researchers' enhanced understanding of how to use and report on the technique in their research.

  8. Accounting Department Chairpersons' Perceptions of Business School Performance Using a Market Orientation Model

    Science.gov (United States)

    Webster, Robert L.; Hammond, Kevin L.; Rothwell, James C.

    2013-01-01

    This manuscript is part of a stream of continuing research examining market orientation within higher education and its potential impact on organizational performance. The organizations researched are business schools and the data collected came from chairpersons of accounting departments of AACSB member business schools. We use a reworded Narver…

  9. 76 FR 29249 - Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications

    Science.gov (United States)

    2011-05-20

    ... application process and selection criteria are described in Section IV of the Request for Applications but in... suppliers with a mechanism for shared governance that have formed an Accountable Care Organization (ACO..., leadership, and commitment to outcomes-based contracts with non- Medicare purchasers. Final selection will be...

  10. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology.

    Science.gov (United States)

    Tompkins, Adrian M; Ermert, Volker

    2013-02-18

    The relative roles of climate variability and population related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Model simulations of entomological inoculation rate and circumsporozoite protein rate compare well to data from field studies from a wide range of locations in West Africa that encompass both seasonal endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine spatial resolution regional integrations for Eastern Africa reproduce the malaria atlas project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model grossly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities probably due to the neglect of population migration. A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions.
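The temperature dependence of the parasite life cycle described above is often handled with a thermal-time (degree-day) scheme. The sketch below illustrates that idea only; the threshold of 16 °C and the 111 degree-day requirement are commonly cited Detinova-style values for P. falciparum, used here for illustration, not the calibrated constants of this model.

```python
# Degree-day sketch of sporogonic development: the parasite completes its
# cycle in the mosquito once enough thermal time accumulates above a
# development threshold. Values are illustrative, not the paper's.

def sporogony_days(daily_mean_temps, t_min=16.0, dd_required=111.0):
    """Days until the parasite completes development, or None if it never does."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(0.0, temp - t_min)  # only heat above the threshold counts
        if accumulated >= dd_required:
            return day
    return None
```

At a constant 26 °C the cycle completes in about 12 days, while below the threshold it never completes, which is why finely resolved temperature matters for the delay between the rains and the malaria season.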

  11. Translational research of novel hormones: lessons from animal models and rare human diseases for common human diseases.

    Science.gov (United States)

    Nakao, Kazuwa; Yasoda, Akihiro; Ebihara, Ken; Hosoda, Kiminori; Mukoyama, Masashi

    2009-10-01

    Since the 1980s, a number of bioactive molecules, now known as cardiovascular hormones, have been isolated from the heart and blood vessels, particularly from the subset of vascular endothelial cells. The natriuretic peptide family is the prototype of the cardiovascular hormones. Over the following decade, a variety of hormones and cytokines, now known as adipokines or adipocytokines, have also been isolated from adipose tissue. Leptin is the only adipokine demonstrated to cause an obese phenotype in both animals and humans upon deletion. Thus, the past two decades have seen the identification of two important classes of bioactive molecules secreted by newly recognized endocrine cells, both of which differentiate from mesenchymal stem cells. To assess the physiological and clinical implications of these novel hormones, we have investigated their functions using animal models. We have also developed and analyzed mice overexpressing transgenic forms of these proteins and knockout mice deficient in these and related genes. Here, we demonstrate the current state of the translational research of these novel hormones, the natriuretic peptide family and leptin, and discuss how lessons learned from excellent animal models and rare human diseases can provide a better understanding of common human diseases.

  12. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    Science.gov (United States)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae to connect the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large scale underreporting of the disease must occur.
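The recurrent-outbreak mechanism behind a Kermack-McKendrick-like model can be sketched numerically: with a constant inflow of susceptibles (births into an unvaccinated community), a simple SIR system produces repeated epidemics rather than a single one. All rates and initial conditions below are invented for illustration, not the paper's fitted values.

```python
# Minimal SIR sketch with vital dynamics: a constant birth inflow of
# susceptibles lets outbreaks recur instead of dying out after one epidemic.
# Parameters (per year) and initial conditions are illustrative assumptions.

def simulate_sir(beta=300.0, gamma=52.0, mu=0.04, years=60, steps_per_year=2000):
    """Forward-Euler integration of
       dS/dt = mu - beta*S*I - mu*S
       dI/dt = beta*S*I - (gamma + mu)*I
    where S and I are population fractions."""
    dt = 1.0 / steps_per_year
    s, i = 0.05, 1e-4
    trajectory = []
    for step in range(years * steps_per_year):
        ds = mu - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        s, i = s + ds * dt, i + di * dt
        trajectory.append((step * dt, s, i))
    return trajectory

traj = simulate_sir()
# Outbreak peaks = local maxima of the infected fraction above a threshold.
peaks = [b[0] for a, b, c in zip(traj, traj[1:], traj[2:])
         if b[2] > a[2] and b[2] > c[2] and b[2] > 1e-5]
```

With these invented rates the run produces several well-separated peaks; matching the twelve-year period and the reported outbreak sizes would require calibrating beta, gamma, and mu to the Dutch data.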

  13. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    Science.gov (United States)

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…

  14. Study on the rare radiative decay Bc→Ds*γ in the standard model and multiscale walking technicolor model

    International Nuclear Information System (INIS)

    Lu, G.; Yue, C.; Cao, Y.; Xiong, Z.; Xiao, Z.

    1996-01-01

Applying the perturbative QCD method, we study the decay Bc→Ds*γ in the standard model (SM) and the multiscale walking technicolor model (MWTCM). In the SM, we find that the contribution of weak annihilation is more important than that of the electromagnetic penguin diagram. The presence of pseudo Goldstone bosons in the MWTCM leads to a large enhancement in the rate of Bc→Ds*γ, but this model is in conflict with the branching ratio of Z→bb̄ (Rb) and the CLEO data on the branching ratio B(b→sγ). If top-color is further introduced, the calculated results in the top-color-assisted MWTCM can be suppressed and brought into agreement with the CLEO data for a certain range of parameters. copyright 1996 The American Physical Society

  15. SYMPOSIUM: Rare decays

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

Late last year, a symposium entitled 'Rare Decays' attracted 115 participants to a hotel in Vancouver, Canada. These participants were particle physicists interested in checking conventional selection rules to look for clues of possible new behaviour outside today's accepted 'Standard Model'. For physicists, 'rare decays' include processes that have so far not been seen because they are explicitly forbidden by the rules of the Standard Model, processes highly suppressed because the decay is dominated by an easier route, and processes resulting from multiple transitions

  16. Alternative biosphere modeling for safety assessment of HLW disposal taking account of geosphere-biosphere interface of marine environment

    International Nuclear Information System (INIS)

    Kato, Tomoko; Ishiguro, Katsuhiko; Naito, Morimasa; Ikeda, Takao; Little, Richard

    2001-03-01

In the safety assessment of a high-level radioactive waste (HLW) disposal system, it is required to estimate the radiological impacts on future human beings arising from potential radionuclide releases from a deep repository into the surface environment. In order to estimate these impacts, a biosphere model is developed by reasonably assuming radionuclide migration processes in the surface environment and relevant human lifestyles. It is important to modify the present biosphere models, or to develop alternative ones, according to the quality and quantity of the information acquired through the siting process for constructing the repository. In this study, alternative biosphere models were developed taking the geosphere-biosphere interface of the marine environment into account. Moreover, the flux-to-dose conversion factors calculated by these alternative biosphere models were compared with those from the present basic biosphere models. (author)

  17. Novel 3D geometry and models of the lower regions of large trees for use in carbon accounting of primary forests.

    Science.gov (United States)

    Dean, Christopher; Kirkpatrick, Jamie B; Osborn, Jon; Doyle, Richard B; Fitzgerald, Nicholas B; Roxburgh, Stephen H

    2018-03-01

There is high uncertainty in the contribution of land-use change to anthropogenic climate change, especially pertaining to below-ground carbon loss resulting from conversion of primary to secondary forest. Soil organic carbon (SOC) and coarse roots are concentrated close to tree trunks, a region usually unmeasured during soil carbon sampling. Soil carbon estimates and their variation with land-use change have not been correspondingly adjusted. Our aim was to deduce allometric equations that will allow improvement of SOC estimates and tree trunk carbon estimates, for primary forest stands that include large trees in rugged terrain. Terrestrial digital photography, photogrammetry and GIS software were used to produce 3D models of the buttresses, roots and humus mounds of large trees in primary forests dominated by Eucalyptus regnans in Tasmania. Models of 29 in situ eucalypts were made and analysed. 3D models of example eucalypt roots, logging debris, rainforest tree species, fallen trees, branches, root and trunk slices, and soil profiles were also derived. Measurements in 2D, from earlier work, of three buttress 'logs' were added to the data set. The 3D models had high spatial resolution. The modelling allowed checking and correction of field measurements. Tree anatomical detail was formulated, such as buttress shape, humus volume, root volume in the under-sampled zone and trunk hollow area. The allometric relationships developed link diameter at breast height and ground slope to SOC and tree trunk carbon, the latter including a correction for senescence. These formulae can be applied to stand-level carbon accounting. The formulae allow the typically measured, inter-tree SOC to be corrected for not sampling near large trees. The 3D models developed are irreplaceable, being for increasingly rare, large trees, and they could be useful to other scientific endeavours.

  18. Efficient modeling of sun/shade canopy radiation dynamics explicitly accounting for scattering

    Science.gov (United States)

    Bodin, P.; Franklin, O.

    2012-04-01

The separation of global radiation (Rg) into its direct (Rb) and diffuse (Rd) constituents is important when modeling plant photosynthesis because a high Rd:Rg ratio has been shown to enhance Gross Primary Production (GPP). To include this effect in vegetation models, the plant canopy must be separated into sunlit and shaded leaves. However, because such models are often too intractable and computationally expensive for theoretical or large scale studies, simpler sun-shade approaches are often preferred. A widely used and computationally efficient sun-shade model was developed by Goudriaan (1977) (GOU). However, compared to more complex models, this model's realism is limited by its lack of explicit treatment of radiation scattering. Here we present a new model based on the GOU model, but which in contrast explicitly simulates radiation scattering by sunlit leaves and the absorption of this radiation by the canopy layers above and below (2-stream approach). Compared to the GOU model our model predicts significantly different profiles of scattered radiation that are in better agreement with measured profiles of downwelling diffuse radiation. With respect to these data our model's performance is equal to a more complex and much slower iterative radiation model while maintaining the simplicity and computational efficiency of the GOU model.
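The sunlit/shaded separation that both the GOU model and the new model start from can be illustrated with Beer's-law attenuation of the direct beam. This is a sketch only: the extinction coefficient is an invented illustrative value, and scattering, the paper's actual contribution, is deliberately omitted.

```python
import math

# Beer's-law sketch of the sun/shade canopy split used by Goudriaan-type
# models: the direct beam decays with cumulative leaf area index L, so the
# sunlit leaf fraction at depth L is exp(-k_b * L). The extinction
# coefficient k_b is an illustrative assumption; scattering is omitted.

def sunlit_fraction(lai_above, k_b=0.5):
    """Fraction of leaves still in direct sunlight below leaf area lai_above."""
    return math.exp(-k_b * lai_above)

def canopy_sunlit_lai(total_lai, k_b=0.5):
    """Total sunlit leaf area: integral of exp(-k_b * L) dL from 0 to total_lai."""
    return (1.0 - math.exp(-k_b * total_lai)) / k_b
```

For a closed canopy the sunlit leaf area saturates at 1/k_b regardless of total LAI, which is why dense canopies are dominated by shaded leaves lit mainly by diffuse and scattered radiation.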

  19. Internet accounting

    NARCIS (Netherlands)

    Pras, Aiko; van Beijnum, Bernhard J.F.; Sprenkels, Ron; Parhonyi, R.

    2001-01-01

    This article provides an introduction to Internet accounting and discusses the status of related work within the IETF and IRTF, as well as certain research projects. Internet accounting is different from accounting in POTS. To understand Internet accounting, it is important to answer questions like

  20. Factors accounting for youth suicide attempt in Hong Kong: a model building.

    Science.gov (United States)

    Wan, Gloria W Y; Leung, Patrick W L

    2010-10-01

This study aimed at proposing and testing a conceptual model of youth suicide attempt. We proposed a model that began with family factors such as a history of physical abuse and parental divorce/separation. Family relationship, presence of psychopathology, life stressors, and suicide ideation were postulated as mediators, leading to youth suicide attempt. The stepwise entry of the risk factors into a logistic regression model defined their proximity in relation to suicide attempt. Path analysis further refined our proposed model of youth suicide attempt. Our originally proposed model was largely confirmed. The main revision was dropping parental divorce/separation as a risk factor in the model due to its lack of significant contribution when examined alongside the other risk factors. This model was cross-validated by gender. This study moved research on youth suicide from identification of individual risk factors to model building, integrating separate findings of past studies.

  1. Afghanistan and rare earths

    Directory of Open Access Journals (Sweden)

    Emilian M. Dobrescu

    2013-05-01

Full Text Available On our planet, over a quarter of new technologies for the economic production of industrial goods use rare earths, which are also called critical minerals; the industries that rely on these precious elements are worth an estimated five trillion dollars, or 5 percent of world gross domestic product. In the near future, competition will increase for the control of rare earth minerals embedded in high-tech products. Rare minerals are to the twenty-first century what oil was to the twentieth century and coal to the nineteenth century: the engine of a new industrial revolution. Future energy will increasingly be produced by more sophisticated technological equipment based not just on steel and concrete, but incorporating significant quantities of metals and rare earths. Widespread application of these technologies will result in an exponential increase in demand for such minerals, and what is worrying is that minerals of this type are almost nowhere to be found in Europe or in other industrialized countries of the world, such as the U.S. and Japan, but only in some Asian countries, like China and Afghanistan.

  2. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...

  3. Accounting for the influence of the Earth's sphericity in three-dimensional density modelling

    Science.gov (United States)

    Martyshko, P. S.; Byzov, D. D.; Chernoskutov, A. I.

    2017-11-01

    A method for transformation of the three-dimensional regional "flat" density models of the Earth's crust and upper mantle to the "spherical" models and vice versa is proposed. A computation algorithm and a method of meaningful comparison of the vertical component of the gravity field of both models are presented.

  4. Accounting for imperfect forward modeling in geophysical inverse problems — Exemplified for crosshole tomography

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Holm Jacobsen, Bo

    2014-01-01

    of the modeling error was inferred in the form of a correlated Gaussian probability distribution. The key to the method was the ability to generate many realizations from a statistical description of the source of the modeling error, which in this case is the a priori model. The methodology was tested for two...

  5. Cost accounting in radiation oncology: a computer-based model for reimbursement.

    Science.gov (United States)

    Perez, C A; Kobeissi, B; Smith, B D; Fox, S; Grigsby, P W; Purdy, J A; Procter, H D; Wasserman, T H

    1993-04-02

The skyrocketing cost of medical care in the United States has resulted in multiple efforts at cost containment. The present work offers a rational computer-based cost accounting approach to determine the actual use of resources in providing a specific service in a radiation oncology center. A procedure-level cost accounting system was developed by using recorded information on actual time and effort spent by individual staff members performing various radiation oncology procedures, and analyzing direct and indirect costs related to staffing (labor), facilities and equipment, supplies, etc. Expenditures were classified as direct or indirect and fixed or variable. A relative value unit was generated to allocate specific cost factors to each procedure. Different costs per procedure were identified according to complexity. Whereas there was no significant difference in the treatment time between low-energy (4 and 6 MV) and high-energy (18 MV) accelerators, there were significantly higher costs identified in the operation of a high-energy linear accelerator, a reflection of initial equipment investment, quality assurance and calibration procedures, maintenance costs, service contract, and replacement parts. Utilization of resources was related to the complexity of the procedures performed and whether the treatments were delivered to inpatients or outpatients. In analyzing time-motion data for physicians and other staff, it was apparent that a greater effort must be made to train the staff to accurately record all times involved in a given procedure, and it is strongly recommended that each institution perform its own time-motion studies to more accurately determine operating costs. Sixty-six percent of our facility's global costs were for labor, 20% for other operating expenses, 10% for space, and 4% for equipment. Significant differences were noted in the cost allocation for professional or technical functions, as labor, space, and equipment costs are higher in the latter
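The relative-value-unit allocation described above can be sketched as follows. The procedure names, RVU weights, cost pool, and dollar figures are invented for illustration and do not come from the study.

```python
# Hypothetical sketch of procedure-level cost accounting with relative value
# units (RVUs): the indirect cost pool is spread over all RVUs delivered, so
# each procedure carries indirect cost in proportion to its complexity.
# All names and numbers are invented.

def cost_per_procedure(procedures, indirect_pool):
    """procedures: {name: {"direct": $/procedure, "rvu": weight, "volume": count}}
    Returns the fully loaded cost of performing each procedure once."""
    total_rvus = sum(p["rvu"] * p["volume"] for p in procedures.values())
    rate = indirect_pool / total_rvus  # indirect dollars per RVU delivered
    return {name: p["direct"] + p["rvu"] * rate
            for name, p in procedures.items()}

procedures = {
    "simple_treatment":  {"direct": 100.0, "rvu": 1.0, "volume": 10},
    "complex_treatment": {"direct": 250.0, "rvu": 2.5, "volume": 4},
}
costs = cost_per_procedure(procedures, indirect_pool=2000.0)
# costs["simple_treatment"] -> 200.0, costs["complex_treatment"] -> 500.0
```

The same structure lets fixed costs (space, equipment, service contracts) be split into separate pools, each allocated with its own per-RVU rate.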

  6. Development of a coal shrinkage-swelling model accounting for water content in the micropores

    Energy Technology Data Exchange (ETDEWEB)

    Prob Thararoop; Zuleima T. Karpyn; Turgay Ertekin [Pennsylvania State University, University Park, PA (United States). Petroleum and Natural Gas Engineering

    2009-07-01

Changes in cleat permeability of coal seams are influenced by internal stress and by the release or adsorption of gas in the coal matrix during production/injection processes. Coal shrinkage-swelling models have been proposed to quantify such changes; however, none of the existing models incorporates the effect of the presence of water in the micropores on the gas sorption of coalbeds. This paper proposes a model of coal shrinkage and swelling that incorporates the effect of water in the micropores. The proposed model was validated using field permeability data from San Juan basin coalbeds and compared with coal shrinkage and swelling models existing in the literature.

  7. THE CURRENT ACCOUNT DEFICIT AND THE FIXED EXCHANGE RATE. ADJUSTING MECHANISMS AND MODELS.

    Directory of Open Access Journals (Sweden)

    HATEGAN D.B. Anca

    2010-07-01

Full Text Available The main purpose of this paper is to explain what measures can be taken in order to correct the trade deficit, and the pressure that imposing such measures places upon a country. International and national supply and demand conditions change rapidly, and if a country does not succeed in keeping tight control over its deficit, many factors will affect its wellbeing. In order to reduce the external trade deficit, the government needs to resort to several techniques. The desired result is a balanced current account, and therefore the government is free to use measures such as fixing its exchange rate, reducing government spending, etc. We have shown that each of these measures has a certain impact upon the economy, by allowing exports to thrive and eliminating the danger of excessive imports, or vice versa. The main conclusion of our paper is that government intervention is permitted in order to maintain the balance of the current account.

  8. Nonlinear analysis of a new car-following model accounting for the global average optimal velocity difference

    Science.gov (United States)

    Peng, Guanghan; Lu, Weizhen; He, Hongdi

    2016-09-01

In this paper, a new car-following model is proposed by considering the global average optimal velocity difference effect on the basis of the full velocity difference (FVD) model. We investigate the influence of the global average optimal velocity difference on the stability of traffic flow by making use of linear stability analysis. It indicates that the stable region will be enlarged by taking the global average optimal velocity difference effect into account. Subsequently, the mKdV equation near the critical point and its kink-antikink soliton solution, which can describe the traffic jam transition, is derived from nonlinear analysis. Furthermore, numerical simulations confirm that the effect of the global average optimal velocity difference can efficiently improve the stability of traffic flow, which shows that the new term should be taken into account to suppress traffic congestion in car-following theory.
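A minimal sketch of how a global-average term can enter an FVD-type acceleration law is given below. The exact functional form in the paper may differ; the optimal-velocity function is a standard tanh choice and all parameters are illustrative assumptions.

```python
import math

# Illustrative FVD-type car-following sketch extended with a global average
# velocity-difference term. V() is a standard optimal-velocity function;
# kappa, lam, mu and the headway scales are invented, not the paper's values.

def V(headway, v_max=30.0, h_c=25.0):
    """Optimal (desired) velocity as a function of headway, in m/s."""
    return 0.5 * v_max * (math.tanh(0.1 * (headway - h_c)) + math.tanh(0.1 * h_c))

def accelerations(positions, velocities, kappa=0.4, lam=0.3, mu=0.1):
    """positions sorted ascending; returns accelerations of the n-1 followers,
    each following the vehicle directly ahead."""
    n = len(positions)
    headways = [positions[i + 1] - positions[i] for i in range(n - 1)]
    dv = [velocities[i + 1] - velocities[i] for i in range(n - 1)]
    avg_dv = sum(dv) / len(dv)  # global average velocity difference
    return [kappa * (V(headways[i]) - velocities[i]) + lam * dv[i] + mu * avg_dv
            for i in range(n - 1)]
```

A uniform platoon travelling at the optimal velocity for its headway is an equilibrium of this law: every acceleration term vanishes, which is the state whose stability the linear analysis examines.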

  9. Accounting for spatial effects in land use regression for urban air pollution modeling.

    Science.gov (United States)

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty, and statistical methods for resolving these effects (e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models) may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R(2) values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
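The core GWR idea, local regression with distance-decaying kernel weights, can be sketched with the standard library alone. The Gaussian kernel, bandwidth, single predictor, and synthetic data below are illustrative assumptions; the wind covariate of the GWR-wind models would simply add another column.

```python
import math

# Sketch of geographically weighted regression (GWR): at each target site a
# weighted least-squares line is fitted, with Gaussian kernel weights that
# decay with distance, so the coefficients vary over space. One predictor
# for brevity; kernel, bandwidth, and data are illustrative.

def gwr_fit(sites, x, y, target, bandwidth=1.0):
    """Return (local intercept, local slope) at the target location."""
    w = [math.exp(-((sx - target[0])**2 + (sy - target[1])**2) / (2 * bandwidth**2))
         for sx, sy in sites]
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted means
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xm)**2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return ym - slope * xm, slope
```

On synthetic data whose true slope grows with location, the locally fitted slope grows with it, which is exactly the non-stationarity that a single global LUR coefficient cannot represent.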

  10. JOMAR - A model for accounting the environmental loads from building constructions

    Energy Technology Data Exchange (ETDEWEB)

    Roenning, Anne; Nereng, Guro; Vold, Mie; Bjoerberg, Svein; Lassen, Niels

    2008-07-01

The objective of this project was to develop a model as a basis for calculating the environmental profile of whole building constructions, based upon data from databases and general LCA software, in addition to the model structure from the Nordic project on LCC assessment of buildings. The model has been tested on three building constructions: timber based, flexible and heavy, and heavy. Total energy consumption and emissions contributing to climate change are calculated in a total life-cycle perspective. The developed model and the exemplifying case assessments have shown that a holistic model including the operation phase is both important and possible to implement. The project has shown that the operation phase causes the highest environmental loads for the exemplified impact categories. A suggestion for further development of the model along two different axes, in collaboration with a broader representation from the building sector, is given in the report. (author)

  11. An extended two-lane car-following model accounting for inter-vehicle communication

    Science.gov (United States)

    Ou, Hui; Tang, Tie-Qiao

    2018-04-01

    In this paper, we develop a novel car-following model with inter-vehicle communication to explore each vehicle's movement in a two-lane traffic system when an incident occurs on a lane. The numerical results show that the proposed model can perfectly describe each vehicle's motion when an incident occurs, i.e., no collision occurs while the classical full velocity difference (FVD) model produces collision on each lane, which shows the proposed model is more reasonable. The above results can help drivers to reasonably adjust their driving behaviors when an incident occurs in a two-lane traffic system.

  12. Reconstruction of Arabidopsis metabolic network models accounting for subcellular compartmentalization and tissue-specificity.

    Science.gov (United States)

    Mintz-Oron, Shira; Meir, Sagit; Malitsky, Sergey; Ruppin, Eytan; Aharoni, Asaph; Shlomi, Tomer

    2012-01-03

    Plant metabolic engineering is commonly used in the production of functional foods and quality trait improvement. However, to date, computational model-based approaches have only been scarcely used in this important endeavor, in marked contrast to their prominent success in microbial metabolic engineering. In this study we present a computational pipeline for the reconstruction of fully compartmentalized tissue-specific models of Arabidopsis thaliana on a genome scale. This reconstruction involves automatic extraction of known biochemical reactions in Arabidopsis for both primary and secondary metabolism, automatic gap-filling, and the implementation of methods for determining subcellular localization and tissue assignment of enzymes. The reconstructed tissue models are amenable for constraint-based modeling analysis, and significantly extend upon previous model reconstructions. A set of computational validations (i.e., cross-validation tests, simulations of known metabolic functionalities) and experimental validations (comparison with experimental metabolomics datasets under various compartments and tissues) strongly testify to the predictive ability of the models. The utility of the derived models was demonstrated in the prediction of measured fluxes in metabolically engineered seed strains and the design of genetic manipulations that are expected to increase vitamin E content, a significant nutrient for human health. Overall, the reconstructed tissue models are expected to lay down the foundations for computational-based rational design of plant metabolic engineering. The reconstructed compartmentalized Arabidopsis tissue models are MIRIAM-compliant and are available upon request.

  13. A Social Audit Model for Agro-biotechnology Initiatives in Developing Countries: Accounting for Ethical, Social, Cultural, and Commercialization Issues

    Directory of Open Access Journals (Sweden)

    Obidimma Ezezika

    2009-10-01

Full Text Available There is skepticism about and resistance to innovations associated with agro-biotechnology projects, leading to the possibility of failure. The source of the skepticism is complex, but partly traceable to how local communities view genetically engineered crops, public perception of the technology’s implications, and views on the role of the private sector in public health and agriculture, especially in the developing world. We posit that a governance and management model in which ethical, social, cultural, and commercialization issues are accounted for and addressed is important in mitigating the risk of project failure and improving the appropriate adoption of agro-biotechnology in sub-Saharan Africa. We introduce a social audit model, which we term Ethical, Social, Cultural and Commercialization (ESC2) auditing, and which we developed based on feedback from a number of stakeholders. We lay the foundation for its importance in agro-biotechnology development projects and show how the model can be applied to projects run by Public Private Partnerships. We argue that implementation of the audit model can help to build public trust by facilitating project accountability and transparency. The model also provides evidence on how ESC2 issues are perceived by various stakeholders, which enables project managers to effectively monitor and improve project performance. Although this model was specifically designed for agro-biotechnology initiatives, we show how it can also be applied to other development projects.

  14. Rare decays at LHCb

    CERN Document Server

    Adrover Pacheco, Cosme

    2012-01-01

    Rare decays are excellent tests to infer the presence of physics beyond the Standard Model (BSM), as they occur through processes prohibited at tree level in the SM. Any deviation from the SM prediction in branching fraction or angular distributions of such decays can lead to indications of new physics.

  15. A constitutive model accounting for strain ageing effects on work-hardening. Application to a C-Mn steel

    Science.gov (United States)

    Ren, Sicong; Mazière, Matthieu; Forest, Samuel; Morgeneyer, Thilo F.; Rousselier, Gilles

    2017-12-01

    One of the most successful models for describing the Portevin-Le Chatelier effect in engineering applications is the Kubin-Estrin-McCormick model (KEMC). In the present work, the influence of dynamic strain ageing on dynamic recovery due to dislocation annihilation is introduced in order to improve the KEMC model. This modification accounts for additional strain hardening rate due to limited dislocation annihilation by the diffusion of solute atoms and dislocation pinning at low strain rate and/or high temperature. The parameters associated with this novel formulation are identified based on tensile tests for a C-Mn steel at seven temperatures ranging from 20 °C to 350 °C. The validity of the model and the improvement compared to existing models are tested using 2D and 3D finite element simulations of the Portevin-Le Chatelier effect in tension.

  16. Accounting for sex differences in PTSD: A multi-variable mediation model

    DEFF Research Database (Denmark)

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

ABSTRACT Background: Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used ... methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually ... and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N = 450) and 6 months after the robbery (T2, N = 368; 61.1% females). Mediation was examined using an analysis designed

  17. Taking individual scaling differences into account by analyzing profile data with the Mixed Assessor Model

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Schlich, Pascal; Skovgaard, Ib

    2015-01-01

    Scale range differences between individual assessors will often constitute a non-trivial part of the assessor-by-product interaction in sensory profile data (Brockhoff, 2003, 1998; Brockhoff and Skovgaard, 1994). We suggest a new mixed model ANOVA analysis approach, the Mixed Assessor Model (MAM...

  18. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to higher amount of traffic related activity on a global scale. ...

  19. Assessing and accounting for time heterogeneity in stochastic actor oriented models

    NARCIS (Netherlands)

    Lospinoso, Joshua A.; Schweinberger, Michael; Snijders, Tom A. B.; Ripley, Ruth M.

    This paper explores time heterogeneity in stochastic actor oriented models (SAOM) proposed by Snijders (Sociological methodology. Blackwell, Boston, pp 361-395, 2001) which are meant to study the evolution of networks. SAOMs model social networks as directed graphs with nodes representing people,

  20. An individual-based model of Zebrafish population dynamics accounting for energy dynamics

    DEFF Research Database (Denmark)

    Beaudouin, Remy; Goussen, Benoit; Piccini, Benjamin

    2015-01-01

    Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget for individual zebrafish (DEB model) w...

  1. Bioeconomic Modelling of Wetlands and Waterfowl in Western Canada: Accounting for Amenity Values

    NARCIS (Netherlands)

    Kooten, van G.C.; Whitey, P.; Wong, L.

    2011-01-01

    This study reexamines and updates an original bioeconomic model of optimal duck harvest and wetland retention by Hammack and Brown (1974, Waterfowl and Wetlands: Toward Bioeconomic Analysis. Washington, DC: Resources for the Future). It then extends the model to include the nonmarket (in situ) value

  2. Accounting for correlated observations in an age-based state-space stock assessment model

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders

    2016-01-01

    Fish stock assessment models often rely on size- or age-specific observations that are assumed to be statistically independent of each other. In reality, these observations are not raw observations, but rather they are estimates from a catch-standardization model or similar summary statistics based…

  3. Rare muon processes: Experiment

    International Nuclear Information System (INIS)

    Walter, H.K.

    1998-01-01

    The decay properties of muons, especially their rare decays, can be used to study very accurately deviations from the Standard Model. Muons with extremely low energies and good spatial definition are preferred for the majority of such studies. With the upgrade of the 590-MeV ring accelerator, PSI possesses the most powerful cyclotron in the world. This makes it possible to operate high-intensity beams of secondary pions and muons. A short review on rare muon processes is presented, concerning μ-e conversion and muonium-antimuonium oscillations. A possible new search for μ→eγ is also mentioned

  4. A Nonlinear Transmission Line Model of the Cochlea With Temporal Integration Accounts for Duration Effects in Threshold Fine Structure

    DEFF Research Database (Denmark)

    Verhey, Jesko L.; Mauermann, Manfred; Epp, Bastian

    2017-01-01

    For normal-hearing listeners, auditory pure-tone thresholds in quiet often show quasi-periodic fluctuations when measured with a high frequency resolution, referred to as threshold fine structure. Threshold fine structure is dependent on the stimulus duration, with smaller fluctuations for short … than for long signals. The present study demonstrates how this effect can be captured by a nonlinear and active model of the cochlea in combination with a temporal integration stage. Since this cochlear model also accounts for fine structure and connected level-dependent effects, it is superior…

  5. An improved car-following model accounting for the preceding car's taillight

    Science.gov (United States)

    Zhang, Jian; Tang, Tie-Qiao; Yu, Shao-Wei

    2018-02-01

    During deceleration, the preceding car's taillight may influence the following car's driving behavior. In this paper, we propose an extended car-following model that takes the preceding car's taillight into account. Two typical situations are used to simulate each car's movement and to study the effects of the preceding car's taillight on driving behavior. A sensitivity analysis of the model parameter is also discussed in detail. The numerical results show that the proposed model improves the stability of traffic flow and enhances traffic safety without a loss of efficiency, especially when cars pass through a signalized intersection.
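
The braking-reaction idea above can be sketched as a toy car-following rule. This is an assumed full-velocity-difference form, not the model from the paper: the follower's sensitivity to the (negative) velocity difference increases while the leader's brake light is on, producing earlier, stronger deceleration. All coefficients (`kappa`, `lam`, `taillight_gain`) and the tanh-shaped optimal-velocity function are illustrative assumptions.

```python
import math

V_MAX, H_C = 30.0, 25.0   # desired speed (m/s) and safe headway (m); assumed values

def optimal_velocity(gap: float) -> float:
    """Standard tanh-shaped optimal-velocity function (assumed form)."""
    return (V_MAX / 2.0) * (math.tanh((gap - H_C) / 10.0) + math.tanh(H_C / 10.0))

def acceleration(v: float, leader_v: float, gap: float, taillight_on: bool,
                 kappa: float = 0.41, lam: float = 0.2,
                 taillight_gain: float = 0.4) -> float:
    """Follower acceleration: relaxation toward the optimal velocity plus a
    velocity-difference term whose weight rises while the leader brakes."""
    dv = leader_v - v
    sensitivity = lam + (taillight_gain if taillight_on else 0.0)
    return kappa * (optimal_velocity(gap) - v) + sensitivity * dv
```

With the leader slowing (`dv < 0`), an illuminated taillight makes the follower's acceleration strictly more negative, which is the stabilizing effect the abstract describes.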

  6. An agent-based simulation model of patient choice of health care providers in accountable care organizations.

    Science.gov (United States)

    Alibrahim, Abdullah; Wu, Shinyi

    2018-03-01

    Accountable care organizations (ACO) in the United States show promise in controlling health care costs while preserving patients' choice of providers. Understanding the effects of patient choice is critical in novel payment and delivery models like the ACO that depend on continuity of care and accountability. The financial, utilization, and behavioral implications associated with a patient's decision to forego local health care providers for more distant ones to access higher-quality care remain unknown. To study this question, we used an agent-based simulation model of a health care market composed of providers able to form ACOs serving patients, and embedded in it a conditional logit decision model of patients choosing their care providers. The simulation focuses on Medicare beneficiaries and their congestive heart failure (CHF) outcomes. We place the patient agents in an ACO delivery system model in which provider agents decide whether to remain in an ACO and whether to perform a quality-improving CHF disease management intervention. Illustrative results show that allowing patients to choose their providers reduces the yearly payment per CHF patient by $320, reduces mortality rates by 0.12 percentage points and hospitalization rates by 0.44 percentage points, and marginally increases provider participation in ACOs. This study demonstrates a model capable of quantifying the effects of patient choice in a theoretical ACO system and provides a potential tool for policymakers to understand the implications of patient choice and assess potential policy controls.
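
The conditional-logit choice step described above can be illustrated with a minimal sketch: each patient agent weighs travel distance against provider quality and picks a provider with softmax probability. The utility form and the coefficients `BETA_DIST` and `BETA_QUAL` are assumptions for illustration, not values from the published model.

```python
import math
import random

BETA_DIST = -0.08   # disutility per km traveled (assumed)
BETA_QUAL = 1.5     # utility weight on a 0-1 quality score (assumed)

def choice_probabilities(providers):
    """providers: list of (distance_km, quality) tuples -> logit probabilities."""
    utilities = [BETA_DIST * d + BETA_QUAL * q for d, q in providers]
    m = max(utilities)                      # stabilize the softmax numerically
    weights = [math.exp(u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def simulate_choice(providers, rng=None):
    """Draw one provider index according to the logit probabilities."""
    rng = rng or random.Random(0)
    return rng.choices(range(len(providers)),
                       weights=choice_probabilities(providers))[0]
```

For example, `choice_probabilities([(2, 0.5), (30, 0.9)])` shows how a distant but higher-quality provider still captures some share of patients, the trade-off the simulation studies.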

  7. Accounting for partial sleep deprivation and cumulative sleepiness in the Three-Process Model of alertness regulation.

    Science.gov (United States)

    Akerstedt, Torbjörn; Ingre, Michael; Kecklund, Göran; Folkard, Simon; Axelsson, John

    2008-04-01

    Mathematical models designed to predict alertness or performance have been developed primarily as tools for evaluating work and/or sleep-wake schedules that deviate from the traditional daytime orientation. In general, these models cope well with the acute changes resulting from abnormal sleep but have difficulties handling sleep restriction across longer periods. The reason is that the function representing recovery is too steep--usually exponentially so--and with increasing sleep loss the steepness increases, resulting in too rapid recovery. The present study focused on refining the Three-Process Model of alertness regulation. We used an experiment with 4 h of sleep/night (nine participants) that included subjective self-ratings of sleepiness every hour. To evaluate the model at the individual subject level, a set of mixed-effect regression analyses was performed using subjective sleepiness as the dependent variable. These mixed models estimate a fixed effect (group mean) and a random effect that accounts for heterogeneity between participants in the overall level of sleepiness (i.e., a random intercept). Using this technique, a point was sought on the exponential recovery function at which switching to a linear function would explain maximum variance in subjective sleepiness. The resulting point explaining the highest amount of variance was 12.2 on the 1-21 unit scale. It was concluded that the accumulation of sleep loss effects on subjective sleepiness may be accounted for by making the recovery function linear below a certain point on the otherwise exponential function.
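
The refinement described above can be sketched numerically: recovery during sleep follows an exponential approach to an asymptote, but below a breakpoint (12.2 on the 1-21 scale, per the abstract) the exponential branch is replaced by a linear one whose slope matches the exponential at the breakpoint, so recovery from severe sleep debt is slower. This is a hedged sketch, not the authors' code; the asymptote `U` and time constant `TAU` are illustrative assumptions.

```python
import math

U = 14.3    # illustrative upper asymptote of recovery during sleep (assumed)
B = 12.2    # breakpoint reported in the abstract (1-21 unit scale)
TAU = 4.2   # illustrative recovery time constant in hours (assumed)

def recover(s0: float, hours: float, dt: float = 0.01) -> float:
    """Integrate the piecewise recovery function over `hours` of sleep."""
    s, t = s0, 0.0
    while t < hours:
        # exponential branch above the breakpoint, linear branch below it;
        # the linear slope equals the exponential slope at s = B, so the
        # two branches join continuously
        slope = (U - s) / TAU if s >= B else (U - B) / TAU
        s += slope * dt
        t += dt
    return min(s, U)

def recover_exp(s0: float, hours: float) -> float:
    """Pure exponential recovery, for comparison with the original model."""
    return U - (U - s0) * math.exp(-hours / TAU)
```

Starting from a depleted level (e.g. `s0 = 6`), the piecewise version recovers markedly less over 8 h of sleep than the pure exponential, which is exactly the cumulative-sleep-loss behavior the refinement targets.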

  8. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    Science.gov (United States)

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. Goal-directed behaviors, by contrast, are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost conflict. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Toward a formalized account of attitudes: The Causal Attitude Network (CAN) Model

    NARCIS (Netherlands)

    Dalege, J.; Borsboom, D.; Harreveld, F. van; Berg, H. van den; Conner, M.; Maas, H.L.J. van der

    2016-01-01

    This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions

  10. Mechanistic Physiologically Based Pharmacokinetic (PBPK) Model of the Heart Accounting for Inter-Individual Variability: Development and Performance Verification.

    Science.gov (United States)

    Tylutki, Zofia; Mendyk, Aleksander; Polak, Sebastian

    2018-04-01

    Modern model-based approaches to cardiac safety and efficacy assessment require an accurately established drug concentration-effect relationship. Thus, knowledge of the active concentration of drugs in heart tissue is desirable, along with an estimate of the influence of inter-subject variability. To that end, we developed a mechanistic physiologically based pharmacokinetic model of the heart. The model was described with literature-derived parameters and written in R, v.3.4.0. Five parameters were estimated. The model was fitted to amitriptyline and nortriptyline concentrations after an intravenous infusion of amitriptyline. The cardiac model consisted of 5 compartments representing the pericardial fluid, heart extracellular water, and epicardial intracellular, midmyocardial intracellular, and endocardial intracellular fluids. Drug cardiac metabolism, passive diffusion, active efflux, and uptake were included in the model as mechanisms involved in the drug disposition within the heart. The model accounted for inter-individual variability. The estimates of the optimized parameters were within physiological ranges. The model performance was verified by simulating 5 clinical studies of amitriptyline intravenous infusion, and the simulated pharmacokinetic profiles agreed with clinical data. The results support the model's feasibility. The proposed structure can be tested with the goal of improving patient-specific model-based cardiac safety assessment and offers a framework for predicting cardiac concentrations of various xenobiotics. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  11. Accounting for scattering in the Landauer-Datta-Lundstrom transport model

    Directory of Open Access Journals (Sweden)

    Юрій Олексійович Кругляк

    2015-03-01

    Scattering of carriers in the LDL transport model under changes of the scattering times in the collision processes is considered qualitatively. The basic relationship between the transmission coefficient T and the average mean free path λ is derived for a 1D conductor. As an example, the experimental data for a Si MOSFET are analyzed with the use of various models of reliability.
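
The basic relationship referenced above is, in the Landauer-Datta-Lundstrom picture for a 1D conductor, T = λ/(λ + L), where λ is the mean free path and L the channel length. A minimal sketch follows; combining several scattering mechanisms via Matthiessen's rule is a common additional assumption, not something stated in the abstract.

```python
def transmission(mfp: float, length: float) -> float:
    """Transmission coefficient T = λ / (λ + L) for a 1D conductor
    (mfp: mean free path λ, length: channel length L, same units)."""
    return mfp / (mfp + length)

def combined_mfp(*mfps: float) -> float:
    """Combine independent scattering mechanisms via Matthiessen's rule:
    1/λ_total = Σ 1/λ_i (a standard assumption for independent processes)."""
    return 1.0 / sum(1.0 / m for m in mfps)
```

The two limits come out as expected: for L ≪ λ the channel is ballistic (T → 1), and for L ≫ λ it is diffusive (T ≈ λ/L).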

  12. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    Science.gov (United States)

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-03

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined--US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel--95% of the results occurred within ±20 g CO2e MJ⁻¹ of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden the distributions of ILUC emission intensities.
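
The uncertainty-propagation workflow above can be sketched as a toy Monte Carlo: draw the uncertain parameters, run them through the model, and summarize the spread of the output. The "model" here is a hypothetical stand-in function and the lognormal distributions are invented for illustration; nothing below is taken from the paper.

```python
import random
import statistics

def iluc_intensity(yield_factor: float, conv_productivity: float) -> float:
    """Hypothetical ILUC emissions intensity (g CO2e/MJ): higher crop yield
    and more productive converted land both lower the intensity (assumed form)."""
    base = 30.0
    return base / (yield_factor * conv_productivity)

def monte_carlo(n: int = 20_000, seed: int = 1):
    """Propagate parameter uncertainty; return mean, coefficient of
    variation, and the central 95% interval of the output distribution."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        y = rng.lognormvariate(0.0, 0.15)   # crop-yield uncertainty (assumed)
        p = rng.lognormvariate(0.0, 0.20)   # converted-land productivity (assumed)
        draws.append(iluc_intensity(y, p))
    draws.sort()
    mean = statistics.fmean(draws)
    cv = statistics.stdev(draws) / mean
    lo, hi = draws[int(0.025 * n)], draws[int(0.975 * n)]
    return mean, cv, (lo, hi)
```

Reporting the 95% interval and the coefficient of variation mirrors the summary statistics quoted in the abstract (±20 g CO2e MJ⁻¹, CV of 20-45%), though the numbers produced by this toy are arbitrary.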

  13. An extended continuum model accounting for the driver's timid and aggressive attributions

    International Nuclear Information System (INIS)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-01-01

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. Applying linear stability theory, we present an analysis of the new model's linear stability. Through nonlinear analysis, the KdV–Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving performs better than timid driving because the aggressive driver adjusts his speed promptly according to the leading car's speed. The key feature of this new model is that timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship between the energy consumption of aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption. - Highlights: • A new continuum model is developed that considers the driver's timid and aggressive behaviors simultaneously. • Applying linear stability theory, the new model's linear stability condition is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.

  14. An extended continuum model accounting for the driver's timid and aggressive attributions

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Rongjun; Ge, Hongxia [Faculty of Maritime and Transportation, Ningbo University, Ningbo 315211 (China); Jiangsu Province Collaborative Innovation Center for Modern Urban Traffic Technologies, Nanjing 210096 (China); National Traffic Management Engineering and Technology Research Centre Ningbo University Sub-centre, Ningbo 315211 (China); Wang, Jufeng, E-mail: wjf@nit.zju.edu.cn [Ningbo Institute of Technology, Zhejiang University, Ningbo 315100 (China)

    2017-04-18

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. Applying linear stability theory, we present an analysis of the new model's linear stability. Through nonlinear analysis, the KdV–Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving performs better than timid driving because the aggressive driver adjusts his speed promptly according to the leading car's speed. The key feature of this new model is that timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship between the energy consumption of aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption. - Highlights: • A new continuum model is developed that considers the driver's timid and aggressive behaviors simultaneously. • Applying linear stability theory, the new model's linear stability condition is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.

  15. A Critical Examination of the Models Proposed to Account for Baryon-Antibaryon Segregation Following the Quark-Hadron Transition

    Science.gov (United States)

    Garfinkle, Moishe

    2015-04-01

    The major concern of the Standard Cosmological Model (SCM) is to account for the continuing existence of the universe in spite of the Standard Particle Model (SPM). According to the SPM, below the quark-hadron temperature (~150 ± 50 MeV) the rate of baryon-antibaryon pair creation from γ radiation is in equilibrium with the rate of pair annihilation. At freeze-out (~20 ± 10 MeV) pair creation ceases; only annihilation occurs below this temperature, resulting in a terminal pair ratio B⁺/γ = B⁻/γ ~ 10⁻¹⁸, insufficient to account for the present universe, which would require a pair ratio of at least B⁺/γ = B⁻/γ ~ 10⁻¹⁰. The present universe could therefore not exist according to the SPM unless some mechanism segregated baryons from antibaryons before freeze-out. The SPM can be tweaked to accommodate the first two conditions, but all of the mechanisms proposed over the past sixty years for the third condition have failed: every baryon-number excursion devised was found to be reversible. It is the examination of these possible mechanisms that is the subject of this work.

  16. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards have witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed

  17. Accounting outsourcing

    OpenAIRE

    Klečacká, Tereza

    2009-01-01

    This thesis gives a comprehensive view of accounting outsourcing, dealing with the outsourcing process from its beginning (conditions of collaboration, drafting of the contract), through the collaboration itself, to its possible ending. The work defines outsourcing and indicates the main advantages, disadvantages, and arguments for its use. The main focus of the thesis is the practical side of accounting outsourcing and the provision of high-quality accounting services.

  18. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
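
The EM idea in the abstract can be sketched for a simple special case, not the authors' implementation: a continuous outcome whose mean depends on a binary exposure that is observed only through a predicted probability p_i. With a Gaussian outcome (identity link), the M-step has a closed form (weighted means), which keeps the sketch short; the real method covers GLMs generally.

```python
import math
import random

def em_misclassified(y, p, iters=200):
    """EM for a two-component Gaussian mixture where the prior membership
    probability of each observation i is its predicted probability p[i].
    y: outcomes; p: predicted P(exposed). Returns (mu0, mu1, sigma)."""
    mu0, mu1, sigma = min(y), max(y), 1.0   # crude starting values
    for _ in range(iters):
        # E-step: posterior probability that each subject is truly exposed
        w = []
        for yi, pi in zip(y, p):
            f1 = pi * math.exp(-0.5 * ((yi - mu1) / sigma) ** 2)
            f0 = (1 - pi) * math.exp(-0.5 * ((yi - mu0) / sigma) ** 2)
            w.append(f1 / (f1 + f0))
        # M-step: weighted group means and a pooled standard deviation
        s1 = sum(w)
        s0 = sum(1 - wi for wi in w)
        mu1 = sum(wi * yi for wi, yi in zip(w, y)) / s1
        mu0 = sum((1 - wi) * yi for wi, yi in zip(w, y)) / s0
        var = sum(wi * (yi - mu1) ** 2 + (1 - wi) * (yi - mu0) ** 2
                  for wi, yi in zip(w, y)) / len(y)
        sigma = math.sqrt(var)
    return mu0, mu1, sigma
```

On simulated data where the true exposure shifts the outcome mean by 2 and the predicted probabilities are informative but imperfect, the procedure recovers the exposure effect (mu1 − mu0) without ever observing the exposure itself, which is the unbiasedness property the abstract reports for the mixture approach.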

  19. Unsupervised machine learning account of magnetic transitions in the Hubbard model

    Science.gov (United States)

    Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan

    2018-01-01

    We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near-perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
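
The core trick — that a generic dimensionality reduction of raw spin configurations recovers a physical order parameter — can be demonstrated with a toy: PCA (via SVD) as a stand-in for the autoencoder/t-SNE stage, and crude independently-drawn spin samples as a stand-in for true Monte Carlo configurations. Everything here is an illustrative assumption, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8   # linear lattice size; each configuration is L*L = 64 spins

def config(p_up: float) -> np.ndarray:
    """Crude stand-in for a Monte Carlo sample: each spin is +1 with
    probability p_up, -1 otherwise (no true Ising dynamics)."""
    return np.where(rng.random(L * L) < p_up, 1.0, -1.0)

# "low temperature": strongly magnetized; "high temperature": random spins
samples = np.array([config(0.95) for _ in range(100)] +
                   [config(0.50) for _ in range(100)])

# PCA via SVD of the centered data matrix (stand-in for the learned embedding)
centered = samples - samples.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ vt[0]            # coordinate along leading component

# the unsupervised coordinate tracks the physical order parameter
magnetization = samples.mean(axis=1)
corr = abs(np.corrcoef(projection, magnetization)[0, 1])
```

The leading principal component aligns with the uniform (magnetization) mode, so the unsupervised projection is almost perfectly correlated with magnetization and cleanly separates the two "phases" — the same kind of indicator-observable agreement the abstract reports for t-SNE and the structure factor.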

  20. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    OpenAIRE

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental-economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts, allowing the inclusion of a broader set of ecosystem service types, such as regulating services and cultural services. Consistent with the principles of national accounts, ecosystem accounting focuses on asse…

  1. A comparison of land use change accounting methods: seeking common grounds for key modeling choices in biofuel assessments

    DEFF Research Database (Denmark)

    de Bikuna Salinas, Koldo Saez; Hamelin, Lorie; Hauschild, Michael Zwicky

    2018-01-01

    Five currently used methods to account for the global warming (GW) impact of induced land-use change (LUC) greenhouse gas (GHG) emissions have been applied to four biofuel case studies. Two of the investigated methods attempt to avoid the need of considering a definite occupation, thus … amortization period, by considering ongoing LUC trends as a dynamic baseline. This leads to the accounting of only a small fraction (0.8%) of the related emissions from the assessed LUC, so their validity is disputed. The comparison of methods and contrasting case studies illustrated the need of clearly … distinguishing between the different time horizons involved in life cycle assessments (LCA) of land-demanding products like biofuels. Absent in ISO standards, and giving rise to several confusions, definitions for the following time horizons have been proposed: technological scope, inventory model, impact…

  2. Model of Environmental Development of the Urbanized Areas: Accounting of Ecological and other Factors

    Science.gov (United States)

    Abanina, E. N.; Pandakov, K. G.; Agapov, D. A.; Sorokina, Yu V.; Vasiliev, E. H.

    2017-05-01

    Modern cities and towns are often characterized by poor administration, which can lead to environmental degradation, growing poverty, declining economic growth, and social isolation. In these circumstances it is important to conduct fresh research that forms new approaches to the sustainable development of administrative districts. Such development of urban areas depends on many interdependent factors: ecological, economic, and social. In this article we present theoretical aspects of forming a model of environmental development of urbanized areas. We propose a model containing four levels, covering the natural-resource capacity of the territory, its social features, economic growth, and human impact, and we describe the interrelations of the model's elements. A program of environmental development for a city is also offered that could be used in any urban area.

  3. Mathematical modeling of pigment dispersion taking into account the full agglomerate particle size distribution

    DEFF Research Database (Denmark)

    Kiil, Søren

    2017-01-01

    … particle size distribution was simulated. Data from two previous experimental investigations were used for model validation. The first concerns two different yellow organic pigments dispersed in nitrocellulose/ethanol vehicles in a ball mill, and the second a red organic pigment dispersed in a solvent … The only adjustable parameter used was an apparent rate constant for the linear agglomerate erosion rate. Model simulations, at selected values of time, for the full agglomerate particle size distribution were in good qualitative agreement with the measured values. A quantitative match of the experimental particle size distributions could be obtained using time-dependent fragment distributions, but this resulted in only a very slight improvement in the simulated transient mean diameter. The model provides a mechanistic understanding of the agglomerate breakage process that can be used, e…

  4. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    Energy Technology Data Exchange (ETDEWEB)

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments, including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axi-symmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  5. O8.10A MODEL FOR RESEARCH INITIATIVES FOR RARE CANCERS: THE COLLABORATIVE EPENDYMOMA RESEARCH NETWORK (CERN)

    Science.gov (United States)

    Armstrong, T.S.; Aldape, K.; Gajjar, A.; Haynes, C.; Hirakawa, D.; Gilbertson, R.; Gilbert, M.R.

    2014-01-01

    Ependymoma represents less than 5% of adult central nervous system (CNS) tumors and a higher percentage of pediatric CNS tumors, but it remains an orphan disease. The majority of the laboratory-based research and clinical trials have been conducted in the pediatric setting, a reflection of the relative incidence and funding opportunities. CERN, created in 2006, was designed to establish a collaborative effort between laboratory and clinical research and pediatric and adult investigators. The organization of CERN is based on integration and collaboration among five projects. Project 1 contains the clinical trials network encompassing both adult and pediatric centers. This group has completed 2 clinical trials with more underway. Project 2 is focused on molecular classification of human ependymoma tumor tissues and also contains the tumor repository which has now collected over 600 fully clinically annotated CNS ependymomas from adults and children. Project 3 is focused on drug discovery utilizing robust laboratory models of ependymoma to perform high throughput screening of drug libraries, then taking promising agents through extensive preclinical testing including monitoring of drug delivery to tumor using state of the art microdialysis. Project 4 contains the basic research efforts evaluating the molecular pathogenesis of ependymoma and has successfully translated these findings by generating the first mouse models of ependymoma that are employed in preclinical drug development in Project 3. Project 5 studies patient outcomes, including the incorporation of these measures in the clinical trials. This project also contains an online Ependymoma Outcomes survey, collecting data on the consequences of the disease and its treatment. These projects have been highly successful and collaborative. For example, the serial measurement of symptom burden (Project 5) has greatly contributed to the evaluation of treatment efficacy of a clinical trial (Project 1) and

  6. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Directory of Open Access Journals (Sweden)

    Rahmatollah Beheshti

    Full Text Available Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in (1) quantifying the influence of prior diet preferences when food budgets are increased and (2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.

  7. Assessing and accounting for the effects of model error in Bayesian solutions to hydrogeophysical inverse problems

    Science.gov (United States)

    Koepke, C.; Irving, J.; Roubinet, D.

    2014-12-01

    Geophysical methods have gained much interest in hydrology over the past two decades because of their ability to provide estimates of the spatial distribution of subsurface properties at a scale that is often relevant to key hydrological processes. Because of an increased desire to quantify uncertainty in hydrological predictions, many hydrogeophysical inverse problems have recently been posed within a Bayesian framework, such that estimates of hydrological properties and their corresponding uncertainties can be obtained. With the Bayesian approach, it is often necessary to make significant approximations to the associated hydrological and geophysical forward models such that stochastic sampling from the posterior distribution, for example using Markov-chain-Monte-Carlo (MCMC) methods, is computationally feasible. These approximations lead to model structural errors, which, so far, have not been properly treated in hydrogeophysical inverse problems. Here, we study the inverse problem of estimating unsaturated hydraulic properties, namely the van Genuchten-Mualem (VGM) parameters, in a layered subsurface from time-lapse, zero-offset-profile (ZOP) ground penetrating radar (GPR) data, collected over the course of an infiltration experiment. In particular, we investigate the effects of assumptions made for computational tractability of the stochastic inversion on model prediction errors as a function of depth and time. These assumptions are that (i) infiltration is purely vertical and can be modeled by the 1D Richards equation, and (ii) the petrophysical relationship between water content and relative dielectric permittivity is known. Results indicate that model errors for this problem are far from Gaussian and independently identically distributed, which has been the common assumption in previous efforts in this domain. In order to develop a more appropriate likelihood formulation, we use (i) a stochastic description of the model error that is obtained through
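    The practical stakes of the finding that model errors are neither Gaussian nor independent can be sketched with a toy likelihood comparison (illustrative only: the AR(1)-style covariance and all numbers below are assumptions, not the authors' actual error model):

```python
import numpy as np

def gaussian_loglik(residuals, cov):
    """Multivariate Gaussian log-likelihood of a residual vector under a
    given error covariance matrix."""
    n = residuals.size
    sign, logdet = np.linalg.slogdet(cov)
    quad = residuals @ np.linalg.solve(cov, residuals)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

# Hypothetical correlated model-error covariance (AR(1)-style decay in time):
n = 50
rho, sigma = 0.8, 0.1
idx = np.arange(n)
cov_corr = sigma**2 * rho ** np.abs(idx[:, None] - idx[None, :])
cov_iid = sigma**2 * np.eye(n)   # the common iid assumption

rng = np.random.default_rng(0)
r = rng.multivariate_normal(np.zeros(n), cov_corr)  # truly correlated residuals

ll_iid = gaussian_loglik(r, cov_iid)    # likelihood under the iid assumption
ll_corr = gaussian_loglik(r, cov_corr)  # likelihood under the matching covariance
```

    Inside an MCMC sampler, the diagonal covariance systematically mis-weights correlated residuals, which distorts the posterior on the hydraulic (VGM) parameters; a stochastic description of the model error corrects the weighting.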

  8. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jones-Smith, Jessica C; Igusa, Takeru

    2017-01-01

    Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.
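    The habit mechanism described above can be sketched in a few lines (a minimal sketch: the linear habit-weighting, the parameter h, and all numbers are illustrative assumptions, not the published model's functional form):

```python
# Minimal sketch of habit-constrained demand: new consumption is a blend of
# prior habits and the budget-optimal bundle.

def update_demand(q_prior, q_unconstrained, h):
    """Blend prior consumption with the budget-optimal bundle; h in [0, 1]
    is the strength of habitual behavior (h = 1: the diet never changes)."""
    return {k: h * q_prior[k] + (1 - h) * q_unconstrained[k] for k in q_prior}

def income_elasticity(q0, q1, m0, m1):
    """Elasticity of demand with respect to income (percent change ratio)."""
    return ((q1 - q0) / q0) / ((m1 - m0) / m0)

q_prior = {"fruit": 1.0, "fat": 3.0}          # servings/day before budget change
q_opt   = {"fruit": 2.0, "fat": 2.0}          # budget-optimal bundle after +25% income
q_new = update_demand(q_prior, q_opt, h=0.9)  # strong habits

e_fruit = income_elasticity(q_prior["fruit"], q_new["fruit"], 100.0, 125.0)
# With h = 0.9, fruit rises only from 1.0 to 1.1, so the elasticity is small
# (0.4), mirroring the finding of near-zero response for fruit and fat.
```

    A conventional demand model is the special case h = 0; raising h toward 1 reproduces the paper's central point that relaxing the budget constraint alone does not shift entrenched eating patterns.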

  9. Improved signal model for confocal sensors accounting for object depending artifacts.

    Science.gov (United States)

    Mauch, Florian; Lyda, Wolfram; Gronle, Marc; Osten, Wolfgang

    2012-08-27

    The conventional signal model of confocal sensors is well established and has proven to be exceptionally robust, especially when measuring rough surfaces. Its physical derivation, however, is explicitly based on plane surfaces or point-like objects. Here we show experimental results of a confocal point-sensor measurement of a surface standard. The results illustrate the rise of severe artifacts when measuring curved surfaces. On this basis, we present a systematic extension of the conventional signal model that is proven to be capable of qualitatively explaining these artifacts.

  10. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    Science.gov (United States)

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case-study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
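    The recommended bootstrap approach can be sketched as follows (a minimal sketch assuming exponential time-to-event data; the dataset, sample size, and distribution choice are hypothetical simplifications):

```python
import random

random.seed(1)

# Hypothetical individual patient time-to-event data (months):
data = [2.1, 3.8, 0.9, 7.4, 1.5, 4.2, 2.9, 5.6, 1.1, 3.3]

def fit_exponential(sample):
    """MLE of the exponential rate: lambda = 1 / mean."""
    return len(sample) / sum(sample)

# Nonparametric bootstrap: refit the distribution to resampled patients,
# yielding an empirical distribution over the rate (parameter uncertainty).
B = 1000
boot_rates = []
for _ in range(B):
    resample = [random.choice(data) for _ in data]
    boot_rates.append(fit_exponential(resample))

# In each probabilistic-sensitivity-analysis iteration, draw one bootstrap
# parameter set (parameter uncertainty), then sample patient-level event
# times from it (stochastic uncertainty):
rate = random.choice(boot_rates)
patient_time = random.expovariate(rate)
```

    For correlated multi-parameter distributions (e.g. Weibull shape and scale), each bootstrap refit preserves the correlation between parameters automatically, which is the advantage over drawing them independently.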

  11. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    Directory of Open Access Journals (Sweden)

    Koen Degeling

    2017-12-01

    Full Text Available Abstract Background Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Methods Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case-study. Results Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Conclusions Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.

  12. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia (P.197-207)

    Directory of Open Access Journals (Sweden)

    Supriyati Supriyati

    2017-01-01

    Full Text Available Bank Indonesia demands that the national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks’ financial statements submitted to Bank Indonesia have become the basis for determining their soundness status. In fact, banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. However, for the users of financial statements it may differ, for example in the value of the company, the length of the financial audit, and aspects of tax avoidance by the banks. This study tries to find out (1) the effect of GCG on earnings management, (2) the effect of earnings management on company value, audit report lag, and taxation, and (3) the effect of audit report lag on corporate value and taxation. This is a quantitative study with data collected from the banks’ financial statements, GCG implementation reports, and annual reports of 2003-2013. There were 41 banks listed on the Indonesia Stock Exchange, taken using purposive sampling. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the financial statements presented to the public. This research is expected to provide managerial implications for considering the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks’ competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  13. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia

    Directory of Open Access Journals (Sweden)

    Supriyati

    2015-12-01

    Full Text Available Bank Indonesia demands that the national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks’ financial statements submitted to Bank Indonesia have become the basis for determining their soundness status. In fact, banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. However, for the users of financial statements it may differ, for example in the value of the company, the length of the financial audit, and aspects of tax avoidance by the banks. This study tries to find out (1) the effect of GCG on earnings management, (2) the effect of earnings management on company value, audit report lag, and taxation, and (3) the effect of audit report lag on corporate value and taxation. This is a quantitative study with data collected from the banks’ financial statements, GCG implementation reports, and annual reports of 2003-2013. There were 41 banks listed on the Indonesia Stock Exchange, taken using purposive sampling. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the financial statements presented to the public. This research is expected to provide managerial implications for considering the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks’ competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  14. Current-account effects of a devaluation in an optimizing model with capital accumulation

    DEFF Research Database (Denmark)

    Nielsen, Søren Bo

    1991-01-01

    This article explores the consequences of a devaluation in the context of a ‘real', optimizing model of a small open economy. What provides for real effects of the devaluation is the existence of nominal wage stickiness during a contract period. We show that if this contract period is relatively...... assets and of capital...

  15. Summary of model to account for inhibition of CAM corrosion by porous ceramic coating

    Energy Technology Data Exchange (ETDEWEB)

    Hopper, R., LLNL

    1998-03-31

    Corrosion occurs during five characteristic periods or regimes. These are summarized below. For more detailed discussion, see the attached Memorandum by Robert Hopper entitled ‘Ceramic Barrier Performance Model, Version 1.0, Description of Initial PA Input’ and dated March 30, 1998.

  16. Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper

    Science.gov (United States)

    Hock, Heinrich; Isenberg, Eric

    2012-01-01

    Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…

  17. Practical Model for First Hyperpolarizability Dispersion Accounting for Both Homogeneous and Inhomogeneous Broadening Effects.

    Science.gov (United States)

    Campo, Jochen; Wenseleers, Wim; Hales, Joel M; Makarov, Nikolay S; Perry, Joseph W

    2012-08-16

    A practical yet accurate dispersion model for the molecular first hyperpolarizability β is presented, incorporating both homogeneous and inhomogeneous line broadening because these affect the β dispersion differently, even if they are indistinguishable in linear absorption. Consequently, combining the absorption spectrum with one free shape-determining parameter Γinhom, the inhomogeneous line width, turns out to be necessary and sufficient to obtain a reliable description of the β dispersion, requiring no information on the homogeneous (including vibronic) and inhomogeneous line broadening mechanisms involved, providing an ideal model for practical use in extrapolating experimental nonlinear optical (NLO) data. The model is applied to the efficient NLO chromophore picolinium quinodimethane, yielding an excellent fit of the two-photon resonant wavelength-dependent data and a dependable static value β0 = 316 × 10⁻³⁰ esu. Furthermore, we show that including a second electronic excited state in the model does yield an improved description of the NLO data at shorter wavelengths but has only limited influence on β0.
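    For context, the simplest dispersion description that models of this kind refine is the standard two-state (Oudar-Chemla) expression, quoted here from the general NLO literature rather than from the paper itself:

```latex
\beta(\omega) \;=\; \beta_0 \,
  \frac{\omega_0^{4}}
       {\left(\omega_0^{2}-\omega^{2}\right)\left(\omega_0^{2}-4\omega^{2}\right)}
```

    where ω0 is the transition frequency of the dominant charge-transfer state and β0 the static limit. As summarized in the abstract, the proposed model generalizes this undamped form by treating homogeneous broadening and an inhomogeneous distribution of ω0 (of width Γinhom) separately, since the two broaden the β resonances differently.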

  18. Accountability in Training Transfer: Adapting Schlenker's Model of Responsibility to a Persistent but Solvable Problem

    Science.gov (United States)

    Burke, Lisa A.; Saks, Alan M.

    2009-01-01

    Decades have been spent studying training transfer in organizational environments in recognition of a transfer problem in organizations. Theoretical models of various antecedents, empirical studies of transfer interventions, and studies of best practices have all been advanced to address this continued problem. Yet a solution may not be so…

  19. Small strain multiphase-field model accounting for configurational forces and mechanical jump conditions

    Science.gov (United States)

    Schneider, Daniel; Schoof, Ephraim; Tschukin, Oleg; Reiter, Andreas; Herrmann, Christoph; Schwab, Felix; Selzer, Michael; Nestler, Britta

    2017-08-01

    Computational models based on the phase-field method have become an essential tool in material science and physics in order to investigate materials with complex microstructures. The models typically operate on a mesoscopic length scale resolving structural changes of the material and provide valuable information about the evolution of microstructures and mechanical property relations. For many interesting and important phenomena, such as martensitic phase transformation, mechanical driving forces play an important role in the evolution of microstructures. In order to investigate such physical processes, an accurate calculation of the stresses and the strain energy in the transition region is indispensable. We recall a multiphase-field elasticity model based on the force balance and the Hadamard jump condition at the interface. We show the quantitative characteristics of the model by comparing the stresses, strains and configurational forces with theoretical predictions in two-phase cases and with results from sharp interface calculations in a multiphase case. As an application, we choose the martensitic phase transformation process in multigrain systems and demonstrate the influence of the local homogenization scheme within the transition regions on the resulting microstructures.
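    The two interface conditions named above take the following standard small-strain forms (general continuum-mechanics results, not transcribed from the paper): traction balance across the interface with normal n, and the Hadamard compatibility condition restricting the admissible strain jump:

```latex
[\![\boldsymbol{\sigma}]\!]\,\boldsymbol{n} = \boldsymbol{0},
\qquad
[\![\boldsymbol{\varepsilon}]\!]
  = \tfrac{1}{2}\left(\boldsymbol{a}\otimes\boldsymbol{n}
                    + \boldsymbol{n}\otimes\boldsymbol{a}\right)
```

    Here ⟦·⟧ denotes the jump of a quantity across the interface and a is an arbitrary vector. Enforcing both conditions within the diffuse transition region is what allows the stresses and strain energy there to be computed accurately, rather than by naive (Voigt/Reuss-type) averaging of the phase stiffnesses.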

  20. Shadow Segmentation and Augmentation Using á-overlay Models that Account for Penumbra

    DEFF Research Database (Denmark)

    Nielsen, Michael; Madsen, Claus B.

    2006-01-01

    This paper introduces a new concept within shadow segmentation. Previously, an image is considered to consist of shadow and non-shadow regions. Thus, a binary mask is estimated using various heuristics regarding structural and retinex/color constancy theories. We wish to model natural shadows so...

  1. An Exemplar-Model Account of Feature Inference from Uncertain Categorizations

    Science.gov (United States)

    Nosofsky, Robert M.

    2015-01-01

    In a highly systematic literature, researchers have investigated the manner in which people make feature inferences in paradigms involving uncertain categorizations (e.g., Griffiths, Hayes, & Newell, 2012; Murphy & Ross, 1994, 2007, 2010a). Although researchers have discussed the implications of the results for models of categorization and…

  2. Evaluation of alternative surface runoff accounting procedures using the SWAT model

    Science.gov (United States)

    For surface runoff estimation in the Soil and Water Assessment Tool (SWAT) model, the curve number (CN) procedure is commonly adopted to calculate surface runoff by utilizing antecedent soil moisture condition (SCSI) in field. In the recent version of SWAT (SWAT2005), an alternative approach is ava...

  3. Using state-and-transition modeling to account for imperfect detection in invasive species management

    Science.gov (United States)

    Frid, Leonardo; Holcombe, Tracy; Morisette, Jeffrey T.; Olsson, Aaryn D.; Brigham, Lindy; Bean, Travis M.; Betancourt, Julio L.; Bryan, Katherine

    2013-01-01

    Buffelgrass, a highly competitive and flammable African bunchgrass, is spreading rapidly across both urban and natural areas in the Sonoran Desert of southern and central Arizona. Damages include increased fire risk, losses in biodiversity, and diminished revenues and quality of life. Feasibility of sustained and successful mitigation will depend heavily on rates of spread, treatment capacity, and cost–benefit analysis. We created a decision support model for the wildland–urban interface north of Tucson, AZ, using a spatial state-and-transition simulation modeling framework, the Tool for Exploratory Landscape Scenario Analyses. We addressed the issues of undetected invasions, identifying potentially suitable habitat and calibrating spread rates, while answering questions about how to allocate resources among inventory, treatment, and maintenance. Inputs to the model include a state-and-transition simulation model to describe the succession and control of buffelgrass, a habitat suitability model, management planning zones, spread vectors, estimated dispersal kernels for buffelgrass, and maps of current distribution. Our spatial simulations showed that without treatment, buffelgrass infestations that started with as little as 80 ha (198 ac) could grow to more than 6,000 ha by the year 2060. In contrast, applying unlimited management resources could limit 2060 infestation levels to approximately 50 ha. The application of sufficient resources toward inventory is important because undetected patches of buffelgrass will tend to grow exponentially. In our simulations, areas affected by buffelgrass may increase substantially over the next 50 yr, but a large, upfront investment in buffelgrass control could reduce the infested area and overall management costs.
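    The leverage of early detection over exponentially growing, undetected patches can be sketched numerically (illustrative only: the spread rate is simply backed out of the reported 80 ha to ~6,000 ha scenario, and the detection and treatment parameters are assumptions, not model outputs):

```python
import math

# Back out an effective annual spread rate from the reported untreated
# scenario: 80 ha growing to ~6,000 ha over ~46 years (to 2060).
a0, a1, years = 80.0, 6000.0, 46
r = math.log(a1 / a0) / years   # ~0.094 per year

def area(t, detected_fraction=0.0, treat_efficacy=1.0):
    """Infested area (ha) after t years when a fraction of patches is
    detected and treated; undetected patches grow exponentially."""
    grown = a0 * math.exp(r * t)
    untreated = (1 - detected_fraction) * grown
    treated = detected_fraction * grown * (1 - treat_efficacy)
    return untreated + treated

no_action = area(46)                                           # ~6,000 ha
with_inventory = area(46, detected_fraction=0.9,
                      treat_efficacy=0.95)                     # ~870 ha
# Detecting 90% of patches cuts the 2060 footprint by almost an order of
# magnitude, which is why upfront investment in inventory pays off.
```

    The state-and-transition simulation adds what this sketch omits: spatial dispersal kernels, habitat suitability, and treatment-capacity constraints per planning zone.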

  4. Making Collaborative Innovation Accountable

    DEFF Research Database (Denmark)

    Sørensen, Eva

    The public sector is increasingly expected to be innovative, but the prize for a more innovative public sector might be that it becomes difficult to hold public authorities to account for their actions. The article explores the tensions between innovative and accountable governance, describes the foundation for these tensions in different accountability models, and suggests directions to take in analyzing the accountability of collaborative innovation processes.

  5. Conditional models accounting for regression to the mean in observational multi-wave panel studies on alcohol consumption.

    Science.gov (United States)

    Ripatti, Samuli; Mäkelä, Pia

    2008-01-01

    The aim was to develop the statistical methodology needed to study, in observational panel studies with three or more measurement waves, whether the effects of an acute-onset intervention differ by consumption group, while accounting correctly for regression to the mean (RTM). A general statistical modelling framework, based on conditional models, is presented for analysing alcohol panel data with three or more measurements; it models the dependence between initial drinking level and change in consumption while controlling for RTM. The method is illustrated with panel data from Finland, southern Sweden and Denmark, where the effects of large changes in alcohol taxes and travellers' allowances were studied. The suggested model allows statistical inference to be drawn on the parameters of interest, as well as the identification of non-linear effects of an intervention by initial consumption, using standard statistical software. There was no evidence in any of the countries of the changes being larger among heavy drinkers, but in southern Sweden there was evidence that light drinkers raised their level of consumption. Conditional models are a versatile framework that offers a flexible tool for modelling and testing intervention-related changes in consumption by initial consumption while controlling simultaneously for RTM.
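    The artifact that such conditional models must control for is easy to reproduce (a hypothetical simulation with no true change at all; all parameters are assumptions):

```python
import random

random.seed(42)

# Regression to the mean across two noisy waves with zero true change:
n = 20000
true_level = [random.gauss(10, 2) for _ in range(n)]   # stable consumption
wave1 = [t + random.gauss(0, 3) for t in true_level]   # measured with error
wave2 = [t + random.gauss(0, 3) for t in true_level]

# "Heavy drinkers" selected on the noisy wave-1 measurement (top decile):
cut = sorted(wave1)[int(0.9 * n)]
heavy = [(w1, w2) for w1, w2 in zip(wave1, wave2) if w1 >= cut]
mean_change_heavy = sum(w2 - w1 for w1, w2 in heavy) / len(heavy)
# mean_change_heavy is clearly negative: an apparent "decline" among heavy
# drinkers that is pure measurement artifact. A naive change-by-baseline
# analysis would mistake this for an intervention effect, which is exactly
# what conditioning on the earlier waves is designed to prevent.
```

    With three or more waves, the pre-intervention waves identify the RTM component so that genuine group-specific intervention effects can be separated from it.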

  6. Tree biomass in the Swiss landscape: nationwide modelling for improved accounting for forest and non-forest trees.

    Science.gov (United States)

    Price, B; Gomez, A; Mathys, L; Gardi, O; Schellenberger, A; Ginzler, C; Thürig, E

    2017-03-01

    Trees outside forest (TOF) can perform a variety of social, economic and ecological functions including carbon sequestration. However, detailed quantification of tree biomass is usually limited to forest areas. Taking advantage of structural information available from stereo aerial imagery and airborne laser scanning (ALS), this research models tree biomass using national forest inventory data and linear least-square regression and applies the model both inside and outside of forest to create a nationwide model for tree biomass (above ground and below ground). Validation of the tree biomass model against TOF data within settlement areas shows relatively low model performance (R² of 0.44) but still a considerable improvement on current biomass estimates used for greenhouse gas inventory and carbon accounting. We demonstrate an efficient and easily implementable approach to modelling tree biomass across a large heterogeneous nationwide area. The model offers significant opportunity for improved estimates on land use combination categories (CC) where tree biomass has either not been included or only roughly estimated until now. The ALS biomass model also offers the advantage of providing greater spatial resolution and greater within-CC spatial variability compared to the current nationwide estimates.
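    The core modelling step, linear least-squares regression of plot biomass on an ALS-derived predictor, can be sketched with hypothetical numbers (a single canopy-height predictor is a simplification; the paper's actual predictors and data differ):

```python
# Minimal OLS sketch of the biomass-modelling step (hypothetical data).
heights = [5.0, 8.0, 12.0, 15.0, 20.0, 25.0, 30.0]       # m, ALS canopy height
biomass = [20.0, 45.0, 80.0, 110.0, 160.0, 210.0, 250.0]  # t/ha, inventory plots

n = len(heights)
mx = sum(heights) / n
my = sum(biomass) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(heights, biomass))
sxx = sum((x - mx) ** 2 for x in heights)
slope = sxy / sxx                 # t/ha of biomass per metre of canopy height
intercept = my - slope * mx

pred = [intercept + slope * x for x in heights]
ss_res = sum((y - p) ** 2 for y, p in zip(biomass, pred))
ss_tot = sum((y - my) ** 2 for y in biomass)
r2 = 1 - ss_res / ss_tot
# The fitted model is then applied wall-to-wall, inside and outside forest.
# The paper reports R^2 of 0.44 when validated against trees-outside-forest
# data in settlements, i.e. far noisier than this toy calibration fit.
```

    Fitting on forest-inventory plots and predicting on settlement trees is an extrapolation across land-use classes, which is precisely why the settlement validation performance drops.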

  7. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    NARCIS (Netherlands)

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts,

  9. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards

    Energy Technology Data Exchange (ETDEWEB)

    Xavier, Roberto Salles

    2014-08-01

    The regimes of accountability, the organizations and institutional arrangements of global governance for nuclear non-proliferation, and the Brazilian-Argentine model of mutual vigilance over nuclear safeguards are the subject of this research. The starting point is the importance of the institutional model of global governance for the effective control of the non-proliferation of nuclear weapons. In this context, the research investigates how the current arrangements for international nuclear non-proliferation are structured, and how the Brazilian-Argentine model of mutual vigilance over nuclear safeguards performs in relation to the accountability regimes of global governance. To that end, the current literature on three theoretical dimensions was reviewed: accountability, global governance, and global governance organizations. The research method was the case study, with content analysis as the data treatment technique. The results made it possible: to establish an evaluation model based on accountability mechanisms; to assess how the Brazilian-Argentine model of mutual vigilance over nuclear safeguards behaves in relation to the proposed accountability regime; and to measure the degree to which regional arrangements that work with systems of global governance can strengthen these international systems. (author)

  10. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect this data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
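    The size of the refraction effect is easy to estimate from standard sunrise geometry (a sketch using the conventional -0.833° standard altitude, i.e. 34′ of mean refraction plus the 16′ solar semidiameter; not the project's refined model):

```python
import math

def sunrise_hour_angle(lat_deg, dec_deg, alt_deg):
    """Hour angle (degrees) at which the Sun reaches a given apparent
    altitude, from cos H = (sin a - sin(lat) sin(dec)) / (cos(lat) cos(dec))."""
    lat, dec, alt = map(math.radians, (lat_deg, dec_deg, alt_deg))
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    return math.degrees(math.acos(cos_h))

# Equinox (solar declination 0) at 45 deg latitude:
h_geometric = sunrise_hour_angle(45.0, 0.0, 0.0)     # no refraction: 90 deg
h_standard = sunrise_hour_angle(45.0, 0.0, -0.833)   # standard altitude

# One degree of hour angle corresponds to 4 minutes of time:
shift_minutes = (h_standard - h_geometric) * 4
# Refraction (plus semidiameter) advances sunrise by roughly 4-5 minutes
# here; day-to-day variation in the actual refraction (temperature profile,
# pressure) is what drives the 1-4 minute prediction errors quoted above.
```

    Near the horizon the 34′ figure is only a climatological mean; unusual temperature profiles can change it by several arcminutes, and at high latitudes the shallow solar path amplifies that into large timing errors.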

  11. Why does placing the question before an arithmetic word problem improve performance? A situation model account.

    Science.gov (United States)

    Thevenot, Catherine; Devidal, Michel; Barrouillet, Pierre; Fayol, Michel

    2007-01-01

    The aim of this paper is to investigate the controversial issue of the nature of the representation constructed by individuals to solve arithmetic word problems. More precisely, we consider the relevance of two different theories: the situation or mental model theory (Johnson-Laird, 1983; Reusser, 1989) and the schema theory (Kintsch & Greeno, 1985; Riley, Greeno, & Heller, 1983). Fourth-graders who differed in their mathematical skills were presented with problems that varied in difficulty and with the question either before or after the text. We obtained the classic effect of the position of the question, with better performance when the question was presented prior to the text. In addition, this effect was more marked in the case of children who had poorer mathematical skills and in the case of more difficult problems. We argue that this pattern of results is compatible only with the situation or mental model theory, and not with the schema theory.

  12. Mathematical modeling of pigment dispersion taking into account the full agglomerate particle size distribution

    DEFF Research Database (Denmark)

    Kiil, Søren

    2017-01-01

    The purpose of this work is to develop a mathematical model that can quantify the dispersion of pigments, with a focus on the mechanical breakage of pigment agglomerates. The underlying physical mechanism was assumed to be surface erosion of spherical pigment agglomerates. The full agglomerate......-based acrylic vehicle in a three-roll mill. When the linear rate of agglomerate surface erosion was taken to be proportional to the external agglomerate surface area, simulations of the volume-moment mean diameter over time were in good quantitative agreement with experimental data for all three pigments....... The only adjustable parameter used was an apparent rate constant for the linear agglomerate erosion rate. Model simulations, at selected values of time, for the full agglomerate particle size distribution were in good qualitative agreement with the measured values. A quantitative match of the experimental...
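
    As a toy illustration of the assumed mechanism (a shrinking-sphere sketch under stated assumptions, not the paper's full population-balance model, whose equations and parameters the abstract does not give), letting each agglomerate radius erode at a linear rate proportional to its external surface area makes the volume-moment mean diameter D[4,3] fall over time:

```python
import math

def d43(radii):
    """Volume-moment mean diameter D[4,3] of a population of spheres."""
    return sum((2 * r) ** 4 for r in radii) / sum((2 * r) ** 3 for r in radii)

radii = [5.0, 8.0, 12.0]   # illustrative agglomerate radii, arbitrary units
k, dt = 1e-4, 0.1          # apparent erosion rate constant and time step
history = [d43(radii)]
for _ in range(2000):
    # linear (radial) erosion rate taken proportional to surface area
    radii = [max(r - k * (4 * math.pi * r ** 2) * dt, 0.05) for r in radii]
    history.append(d43(radii))
print(history[0] > history[-1])  # True: the mean diameter decreases
```

    Because the erosion rate grows with surface area, the largest agglomerates shrink fastest, which is qualitatively consistent with the narrowing size distribution the abstract describes.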

  13. Modelling of saturated soil slopes equilibrium with an account of the liquid phase bearing capacity

    Directory of Open Access Journals (Sweden)

    Maltseva Tatyana

    2017-01-01

    The paper presents an original method of solving the problem of a uniformly distributed load acting on a two-phase elastic half-plane with the use of a kinematic model. The kinematic model (Maltsev L.E.) of a two-phase medium is based on two new hypotheses, according to which the stress and strain state of the two-phase body is described by a system of linear elliptic equations. These equations differ from the Lamé equations of elasticity theory by two additional terms in each equation. The terms describe the bearing capacity of the liquid phase, or a decrease in stress in the solid phase. The finite element method has been chosen as the solution method.

  14. Model application of Murabahah financing acknowledgement statement of Sharia accounting standard No 59 Year 2002

    Science.gov (United States)

    Muda, Iskandar; Panjaitan, Rohdearni; Erlina; Ginting, Syafruddin; Maksum, Azhar; Abubakar

    2018-03-01

    The purpose of this research is to observe a murabahah financing implementation model. Observations were made at one of the sharia banks going public in Indonesia. The implementation takes the form of financing granted with appropriate facilities and a maximum financing amount, so the provision of financing should be adjusted to the type, business conditions and business plan of the prospective mudharib. If the financing provided is too low, the mudharib's requirements will not be met and the financing may not be repayable.

  15. An extended heterogeneous car-following model accounting for anticipation driving behavior and mixed maximum speeds

    Science.gov (United States)

    Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia

    2018-02-01

    The optimal driving speeds of different vehicles may differ for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed vmax is an important parameter determining the optimal driving speed. A vehicle with a higher maximum speed is more willing to drive faster than one with a lower maximum speed in a similar situation. By incorporating the anticipation driving behavior of relative velocity and mixed maximum speeds of different percentages into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the interplay between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results both demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, and that increasing the lowest value among the mixed maximum speeds will result in more instability, whereas increasing the value or proportion of the part that already has a higher maximum speed will produce different stabilities at high and low traffic densities.
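
    A minimal sketch of a heterogeneous OV-type model with a relative-velocity anticipation term (a common Bando-type optimal velocity function and illustrative parameters are assumed here; this is not the paper's exact formulation):

```python
import math

def ov(h, vmax, hc=4.0):
    """Bando-type optimal velocity for headway h; vmax differs per class."""
    return 0.5 * vmax * (math.tanh(h - hc) + math.tanh(hc))

def accel(v, dv, h, vmax, a=1.0, lam=0.3):
    """Relax toward the optimal velocity, anticipating relative velocity dv."""
    return a * (ov(h, vmax) - v) + lam * dv

# Ten vehicles on a ring road, alternating between two maximum speeds.
N, L, dt = 10, 80.0, 0.1
vmax = [2.0 if i % 2 == 0 else 3.0 for i in range(N)]
x = [i * L / N for i in range(N)]
v = [0.0] * N
for _ in range(2000):
    h = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
    dv = [v[(i + 1) % N] - v[i] for i in range(N)]
    acc = [accel(v[i], dv[i], h[i], vmax[i]) for i in range(N)]
    v = [max(vi + ai * dt, 0.0) for vi, ai in zip(v, acc)]
    x = [(xi + vi * dt) % L for xi, vi in zip(x, v)]
print("steady speed spread: %.3f" % (max(v) - min(v)))
```

    In the mixed steady state on the ring, all vehicles settle toward a common speed limited by the slower class, with headways rather than speeds absorbing the vmax differences.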

  16. Taking error into account when fitting models using Approximate Bayesian Computation.

    Science.gov (United States)

    van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M

    2018-03-01

    Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test shows that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
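
    Plain rejection ABC, the baseline that the authors' error-calibrated variant refines, can be sketched on a toy problem of estimating the mean of normally distributed repeated measures (the prior, tolerance, and sample sizes here are illustrative assumptions):

```python
import random
import statistics

random.seed(1)
true_mu, noise_sd = 5.0, 1.0
data = [random.gauss(true_mu, noise_sd) for _ in range(50)]
obs = statistics.mean(data)  # observed summary statistic

def rejection_abc(obs, eps=0.05, n_draws=20000):
    """Keep prior draws whose simulated summary falls within eps of obs."""
    accepted = []
    for _ in range(n_draws):
        mu = random.uniform(0.0, 10.0)  # draw from a flat prior
        sim = statistics.mean(random.gauss(mu, noise_sd) for _ in range(50))
        if abs(sim - obs) < eps:
            accepted.append(mu)
    return accepted

post = rejection_abc(obs)
print(round(statistics.mean(post), 2))  # posterior mean, close to 5
```

    As the abstract describes, the error-calibrated version replaces this hard eps cutoff with acceptance probabilities derived from the assumed normal error model, which is what corrects the parameter estimates and their credible intervals.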

  17. A new lattice model accounting for multiple optimal current differences' anticipation effect in two-lane system

    Science.gov (United States)

    Li, Xiaoqin; Fang, Kangling; Peng, Guanghan

    2017-11-01

    This paper extends a two-lane lattice hydrodynamic traffic flow model to take into account the driver's anticipation effect in sensing multiple optimal current differences. Based on the proposed model, we derive analytically the effect of the driver's anticipation of multiple optimal current differences on the instability of the traffic dynamics. Phase diagrams have been plotted and discussed, showing that the stability region grows with the anticipation effect in sensing multiple optimal current differences. Through simulation, it is found that the oscillation of the density wave around the critical density decreases with an increase in the lattice number and the anticipation time, for both the transient and the steady state. The simulation results are in good agreement with the theoretical analysis, which shows that considering the driver's anticipation of multiple optimal current differences in the two-lane lattice model stabilizes the traffic flow and suppresses traffic jams efficiently.

  18. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    International Nuclear Information System (INIS)

    Beckon, William N.

    2016-01-01

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
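
    The lag-selection step can be sketched with synthetic data (the authors' selenium series and regression details are not reproduced): test an array of candidate lags and keep the one that maximizes the correlation between lagged exposure and tissue concentration.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Synthetic series: tissue concentration tracks water concentration
# four time steps later (linear uptake, constant offset).
water = [float((7 * t) % 13) for t in range(60)]
tissue = [0.8 * water[max(t - 4, 0)] + 2.0 for t in range(60)]

def best_lag(water, tissue, max_lag=10):
    """Scan candidate lags; keep the one with the strongest water-tissue
    correlation (the selection criterion described in the abstract)."""
    def score(lag):
        xs = [water[t - lag] for t in range(max_lag, len(water))]
        ys = [tissue[t] for t in range(max_lag, len(water))]
        return pearson(xs, ys)
    return max(range(max_lag + 1), key=score)

print(best_lag(water, tissue))  # recovers the built-in lag of 4
```

    On real data the peak correlation would be broader and noisier than in this clean example, but the selection principle is the same.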

  20. The occupant response to autonomous braking: a modeling approach that accounts for active musculature.

    Science.gov (United States)

    Östh, Jonas; Brolin, Karin; Carlsson, Stina; Wismans, Jac; Davidsson, Johan

    2012-01-01

    The aim of this study is to model occupant kinematics in an autonomous braking event by using a finite element (FE) human body model (HBM) with active muscles as a step toward HBMs that can be used for injury prediction in integrated precrash and crash simulations. Trunk and neck musculature was added to an existing FE HBM. Active muscle responses were achieved using a simplified implementation of 3 feedback controllers for head angle, neck angle, and angle of the lumbar spine. The HBM was compared with volunteer responses in sled tests with 10 m/s² deceleration over 0.2 s and in 1.4-s autonomous braking interventions with a peak deceleration of 6.7 m/s². The HBM captures the characteristics of the kinematics of volunteers in sled tests. Peak forward displacements have the same timing as for the volunteers, and lumbar muscle activation timing matches data from one of the volunteers. The responses of volunteers in autonomous braking interventions are mainly small head rotations and translational motions. This is captured by the HBM controller objective, which is to maintain the initial angular positions. The HBM response with active muscles is within ±1 standard deviation of the average volunteer response with respect to head displacements and angular rotation. With the implementation of feedback control of active musculature in an FE HBM it is possible to model the occupant response to autonomous braking interventions. The lumbar controller is important for the simulations of lap belt-restrained occupants; it is less important for the kinematics of occupants with a modern 3-point seat belt. Increasing head and neck controller gains provides a better correlation for head rotation, whereas it reduces the vertical head displacement and introduces oscillations.
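
    The angle-holding controller idea can be illustrated with a minimal proportional-derivative sketch; the one-degree-of-freedom "head", its inertia, and the gains are illustrative assumptions, not the HBM's actual controller parameters:

```python
def peak_angle(kp, kd, dt=0.001, t_end=1.0):
    """Largest head-angle excursion during a 0.2 s braking pulse when a
    PD 'muscle' torque tries to hold the initial angle (0 rad)."""
    theta, omega = 0.0, 0.0   # angle (rad), angular velocity (rad/s)
    inertia = 0.02            # illustrative head inertia
    peak = 0.0
    for i in range(int(t_end / dt)):
        decel = 6.7 if i * dt < 0.2 else 0.0   # braking pulse, m/s^2
        external = 0.05 * decel                # inertial torque on the head
        control = -kp * theta - kd * omega     # feedback muscle torque
        omega += (external + control) / inertia * dt
        theta += omega * dt
        peak = max(peak, abs(theta))
    return peak

# Higher controller gains hold the head closer to its initial angle.
print(peak_angle(kp=5.0, kd=0.5) < peak_angle(kp=1.0, kd=0.1))  # True
```

    This also hints at the trade-off noted at the end of the abstract: raising the gains tightens angle tracking, but with insufficient damping it introduces oscillations.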

  1. What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman

    2018-01-01

    Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken from Mount Wilson Observatory in California, the University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data using their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.

  2. An extended macro traffic flow model accounting for multiple optimal velocity functions with different probabilities

    Science.gov (United States)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-08-01

    Because the maximum velocities and safe headway distances of different vehicles are not exactly the same, an extended macro model of traffic flow that considers multiple optimal velocity functions with probabilities is proposed in this paper. By means of linear stability theory, the new model's linear stability condition accounting for the multiple optimal velocities with probabilities is obtained. The KdV-Burgers equation is derived through nonlinear analysis to describe the propagating behavior of the traffic density wave near the neutral stability line. Numerical simulations of the influences of multiple maximum velocities and multiple safety distances on the model's stability and traffic capacity are carried out. Three cases are explored: two different maximum speeds with the same safe headway distance, two different safe headway distances with the same maximum speed, and two different maximum velocities combined with two different time gaps. The first case demonstrates that as the proportion of vehicles with a larger vmax increases, the traffic tends to become unstable, which also means that abrupt acceleration and braking are not conducive to traffic stability and more easily result in stop-and-go phenomena. The second case shows that as the proportion of vehicles with greater safety spacing increases, the traffic tends to become unstable, which also means that overly cautious assumptions or weak driving skills are not conducive to traffic stability. The last case indicates that an increase in maximum speed is not conducive to traffic stability, while a reduction of the safe headway distance is conducive to traffic stability. The numerical simulations show that mixed driving and traffic diversion have no effect on the traffic capacity when the traffic density is low or heavy. The numerical results also show that mixed driving should be chosen to increase the traffic capacity when the traffic density is lower, while traffic diversion should be chosen when the traffic density is higher.

  3. Where's the problem? Considering Laing and Esterson's account of schizophrenia, social models of disability, and extended mental disorder.

    Science.gov (United States)

    Cooper, Rachel

    2017-08-01

    In this article, I compare and evaluate R. D. Laing and A. Esterson's account of schizophrenia as developed in Sanity, Madness and the Family (1964), social models of disability, and accounts of extended mental disorder. These accounts claim that some putative disorders (schizophrenia, disability, certain mental disorders) should not be thought of as reflecting biological or psychological dysfunction within the afflicted individual, but instead as external problems (to be located in the family, or in the material and social environment). In this article, I consider the grounds on which such claims might be supported. I argue that problems should not be located within an individual putative patient in cases where there is some acceptable test environment in which there is no problem. A number of cases where such an argument can show that there is no internal disorder are discussed. I argue, however, that Laing and Esterson's argument, that schizophrenia is not within diagnosed patients, does not work. The problem with their argument is that they fail to show that the diagnosed women in their study function adequately in any environment.

  4. Method for determining the duration of construction basing on evolutionary modeling taking into account random organizational expectations

    Directory of Open Access Journals (Sweden)

    Alekseytsev Anatoliy Viktorovich

    2016-10-01

    One of the problems of construction planning is the failure to meet time constraints and the resulting increase in workflow duration. In recent years, information technologies have been used efficiently to solve the problem of estimating the construction period. The article considers the issue of optimally estimating the duration of construction, taking into account possible organizational expectations. To solve this problem, an iterative scheme of evolutionary modeling is developed in which random values of organizational expectations are used as variable parameters. Adjustable genetic operators are used to improve the efficiency of the search for solutions. The reliability of the proposed approach is illustrated by an example of the formation of construction schedules for monolithic building foundations, taking into account possible disruptions in the supply of concrete and reinforcement cages. Application of the presented methodology enables the automated generation of several alternative construction schedules in accordance with a standard or directive duration. This computational procedure also has the prospect of taking into account construction downtime due to weather, accidents related to construction machinery breakdowns, or local emergency collapses of the structures being erected.

  5. Optimization model of energy mix taking into account the environmental impact

    International Nuclear Information System (INIS)

    Gruenwald, O.; Oprea, D.

    2012-01-01

    At present, the energy system in the Czech Republic faces some important decisions regarding limited fossil resources, greater efficiency in the production of electrical energy, and the reduction of pollutant emission levels. These problems can be resolved only by formulating and implementing an energy mix that is rational, reliable, sustainable and competitive. The aim of this article is to find a new way of determining an optimal mix for the energy system in the Czech Republic. To achieve this aim, a linear optimization model comprising several economic, environmental and technical aspects is applied. (Authors)

  6. Research on ice destruction under dynamic loading. Part 1. Modeling of explosive loading of the ice cover taking into account temperature

    Directory of Open Access Journals (Sweden)

    Bogomolov Gennady N.

    2017-01-01

    In this research, the behavior of ice under shock and explosive loads is analyzed. Full-scale experiments were carried out. It is established that the results of 2013 practically coincide with the results of 2017, which is explained by the temperature at which the river ice formed. Two research objects are considered: freshwater ice and a river ice cover. The Taylor test was simulated numerically and its results are presented. Ice is described by an elastoplastic model of continuum mechanics. The process of explosive loading of ice by emulsion explosives is numerically simulated, and the destruction of the ice cover under the detonation products is analyzed in detail.

  7. Modeling liquid-vapor equilibria with an equation of state taking into account dipolar interactions and association by hydrogen bonding

    International Nuclear Information System (INIS)

    Perfetti, E.

    2006-11-01

    Modelling fluid-rock interactions as well as mixing and unmixing phenomena in geological processes requires robust equations of state (EOS) applicable to systems containing water and gases over a broad range of temperatures and pressures. Cubic equations of state based on the Van der Waals theory (e.g. Soave-Redlich-Kwong or Peng-Robinson) allow simple modelling from the critical parameters of the fluid components. However, the accuracy of such equations becomes poor when water is a major component of the fluid, since neither association through hydrogen bonding nor dipolar interactions are accounted for. The Helmholtz energy of a fluid may be written as the sum of different energetic contributions by factorization of the partition function. The model developed in this thesis for pure H2O and H2S considers three contributions. The first represents the reference Van der Waals fluid, modelled by the SRK cubic EOS. The second accounts for association through hydrogen bonding and is modelled by a term derived from the Cubic Plus Association (CPA) theory. The third corresponds to dipolar interactions and is modelled by the Mean Spherical Approximation (MSA) theory. The resulting CPAMSA equation has six adjustable parameters, three of which represent physical terms whose values are close to their experimental counterparts. This equation reproduces the thermodynamic properties of pure water along the vapour-liquid equilibrium better than the classical CPA equation, and extrapolation to higher temperatures and pressures is satisfactory. Similarly, taking dipolar interactions into account together with the SRK cubic equation of state for calculating the molar volume of H2S as a function of pressure and temperature results in a significant improvement compared with the SRK equation alone. Simple mixing rules between dipolar molecules are proposed to model the H2O-H2S system.

  8. Photoproduction of pions on nucleons in the chiral bag model with account of motion effects of the recoil nucleon

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Kanokov, Z.; Musakhanov, M.M.; Rakhimov, A.M.

    1989-01-01

    Pion production on a nucleon is studied in the chiral bag model (CBM). A CBM version is investigated in which the pions get into the bag and interact with quarks in a pseudovector way over the entire volume. Charged pion photoproduction amplitudes are found taking into account the recoil nucleon motion effects. Angular and energy distributions of charged pions, polarization of the recoil nucleon, and multipoles are calculated. The recoil effects are shown to give an additional contribution to the static approximation of the order of 10-20%. For a bag radius of R = 1 fm the calculations are consistent with the experimental data

  9. Rare B decays at LHCb

    CERN Document Server

    Puig Navarro, Albert

    2017-01-01

    Rare decays are flavour changing neutral current processes that allow sensitive searches for phenomena beyond the Standard Model (SM). In the SM, rare decays are loop-suppressed, and new particles in SM extensions can give significant contributions. The very rare decay $B^0_s\to\mu^+\mu^-$ is in addition helicity suppressed and constitutes a powerful probe for new (pseudo)scalar particles. Of particular interest, furthermore, are tests of lepton universality in rare $b\to s\ell^+\ell^-$ decays. The LHCb experiment is designed for the study of b-hadron decays and is ideally suited for the analysis of rare decays due to its high trigger efficiency, as well as its excellent tracking and particle identification performance. Recent results from the LHCb experiment in the area of rare decays are presented, including tests of lepton universality and searches for lepton flavour violation.

  10. Accounting emergy flows to determine the best production model of a coffee plantation

    Energy Technology Data Exchange (ETDEWEB)

    Giannetti, B.F.; Ogura, Y.; Bonilla, S.H. [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil); Almeida, C.M.V.B., E-mail: cmvbag@terra.com.br [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil)

    2011-11-15

    Cerrado, a savannah region, is Brazil's second largest ecosystem after the Amazon rainforest and is likewise threatened with imminent destruction. In the present study, emergy synthesis was applied to assess the environmental performance of a coffee farm located in Coromandel, Minas Gerais, in the Brazilian Cerrado. The effects of land use on sustainability were evaluated by comparing the emergy indices over ten years in order to assess the energy flows driving the production process and to determine the best production model combining productivity and environmental performance. The emergy indices are presented as a function of the annual crop. Results show that Santo Inacio farm should produce approximately 20 bags of green coffee per hectare to achieve its best performance regarding both production efficiency and the environment. The evaluation of the coffee trade complements the results obtained by contrasting productivity and environmental performance, and despite market price variations, the optimum interval for Santo Inacio farm is between 10 and 25 coffee bags/ha. - Highlights: > Emergy synthesis is used to assess the environmental performance of a coffee farm in Brazil. > The effects of land use on sustainability were evaluated over ten years. > The energy flows driving the production process were assessed. > The best production model combining productivity and environmental performance was determined.

  12. The Influence of Feedback on Task-Switching Performance: A Drift Diffusion Modeling Account

    Directory of Open Access Journals (Sweden)

    Russell Cohen Hoffing

    2018-02-01

    Task-switching is an important cognitive skill that facilitates our ability to choose appropriate behavior in a varied and changing environment. Task-switching training studies have sought to improve this ability by practicing switching between multiple tasks. However, an efficacious training paradigm has been difficult to develop, in part due to findings that small differences in task parameters influence switching behavior in a non-trivial manner. Here, for the first time, we employ the Drift Diffusion Model (DDM) to understand the influence of feedback on task-switching and investigate how drift diffusion parameters change over the course of task-switch training. We trained 316 participants on a simple task where they alternated sorting stimuli by color or by shape. Feedback differed in six different ways between subject groups, ranging from No Feedback (NFB) to a variety of manipulations addressing trial-wise vs. Block Feedback (BFB), rewards vs. punishments, payment bonuses and different payouts depending upon the trial type (switch/non-switch). While overall performance was found to be affected by feedback, no effect of feedback was found on task-switching learning. Drift diffusion modeling revealed that the reductions in reaction time (RT) switch cost over the course of training were driven by a continually decreasing decision boundary. Furthermore, feedback effects on RT switch cost were also driven by differences in decision boundary, but not in drift rate. These results reveal that participants systematically modified their task-switching performance without yielding an overall gain in performance.
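
    The decision-boundary mechanism can be illustrated with a minimal simulation of the drift diffusion process (illustrative parameters, not values fitted to the study): evidence accumulates noisily toward a symmetric bound, and lowering the bound shortens the mean response time.

```python
import random

random.seed(0)

def ddm_trial(drift, boundary, noise=1.0, dt=0.01):
    """One diffusion-to-bound decision; returns the response time."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return t

def mean_rt(boundary, drift=1.0, n=500):
    return sum(ddm_trial(drift, boundary) for _ in range(n)) / n

# A lower decision boundary produces faster responses -- the change to
# which the modeling attributes the shrinking RT switch cost.
print(mean_rt(0.6) < mean_rt(1.2))  # True
```

    In this framework a boundary shift trades speed against accuracy, which is consistent with the authors' finding that performance changed without an overall gain.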

  13. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    Science.gov (United States)

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
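The detection-probability issue above can be made concrete with a small numerical sketch. All probabilities below are invented for illustration; the point is only that a naive occupancy estimate confounds true occupancy with cumulative detectability, and that congener presence lowers the latter.

```python
def p_cum_detect(p_visit, n_surveys):
    """Probability of at least one detection across repeated surveys,
    given that the species is truly present at the site."""
    return 1.0 - (1.0 - p_visit) ** n_surveys

# Illustrative (invented) per-visit detection probabilities:
p_spotted_alone = 0.40     # spotted owl, no barred owl present
p_spotted_cooccur = 0.25   # reduced by the congener's presence
psi_spotted = 0.60         # assumed true occupancy probability

# A naive estimate (fraction of sites with >= 1 detection) confounds
# occupancy with detectability, and the bias worsens under co-occurrence:
naive_alone = psi_spotted * p_cum_detect(p_spotted_alone, 5)
naive_cooccur = psi_spotted * p_cum_detect(p_spotted_cooccur, 5)
```

Both naive estimates fall below the assumed occupancy of 0.60, which is why occupancy models that estimate detection probabilities jointly, as in the study above, are needed.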

  14. On the influence of debris in glacier melt modelling: a new temperature-index model accounting for the debris thickness feedback

    Science.gov (United States)

    Carenzo, Marco; Mabillard, Johan; Pellicciotti, Francesca; Reid, Tim; Brock, Ben; Burlando, Paolo

    2013-04-01

    The increase of rockfalls from the surrounding slopes and of englacial melt-out material has led to an increase of the debris cover extent on Alpine glaciers. In recent years, distributed debris energy-balance models have been developed to account for the melt enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are often not available, especially in remote mountain areas such as the Himalaya. Some of the input data, such as wind or temperature, are also difficult to extrapolate from station measurements. Due to their lower data requirements, empirical models have been used in glacier melt modelling. However, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of debris thickness on melt. In this paper, we present a new temperature-index model accounting for the debris thickness feedback in the computation of melt rates at the debris-ice interface. The empirical parameters (temperature factor, shortwave radiation factor, and lag factor accounting for the energy transfer through the debris layer) are optimized at the point scale for several debris thicknesses against melt rates simulated by a physically based debris energy-balance model. The latter has been validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. The new model is developed on Miage Glacier, Italy, a debris-covered glacier whose ablation area is mantled in a near-continuous layer of rock debris. Subsequently, its transferability is tested on Haut Glacier d'Arolla, Switzerland, where the debris is thinner and its extent has expanded in recent decades.
The results show that the performance of the new debris temperature-index model (DETI) in simulating the glacier melt rate at the point scale
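A debris-aware temperature-index model of the kind described above can be sketched as follows. The melt factors, and in particular the Ostrem-like debris modifier (thin debris enhances melt, thick debris insulates), are illustrative stand-ins for the paper's calibrated thickness-dependent parameters, not the DETI parameter set itself.

```python
import numpy as np

def deti_melt(temp_c, swrad, albedo, debris_m,
              tf=0.04, srf=0.0094, d_peak=0.05, d_scale=0.10):
    """Melt (arbitrary units, e.g. mm w.e. per time step) from a
    debris-aware temperature-index model.

    tf, srf          empirical temperature and shortwave radiation factors
    d_peak, d_scale  parameters of an Ostrem-like debris modifier
    All parameter values here are assumptions for illustration only."""
    if temp_c <= 0:
        return 0.0
    # Modifier = 1 on bare ice, peaks for a thin layer, decays for thick debris:
    modifier = (1.0 + debris_m / d_peak) * np.exp(-debris_m / d_scale)
    return modifier * (tf * temp_c + srf * (1.0 - albedo) * swrad)

m_bare = deti_melt(5.0, 600.0, 0.35, 0.0)
m_thin = deti_melt(5.0, 600.0, 0.35, 0.03)   # thin debris enhances melt
m_thick = deti_melt(5.0, 600.0, 0.35, 0.30)  # thick debris insulates
```

The single-factor empirical models criticized in the abstract correspond to holding the modifier constant; making it a function of debris thickness is what the DETI approach adds.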

  15. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections

    Directory of Open Access Journals (Sweden)

    Fei Guo

    2016-06-01

    Full Text Available The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model, whereas in the traditional model only correction values were given and the precision indexes were missing entirely. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform those with the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which is used for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. 
Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the
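The key idea of the improved model, corrections accompanied by precision indexes that feed back into the observation weighting, can be sketched as below. The elevation-binned correction and sigma values are invented placeholders, not the published BeiDou correction tables; only the mechanism (interpolate a correction, then propagate its uncertainty into the code variance) is the point.

```python
import numpy as np

# Hypothetical correction grid (meters) per 10-degree elevation bin for one
# satellite group/frequency; values and sigmas are illustrative only.
elev_nodes = np.arange(0.0, 91.0, 10.0)
corr_nodes = np.array([-0.45, -0.40, -0.30, -0.15, 0.00,
                        0.10,  0.22,  0.30,  0.34, 0.35])
sigma_nodes = np.array([0.10, 0.08, 0.06, 0.05, 0.04,
                        0.04, 0.05, 0.06, 0.08, 0.10])

def correct_code(pseudorange, elev_deg, sigma_code=0.3):
    """Apply an elevation-dependent code bias correction and return the
    corrected pseudorange together with its refined variance."""
    corr = np.interp(elev_deg, elev_nodes, corr_nodes)
    sig_corr = np.interp(elev_deg, elev_nodes, sigma_nodes)
    # Refine the stochastic model: the corrected code is not error-free,
    # so the correction uncertainty is added to the observation variance.
    var = sigma_code**2 + sig_corr**2
    return pseudorange - corr, var

pr_corr, var = correct_code(2.4e7, 45.0)   # toy pseudorange, 45 deg elevation
```

Using `var` rather than `sigma_code**2` alone in the PPP filter is what "taking the stochastic model of the corrections into account" amounts to here.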

  16. Model of investment appraisal of high-rise construction with account of cost of land resources

    Science.gov (United States)

    Okolelova, Ella; Shibaeva, Marina; Trukhina, Natalya

    2018-03-01

    The article considers the problems and potential of high-rise construction in the context of global urbanization. The results of theoretical and practical studies on the appraisal of investments in high-rise construction are provided. High-rise construction has a number of apparent advantages under modern conditions of megapolis development, and primarily it is economically efficient. Amid a serious shortage of construction sites, skyscrapers successfully address the need for manufacturing, office, and living premises. Nevertheless, there are many issues related to high-rise construction, and only thorough scrutiny of them allows estimation of the real economic efficiency of this branch. The article focuses on the question of the economic efficiency of high-rise construction. The suggested model allows adjusting the parameters of a facility under construction, setting its market value as well as the coefficient of appreciation of the construction net cost, which depends on the number of storeys, in the form of a function or discrete values.

  17. A coupled surface/subsurface flow model accounting for air entrapment and air pressure counterflow

    DEFF Research Database (Denmark)

    Delfs, Jens Olaf; Wang, Wenqing; Kalbacher, Thomas

    2013-01-01

    This work introduces the soil air system into integrated hydrology by simulating the flow processes and interactions of surface runoff, soil moisture and air in the shallow subsurface. The numerical model is formulated as a coupled system of partial differential equations for hydrostatic (diffusive wave) shallow flow and two-phase flow in a porous medium. The simultaneous mass transfer between the soil, overland, and atmosphere compartments is achieved by upgrading a fully established leakance concept for overland-soil liquid exchange to an air exchange flux between soil and atmosphere. In a new … the mass exchange between compartments. A benchmark test, which is based on a classic experimental data set on infiltration excess (Horton) overland flow, identified a feedback mechanism between surface runoff and soil air pressures. Our study suggests that air compression in soils amplifies surface runoff …

  18. Does Reading Cause Later Intelligence? Accounting for Stability in Models of Change.

    Science.gov (United States)

    Bailey, Drew H; Littlefield, Andrew K

    2017-11-01

    This study reanalyzes data presented by Ritchie, Bates, and Plomin (2015) who used a cross-lagged monozygotic twin differences design to test whether reading ability caused changes in intelligence. The authors used data from a sample of 1,890 monozygotic twin pairs tested on reading ability and intelligence at five occasions between the ages of 7 and 16, regressing twin differences in intelligence on twin differences in prior intelligence and twin differences in prior reading ability. Results from a state-trait model suggest that reported effects of reading ability on later intelligence may be artifacts of previously uncontrolled factors, both environmental in origin and stable during this developmental period, influencing both constructs throughout development. Implications for cognitive developmental theory and methods are discussed. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  19. Modelling the distribution of fish accounting for spatial correlation and overdispersion

    DEFF Research Database (Denmark)

    Lewy, Peter; Kristensen, Kasper

    2009-01-01

    The spatial distribution of cod (Gadus morhua) in the North Sea and the Skagerrak was analysed over a 24-year period using the Log Gaussian Cox Process (LGCP). In contrast to other spatial models of the distribution of fish, LGCP avoids problems with zero observations and includes the spatial correlation between observations. It is therefore possible to predict and interpolate unobserved densities at any location in the area. This is important for obtaining unbiased estimates of stock concentration and other measures depending on the distribution in the entire area. Results show that the spatial correlation and dispersion of cod catches remained unchanged during winter throughout the period, in spite of a drastic decline in stock abundance and a movement of the centre of gravity of the distribution towards the northeast in the same period. For the age groups considered, the concentration of the stock …

  20. The application of multilevel modelling to account for the influence of walking speed in gait analysis.

    Science.gov (United States)

    Keene, David J; Moe-Nilssen, Rolf; Lamb, Sarah E

    2016-01-01

    Differences in gait performance can be explained by variations in walking speed, which is a major analytical problem. Some investigators have standardised speed during testing, but this can result in an unnatural control of gait characteristics. Other investigators have developed test procedures where participants walk at their self-selected slow, preferred, and fast speeds, with computation of gait characteristics at a standardised speed. However, this analysis depends upon an overlap in the ranges of gait speed observed within and between participants, and this is difficult to achieve under self-selected conditions. In this report a statistical analysis procedure is introduced that utilises multilevel modelling to analyse data from walking tests at self-selected speeds, without requiring an overlap in the range of speeds observed or the routine use of data transformations. Copyright © 2015 Elsevier B.V. All rights reserved.
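The multilevel idea can be sketched with simulated data: a common within-subject speed effect is estimated by group-mean centering, so subjects whose self-selected speed ranges do not overlap still contribute, and each subject's gait characteristic can then be predicted at a standardized speed. All numbers (subjects, speeds, step-length model) are invented for illustration; this is a hand-rolled two-level fit, not the report's actual modelling procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 8
targets = np.array([0.8, 1.2, 1.6])   # slow / preferred / fast (m/s)

# Simulated step length (cm): subject-specific intercepts, a shared speed
# effect of 15 cm per m/s, and self-selected speed ranges that drift by
# subject so there is no forced overlap between participants.
intercepts = rng.normal(40.0, 4.0, n_sub)
rows = []
for s in range(n_sub):
    for v in rng.normal(targets + 0.1 * s, 0.05):
        rows.append((s, v, intercepts[s] + 15.0 * v + rng.normal(0.0, 1.0)))
subj, speed, y = (np.array(c) for c in zip(*rows))

# Level-1 (within-subject) estimate of the common speed effect via
# group-mean centering -- no overlap in absolute speeds is required.
gm_v = np.array([speed[subj == s].mean() for s in range(n_sub)])
gm_y = np.array([y[subj == s].mean() for s in range(n_sub)])
v_c, y_c = speed - gm_v[subj], y - gm_y[subj]
slope_hat = (v_c @ y_c) / (v_c @ v_c)

# Each subject's predicted step length at a standardized 1.2 m/s:
pred_12 = gm_y + slope_hat * (1.2 - gm_v)
```

A full multilevel model would also estimate random slopes and their variances; the sketch only shows why the approach sidesteps the overlap requirement.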

  1. Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

    KAUST Repository

    De La Garza Martinez, Pablo

    2016-05-01

    Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility that fluids remain trapped inside the pore space. In this work I proposed a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not perturb the accuracy of the results. Additionally, I included spatial correlation in the generation of pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values of irreducible saturation, which emphasizes the effect of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm of Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation of the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
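The matrix decomposition method mentioned above can be sketched as follows: build a covariance matrix from pairwise pore distances, Cholesky-factor it, and multiply the factor by independent normals to obtain spatially correlated pore radii. The exponential covariance model and all parameter values are illustrative assumptions, not the thesis's calibrated choices.

```python
import numpy as np

def correlated_pore_radii(coords, mean_r=50e-6, sd_r=10e-6,
                          corr_len=2.0, seed=0):
    """Spatially correlated pore radii via Cholesky (matrix) decomposition
    of an exponential covariance model (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    # Pairwise distances between pore centres:
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sd_r**2 * np.exp(-d / corr_len)
    # Small jitter keeps the factorization numerically stable:
    L = np.linalg.cholesky(cov + 1e-14 * np.eye(len(coords)))
    radii = mean_r + L @ rng.standard_normal(len(coords))
    return np.clip(radii, 1e-6, None)   # keep radii physical

# 3x3 lattice of pore centres with unit spacing:
coords = np.array([(i, j) for i in range(3) for j in range(3)], float)
radii = correlated_pore_radii(coords)
```

Nearby pores end up with more similar radii than distant ones, which is exactly the intrinsic correlation whose neglect the abstract says biases relative permeability and irreducible saturation.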

  2. Accounting for disturbance history in models: using remote sensing to constrain carbon and nitrogen pool spin-up.

    Science.gov (United States)

    Hanan, Erin J; Tague, Christina; Choate, Janet; Liu, Mingliang; Kolden, Crystal; Adam, Jennifer

    2018-03-24

    Disturbances such as wildfire, insect outbreaks, and forest clearing play an important role in regulating carbon, nitrogen, and hydrologic fluxes in terrestrial watersheds. Evaluating how watersheds respond to disturbance requires understanding mechanisms that interact over multiple spatial and temporal scales. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by uncertainties in the initial state of plant carbon and nitrogen stores. Watershed models typically use one of two methods to initialize these stores: spin-up to steady state, or remote sensing with allometric relationships. Spin-up involves running a model until vegetation reaches equilibrium based on climate; this approach assumes that vegetation across the watershed has reached maturity and is of uniform age, which fails to account for landscape heterogeneity and non-steady-state conditions. By contrast, remote sensing can provide data for initializing such conditions. However, methods for assimilating remote sensing into model simulations can also be problematic. They often rely on empirical allometric relationships between a single vegetation variable and modeled carbon and nitrogen stores. Because allometric relationships are species- and region-specific, they do not account for the effects of local resource limitation, which can influence carbon allocation (to leaves, stems, roots, etc.). To address this problem, we developed a new initialization approach using the catchment-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spin-up with the spatial fidelity of remote sensing. It uses remote sensing to define spatially explicit targets for one or several vegetation state variables, such as leaf area index, across a watershed. The model then simulates the growth of carbon and nitrogen stores until the defined targets are met for all locations. 
We evaluated this approach in a mixed pine-dominated watershed in

  3. Hysteresis modelling of GO laminations for arbitrary in-plane directions taking into account the dynamics of orthogonal domain walls

    Energy Technology Data Exchange (ETDEWEB)

    Baghel, A.P.S.; Sai Ram, B. [Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai 400076 (India); Chwastek, K. [Department of Electrical Engineering Czestochowa University of Technology (Poland); Daniel, L. [Group of Electrical Engineering-Paris (GeePs), CNRS(UMR8507)/CentraleSupelec/UPMC/Univ Paris-Sud, 11 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Kulkarni, S.V. [Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai 400076 (India)

    2016-11-15

    The anisotropy of magnetic properties in grain-oriented steels is related to their microstructure. It results from the anisotropy of the single-crystal properties combined with the crystallographic texture. The magnetization process along arbitrary directions can be explained using phase equilibrium for domain patterns, which can be described using Néel's phase theory. According to the theory, the fractions of 180° and 90° domain walls depend on the direction of magnetization. This paper presents an approach to model hysteresis loops of grain-oriented steels along arbitrary in-plane directions. The considered description is based on a modification of the Jiles–Atherton model. It includes a modified expression for the anhysteretic magnetization which takes into account the contributions of the two types of domain walls. The computed hysteresis curves for different directions are in good agreement with experimental results. - Highlights: • An extended Jiles–Atherton description is used to model hysteresis loops in GO steels. • The model stresses the role of material anisotropy and the different contributions of the two types of domain walls. • Hysteresis loops can be modeled along arbitrary in-plane directions. • Modeling results are in good agreement with experiments.
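A two-component anhysteretic of the kind described above can be sketched by weighting two Langevin terms, one per domain-wall type, by direction-dependent phase fractions. The cosine-squared weighting and all parameter values below are illustrative assumptions; the paper's actual fractions come from Néel's phase theory and its parameters from fits to measured loops.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with a series fallback
    near zero to avoid 0/0."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    xs = np.where(small, 1.0, x)              # dummy value avoids division by 0
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

def m_anhysteretic(H, theta_deg, Ms=1.7e6, a180=40.0, a90=400.0):
    """Two-component anhysteretic magnetization (A/m).

    The 180-degree wall term dominates along the rolling direction
    (theta = 0); the 90-degree term takes over away from it. Weighting
    and parameter values are illustrative stand-ins only."""
    w180 = np.cos(np.radians(theta_deg)) ** 2   # assumed phase fraction
    return Ms * (w180 * langevin(H / a180) + (1.0 - w180) * langevin(H / a90))

m_rd = m_anhysteretic(1000.0, 0.0)    # rolling direction: easy axis
m_td = m_anhysteretic(1000.0, 90.0)   # transverse direction: harder
```

Because the 90° term saturates more slowly (larger shape parameter), the same field magnetizes the sheet less when applied away from the rolling direction, reproducing the qualitative anisotropy the paper models.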

  4. Modelling of a mecanum wheel taking into account the geometry of road rollers

    Science.gov (United States)

    Hryniewicz, P.; Gwiazda, A.; Banaś, W.; Sękala, A.; Foit, K.

    2017-08-01

    During process planning in a company, one of the basic factors associated with production costs is the operation time for particular technological jobs. The operation time consists of time units associated with the machining tasks on a workpiece as well as the time associated with loading, unloading, and transport operations of this workpiece between machining stands. Full automation of manufacturing in industrial companies tends towards a maximal reduction in machine downtimes, thereby simultaneously decreasing fixed costs. A new construction of wheeled vehicles, using Mecanum wheels, reduces the transport time of materials and workpieces between machining stands. These vehicles can move simultaneously along two axes and thus position themselves more rapidly relative to the machining stand. The Mecanum wheel construction implies placing free rollers around the wheel, mounted at an angle of 45°, which allow the movement of the vehicle not only along its axis but also perpendicular thereto. Improper selection of the rollers can cause unwanted vertical movement of the vehicle, which may cause difficulty in positioning the vehicle relative to the machining stand and the need for stabilisation. Hence the proper design of the free rollers is essential in designing the whole Mecanum wheel construction; it allows avoiding the disadvantageous and unwanted vertical vibrations of a vehicle with these wheels. In the article, the process of modelling the free rollers in order to obtain the desired unchanging, horizontal trajectory of the vehicle is presented. This shape depends on the desired diameter of the whole Mecanum wheel, together with the road rollers, and the width of the drive wheel. Another factor related to the curvature of the trajectory shape is the length of the road roller and how its diameter decreases depending on the position with respect to its centre. The additional factor, limiting construction of
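The two-axis motion capability that 45° rollers provide can be illustrated with the standard inverse kinematics of a four-wheel Mecanum platform (this is the textbook X-configuration convention, not the specific design studied in the article; the geometry values are placeholders).

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, omega, lx=0.3, ly=0.25, r=0.05):
    """Wheel angular velocities (rad/s) for a desired body twist.

    vx, vy  longitudinal / lateral body velocity (m/s)
    omega   yaw rate (rad/s)
    lx, ly  half wheelbase / half track (m); r = wheel radius (m)
    Signs follow the common X-configuration with rollers at 45 degrees."""
    geom = lx + ly
    return np.array([
        (vx - vy - geom * omega) / r,   # front-left
        (vx + vy + geom * omega) / r,   # front-right
        (vx + vy - geom * omega) / r,   # rear-left
        (vx - vy + geom * omega) / r,   # rear-right
    ])

forward = mecanum_wheel_speeds(0.5, 0.0, 0.0)   # all wheels equal
sideways = mecanum_wheel_speeds(0.0, 0.5, 0.0)  # diagonal wheel pairs oppose
```

Pure sideways motion requires opposing rotations on each diagonal pair, which is only possible because the 45° rollers let each wheel slip freely along its roller axis.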

  5. Probabilistic modelling in urban drainage – two approaches that explicitly account for temporal variation of model errors

    DEFF Research Database (Denmark)

    Löwe, Roland; Del Giudice, Dario; Mikkelsen, Peter Steen

    … of input uncertainties observed in the models. The explicit inclusion of such variations in the modelling process will lead to a better fulfilment of the assumptions made in formal statistical frameworks, thus reducing the need to resort to informal methods. The two approaches presented here …

  6. Search for the Higgs Boson and Rare Standard Model Processes in the ET+B-Jets Signature at the Collider Detector at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Potamianos, Karolos Jozef [Purdue Univ., West Lafayette, IN (United States)

    2011-12-01

    We study rare processes of the standard model of particle physics (SM) in events with missing transverse energy ET, no leptons, and two or three jets, of which at least one is identified as originating from a $b$-quark (ET+b-jets signature). We present a search for the SM Higgs boson produced in association with a $W$ or $Z$ boson when the Higgs decays into $b\bar{b}$. We consider the scenario where $Z \to$ …

  7. A 2D finite element procedure for magnetic field analysis taking into account a vector Preisach model

    Directory of Open Access Journals (Sweden)

    Dupré Luc R.

    1997-01-01

    Full Text Available The main purpose of this paper is to incorporate a refined hysteresis model, viz. a vector Preisach model, in 2D magnetic field computations. To this end the governing Maxwell equations are rewritten in a suitable way, which allows to take into account the proper magnetic material parameters and, moreover, to pass to a variational formulation. The variational problem is solved numerically by a FE approximation, using a quadratic mesh, followed by the time discretisation based upon a modified Cranck Nicholson algorithm. The latter includes a suitable iteration procedure to deal with the nonlinear hysteresis behaviour. Finally, the effectiveness of the presented mathematical tool has been confirmed by several numerical experiments.

  8. Using experiments and demographic models to assess rare plant vulnerability to utility-scale solar energy development

    Science.gov (United States)

    Moore, K. A.

    2015-12-01

    Pressing challenges for the implementation of solar energy are the effects of construction and operation on protected animal and plant species. Siting and mitigation of solar energy often require an understanding of the basic biology and distributions of rare species, which are frequently unknown. How can we rapidly collect the necessary information on species- and site-specific population dynamics to effectively design mitigation and conservation measures? We have developed an integrated approach to assessing the vulnerability of a suite of representative rare plant species in the region. We implemented a prioritized series of demographic and experimental studies over the past four years to identify the types of species, populations, and life stages most vulnerable to impact or most amenable to conservation efforts. We have found substantial variation in vegetative and sexual reproduction between study populations for several rare plants, including between populations that vary in putative impact by development and/or effects of experimental solar arrays. For a subset of species, we designed population viability analyses and applied them to identify sensitive vital rates and compare quasi-extinction probabilities under different climate and impact scenarios. By utilizing practical experiments to test for the effects of real or simulated impacts, we found differences in vital rates between natural and disturbed populations adjacent to and within solar installations. We draw conclusions from our work to guide the analysis of benefits, permitting, and design of utility-scale solar energy facilities.

  9. An EMG-driven biomechanical model that accounts for the decrease in moment generation capacity during a dynamic fatigued condition.

    Science.gov (United States)

    Rao, Guillaume; Berton, Eric; Amarantini, David; Vigouroux, Laurent; Buchanan, Thomas S

    2010-07-01

    Although it is well known that fatigue can greatly reduce muscle forces, it is not generally included in biomechanical models. The aim of the present study was to develop an electromyographic-driven (EMG-driven) biomechanical model to estimate the contributions of flexor and extensor muscle groups to the net joint moment during a nonisokinetic functional movement (squat exercise) performed in nonfatigued and in fatigued conditions. A methodology that aims at balancing the decreased muscle moment production capacity following fatigue was developed. During an isometric fatigue session, a linear regression was created linking the decrease in force production capacity of the muscle (normalized force/EMG ratio) to the EMG mean frequency. Using the decrease in mean frequency estimated through wavelet transforms between dynamic squats performed before and after the fatigue session as input to the previous linear regression, a coefficient accounting for the presence of fatigue in the quadriceps group was computed. This coefficient was used to constrain the moment production capacity of the fatigued muscle group within an EMG-driven optimization model dedicated to estimate the contributions of the knee flexor and extensor muscle groups to the net joint moment. During squats, our results showed significant increases in the EMG amplitudes with fatigue (+23.27% in average) while the outputs of the EMG-driven model were similar. The modifications of the EMG amplitudes following fatigue were successfully taken into account while estimating the contributions of the flexor and extensor muscle groups to the net joint moment. These results demonstrated that the new procedure was able to estimate the decrease in moment production capacity of the fatigued muscle group.
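The calibration step described above, a linear regression linking the normalized force/EMG ratio to EMG mean frequency, then a fatigue coefficient computed from the mean-frequency drop, can be sketched as follows. The calibration numbers and the pre/post frequencies are invented for illustration; only the structure of the procedure follows the abstract.

```python
import numpy as np

# Hypothetical isometric-fatigue calibration data: as EMG mean frequency
# (Hz) falls, the normalized force/EMG ratio falls with it.
mnf = np.array([85.0, 80.0, 75.0, 70.0, 65.0, 60.0])
force_emg_ratio = np.array([1.00, 0.93, 0.87, 0.80, 0.72, 0.66])

slope, intercept = np.polyfit(mnf, force_emg_ratio, 1)

def fatigue_coefficient(mnf_pre, mnf_post):
    """Relative moment-generation capacity after fatigue, from the drop
    in EMG mean frequency measured between pre- and post-fatigue squats
    (e.g. estimated via wavelet transforms in the study)."""
    cap_pre = slope * mnf_pre + intercept
    cap_post = slope * mnf_post + intercept
    return cap_post / cap_pre

# Hypothetical mean-frequency drop from 82 Hz to 68 Hz:
coef = fatigue_coefficient(82.0, 68.0)
```

The resulting coefficient (between 0 and 1) is then used to scale down the fatigued muscle group's maximum moment inside the EMG-driven optimization, which is how the model reconciles higher EMG amplitudes with unchanged net joint moments.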

  10. Accounting for intracell flow in models with emphasis on water table recharge and stream-aquifer interaction: 2. A procedure

    Science.gov (United States)

    Jorgensen, Donald G.; Signor, Donald C.; Imes, Jeffrey L.

    1989-01-01

    Intercepted intracell flow, especially if the cell includes water table recharge and a stream (sink), can result in significant model error if not accounted for. A procedure utilizing net flow per cell (Fn) that accounts for intercepted intracell flow can be used for both steady state and transient simulations. Germane to the procedure is the determination of the ratio of the area of influence of the interior sink to the area of the cell (Ai/Ac). Ai is the area in which water table recharge has the potential to be intercepted by the sink. Determining Ai/Ac requires either a detailed water table map or observation of stream conditions within the cell. A proportioning parameter M, which is equal to 1 or slightly less and is a function of cell geometry, is used to determine how much of the water that has the potential for interception is intercepted by the sink within the cell. Also germane to the procedure is the determination of the flow across the streambed (Fs), due to the difference in head between the water level in the stream and the potentiometric surface of the aquifer underlying the streambed; Fs is not directly a function of cell size. The use of Fn for steady state simulations allows simulation of water levels without utilizing head-dependent or constant-head boundary conditions, which tend to constrain the model-calculated water levels, an undesirable result if a comparison of measured and calculated water levels is being made. Transient simulations of streams usually utilize a head-dependent boundary condition and a leakance value to model a stream. Leakance values for each model cell can be determined from a steady state simulation which used the net-flow-per-cell procedure. For transient simulation, Fn would not include Fs. Also, for transient simulation it is necessary to check Fn at different time intervals because M and Ai/Ac are not constant and change with time. The procedure was used successfully in two different models of the aquifer system
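The quantities defined above (Ai/Ac, the proportioning parameter M, and the streambed flow Fs) combine into a net flow per cell roughly as sketched below. The sign convention and the exact combination are an illustrative simplification of the procedure, not the paper's full formulation.

```python
def net_flow_per_cell(recharge, cell_area, ai_over_ac, m, f_s):
    """Net flow [L^3/T] for a model cell containing an interior sink.

    recharge    water-table recharge rate [L/T]
    cell_area   cell area [L^2]
    ai_over_ac  Ai/Ac, fraction of the cell influenced by the sink
    m           proportioning parameter (<= 1, cell-geometry dependent)
    f_s         streambed flow from the stream-aquifer head difference
                (negative here = loss to the stream); sign convention
                is an assumption for this sketch.

    The sink intercepts m * (Ai/Ac) of the cell's recharge; the remainder
    reaches the regional flow system, plus the streambed exchange."""
    total_recharge = recharge * cell_area
    intercepted = m * ai_over_ac * total_recharge
    return total_recharge - intercepted + f_s

# 1 km^2 cell, 1 mm/d recharge, 40% of the cell influenced by the stream:
fn = net_flow_per_cell(0.001, 1e6, 0.4, 0.9, -200.0)
```

For a transient simulation, as the text notes, `f_s` would be omitted from Fn and handled by the head-dependent boundary, and M and Ai/Ac would be re-evaluated over time.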

  11. Modeling the dynamic behavior of railway track taking into account the occurrence of defects in the system wheel-rail

    Directory of Open Access Journals (Sweden)

    Loktev Alexey

    2017-01-01

    Full Text Available This paper investigates the influence of wheel defects on the development of rail defects, up to a state where prompt rail replacement becomes necessary, taking into account different models of the dynamic contact between a wheel and a rail: in particular, the quasistatic Hertz model, the linear elastic model, and the elastoplastic Aleksandrov-Kadomtsev model. Based on the model of the wheel-rail contact, the maximum stresses are determined which take place in the rail in the presence of wheel defects (e.g., flat spot, weld-on deposit, etc.). In this paper, the solution of the inverse problem is also presented, i.e., investigation of the influence of the strength of a wheel impact upon the rails on wheel defects, as well as evaluation of the stresses emerging in the rails. During the motion of a railway vehicle, the wheel pair position in relation to the rails changes significantly, which causes various combinations of wheel-rail contact areas. Even given a constant axial load, the normal stresses will change substantially due to the differences in the radii of curvature of the contact surfaces of these areas, as well as the movement velocities of the railway vehicles.
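The quasistatic Hertz model mentioned above can be illustrated with the classic sphere-on-plane solution, a common first approximation for a wheel tread on a flat rail head (the real contact is elliptical, and the load and radius below are placeholders, not values from the paper).

```python
import numpy as np

def hertz_sphere_plane(force, radius, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3):
    """Hertzian contact of an elastic sphere on a plane.

    Returns (contact radius a in m, peak contact pressure p_max in Pa).
    Steel-on-steel elastic constants by default."""
    # Effective contact modulus:
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    a = (3.0 * force * radius / (4.0 * E_star)) ** (1.0 / 3.0)
    p_max = 3.0 * force / (2.0 * np.pi * a**2)   # peak of the pressure semi-ellipsoid
    return a, p_max

# Illustrative wheel load of 100 kN on a 0.46 m radius wheel:
a, p_max = hertz_sphere_plane(100e3, 0.46)
```

The resulting contact patch of a few millimetres and peak pressure on the order of 1 GPa explain why local defects such as flat spots, which momentarily concentrate the load further, can drive rail damage.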

  12. Diarrhea Morbidities in Small Areas: Accounting for Non-Stationarity in Sociodemographic Impacts using Bayesian Spatially Varying Coefficient Modelling.

    Science.gov (United States)

    Osei, F B; Stein, A

    2017-08-30

    Model-based estimation of diarrhea risk and understanding of its dependency on sociodemographic factors are important for prioritizing interventions. Calibrating a regression model with a single set of coefficients is unsuitable, especially for large spatial domains. For this purpose, we developed a Bayesian hierarchical varying coefficient model to account for non-stationarity in the covariate effects. We used the integrated nested Laplace approximation for parameter estimation. Diarrhea morbidities in Ghana motivated our empirical study. Results indicated improvements in model fit as well as epidemiological benefits. The findings highlighted substantial spatial, temporal, and spatio-temporal heterogeneities in both diarrhea risk and the coefficients of the sociodemographic factors. Diarrhea risk in peri-urban and urban districts was 13.2% and 10.8% higher than in rural districts, respectively. The varying coefficient model revealed further detail, as the coefficients varied across districts. A unit increase in the proportion of inhabitants with unsafe liquid waste disposal was found to increase diarrhea risk by 11.5%, with higher percentages within the south-central through to the south-western parts. Districts with safe and unsafe drinking water sources unexpectedly had similar risk, as did districts with safe and unsafe toilets. The findings show that site-specific interventions need to consider the varying effects of sociodemographic factors.

  13. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili.

    Science.gov (United States)

    Xiao, Ke; Malvankar, Nikhil S; Shu, Chuanjun; Martz, Eric; Lovley, Derek R; Sun, Xiao

    2016-03-22

The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine if it was feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic-like conductivity of the pili that has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic resolution models capable of explaining the observed conductive properties of Geobacter pili are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics.

  14. Diagnostic and prognostic simulations with a full Stokes model accounting for superimposed ice of Midtre Lovénbreen, Svalbard

    Directory of Open Access Journals (Sweden)

    T. Zwinger

    2009-11-01

We present steady state (diagnostic) and transient (prognostic) simulations of Midtre Lovénbreen, Svalbard, performed with the thermo-mechanically coupled full-Stokes code Elmer. This glacier has an extensive data set of geophysical measurements spanning several decades, which allows the model description to be constrained. Consistent with this data set, we included a simple model accounting for the formation of superimposed ice. Diagnostic results indicated that a dynamic adaptation of the free surface is necessary to prevent non-physically high velocities in a region of under-determined bedrock depths. Observations from ground penetrating radar of the basal thermal state agree very well with model predictions, while the dip angles of isochrones in radar data also match the modelled isochrones reasonably well, despite the numerical deficiencies of estimating ages with a steady state model.

Prognostic runs over 53 years, using a constant accumulation/ablation pattern and starting from the steady state solution obtained for the 1977 DEM configuration, show that: (1) the unrealistic velocities in the under-determined parts of the DEM quickly damp out; (2) the evolution of the free surface matches the measured elevation changes well; (3) under this scenario the retreat of the glacier continues, with the glacier tongue projected to lie ≈500 m behind its 1977 position by 2030.

  15. Accounting for rigid support at the border in a mixed model of the finite element method in problems of ice cover destruction

    Directory of Open Access Journals (Sweden)

    V. V. Knyazkov

    2014-01-01

Evaluating the force needed to damage an ice cover is necessary for estimating the icebreaking capability of vessels and the hull strength of icebreakers, and for the navigation of ships in ice conditions. On the other hand, using the ice cover as a support when arranging construction works from the ice is also of practical interest. A great deal of research into ice cover deformation has been carried out to date, usually resulting in approximate calculation formulas obtained under a variety of assumptions. Nevertheless, we believe that further improvement of the calculations is possible. The application of numerical methods such as the FEM makes it possible to avoid many drawbacks of analytical methods in dealing with complex boundaries, load application areas, and other peculiarities of the problem. The article considers the application of mixed FEM models to investigating ice cover deformation. A simple flexible triangular element of mixed type is used to solve the problem. The vector of generalized coordinates of the element contains the deflections at its apices and the normal bending moments at the midpoints of its sides. Compared to other elements, mixed models easily satisfy compatibility requirements on the boundary of adjacent elements and do not require numerical differentiation of displacements to obtain bending moments, because the bending moments are included in the vector of generalized coordinates of the element. A method of accounting for rigid support of the plate is proposed: the resulting relation, which takes the "stiffening" into account, reduces the order of the resolving system of equations by the number of elements on the plate contour. To evaluate the numerical solution of the ice cover stress-strain problem, it is necessary to check whether the calculated results correspond to the exact solution; using the example of a circular plate, the convergence of the numerical solution to the analytical solution is demonstrated.

  16. Model cortical association fields account for the time course and dependence on target complexity of human contour perception.

    Directory of Open Access Journals (Sweden)

    Vadas Gintautas

    2011-10-01

Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and computational results suggests that each iteration of the lateral interactions takes at least [Formula: see text] ms of cortical processing time. Our results provide evidence that cortical association fields between orientation-selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
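The exponential time constants quoted above come from fitting saturating psychometric functions to accuracy-versus-presentation-time data. A minimal numpy sketch using noiseless synthetic data and a grid search over the time constant; the functional form and fitting procedure here are assumptions, not the authors' actual pipeline:

```python
import numpy as np

def acc(t, tau, ceiling=0.95, chance=0.5):
    """Saturating-exponential psychometric function: accuracy rises from
    chance toward a ceiling with time constant tau (ms)."""
    return chance + (ceiling - chance) * (1.0 - np.exp(-t / tau))

t = np.array([20, 40, 80, 120, 160, 200], float)   # presentation times (ms)
true_tau = 60.0
data = acc(t, true_tau)        # noiseless synthetic "measurements"

# brute-force grid search for the best-fitting time constant
taus = np.arange(10.0, 151.0, 1.0)
sse = [np.sum((acc(t, k) - data) ** 2) for k in taus]
best_tau = taus[int(np.argmin(sse))]
```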

  17. Accounting for geochemical alterations of caprock fracture permeability in basin-scale models of leakage from geologic CO2 reservoirs

    Science.gov (United States)

    Guo, B.; Fitts, J. P.; Dobossy, M.; Bielicki, J. M.; Peters, C. A.

    2012-12-01

Climate mitigation, public acceptance, and energy markets demand that potential CO2 leakage rates from geologic storage reservoirs be predicted to be low and known to a high level of certainty. Current approaches to predicting CO2 leakage rates assume constant permeability of leakage pathways (e.g., wellbores, faults, fractures). A reactive transport model was developed to account for the geochemical alterations that drive permeability evolution of leakage pathways. The one-dimensional reactive transport model was coupled with the basin-scale Estimating Leakage Semi-Analytical (ELSA) model to simulate CO2 and brine leakage through vertical caprock pathways for different CO2 storage reservoir sites and injection scenarios within the Mt. Simon and St. Peter sandstone formations of the Michigan basin. In the numerical reactive transport model, mineral dissolution expands leakage pathways and increases permeability as calcite is dissolved by CO2-acidified brine. A geochemical model compared kinetic and equilibrium treatments of calcite dissolution within each grid block at each time step. For a single fracture, we investigated the effect of the reactions on leakage by performing sensitivity analyses of fracture geometry, CO2 concentration, calcite abundance, initial permeability, and pressure gradient. Assuming that calcite dissolution reaches equilibrium at each time step produces unrealistic scenarios of buffering and permeability evolution within fractures. Therefore, the reactive transport model with a kinetic treatment of calcite dissolution was coupled to the ELSA model and used to compare brine and CO2 leakage rates at a variety of potential geologic storage sites within the Michigan basin. The results are used to construct maps based on the susceptibility to geochemically driven increases in leakage rates. These maps should provide useful and easily communicated inputs into decision-making processes for siting geologic CO2 storage.
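The link between dissolution and leakage-pathway permeability can be caricatured with the parallel-plate ("cubic law") relation, where fracture permeability scales with the square of the aperture. A forward-Euler sketch with an assumed constant, far-from-equilibrium aperture-growth rate standing in for the paper's kinetic calcite model; all numbers are illustrative:

```python
# Parallel-plate fracture: permeability k = b^2 / 12, so any aperture
# growth from calcite dissolution feeds back quadratically into k.
rate = 1e-10      # aperture growth rate, m/s (assumed, far from equilibrium)
b = 1e-4          # initial fracture aperture, m
dt = 3600.0       # time step, s

k0 = b**2 / 12.0                 # initial permeability, m^2
for _ in range(24 * 365):        # one year of hourly Euler steps
    b += rate * dt
k1 = b**2 / 12.0                 # permeability after one year
```

Even this crude sketch shows the qualitative point of the abstract: modest aperture growth produces a large permeability increase, which is why constant-permeability leakage models can misestimate rates.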

  18. Serviceability limit state related to excessive lateral deformations to account for infill walls in the structural model

    Directory of Open Access Journals (Sweden)

    G. M. S. ALVA

Brazilian Codes NBR 6118 and NBR 15575 provide practical values for interstory drift limits applied to conventional modeling in order to prevent negative effects in masonry infill walls caused by excessive lateral deformability; however, these codes do not account for infill walls in the structural model. The inclusion of infill walls in the proposed model allows for a quantitative evaluation of the structural stresses in these walls and an assessment of their cracking (sliding shear, diagonal tension, and diagonal compression cracking). This paper presents the results of simulations of single-story, one-bay infilled R/C frames. The main objective is to show how to check the serviceability limit states under lateral loads when the infill walls are included in the modeling. The results of the numerical simulations allowed for an evaluation of the stresses and the probable cracking pattern in infill walls. The results also allowed identification of some advantages and limitations of the NBR 6118 practical procedure based on interstory drift limits.

  19. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Peixin; Chai, Feng [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Bi, Yunlong [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Pei, Yulong, E-mail: peiyulong1@163.com [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Cheng, Shukang [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China)

    2016-11-01

Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is calculated by an analytical method for the first time. • A magnetic circuit model and an iterative method are employed to calculate the permeability. • The analytical expression for each subdomain is derived. • The proposed method can effectively reduce the duration of the predesign stage.

  20. Ensemble-based flash-flood modelling: Taking into account hydrodynamic parameters and initial soil moisture uncertainties

    Science.gov (United States)

    Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique

    2018-05-01

Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainty, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty comes from the rainfall forcing, and convective-scale meteorological ensemble prediction systems can manage it for forecasting purposes. But other sources are related to the hydrological modelling part of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and the initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulating fast-responding Mediterranean rivers, the ISBA-TOP coupled system. The first step consists in identifying the parameters that have the strongest influence on FF simulations by assuming perfect precipitation. A sensitivity study is carried out, first using a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as the initial soil moisture allow an ensemble-based version of ISBA-TOP to be designed. The first results of this system on some real events are presented. The direct perspective of this work is to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system, to design a complete HEPS for FF forecasting.
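The perturbation step described above can be sketched as drawing ensemble members around a control run by perturbing the most sensitive parameters and the initial soil moisture. The parameter names and perturbation magnitudes below are hypothetical placeholders; ISBA-TOP's real parameter set differs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members = 16
# hypothetical control values, not ISBA-TOP's actual parameters
control = {"sat_hydraulic_cond": 1e-6,   # m/s
           "soil_depth": 1.5,            # m
           "initial_wetness": 0.6}       # soil wetness index, 0..1

members = []
for _ in range(n_members):
    m = dict(control)
    # multiplicative lognormal noise for a positive, wide-ranging parameter
    m["sat_hydraulic_cond"] *= rng.lognormal(mean=0.0, sigma=0.5)
    # ~10% relative noise on soil depth
    m["soil_depth"] *= 1.0 + rng.normal(0.0, 0.1)
    # additive noise on initial soil moisture, clipped to physical bounds
    m["initial_wetness"] = float(np.clip(
        m["initial_wetness"] + rng.normal(0.0, 0.05), 0.0, 1.0))
    members.append(m)
```

Each member would then drive one hydrological simulation, and the spread of simulated discharges expresses the parameter and initial-state uncertainty.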

  1. A statistical human rib cage geometry model accounting for variations by age, sex, stature and body mass index.

    Science.gov (United States)

    Shi, Xiangnan; Cao, Libo; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2014-07-18

In this study, we developed a statistical rib cage geometry model accounting for variations by age, sex, stature and body mass index (BMI). Thorax CT scans were obtained from 89 subjects approximately evenly distributed among 8 age groups and both sexes. Threshold-based CT image segmentation was performed to extract the rib geometries, and a total of 464 landmarks on the left side of each subject's rib cage were collected to describe the size and shape of the rib cage as well as the cross-sectional geometry of each rib. Principal component analysis and multivariate regression analysis were conducted to predict rib cage geometry as a function of age, sex, stature, and BMI, all of which showed strong effects on rib cage geometry. Except for BMI, all parameters also showed significant effects on rib cross-sectional area using a linear mixed model. This statistical rib cage geometry model can serve as a geometric basis for developing a parametric human thorax finite element model for quantifying effects from different human attributes on thoracic injury risks. Copyright © 2014 Elsevier Ltd. All rights reserved.
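The PCA-plus-regression pipeline described in this abstract can be sketched with an SVD-based principal component analysis and a least-squares regression of component scores on the predictors. Toy random data stand in for the 464 landmarks; nothing here reproduces the study's actual model or coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_coords = 89, 12      # toy stand-in for 89 subjects x 464 3-D landmarks
X = rng.normal(size=(n, n_coords))              # landmark coordinates
preds = np.column_stack([np.ones(n),            # intercept
                         rng.uniform(20, 90, n),     # age
                         rng.integers(0, 2, n),      # sex
                         rng.normal(170, 10, n),     # stature
                         rng.normal(25, 4, n)])      # BMI

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
scores = Xc @ Vt.T                                  # component scores
B, *_ = np.linalg.lstsq(preds, scores[:, :3], rcond=None)  # regress 3 PCs

def predict_geometry(age, sex, stature, bmi):
    """Mean landmark geometry predicted from the four covariates."""
    pc = np.array([1.0, age, sex, stature, bmi]) @ B
    return X.mean(axis=0) + pc @ Vt[:3]             # back to landmark space
```

Predicted geometries of this kind are what feed a parametric finite element mesh in the downstream application the authors describe.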

  2. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
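Within-cluster resampling, the idea RCIC builds on, draws one observation per cluster so that informative cluster sizes do not bias the statistic. A minimal sketch with made-up data; the real method applies this to estimating-equation fits, not a simple mean:

```python
import random

# toy data: subject id -> repeated outcomes; cluster sizes vary and could
# be informative (related to the outcome itself)
clusters = {1: [0.2, 0.4], 2: [1.1], 3: [0.7, 0.9, 0.8], 4: [0.3]}

def wcr_mean(clusters, n_resamples=2000, seed=42):
    """Within-cluster resampling: draw one observation per cluster,
    compute the statistic, and average it over many resamples, so
    large clusters do not dominate the estimate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_resamples):
        draw = [rng.choice(obs) for obs in clusters.values()]
        total += sum(draw) / len(draw)
    return total / n_resamples
```

The resampled estimate converges to the average of the per-cluster means (0.625 here), whereas pooling all observations would overweight the three-observation cluster.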

  3. Multivariate space-time modelling of multiple air pollutants and their health effects accounting for exposure uncertainty.

    Science.gov (United States)

    Huang, Guowen; Lee, Duncan; Scott, E Marian

    2018-03-30

    The long-term health effects of air pollution are often estimated using a spatio-temporal ecological areal unit study, but this design leads to the following statistical challenges: (1) how to estimate spatially representative pollution concentrations for each areal unit; (2) how to allow for the uncertainty in these estimated concentrations when estimating their health effects; and (3) how to simultaneously estimate the joint effects of multiple correlated pollutants. This article proposes a novel 2-stage Bayesian hierarchical model for addressing these 3 challenges, with inference based on Markov chain Monte Carlo simulation. The first stage is a multivariate spatio-temporal fusion model for predicting areal level average concentrations of multiple pollutants from both monitored and modelled pollution data. The second stage is a spatio-temporal model for estimating the health impact of multiple correlated pollutants simultaneously, which accounts for the uncertainty in the estimated pollution concentrations. The novel methodology is motivated by a new study of the impact of both particulate matter and nitrogen dioxide concentrations on respiratory hospital admissions in Scotland between 2007 and 2011, and the results suggest that both pollutants exhibit substantial and independent health effects. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  4. A deposit model for carbonatite and peralkaline intrusion-related rare earth element deposits: Chapter J in Mineral deposit models for resource assessment

    Science.gov (United States)

    Verplanck, Philip L.; Van Gosen, Bradley S.; Seal, Robert R.; McCafferty, Anne E.

    2014-01-01

    Carbonatite and alkaline intrusive complexes, as well as their weathering products, are the primary sources of rare earth elements. A wide variety of other commodities have been exploited from carbonatites and alkaline igneous rocks including niobium, phosphate, titanium, vermiculite, barite, fluorite, copper, calcite, and zirconium. Other elements enriched in these deposits include manganese, strontium, tantalum, thorium, vanadium, and uranium. Carbonatite and peralkaline intrusion-related rare earth element deposits are presented together in this report because of the spatial, and potentially genetic, association between carbonatite and alkaline rocks. Although these rock types occur together at many locations, carbonatite and peralkaline intrusion-related rare earth element deposits are not generally found together.

  5. MODEL OF DISTRIBUTION OF THE BUDGET OF THE PORTFOLIO OF IT PROJECTS TAKING INTO ACCOUNT THEIR PRIORITY

    Directory of Open Access Journals (Sweden)

    Anita V. Sotnikova

    2015-01-01

This article addresses the problem of effectively distributing the overall budget of a portfolio between its constituent IT projects, taking their priority into account. The problem is topical in view of the poor performance of consulting companies in the information technology sphere. To determine the priority of IT projects, the analytic network process developed by T. Saaty is used. For the purpose of applying this method, a system of criteria (indicators) reflecting the influence of the portfolio's IT projects on the most significant goals of their implementation is developed. The key performance indicators defined when developing the Balanced Scorecard, which meet the above-mentioned requirements, are used as this system of criteria. The essence of the analytic network process consists in pairwise comparison of the key performance indicators with respect to the goal of realizing the portfolio and the IT projects within it. The result is a priority coefficient for each IT project of the portfolio. The obtained priority coefficients are used in the proposed model of distributing the portfolio budget between IT projects. Thus, the budget of the portfolio is distributed taking into account not only the income from implementing each IT project, but also other criteria important for the IT company, for example: the degree to which an IT project complies with the company's strategic objectives, which determines the expediency of its implementation; and the project implementation term determined by the customer. The developed model is tested on the example of distributing the budget of a portfolio consisting of three IT projects.

  6. Population Modeling of Modified Risk Tobacco Products Accounting for Smoking Reduction and Gradual Transitions of Relative Risk.

    Science.gov (United States)

    Poland, Bill; Teischinger, Florian

    2017-11-01

    As suggested by the Food and Drug Administration (FDA) Modified Risk Tobacco Product (MRTP) Applications Draft Guidance, we developed a statistical model based on public data to explore the effect on population mortality of an MRTP resulting in reduced conventional cigarette smoking. Many cigarette smokers who try an MRTP persist as dual users while smoking fewer conventional cigarettes per day (CPD). Lower-CPD smokers have lower mortality risk based on large cohort studies. However, with little data on the effect of smoking reduction on mortality, predictive modeling is needed. We generalize prior assumptions of gradual, exponential decay of Excess Risk (ER) of death, relative to never-smokers, after quitting or reducing CPD. The same age-dependent slopes are applied to all transitions, including initiation to conventional cigarettes and to a second product (MRTP). A Monte Carlo simulation model generates random individual product use histories, including CPD, to project cumulative deaths through 2060 in a population with versus without the MRTP. Transitions are modeled to and from dual use, which affects CPD and cigarette quit rates, and to MRTP use only. Results in a hypothetical scenario showed high sensitivity of long-run mortality to CPD reduction levels and moderate sensitivity to ER transition rates. Models to project population effects of an MRTP should account for possible mortality effects of reduced smoking among dual users. In addition, studies should follow dual-user CPD histories and quit rates over long time periods to clarify long-term usage patterns and thereby improve health impact projections. We simulated mortality effects of a hypothetical MRTP accounting for cigarette smoking reduction by smokers who add MRTP use. Data on relative mortality risk versus CPD suggest that this reduction may have a substantial effect on mortality rates, unaccounted for in other models. This effect is weighed with additional hypothetical effects in an example.
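The assumed gradual, exponential decay of Excess Risk after quitting or reducing CPD can be written as a one-line function; the half-life value below is a placeholder, not a figure from the model:

```python
def excess_risk(t_years, er_start, er_target, half_life=7.0):
    """Excess mortality risk (relative to never-smokers) decaying
    exponentially from its pre-transition level toward the level
    implied by the new product-use state; half_life is a placeholder."""
    decay = 0.5 ** (t_years / half_life)
    return er_target + (er_start - er_target) * decay
```

In a Monte Carlo simulation of product-use histories, a function like this would be re-evaluated at each transition (initiation, reduction, dual use, quitting) with the appropriate start and target levels.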

  7. Tritium accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.; Spannagel, G.

    1995-01-01

    Conventional accountancy means that for a given material balance area and a given interval of time the tritium balance is established so that at the end of that interval of time the book inventory is compared with the measured inventory. In this way, an optimal effectiveness of accountancy is achieved. However, there are still further objectives of accountancy, namely the timely detection of anomalies as well as the localization of anomalies in a major system. It can be shown that each of these objectives can be optimized only at the expense of the others. Recently, Near-Real-Time Accountancy procedures have been studied; their methodological background as well as their merits will be discussed. (orig.)
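Conventional accountancy as described here compares the book inventory (beginning inventory plus receipts minus shipments) with the measured ending inventory; the difference is the material unaccounted for (MUF). A minimal sketch with illustrative numbers:

```python
def book_inventory(begin, receipts, shipments):
    """Book ending inventory for one material balance period."""
    return begin + sum(receipts) - sum(shipments)

def muf(begin, receipts, shipments, measured_end):
    """Material Unaccounted For: book minus measured ending inventory.
    A MUF beyond measurement uncertainty flags an anomaly."""
    return book_inventory(begin, receipts, shipments) - measured_end

# grams of tritium over one balance period (illustrative numbers only)
anomaly = muf(100.0, receipts=[25.0, 10.0], shipments=[30.0],
              measured_end=104.2)
```

Near-Real-Time Accountancy shortens the balance period and tests the sequence of such MUF values statistically, trading some detection power for timeliness.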

  8. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
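One common way to form Bayesian model averaging weights is from BIC differences, with w_i proportional to exp(-ΔBIC_i/2). A sketch of averaging pollution-health coefficients across candidate models; this BIC approximation is a simplification of full posterior model weighting, and the numbers are hypothetical:

```python
import math

def bma_estimate(bics, betas):
    """Model-average an effect estimate using approximate posterior
    model weights w_i proportional to exp(-BIC_i / 2)."""
    b0 = min(bics)                                   # for numerical stability
    w = [math.exp(-(b - b0) / 2.0) for b in bics]
    s = sum(w)
    w = [x / s for x in w]
    return sum(wi * bi for wi, bi in zip(w, betas)), w

# two hypothetical deprivation specifications with different NO2 effects
est, weights = bma_estimate([1000.0, 1004.0], [0.020, 0.035])
```

The averaged estimate leans toward the better-supported model while still reflecting the plausible alternative, which is the robustness the abstract argues for.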

  9. Rare Higgs three body decay induced by top-Higgs FCNC coupling in the littlest Higgs model with T-parity

    Science.gov (United States)

    Yang, Bing-Fang; Liu, Zhi-Yong; Liu, Ning

    2017-04-01

Motivated by the search for flavor-changing neutral current (FCNC) top quark decays at the LHC, we calculate the rare Higgs three body decay H → Wbc induced by top-Higgs FCNC coupling in the littlest Higgs model with T-parity (LHT). We find that the branching ratio of H → Wbc in the LHT model can reach O(10⁻⁷) in the allowed parameter space. Supported by the National Natural Science Foundation of China (11305049, 11405047), the Startup Foundation for Doctors of Henan Normal University (11112, qd15207) and the Education Department Foundation of Henan Province (14A140010)

  10. Accounting assessment

    Directory of Open Access Journals (Sweden)

    Kafka S.М.

    2017-03-01

Proper valuation of accounting objects has an essential influence on the reliability of assessing a company's financial situation; the problem of accounting valuation is therefore quite relevant. The works of domestic and foreign scholars on the valuation of accounting objects, together with the regulatory and legal acts of Ukraine governing accounting and the compilation of financial reporting, form the methodological basis of the research. The author uses theoretical methods of cognition (abstraction and generalization, analysis and synthesis, induction and deduction, and other methods producing conceptual knowledge) to synthesize the theoretical and methodological principles of valuing assets, liabilities and equity in accounting. Tabular presentation and information comparison methods are used for the analytical research. The article considers modern approaches to the valuation of accounting objects and financial statement items. The expedience of keeping records at historical value is argued, while the financial statement items are to be presented according to their valuation on the reporting date. In connection with valuation, the depreciation of fixed assets is considered as a process of systematically returning into circulation the funds previously advanced for the purchase (production, improvement) of fixed assets and intangible assets, by including the amount of wear in production costs. It is therefore proposed to amortize only the costs actually incurred, i.e., not to depreciate fixed assets received free of charge or revaluation surpluses.

  11. Rare Decays at LHCb

    Science.gov (United States)

    Hall, Sam

    2014-04-01

Rare decays of beauty and charm hadrons provide an effective method of testing the Standard Model and probing possible new-physics scenarios. The LHCb experiment has published a variety of interesting results in this field, some of which are presented here. In particular, the measurements of the branching fractions of B(s)⁰ → μ⁺μ⁻ which, in combination with CMS, resulted in the first observation of the Bs⁰ → μ⁺μ⁻ decay. Other topics include searches for the rare decay D⁰ → μ⁺μ⁻, the lepton flavour violating decays B(s)⁰ → e±μ∓, and the observation of the ψ(4160) resonance in the region of low recoil in the B⁺ → K⁺μ⁺μ⁻ decay. New results on the angular analysis of the decay B⁰ → K*⁰μ⁺μ⁻ with form-factor-independent observables are also shown.

  12. Modeling the Photoelectron Spectra of MoNbO2(-) Accounting for Spin Contamination in Density Functional Theory.

    Science.gov (United States)

    Thompson, Lee M; Hratchian, Hrant P

    2015-08-13

    Spin contamination in density functional studies has been identified as a cause of discrepancies between theoretical and experimental spectra of metal oxide clusters such as MoNbO2. We perform calculations to simulate the photoelectron spectra of the MoNbO2 anion using broken-symmetry density functional theory incorporating recently developed approximate projection methods. These calculations are able to account for the presence of contaminating spin states at single-reference computational cost. Results using these new tools demonstrate the significant effect of spin-contamination on geometries and force constants and show that the related errors in simulated spectra may be largely overcome by using an approximate projection model.

  13. Accounting for pH heterogeneity and variability in modelling human health risks from cadmium in contaminated land

    International Nuclear Information System (INIS)

    Gay, J. Rebecca; Korre, Anna

    2009-01-01

The authors have previously published a methodology which combines quantitative probabilistic human health risk assessment with spatial statistical methods (geostatistics) to produce an assessment, incorporating uncertainty, of the risks to human health from exposure to contaminated land. That model assumes a constant soil-to-plant concentration factor (CF_veg) when calculating the intake of contaminants. The model is modified here to enhance its use in situations where CF_veg varies with soil pH, as is the case for cadmium. The original methodology uses sequential indicator simulation (SIS) to map soil concentration estimates for one contaminant across a site. A real, age-stratified population is mapped across the contaminated area, and the intake of soil contaminants by individuals is calculated probabilistically using an adaptation of the Contaminated Land Exposure Assessment (CLEA) model. The proposed improvement involves geostatistical estimation not only of the contaminant concentration but also of soil pH, which in turn leads to a variable CF_veg estimate that influences the human intake results. The results presented demonstrate that taking pH into account can greatly influence the outcome of the risk assessment. It is proposed that a similar adaptation could be used for other combinations of soil variables which influence CF_veg.
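A pH-dependent soil-to-plant concentration factor can be implemented as a simple lookup against soil pH, here by piecewise-linear interpolation; the table values and function names are purely illustrative, not cadmium data from the study:

```python
def cf_veg(ph, table=((4.5, 0.9), (6.0, 0.35), (7.5, 0.1))):
    """Soil-to-plant concentration factor as a piecewise-linear function
    of soil pH; the table values are purely illustrative (uptake assumed
    to fall as pH rises, as reported for cadmium)."""
    pts = sorted(table)
    if ph <= pts[0][0]:
        return pts[0][1]
    if ph >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= ph <= x1:
            return y0 + (y1 - y0) * (ph - x0) / (x1 - x0)

def plant_concentration(soil_mg_kg, ph):
    """Contaminant concentration in produce grown on the soil."""
    return soil_mg_kg * cf_veg(ph)
```

In the geostatistical workflow, each simulated pair of co-located soil concentration and pH values would pass through a lookup of this kind before the intake calculation.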

  14. Explanations, mechanisms, and developmental models: Why the nativist account of early perceptual learning is not a proper mechanistic model

    Directory of Open Access Journals (Sweden)

    Radenović Ljiljana

    2013-01-01

    In the last several decades, a number of studies on perceptual learning in early infancy have suggested that even infants seem to be sensitive to the way objects move and interact in the world. In order to explain the early emergence of infants’ sensitivity to causal patterns in the world, some psychologists have proposed that core knowledge of objects and causal relations is innate (Leslie & Keeble, 1987; Carey & Spelke, 1994; Keil, 1995; Spelke et al., 1994). The goal of this paper is to examine the nativist developmental model by investigating the criteria that a mechanistic model needs to fulfill if it is to be explanatory. Craver (2006) put forth a number of such criteria and developed a few very useful distinctions between explanation sketches and proper mechanistic explanations. By applying these criteria to the nativist developmental model I aim to show, firstly, that nativists only partially characterize the phenomenon at stake without giving us the details of when and under which conditions perception and attention in early infancy take place. Secondly, nativists start off with a description of the phenomena to be explained (even if it is only a partial description) but import into it a particular theory of perception that requires further empirical evidence and further defense on its own. Furthermore, I argue that innate knowledge is a good candidate for a filler term (a term that is used to name the still unknown processes and parts of the mechanism) and is likely to become redundant. Recent extensive research on early intermodal perception indicates that the mechanism enabling the perception of regularities and causal patterns in early infancy is grounded in our neurophysiology. However, this mechanism is fairly basic and does not involve highly sophisticated cognitive structures or innate core knowledge. I conclude with a remark that a closer examination of the mechanisms involved in early perceptual learning indicates that the nativism

  15. Goals and Psychological Accounting

    DEFF Research Database (Denmark)

    Koch, Alexander Karl; Nafziger, Julia

    We model how people formulate and evaluate goals to overcome self-control problems. People often attempt to regulate their behavior by evaluating goal-related outcomes separately (in narrow psychological accounts) rather than jointly (in a broad account). To explain this evidence, our theory of endogenous narrow or broad psychological accounts combines insights from the literatures on goals and mental accounting with models of expectations-based reference-dependent preferences. By formulating goals the individual creates expectations that induce reference points for task outcomes. These goal-induced reference points make substandard performance psychologically painful and motivate the individual to stick to his goals. How strong the commitment to goals is depends on the type of psychological account. We provide conditions when it is optimal to evaluate goals in narrow accounts. The key intuition...

  16. An empirical test of Birkett’s competency model for management accountants : A confirmative study conducted in the Netherlands

    NARCIS (Netherlands)

    Bots, J.M.; Groenland, E.A.G.; Swagerman, D.

    2009-01-01

    In 2002, the Accountants-in-Business section of the International Federation of Accountants (IFAC) issued the Competency Profiles for Management Accounting Practice and Practitioners report. This “Birkett Report” presents a framework for competency development during the careers of management

  17. A fundamentalist perspective on accounting and implications for accounting

    OpenAIRE

    Guohua Jiang; Stephen Penman

    2013-01-01

    This paper presents a framework for addressing normative accounting issues for reporting to shareholders. The framework is an alternative to the emerging Conceptual Framework of the International Accounting Standards Board and the Financial Accounting Standards Board. The framework can be broadly characterized as a utilitarian approach to accounting standard setting. It has two main features. First, accounting is linked to valuation models under which shareholders use accounting information t...

  18. Rare-earth elements

    Science.gov (United States)

    Van Gosen, Bradley S.; Verplanck, Philip L.; Seal, Robert R.; Long, Keith R.; Gambogi, Joseph; Schulz, Klaus J.; DeYoung, John H.; Seal, Robert R.; Bradley, Dwight C.

    2017-12-19

    The rare-earth elements (REEs) are 15 elements that range in atomic number from 57 (lanthanum) to 71 (lutetium); they are commonly referred to as the “lanthanides.” Yttrium (atomic number 39) is also commonly regarded as an REE because it shares chemical and physical similarities and has affinities with the lanthanides. Although REEs are not rare in terms of average crustal abundance, concentrated deposits of REEs are limited in number. Because of their unusual physical and chemical properties, the REEs have diverse defense, energy, industrial, and military technology applications. The glass industry is the leading consumer of REE raw materials, which are used for glass polishing and as additives that provide color and special optical properties to the glass. Lanthanum-based catalysts are used in petroleum refining, and cerium-based catalysts are used in automotive catalytic converters. The use of REEs in magnets is a rapidly increasing application. Neodymium-iron-boron magnets, which are the strongest known type of magnet, are used where space and weight are restricted. Nickel-metal hydride batteries use anodes made of lanthanum-based alloys. China, which has led the world production of REEs for decades, accounted for more than 90 percent of global production and supply, on average, during the past decade. Citing a need to retain its limited REE resources to meet domestic requirements, as well as concerns about the environmental effects of mining, China began placing restrictions on the supply of REEs in 2010 through the imposition of quotas, licenses, and taxes. As a result, the global rare-earth industry has increased its stockpiling of REEs; explored for deposits outside of China; and promoted new efforts to conserve, recycle, and substitute for REEs. New mine production began at Mount Weld in Western Australia, and numerous other exploration and development projects noted in this chapter are ongoing throughout the world. The REE-bearing minerals are

  19. Modelling of the physico-chemical behaviour of clay minerals with a thermo-kinetic model taking into account particle morphology in compacted material.

    Science.gov (United States)

    Sali, D.; Fritz, B.; Clément, C.; Michau, N.

    2003-04-01

    Modelling of fluid-mineral interactions is widely used in Earth Sciences studies to better understand the physicochemical processes involved and their long-term effect on the behaviour of materials. Numerical models simplify the processes but try to preserve their main characteristics. The modelling results therefore depend strongly on the quality of the data describing the initial physicochemical conditions for rock materials, fluids and gases, and on how realistically the processes are represented. Current geochemical models do not adequately take into account rock porosity and permeability or the particle morphology of clay minerals. In compacted materials like those considered as barriers in waste repositories, low-permeability rocks like mudstones or compacted powders will be used: they contain mainly fine particles, and the geochemical models used for predicting their interactions with fluids tend to misjudge their surface areas, which are fundamental parameters in kinetic modelling. The purpose of this study was to improve the way particle morphology is taken into account in the thermo-kinetic code KINDIS and the reactive transport code KIRMAT. A new function was integrated into these codes, treating the reaction surface area as a volume-dependent parameter, and the calculated evolution of the mass balance in the system was coupled with the evolution of reactive surface areas. We carried out application exercises for numerical validation of these new versions of the codes, and the results were compared with those of the pre-existing thermo-kinetic code KINDIS. Several points are highlighted. Taking reactive surface area evolution into account during simulation modifies the predicted mass transfers related to fluid-mineral interactions. Different secondary mineral phases are also observed during modelling. The evolution of the reactive surface parameter helps to resolve competition effects between the different phases present in the system which are all able to fix the chemical
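
    The effect of a volume-dependent reactive surface area can be illustrated with a minimal sketch. This is not the KINDIS/KIRMAT implementation; the rate constant, time step, and units are invented. For shrinking grains a common geometric choice is A ∝ V^(2/3), which slows dissolution as the mineral is consumed.

```python
# Compare mineral dissolution with a fixed reactive surface area against
# one where the area scales geometrically with the remaining volume.
k = 1e-3           # dissolution rate constant (arbitrary units)
v0, a0 = 1.0, 1.0  # initial mineral volume and reactive surface area
dt, n_steps = 1.0, 800

v_fixed, v_scaled = v0, v0
for _ in range(n_steps):
    # Constant-area assumption: dissolution rate never slows down
    v_fixed = max(v_fixed - k * a0 * dt, 0.0)
    # Evolving area: A = A0 * (V/V0)^(2/3) for a shrinking grain
    a = a0 * (v_scaled / v0) ** (2.0 / 3.0)
    v_scaled = max(v_scaled - k * a * dt, 0.0)

print(f"remaining volume, fixed area:    {v_fixed:.3f}")
print(f"remaining volume, evolving area: {v_scaled:.3f}")
```

The evolving-area run retains more mineral because the reactive surface, and hence the dissolution rate, shrinks along with the grain; a constant-area model systematically overpredicts mass transfer late in the simulation.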

  20. On the importance of accounting for competing risks in pediatric brain cancer: II. Regression modeling and sample size.

    Science.gov (United States)

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-03-15

    To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest. Copyright © 2011 Elsevier Inc. All rights reserved.
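
    The difference between accounting for competing events and naively censoring them can be sketched with simulated data (invented hazards, not the trial data above): the Aalen-Johansen estimator of the cumulative incidence of the main event is compared with the naive 1 − Kaplan-Meier estimate that treats competing events as censoring.

```python
import numpy as np

# Toy competing-risks data: cause 1 = main event (e.g. radiotherapy after
# progression, "Event A"), cause 2 = competing event, 0 = censored.
rng = np.random.default_rng(1)
n = 2000
t1 = rng.exponential(10.0, n)   # latent time to main event
t2 = rng.exponential(6.0, n)    # latent time to competing event
c = rng.uniform(0.0, 25.0, n)   # administrative censoring
time = np.minimum.reduce([t1, t2, c])
cause = np.where((t1 <= t2) & (t1 <= c), 1,
                 np.where((t2 <= t1) & (t2 <= c), 2, 0))
cause = cause[np.argsort(time)]  # process events in time order

surv = 1.0       # overall Kaplan-Meier survival (any event)
km_naive = 1.0   # KM for cause 1 treating competing events as censoring
cif = 0.0        # Aalen-Johansen cumulative incidence of cause 1
at_risk = n
for d in cause:
    if d == 1:
        cif += surv / at_risk          # cause-1 hazard x overall survival
        km_naive *= 1.0 - 1.0 / at_risk
        surv *= 1.0 - 1.0 / at_risk
    elif d == 2:
        surv *= 1.0 - 1.0 / at_risk    # competing event also removes patients
    at_risk -= 1

print(f"Aalen-Johansen cumulative incidence of main event: {cif:.3f}")
print(f"naive 1 - Kaplan-Meier (competing events censored): {1 - km_naive:.3f}")
```

The naive estimate always exceeds the cumulative incidence, because it ignores that patients experiencing the competing event can no longer experience the main one; this is the bias that motivates cause-specific and subdistribution hazard modelling.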

  1. Paying for the Orphan Drug System: break or bend? Is it time for a new evaluation system for payers in Europe to take account of new rare disease treatments?

    Directory of Open Access Journals (Sweden)

    Hughes-Wilson Wills

    2012-09-01

    Since its enactment in 2000, the European Orphan Medicinal Products Regulation has allowed the review and approval of approaching 70 treatments for some 55 different conditions in Europe. Success does not come without a price, however. Many of these so-called “orphan drugs” have higher price points than treatments for more common diseases. This has raised debate as to whether the treatments are worth it, which, in turn, risks blocking patient access to treatment. To date, orphan drugs have accounted for only a small percentage of the overall drug budget. It would appear that, with increasing numbers of orphan drugs, governments are concerned about the future budget impact and their cost-effectiveness in comparison with other healthcare interventions. Orphan drugs are under the spotlight, something that is likely to continue as the economic crisis in Europe takes hold and governments respond with austerity measures that include cuts to healthcare expenditures. Formally and informally, governments are looking at how they are going to handle orphan drugs in the future. Collaborative proposals between EU governments to better understand the value of orphan drugs are under consideration. In recent years there has been increasing criticism of behaviours in the orphan drug field, mainly centring on two key perceptions of the system: the high prices of orphan drugs and their inability to meet standard cost-effectiveness thresholds; and the construct of the system itself, which allows companies to gain the benefits that accrue from being badged as an orphan drug. The authors hypothesise that, by examining these criticisms individually, one might be able to turn these different “behaviours” into criteria for the creation of a system to evaluate new orphan drugs coming onto the market. It has been acknowledged that standard methodologies for Health Technology Assessments (HTA) will need to be tailored to take into account the

  2. ABACC - Brazil-Argentina Agency for Accounting and Control of Nuclear Materials, a model of integration and transparence

    International Nuclear Information System (INIS)

    Oliveira, Antonio A.; Do Canto, Odilon Marcusso

    2013-01-01

    Argentina and Brazil began their activities in the nuclear area at about the same time, in the 1950s. The existence of an international nuclear non-proliferation treaty (the NPT), seen by Brazil and Argentina as discriminatory and prejudicial to the interests of countries without nuclear weapons, led to the need for a common system of control of nuclear material between the two countries, to somehow provide assurances to the international community of the exclusively peaceful purpose of their nuclear programs. The creation of a common system assured the establishment of uniform procedures to implement safeguards in Argentina and Brazil, so that the same safeguards requirements and procedures took effect in both countries, and the operators of nuclear facilities began to follow the same rules of control of nuclear materials and were subjected to the same type of verification and control. On July 18, 1991, the Bilateral Agreement for the Exclusively Peaceful Use of Nuclear Energy created a binational body, the Argentina-Brazil Agency for Accounting and Control of Nuclear Materials (ABACC), to implement the so-called Common System of Accounting and Control of Nuclear Materials (SCCC). The agreement provided, permanently, a clear commitment to use exclusively for peaceful purposes all material and nuclear facilities under the jurisdiction or control of the two countries. The Quadripartite Agreement, signed in December of that year between the two countries, ABACC and the IAEA, completed the legal framework for the implementation of the comprehensive safeguards system. The “ABACC model” now represents a paradigmatic framework in the long process of economic, political, technological and cultural integration of the two countries. Argentina and Brazil were able to establish a guarantee system that is unique in the world today and that, consolidated and matured over more than twenty years, has earned the respect of the international community

  3. Comprehensive Revenue and Expense Data Collection Methodology for Teaching Health Centers: A Model for Accountable Graduate Medical Education Financing.

    Science.gov (United States)

    Regenstein, Marsha; Snyder, John E; Jewers, Mariellen Malloy; Nocella, Kiki; Mullan, Fitzhugh

    2018-04-01

    Despite considerable federal investment, graduate medical education financing is neither transparent for estimating residency training costs nor accountable for effectively producing a physician workforce that matches the nation's health care needs. The Teaching Health Center Graduate Medical Education (THCGME) program's authorization in 2010 provided an opportunity to establish a more transparent financing mechanism. We developed a standardized methodology for quantifying the necessary investment to train primary care physicians in high-need communities. The THCGME Costing Instrument was designed utilizing guidance from site visits, financial documentation, and expert review. It collects educational outlays, patient service expenses and revenues from residents' ambulatory and inpatient care, and payer mix. The instrument was fielded from April to November 2015 in 43 THCGME-funded residency programs of varying specialties and organizational structures. Of the 43 programs, 36 programs (84%) submitted THCGME Costing Instruments. The THCGME Costing Instrument collected standardized, detailed cost data on residency labor (n = 36), administration and educational outlays (n = 33), ambulatory care visits and payer mix (n = 30), patient service expenses (n = 26), and revenues generated by residents (n = 26), in contrast to Medicare cost reports, which include only costs incurred by residency programs. The THCGME Costing Instrument provides a model for calculating evidence-based costs and revenues of community-based residency programs, and it enhances accountability by offering an approach that estimates residency costs and revenues in a range of settings. The instrument may have feasibility and utility for application in other residency training settings.

  4. Signaling and Accounting Information

    OpenAIRE

    Stewart C. Myers

    1989-01-01

    This paper develops a signaling model in which accounting information improves real investment decisions. Pure cash flow reporting is shown to lead to underinvestment when managers have superior information but are acting in shareholders' interests. Accounting by prespecified, "objective" rules alleviates the underinvestment problem.

  5. Towards ecosystem accounting

    NARCIS (Netherlands)

    Duku, C.; Rathjens, H.; Zwart, S.J.; Hein, L.

    2015-01-01

    Ecosystem accounting is an emerging field that aims to provide a consistent approach to analysing environment-economy interactions. One of the specific features of ecosystem accounting is the distinction between the capacity and the flow of ecosystem services. Ecohydrological modelling to support

  6. Basis of accountability system

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    The first part of this presentation describes, in an introductory manner, the accountability design approach that is used for the Model Plant in order to meet US safeguards requirements. The general requirements for the US national system are first presented. Next, the approach taken to meet each general requirement is described, and the general concepts and principles of the accountability system are introduced. The second part of this presentation describes some basic concepts and techniques used in the model plant accounting system and relates them to US safeguards requirements. The specifics and mechanics of the model plant accounting system are presented in the third part. The purpose of this session is to enable participants to: (1) understand how the accounting system is designed to meet safeguards criteria for both IAEA and State systems; (2) understand the principles of materials accounting used to account for element and isotope quantities in the model plant; and (3) understand how the computer-based accounting system operates to meet the above objectives.
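
    The materials accounting described above rests on the standard material-balance identity: material unaccounted for (MUF) is the book inventory (beginning inventory plus receipts minus shipments) minus the measured ending inventory. A minimal sketch with invented figures:

```python
# Material-balance bookkeeping over one accounting period (numbers invented).
beginning_inventory = 1250.0  # kg U, from the previous physical inventory
receipts = 400.0              # kg U received during the period
shipments = 380.0             # kg U shipped out during the period
ending_inventory = 1265.5     # kg U, measured at the new physical inventory

# Book inventory: what should be on hand if every transfer was recorded
book_inventory = beginning_inventory + receipts - shipments

# MUF = book inventory - physical (measured) ending inventory
muf = book_inventory - ending_inventory
print(f"book inventory: {book_inventory:.1f} kg, MUF: {muf:.1f} kg")
```

In practice a nonzero MUF is judged against its measurement-uncertainty limits before any conclusion about missing material is drawn.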

  7. Eye movement control in reading: accounting for initial fixation locations and refixations within the E-Z Reader model.

    Science.gov (United States)

    Reichle, E D; Rayner, K; Pollatsek, A

    1999-10-01

    Reilly and O'Regan (1998, Vision Research, 38, 303-317) used computer simulations to evaluate how well several different word-targeting strategies could account for results which show that the distributions of fixation locations in reading are systematically related to low-level oculomotor variables, such as saccade distance and launch site [McConkie, Kerr, Reddix & Zola, (1988). Vision Research, 28, 1107-1118]. Their simulation results suggested that fixation locations are primarily determined by word length information, and that the processing of language, such as the identification of words, plays only a minimal role in deciding where to move the eyes. This claim appears to be problematic for our model of eye movement control in reading, E-Z Reader [Rayner, Reichle & Pollatsek (1998). Eye movement control in reading: an overview and model. In G. Underwood, Eye guidance in reading and scene perception (pp. 243-268). Oxford, UK: Elsevier; Reichle, Pollatsek, Fisher & Rayner (1998). Psychological Review, 105, 125-157], because it assumes that lexical access is the engine that drives the eyes forward during reading. However, we show that a newer version of E-Z Reader which still assumes that lexical access is the engine driving eye movements also predicts the locations of fixations and within-word refixations, and therefore provides a viable framework for understanding how both linguistic and oculomotor variables affect eye movements in reading.

  8. Accounts Assistant

    Indian Academy of Sciences (India)

    CHITRA

    (Not more than three months old). Annexure 1. Indian Academy of Sciences. C V Raman Avenue, Bengaluru 560 080. Application for the Post of: Accounts Assistant / Administrative Assistant Trainee / Assistant – Official Language. Implementation Policy / Temporary Copy Editor and Proof Reader / Social Media Manager. 1.

  9. A Surprisingly Simple Electrostatic Model Explains Bent Versus Linear Structures in M(+)-RG2 Species (M = Group 1 Metal, Li-Fr; RG = Rare Gas, He-Rn).

    Science.gov (United States)

    Andrejeva, Anna; Breckenridge, W H; Wright, Timothy G

    2015-11-05

    It is found that a simple electrostatic model involving competition between the attractive dispersive interaction and induced-dipole repulsion between the two RG atoms performs extremely well in rationalizing the M(+)-RG2 geometries, where M = group 1 metal and RG = rare gas. The Li(+)-RG2 and Na(+)-RG2 complexes have previously been found to exhibit quasilinear or linear minimum-energy geometries, with the Na(+)-RG2 complexes having an additional bent local minimum [A. Andrejeva, A. M. Gardner, J. B. Graneek, R. J. Plowright, W. H. Breckenridge, T. G. Wright, J. Phys. Chem. A, 2013, 117, 13578]. In the present work, the geometries for M = K-Fr are found to be bent. A simple electrostatic model explains these conclusions and is able to account almost quantitatively for the binding energy of the second RG atom, as well as the form of the angular potential, for all 36 titular species. Additionally, results of population analyses are presented together with orbital contour plots; combined with the success of the electrostatic model, the expectation that these complexes are all physically bound is confirmed.
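
    A toy version of the electrostatic competition described above can be written down directly: RG-RG dispersion (attractive) versus the repulsion between the cation-induced dipoles on the two RG atoms. All parameter values below are invented and the units arbitrary; this is a qualitative sketch, not the paper's calibrated model.

```python
import numpy as np

R = 1.0              # fixed M(+)-RG bond length
q = 1.0              # cation charge
alpha = 1.6          # RG polarizability; induced dipole mu = alpha * q / R**2
C6, C12 = 1.0, 0.05  # Lennard-Jones-style RG-RG dispersion and hard core

def energy(theta, with_dipoles=True):
    """RG-RG interaction energy at RG-M-RG bond angle theta (radians)."""
    p1 = R * np.array([np.sin(theta / 2), np.cos(theta / 2)])   # RG positions,
    p2 = R * np.array([-np.sin(theta / 2), np.cos(theta / 2)])  # M(+) at origin
    d = np.linalg.norm(p1 - p2)
    e = C12 / d**12 - C6 / d**6            # dispersion + short-range core
    if with_dipoles:
        mu = alpha * q / R**2
        m1, m2 = mu * p1 / R, mu * p2 / R  # induced dipoles point away from M(+)
        n = (p2 - p1) / d
        e += (m1 @ m2 - 3 * (m1 @ n) * (m2 @ n)) / d**3  # dipole-dipole term
    return e

thetas = np.linspace(np.radians(30), np.pi, 2000)

def best_angle(with_dipoles):
    energies = [energy(t, with_dipoles) for t in thetas]
    return np.degrees(thetas[int(np.argmin(energies))])

print(f"dispersion only:      minimum near {best_angle(False):.0f} deg")
print(f"with induced dipoles: minimum near {best_angle(True):.0f} deg")
```

With dispersion alone the RG atoms pull together into a bent geometry; switching on the induced-dipole repulsion (strong here because the invented polarizability is large) drives the minimum to linear, mirroring the paper's picture that the bent-versus-linear outcome is set by the balance of these two terms.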

  10. Rare Kaon Decays

    International Nuclear Information System (INIS)

    Kudenko, Y.

    1999-01-01

    The past few years have seen an evolution in the study of rare K decays from a concentration on explicitly Standard Model (SM)-violating decays, such as K_L^0 → μe, to one on SM-allowed but suppressed decays, such as K → πνν̄, in which short-distance interactions are dominant. There are also a number of recent experimental and theoretical studies of long-distance-dominated decays, but there is no space to cover these here, with the exception of those that are needed in the discussion of the short-distance-dominated processes

  11. Rare beauty and charm decays

    Science.gov (United States)

    Blake, T.; LHCb Collaboration

    2017-07-01

    Rare beauty and charm decays can provide powerful probes of physics beyond the Standard Model. These proceedings summarise the latest measurements of rare beauty and charm decays from the LHCb experiment at the end of Run 1 of the LHC. Whilst the majority of the measurements are consistent with SM predictions, small differences are seen in the rates and angular distributions of ℓ+ℓ− decay processes.

  12. Estimating the Societal Benefits of THA After Accounting for Work Status and Productivity: A Markov Model Approach.

    Science.gov (United States)

    Koenig, Lane; Zhang, Qian; Austin, Matthew S; Demiralp, Berna; Fehring, Thomas K; Feng, Chaoling; Mather, Richard C; Nguyen, Jennifer T; Saavoss, Asha; Springer, Bryan D; Yates, Adolph J

    2016-12-01

    Demand for total hip arthroplasty (THA) is high and expected to continue to grow during the next decade. Although much of this growth includes working-aged patients, cost-effectiveness studies on THA have not fully incorporated the productivity effects from surgery. We asked: (1) What is the expected effect of THA on patients' employment and earnings? (2) How does accounting for these effects influence the cost-effectiveness of THA relative to nonsurgical treatment? Taking a societal perspective, we used a Markov model to assess the overall cost-effectiveness of THA compared with nonsurgical treatment. We estimated direct medical costs using Medicare claims data and indirect costs (employment status and worker earnings) using regression models and nonparametric simulations. For direct costs, we estimated average spending 1 year before and after surgery. Spending estimates included physician and related services, hospital inpatient and outpatient care, and postacute care. For indirect costs, we estimated the relationship between functional status and productivity, using data from the National Health Interview Survey and regression analysis. Using regression coefficients and patient survey data, we ran a nonparametric simulation to estimate productivity (probability of working multiplied by earnings if working minus the value of missed work days) before and after THA. We used the Australian Orthopaedic Association National Joint Replacement Registry to obtain revision rates because it contained osteoarthritis-specific THA revision rates by age and gender, which were unavailable in other registry reports. Other model assumptions were extracted from a previously published cost-effectiveness analysis that included a comprehensive literature review. We incorporated all parameter estimates into Markov models to assess THA effects on quality-adjusted life years and lifetime costs. 
We conducted threshold and sensitivity analyses on direct costs, indirect costs, and revision
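
    A Markov cohort model of the kind described above can be sketched in a few lines. The states, transition probabilities, costs, and utility weights below are all invented for illustration; the published model is far richer (osteoarthritis-specific revision rates by age and sex, productivity regressions, and so on).

```python
import numpy as np

# THA arm states: [well post-THA, revision, dead]; annual transitions.
P_tha = np.array([[0.97, 0.01, 0.02],
                  [0.90, 0.05, 0.05],
                  [0.00, 0.00, 1.00]])
utilities_tha = np.array([0.85, 0.60, 0.0])    # QALY weight per state-year
costs_tha = np.array([500.0, 25_000.0, 0.0])   # annual cost per state

# Nonsurgical arm: a single symptomatic living state.
p_die_ns, u_ns, c_ns = 0.02, 0.60, 1_000.0

def run_tha(horizon=30, disc=0.03, surgery_cost=20_000.0):
    dist = np.array([1.0, 0.0, 0.0])           # cohort starts post-surgery
    cost, qaly = surgery_cost, 0.0
    for year in range(horizon):
        df = 1.0 / (1.0 + disc) ** year        # annual discount factor
        cost += df * (dist @ costs_tha)
        qaly += df * (dist @ utilities_tha)
        dist = dist @ P_tha                    # advance the cohort one year
    return cost, qaly

def run_ns(horizon=30, disc=0.03):
    alive, cost, qaly = 1.0, 0.0, 0.0
    for year in range(horizon):
        df = 1.0 / (1.0 + disc) ** year
        cost += df * alive * c_ns
        qaly += df * alive * u_ns
        alive *= 1.0 - p_die_ns
    return cost, qaly

c1, q1 = run_tha()
c0, q0 = run_ns()
icer = (c1 - c0) / (q1 - q0)                   # incremental cost per QALY
print(f"THA: cost {c1:,.0f}, QALYs {q1:.2f}; nonsurgical: cost {c0:,.0f}, QALYs {q0:.2f}")
print(f"ICER: {icer:,.0f} per QALY gained")
```

Productivity effects of the sort the paper estimates would enter as negative costs (earnings gained) in the THA arm, lowering the ICER from a societal perspective.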

  13. AMERICAN ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Mihaela Onica

    2005-01-01

    The International Accounting Standards already contribute to the generation of better and more easily comparable financial information at the international level, thus supporting a more effective allocation of investment resources in the world. Under these circumstances, a consistent application of the standards at the global level becomes necessary. The financial statements are part of the financial reporting process. A complete set of financial statements usually includes a balance sheet, a profit and loss account, a statement of changes in financial position (which can be presented in various ways, for example as a statement of treasury flows or of funds flows), as well as the notes and other explanatory material that form part of the financial statements.

  15. Dorsoventral and Proximodistal Hippocampal Processing Account for the Influences of Sleep and Context on Memory (Re)Consolidation: A Connectionist Model

    Directory of Open Access Journals (Sweden)

    Justin Lines

    2017-01-01

    The context in which learning occurs is sufficient to reconsolidate stored memories, and neuronal reactivation may be crucial to memory consolidation during sleep. The mechanisms of context-dependent and sleep-dependent memory (re)consolidation are unknown but involve the hippocampus. We simulated memory (re)consolidation using a connectionist model of the hippocampus that explicitly accounted for its dorsoventral organization and for CA1 proximodistal processing. Replicating human and rodent (re)consolidation studies yielded the following results. (1) Semantic overlap between memory items and extraneous learning was necessary to explain experimental data and depended crucially on the recurrent networks of dorsal but not ventral CA3. (2) Stimulus-free, sleep-induced internal reactivations of memory patterns produced heterogeneous recruitment of memory items and protected memories from subsequent interference. These simulations further suggested that the decrease in memory resilience when subjects were not allowed to sleep following learning was primarily due to extraneous learning. (3) Partial exposure to the learning context during simulated sleep (i.e., targeted memory reactivation) uniformly increased memory item reactivation and enhanced subsequent recall. Altogether, these results show that the dorsoventral and proximodistal organization of the hippocampus may be an important component of the neural mechanisms for context-based and sleep-based memory (re)consolidation.

  16. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    Science.gov (United States)

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Dissolution of rare-earth clusters in SiO2 by Al codoping: A microscopic model

    DEFF Research Database (Denmark)

    Lægsgaard, Jesper

    2002-01-01

    A microscopic model for the incorporation of Er2O3 units in silica codoped with Al2O3 is presented. The model assumes that Er clustering is counteracted by the formation of Er-Al complexes in which each Er ion provides valence compensation for three substitutional Al ions. These complexes...

  18. Self-consistent modeling of induced magnetic field in Titan's atmosphere accounting for the generation of Schumann resonance

    Science.gov (United States)

    Béghin, Christian

    2015-02-01

This model is developed within the framework of physical mechanisms proposed in previous studies to account for the generation and observation of an atypical Schumann Resonance (SR) during the descent of the Huygens Probe through Titan's atmosphere on 14 January 2005. While Titan remains inside the subsonic co-rotating magnetosphere of Saturn, a secondary magnetic field carrying an Extremely Low Frequency (ELF) modulation is shown to be generated through ion-acoustic instabilities of the Pedersen current sheets induced in the interface region between the impacting magnetospheric plasma and Titan's ionosphere. The strongest induced magnetic field components are focused within field-aligned arc-like structures hanging down from the current sheets, with a minimum amplitude of about 0.3 nT throughout the ramside hemisphere, from the ionopause down to the Moon's surface, including the icy crust and its interface with a conductive water ocean. The deep penetration of the modulated magnetic field into the atmosphere is thought to be made possible by the force balance between the average temporal variations of thermal and magnetic pressures within the field-aligned arcs. A first cause of diffusion of the ELF magnetic components is probably the feeding of one, or possibly several, SR eigenmodes. A second leakage source is ascribed to a system of eddy (Foucault) currents assumed to be induced through the buried water ocean. The amplitude spectrum distribution of the induced ELF magnetic field components inside the SR cavity is found to be fully consistent with measurements of the Huygens wave-field strength. Pending future in-situ exploration of Titan's lower atmosphere and surface, the Huygens data are the only experimental means available to date for constraining the proposed model.

  19. Developing a Global Model of Accounting Education and Examining IES Compliance in Australia, Japan, and Sri Lanka

    Science.gov (United States)

    Watty, Kim; Sugahara, Satoshi; Abayadeera, Nadana; Perera, Luckmika

    2013-01-01

The introduction of International Education Standards (IES) signals a clear move by the International Accounting Education Standards Board (IAESB) to ensure high quality standards in professional accounting education at a global level. This study investigated how IES are perceived and valued by member bodies and academics in three countries:…

  20. Stochastic inverse modelling of hydraulic conductivity fields taking into account independent stochastic structures: A 3D case study

    Science.gov (United States)

    Llopis-Albert, C.; Capilla, J. E.

    2010-09-01

Summary: Major factors affecting groundwater flow through fractured rocks include the geometry of each fracture, its properties and the fracture-network connectivity, together with the porosity and conductivity of the rock matrix. When modelling fractured rocks this translates into characterising the hydraulic conductivity (K) as adequately as possible, despite its high heterogeneity. This links with the main goal of this paper, which is to present an improvement of a stochastic inverse model, named the Gradual Conditioning (GC) method, to better characterise K in a fractured rock medium by considering different K stochastic structures, belonging to independent K statistical populations (SPs) of fracture families and the rock matrix, each with its own statistical properties. The new methodology applies independent deformations to each SP during the conditioning process for constraining stochastic simulations to data, so that the statistical properties of each SP tend to be preserved during the iterative optimization process. It is worth mentioning that, so far, no other stochastic inverse modelling technique with the full capabilities implemented in the GC method is able to work with a domain covered by several different stochastic structures while taking into account the independence of the different populations. The GC method is based on a procedure that gradually changes an initial K field, conditioned only to K data, to approximate the reproduction of other types of information, i.e., piezometric head and solute concentration data. The approach is applied to the Äspö Hard Rock Laboratory (HRL) in Sweden, where, since the mid-1990s, many experiments have been carried out to increase confidence in alternative radionuclide transport modelling approaches. Because the description of fracture locations and the distribution of hydrodynamic parameters within them are not accurate enough, we address the

  1. Emerging accounting trends: accounting for leases.

    Science.gov (United States)

    Valletta, Robert; Huggins, Brian

    2010-12-01

A new model for lease accounting can have a significant impact on hospitals and healthcare organizations. The new approach proposes a "right-of-use" model that involves complex estimates and a significant administrative burden. Hospitals and health systems that draw heavily on lease arrangements should start preparing for the new approach now, even though guidance and a final rule are not expected until mid-2011. This article highlights a number of considerations from the lessee's point of view.

  2. RCM: a new model accounting for the non-linear chloride binding isotherm and the non-equilibrium conditions between the free- and bound-chloride concentrations

    NARCIS (Netherlands)

    Spiesz, Przemek; Ballari, M.M.; Brouwers, Jos

    2012-01-01

    In this paper a new theoretical model for the Rapid Chloride Migration test is presented. This model accounts for the non-linear chloride binding isotherm and the non-equilibrium conditions between the free- and bound-chloride concentrations in concrete. The new system of equations is solved

  3. Molecular weight/branching distribution modeling of low-density polyethylene accounting for topological scission and combination termination in continuous stirred tank reactor

    NARCIS (Netherlands)

    Yaghini, N.; Iedema, P.D.

    2014-01-01

We present a comprehensive model to predict the molecular weight distribution (MWD) and branching distribution of low-density polyethylene (LDPE) for a free-radical polymerization system in a continuous stirred tank reactor (CSTR). The model accounts for branching, by branching moment or

  4. Response function theories that account for size distribution effects - A review. [mathematical models concerning composite propellant heterogeneity effects on combustion instability

    Science.gov (United States)

    Cohen, N. S.

    1980-01-01

    The paper presents theoretical models developed to account for the heterogeneity of composite propellants in expressing the pressure-coupled combustion response function. It is noted that the model of Lengelle and Williams (1968) furnishes a viable basis to explain the effects of heterogeneity.

  5. The material control and accounting system model development in the Radiochemical plant of Siberian Chemical Combine (SChC)

    International Nuclear Information System (INIS)

    Kozyrev, A.S.; Purygin, V.Ya.; Skuratov, V.A.; Lapotkov, A.A.

    1999-01-01

The nuclear material (NM) control and accounting computerized system is designed to automatically account for NM receipt, movement and storage at the Radiochemical Plant. The objective of this system development is to provide constant surveillance over process material movement, to improve material accountability and administrative work, to upgrade the plant's protection against possible NM theft and diversion, to rule out casual operator errors, and to improve the timeliness and reliability of information about nuclear materials. The NM control and accounting system at the Radiochemical Plant should be based on a computerized network. It must keep track of all material movements in each Material Balance Area: material receipt from other plants; local material movement within the plant; material shipment to other plants; and generation of the required documents about NM movements and their accounting
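The balance check such a system performs can be sketched in miniature. This is a generic material-balance (MUF, "material unaccounted for") calculation for one Material Balance Area, not the SChC system's actual logic; all quantities below are hypothetical:

```python
# Illustrative sketch (not the SChC system): a minimal material-balance
# check of the kind a computerized NM accounting system performs for one
# Material Balance Area (MBA) over a single balance period.

def muf(beginning_inventory, receipts, shipments, ending_inventory):
    """All quantities in kg of nuclear material for one balance period."""
    book_inventory = beginning_inventory + sum(receipts) - sum(shipments)
    return book_inventory - ending_inventory  # MUF = book - physical

# Hypothetical balance period for one MBA:
m = muf(beginning_inventory=120.0,
        receipts=[15.5, 4.5],        # material received from other plants
        shipments=[10.0],            # material shipped to other plants
        ending_inventory=129.8)      # measured physical inventory
print(round(m, 3))  # 0.2 kg unaccounted for -> flag if above threshold
```

A real system would run this per material type and per period, and compare the MUF against a measurement-uncertainty threshold before raising an alarm.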

  6. Infrastrukturel Accountability

    DEFF Research Database (Denmark)

    Ubbesen, Morten Bonde

How does one credibly account for something as diffuse as an entire nation's greenhouse gas emissions? This thesis investigates that question in an ethnographic study of how Denmark's greenhouse gas inventory is compiled, reported and audited. The study draws on concepts and understandings from Science & Technology Studies, and contributes the notion of 'infrastructural accountability' as a new way of understanding and thinking about the work by which highly specialised practices document and account for the quality of their work.

  7. Predicting Environmental Suitability for a Rare and Threatened Species (Lao Newt, Laotriton laoensis) Using Validated Species Distribution Models

    Science.gov (United States)

    Chunco, Amanda J.; Phimmachak, Somphouthone; Sivongxay, Niane; Stuart, Bryan L.

    2013-01-01

The Lao newt (Laotriton laoensis) is a recently described species currently known only from northern Laos. Little is known about the species, but it is threatened as a result of overharvesting. We integrated field survey results with climate and altitude data to predict the geographic distribution of this species using the niche modeling program Maxent, and we validated these predictions by using interviews with local residents to confirm model predictions of presence and absence. The results of the validated Maxent models were then used to characterize the environmental conditions of areas predicted suitable for L. laoensis. Finally, we overlaid the resulting model with a map of current national protected areas in Laos to determine whether or not any land predicted to be suitable for this species is coincident with a national protected area. We found that both area under the curve (AUC) values and interview data provided strong support for the predictive power of these models, and we suggest that interview data could be used more widely in species distribution niche modeling. Our results further indicated that this species is most likely geographically restricted to high altitude regions (i.e., over 1,000 m elevation) in northern Laos and that only a minute fraction of suitable habitat is currently protected. This work thus emphasizes that increased protection efforts, including listing this species as endangered and the establishment of protected areas in the region predicted to be suitable for L. laoensis, are urgently needed. PMID:23555808
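The validation step above leans on AUC as a measure of discrimination. As a hedged illustration (suitability scores invented, not from the study), AUC can be computed directly as the Mann-Whitney statistic: the probability that a randomly chosen presence site outscores a randomly chosen absence site:

```python
# Hedged illustration: AUC for a habitat-suitability model validated
# against presence/absence records (e.g., interview-confirmed sites).
# Scores below are made up for the sketch.

def auc(scores_presence, scores_absence):
    """Rank-based AUC (Mann-Whitney): P(presence score > absence score)."""
    wins = 0.0
    for p in scores_presence:
        for a in scores_absence:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5  # ties count half
    return wins / (len(scores_presence) * len(scores_absence))

presence = [0.91, 0.84, 0.77, 0.69]   # model suitability at confirmed sites
absence  = [0.55, 0.40, 0.72, 0.10]   # suitability at confirmed absences
print(auc(presence, absence))  # 0.9375; 1.0 would be perfect discrimination
```

An AUC well above 0.5 (chance) is the kind of support the abstract reports for the validated Maxent models.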

  8. Predicting environmental suitability for a rare and threatened species (Lao newt, Laotriton laoensis) using validated species distribution models.

    Directory of Open Access Journals (Sweden)

    Amanda J Chunco

Full Text Available The Lao newt (Laotriton laoensis) is a recently described species currently known only from northern Laos. Little is known about the species, but it is threatened as a result of overharvesting. We integrated field survey results with climate and altitude data to predict the geographic distribution of this species using the niche modeling program Maxent, and we validated these predictions by using interviews with local residents to confirm model predictions of presence and absence. The results of the validated Maxent models were then used to characterize the environmental conditions of areas predicted suitable for L. laoensis. Finally, we overlaid the resulting model with a map of current national protected areas in Laos to determine whether or not any land predicted to be suitable for this species is coincident with a national protected area. We found that both area under the curve (AUC) values and interview data provided strong support for the predictive power of these models, and we suggest that interview data could be used more widely in species distribution niche modeling. Our results further indicated that this species is most likely geographically restricted to high altitude regions (i.e., over 1,000 m elevation) in northern Laos and that only a minute fraction of suitable habitat is currently protected. This work thus emphasizes that increased protection efforts, including listing this species as endangered and the establishment of protected areas in the region predicted to be suitable for L. laoensis, are urgently needed.

  9. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    Science.gov (United States)

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; Davis, J. A.

    2018-01-01

The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites (the 'spillover' effect). A series of U(VI) - Na-montmorillonite batch adsorption experiments was conducted as a function of pH, with variable U(VI), Ca, and dissolved carbonate concentrations. Based on the experimental data, a new type of surface complexation model (SCM) was developed for montmorillonite that specifically accounts for the spillover effect using the edge surface speciation model by Tournassat et al. (2016a). The SCM allows for a prediction of U(VI) adsorption under varying chemical conditions with a minimum number of fitting parameters, not only for our own experimental results, but also for a number of published datasets. The model agreed well with many of these datasets without introducing a second site type or including the formation of ternary U(VI)-carbonato surface complexes. The model predictions were strongly affected by using analytical measurements of dissolved inorganic carbon (DIC) concentrations in individual sample solutions rather than assuming solution equilibration with a specific partial pressure of CO2, even when the gas phase was

  10. Stroke Lesions in a Large Upper Limb Rehabilitation Trial Cohort Rarely Match Lesions in Common Preclinical Models.

    Science.gov (United States)

    Edwardson, Matthew A; Wang, Ximing; Liu, Brent; Ding, Li; Lane, Christianne J; Park, Caron; Nelsen, Monica A; Jones, Theresa A; Wolf, Steven L; Winstein, Carolee J; Dromerick, Alexander W

    2017-06-01

Stroke patients with mild-moderate upper extremity motor impairments and minimal sensory and cognitive deficits provide a useful model to study recovery and improve rehabilitation. Laboratory-based investigators use lesioning techniques for similar goals. To determine whether stroke lesions in an upper extremity rehabilitation trial cohort match lesions from the preclinical stroke recovery models used to drive translational research, clinical neuroimages from 297 participants enrolled in the Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) study were reviewed. Images were characterized based on lesion type (ischemic or hemorrhagic), volume, vascular territory, depth (cortical gray matter, cortical white matter, subcortical), old strokes, and leukoaraiosis. Lesions were compared with those of preclinical stroke models commonly used to study upper limb recovery. Among the ischemic stroke participants, median infarct volume was 1.8 mL, with most lesions confined to subcortical structures (61%), including the anterior choroidal artery territory (30%) and the pons (23%). ICARE participants are not representative of all stroke patients, but they represent a clinically and scientifically important subgroup. Compared with lesions in general stroke populations and widely studied animal models of recovery, ICARE participants had smaller, more subcortically based strokes. Improved preclinical-clinical translational efforts may require better alignment of lesions between preclinical and human stroke recovery models.

  11. Accounting for multimorbidity in pay for performance: a modelling study using UK Quality and Outcomes Framework data.

    Science.gov (United States)

    Ruscitto, Andrea; Mercer, Stewart W; Morales, Daniel; Guthrie, Bruce

    2016-08-01

The UK Quality and Outcomes Framework (QOF) offers financial incentives to deliver high-quality care for individual diseases, but this single-disease focus takes no account of multimorbidity. The aim was to examine variation in QOF payments for two indicators incentivised in more than one disease domain, in a modelling study using cross-sectional data from 314 general practices in Scotland. Maximum payments that practices could receive under existing financial incentives were calculated for blood pressure (BP) control and influenza immunisation according to the number of coexisting clinical conditions, and then recalculated assuming a single new indicator. Payment varied by condition (£4.71-£11.08 for one BP control and £2.09-£5.78 for one influenza immunisation). Practices earned more for delivering the same action in patients with multimorbidity: in patients with 2, 3, and ≥4 conditions, mean payments were £13.95, £21.92, and £29.72 for BP control, and £7.48, £11.21, and £15.14 for influenza immunisation, respectively. Practices in deprived areas had more patients with multiple incentivised conditions. When payments were recalculated so that each incentivised action was paid for only once, all practices received less for BP control; for influenza immunisation, affluent practices received more and deprived practices received less. For patients with single conditions, existing QOF payment methods show more than twofold variation in payment for delivering the same process. Multiple payments were common in patients with multimorbidity. A payment method is required that ensures fairness of rewards while maintaining adequate funding for practices based on actual workload. © British Journal of General Practice 2016.
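The recalculation described in the abstract can be sketched in a few lines. The per-domain figures and condition names below are hypothetical placeholders (only the £4.71-£11.08 BP range comes from the abstract), not actual QOF tariffs:

```python
# Illustrative sketch of the modelling idea (figures hypothetical, not
# actual QOF tariffs): under the existing scheme one BP check in a
# multimorbid patient attracts a payment per incentivised disease
# domain; the recalculated scheme pays for the action only once.

bp_payment = {"diabetes": 11.08, "stroke": 4.71, "ckd": 7.50}  # GBP per domain

def existing_payment(conditions):
    """Sum of payments across every incentivised domain the patient has."""
    return sum(bp_payment[c] for c in conditions if c in bp_payment)

def recalculated_payment(conditions):
    """Pay the single highest domain tariff once, regardless of multimorbidity."""
    eligible = [bp_payment[c] for c in conditions if c in bp_payment]
    return max(eligible) if eligible else 0.0

patient = ["diabetes", "stroke", "ckd"]              # one multimorbid patient
print(round(existing_payment(patient), 2))     # 23.29: same check paid 3 times
print(round(recalculated_payment(patient), 2)) # 11.08: paid once
```

The gap between the two figures for multimorbid patients is exactly the variation the study quantifies, and it falls hardest on practices in deprived areas, which see more multimorbidity.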

  12. Accounting for the Uncertainty Related to Building Occupants with Regards to Visual Comfort: A Literature Survey on Drivers and Models

    Directory of Open Access Journals (Sweden)

    Valentina Fabi

    2016-02-01

Full Text Available The interactions between building occupants and control systems have a strong influence on energy consumption and on indoor environmental quality. In the perspective of a future of "nearly-zero" energy buildings, it is crucial to analyse energy-related interactions in depth in order to predict realistic energy use during the design stage. Since the reaction to thermal, acoustic, or visual stimuli is not the same for every human being, monitoring behaviour inside buildings is an essential step in ascertaining differences in energy consumption related to different interactions. Reliable information concerning occupants' behaviour in a building could contribute to a better evaluation of building energy performance and design robustness, as well as supporting occupant education towards energy awareness. The present literature survey enlarges our understanding of which environmental conditions influence occupants' manual control of systems in offices and, by consequence, the energy consumption. The purpose of this study was to investigate possible drivers of light-switching in order to model occupant behaviour in office buildings. The probability of switching lighting systems on or off was related to occupancy and differentiated for arrival, intermediate, and departure periods. The switching probability has been reported to be higher at arrival and departure times, in relation to contextual variables. In analyses of switch-on actions, users were often clustered into those who take the daylight level into account and switch on lights only when necessary, and those who disregard natural lighting entirely. This underlines how individual differences underpin the definition of different user types.
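A common way to formalise the arrival-time switch-on driver discussed in this literature is a Hunt-style logistic curve in minimum work-plane daylight illuminance. The coefficients below are assumed for illustration, not values fitted in any of the surveyed studies:

```python
import math

# Hedged sketch: a Hunt-style logistic model of the probability that an
# occupant switches the lights on at arrival, as a function of minimum
# work-plane daylight illuminance (lux). Coefficients a and b are
# illustrative assumptions, not fitted values.

def p_switch_on(illuminance_lux, a=2.0, b=-0.01):
    """Probability of a switch-on action at arrival."""
    return 1.0 / (1.0 + math.exp(-(a + b * illuminance_lux)))

for lux in (50, 200, 500):
    print(lux, round(p_switch_on(lux), 3))
# dark desks -> high switch-on probability; bright desks -> low
```

In a stochastic occupant-behaviour simulation, this probability would be evaluated at each arrival event and compared against a random draw, with separate (differently shaped) models for intermediate and departure periods.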

  13. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    NARCIS (Netherlands)

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-01-01

Background: Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive

  14. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    NARCIS (Netherlands)

    Degeling, Koen; Ijzerman, Maarten J.; Koopman, Miriam; Koffijberg, Hendrik

    2017-01-01

    Background: Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive

  15. Model-based scenario planning to develop climate change adaptation strategies for rare plant populations in grassland reserves

    Science.gov (United States)

    Laura Phillips-Mao; Susan M. Galatowitsch; Stephanie A. Snyder; Robert G. Haight

    2016-01-01

    Incorporating climate change into conservation decision-making at site and population scales is challenging due to uncertainties associated with localized climate change impacts and population responses to multiple interacting impacts and adaptation strategies. We explore the use of spatially explicit population models to facilitate scenario analysis, a conservation...

  16. Design Accountability

    DEFF Research Database (Denmark)

    Koskinen, Ilpo; Krogh, Peter

    2015-01-01

When design research builds on design practice, it may contribute to both theory and practice of design in ways richer than research that treats design as a topic. Such research, however, faces several tensions that it has to negotiate successfully in order not to lose its character as research. This paper looks at constructive design research, which takes the entanglement of theory and practice as its hallmark, and uses it as a test case in exploring how design researchers can work with theory, methodology, and practice without losing their identity as design researchers. The crux of practice-based design research is that where classical research is interested in singling out a particular aspect and exploring it in depth, design practice is characterized by balancing numerous concerns in a heterogeneous and occasionally paradoxical product. It is on this basis the notion of design accountability...

  17. Bayesian analysis of rare events

    Science.gov (United States)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
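The rejection-sampling reinterpretation at the heart of BUS can be sketched with a toy one-parameter Gaussian model. This is an illustration of the principle only, using plain Monte Carlo where the paper substitutes FORM, importance sampling, or Subset Simulation for efficiency:

```python
import math
import random

# Minimal sketch of the rejection-sampling view behind BUS (toy example,
# not the authors' implementation): posterior samples of theta are prior
# draws accepted with probability L(theta)/c, where c >= max L. Efficient
# variants replace this plain Monte Carlo step with rare-event estimators
# such as FORM, tailored importance sampling, or Subset Simulation.

random.seed(0)

def likelihood(theta, obs=1.0, sigma=0.5):
    """Gaussian likelihood of one observation; maximum value 1 at theta=obs."""
    return math.exp(-0.5 * ((obs - theta) / sigma) ** 2)

c = 1.0  # upper bound on the likelihood
posterior = []
for _ in range(20000):
    theta = random.gauss(0.0, 1.0)      # draw from the N(0,1) prior
    if random.random() < likelihood(theta) / c:
        posterior.append(theta)         # accepted -> posterior sample

# Conjugate analysis gives posterior mean 0.8 for this prior/likelihood pair
print(round(sum(posterior) / len(posterior), 2))  # close to 0.8
```

The acceptance event plays the role of the rare event: when data are informative the acceptance probability is tiny, which is precisely why rare-event machinery pays off.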

  18. Chemistry of rare elements

    International Nuclear Information System (INIS)

    Tananaev, I.V.

    1988-01-01

The main directions of research in the field of rare-element chemistry (mainly the rare earths, as well as In, U, Th, V, Nb, Ta, Mo and W) carried out over the past decade in the laboratory of chemistry of rare elements and inorganic polymers of the Institute of General and Inorganic Chemistry (IGIC) of the USSR Academy of Sciences are reviewed. Research on the synthesis, structure and properties of rare earth phosphites, phosphates, arsenates, selenates, selenites and borates, as well as rare earth compounds with mixed oxide anions, is of great importance. The fields of practical application of these compounds, in view of their unique properties, are noted

  19. Rare earth sulfates

    International Nuclear Information System (INIS)

    Komissarova, L.N.; Shatskij, V.M.; Pokrovskij, A.N.; Chizhov, S.M.; Bal'kina, T.I.; Suponitskij, Yu.L.

    1986-01-01

Results of experimental work on the synthesis conditions, structure and physico-chemical properties of rare earth, scandium and yttrium sulfates are generalized. Phase diagrams of solubility and fusibility, thermodynamic and crystallochemical characteristics, and the thermal stability of hydrates and anhydrous sulfates of the rare earths are considered, including normal, double (with cations of alkali and alkaline-earth metals), ternary and anion-mixed sulfates of the rare earths, as well as their adducts. The state of rare earth, scandium and yttrium ions in aqueous sulfuric acid solutions is discussed. Data on the uses of rare earth sulfates are given

  20. Democratic Model of Public Policy Accountability. Case Study on Implementation of Street Vendors Empowerment Policy in Makassar City

    Directory of Open Access Journals (Sweden)

    Rulinawaty Kasmadsi

    2015-08-01

Full Text Available Policy accountability is a form of manifestation of public officials' responsibility to the people. One form of policy accountability discussed here is street vendor policy accountability, because street vendors are a group of citizens who carry out economic activities in public spaces. Despite this policy, however, the number of street vendors in Makassar City has increased from year to year. This study therefore seeks to uncover and explain democratic policy accountability through the street vendors' responses and expectations regarding the implementation of the street vendor empowerment policy in Makassar City, and through the stakeholders' responses and expectations regarding that implementation. To achieve these objectives, the study uses democracy theory, which focuses on togetherness in discussing solutions to the various problems of street vendors and in policy implementation. The study used a qualitative design and a case study strategy. Data collection techniques were observation, interviews, and documentation. Data were analyzed by describing the case and its settings. The results point out that the interests and needs of the street vendors are not met through the empowerment policy. This is caused by the absence of an accountability forum as a place of togetherness for all street vendor empowerment stakeholders. The street vendor empowerment policy in Makassar City is designed on a top-down approach, so vendors are treated as objects who must accept all government programs aimed at them.

  1. Determination of a cohesive law for delamination modelling - Accounting for variation in crack opening and stress state across the test specimen width

    DEFF Research Database (Denmark)

    Joki, R. K.; Grytten, F.; Hayman, Brian

    2016-01-01

The cohesive law for Mode I delamination in glass fibre Non-Crimped Fabric reinforced vinylester is determined for use in finite element models. The cohesive law is derived from a delamination test based on DCB specimens loaded with pure bending moments, taking into account the presence of large-scale bridging. The derivation proceeds in three steps: 1) Obtain the bridging law by differentiating the fracture resistance with respect to the opening displacement at the initial location of the crack tip, measured at the specimen edge. 2) Extend the bridging law to a cohesive law by accounting for the crack tip fracture energy. 3) Fine-tune the cohesive law through an iterative modelling approach so that the changing state of stress and deformation across the width of the test specimen is taken into account. The changing state of stress and deformation across the specimen width is shown to be significant for small openings (small fracture process zone size). This will also be important for the initial part...
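Step 1, obtaining the bridging law as the derivative of the fracture resistance with respect to the crack-tip opening, can be sketched numerically. The (delta, J_R) data below are invented for illustration, not measurements from the study:

```python
# Sketch of step 1 of the derivation: the bridging stress is obtained by
# numerically differentiating the measured fracture resistance J_R with
# respect to the crack-tip opening displacement delta at the specimen
# edge, i.e. sigma(delta) = dJ_R/d(delta). Data below are hypothetical.

delta = [0.0, 0.05, 0.10, 0.20, 0.40, 0.80]   # opening displacement [mm]
J_R   = [0.0, 0.30, 0.50, 0.75, 1.00, 1.20]   # fracture resistance [kJ/m^2]

def bridging_stress(delta, J_R):
    """Central (one-sided at the ends) finite differences of J_R(delta)."""
    sigma = []
    for i in range(len(delta)):
        lo = max(i - 1, 0)
        hi = min(i + 1, len(delta) - 1)
        sigma.append((J_R[hi] - J_R[lo]) / (delta[hi] - delta[lo]))
    return sigma  # kJ/m^2 per mm = MPa

print([round(s, 2) for s in bridging_stress(delta, J_R)])
# [6.0, 5.0, 3.0, 1.67, 0.75, 0.5] -> stress decays as the crack opens
```

Steps 2 and 3 would then add the crack-tip fracture energy and iterate the resulting cohesive law inside a finite element model of the DCB test until the width-wise stress variation is reproduced.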

  2. Modelling representative and coherent Danish farm types based on farm accountancy data for use in enviromental assessments

    DEFF Research Database (Denmark)

    Dalgaard, Randi; Halberg, Niels; Kristensen, Ib S.

    2006-01-01

is established in order to report Danish agro-economic data to the 'Farm Accountancy Data Network' (FADN) and to produce 'The annual Danish account statistics for agriculture'. The farm accounts are selected and weighted to be representative of the Danish agricultural sector, and similar samples of farm..., homegrown feed, manure production, fertilizer use and crop production. The set of farm types was scaled up to national level, thus representing the whole Danish agricultural sector, and the resulting production, resource use and land use were checked against the national statistics. Nutrient balance... The methane emission was higher from dairy farm types than from all other farm types. In general, the conventional dairy farms emitted more nitrate, ammonia and nitrous oxide than organic dairy farms.

  3. Severe Intellectual Disability and Enhanced Gamma-Aminobutyric Acidergic Synaptogenesis in a Novel Model of Rare RASopathies.

    Science.gov (United States)

    Papale, Alessandro; d'Isa, Raffaele; Menna, Elisabetta; Cerovic, Milica; Solari, Nicola; Hardingham, Neil; Cambiaghi, Marco; Cursi, Marco; Barbacid, Mariano; Leocani, Letizia; Fasano, Stefania; Matteoli, Michela; Brambilla, Riccardo

    2017-02-01

Dysregulation of Ras-extracellular signal-related kinase (ERK) signaling gives rise to RASopathies, a class of neurodevelopmental syndromes associated with intellectual disability. Recently, much attention has been directed at models bearing mild forms of RASopathies whose behavioral impairments can be attenuated by inhibiting the Ras-ERK cascade in the adult. Little is known about the brain mechanisms in severe forms of these disorders. We performed an extensive characterization of a new brain-specific model of severe forms of RASopathies, the KRAS 12V mutant mouse. The KRAS 12V mutation results in a severe form of intellectual disability, which parallels mental deficits found in patients bearing mutations in this gene. KRAS 12V mice show a severe impairment of both short- and long-term memory in a number of behavioral tasks. At the cellular level, an upregulation of ERK signaling during early phases of postnatal development, but not in the adult state, results in a selective enhancement of synaptogenesis in gamma-aminobutyric acidergic interneurons. The enhancement of ERK activity in interneurons at this critical postnatal time leads to a permanent increase in the inhibitory tone throughout the brain, manifesting as reduced synaptic transmission and long-term plasticity in the hippocampus. In the adult, the behavioral and electrophysiological phenotypes in KRAS 12V mice can be temporarily reverted by inhibiting gamma-aminobutyric acid signaling but not by a Ras-ERK blockade. Importantly, the synaptogenesis phenotype can be rescued by treatment at the developmental stage with Ras-ERK inhibitors. These data demonstrate a novel mechanism underlying inhibitory synaptogenesis and provide new insights into the mental dysfunctions associated with RASopathies. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  4. [Rare diseases from a life insurance perspective].

    Science.gov (United States)

    Senn, A; Filzmaier, K

    2015-12-01

    A rare disease is defined as one that affects at most 5 in 10,000 people. Roughly 7,000 different rare diseases are known today, which is why it is said that "rare diseases are rare, but people affected by them are common". For Germany, this amounts to about 4 million people affected by a rare disease. Diagnosis, therapeutic options and prognosis have substantially improved for some rare diseases. Besides general medical advances, especially in the area of genetics, this is also due to networking and information sharing by so-called Centres of Competence on a national and international scale, which results in better medical care for the corresponding group of patients. Against this backdrop, the number of life insurance applicants suffering from a complex or rare disease has risen steadily in recent years. Because data on the long-term prognosis of many rare diseases are scarce, biomathematical, medical and actuarial expertise on the part of the insurer is necessary in order to adequately assess the risk of mortality and morbidity. Furthermore, rare diseases receive considerable attention from both politics and society. Evidence-based medical assessment by insurers is therefore especially important in this group of applicants, with regard to legal compliance and reputational risk.

  5. Determination of a cohesive law for delamination modelling - Accounting for variation in crack opening and stress state across the test specimen width

    DEFF Research Database (Denmark)

    Joki, R. K.; Grytten, F.; Hayman, Brian

    2016-01-01

    The cohesive law for Mode I delamination in glass fibre Non-Crimped Fabric reinforced vinylester is determined for use in finite element models. The cohesive law is derived from a delamination test based on DCB specimens loaded with pure bending moments taking into account the presence of large...

  6. Can the Five Factor Model of Personality Account for the Variability of Autism Symptom Expression? Multivariate Approaches to Behavioral Phenotyping in Adult Autism Spectrum Disorder

    Science.gov (United States)

    Schwartzman, Benjamin C.; Wood, Jeffrey J.; Kapp, Steven K.

    2016-01-01

    The present study aimed to: determine the extent to which the five factor model of personality (FFM) accounts for variability in autism spectrum disorder (ASD) symptomatology in adults, examine differences in average FFM personality traits of adults with and without ASD and identify distinct behavioral phenotypes within ASD. Adults (N = 828;…

  7. Rare Earth Metals: Resourcefulness and Recovery

    Science.gov (United States)

    Wang, Shijie

    2013-10-01

    When we appreciate the digital revolution carried over from the twentieth century with mobile communication and the Internet, and when we enjoy our high-tech lifestyle filled with iDevices, hybrid cars, wind turbines, and solar cells in this new century, we should also appreciate that all of these advanced products depend on rare earth metals to function. Although annual worldwide demand amounts to only 136,000 tons (Cho, Rare Earth Metals, Will We Have Enough?),1 rare earth metals are becoming hot commodities on international markets, due not only to their increasing uses, including in most critical military hardware, but also to supply concerns, as China accounts for 95% of global rare earth metal production. Hence, the 2013 technical calendar topic, planned by the TMS/Hydrometallurgy and Electrometallurgy Committee, is particularly relevant, with four articles (including this commentary) contributed to the JOM October Issue discussing rare earth metals' resourcefulness and recovery.

  8. Accounting for capacity and flow of ecosystem services: A conceptual model and a case study for Telemark, Norway

    NARCIS (Netherlands)

    Schroter, M.; Barton, D.N.; Remme, R.P.; Hein, L.G.

    2014-01-01

    Understanding the flow of ecosystem services and the capacity of ecosystems to generate these services is an essential element for understanding the sustainability of ecosystem use as well as developing ecosystem accounts. We conduct spatially explicit analyses of nine ecosystem services in Telemark

  9. Whole of Government Accounts

    DEFF Research Database (Denmark)

    Pontoppidan, Caroline Aggestam; Chow, Danny; Day, Ronald

    In our comparative study, we surveyed an emerging literature on the use of consolidation in government accounting and develop a research agenda. We find heterogeneous approaches to the development of consolidation models across the five countries (Australia, New Zealand, UK, Canada and Sweden...... of financial reporting (GAAP)-based reforms when compared with budget-centric systems of accounting, which dominate government decision-making. At a trans-national level, there is a need to examine the embedded or implicit contests or ‘trials of strength’ between nations and/or institutions jockeying...... for influence. We highlight three arenas where such contests are being played out: 1. Statistical versus GAAP notions of accounting value, which features in all accounting debates over the merits and costs of ex-ante versus ex-post notions of value (i.e., the relevance versus reliability debate); 2. Private...

  10. An Entropy Testing Model Research on the Quality of Internal Control and Accounting Conservatism: Empirical Evidence from the Financial Companies of China from 2007 to 2011

    Directory of Open Access Journals (Sweden)

    Zongrun Wang

    2014-01-01

    We set information disclosure of internal control as a starting point to explore the relationship between the quality of internal control and accounting conservatism, and then adopt the entropy testing model to calculate an index of internal control quality using sample data from Chinese listed companies in the financial industry from 2007 to 2011. Regression results show that earnings conservatism exists: the stronger the internal control, the higher the accounting conservatism. Companies that have enhanced their internal control are more conservative, and these results do not differ from those for other industries.
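
A common construction for such a quality index is the entropy weight method: indicators that vary more across companies carry more information and receive larger weights. The sketch below is a generic version of that method, assuming a matrix of positive indicator values; it is not necessarily the exact model used in the paper.

```python
import math

def entropy_weights(matrix):
    """Entropy weighting of columns (indicators) over rows (companies).
    Assumes all indicator values are positive."""
    n, m = len(matrix), len(matrix[0])
    raw = []
    for j in range(m):
        col = [row[j] for row in matrix]
        p = [x / sum(col) for x in col]
        # Shannon entropy, normalised so a uniform column has e = 1.
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        raw.append(1.0 - e)                # more variation -> more weight
    total = sum(raw)
    return [w / total for w in raw]
```

An indicator that is identical for every company receives weight zero, since it cannot discriminate between firms.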

  11. A physically meaningful equivalent circuit network model of a lithium-ion battery accounting for local electrochemical and thermal behaviour, variable double layer capacitance and degradation

    Science.gov (United States)

    von Srbik, Marie-Therese; Marinescu, Monica; Martinez-Botas, Ricardo F.; Offer, Gregory J.

    2016-09-01

    A novel electrical circuit analogy is proposed for modelling electrochemical systems under realistic automotive operation conditions. The model is developed for a lithium ion battery and is based on a pseudo 2D electrochemical model. Although cast in the framework familiar to application engineers, the model is essentially an electrochemical battery model: all variables have a direct physical interpretation and there is direct access to all states of the cell via the model variables (concentrations, potentials) for monitoring and control systems design. This is the first Equivalent Circuit Network-type model that tracks directly the evolution of species inside the cell. It accounts for complex electrochemical phenomena that are usually omitted in online battery performance predictors such as variable double layer capacitance, the full current-overpotential relation and overpotentials due to mass transport limitations. The coupled electrochemical and thermal model accounts for capacity fade via a loss in active species and for power fade via an increase in resistive solid electrolyte passivation layers at both electrodes. The model's capability to simulate cell behaviour under dynamic events is validated against test procedures, such as standard battery testing load cycles for current rates up to 20 C, as well as realistic automotive drive cycle loads.
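
For contrast with the physics-based circuit described above, the behaviour that conventional equivalent-circuit models capture can be sketched with a first-order Thevenin circuit (a series resistance plus one RC pair). This is a deliberately simple stand-in, not the authors' model, and all parameter values are illustrative.

```python
def simulate_thevenin(current, dt, ocv=3.7, r0=0.05, r1=0.02, c1=2000.0):
    """Terminal voltage of a first-order Thevenin equivalent circuit.
    current[i] > 0 means discharge; ocv is held constant for brevity."""
    v1 = 0.0                                   # voltage across the RC pair
    terminal = []
    for i in current:
        v1 += dt * (i / c1 - v1 / (r1 * c1))   # forward-Euler RC update
        terminal.append(ocv - i * r0 - v1)
    return terminal
```

A discharge pulse produces an instantaneous ohmic drop across r0 followed by slower RC polarisation; such lumped elements have no direct physical interpretation, which is precisely the limitation the authors' model addresses.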

  12. The Drift Diffusion Model can account for the accuracy and reaction time of value-based choices under high and low time pressure

    Directory of Open Access Journals (Sweden)

    Milica Milosavljevic

    2010-10-01

    An important open problem is how values are compared to make simple choices. A natural hypothesis is that the brain carries out the computations associated with the value comparisons in a manner consistent with the Drift Diffusion Model (DDM), since this model has been able to account for a large amount of data in other domains. We investigated the ability of four different versions of the DDM to explain the data in a real binary food choice task under conditions of high and low time pressure. We found that a seven-parameter version of the DDM can account for the choice and reaction time data with high accuracy, in both the high and low time pressure conditions. The changes associated with the introduction of time pressure could be traced to changes in two key model parameters: the barrier height and the noise in the slope of the drift process.
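
The role of the two key parameters, barrier height and drift noise, can be illustrated with a toy simulation of the basic DDM. This is a minimal sketch, not the authors' seven-parameter model, and all parameter values are illustrative.

```python
import random

def simulate_ddm(drift, barrier, noise, rng, dt=0.001, max_t=5.0):
    """One drift-diffusion trial: evidence x accumulates from 0 until it
    hits +barrier (correct choice) or -barrier (error), or time runs out
    (timeouts are scored as errors here, for simplicity)."""
    x, t = 0.0, 0.0
    while abs(x) < barrier and t < max_t:
        x += drift * dt + noise * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= barrier else -1), t
```

Lowering the barrier, as a decision-maker might under time pressure, shortens simulated reaction times at the cost of accuracy, mirroring the parameter changes the authors report.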

  13. What natural capital disclosure for integrated reporting? Designing & modelling an Integrated Financial - Natural Capital Accounting and Reporting Framework.

    OpenAIRE

    Houdet, Joel; Burritt, Roger; N. Farrell, Katharine; Martin-Ortega, Julia; Ramin, Kurt; Spurgeon, James; Atkins, Jill; Steuerman, David; Jones, Michael; Maleganos, John; Ding, Helen; Ochieng, Cosmas; Naicker, Kiruben; Chikozho, Claudious; Finisdore, John

    2014-01-01

    Business and government leaders from around the world are increasingly sounding the alarm about the need for effective management of business dependencies and impacts on ecosystems. As a consequence, financial institutions have recently made a formal commitment to work towards integrating natural capital considerations into their decision-making processes, including helping improve the accounting and disclosure practices of reporting organisations. Though various frameworks and standards have...

  14. Implementation of a pilot accountable care organization payment model and the use of discretionary and nondiscretionary cardiovascular care.

    Science.gov (United States)

    Colla, Carrie H; Goodney, Philip P; Lewis, Valerie A; Nallamothu, Brahmajee K; Gottlieb, Daniel J; Meara, Ellen

    2014-11-25

    Accountable care organizations (ACOs) seek to reduce growth in healthcare spending while ensuring high-quality care. We hypothesized that accountable care organization implementation would selectively limit the use of discretionary cardiovascular care (defined as care occurring in the absence of indications such as myocardial infarction or stroke), while maintaining high-quality care, such as nondiscretionary cardiovascular imaging and procedures. The intervention group was composed of fee-for-service Medicare patients (n=819 779) from 10 groups participating in a Medicare pilot accountable care organization, the Physician Group Practice Demonstration (PGPD). Matched controls were patients (n=934 621) from nonparticipating groups in the same regions. We compared use of cardiovascular care before (2002-2004) and after (2005-2009) PGPD implementation, studying both discretionary and nondiscretionary carotid and coronary imaging and procedures. Our main outcome measure was the difference in the proportion of patients treated with imaging and procedures among patients of PGPD practices compared with patients in control practices, before and after PGPD implementation (difference-in-difference). For discretionary imaging, the difference-in-difference between PGPD practices and controls was not statistically significant for discretionary carotid imaging (0.17%; 95% confidence interval, -0.51% to 0.85%; P=0.595) or discretionary coronary imaging (-0.19%; 95% confidence interval, -0.73% to 0.35%; P=0.468). Similarly, the difference-in-difference was also minimal for discretionary carotid revascularization (0.003%; 95% confidence interval, -0.008% to 0.002%; P=0.705) and coronary revascularization (-0.02%; 95% confidence interval, -0.11% to 0.07%; P=0.06). The difference-in-difference associated with PGPD implementation was also essentially 0 for nondiscretionary cardiovascular imaging or procedures. Implementation of a pilot accountable care organization did not limit the
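
The difference-in-difference estimator used throughout this study reduces to simple arithmetic on group-level proportions. The sketch below uses hypothetical numbers, not the study's data.

```python
def difference_in_difference(treat_pre, treat_post, control_pre, control_post):
    """Change in the treated group minus change in the control group."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical proportions of patients imaged before/after implementation:
# the treated group rises 2.0 points, controls rise 1.8 points.
effect = difference_in_difference(0.10, 0.12, 0.10, 0.118)
```

Subtracting the control group's change nets out secular trends that affect both groups, which is why the matched controls from the same regions matter.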

  15. Evaluation of Scat Deposition Transects versus Radio Telemetry for Developing a Species Distribution Model for a Rare Desert Carnivore, the Kit Fox.

    Directory of Open Access Journals (Sweden)

    Steven J Dempsey

    Development and evaluation of noninvasive methods for monitoring species distribution and abundance is a growing area of ecological research. While noninvasive methods have the advantage of reduced risk of the negative factors associated with capture, comparisons to methods using more traditional invasive sampling are lacking. Historically, kit foxes (Vulpes macrotis) occupied the desert and semi-arid regions of southwestern North America. Once the most abundant carnivore in the Great Basin Desert of Utah, the species is now considered rare. In recent decades, attempts have been made to model the environmental variables influencing kit fox distribution. Using noninvasive scat deposition surveys for determination of kit fox presence, we modeled resource selection functions to predict kit fox distribution using three popular techniques (Maxent, fixed-effects, and mixed-effects generalized linear models) and compared these with similar models developed from invasive sampling (telemetry locations from radio-collared foxes). Resource selection functions were developed using a combination of landscape variables including elevation, slope, aspect, vegetation height, and soil type. All models were tested against subsequent scat collections as a method of model validation. We demonstrate the importance of comparing multiple model types for development of resource selection functions used to predict a species distribution, and of evaluating the importance of environmental variables on species distribution. All models we examined showed a large effect of elevation on kit fox presence, followed by slope and vegetation height. However, the invasive sampling method (i.e., radio-telemetry) appeared to be better at determining resource selection, and therefore may be more robust in predicting kit fox distribution. In contrast, the distribution maps created from the noninvasive sampling (i.e., scat transects) were significantly different than the invasive method, thus scat

  16. Nuclear material accounting handbook

    International Nuclear Information System (INIS)

    2008-01-01

    The handbook documents existing best practices and methods used to account for nuclear material and to prepare the required nuclear material accounting reports for submission to the IAEA. It provides a description of the processes and steps necessary for the establishment, implementation and maintenance of nuclear material accounting and control at the material balance area, facility and State levels, and defines the relevant terms. This handbook serves the needs of State personnel at various levels, including State authorities, facility operators and participants in training programmes. It can assist in developing and maintaining accounting systems which will support a State's ability to account for its nuclear material such that the IAEA can verify State declarations, and at the same time support the State's ability to ensure its nuclear security. In addition, the handbook is useful for IAEA staff, who are closely involved with nuclear material accounting. The handbook includes the steps and procedures a State needs to set up and maintain to provide assurance that it can account for its nuclear material and submit the prescribed nuclear material accounting reports defined in Section 1 and described in Sections 3 and 4 in terms of the relevant agreement(s), thereby enabling the IAEA to discharge its verification function as defined in Section 1 and described in Sections 3 and 4. The contents of the handbook are based on the model safeguards agreement and, where applicable, there will also be reference to the model additional protocol. The handbook consists of five sections. In Section 1, definitions or descriptions of terms used are provided in relation to where the IAEA applies safeguards or, for that matter, accounting for and control of nuclear material in a State. The IAEA's approach in applying safeguards in a State is also defined and briefly described, with special emphasis on verification. In Section 2, the obligations of the State

  17. Study about chemical and radiological toxicity of rare earths

    International Nuclear Information System (INIS)

    Goncalez, O.L.

    1987-02-01

    The maximum permissible concentration in workplace air for an admixture of rare earths is calculated to be 1.47 mg/m³ of air. This value takes into account the biological mean-life of those chemical elements in the human body and acute toxicological data. A simplified mathematical model is presented that describes the body content of this product as a function of time, for chronic intoxication by inhalation of airborne particulates. From the radiological point of view, the limit calculated for the air concentration is about 100 mg/m³, showing that the chemical toxicity of these products is predominant. (Author) [pt
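
The body-content-versus-time description for chronic inhalation is consistent with a one-compartment retention model, in which the burden builds toward an equilibrium set by the intake rate and the biological elimination constant. The sketch below is a generic model of this kind; the parameter values are hypothetical, not the study's.

```python
import math

def body_burden(intake_rate, biological_half_life, t):
    """Body content under constant chronic intake, one compartment:
    B(t) = (I / lam) * (1 - exp(-lam * t)), with lam = ln 2 / half-life."""
    lam = math.log(2.0) / biological_half_life
    return (intake_rate / lam) * (1.0 - math.exp(-lam * t))
```

At long times the burden saturates at intake_rate / lam, which is why the permissible air concentration depends on the biological mean-life of the elements.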

  18. Physiologically motivated time-delay model to account for mechanisms underlying enterohepatic circulation of piroxicam in human beings.

    Science.gov (United States)

    Tvrdonova, Martina; Dedik, Ladislav; Mircioiu, Constantin; Miklovicova, Daniela; Durisova, Maria

    2009-01-01

    The study was conducted to formulate a physiologically motivated time-delay (PM TD) mathematical model for human beings, which incorporates disintegration of a drug formulation, dissolution, discontinuous gastric emptying and enterohepatic circulation (EHC) of a drug. Piroxicam, administered to 24 European, healthy individuals in 20 mg capsules Feldene Pfizer, was used as a model drug. Plasma was analysed for piroxicam by a validated high-performance liquid chromatography method. The PM TD mathematical model was developed using measured plasma piroxicam concentration-time profiles of the individuals and tools of a computationally efficient mathematical analysis and modeling, based on the theory of linear dynamic systems. The constructed model was capable of (i) quantifying different fractions of the piroxicam dose sequentially disposable for absorption and (ii) estimating time delays between time when the piroxicam dose reaches stomach and time when individual of fractions of the piroxicam dose is disposable for absorption. The model verification was performed through a formal proof, based on comparisons of observed and model-predicted plasma piroxicam concentration-time profiles. The model verification showed an adequate model performance and agreement between the compared profiles. Accordingly, it confirmed that the developed model was an appropriate representative of the piroxicam fate in the individuals enrolled. The presented model provides valuable information on factors that control dynamic mechanisms of EHC, that is, information unobtainable with the models proposed for the EHC analysis previously.

  19. Account of nonlocal ionization by fast electrons in the fluid models of a direct current glow discharge

    Energy Technology Data Exchange (ETDEWEB)

    Rafatov, I. [Physics Department, Middle East Technical University, Ankara (Turkey); Bogdanov, E. A.; Kudryavtsev, A. A. [Saint Petersburg State University, St. Petersburg (Russian Federation)

    2012-09-15

    We developed and tested a simple hybrid model for a glow discharge, which incorporates nonlocal ionization by fast electrons into the 'simple' and 'extended' fluid frameworks. Calculations have been performed for an argon gas. Comparison with the experimental data as well as with the hybrid (particle) and fluid modelling results demonstrated good applicability of the proposed model.

  20. A complete soil hydraulic model accounting for capillary and adsorptive water retention, capillary and film conductivity, and hysteresis

    NARCIS (Netherlands)

    Sakai, Masaru; Van Genuchten, Martinus Th|info:eu-repo/dai/nl/31481518X; Alazba, A. A.; Setiawan, Budi Indra; Minasny, Budiman

    2015-01-01

    A soil hydraulic model that considers capillary hysteretic and adsorptive water retention as well as capillary and film conductivity covering the complete soil moisture range is presented. The model was obtained by incorporating the capillary hysteresis model of Parker and Lenhard into the hydraulic

  1. Accounting for the uncertainty related to building occupants with regards to visual comfort: A literature survey on drivers and models

    DEFF Research Database (Denmark)

    Fabi, Valentina; Andersen, Rune Korsholm; Corgnati, Stefano

    2016-01-01

    was related to the occupancy and differentiated for arrival, intermediate, and departure periods. The switching probability has been reported to be higher during the entering or the leaving time in relation to contextual variables. In the analysis of switch-on actions, users were often clustered between those...... who take daylight level into account and switch on lights only if necessary and people who totally disregard the natural lighting. This underlines the importance of how individuality is at the base of the definition of the different types of users....

  2. Modeling the geochemical distribution of rare earth elements (REEs using multivariate statistics in the eastern part of Marvast placer, the Yazd province

    Directory of Open Access Journals (Sweden)

    Amin Hossein Morshedy

    2017-07-01

    Introduction Nowadays, exploration of rare earth element (REE) resources is considered one of the strategic priorities, with a special position in advanced and intelligent industries (Castor and Hedrick, 2006). Significant resources of REEs are found in a wide range of geological settings, including primary deposits associated with igneous and hydrothermal processes (e.g. carbonatites, (per)alkaline igneous rocks, iron-oxide breccia complexes, skarns, fluorapatite veins and pegmatites) and secondary deposits concentrated by sedimentary processes and weathering (e.g. heavy-mineral sand deposits, fluviatile sandstones, unconformity-related uranium deposits, and lignites) (Jaireth et al., 2014). Recent studies on various parts of Iran have led to the identification of promising potential for these elements, including Central Iran, alkaline rocks in the Eslami Peninsula, iron and apatite in the Hormuz Island, the Kahnouj titanium deposit, granitoid bodies in Yazd, Azerbaijan, and Mashhad and associated dikes, and finally placers related to the Shemshak formation in Marvast, Kharanagh, and Ardekan, indicating high concentrations of REE in magmatogenic iron-apatite deposits in Central Iran and placers in the Marvast area in Yazd (Ghorbani, 2013). Materials and methods In the present study, the geochemical behavior of rare earth elements is modeled using multivariate statistical methods in the eastern part of the Marvast placer. Marvast is located 185 km south of the city of Yazd in central Iran, between Yazd and Mehriz. This area lies within the southeastern part of the Sanandaj-Sirjan Zone (Alipour-Asll et al., 2012). Samples from 53 wells were analyzed for whole-rock trace-element concentrations (including REE) by inductively coupled plasma-mass spectrometry (ICP-MS) (GSI, 2004). Clustering techniques, a class of multivariate statistical analysis methods, can be employed to find appropriate groups in data sets. One of the main objectives of data clustering

  3. Micro-mechanical modeling and numerical simulation of creep in concrete taking into account the effects of micro-cracking and hygro-thermal

    International Nuclear Information System (INIS)

    Thai, M.Q.

    2012-01-01

    Concrete is a complex heterogeneous material whose deformations include a delayed part that is affected by a number of factors such as temperature, relative humidity and microstructure evolution. Taking into account delayed deformations, and in particular creep, is essential in the computation of concrete structures such as those dedicated to radioactive waste storage. The present work aims: (1) at elaborating a simple and robust model of creep for concrete by using micro-mechanics and accounting for the effects of damage, temperature and relative humidity; (2) at numerically implementing the creep model developed in a finite element code so as to simulate the behavior of simple structural elements in concrete. To achieve this twofold objective, the present work is partitioned into three parts. In the first part the cement-based material at the microscopic scale is taken to consist of a linear viscoelastic matrix characterized by a generalized Maxwell model and of particulate phases representing elastic aggregates and pores. The Mori-Tanaka micro-mechanical scheme, the Laplace-Carson transform and its inversion are then used to obtain analytical or numerical estimates for the mechanical and hydro-mechanical parameters of the material. Next, the original micromechanical model of creep is coupled to the damage model of Mazars through the concept of pseudo-deformations introduced by Schapery. The parameters involved in the creep-damage model thus established are systematically identified using available experimental data. Finally, the effects of temperature and relative humidity are accounted for in the creep-damage model by using the equivalent time method; the efficiency of this approach is demonstrated and discussed in the case of simple creep tests. (author) [fr

  4. A Comparison of Seven Cox Regression-Based Models to Account for Heterogeneity Across Multiple HIV Treatment Cohorts in Latin America and the Caribbean.

    Science.gov (United States)

    Giganti, Mark J; Luz, Paula M; Caro-Vega, Yanink; Cesar, Carina; Padgett, Denis; Koenig, Serena; Echevarria, Juan; McGowan, Catherine C; Shepherd, Bryan E

    2015-05-01

    Many studies of HIV/AIDS aggregate data from multiple cohorts to improve power and generalizability. There are several analysis approaches to account for cross-cohort heterogeneity; we assessed how different approaches can impact results from an HIV/AIDS study investigating predictors of mortality. Using data from 13,658 HIV-infected patients starting antiretroviral therapy from seven Latin American and Caribbean cohorts, we illustrate the assumptions of seven readily implementable approaches to account for across cohort heterogeneity with Cox proportional hazards models, and we compare hazard ratio estimates across approaches. As a sensitivity analysis, we modify cohort membership to generate specific heterogeneity conditions. Hazard ratio estimates varied slightly between the seven analysis approaches, but differences were not clinically meaningful. Adjusted hazard ratio estimates for the association between AIDS at treatment initiation and death varied from 2.00 to 2.20 across approaches that accounted for heterogeneity; the adjusted hazard ratio was estimated as 1.73 in analyses that ignored across cohort heterogeneity. In sensitivity analyses with more extreme heterogeneity, we noted a slightly greater distinction between approaches. Despite substantial heterogeneity between cohorts, the impact of the specific approach to account for heterogeneity was minimal in our case study. Our results suggest that it is important to account for across cohort heterogeneity in analyses, but that the specific technique for addressing heterogeneity may be less important. Because of their flexibility in accounting for cohort heterogeneity, we prefer stratification or meta-analysis methods, but we encourage investigators to consider their specific study conditions and objectives.
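
One of the readily implementable approaches the authors compare, meta-analysis, can be sketched as fixed-effect inverse-variance pooling of per-cohort log hazard ratios. This is the textbook construction, not necessarily the authors' exact implementation, and the inputs below are hypothetical.

```python
import math

def pool_log_hazard_ratios(log_hrs, std_errs):
    """Fixed-effect meta-analysis: weight each cohort's log hazard ratio
    by the inverse of its variance, then pool."""
    weights = [1.0 / se ** 2 for se in std_errs]
    pooled = sum(w * lhr for w, lhr in zip(weights, log_hrs)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    return math.exp(pooled), pooled_se
```

Pooling three identical cohorts leaves the hazard ratio unchanged while shrinking its standard error by a factor of sqrt(3), which is the power gain that motivates aggregating cohorts in the first place.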

  5. Modeling of the cesium 137 air transfer taking account of dust-making distinction on arable and long-fallow lands

    International Nuclear Information System (INIS)

    Bogdanov, A.P.; Zhmura, G.M.

    1997-01-01

    A mathematical model for the atmospheric transfer of cesium 137 from contaminated regions, which takes into account differences in dust generation between arable and long-fallow lands, is suggested. The calculated near-ground concentrations of cesium 137 for several towns of Belarus are presented. The sources of atmospheric contamination at each calculation point have been analysed

  6. Spontaneous Retroperitoneal Hematoma: A Rare Devastating ...

    African Journals Online (AJOL)

    rare, accounting for 0.1–10.4% in autopsy statistics. The exact mechanism of rupture of branches of splanchnic vessels is unknown, but likely represents weakness of the tunica ...

  7. First steps of integrated spatial modeling of titanium, zirconium, and rare earth element resources within the Coastal Plain sediments of the southeastern United States

    Science.gov (United States)

    Ellefsen, Karl J.; Van Gosen, Bradley S.; Fey, David L.; Budahn, James R.; Smith, Steven M.; Shah, Anjana K.

    2015-01-01

    The Coastal Plain of the southeastern United States has extensive, unconsolidated sedimentary deposits that are enriched in heavy minerals containing titanium, zirconium, and rare earth element resources. Areas favorable for exploration and development of these resources are being identified by geochemical data, which are supplemented with geological, geophysical, hydrological, and geographical data. The first steps of this analysis have been completed. The concentrations of lanthanum, yttrium, and titanium tend to decrease as distance from the Piedmont (which is the likely source of these resources) increases and are moderately correlated with airborne measurements of equivalent thorium concentration. The concentrations of lanthanum, yttrium, and titanium are relatively high in those watersheds that adjoin the Piedmont, south of the Cape Fear Arch. Although this relation suggests that the concentrations are related to the watersheds, it may be simply an independent regional trend. The concentration of zirconium is unrelated to the distance from the Piedmont, the equivalent thorium concentration, and the watershed. These findings establish a foundation for more sophisticated analyses using integrated spatial modeling.

  8. The $B \\to \\pi \\pi, \\pi K$ Puzzles in the Light of New Data Implications for the Standard Model, New Physics and Rare Decays

    CERN Document Server

    Buras, Andrzej J; Recksiegel, S; Schwab, F; Buras, Andrzej J.; Fleischer, Robert; Recksiegel, Stefan; Schwab, Felix

    2005-01-01

    Recently, we developed a strategy to analyse the B -> pi pi,pi K data. We found that the B -> pi pi measurements can be accommodated in the Standard Model (SM) through large non-factorizable effects. On the other hand, our analysis of the ratios R_c and R_n of the CP-averaged branching ratios of the charged and neutral B -> pi K modes, respectively, suggested new physics (NP) in the electroweak penguin sector, which may have a powerful interplay with rare decays. In this paper, we confront our strategy with recent experimental developments, addressing also the direct CP violation in B_d -> pi^-+ K^+-, which is now an established effect, the relation to its counterpart in B^+- -> pi^0 K^+-, and the first results for the direct CP asymmetry of B_d -> pi^0 pi^0 that turn out to be in agreement with our prediction. We obtain hadronic B -> pi pi,pi K parameters which are almost unchanged and arrive at an allowed region for the unitarity triangle in perfect accordance with the SM. The "B -> pi K puzzle" persists,...

  9. Testing the efficacy of existing force-endurance models to account for the prevalence of obesity in the workforce.

    Science.gov (United States)

    Pajoutan, Mojdeh; Cavuoto, Lora A; Mehta, Ranjana K

    2017-10-01

    This study evaluates whether the existing force-endurance relationship models are predictive of endurance time for overweight and obese individuals and, where they are not, provides revised models that can be applied in ergonomics practice. Data were collected from 141 participants (49 normal weight, 50 overweight, 42 obese), each of whom performed isometric endurance tasks of hand grip, shoulder flexion, and trunk extension at four levels of relative workload. Subject-specific fatigue rates and a general model of the force-endurance relationship were determined and compared to two fatigue models from the literature. There was a lack of fit between the previous models and the current data for grip (ICC = 0.8), with a shift toward lower endurance times in the new data. Application of the revised models can facilitate improved workplace design and job evaluation to accommodate the capacities of the current workforce.
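Force-endurance relationships of the kind evaluated here are commonly expressed as a power law, ET = a * f^(-b), where f is the relative workload (fraction of MVC) and ET the endurance time. The sketch below fits that generic form with SciPy; the data points and fitted coefficients are hypothetical illustrations, not the study's revised models.

```python
import numpy as np
from scipy.optimize import curve_fit

def endurance_time(f_mvc, a, b):
    """Generic power-law force-endurance model: ET = a * f^(-b)."""
    return a * f_mvc ** (-b)

# Hypothetical grip data: relative workload (fraction of MVC) and observed
# endurance times in seconds. Values are illustrative, not from the study.
workload = np.array([0.25, 0.40, 0.55, 0.70])
et_obs = np.array([320.0, 150.0, 85.0, 55.0])

# Nonlinear least-squares fit of the two coefficients.
(a_hat, b_hat), _ = curve_fit(endurance_time, workload, et_obs, p0=(50.0, 1.5))
print(f"fitted: ET = {a_hat:.1f} * f^-{b_hat:.2f}")
```

Comparing curves fitted separately per BMI group is one way a shift toward lower endurance times, like the one reported for grip, would show up.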

  10. Contingent Valuation Method and the beta model: an accounting economic vision for environmental damage in Atlântico Sul Shipyard

    Directory of Open Access Journals (Sweden)

    Silvana Karina de Melo Travassos

    2018-02-01

    The objective of this paper is to apply the beta regression model as an alternative within the Contingent Valuation Method to estimate willingness to pay (WTP) for an environmental asset, so that the Tribunal de Contas do Estado de Pernambuco (TCE/PE) can supervise the Atlântico Sul Shipyard (ASS) as a negative environmental externality, which is discussed here from an accounting perspective. Our methodology is exploratory, and the beta regression model was used in the contingent valuation to estimate the value of the environmental asset. The results allowed us to estimate the value of the Ipojuca mangrove at US$ 134,079,793.50, while the environmental damage caused by the shipyard to this public asset was valued at US$ 61,378,155.37; the latter value is of interest to the inspection body. However, the final estimated value of the Ipojuca mangrove prompts a discussion of the implications from an accounting point of view, such as the attribution of monetary value to a public asset that has no financial value, and the problems of conceptualizing and valuing public assets within governmental patrimony. It is concluded that the beta regression model for estimating WTP in contingent valuation contributes to the research on accounting measurement techniques for public assets.
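Beta regression models a response confined to the open interval (0, 1), such as WTP expressed as a fraction of a maximum bid, with a logit-linked mean and a precision parameter. The following is a minimal maximum-likelihood sketch on simulated data; the covariate, sample size, and parameter values are hypothetical and unrelated to the Ipojuca study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(1)

# Hypothetical survey: WTP as a fraction of a maximum bid in (0, 1),
# driven by one standardized covariate (e.g., income). Illustrative only.
n = 500
income = rng.normal(0, 1, n)
mu_true = expit(-0.5 + 0.8 * income)   # logit link for the mean
phi_true = 20.0                        # precision parameter
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

def neg_loglik(params):
    """Negative beta-regression log-likelihood, logit mean link."""
    b0, b1, log_phi = params
    mu = expit(b0 + b1 * income)
    phi = np.exp(log_phi)
    p, q = mu * phi, (1 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(p) - gammaln(q)
                   + (p - 1) * np.log(y) + (q - 1) * np.log1p(-y))

res = minimize(neg_loglik, x0=np.array([0.0, 0.0, 1.0]), method="BFGS")
b0_hat, b1_hat, phi_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(f"intercept={b0_hat:.2f} slope={b1_hat:.2f} precision={phi_hat:.1f}")
```

The precision parameter is what distinguishes this from a quasi-binomial fit: it lets the variance shrink near the interval ends without forcing a binomial mean-variance link.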

  11. Rare sugar d-psicose prevents progression and development of diabetes in T2DM model Otsuka Long-Evans Tokushima Fatty rats

    Science.gov (United States)

    Hossain, Akram; Yamaguchi, Fuminori; Hirose, Kayoko; Matsunaga, Toru; Sui, Li; Hirata, Yuko; Noguchi, Chisato; Katagi, Ayako; Kamitori, Kazuyo; Dong, Youyi; Tsukamoto, Ikuko; Tokuda, Masaaki

    2015-01-01

    Background: The fundamental cause of overweight and obesity is the consumption of calorie-dense foods. We have introduced a zero-calorie sweet sugar, d-psicose (d-allulose), a rare sugar proven to have strong antihyperglycemic and antihyperlipidemic effects, which could be used as a replacement for natural sugar in obese and diabetic subjects. Aim: The above-mentioned efficacy of d-psicose (d-allulose) was confirmed in our previous short-term studies on the type 2 diabetes mellitus (T2DM) model Otsuka Long-Evans Tokushima Fatty (OLETF) rat. In this study we investigated the long-term effect of d-psicose in preventing the commencement and progression of T2DM, with the mechanism of preservation of pancreatic β-cells, in OLETF rats. Methods: Treated OLETF rats were fed 5% d-psicose dissolved in water; control rats received only water. Nondiabetic Long-Evans Tokushima Otsuka (LETO) rats were taken as healthy controls and fed water. To follow the progression of diabetes, periodic measurements of blood glucose, plasma insulin, and body weight were continued until sacrifice at 60 weeks. In vivo body fat mass was measured periodically. On sacrifice, pancreas, liver, and abdominal adipose tissues were collected for various staining tests. Results: d-Psicose prevented the commencement and progression of T2DM through 60 weeks via maintenance of blood glucose levels, reduced body weight gain, and control of postprandial hyperglycemia, with decreased levels of HbA1c in comparison to nontreated control rats. This improvement in glycemic control was accompanied by maintenance of plasma insulin levels and preservation of pancreatic β-cells, with a significant reduction in inflammatory markers. Body fat accumulation was significantly lower in the treatment group, with decreased infiltration of macrophages in the abdominal adipose tissue. Conclusion: Our findings suggest that the rare sugar d-psicose could be beneficial for the

  12. Rare sugar D-psicose prevents progression and development of diabetes in T2DM model Otsuka Long-Evans Tokushima Fatty rats.

    Science.gov (United States)

    Hossain, Akram; Yamaguchi, Fuminori; Hirose, Kayoko; Matsunaga, Toru; Sui, Li; Hirata, Yuko; Noguchi, Chisato; Katagi, Ayako; Kamitori, Kazuyo; Dong, Youyi; Tsukamoto, Ikuko; Tokuda, Masaaki

    2015-01-01

    The fundamental cause of overweight and obesity is the consumption of calorie-dense foods. We have introduced a zero-calorie sweet sugar, d-psicose (d-allulose), a rare sugar proven to have strong antihyperglycemic and antihyperlipidemic effects, which could be used as a replacement for natural sugar in obese and diabetic subjects. The above-mentioned efficacy of d-psicose (d-allulose) was confirmed in our previous short-term studies on the type 2 diabetes mellitus (T2DM) model Otsuka Long-Evans Tokushima Fatty (OLETF) rat. In this study we investigated the long-term effect of d-psicose in preventing the commencement and progression of T2DM, with the mechanism of preservation of pancreatic β-cells, in OLETF rats. Treated OLETF rats were fed 5% d-psicose dissolved in water; control rats received only water. Nondiabetic Long-Evans Tokushima Otsuka (LETO) rats were taken as healthy controls and fed water. To follow the progression of diabetes, periodic measurements of blood glucose, plasma insulin, and body weight were continued until sacrifice at 60 weeks. In vivo body fat mass was measured periodically. On sacrifice, pancreas, liver, and abdominal adipose tissues were collected for various staining tests. d-Psicose prevented the commencement and progression of T2DM through 60 weeks via maintenance of blood glucose levels, reduced body weight gain, and control of postprandial hyperglycemia, with decreased levels of HbA1c in comparison to nontreated control rats. This improvement in glycemic control was accompanied by maintenance of plasma insulin levels and preservation of pancreatic β-cells, with a significant reduction in inflammatory markers. Body fat accumulation was significantly lower in the treatment group, with decreased infiltration of macrophages in the abdominal adipose tissue. Our findings suggest that the rare sugar d-psicose could be beneficial for the prevention and control of obesity and

  13. Rare earth germanates

    International Nuclear Information System (INIS)

    Bondar', I.A.; Vinogradova, N.V.; Dem'yanets, L.N.

    1983-01-01

    Recent achievements in the chemistry of rare earth germanates are surveyed from the viewpoint of structural chemistry and of the general regularities governing the formation of compounds and phases in melts and in the solid and gaseous states. Methods of synthesizing germanates and systems based on germanium oxide and rare earth oxides are considered. Data on crystallochemical characteristics are tabulated, and individual scandium germanate compounds are characterized. Germanate formation processes are studied using IR spectroscopy and X-ray phase analysis data. The structures and morphotropic series of rare earth germanates and silicates are determined. Fields of their present and possible future application are considered.

  14. Rare earth oxychalcogenides

    International Nuclear Information System (INIS)

    Eliseev, A.A.; Grizik, A.A.

    1977-01-01

    Oxychalcogenides of the rare earth elements are considered: their nomenclature, general physicochemical characteristics, and methods of preparation. The chemistry and crystal chemistry of oxychalcogenides of the Ln₂O₂S, Ln₂O₂Se, Ln₄O₄Se₃, and Ln₂O₂Te types, where Ln = La–Lu, are considered in detail. Crystal lattice parameters, elementary cells, interatomic distances, and the dependence of lattice periods on the ionic radii of the rare earth elements are given. The prospects for the practical application of rare earth oxychalcogenides as various luminophores are described.

  15. Rare Disease Video Portal

    OpenAIRE

    Sánchez Bocanegra, Carlos Luis

    2011-01-01

    Rare Disease Video Portal (RD Video) is a web portal that contains videos from YouTube, including full details from 12 YouTube channels.

  16. Rare kaon, muon, and pion decay

    Energy Technology Data Exchange (ETDEWEB)

    Littenberg, L.

    1998-12-01

    The author discusses the status of and prospects for the study of rare decays of kaons, muons, and pions. Studies of rare kaon decays are entering an interesting new phase in which they can deliver important short-distance information. It should be possible to construct a unitarity triangle alternative to that determined in the B sector, and thus perform a critical check of the Standard Model by comparing the two. Rare muon decays are beginning to constrain supersymmetric models in a significant way, and future experiments should reach sensitivities at which this kind of model must either show effects or become far less appealing.

  17. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    Science.gov (United States)

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk.
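For the simplest case the record mentions, a random-effects summary without covariates, the standard DerSimonian-Laird estimator gives the between-study variance and pooled effect in closed form. A minimal sketch follows, with hypothetical study-level log-odds ratios standing in for real trial data (this is not the paper's network meta-regression model).

```python
import numpy as np

# Hypothetical study-level log-odds-ratio estimates and their variances.
yi = np.array([-0.50, -0.20, -0.80, 0.10, -0.40])
vi = np.array([0.04, 0.09, 0.06, 0.12, 0.05])

# Fixed-effect (inverse-variance) quantities.
wi = 1.0 / vi
mu_fe = np.sum(wi * yi) / np.sum(wi)
Q = np.sum(wi * (yi - mu_fe) ** 2)   # Cochran's Q heterogeneity statistic
df = len(yi) - 1

# DerSimonian-Laird between-study variance tau^2 (truncated at zero).
c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooled mean and its standard error.
wi_star = 1.0 / (vi + tau2)
mu_re = np.sum(wi_star * yi) / np.sum(wi_star)
se_re = np.sqrt(1.0 / np.sum(wi_star))
print(f"tau^2={tau2:.3f}  pooled effect={mu_re:.3f} (SE {se_re:.3f})")
```

The distinction the record draws between summaries matters here: mu_re with se_re describes the random-effects mean, while a predictive interval for a new setting would additionally widen by tau2.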

  18. Designing a Complete Model for Evaluating Companies in "The Modern Economy" and Refining Financial-Accounting Information

    Directory of Open Access Journals (Sweden)

    Pepi Mitică

    2017-01-01

    The limitations of current evaluation methods call for broader approaches to identifying new solutions for representing the value of ICT companies. The features of "the modern economy", the imperative of eliminating inflection points, the need to formulate an impartial definition of value, and the lack of correlation between the accounting regulations on intangible assets and economic and social development based on intellectual capital are all arguments for the emergence of a new representation of value. The new FMV (Future Market Value) method provides economic information in its dynamics and value in its evolution. The concerns of practitioners in the field over the last decade are consistent with the premises of our research.

  19. NUMERICAL MODELING OF CONJUGATE HEAT TRANSFER IN AN INSULATED GLASS UNIT (IGU WITH ACCOUNT FOR ITS DEFORMATION

    Directory of Open Access Journals (Sweden)

    Golubev Stanislav Sergeevich

    2012-12-01

    Different climatic impacts lead to the deformation of the glasses within an IGU (and of its vertical cavity, respectively). Deformation of the glasses and vertical cavity reduces the thermal resistance of an IGU. A numerical simulation of conjugate heat transfer within an IGU was implemented as part of the research into this phenomenon. Calculations were performed in the ANSYS FLUENT CFD package. The basic equations describing conservation of mass, conservation of momentum (in the Boussinesq approximation), and conservation of energy were solved, and radiation from the cavity walls was also taken into account. The vertical walls were treated as non-isothermal, while the horizontal walls were adiabatic. Calculations were made for several patterns of glass deformation. The results demonstrate that the heat flow over the vertical walls intensifies as the distance between the centres of the IGU glasses is reduced, and the temperature in the central area of the hot glass drops.
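The strength of natural convection in the IGU cavity is governed by the Rayleigh number under the Boussinesq approximation, and its cubic dependence on the gap width shows why glass deflection matters. A back-of-the-envelope Python sketch, using typical textbook air properties near 10 °C and an illustrative 15 K glass-to-glass temperature difference (not values from the paper), is:

```python
# Order-of-magnitude estimate of the Rayleigh number in an IGU air cavity
# under the Boussinesq approximation. Property values are typical textbook
# figures for air near 10 °C; geometry and dT are illustrative.
g = 9.81            # gravitational acceleration, m/s^2
beta = 1.0 / 283.0  # thermal expansion coefficient of air, 1/K
nu = 1.4e-5         # kinematic viscosity of air, m^2/s
alpha = 2.0e-5      # thermal diffusivity of air, m^2/s
dT = 15.0           # glass-to-glass temperature difference, K

def rayleigh(gap_m):
    """Ra = g * beta * dT * L^3 / (nu * alpha) for a cavity of width L."""
    return g * beta * dT * gap_m ** 3 / (nu * alpha)

# Deformation that narrows the cavity cuts Ra with the cube of the gap,
# suppressing convection, while pure conduction grows as 1/gap.
for gap in (0.016, 0.012, 0.008):
    print(f"gap {gap * 1000:4.0f} mm -> Ra = {rayleigh(gap):9.0f}")
```

This competition between weakened convection and strengthened conduction is consistent with the reported result that heat flow intensifies as the glasses bow toward each other.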

  20. Rare sugar D-psicose prevents progression and development of diabetes in T2DM model Otsuka Long-Evans Tokushima Fatty rats

    Directory of Open Access Journals (Sweden)

    Hossain A

    2015-01-01

    Akram Hossain,1,2 Fuminori Yamaguchi,1 Kayoko Hirose,1 Toru Matsunaga,3 Li Sui,1 Yuko Hirata,1 Chisato Noguchi,1 Ayako Katagi,1 Kazuyo Kamitori,1 Youyi Dong,1 Ikuko Tsukamoto,4 Masaaki Tokuda1 (1Department of Cell Physiology, Faculty of Medicine, Kagawa University, Ikenobe, Miki, Kagawa, Japan; 2Research and Development, Matsutani Chemical Industry Co., Ltd., Kitaitami, Itami-shi, Hyogo, Japan; 3Division of Hospital Pathology, Faculty of Medicine, Kagawa University, Ikenobe, Miki, Kagawa, Japan; 4Department of Pharmaco-Bio-Informatics, Faculty of Medicine, Kagawa University, Ikenobe, Miki, Kagawa, Japan) Background: The fundamental cause of overweight and obesity is the consumption of calorie-dense foods. We have introduced a zero-calorie sweet sugar, D-psicose (D-allulose), a rare sugar proven to have strong antihyperglycemic and antihyperlipidemic effects, which could be used as a replacement for natural sugar in obese and diabetic subjects. Aim: The above-mentioned efficacy of D-psicose (D-allulose) was confirmed in our previous short-term studies on the type 2 diabetes mellitus (T2DM) model Otsuka Long-Evans Tokushima Fatty (OLETF) rat. In this study we investigated the long-term effect of D-psicose in preventing the commencement and progression of T2DM, with the mechanism of preservation of pancreatic β-cells, in OLETF rats. Methods: Treated OLETF rats were fed 5% D-psicose dissolved in water; control rats received only water. Nondiabetic Long-Evans Tokushima Otsuka (LETO) rats were taken as healthy controls and fed water. To follow the progression of diabetes, periodic measurements of blood glucose, plasma insulin, and body weight were continued until sacrifice at 60 weeks. In vivo body fat mass was measured periodically. On sacrifice, pancreas, liver, and abdominal adipose tissues were collected for various staining tests. Results: D-Psicose prevented the commencement and progression of T2DM till 60 weeks through the