WorldWideScience

Sample records for model ii accounts

  1. Biological parametric mapping accounting for random regressors with regression calibration and model II regression.

    Science.gov (United States)

    Yang, Xue; Lauzon, Carolyn B; Crainiceanu, Ciprian; Caffo, Brian; Resnick, Susan M; Landman, Bennett A

    2012-09-01

    Massively univariate regression and inference in the form of statistical parametric mapping have transformed the way in which multi-dimensional imaging data are studied. In functional and structural neuroimaging, the de facto standard "design matrix"-based general linear regression model and its multi-level cousins have enabled investigation of the biological basis of the human brain. With modern study designs, it is possible to acquire multi-modal three-dimensional assessments of the same individuals--e.g., structural, functional and quantitative magnetic resonance imaging, alongside functional and ligand binding maps with positron emission tomography. Largely, current statistical methods in the imaging community assume that the regressors are non-random. For more realistic multi-parametric assessment (e.g., voxel-wise modeling), distributional consideration of all observations is appropriate. Herein, we discuss two unified regression and inference approaches, model II regression and regression calibration, for use in massively univariate inference with imaging data. These methods use the design matrix paradigm and account for both random and non-random imaging regressors. We characterize these methods in simulation and illustrate their use on an empirical dataset. Both methods have been made readily available as a toolbox plug-in for the SPM software. Copyright © 2012 Elsevier Inc. All rights reserved.
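The contrast between ordinary least squares and model II regression under a random regressor can be sketched in a few lines (an illustrative Deming-regression implementation, not the authors' SPM toolbox; the error-variance ratio `delta` is assumed known):

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Model II (Deming) regression slope, assuming the ratio
    delta = var(err_y) / var(err_x) of measurement-error variances is known."""
    sxx = np.sum((x - x.mean()) ** 2)
    syy = np.sum((y - y.mean()) ** 2)
    sxy = np.sum((x - x.mean()) * (y - y.mean()))
    return (syy - delta * sxx
            + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)

rng = np.random.default_rng(0)
signal = np.linspace(0.0, 10.0, 500)
x = signal + rng.normal(0.0, 1.0, signal.size)   # regressor measured with error
y = 2.0 * signal + rng.normal(0.0, 1.0, signal.size)

ols_slope = np.polyfit(x, y, 1)[0]   # attenuated toward zero by regressor noise
mii_slope = deming_slope(x, y)       # corrects the attenuation
```

With these settings ordinary least squares underestimates the true slope of 2 (classical attenuation), while the model II estimate recovers it.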

  2. On the importance of accounting for competing risks in pediatric brain cancer: II. Regression modeling and sample size.

    Science.gov (United States)

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-03-15

    To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest. Copyright © 2011 Elsevier Inc. All rights reserved.
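The competing-risks machinery the authors build on can be illustrated with a minimal nonparametric cumulative incidence estimator (an Aalen-Johansen-style sketch, not the paper's regression models; the event coding is an assumption: 0 = censored, 1, 2, ... = competing causes):

```python
import numpy as np

def cumulative_incidence(times, events, cause):
    """Cumulative incidence function for one cause in the presence of
    competing risks. Returns (time, CIF) step points."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    surv, cif, at_risk = 1.0, 0.0, len(times)
    steps = []
    for t in np.unique(times):
        here = times == t
        d_all = int(np.sum(here & (events > 0)))       # failures from any cause
        d_cause = int(np.sum(here & (events == cause)))
        cif += surv * d_cause / at_risk                # fail now, from this cause
        surv *= 1.0 - d_all / at_risk                  # overall event-free survival
        at_risk -= int(np.sum(here))                   # drop failures and censored
        steps.append((float(t), cif))
    return steps
```

Treating competing events as plain censoring (the 1 minus Kaplan-Meier shortcut) overstates the incidence of each cause; the CIF above does not, which is one reason effect estimates and sample sizes differ between the cause-specific and subdistribution approaches.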

  3. SARP-II: Safeguards Accounting and Reports Program, Revised

    International Nuclear Information System (INIS)

    Kempf, C.R.

    1994-01-01

    A computer code, SARP (Safeguards Accounting and Reports Program) which will generate and maintain at-facility safeguards accounting records, and generate IAEA safeguards reports based on accounting data input by the user, was completed in 1990 by the Safeguards, Safety, and Nonproliferation Division (formerly the Technical Support Organization) at Brookhaven National Laboratory as a task under the US Program of Technical Support to IAEA safeguards. The code was based on a State System of Accounting for and Control of Nuclear Material (SSAC) for off-load refueled power reactor facilities, with model facility and safeguards accounting regime as described in IAEA Safeguards Publication STR-165. Since 1990, improvements in computing capabilities and comments and suggestions from users engendered revision of the original code. The result is an updated, revised version called SARP-II which is discussed in this report

  4. Some Determinants of Student Performance in Principles of Financial Accounting (II) – Further Evidence from Kuwait

    OpenAIRE

    Khalid, Abdulla A.

    2012-01-01

    The purpose of this study was to perform an empirical investigation of the influence of select factors on the academic performance of students studying Principles of Financial Accounting (II). This study attempts to fill some of the gaps in the existing local and regional accounting education literature and to provide comparative evidence for the harmonization of international accounting education. A stepwise regression model using a sample of 205 students from the College of B...

  5. Modelling in Accounting. Theoretical and Practical Dimensions

    Directory of Open Access Journals (Sweden)

Teresa Szot-Gabryś

    2010-10-01

Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and ability of adaptation to information needs of information recipients. One of the main currents in the development of accounting theory and practice is to cover by economic measurements areas which have not been hitherto covered by any accounting system (it applies, for example, to small businesses, agricultural farms, human capital, which requires the development of an appropriate theoretical and practical model. The article illustrates the issue of modelling in accounting based on the example of an accounting model developed for small businesses, i.e. economic entities which are not obliged by law to keep accounting records.

  6. Stream II-V5: Revision Of Stream II-V4 To Account For The Effects Of Rainfall Events

    International Nuclear Information System (INIS)

    Chen, K.

    2010-01-01

    STREAM II-V4 is the aqueous transport module currently used by the Savannah River Site emergency response Weather Information Display (WIND) system. The transport model of the Water Quality Analysis Simulation Program (WASP) was used by STREAM II to perform contaminant transport calculations. WASP5 is a US Environmental Protection Agency (EPA) water quality analysis program that simulates contaminant transport and fate through surface water. STREAM II-V4 predicts peak concentration and peak concentration arrival time at downstream locations for releases from the SRS facilities to the Savannah River. The input flows for STREAM II-V4 are derived from the historical flow records measured by the United States Geological Survey (USGS). The stream flow for STREAM II-V4 is fixed and the flow only varies with the month in which the releases are taking place. Therefore, the effects of flow surge due to a severe storm are not accounted for by STREAM II-V4. STREAM II-V4 has been revised to account for the effects of a storm event. The steps used in this method are: (1) generate rainfall hyetographs as a function of total rainfall in inches (or millimeters) and rainfall duration in hours; (2) generate watershed runoff flow based on the rainfall hyetographs from step 1; (3) calculate the variation of stream segment volume (cross section) as a function of flow from step 2; (4) implement the results from steps 2 and 3 into the STREAM II model. The revised model (STREAM II-V5) will find the proper stream inlet flow based on the total rainfall and rainfall duration as input by the user. STREAM II-V5 adjusts the stream segment volumes (cross sections) based on the stream inlet flow. The rainfall based stream flow and the adjusted stream segment volumes are then used for contaminant transport calculations.
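Steps 1 and 2 of this scheme can be sketched generically (a toy symmetric triangular hyetograph and a single linear-reservoir runoff model; the actual STREAM II-V5 hyetographs and watershed response are more detailed, and every parameter value here is an assumption):

```python
import numpy as np

def triangular_hyetograph(total_in, duration_hr, dt_hr=0.25):
    """Step 1 (illustrative): distribute total rainfall over the storm
    duration with a symmetric triangular intensity profile."""
    t = np.arange(dt_hr / 2, duration_hr, dt_hr)     # time-step midpoints
    peak = duration_hr / 2
    shape = 1.0 - np.abs(t - peak) / peak            # triangle, 0..1
    depth = shape / shape.sum() * total_in           # rainfall depth per step
    return t, depth

def linear_reservoir_runoff(depth, dt_hr, k_hr=2.0):
    """Step 2 (illustrative): convert rainfall to watershed outflow with a
    single linear reservoir: dS/dt = P - S/k, Q = S/k."""
    storage, outflow = 0.0, []
    for p in depth:
        storage += p - (storage / k_hr) * dt_hr
        outflow.append(storage / k_hr)
    return np.array(outflow)

t, depth = triangular_hyetograph(total_in=2.0, duration_hr=6.0)
q = linear_reservoir_runoff(depth, dt_hr=0.25)
```

The reservoir delays and smooths the rainfall pulse, so the runoff peak lags the rainfall peak, which is the flow-surge behavior STREAM II-V5 feeds into its stream-segment volumes.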

  7. Implementing a trustworthy cost-accounting model.

    Science.gov (United States)

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  8. Questioning Stakeholder Legitimacy: A Philanthropic Accountability Model.

    Science.gov (United States)

    Kraeger, Patsy; Robichau, Robbie

    2017-01-01

    Philanthropic organizations contribute to important work that solves complex problems to strengthen communities. Many of these organizations are moving toward engaging in public policy work, in addition to funding programs. This paper raises questions of legitimacy for foundations, as well as issues of transparency and accountability in a pluralistic democracy. Measures of civic health also inform how philanthropic organizations can be accountable to stakeholders. We propose a holistic model for philanthropic accountability that combines elements of transparency and performance accountability, as well as practices associated with the American pluralistic model for democratic accountability. We argue that philanthropic institutions should seek stakeholder and public input when shaping any public policy agenda. This paper suggests a new paradigm, called philanthropic accountability that can be used for legitimacy and democratic governance of private foundations engaged in policy work. The Philanthropic Accountability Model can be empirically tested and used as a governance tool.

  9. Understanding financial crisis through accounting models

    NARCIS (Netherlands)

    Bezemer, D.J.

    2010-01-01

This paper presents evidence that accounting (or flow-of-funds) macroeconomic models helped anticipate the credit crisis and economic recession. Equilibrium models ubiquitous in mainstream policy and research did not. This study traces the intellectual pedigrees of the accounting approach as an

  10. Modelling of functional systems of managerial accounting

    Directory of Open Access Journals (Sweden)

    O.V. Fomina

    2017-12-01

The modern stage of managerial accounting development is taking place under the powerful influence of managerial innovations. The article is aimed at developing an integration model of budgeting and the system of balanced indicators within managerial accounting, which will increase the relevance of decisions made by managers at different levels of management. As a result of the study, the author proposes a highly pragmatic integration model of budgeting and the system of balanced indicators in managerial accounting. It is realized through the development of a system for gathering, consolidating, analyzing and interpreting financial and non-financial information, and it increases the relevance of managerial decisions through the coordination and effective, purpose-oriented use of both the strategic and operational resources of an enterprise. The effective integration of the system components makes it possible to distribute limited resources rationally, taking into account prospective purposes and strategic initiatives, to carry

  11. Modeling habitat dynamics accounting for possible misclassification

    Science.gov (United States)

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more difficult and more challenging, with error in classification being confused with change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel) and when true and observed states are obtained at one level of resolution, but transition probabilities estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling-up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
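The bias the simulations demonstrate is easy to reproduce with a toy two-state habitat chain observed through a noisy classifier (an illustrative simulation, not the authors' multistate model; the 10% symmetric error rate is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
P_true = np.array([[0.9, 0.1],     # true habitat transition probabilities
                   [0.2, 0.8]])
eps = 0.1                          # assumed symmetric misclassification rate

n = 5000
state0 = rng.integers(0, 2, n)                                 # true state, time 1
state1 = np.array([rng.choice(2, p=P_true[s]) for s in state0])  # true state, time 2

def observe(s):
    """Flip each pixel's classification with probability eps."""
    return np.where(rng.random(s.size) < eps, 1 - s, s)

obs0, obs1 = observe(state0), observe(state1)

# naive transition estimate from observed maps, ignoring misclassification
naive_p01 = np.mean(obs1[obs0 == 0] == 1)
```

Even a 10% classification error roughly doubles the apparent 0 to 1 turnover (from a true 0.10 to about 0.24), exactly the artificial inflation of habitat turnover the abstract describes.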

  12. Media Accountability Systems: Models, proposals and outlooks

    Directory of Open Access Journals (Sweden)

    Fernando O. Paulino

    2007-06-01

This paper analyzes one of the basic actions of SOS-Imprensa, the mechanism to assure Media Accountability with the goal of proposing a synthesis of models for the Brazilian reality. The article aims to address the possibilities of creating and improving mechanisms to stimulate the democratic press process and to mark out and assure freedom of speech and personal rights with respect to the media. Based on the Press Social Responsibility Theory, the hypothesis is that the experiences analyzed (Communication Council, Press Council, Ombudsman and Readers Council are alternatives for accountability, mediation and arbitration, seeking visibility, trust and public support in favor of fairer media.

  13. The OntoREA Accounting Model: Ontology-based Modeling of the Accounting Domain

    Directory of Open Access Journals (Sweden)

    Christian Fischer-Pauzenberger

    2017-07-01

McCarthy developed a framework for modeling the economic rationale of different business transactions along the enterprise value chain, described in his seminal article “The REA Accounting Model – A Generalized Framework for Accounting Systems in a Shared Data Environment”. Originally, the REA accounting model was specified in the entity-relationship (ER) language. Later on, other languages – especially in the form of generic data models and UML class models (UML language) – were used. Recently, the OntoUML language was developed by Guizzardi and used by Gailly et al. for a metaphysical reengineering of the REA enterprise ontology. Although the REA accounting model originally addressed the accounting domain, it is most successfully applied as a reference framework for the conceptual modeling of enterprise systems. The primary research objective of this article is to anchor the REA-based models more deeply in the accounting domain. In order to achieve this objective, essential primitives of the REA model are identified and conceptualized in the OntoUML language within the Asset Liability Equity (ALE) context of the traditional ALE accounting domain.

  14. Solvency II. Partial Internal Model

    OpenAIRE

    Baltrėnas, Rokas

    2016-01-01

    Solvency II. Partial Internal Model Solvency is one of the most important characteristics of the insurance company. Sufficient solvency ratio ensures long–term performance of the company and the necessary protection of policyholders. The new solvency assessment framework (Solvency II) came into force across the EU on 1 January 2016. It is based on a variety of risk evaluation modules, so it better reflects the real state of the company’s solvency. Under the Solvency II insurance company’s sol...
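The modular risk aggregation at the heart of the Solvency II standard formula (which a partial internal model replaces module by module) can be sketched as a correlation-weighted square root; module names, capital amounts, and correlations below are illustrative, not the regulation's calibration:

```python
import numpy as np

# stand-alone capital requirements per risk module (illustrative figures)
scr = np.array([100.0, 80.0, 40.0])       # e.g. market, life, health

# assumed inter-module correlation matrix
corr = np.array([[1.00, 0.25, 0.25],
                 [0.25, 1.00, 0.25],
                 [0.25, 0.25, 1.00]])

# aggregate SCR = sqrt(s' C s); less than the sum due to diversification
scr_total = np.sqrt(scr @ corr @ scr)
```

Because the modules are imperfectly correlated, the aggregate requirement (160 here) is well below the simple sum of the module requirements (220), which is the diversification benefit the module structure is designed to capture.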

  15. Driving Strategic Risk Planning With Predictive Modelling For Managerial Accounting

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

Currently, risk management in management/managerial accounting is treated as deterministic. Although it is well known that risk estimates are necessarily uncertain or stochastic, until recently the methodology required to handle stochastic risk-based elements appeared to be impractical and too mathematical. The ultimate purpose of this paper is to “make the risk concept procedural and analytical” and to argue that accountants should now include stochastic risk management as a standard tool. Drawing on mathematical modelling and statistics, this paper methodically develops a risk analysis approach for managerial accounting and shows how it can be used to determine the impact of different types of risk assessment input parameters on the variability of important outcome measures. The purpose is to: (i) point out the theoretical necessity of a stochastic risk framework; (ii) present a stochastic framework...
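A stochastic treatment of a managerial accounting outcome measure can be sketched as a simple Monte Carlo simulation (the distributions and parameter values are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# stochastic input parameters (assumed distributions)
price = rng.normal(10.0, 1.0, n)                     # selling price per unit
quantity = rng.lognormal(np.log(1000.0), 0.1, n)     # units sold
unit_cost = rng.triangular(6.0, 7.0, 9.0, n)         # cost per unit

profit = (price - unit_cost) * quantity              # outcome measure

p5, p95 = np.percentile(profit, [5, 95])             # risk interval
```

Instead of a single deterministic profit figure, the analyst reports a distribution, for example the 5th to 95th percentile band, making the variability of the outcome measure explicit.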

  16. Guidelines for School Property Accounting in Colorado, Part II--General Fixed Asset Accounts.

    Science.gov (United States)

    Stiverson, Clare L.

    The second publication of a series of three issued by the Colorado Department of Education is designed as a guide for local school districts in the development of a property accounting system. It defines and classifies groups of accounts whereby financial information, taken from inventory records, may be transcribed into debit and credit entries…

  17. Fusion of expertise among accounting faculty. Towards an expertise model for academia in accounting.

    NARCIS (Netherlands)

    Njoku, Jonathan C.; van der Heijden, Beatrice; Inanga, Eno L.

    2010-01-01

    This paper aims to portray an accounting faculty expert. It is argued that neither the academic nor the professional orientation alone appears adequate in developing accounting faculty expertise. The accounting faculty expert is supposed to develop into a so-called ‘flexpert’ (Van der Heijden, 2003)

  18. A simulation model for material accounting systems

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.

    1987-01-01

    A general-purpose model that was developed to simulate the operation of a chemical processing facility for nuclear materials has been extended to describe material measurement and accounting procedures as well. The model now provides descriptors for material balance areas, a large class of measurement instrument types and their associated measurement errors for various classes of materials, the measurement instruments themselves with their individual calibration schedules, and material balance closures. Delayed receipt of measurement results (as for off-line analytical chemistry assay), with interim use of a provisional measurement value, can be accurately represented. The simulation model can be used to estimate inventory difference variances for processing areas that do not operate at steady state, to evaluate the timeliness of measurement information, to determine process impacts of measurement requirements, and to evaluate the effectiveness of diversion-detection algorithms. Such information is usually difficult to obtain by other means. Use of the measurement simulation model is illustrated by applying it to estimate inventory difference variances for two material balance area structures of a fictitious nuclear material processing line
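The kind of inventory difference (ID) variance estimate such a simulation produces can be illustrated in miniature (the flows, the multiplicative error model, and the 1% relative standard deviation are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 20_000
rsd = 0.01                     # assumed 1% relative measurement error

# true material flows (kg); the true inventory difference is zero
beginning, receipts, removals, ending = 100.0, 50.0, 48.0, 102.0

def measure(true_value):
    """Simulate repeated measurements with multiplicative error."""
    return true_value * (1.0 + rng.normal(0.0, rsd, n_trials))

# ID = beginning inventory + receipts - removals - ending inventory
inv_diff = measure(beginning) + measure(receipts) - measure(removals) - measure(ending)

# first-order error propagation for independent measurements
analytic_var = rsd ** 2 * (beginning ** 2 + receipts ** 2
                           + removals ** 2 + ending ** 2)
```

For a steady-state balance like this, the simulated ID variance matches the analytic propagation; the value of a full facility simulation is precisely the non-steady-state cases where no such closed form exists.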

  19. Modelling in Accounting. Theoretical and Practical Dimensions

    OpenAIRE

Teresa Szot-Gabryś

    2010-01-01

    Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and ability of adaptation to information needs of information recipients. One of the main currents in the development of accounting theory and practice is to cover by economic...

  20. Display of the information model accounting system

    OpenAIRE

    Matija Varga

    2011-01-01

    This paper presents the accounting information system in public companies, business technology matrix and data flow diagram. The paper describes the purpose and goals of the accounting process, matrix sub-process and data class. Data flow in the accounting process and the so-called general ledger module are described in detail. Activities of the financial statements and determining the financial statements of the companies are mentioned as well. It is stated how the general ledger...

  1. DMFCA Model as a Possible Way to Detect Creative Accounting and Accounting Fraud in an Enterprise

    Directory of Open Access Journals (Sweden)

    Jindřiška Kouřilová

    2013-05-01

The quality of reported accounting data, as well as the quality and behaviour of their users, influences the efficiency of an enterprise's management, so its assessment may change as well. Several methods and tools are used to identify creative accounting and fraud. In this paper we present our proposal of the DMFCA (Detection Material Flow Cost Accounting) balance model, based on environmental accounting and on MFCA (Material Flow Cost Accounting) as its method. The following balance areas are included: material, financial and legislative. Using an analysis of the strengths and weaknesses of the model, its possible use within a production and business company was assessed, as was its possible use in detecting some creative accounting techniques. The model is developed in detail for practical use, and its theoretical aspects are described.
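The MFCA logic underlying the detection model can be sketched with the standard mass-based cost split between the product and the material loss (the figures are illustrative):

```python
def mfca_allocate(total_cost, input_mass, product_mass):
    """Material Flow Cost Accounting sketch: allocate total cost between
    the product ('positive product') and the material loss ('negative
    product') in proportion to mass."""
    if not 0 < product_mass <= input_mass:
        raise ValueError("product mass must be positive and <= input mass")
    product_cost = total_cost * product_mass / input_mass
    loss_cost = total_cost - product_cost
    return product_cost, loss_cost

# 100 kg of material enters, 80 kg leaves as product, total cost 1000
product_cost, loss_cost = mfca_allocate(1000.0, 100.0, 80.0)
```

A conventional income statement would bury the 200 of cost attached to the 20 kg loss inside product cost; making that loss visible is what gives MFCA its diagnostic leverage, including against creative accounting that hides material losses.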

  2. Accommodating environmental variation in population models: metaphysiological biomass loss accounting.

    Science.gov (United States)

    Owen-Smith, Norman

    2011-07-01

    1. There is a pressing need for population models that can reliably predict responses to changing environmental conditions and diagnose the causes of variation in abundance in space as well as through time. In this 'how to' article, it is outlined how standard population models can be modified to accommodate environmental variation in a heuristically conducive way. This approach is based on metaphysiological modelling concepts linking populations within food web contexts and underlying behaviour governing resource selection. Using population biomass as the currency, population changes can be considered at fine temporal scales taking into account seasonal variation. Density feedbacks are generated through the seasonal depression of resources even in the absence of interference competition. 2. Examples described include (i) metaphysiological modifications of Lotka-Volterra equations for coupled consumer-resource dynamics, accommodating seasonal variation in resource quality as well as availability, resource-dependent mortality and additive predation, (ii) spatial variation in habitat suitability evident from the population abundance attained, taking into account resource heterogeneity and consumer choice using empirical data, (iii) accommodating population structure through the variable sensitivity of life-history stages to resource deficiencies, affecting susceptibility to oscillatory dynamics and (iv) expansion of density-dependent equations to accommodate various biomass losses reducing population growth rate below its potential, including reductions in reproductive outputs. Supporting computational code and parameter values are provided. 3. The essential features of metaphysiological population models include (i) the biomass currency enabling within-year dynamics to be represented appropriately, (ii) distinguishing various processes reducing population growth below its potential, (iii) structural consistency in the representation of interacting populations and
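A minimal biomass-currency consumer-resource model with seasonal resource growth, in the spirit of example (i), might look like this (the equations and every parameter value are illustrative assumptions, not the article's models or its supporting code):

```python
import numpy as np

def simulate(days=3650, dt=1.0):
    """Daily Euler integration of resource (R) and consumer (C) biomass.
    Resource growth is seasonal; consumer gains saturate with resource
    availability and are offset by metabolic losses."""
    r, K, a, h, e, m = 0.05, 120.0, 0.08, 50.0, 0.5, 0.02   # assumed parameters
    R, C = 100.0, 10.0
    history = []
    for day in range(int(days / dt)):
        season = 1.0 + 0.8 * np.sin(2.0 * np.pi * day / 365.0)
        intake = a * C * R / (h + R)                   # total consumption
        R += dt * (r * season * R * (1.0 - R / K) - intake)
        C += dt * (e * intake - m * C)                 # gains minus losses
        history.append((R, C))
    return history

history = simulate()
R_end, C_end = history[-1]
```

Because the time step is a day and the currency is biomass, the density feedback emerges from seasonal resource depression rather than from an imposed carrying capacity on the consumer, which is the point of the metaphysiological formulation.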

  3. River water quality modelling: II

    DEFF Research Database (Denmark)

    Shanahan, P.; Henze, Mogens; Koncsos, L.

    1998-01-01

The U.S. EPA QUAL2E model is currently the standard for river water quality modelling. While QUAL2E is adequate for the regulatory situation for which it was developed (the U.S. wasteload allocation process), there is a need for a more comprehensive framework for research and teaching. Moreover, ... and to achieve robust model calibration. Mass balance problems arise from failure to account for mass in the sediment as well as in the water column and due to the fundamental imprecision of BOD as a state variable. (C) 1998 IAWQ. Published by Elsevier Science Ltd. All rights reserved.
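The core BOD/dissolved-oxygen mass balance that QUAL2E generalizes is the classic Streeter-Phelps oxygen-sag equation, sketched here for reference (the parameter values are illustrative):

```python
import numpy as np

def do_deficit(L0, D0, kd, ka, t):
    """Streeter-Phelps dissolved-oxygen deficit D(t) downstream of a BOD
    load: first-order BOD decay at rate kd versus reaeration at rate ka."""
    t = np.asarray(t, dtype=float)
    return (kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t))
            + D0 * np.exp(-ka * t))

# illustrative parameters: 10 mg/L ultimate BOD, no initial deficit,
# kd = 0.3 /day, ka = 0.6 /day; with D0 = 0 the sag bottom falls at
# t_crit = ln(ka/kd) / (ka - kd)
t_crit = np.log(0.6 / 0.3) / (0.6 - 0.3)
```

Treating BOD as a single lumped state variable, as this equation does, is exactly the imprecision the abstract criticizes: mass stored in the sediment and the heterogeneous composition of BOD fall outside the balance.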

  4. Display of the information model accounting system

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2011-12-01

This paper presents the accounting information system in public companies, business technology matrix and data flow diagram. The paper describes the purpose and goals of the accounting process, matrix sub-process and data class. Data flow in the accounting process and the so-called general ledger module are described in detail. Activities of the financial statements and determining the financial statements of the companies are mentioned as well. It is stated how the general ledger module should function and what characteristics it must have. Line graphs will depict indicators of the company’s business success, indebtedness and company’s efficiency coefficients based on financial balance reports, and profit and loss report.

  5. 25 CFR 547.9 - What are the minimum technical standards for Class II gaming system accounting functions?

    Science.gov (United States)

    2010-04-01

... gaming system accounting functions? 547.9 Section 547.9 Indians NATIONAL INDIAN GAMING COMMISSION... systems. (a) Required accounting data. The following minimum accounting data, however named, shall be...) Accounting data storage. If the Class II gaming system electronically maintains accounting data: (1...

  6. Studi Model Penerimaan Tehnologi (Technology Acceptance Model) Novice Accountant

    OpenAIRE

    Rustiana, Rustiana

    2006-01-01

This study investigates the adoption or application of behavior in information technology acceptance. Davis' Technology Acceptance Model is employed to explain perceived usefulness, perceived ease of use, and intention to use in information systems. The respondents were 228 accounting students in management information system. Data was collected by questionnaire and then analyzed by using linear regression analysis and independent t-test. The results are in line with most of the hypotheses, only hypo...

  7. Anisotropic models to account for large borehole washouts to estimate gas hydrate saturations in the Gulf of Mexico Gas Hydrate Joint Industry Project Leg II Alaminos 21 B well

    Science.gov (United States)

    Lee, M.W.; Collett, T.S.; Lewis, K.A.

    2012-01-01

Through the use of 3-D seismic amplitude mapping, several gas hydrate prospects were identified in the Alaminos Canyon (AC) area of the Gulf of Mexico. Two locations were drilled as part of the Gulf of Mexico Gas Hydrate Joint Industry Project Leg II (JIP Leg II) in May of 2009, and a comprehensive set of logging-while-drilling (LWD) logs was acquired at each well site. LWD logs indicated that resistivity in the range of ~2 ohm-m and P-wave velocity in the range of ~1.9 km/s were measured in the target sand interval between 515 and 645 feet below sea floor. These values were slightly elevated relative to those measured in the sediment above and below the target sand. However, the initial well log analysis was inconclusive regarding the presence of gas hydrate in the logged sand interval, mainly because large washouts caused by drilling in the target interval degraded confidence in the well log measurements. To assess gas hydrate saturations in the sedimentary section drilled in the Alaminos Canyon 21B (AC21-B) well, a method of compensating for the effect of washouts on the resistivity and acoustic velocities was developed. The proposed method models the washed-out portion of the borehole as a vertical layer filled with sea water (drilling fluid), and the apparent anisotropic resistivity and velocities caused by a vertical layer are used to correct the measured log values. By incorporating the conventional marine seismic data into the well log analysis, the average gas hydrate saturation in the target sand section in the AC21-B well can be constrained to the range of 8–28%, with 20% being our best estimate.

  8. Accountability

    Science.gov (United States)

    Fielding, Michael; Inglis, Fred

    2017-01-01

    This contribution republishes extracts from two important articles published around 2000 concerning the punitive accountability system suffered by English primary and secondary schools. The first concerns the inspection agency Ofsted, and the second managerialism. Though they do not directly address assessment, they are highly relevant to this…

  9. Uncertainty in Discount Models and Environmental Accounting

    Directory of Open Access Journals (Sweden)

    Donald Ludwig

    2005-12-01

Cost-benefit analysis (CBA is controversial for environmental issues, but is nevertheless employed by many governments and private organizations for making environmental decisions. Controversy centers on the practice of economic discounting in CBA for decisions that have substantial long-term consequences, as do most environmental decisions. Customarily, economic discounting has been calculated at a constant exponential rate, a practice that weights the present heavily in comparison with the future. Recent analyses of economic data show that the assumption of constant exponential discounting should be modified to take into account large uncertainties in long-term discount rates. A proper treatment of this uncertainty requires that we consider returns over a plausible range of assumptions about future discounting rates. When returns are averaged in this way, the schemes with the most severe discounting have a negligible effect on the average after a long period of time has elapsed. This re-examination of economic uncertainty provides support for policies that prevent or mitigate environmental damage. We examine these effects for three examples: a stylized renewable resource, management of a long-lived species (Atlantic Right Whales, and lake eutrophication.
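The central effect can be reproduced in a few lines: averaging discount factors (not rates) across uncertain scenarios yields a certainty-equivalent rate that declines with the horizon (the scenario rates below are illustrative):

```python
import numpy as np

rates = np.array([0.01, 0.04, 0.07])    # equally likely long-run discount rates
t = np.arange(0, 301, dtype=float)      # horizon in years

# certainty-equivalent discount factor: average the factors, not the rates
factor = np.exp(-np.outer(rates, t)).mean(axis=0)

# implied instantaneous discount rate declines toward the lowest scenario
eff_rate = -np.gradient(np.log(factor), t)
```

After a century or two the 7% scenario contributes almost nothing to the average, so distant environmental damages are discounted far less severely than constant exponential discounting at the mean rate would imply.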

  10. The financial accounting model from a system dynamics' perspective

    NARCIS (Netherlands)

    Melse, E.

    2006-01-01

    This paper explores the foundation of the financial accounting model. We examine the properties of the accounting equation as the principal algorithm for the design and the development of a System Dynamics model. Key to the perspective is the foundational requirement that resolves the temporal
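The "accounting equation as algorithm" idea can be sketched as a stock-and-flow invariant: every double-entry posting preserves Assets = Liabilities + Equity (the account names and postings are invented for illustration):

```python
class Ledger:
    """Minimal double-entry ledger preserving Assets = Liabilities + Equity."""

    TYPES = {"cash": "asset", "equipment": "asset",
             "loan": "liability", "equity": "equity"}

    def __init__(self):
        self.balance = {name: 0.0 for name in self.TYPES}

    def post(self, debit, credit, amount):
        """A debit increases assets and decreases claims; a credit reverses that."""
        for account, sign in ((debit, 1.0), (credit, -1.0)):
            if self.TYPES[account] == "asset":
                self.balance[account] += sign * amount
            else:
                self.balance[account] -= sign * amount

    def in_balance(self):
        assets = sum(v for k, v in self.balance.items() if self.TYPES[k] == "asset")
        claims = sum(v for k, v in self.balance.items() if self.TYPES[k] != "asset")
        return abs(assets - claims) < 1e-9

ledger = Ledger()
ledger.post("cash", "equity", 100.0)    # owner invests cash
ledger.post("cash", "loan", 50.0)       # bank loan received
ledger.post("equipment", "cash", 30.0)  # buy equipment with cash
```

Viewed this way, the ledger is a system of stocks updated by flows under an invariant, which is precisely what makes the accounting equation a natural foundation for a System Dynamics model.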

  11. Accountancy Modeling on Intangible Fixed Assets in Terms of the Main Provisions of International Accounting Standards

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2014-12-01

Intangible fixed assets are of great importance to the progress of economic units. In recent years, new approaches have been developed and additions made to old standards, so that intangible assets have gained prominence both in the economic environment and in academia. We develop a practical study of the main approaches to the accounting modeling of intangibles, based on the brand-development research company PRORESEARCH SRL.

  12. Multilayer piezoelectric transducer models combined with Field II

    DEFF Research Database (Denmark)

    Bæk, David; Willatzen, Morten; Jensen, Jørgen Arendt

    2012-01-01

    with a polymer ring, and submerged into water. The transducer models are developed to account for any external electrical loading impedance in the driving circuit. The models are adapted to calculate the surface acceleration needed by the Field II software in predicting pressure pulses at any location in front....... If the three-dimensional model is restricted in its radial movement at the circular boundary both models exhibit identical results. The Field II predicted pressure pulses are found to have oscillating consistency with a 2.0 dB overshoot on the maximum amplitude using the one-dimensional compared to the three...

  13. Testing of a one dimensional model for Field II calibration

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2008-01-01

Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated a model of the transducer’s electro-mechanical impulse response must be included. We...... examine an adapted one dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed...... to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling, one real valued Pz27 parameter set, manufacturer supplied, and one complex valued parameter set found in literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show...

  14. Modelling adversary actions against a nuclear material accounting system

    International Nuclear Information System (INIS)

    Lim, J.J.; Huebel, J.G.

    1979-01-01

A typical nuclear material accounting system employing double-entry bookkeeping is described. A logic diagram is used to model the interactions of the accounting system and the adversary when he attempts to thwart it. Boolean equations are derived from the logic diagram; solution of these equations yields the accounts and records through which the adversary may disguise an SSNM theft, and the collusion requirements needed to accomplish this feat. Some technical highlights of the logic diagram are also discussed.
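The Boolean-equation approach can be illustrated with a deliberately tiny, hypothetical success condition (the variables and structure here are invented for illustration, not taken from the paper): enumerating all truth assignments of a formula derived from a logic diagram reveals which combinations of record access suffice to disguise a theft, i.e. the collusion requirements.

```python
from itertools import product

# Hypothetical miniature Boolean model: the adversary controls (or not)
# the journal (J), the ledger (L), and the inventory record (I).
# Disguise succeeds if both halves of some double entry can be
# falsified consistently: (J and L) or (J and I).
def disguised(J, L, I):
    return (J and L) or (J and I)

# Enumerate all assignments to find the successful collusion sets.
success = [combo for combo in product([False, True], repeat=3)
           if disguised(*combo)]
for J, L, I in success:
    print(f"J={J}, L={L}, I={I}")
```

Even in this toy case the enumeration shows that control of the journal is necessary in every successful combination, which is exactly the kind of collusion requirement the Boolean solution exposes.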

  15. Project MAP: Model Accounting Plan for Special Education. Final Report.

    Science.gov (United States)

    Rossi, Robert J.

    The Model Accounting Plan (MAP) is a demographic accounting system designed to meet three major goals related to improving planning, evaluation, and monitoring of special education programs. First, MAP provides local-level data for administrators and parents to monitor the progress, transition patterns, expected attainments, and associated costs…

  16. The Relevance of the CIPP Evaluation Model for Educational Accountability.

    Science.gov (United States)

    Stufflebeam, Daniel L.

    The CIPP Evaluation Model was originally developed to provide timely information in a systematic way for decision making, which is a proactive application of evaluation. This article examines whether the CIPP model also serves the retroactive purpose of providing information for accountability. Specifically, can the CIPP Model adequately assist…

  17. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Niculae Feleaga

    2006-04-01

The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework serves as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: historical cost, current cost, realisable (settlement) value, and present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The many accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  18. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Liliana Feleaga

    2006-06-01

The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework serves as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: historical cost, current cost, realisable (settlement) value, and present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The many accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  19. Can An Amended Standard Model Account For Cold Dark Matter?

    International Nuclear Information System (INIS)

    Goldhaber, Maurice

    2004-01-01

    It is generally believed that one has to invoke theories beyond the Standard Model to account for cold dark matter particles. However, there may be undiscovered universal interactions that, if added to the Standard Model, would lead to new members of the three generations of elementary fermions that might be candidates for cold dark matter particles

  20. SNMSP II: A system to fully automate special nuclear materials accountability reporting for electric utilities

    International Nuclear Information System (INIS)

    Pareto, V.; Venegas, R.

    1987-01-01

The USNRC requires each licensee who is authorized to possess Special Nuclear Materials (SNM) to prepare and submit reports concerning SNM received, produced, possessed, transferred, consumed, disposed of, or lost. These SNM accountability reports, which need to be submitted twice a year, contain detailed information on the origin, quantity, and type of SNM for several locations. The amount of detail required makes these reports very time-consuming and error prone when prepared manually. Yankee Atomic is developing an IBM PC-based computer code that fully automates the process of generating SNM accountability reports. The program, called SNMSP II, prints a number of summaries including facsimiles of the NRC/DOE-741, 742, 742C, and RW-859 reports in a format that can be submitted directly to the NRC/DOE. SNMSP II is menu-driven and is especially designed for people with little or no computer training. Input can be either from a mainframe-based corporate data base or manually through user-friendly screens. In addition, extensive quality assurance features are available to ensure the security and accuracy of the data. This paper discusses the major features of the code and describes its implementation at Yankee.

  1. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants and using the Jackknife Model Averaging approach, 48 different models have been estimated, with 1254 equations estimated and averaged for each model. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of the oil trade balance results in worsening of other components (probably the non-oil trade balance) of the CA, and (iii) that the positive influence of terms of trade reveals the functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in the case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals citizens' high expectations of future income growth, which has a negative impact on the CA.

  2. Stochastic models in risk theory and management accounting

    NARCIS (Netherlands)

    Brekelmans, R.C.M.

    2000-01-01

This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds on the probability of ruin for the classical risk process extended with a constant interest

  3. Supo Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Wass, Alexander Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-14

This report describes the continuation of the Computational Fluid Dynamics (CFD) model of the Supo cooling system described in the report Supo Thermal Model Development [1] by Cynthia Buechler. The goal of this report is to estimate the natural convection heat transfer coefficient (HTC) of the system using the CFD results and to compare those results to the remaining past operational data. Also, the correlation for determining radiolytic gas bubble size is reevaluated using the larger simulation sample size. The background, solution vessel geometry, mesh, material properties, and boundary conditions are developed in the same manner as in the previous report, although the material properties and boundary conditions are determined using the appropriate experimental results for each individual power level.

  4. Accounting for Business Models: Increasing the Visibility of Stakeholders

    Directory of Open Access Journals (Sweden)

    Colin Haslam

    2015-01-01

Purpose: This paper conceptualises a firm’s business model employing stakeholder theory as a central organising element to help inform the purpose and objective(s) of business model financial reporting and disclosure. Framework: Firms interact with a complex network of primary and secondary stakeholders to secure the value proposition of a firm’s business model. This value proposition is itself a complex amalgam of value creating, value capturing and value manipulating arrangements with stakeholders. From a financial accounting perspective, the purpose of the value proposition for a firm’s business model is to sustain liquidity and solvency as a going concern. Findings: This article argues that stakeholder relations impact upon the financial viability of a firm’s business model value proposition. However, current financial reporting by function of expenses and the central organising objectives of the accounting conceptual framework conceal firm-stakeholder relations and their impact on reported financials. Practical implications: The practical implication of our paper is that ‘Business Model’ financial reporting would require a reorientation in the accounting conceptual framework that defines the objectives and purpose of financial reporting. This reorientation would involve reporting about stakeholder relations and their impact on a firm’s financials, not simply reporting financial information to ‘investors’. Social implications: Business model financial reporting has the potential to be stakeholder inclusive because the numbers and narratives reported by firms in their annual financial statements will increase the visibility of stakeholder relations and how these are being managed. Originality/value: This paper’s original perspective is that it argues that a firm’s business model is structured out of stakeholder relations. It presents the firm’s value proposition as the product of value creating, capturing and

  5. Accounting for small scale heterogeneity in ecohydrologic watershed models

    Science.gov (United States)

    Burke, W.; Tague, C.

    2017-12-01

Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach.

  6. Accounting for household heterogeneity in general equilibrium economic growth models

    International Nuclear Information System (INIS)

    Melnikov, N.B.; O'Neill, B.C.; Dalton, M.G.

    2012-01-01

We describe and evaluate a new method of aggregating heterogeneous households that allows for the representation of changing demographic composition in a multi-sector economic growth model. The method is based on a utility and labor supply calibration that takes into account time variations in demographic characteristics of the population. We test the method using the Population-Environment-Technology (PET) model by comparing energy and emissions projections employing the aggregate representation of households to projections representing different household types explicitly. Results show that the difference between the two approaches in terms of total demand for energy and consumption goods is negligible for a wide range of model parameters. Our approach allows the effects of population aging, urbanization, and other forms of compositional change on energy demand and CO2 emissions to be estimated and compared in a computationally manageable manner using a representative household under assumptions and functional forms that are standard in economic growth models.
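The core aggregation idea can be sketched as a demographically weighted average over household types; the type names, population shares, and demand figures below are purely illustrative and are not taken from the PET model:

```python
# Illustrative only: a "representative household" built by weighting
# heterogeneous household types by their (time-varying) population shares.
household_types = {
    "young_urban": {"share": 0.3, "energy_demand": 1.2},
    "family":      {"share": 0.5, "energy_demand": 2.0},
    "elderly":     {"share": 0.2, "energy_demand": 0.8},
}

def aggregate_demand(types):
    """Weighted-average demand of the representative household."""
    return sum(t["share"] * t["energy_demand"] for t in types.values())

print(round(aggregate_demand(household_types), 2))  # 1.52
```

Letting the shares shift over time (e.g. a growing "elderly" share) is how compositional change such as population aging feeds through to aggregate energy demand in this kind of scheme.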

  7. ACCOUNTING MODELS FOR OUTWARD PROCESSING TRANSACTIONS OF GOODS

    Directory of Open Access Journals (Sweden)

    Lucia PALIU-POPA

    2010-09-01

In modern international trade, commercial operations, including outward processing transactions of goods, are expanding significantly. The motivations for expanding these international economic affairs, which take place in a complex legal framework, consist of capitalization of production capacity for some partners and of the brand for others, leading to a significant commercial profit and thus increasing the currency contribution, without excluding the high and complex nature of risks, both trading and extra-trading. Starting from the content of processing transactions of goods, as part of combined commercial operations, and after clarifying the tax matters which affect the entry in the accounts, we present models for reflecting in the accounting of an entity established in Romania the operations of outward processing of goods, whether the provider of such operations belongs to the extra-Community or Community area.

  8. Principles of Public School Accounting. State Educational Records and Reports Series: Handbook II-B.

    Science.gov (United States)

    Adams, Bert K.; And Others

    This handbook discusses the following primary aspects of school accounting: Definitions and principles; opening the general ledger; recording the approved budget; a sample month of transactions; the balance sheet, monthly, and annual reports; subsidiary journals; payroll procedures; cafeteria fund accounting; debt service accounting; construction…

  9. Accounting for Water Insecurity in Modeling Domestic Water Demand

    Science.gov (United States)

    Galaitsis, S. E.; Huber-lee, A. T.; Vogel, R. M.; Naumova, E.

    2013-12-01

Water demand management uses price elasticity estimates to predict consumer demand in relation to water pricing changes, but studies have shown that many additional factors affect water consumption. Development scholars document the need for water security; however, much of the water security literature focuses on broad policies which can influence water demand. Previous domestic water demand studies have not considered how water security can affect a population's consumption behavior. This study is the first to model the influence of water insecurity on water demand. A subjective indicator scale measuring water insecurity among consumers in the Palestinian West Bank is developed and included as a variable to explore how perceptions of control, or lack thereof, impact consumption behavior and resulting estimates of price elasticity. A multivariate regression model demonstrates the significance of a water insecurity variable for data sets encompassing disparate water access. When accounting for insecurity, the R-squared value improves and the marginal price a household is willing to pay becomes a significant predictor of household quantity consumption. The model denotes that, with all other variables held equal, a household will buy more water when the users are more water insecure. Though the reasons behind this trend require further study, the findings suggest broad policy implications by demonstrating that water distribution practices in scarcity conditions can promote consumer welfare and efficient water use.

  10. Comprehensive impedance model of cobalt deposition in sulfate solutions accounting for homogeneous reactions and adsorptive effects

    International Nuclear Information System (INIS)

    Vazquez-Arenas, Jorge; Pritzker, Mark

    2011-01-01

A comprehensive physicochemical model for cobalt deposition onto a cobalt rotating disk electrode in sulfate-borate (pH 3) solutions is derived and statistically fit to experimental EIS spectra obtained over a range of CoSO4 concentrations, overpotentials and rotation speeds. The model accounts for H+ and water reduction, homogeneous reactions and mass transport within the boundary layer. Based on a thermodynamic analysis, the species CoSO4(aq), B(OH)3(aq), B3O3(OH)4-, H+ and OH- and two homogeneous reactions (B(OH)3(aq) hydrolysis and water dissociation) are included in the model. Kinetic and transport parameters are estimated by minimizing the sum-of-squares error between the model and experimental measurements using a simplex method. The electrode response is affected most strongly by parameters associated with the first step of Co(II) reduction, reflecting its control of the rate of Co deposition, and is moderately sensitive to the parameters for H+ reduction and the Co(II) diffusion coefficient. Water reduction is found not to occur to any significant extent under the conditions studied. These trends are consistent with those obtained by fitting equivalent electrical circuits to the experimental spectra. The simplest circuit that best fits the data consists of two RQ elements (resistor-constant phase element) in parallel or series with the solution resistance.

  11. The origins of the spanish railroad accounting model: a qualitative study of the MZA'S operating account (1856-1874

    Directory of Open Access Journals (Sweden)

    Beatriz Santos

    2014-12-01

The lack of external regulation of the form and substance of the financial statements that railroad companies had to report during the implementation phase of the Spanish railway meant that each company developed its own accounting model. In this study we describe, analyse and interpret the most relevant changes in the accounting information in relation to the business result. Using the analysis of a historical case, we developed an ad-hoc research tool for recording all the changes to the operating account. The results of the study show that MZA's operating account reflected the particularities of the railway business, although subject to limitations, and that the reported information improved during the study period in terms of relevance and reliability.

  12. Accrual based accounting implementation: An approach for modelling major decisions

    Directory of Open Access Journals (Sweden)

    Ratno Agriyanto

    2016-12-01

Over the last three decades, the implementation of accrual-based accounting in government institutions has been a major issue in Indonesia, proceeding amid debate about the usefulness of accounting information for decision-making. Empirical studies show that accrual-based accounting information in government institutions is often not used for decision making. The research objective was to determine the impact of the implementation of accrual-based accounting on the use of accrual-based accounting information for decision-making. We used survey questionnaires. The data were processed by SEM using the statistical software WarpPLS. The results showed that the implementation of accrual-based accounting in the City Government of Semarang is significantly and positively associated with decision-making. Another important finding is that officials of the City Government of Semarang exhibit the personality trait of low tolerance of ambiguity, which has a negative effect on the relationship between the implementation of accrual-based accounting and decision making.

  13. Bedrijfsrisico's van de accountant en het Audit Risk Model

    NARCIS (Netherlands)

    Wallage, Ph.; Klijnsmit, P.; Sodekamp, M.

    2003-01-01

In recent years, the business risk of the auditing accountant has increased sharply. The accountant's business risks are increasingly becoming an obstacle to accepting engagements. This article pays attention to the way in which the business risks

  14. Spectral modeling of Type II SNe

    Science.gov (United States)

    Dessart, Luc

    2015-08-01

The red supergiant phase represents the final stage of evolution in the life of moderate-mass (8-25 Msun) massive stars. Hidden from view, the core changes its structure considerably, progressing through the advanced stages of nuclear burning, and eventually becomes degenerate. Upon reaching the Chandrasekhar mass, this Fe or ONeMg core collapses, leading to the formation of a proto-neutron star. A Type II supernova results if the shock that forms at core bounce eventually wins over the envelope accretion and reaches the progenitor surface. The electromagnetic display of such core-collapse SNe starts with this shock breakout, and persists for months as the ejecta releases the energy deposited initially by the shock or continuously through radioactive decay. Over a timescale of weeks to months, the originally optically thick ejecta thins out and turns nebular. SN radiation contains a wealth of information about the explosion physics (energy, explosive nucleosynthesis) and the progenitor properties (structure and composition). Polarised radiation also offers signatures that can help constrain the morphology of the ejecta. In this talk, I will review the current status of Type II SN spectral modelling, and emphasise that a proper solution requires a time-dependent treatment of the radiative transfer problem. I will discuss the wealth of information that can be gleaned from spectra as well as light curves, from both early times (photospheric phase) and late times (nebular phase). I will discuss the diversity of Type II SNe properties and how they are related to the diversity of the red supergiant stars from which they originate. SN radiation offers an alternate means of constraining the properties of red supergiant stars. To wrap up, I will illustrate how SNe II-P can also be used as probes, for example to constrain the metallicity of their environment.

  15. Creative Accounting and Financial Reporting: Model Development and Empirical Testing

    OpenAIRE

    Fizza Tassadaq; Qaisar Ali Malik

    2015-01-01

This paper empirically and critically investigates the issue of creative accounting in financial reporting. It not only analyzes the ethical responsibility of creative accounting but also focuses on other factors which influence financial reporting, such as the role of auditors, the role of government regulations or international standards, the impact of manipulative behaviors and the impact of an individual's ethical values. Data have been collected through a structured questionnaire from the industrial sector. D...

  16. Creative Accounting & Financial Reporting: Model Development & Empirical Testing

    OpenAIRE

    Tassadaq, Fizza; Malik, Qaisar Ali

    2015-01-01

This paper empirically and critically investigates the issue of creative accounting in financial reporting. It not only analyzes the ethical responsibility of creative accounting but also focuses on other factors which influence financial reporting, such as the role of auditors, the role of government regulations or international standards, the impact of manipulative behaviors and the impact of an individual's ethical values. Data have been collected through a structured questionnaire from the industrial sector. Descri...

  17. Joint genome-wide prediction in several populations accounting for randomness of genotypes: A hierarchical Bayes approach. II: Multivariate spike and slab priors for marker effects and derivation of approximate Bayes and fractional Bayes factors for the complete family of models.

    Science.gov (United States)

    Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A

    2017-03-21

This study corresponds to the second part of a companion paper devoted to the development of Bayesian multiple regression models accounting for randomness of genotypes in across-population genome-wide prediction. This family of models considers heterogeneous and correlated marker effects and allelic frequencies across populations, and has the ability to consider records from non-genotyped individuals and individuals with missing genotypes in any subset of loci without the need for previous imputation, taking into account uncertainty about imputed genotypes. This paper extends this family of models by considering multivariate spike and slab conditional priors for marker allele substitution effects, and contains derivations of approximate Bayes factors and fractional Bayes factors to compare models from part I and those developed here with their null versions. These null versions correspond to simpler models ignoring heterogeneity of populations, but still accounting for randomness of genotypes. For each marker locus, the spike component of the priors corresponded to a point mass at 0 in R^S, where S is the number of populations, and the slab component was an S-variate Gaussian distribution; independent conditional priors were assumed. For the Gaussian components, covariance matrices were assumed to be either the same for all markers or different for each marker. For null models, the priors were simply univariate versions of these finite mixture distributions. Approximate algebraic expressions for Bayes factors and fractional Bayes factors were found using the Laplace approximation. Using the simulated datasets described in part I, these models were implemented and compared with models derived in part I using measures of predictive performance based on squared Pearson correlations, the Deviance Information Criterion, Bayes factors, and fractional Bayes factors. The extensions presented here enlarge our family of genome-wide prediction models, making it more flexible in the
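A spike and slab prior of the kind described can be illustrated by simulation; this is a generic sketch of the mixture (point mass at the zero vector in R^S plus an S-variate Gaussian with independent components), with invented values of pi0, sigma and S, not the paper's prior:

```python
import random

def draw_marker_effects(S, pi0, sigma, rng):
    """Draw one marker's allele-substitution effects across S populations
    from a spike-and-slab mixture: the zero vector with probability pi0
    (spike), else S independent Gaussian components (slab)."""
    if rng.random() < pi0:
        return [0.0] * S  # spike: marker has no effect in any population
    return [rng.gauss(0.0, sigma) for _ in range(S)]  # slab

rng = random.Random(42)
draws = [draw_marker_effects(S=3, pi0=0.9, sigma=0.1, rng=rng)
         for _ in range(1000)]
zero_fraction = sum(d == [0.0, 0.0, 0.0] for d in draws) / len(draws)
print(round(zero_fraction, 2))  # close to pi0 = 0.9
```

The spike induces sparsity (most markers contribute nothing), while the slab lets the effects of a selected marker differ across populations, which is what makes the multivariate version suited to across-population prediction.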

  18. A Unifying Modeling of Plant Shoot Gravitropism With an Explicit Account of the Effects of Growth

    Directory of Open Access Journals (Sweden)

    Renaud eBastien

    2014-04-01

Gravitropism, the slow reorientation of plant growth in response to gravity, is a major determinant of the form and posture of land plants. Recently a universal model of shoot gravitropism, the AC model, has been presented, in which the dynamics of the tropic movement is determined only by the contradictory controls of (i) graviception, which tends to curve the plant towards the vertical, and (ii) proprioception, which tends to keep the stem straight. This model was found valid over a large range of species and over two orders of magnitude in organ size. However, the motor of the movement, the elongation, was neglected in the AC model. Taking explicit growth effects into account, however, requires consideration of the material derivative, i.e. the rate of change of curvature bound to an expanding and convected organ element. Here we show that it is possible to rewrite the material equation of curvature in a compact simplified form that expresses the curvature variation directly as a function of the median elongation and of the distribution of the differential growth. Through this extended model, called the ACE model, two main destabilizing effects of growth on the tropic movement are identified: (i) the passive orientation drift, which occurs when a curved element elongates without differential growth, and (ii) the fixed curvature, which occurs when an element leaves the elongation zone and is no longer able to change its curvature actively. By comparing the AC and ACE models to experiments, these two effects were however found negligible, revealing a probable selection for rapid convergence to the steady-state shape during the tropic movement so as to escape the growth destabilizing effects, involving in particular a selection on proprioceptive sensitivity. The simplified AC model can then be used to analyze gravitropism and posture control in actively elongating plant organs without significant information loss.

  19. A Model Driven Approach to domain standard specifications examplified by Finance Accounts receivable/ Accounts payable

    OpenAIRE

    Khan, Bahadar

    2005-01-01

This thesis was written as part of a master's degree at the University of Oslo. The thesis work was conducted at SINTEF and carried out in the period between November 2002 and April 2005. This thesis might be interesting to anyone interested in a Domain Standard Specification Language developed using the MDA approach to software development. The Model Driven Architecture (MDA) allows the specification of system functionality to be separated from its implementation on any specific technolo...

  20. Models and Rules of Evaluation in International Accounting

    OpenAIRE

    Liliana Feleaga; Niculae Feleaga

    2006-01-01

The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be mistaken for cost, even though value is freque...

  1. Accounting for heterogeneity of public lands in hedonic property models

    Science.gov (United States)

    Charlotte Ham; Patricia A. Champ; John B. Loomis; Robin M. Reich

    2012-01-01

    Open space lands, national forests in particular, are usually treated as homogeneous entities in hedonic price studies. Failure to account for the heterogeneous nature of public open spaces may result in inappropriate inferences about the benefits of proximate location to such lands. In this study the hedonic price method is used to estimate the marginal values for...

  2. Material control in nuclear fuel fabrication facilities. Part II. Accountability, instrumentation and measurement techniques in fuel fabrication facilities

    International Nuclear Information System (INIS)

    Borgonovi, G.M.; McCartin, T.J.; McDaniel, T.; Miller, C.L.; Nguyen, T.

    1978-01-01

This report describes the measurement techniques, the instrumentation, and the procedures used in accountability and control of nuclear materials, as they apply to fuel fabrication facilities. A general discussion is given of instrumentation and measurement techniques which are presently used or being considered for fuel fabrication facilities. Those aspects which are most significant from the point of view of satisfying regulatory constraints have been emphasized. Sensors and measurement devices are discussed, together with their interfacing into a computerized system designed to permit real-time data collection and analysis. Estimates of accuracy and precision of measurement techniques are given and, where applicable, estimates of associated costs are presented. A general description of material control and accounting is also included. In this section, the general principles of nuclear material accounting are reviewed first (closure of the material balance). After a discussion of the most current techniques used to calculate the limit of error on inventory difference, a number of advanced statistical techniques are reviewed. The rest of the section deals with some regulatory aspects of data collection and analysis for accountability purposes, and with the overall effectiveness of accountability in detecting diversion attempts in fuel fabrication facilities. A specific example of the application of the accountability methods to a model fuel fabrication facility is given. The effect of random and systematic errors on the total material uncertainty is discussed, together with the effect on uncertainty of the length of the accounting period.
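The closure of the material balance mentioned above reduces to a simple identity, with an alarm threshold set by the limit of error on the inventory difference. The sketch below is schematic: the quantities and variances are invented, and a real LEID calculation must also propagate correlated systematic errors, which this toy (independent-errors) version omits.

```python
import math

def inventory_difference(beginning, additions, ending, removals):
    """ID = (beginning inventory + additions) - (ending inventory + removals)."""
    return (beginning + sum(additions)) - (ending + sum(removals))

def leid(variances, k=2.0):
    """Limit of Error on Inventory Difference: k * sqrt(summed variances).

    Assumes independent measurement errors; k=2 gives ~95% coverage.
    """
    return k * math.sqrt(sum(variances))

# invented kg-scale figures for one accounting period
mid = inventory_difference(1200.0, [400.0, 350.0], 1180.0, [745.0])
limit = leid([4.0, 1.0, 1.0, 4.0, 2.5])
alarm = abs(mid) > limit   # an ID exceeding the LEID triggers investigation
```

In this toy period the ID is 25 against a LEID of about 7.1, so the balance would not close and the facility would have to reconcile the discrepancy.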

  3. Modelling Financial-Accounting Decisions by Means of OLAP Tools

    Directory of Open Access Journals (Sweden)

    Diana Elena CODREAN

    2011-03-01

Full Text Available At present, one can say that a company's good running largely depends on the quantity and quality of the information it relies on when making decisions. The information needed to underlie decisions can only be obtained through a high-performing information system which makes it possible for data to be shown quickly, synthetically and truly, while also providing the opportunity for complex analyses and predictions. In such circumstances, computerized accounting systems, too, have grown in complexity by means of data-analysis solutions such as OLAP and Data Mining, which make it possible to perform a multidimensional analysis of financial-accounting data, detect potential frauds, reveal information hidden in the data, and establish trends for certain indicators, therefore ensuring useful information for a company's decision making.

  4. Accounting for Epistemic and Aleatory Uncertainty in Early System Design, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — This project extends Probability Bounds Analysis to model epistemic and aleatory uncertainty during early design of engineered systems in an Integrated Concurrent...

  5. SEBAL-A: A Remote Sensing ET Algorithm that Accounts for Advection with Limited Data. Part II: Test for Transferability

    Directory of Open Access Journals (Sweden)

    Mcebisi Mkhwanazi

    2015-11-01

Full Text Available Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET when there is advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected energy, which required the development of a wind function. In Part I, the modified SEBAL model (SEBAL-A) was developed and validated on well-watered alfalfa of a standard height of 40–60 cm. In this Part II, SEBAL-A was tested on different crops and irrigation treatments in order to determine its performance under varying conditions. The crops used for the transferability test were beans (Phaseolus vulgaris L.), wheat (Triticum aestivum L.) and corn (Zea mays L.). The estimated ET using SEBAL-A was compared to actual ET measured using a Bowen Ratio Energy Balance (BREB) system. Results indicated that SEBAL-A estimated ET fairly well for beans and wheat, only showing some slight underestimation, with a Mean Bias Error (MBE) of −0.7 mm·d−1 (−11.3%), a Root Mean Square Error (RMSE) of 0.82 mm·d−1 (13.9%) and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.64. On corn, SEBAL-A resulted in an ET estimation error MBE of −0.7 mm·d−1 (−9.9%), a RMSE of 1.59 mm·d−1 (23.1%) and NSCE = 0.24. This result shows an improvement on the original SEBAL model, which for the same data resulted in an ET MBE of −1.4 mm·d−1 (−20.4%), a RMSE of 1.97 mm·d−1 (28.8%) and a NSCE of −0.18. When SEBAL-A was tested on only fully irrigated corn, it performed well, resulting in no bias, i.e., MBE of 0.0 mm·d−1, RMSE of 0.78 mm·d−1 (10.7%) and NSCE of 0.82. The SEBAL-A model showed less or no improvement on corn that was either water-stressed or at early stages of growth. The errors incurred under these conditions were not due to advection not accounted for but rather were due to the nature of SEBAL and SEBAL-A being single-source energy balance models and
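The error statistics quoted above (MBE, RMSE and NSCE) are standard and easy to reproduce. The sketch below assumes paired modeled/observed daily ET series; the values are invented for illustration, not the study's data.

```python
import math

def error_stats(modeled, observed):
    """Mean Bias Error, RMSE and Nash-Sutcliffe coefficient for paired series."""
    n = len(observed)
    mbe = sum(m - o for m, o in zip(modeled, observed)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(modeled, observed)) / n)
    mean_obs = sum(observed) / n
    ss_res = sum((o - m) ** 2 for m, o in zip(modeled, observed))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    nsce = 1.0 - ss_res / ss_tot       # 1 = perfect, <0 = worse than the mean
    return mbe, rmse, nsce

# hypothetical daily ET values in mm/d (model vs. BREB-style observation)
mbe, rmse, nsce = error_stats([5.1, 6.0, 4.2, 6.8], [5.6, 6.4, 4.9, 7.1])
```

A negative MBE, as in the abstract, indicates systematic underestimation; NSCE compares the model against simply predicting the observed mean.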

  6. The Charitable Trust Model: An Alternative Approach For Department Of Defense Accounting

    Science.gov (United States)

    2016-12-01

…unqualified opinion creates accountability issues that extend beyond the agency by making an audit of the U.S. consolidated financial statements challenging… the foundation of contemporary reporting. The chapter then discusses the establishment and purpose of the Federal Accounting Standards Advisory… THE CHARITABLE TRUST MODEL: AN ALTERNATIVE APPROACH FOR DEPARTMENT OF DEFENSE ACCOUNTING, by Gerald V. Weers Jr., December 2016. Thesis Advisor: Philip J…

  7. Higgs potential in the type II seesaw model

    International Nuclear Information System (INIS)

    Arhrib, A.; Benbrik, R.; Chabab, M.; Rahili, L.; Ramadan, J.; Moultaka, G.; Peyranere, M. C.

    2011-01-01

The standard model Higgs sector, extended by one weak gauge triplet of scalar fields with a very small vacuum expectation value, is a very promising setting to account for neutrino masses through the so-called type II seesaw mechanism. In this paper we consider the general renormalizable doublet/triplet Higgs potential of this model. We perform a detailed study of its main dynamical features that depend on five dimensionless couplings and two mass parameters after spontaneous symmetry breaking, and highlight the implications for the Higgs phenomenology. In particular, we determine (i) the complete set of tree-level unitarity constraints on the couplings of the potential and (ii) the exact tree-level boundedness-from-below constraints on these couplings, valid for all directions. When combined, these constraints delineate precisely the theoretically allowed parameter space domain within our perturbative approximation. Among the seven physical Higgs states of this model, the mass of the lighter (heavier) CP-even state h0 (H0) will always satisfy a theoretical upper (lower) bound that is reached for a critical value μ_c of μ (the mass parameter controlling triple couplings among the doublet/triplet Higgses). Saturating the unitarity bounds, we find an upper bound on m_h0 and two distinct regimes, μ < μ_c and μ > μ_c. In the first regime the Higgs sector is typically very heavy, and only h0, which becomes SM-like, could be accessible to the LHC. In contrast, in the second regime, somewhat overlooked in the literature, most of the Higgs sector is light. In particular, the heaviest state H0 becomes SM-like, the lighter states being the CP-odd Higgs, the (doubly) charged Higgses, and a decoupled h0, possibly leading to a distinctive phenomenology at the colliders.
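For context, the type II seesaw relation the abstract alludes to can be written schematically (conventions and normalisations vary between papers; here v_d is the doublet vev, M_Δ the triplet mass, and μ the same triple-coupling mass parameter discussed above):

```latex
m_\nu \;\simeq\; Y_\Delta\, v_\Delta ,
\qquad
v_\Delta \;\simeq\; \frac{\mu\, v_d^{\,2}}{\sqrt{2}\, M_\Delta^{2}}
```

A small μ or a large M_Δ thus yields a naturally small triplet vev v_Δ, and hence small neutrino masses, without requiring tiny Yukawa couplings Y_Δ.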

  8. Resource Allocation Models and Accountability: A Jamaican Case Study

    Science.gov (United States)

    Nkrumah-Young, Kofi K.; Powell, Philip

    2008-01-01

    Higher education institutions (HEIs) may be funded privately, by the state or by a mixture of the two. Nevertheless, any state financing of HE necessitates a mechanism to determine the level of support and the channels through which it is to be directed; that is, a resource allocation model. Public funding, through resource allocation models,…

  9. Modeling Antibiotic Tolerance in Biofilms by Accounting for Nutrient Limitation

    OpenAIRE

    Roberts, Mark E.; Stewart, Philip S.

    2004-01-01

    A mathematical model of biofilm dynamics was used to investigate the protection from antibiotic killing that can be afforded to microorganisms in biofilms based on a mechanism of localized nutrient limitation and slow growth. The model assumed that the rate of killing by the antibiotic was directly proportional to the local growth rate. Growth rates in the biofilm were calculated by using the local concentration of a single growth-limiting substrate with Monod kinetics. The concentration prof...
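The core assumption described above, killing proportional to the local Monod growth rate, fits in a few lines. The sketch below is a zero-dimensional reduction (it takes the local substrate concentration as given instead of solving the reaction-diffusion profile), and the parameter values are illustrative.

```python
import math

def biofilm_survival(s_local, t, k_max=1.0, mu_max=0.3, K_s=0.2):
    """Survival fraction when killing rate tracks the local growth rate.

    dB/dt = -k_max * (mu(S)/mu_max) * B, with Monod growth
    mu(S) = mu_max * S / (K_s + S). Parameter values are illustrative.
    """
    mu = mu_max * s_local / (K_s + s_local)
    return math.exp(-k_max * (mu / mu_max) * t)

surface = biofilm_survival(1.0, t=10.0)    # substrate-replete biofilm surface
interior = biofilm_survival(0.01, t=10.0)  # nutrient-depleted interior
```

Even with identical antibiotic exposure, the slow-growing interior cells survive orders of magnitude better, which is the nutrient-limitation protection mechanism the model investigates.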

  10. Accountability: a missing construct in models of adherence behavior and in clinical practice.

    Science.gov (United States)

    Oussedik, Elias; Foy, Capri G; Masicampo, E J; Kammrath, Lara K; Anderson, Robert E; Feldman, Steven R

    2017-01-01

    Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients' motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8-12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on the Self-Determination Theory, accountability can be considered in a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability) to patients' autonomous internal desire to please a respected health care provider (autonomous accountability), the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura's Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well as the testing and refinement of adherence interventions that make use of this critical determinant of human behavior.

  11. Accounting for ecosystem services in Life Cycle Assessment, Part II: toward an ecologically based LCA.

    Science.gov (United States)

    Zhang, Yi; Baral, Anil; Bakshi, Bhavik R

    2010-04-01

    Despite the essential role of ecosystem goods and services in sustaining all human activities, they are often ignored in engineering decision making, even in methods that are meant to encourage sustainability. For example, conventional Life Cycle Assessment focuses on the impact of emissions and consumption of some resources. While aggregation and interpretation methods are quite advanced for emissions, similar methods for resources have been lagging, and most ignore the role of nature. Such oversight may even result in perverse decisions that encourage reliance on deteriorating ecosystem services. This article presents a step toward including the direct and indirect role of ecosystems in LCA, and a hierarchical scheme to interpret their contribution. The resulting Ecologically Based LCA (Eco-LCA) includes a large number of provisioning, regulating, and supporting ecosystem services as inputs to a life cycle model at the process or economy scale. These resources are represented in diverse physical units and may be compared via their mass, fuel value, industrial cumulative exergy consumption, or ecological cumulative exergy consumption or by normalization with total consumption of each resource or their availability. Such results at a fine scale provide insight about relative resource use and the risk and vulnerability to the loss of specific resources. Aggregate indicators are also defined to obtain indices such as renewability, efficiency, and return on investment. An Eco-LCA model of the 1997 economy is developed and made available via the web (www.resilience.osu.edu/ecolca). An illustrative example comparing paper and plastic cups provides insight into the features of the proposed approach. The need for further work in bridging the gap between knowledge about ecosystem services and their direct and indirect role in supporting human activities is discussed as an important area for future work.

  12. Accounting Fundamentals and Variations of Stock Price: Methodological Refinement with Recursive Simultaneous Model

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki

    2013-01-01

This study investigates the association between accounting fundamentals and variations of stock prices using a recursive simultaneous equation model. The accounting fundamentals consist of earnings yield, book value, profitability, growth opportunities and discount rate. The prior single-relationship model has been investigated by Chen and Zhang (2007), Sumiyana (2011) and Sumiyana et al. (2010). They assume that all accounting fundamentals are associated directly and linearly with stock returns. This study ...

  13. ACCOUNTING FUNDAMENTALS AND VARIATIONS OF STOCK PRICE: METHODOLOGICAL REFINEMENT WITH RECURSIVE SIMULTANEOUS MODEL

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki

    2015-01-01

This study investigates the association between accounting fundamentals and variations of stock prices using a recursive simultaneous equation model. The accounting fundamentals consist of earnings yield, book value, profitability, growth opportunities and discount rate. The prior single-relationship model has been investigated by Chen and Zhang (2007), Sumiyana (2011) and Sumiyana et al. (2010). They assume that all accounting fundamentals are associated directly and linearly with stock returns. This study ...

  14. Applying the International Medical Graduate Program Model to Alleviate the Supply Shortage of Accounting Doctoral Faculty

    Science.gov (United States)

    HassabElnaby, Hassan R.; Dobrzykowski, David D.; Tran, Oanh Thikie

    2012-01-01

    Accounting has been faced with a severe shortage in the supply of qualified doctoral faculty. Drawing upon the international mobility of foreign scholars and the spirit of the international medical graduate program, this article suggests a model to fill the demand in accounting doctoral faculty. The underlying assumption of the suggested model is…

  15. A cellular automation model accounting for bicycle's group behavior

    Science.gov (United States)

    Tang, Tie-Qiao; Rui, Ying-Xu; Zhang, Jian; Shang, Hua-Yan

    2018-02-01

Recently, the bicycle has again become an important means of transport in China. Due to the merits of the bicycle, group behavior is widespread in urban traffic systems. However, little effort has been made to explore the impacts of group behavior on bicycle flow. In this paper, we propose a CA (cellular automaton) model with group behavior to explore the complex traffic phenomena caused by shoulder group behavior and following group behavior on an open road. The numerical results illustrate that the proposed model can qualitatively describe the impacts of the two kinds of group behavior on bicycle flow and that the effects are related to the mode and size of the group behavior. The results can help us better understand the impacts of bicycles' group behavior on urban traffic systems and effectively control such behavior.
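A minimal open-road cellular automaton conveys the flavor of such models. The sketch below keeps only generic CA dynamics (acceleration, headway constraint, random slowdown, open exit boundary) and deliberately omits the paper's specific shoulder-group and following-group rules, which are not reproduced here.

```python
import random

def ca_step(road, v_max=2, p_slow=0.1, rng=random.Random(42)):
    """One update of a minimal open-road cellular automaton for bicycles.

    road: list of cells, each None (empty) or the bicycle's current speed.
    """
    n = len(road)
    moves = []
    for i, v in enumerate(road):
        if v is None:
            continue
        # gap to the next bicycle ahead; the downstream boundary is open
        gap, j = 0, i + 1
        while j < n and road[j] is None:
            gap += 1
            j += 1
        if j == n:
            gap = n                      # free road beyond the open exit
        v = min(v + 1, v_max, gap)       # accelerate, but respect headway
        if v > 0 and rng.random() < p_slow:
            v -= 1                       # random deceleration
        moves.append((i, v))
    new_road = [None] * n
    for i, v in moves:
        if i + v < n:                    # bicycles crossing the end exit
            new_road[i + v] = v
    return new_road

road = [0, None, None, 0, None, None, None, None, None, None]
for _ in range(5):
    road = ca_step(road)
```

Group behavior would enter this framework as extra rules coupling a bicycle's speed choice to its neighbors (riding shoulder-to-shoulder or deliberately following), which is what the paper's model adds.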

  16. Accounting for latent classes in movie box office modeling

    OpenAIRE

    Antipov, Evgeny; Pokryshevskaya, Elena

    2010-01-01

    This paper addresses the issue of unobserved heterogeneity in film characteristics influence on box-office. We argue that the analysis of pooled samples, most common among researchers, does not shed light on underlying segmentations and leads to significantly different estimates obtained by researchers running similar regressions for movie success modeling. For instance, it may be expected that a restrictive MPAA rating is a box office poison for a family comedy, while it insignificantly infl...

  17. Biblical Scriptures impact on six ethical models influencing accounting practices

    OpenAIRE

    Rodgers, Waymond; Gago Rodríguez, Susana

    2006-01-01

The recent frauds in organizations have been a point of reflection among researchers and practitioners regarding the lack of morality in certain decision-making. We argue for a modification of the decision-making models accepted in organizations, with stronger links to ethics and morality. With this aim we propose a return to the base values of Christianity, supported by Bible scriptures, underlying six dominant ethical approaches that drive practices in organizations.

  18. Spherical Detector Device Mathematical Modelling with Taking into Account Detector Module Symmetry

    International Nuclear Information System (INIS)

    Batyj, V.G.; Fedorchenko, D.V.; Prokopets, S.I.; Prokopets, I.M.; Kazhmuradov, M.A.

    2005-01-01

A mathematical model of a spherical detector device that takes detector module symmetry into account is considered. An exact algorithm for simulating the measurement procedure with multiple radiation sources is developed. The modelling results are shown to be in perfect agreement with calibration measurements.

  19. Computer-Based Resource Accounting Model for Automobile Technology Impact Assessment

    Science.gov (United States)

    1976-10-01

    A computer-implemented resource accounting model has been developed for assessing resource impacts of future automobile technology options. The resources tracked are materials, energy, capital, and labor. The model has been used in support of the Int...

  20. Financial Organization Information Security System Development using Modeling, IT assets and Accounts Classification Processes

    Directory of Open Access Journals (Sweden)

    Anton Sergeevich Zaytsev

    2013-12-01

Full Text Available This article deals with the processes of modeling and of IT asset and account classification. The key principles for configuring these processes are pointed out. A model of the organization of the Russian Federation's banking system is also developed.

  1. Nurse-directed care model in a psychiatric hospital: a model for clinical accountability.

    Science.gov (United States)

    E-Morris, Marlene; Caldwell, Barbara; Mencher, Kathleen J; Grogan, Kimberly; Judge-Gorny, Margaret; Patterson, Zelda; Christopher, Terrian; Smith, Russell C; McQuaide, Teresa

    2010-01-01

    The focus on recovery for persons with severe and persistent mental illness is leading state psychiatric hospitals to transform their method of care delivery. This article describes a quality improvement project involving a hospital's administration and multidisciplinary state-university affiliation that collaborated in the development and implementation of a nursing care delivery model in a state psychiatric hospital. The quality improvement project team instituted a new model to promote the hospital's vision of wellness and recovery through utilization of the therapeutic relationship and greater clinical accountability. Implementation of the model was accomplished in 2 phases: first, the establishment of a structure to lay the groundwork for accountability and, second, the development of a mechanism to provide a clinical supervision process for staff in their work with clients. Effectiveness of the model was assessed by surveys conducted at baseline and after implementation. Results indicated improvement in clinical practices and client living environment. As a secondary outcome, these improvements appeared to be associated with increased safety on the units evidenced by reduction in incidents of seclusion and restraint. Restructuring of the service delivery system of care so that clients are the center of clinical focus improves safety and can enhance the staff's attention to work with clients on their recovery. The role of the advanced practice nurse can influence the recovery of clients in state psychiatric hospitals. Future research should consider the impact on clients and their perceptions of the new service models.

  2. Towards accounting for dissolved iron speciation in global ocean models

    Directory of Open Access Journals (Sweden)

    A. Tagliabue

    2011-10-01

Full Text Available The trace metal iron (Fe is now routinely included in state-of-the-art ocean general circulation and biogeochemistry models (OGCBMs because of its key role as a limiting nutrient in regions of the world ocean important for carbon cycling and air-sea CO2 exchange. However, the complexities of the seawater Fe cycle, which impact its speciation and bioavailability, are simplified in such OGCBMs due to gaps in understanding and to avoid high computational costs. In a similar fashion to inorganic carbon speciation, we outline a means by which the complex speciation of Fe can be included in global OGCBMs in a reasonably cost-effective manner. We construct an Fe speciation model based on hypothesised relationships between rate constants and environmental variables (temperature, light, oxygen, pH, salinity) and assumptions regarding the binding strengths of Fe-complexing organic ligands and test hypotheses regarding their distributions. As a result, we find that the global distribution of different Fe species is tightly controlled by spatio-temporal environmental variability and the distribution of Fe binding ligands. Impacts on bioavailable Fe are highly sensitive to assumptions regarding which Fe species are bioavailable and how those species vary in space and time. When forced by representations of future ocean circulation and climate we find large changes to the speciation of Fe governed by pH mediated changes to redox kinetics. We speculate that these changes may exert selective pressure on phytoplankton Fe uptake strategies in the future ocean. In future work, more information on the sources and sinks of ocean Fe ligands, their bioavailability, the cycling of colloidal Fe species and kinetics of Fe-surface coordination reactions would be invaluable. We hope our modeling approach can provide a means by which new observations of Fe speciation can be tested against hypotheses of the processes present in governing the ocean Fe cycle in an
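One ingredient of such a speciation scheme, organic complexation by an Fe-binding ligand, can be sketched as a simple equilibrium partition. The ligand concentration and conditional stability constant below are illustrative assumptions (a typical strong-ligand order of magnitude), not values from the article.

```python
import math

def fe_ligand_partition(fe_total, l_total, log_k=11.0):
    """Equilibrium split of dissolved Fe between an organic ligand (FeL)
    and inorganic species (Fe'), for K' = [FeL] / ([Fe'] [L']).

    Concentrations in mol/L. Solves the mass-balance quadratic
    [FeL]^2 - (Fe_T + L_T + 1/K')[FeL] + Fe_T*L_T = 0 for its physical root.
    """
    k = 10.0 ** log_k
    b = fe_total + l_total + 1.0 / k
    disc = b * b - 4.0 * fe_total * l_total
    fel = (b - math.sqrt(disc)) / 2.0
    return fel, fe_total - fel        # complexed Fe, inorganic Fe'

fel, fe_prime = fe_ligand_partition(0.6e-9, 1.0e-9)
```

With the ligand in excess and a strong binding constant, nearly all dissolved Fe ends up organically complexed, which is why assumptions about ligand distributions dominate the modeled bioavailable Fe.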

  3. An extended lattice model accounting for traffic jerk

    Science.gov (United States)

    Redhu, Poonam; Siwach, Vikash

    2018-02-01

In this paper, a flux-difference lattice hydrodynamic model is extended by considering the traffic jerk effect that arises from the motion of non-motorized vehicles. The effect of traffic jerk has been examined through linear stability analysis, which shows that it can significantly enlarge the unstable region on the phase diagram. To describe the phase transition of traffic flow, the mKdV equation near the critical point is derived through nonlinear stability analysis. The theoretical findings have been verified by numerical simulation, which confirms that the jerk parameter plays an important role in efficiently stabilizing traffic jams by sensing the flux difference of the leading sites.

  4. MODEL OF ACCOUNTING ENGINEERING IN VIEW OF EARNINGS MANAGEMENT IN POLAND

    Directory of Open Access Journals (Sweden)

    Leszek Michalczyk

    2012-10-01

Full Text Available The article introduces the theoretical foundations of the author's original concept of accounting engineering. We assume a theoretical premise whereby accounting engineering is understood as a system of accounting practice utilising differences in the recording of economic events resultant from the use of divergent accounting methods. Unlike, for instance, creative or praxeological accounting, accounting engineering is composed only, and under all circumstances, of lawful activities and adheres to the current regulations of the balance sheet law. The aim of the article is to construct a model of accounting engineering that exploits the differences inherently present in variant accounting. These differences result in disparate financial results for identical economic events. Given the fact that, regardless of which accounting variant is used, all settlements are eventually equal to one another, a new class of differences emerges: the accounting engineering potential. It is transferred to subsequent reporting (balance sheet) periods. In the end, the profit "made" in a given period reduces the financial result of future periods. This effect is due to the "transfer" of costs from one period to another. Such actions may have sundry consequences and are especially dangerous whenever many individuals are concerned with the profit of a given company, e.g. on a stock exchange. The reverse may be observed when a company is being privatised and its value intentionally reduced by a controlled recording of accounting provisions, depending on the degree to which they are justified. The reduction of a company's goodwill in Balcerowicz's model of no-tender privatisation makes it possible to justify the low value of the purchased company. These are only some of many manifestations of variant accounting which accounting engineering employs. A theoretical model of the latter is presented in this article.
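The mechanism can be illustrated with the simplest variant pair, FIFO versus weighted-average inventory costing. The numbers below are invented; the point is that per-period cost of goods sold diverges while the cumulative totals converge once the inventory is fully sold, which is exactly the "accounting engineering potential" carried between periods.

```python
def cogs_fifo(purchases, sold):
    """Cost of goods sold under FIFO for one period's sales."""
    cost, remaining = 0.0, sold
    layers = [list(p) for p in purchases]   # [qty, unit_cost], oldest first
    for layer in layers:
        take = min(layer[0], remaining)
        cost += take * layer[1]
        layer[0] -= take
        remaining -= take
    return cost, [tuple(l) for l in layers if l[0] > 0]

def cogs_weighted_avg(purchases, sold):
    """Cost of goods sold under the weighted-average method."""
    qty = sum(q for q, _ in purchases)
    total = sum(q * c for q, c in purchases)
    avg = total / qty
    return sold * avg, [(qty - sold, avg)]

purchases = [(100, 10.0), (100, 12.0)]      # rising input prices
fifo_cost, fifo_rest = cogs_fifo(purchases, sold=120)
wavg_cost, wavg_rest = cogs_weighted_avg(purchases, sold=120)
```

Here FIFO reports 1240 of cost this period and weighted average 1320; the 80 difference is profit shifted into the current period under FIFO, and it reverses when the remaining 80 units (carried at 12 vs. 11 per unit) are sold, so the totals reconcile.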

  5. A hill-type muscle model expansion accounting for effects of varying transverse muscle load.

    Science.gov (United States)

    Siebert, Tobias; Stutzig, Norman; Rode, Christian

    2018-01-03

Recent studies demonstrated that uniaxial transverse loading (F_G) of a rat gastrocnemius medialis muscle resulted in a considerable reduction of maximum isometric muscle force (ΔF_im). A hill-type muscle model assuming an identical gearing G between both ΔF_im and F_G as well as lifting height of the load (Δh) and longitudinal muscle shortening (Δl_CC) reproduced experimental data for a single load. Here we tested if this model is able to reproduce experimental changes in ΔF_im and Δh for increasing transverse loads (0.64 N, 1.13 N, 1.62 N, 2.11 N, 2.60 N). Three different gearing ratios were tested: (I) constant G_c representing the idea of a muscle-specific gearing parameter (e.g. predefined by the muscle geometry), (II) G_exp determined in experiments with varying transverse load, and (III) G_f that reproduced experimental ΔF_im for each transverse load. Simulations using G_c overestimated ΔF_im (up to 59%) and Δh (up to 136%) for increasing load. Although the model assumption (equal G for forces and length changes) held for the three lower loads using G_exp and G_f, simulations resulted in underestimation of ΔF_im by 38% and overestimation of Δh by 58% for the largest load, respectively. To simultaneously reproduce experimental ΔF_im and Δh for the two larger loads, it was necessary to reduce F_im by 1.9% and 4.6%, respectively. The model seems applicable to account for effects of muscle deformation within a range of transverse loading when using a linear load-dependent function for G. Copyright © 2017 Elsevier Ltd. All rights reserved.
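The single-gearing assumption amounts to two proportionalities: the same G maps transverse load to force deficit and contractile-element shortening to lifting height. The sketch below uses one of the paper's load levels but an invented gearing value and shortening; it is an illustration of the assumption, not a fit to the rat data.

```python
def gearing_predictions(f_transverse, dl_cc, G):
    """Predictions of the single-gearing hill-type expansion.

    dF_im = G * F_G   (reduction of maximum isometric force)
    dh    = G * dl_CC (lifting height of the transverse load)
    G and dl_cc here are illustrative, not experimental values.
    """
    return G * f_transverse, G * dl_cc

# 1.62 N is one of the tested loads; G = 0.5 and dl_cc = 2.4 mm are assumed
d_fim, d_h = gearing_predictions(f_transverse=1.62, dl_cc=2.4e-3, G=0.5)
```

Because both outputs scale with the same G, a gearing chosen to match the force deficit automatically fixes the predicted lifting height, which is why the abstract's mismatch at the largest load challenges the constant-G assumption and motivates a load-dependent G.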

  6. PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II

    International Nuclear Information System (INIS)

    Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E

    2005-01-01

    To successfully tune the DARHT II transport beamline requires the close coupling of a model of the beam transport and the measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system where it can read magnet current settings for real time modeling. We find this integrated coupling is essential for model verification and the successful development of a tuning aid for the efficient convergence on a useable tune. We show the real time comparisons of simulation and experiment and explore the successes and limitations of this close coupled approach

  7. A new model in achieving Green Accounting at hotels in Bali

    Science.gov (United States)

    Astawa, I. P.; Ardina, C.; Yasa, I. M. S.; Parnata, I. K.

    2018-01-01

The concept of green accounting is debated in terms of its implementation in a company. The results of previous studies indicate that there is no standard model for its implementation in support of performance. This research aims to create a green accounting model that differs from other models by using local cultural elements as the variables in building it. The research was conducted in two steps. The first step was designing the model based on theoretical studies, considering the main and supporting elements in building the concept of green accounting. The second step was a model test at 60 five-star hotels, starting with data collection through questionnaires and followed by data processing using descriptive statistics. The results indicate that the hotels' owners have implemented green accounting attributes, which supports previous studies. Another result, a new finding, shows that the presence of local culture, government regulation, and the awareness of hotel owners play an important role in the development of the green accounting concept. The results of the research contribute to accounting science in terms of green reporting. Hotel management should adopt local culture in building the character of the accountants hired in the accounting department.

  8. Asymmetric Gepner models II. Heterotic weight lifting

    Energy Technology Data Exchange (ETDEWEB)

    Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); Schellekens, A.N., E-mail: t58@nikhef.n [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); IMAPP, Radboud Universiteit, Nijmegen (Netherlands)

    2011-05-21

    A systematic study of 'lifted' Gepner models is presented. Lifted Gepner models are obtained from standard Gepner models by replacing one of the N=2 building blocks and the E{sub 8} factor by a modular isomorphic N=0 model on the bosonic side of the heterotic string. The main result is that after this change three family models occur abundantly, in sharp contrast to ordinary Gepner models. In particular, more than 250 new and unrelated moduli spaces of three family models are identified. We discuss the occurrence of fractionally charged particles in these spectra.

  9. Integrating Seasonal Oscillations into Basel II Behavioural Scoring Models

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-09-01

    Full Text Available The article introduces a new methodology for measuring temporal influences (seasonal oscillations, temporal patterns) for behavioural scoring development purposes. The paper shows how significant temporal variables can be recognised and then integrated into behavioural scoring models in order to improve model performance. Behavioural scoring models are integral parts of the Basel II standard on Internal Ratings-Based Approaches (IRB). The IRB approach reflects a bank's individual risk profile much more precisely. The paper also shows, by using the REF II model, how macroeconomic and microeconomic factors represented as time series can be analyzed and integrated into behavioural scorecard models.

  10. H2 control for a model helicopter in hover

    Science.gov (United States)

    Kim, Moo Seok; Kim, Joon Ki; Han, Jeong Yup; Park, Hong Bae; Kang, Soon Ju

    2005-12-01

    This paper presents a mathematical model of a six degree-of-freedom (6-DOF) helicopter (ERGO50) in hover, and an H2 feedback controller, a powerful technique for MIMO systems such as a helicopter. The mathematical model of the helicopter is a multi-input multi-output (MIMO) linearized system that accommodates the aerodynamics. H2 control, based on optimal control theory, is used in myriad applications and plays an important role as a valuable precursor to other advanced methods in future work on improving the stability of the helicopter. We design a linear-quadratic-Gaussian (LQG) controller as the H2 controller. Simulation results show good performance.

  11. Impact of Distance in the Provision of Maternal Health Care Services and Its Accountability in Murarai-II Block, Birbhum District

    Directory of Open Access Journals (Sweden)

    Alokananda Ghosh

    2016-06-01

    Full Text Available The maternal health issue was part of the Millennium Development Goals (MDGs), Target 5. It has now been incorporated into Target 3 of the 17-point Sustainable Development Goals 2030, declared by the United Nations in 2015. In India, about 50% of newborn deaths could be prevented by good care of the mother during pregnancy, childbirth and the postpartum period. This requires timely, well-equipped healthcare from trained providers, along with emergency transportation for referral of obstetric emergencies. Governments need to ensure the availability of physicians in rural underserved areas. The utilisation of maternal healthcare services (MHCSs) depends on both the availability and the accessibility of services, along with accountability. This study is based on an empirical retrospective survey, also called a historic study, evaluating the influence of distance on the provision of maternal health services and on their accountability in Murarai-II block, Birbhum District. The major objective of the study is to identify the influence of distance on the provision and accountability of the overall MHCSs. The investigation found a strong inverse relationship (-0.75) between the accessibility index and the accountability score, with p-value = 0.05. Tracking of pregnant women, identification of high-risk pregnancy and timely Postnatal Care (PNC) emerged as the dominant factors of maternal healthcare services in the first Principal Component Analysis (PCA), explaining 49.67% of the accountability system. Overall, institutional barriers to accessibility are identified as important constraints behind the lower accountability of the services, preventing the anticipated benefit. This study highlights the critical areas where maternal healthcare services are lacking, and the analysis underlines the importance of physical access to health services in shaping their provision. Drawing on empirical observations of operation of public distribution system in
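    The inverse relationship reported above is a Pearson product-moment correlation between two block-level indices. As a minimal sketch of how such a coefficient is computed (the paired scores below are invented for illustration, not the study's data):

    ```python
    import math

    def pearson_r(xs, ys):
        """Pearson product-moment correlation between two equal-length samples."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical paired scores per village: accessibility index vs. accountability score.
    access = [0.9, 0.7, 0.6, 0.4, 0.2]
    account = [2.1, 3.0, 3.4, 4.2, 4.8]
    r = pearson_r(access, account)  # negative r indicates an inverse relationship
    ```

    A significance test (the study quotes p = 0.05) would then compare r against a t distribution with n - 2 degrees of freedom.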

  12. An Integrative Model of the Strategic Management Accounting at the Enterprises of Chemical Industry

    Directory of Open Access Journals (Sweden)

    Aleksandra Vasilyevna Glushchenko

    2016-06-01

    Full Text Available Currently, the issues of information and analytical support for strategic management, enabling timely and high-quality management decisions, are extremely relevant. Conflicting and poor information, haphazardly collected from unreliable sources in the practice of large companies, hampers the effective implementation of their development strategies and carries the threat of risk amid the increasing instability of the external environment. The chemical industry occupies one of the central places in Russian industry and, of course, has its own specificity in the formation of the information support system. An information system suitable for developing and implementing strategic directions changes the recognized competitive advantages of strategic management accounting. Resolving the lack of requirements for strategic accounting information, and its inconsistency resulting from simultaneous accumulation in different units using different methods of calculating and assessing indicators, is impossible without a well-constructed model of the organization of strategic management accounting. The purpose of this study is to develop such a model, whose implementation will make it possible to achieve strategic goals by harmonizing information from the individual objects of strategic accounting, increasing the functional effectiveness of management decisions with a focus on strategy. The case study was based on dialectical logic, methods of system analysis, and the identification of causal relationships in building a model of strategic management accounting that contributes to forecasts of its development. The study proposes an integrative model of the organization of strategic management accounting. The phased implementation of this model defines the objects and tools of strategic management accounting.
Moreover, it is determined that from the point of view of increasing the usefulness of management

  13. Aqueous Solution Vessel Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-28

    The work presented in this report is a continuation of the work described in the May 2015 report, “Aqueous Solution Vessel Thermal Model Development”. This computational fluid dynamics (CFD) model aims to predict the temperature and bubble volume fraction in an aqueous solution of uranium. These values affect the reactivity of the fissile solution, so it is important to be able to calculate them and determine their effects on the reaction. Part A of this report describes some of the parameter comparisons performed on the CFD model using Fluent. Part B describes the coupling of the Fluent model with a Monte Carlo N-Particle (MCNP) neutron transport model. The fuel tank geometry is the same as in the May 2015 report: annular, with a thickness-to-height ratio of 0.16. An accelerator-driven neutron source provides the excitation for the reaction, and internal and external water cooling channels remove the heat. The model used in this work incorporates the Eulerian multiphase model with lift, wall lubrication, turbulent dispersion and turbulence interaction. The buoyancy-driven flow is modeled using the Boussinesq approximation, and the flow turbulence is determined using the k-ω Shear-Stress-Transport (SST) model. The dispersed turbulence multiphase model is employed to capture the multiphase turbulence effects.

  14. Chinese Basic Pension Substitution Rate: A Monte Carlo Demonstration of the Individual Account Model

    OpenAIRE

    Dong, Bei; Zhang, Ling; Lu, Xuan

    2008-01-01

    At the end of 2005, the State Council of China passed “The Decision on Adjusting the Individual Account of the Basic Pension System”, which adjusted the individual account in the 1997 basic pension system. In this essay, we analyze the adjustment above and use life annuity actuarial theory to establish the basic pension substitution rate model. Monte Carlo simulation is also used to prove the rationality of the model. Some suggestions are put forward associated with the substitution rate ac...
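    As a rough illustration of the Monte Carlo approach to an individual-account substitution rate (first-year pension divided by final wage): the 139-month annuity divisor mirrors the post-2005 rule for retirement at age 60, but the contribution, wage-growth, and return assumptions below are hypothetical, not the authors' calibration.

    ```python
    import random

    def simulate_substitution_rate(n_sims=2000, years=35, contrib_rate=0.08,
                                   wage_growth=0.04, mean_return=0.05,
                                   return_vol=0.08, months_divisor=139, seed=42):
        """Monte Carlo estimate of the individual-account substitution rate:
        expected first-year annual pension divided by the final year's wage."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_sims):
            wage, account = 1.0, 0.0
            for _ in range(years):
                account *= 1.0 + rng.gauss(mean_return, return_vol)  # stochastic return
                account += contrib_rate * wage                       # this year's contribution
                wage *= 1.0 + wage_growth                            # wage indexation
            final_wage = wage / (1.0 + wage_growth)                  # wage in the last working year
            annual_pension = account / months_divisor * 12.0         # monthly pension x 12
            total += annual_pension / final_wage
        return total / n_sims
    ```

    Repeating the simulation across parameter scenarios is what lets such a study judge whether the adjusted scheme delivers an adequate substitution rate.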

  15. Fitting a code-red virus spread model: An account of putting theory into practice

    NARCIS (Netherlands)

    Kolesnichenko, A.V.; Haverkort, Boudewijn R.H.M.; Remke, Anne Katharina Ingrid; de Boer, Pieter-Tjerk

    This paper is about fitting a model for the spreading of a computer virus to measured data, contributing not only the fitted model, but equally important, an account of the process of getting there. Over the last years, there has been an increased interest in epidemic models to study the speed of

  16. Mutual Calculations in Creating Accounting Models: A Demonstration of the Power of Matrix Mathematics in Accounting Education

    Science.gov (United States)

    Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg

    2016-01-01

    The aim of this paper is to describe the innovative teaching approach used in the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating the technique of teaching to solve accounting cases using mutual calculations to a worldwide audience. The approach taken in this course…
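    The mutual-calculation idea can be sketched with a transactions matrix T, where entry T[i, j] records an amount debited to account i and credited to account j; balances then fall out of row and column sums. A minimal illustration (the chart of accounts and amounts are invented for the example, not taken from the course materials):

    ```python
    import numpy as np

    # Hypothetical chart of accounts: 0=Cash, 1=Inventory, 2=Equity, 3=Payables
    accounts = ["Cash", "Inventory", "Equity", "Payables"]

    # T[i, j] = total amount posted as a debit to account i and a credit to account j.
    T = np.zeros((4, 4))
    T[0, 2] = 100.0   # owner invests 100 cash (debit Cash, credit Equity)
    T[1, 0] = 30.0    # buy inventory for cash (debit Inventory, credit Cash)
    T[1, 3] = 20.0    # buy inventory on credit (debit Inventory, credit Payables)

    # Net balance change per account: total debits (row sum) minus total credits (column sum).
    balances = T.sum(axis=1) - T.sum(axis=0)
    # Double entry guarantees the balance changes sum to zero.
    ```

    The appeal for teaching is that the double-entry invariant becomes a one-line matrix identity instead of a ledger-by-ledger check.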

  17. Nyala and Bushbuck II: A Harvesting Model.

    Science.gov (United States)

    Fay, Temple H.; Greeff, Johanna C.

    1999-01-01

    Adds a cropping or harvesting term to the animal overpopulation model developed in Part I of this article. Investigates various harvesting strategies that might suggest a solution to the overpopulation problem without actually culling any animals. (ASK)

  18. Marshal: Maintaining Evolving Models, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — SIFT proposes to design and develop the Marshal system, a mixed-initiative tool for maintaining task models over the course of evolving missions. SIFT will...

  19. Base Flow Model Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  20. Mineral vein dynamics modelling (FRACS II)

    International Nuclear Information System (INIS)

    Urai, J.; Virgo, S.; Arndt, M.

    2016-08-01

    The Mineral Vein Dynamics Modeling group "FRACS" started as a team of 7 research groups in its first phase and continued with a team of 5 research groups at the Universities of Aachen, Tuebingen, Karlsruhe, Mainz and Glasgow during its second phase, "FRACS II". The aim of the group was to develop an advanced understanding of the interplay between fracturing, fluid flow and fracture healing, with a special emphasis on the comparison of field data and numerical models. Field areas comprised the Oman Mountains in Oman (which were already studied in detail in the first phase), a siliciclastic sequence in the Internal Ligurian Units in Italy (close to Sestri Levante) and cores of Zechstein carbonates from a lean gas reservoir in Northern Germany. Numerical models of fracturing, sealing and interaction with fluids that were developed in phase I were expanded in phase II. They were used to model small-scale fracture healing by crystal growth and the resulting influence on flow, medium-scale fracture healing and its influence on successive fracturing and healing, as well as large-scale dynamic fluid flow through opening and closing fractures and channels as a function of fluid overpressure. The numerical models were compared with structures in the field, and we were able to identify first proxies for mechanical vein-hostrock properties and fluid overpressures versus tectonic stresses. Finally, we propose a new classification of stylolites based on numerical models and observations in the Zechstein cores, and we continued to develop a new stress inversion tool that uses stylolites to estimate the depth of their formation.

  1. Accountability: a missing construct in models of adherence behavior and in clinical practice

    Directory of Open Access Journals (Sweden)

    Oussedik E

    2017-07-01

    Full Text Available Elias Oussedik,1 Capri G Foy,2 E J Masicampo,3 Lara K Kammrath,3 Robert E Anderson,1 Steven R Feldman1,4,5 1Center for Dermatology Research, Department of Dermatology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 2Department of Social Sciences and Health Policy, Wake Forest School of Medicine, Winston-Salem, NC, USA; 3Department of Psychology, Wake Forest University, Winston-Salem, NC, USA; 4Department of Pathology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 5Department of Public Health Sciences, Wake Forest School of Medicine, Winston-Salem, NC, USA Abstract: Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients’ motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8–12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on Self-Determination Theory, accountability can be considered on a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability) to a patient’s autonomous internal desire to please a respected health care provider (autonomous accountability), the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura’s Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well

  2. STRATIFICATION IN WASTE STABILIZATION PONDS II: MODELLING

    African Journals Online (AJOL)

    NIJOTECH

    The occurrence of thermal stratification in waste stabilization ponds (WSPs) alters the flow pattern of the pond. ... compared favourably with the experimental observation with coefficients of correlation ranging from .... is determined experimentally by sampling in the region of the pond inlet at various depths. Four models exist ...

  3. A Social Accountable Model for Medical Education System in Iran: A Grounded-Theory

    Directory of Open Access Journals (Sweden)

    Mohammadreza Abdolmaleki

    2017-10-01

    Full Text Available Social accountability has been increasingly discussed over the past three decades in various fields providing service to the community and has been expressed as a goal for various areas. In the medical education system, as in other areas of social accountability, it is considered one of the main objectives globally. The aim of this study was to develop a theory of social accountability in the medical education system capable of identifying all the standards, norms, and conditions within the country related to the study subject and recognizing their relationships. In this study, a total of eight experts in the field of social accountability in the medical education system, with executive or research experience, were interviewed personally. After analysis of the interviews, 379 codes, 59 secondary categories, 16 subcategories, and 9 main categories were obtained. The resulting data were collected and analyzed at three levels of open coding, axial coding, and selective coding in the form of the grounded theory study “Accountability model of medical education in Iran”, which can be used in the education system’s policies and planning for social accountability, given that almost all effective components of social accountability in the higher education health system, with causal and facilitating associations, were determined. Keywords: SOCIAL ACCOUNTABILITY, COMMUNITY-ORIENTED MEDICINE, COMMUNITY MEDICINE, EDUCATION SYSTEM, GROUNDED THEORY

  4. Toward a Useful Model for Group Mentoring in Public Accounting Firms

    Directory of Open Access Journals (Sweden)

    Steven J. Johnson

    2013-07-01

    Full Text Available Today’s public accounting firms face a number of challenges in relation to their most valuable resource and primary revenue generator, human capital. Expanding regulations, technology advances, increased competition and high turnover rates are just a few of the issues confronting public accounting leaders in today’s complex business environment. In recent years, some public accounting firms have attempted to combat low retention and high burnout rates with traditional one-to-one mentoring programs, with varying degrees of success. Many firms have found that they lack the resources necessary to successfully implement and maintain such programs. In other industries, organizations have used a group mentoring approach in an attempt to remove potential barriers to mentoring success. Although the research on group mentoring shows promise for positive organizational outcomes, no cases could be found in the literature regarding its usage in a public accounting firm. Because of the unique challenges associated with public accounting firms, this paper attempts to answer two questions: (1) Does group mentoring provide a viable alternative to traditional mentoring in a public accounting firm? (2) If so, what general model might be used for implementing such a program? In answering these questions, a review of the group mentoring literature is provided, along with a suggested model for the implementation of group mentoring in a public accounting firm.

  5. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death, disability, etc. In our treatment of participating life insurance, we have a special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our

  6. A simulation model of hospital management based on cost accounting analysis according to disease.

    Science.gov (United States)

    Tanaka, Koji; Sato, Junzo; Guo, Jinqiu; Takada, Akira; Yoshihara, Hiroyuki

    2004-12-01

    Since shortly before 2000, hospital cost accounting has been increasingly performed at Japanese national university hospitals. At Kumamoto University Hospital, for instance, departmental costs have been analyzed since 2000, and since 2003 the cost balance has been obtained for certain diseases in preparation for Diagnosis-Related Groups and the Prospective Payment System. On the basis of these experiences, we have constructed a simulation model of hospital management. The program has worked correctly in repeated trials and with satisfactory speed. Although there is room for improvement in the detailed accounts and the cost accounting engine, the basic model has proved satisfactory. We have constructed a hospital management model based on the financial data of an existing hospital. We will later improve this program in terms of its construction and by using a wider variety of hospital management data. A prospective outlook may thus be obtained for the practical application of this hospital management model.

  7. Facility level SSAC for model country - an introduction and material balance accounting principles

    International Nuclear Information System (INIS)

    Jones, R.J.

    1989-01-01

    A facility level State System of Accounting for and Control of Nuclear Materials (SSAC) for a model country and the principles of materials balance accounting relating to that country are described. The seven principal elements of a SSAC are examined and a facility level system based on them discussed. The seven elements are organization and management; nuclear material measurements; measurement quality; records and reports; physical inventory taking; material balance closing; containment and surveillance. 11 refs., 19 figs., 5 tabs

  8. Assimilation of tourism satellite accounts and applied general equilibrium models to inform tourism policy analysis

    OpenAIRE

    Rossouw, Riaan; Saayman, Melville

    2011-01-01

    Historically, tourism policy analysis in South Africa has posed challenges to accurate measurement. The primary reason for this is that tourism is not designated as an 'industry' in standard economic accounts. This paper therefore demonstrates the relevance and need for applied general equilibrium (AGE) models to be completed and extended through an integration with tourism satellite accounts (TSAs) as a tool for policy makers (especially tourism policy makers) in South Africa. The paper sets...

  9. Analysis Social Security System Model in South Sulawesi Province: On Accounting Perspective

    OpenAIRE

    Mediaty,; Said, Darwis; Syahrir,; Indrijawati, Aini

    2015-01-01

    This research aims to analyze poverty, education, and health in a social security system model from an accounting perspective, using an empirical study of South Sulawesi Province. Law No. 40 of 2004 regarding the National Social Security System reflects the government's attention to social welfare. Accounting, as a social science, is well placed to create social security mechanisms, one of the most crucial of which is the social security system. This research is a grounded exploratory research w...

  10. School Board Improvement Plans in Relation to the AIP Model of Educational Accountability: A Content Analysis

    Science.gov (United States)

    van Barneveld, Christina; Stienstra, Wendy; Stewart, Sandra

    2006-01-01

    For this study we analyzed the content of school board improvement plans in relation to the Achievement-Indicators-Policy (AIP) model of educational accountability (Nagy, Demeris, & van Barneveld, 2000). We identified areas of congruence and incongruence between the plans and the model. Results suggested that the content of the improvement…

  11. Lessons learned for spatial modelling of ecosystem services in support of ecosystem accounting

    NARCIS (Netherlands)

    Schroter, M.; Remme, R.P.; Sumarga, E.; Barton, D.N.; Hein, L.G.

    2015-01-01

    Assessment of ecosystem services through spatial modelling plays a key role in ecosystem accounting. Spatial models for ecosystem services try to capture spatial heterogeneity with high accuracy. This endeavour, however, faces several practical constraints. In this article we analyse the trade-offs

  12. Measurement and modeling of advanced coal conversion processes, Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G. [and others]

    1993-06-01

    A two dimensional, steady-state model for describing a variety of reactive and nonreactive flows, including pulverized coal combustion and gasification, is presented. The model, referred to as 93-PCGC-2 is applicable to cylindrical, axi-symmetric systems. Turbulence is accounted for in both the fluid mechanics equations and the combustion scheme. Radiation from gases, walls, and particles is taken into account using a discrete ordinates method. The particle phase is modeled in a lagrangian framework, such that mean paths of particle groups are followed. A new coal-general devolatilization submodel (FG-DVC) with coal swelling and char reactivity submodels has been added.

  13. Mortality Probability Model III and Simplified Acute Physiology Score II

    Science.gov (United States)

    Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams

    2009-01-01

    Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210
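    The grouped coefficient of determination used to compare these LOS models is typically computed over subgroups of patients rather than individuals. A sketch of one common formulation, grouping by deciles of predicted LOS, run here on synthetic data (the function name and grouping choice are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def grouped_r2(predicted, observed, n_groups=10):
        """Grouped coefficient of determination: sort cases by predicted LOS,
        split them into n_groups of (near) equal size, and compute R^2 between
        the per-group means of observed and predicted values."""
        predicted = np.asarray(predicted, dtype=float)
        observed = np.asarray(observed, dtype=float)
        order = np.argsort(predicted)
        pred_means = np.array([g.mean() for g in np.array_split(predicted[order], n_groups)])
        obs_means = np.array([g.mean() for g in np.array_split(observed[order], n_groups)])
        ss_res = ((obs_means - pred_means) ** 2).sum()
        ss_tot = ((obs_means - obs_means.mean()) ** 2).sum()
        return 1.0 - ss_res / ss_tot
    ```

    Grouping stabilizes the highly skewed individual LOS values, which is why the reported R² values describe agreement of group means rather than patient-level fit.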

  14. PEP-II vacuum system pressure profile modeling using EXCEL

    International Nuclear Information System (INIS)

    Nordby, M.; Perkins, C.

    1994-06-01

    A generic, adaptable Microsoft EXCEL program to simulate molecular flow in beam line vacuum systems is introduced. Modeling using finite-element approximation of the governing differential equation is discussed, as well as error estimation and program capabilities. The ease of use and flexibility of the spreadsheet-based program is demonstrated. PEP-II vacuum system models are reviewed and compared with analytical models
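    A finite-element (here simplified to finite-difference) treatment of steady molecular flow reduces to a linear system in the node pressures: specific conductance carries gas along the tube, distributed outgassing loads each node, and lumped pumps set the boundary balance. The sketch below is a generic 1D illustration with assumed conductance, outgassing, and pump values; it is not the PEP-II spreadsheet itself.

    ```python
    import numpy as np

    def pressure_profile(length_m=10.0, n=100, c_spec=100.0, q_out=1e-6, s_pump=50.0):
        """Finite-difference solution of steady molecular flow in a uniform tube:
            c_spec * d^2P/dx^2 = -q_out
        with distributed outgassing q_out per unit length and a lumped pump of
        speed s_pump at each end. Units are illustrative (Torr, litres, m, s)."""
        dx = length_m / (n - 1)
        A = np.zeros((n, n))
        b = np.full(n, -q_out * dx)              # outgassing load per interior node
        for i in range(1, n - 1):
            A[i, i - 1] = c_spec / dx
            A[i, i] = -2.0 * c_spec / dx
            A[i, i + 1] = c_spec / dx
        # End nodes: conductance flow in from the neighbour balances the pumped flow.
        A[0, 0] = -(c_spec / dx + s_pump); A[0, 1] = c_spec / dx
        A[-1, -1] = -(c_spec / dx + s_pump); A[-1, -2] = c_spec / dx
        b[0] = b[-1] = -q_out * dx / 2.0         # half-cell load at the ends
        return np.linalg.solve(A, b)
    ```

    The resulting profile is the familiar parabola peaking midway between pumps, which is the kind of curve such a spreadsheet model compares against analytical estimates.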

  15. Accounting for differences in dieting status: steps in the refinement of a model.

    Science.gov (United States)

    Huon, G; Hayne, A; Gunewardene, A; Strong, K; Lunn, N; Piira, T; Lim, J

    1999-12-01

    The overriding objective of this paper is to outline the steps involved in refining a structural model to explain differences in dieting status. Cross-sectional data (representing the responses of 1,644 teenage girls) derive from the preliminary testing in a 3-year longitudinal study. A battery of measures assessed social influence, vulnerability (to conformity) disposition, protective (social coping) skills, and aspects of positive familial context as core components in a model proposed to account for the initiation of dieting. Path analyses were used to establish the predictive ability of those separate components and their interrelationships in accounting for differences in dieting status. Several components of the model were found to be important predictors of dieting status. The model incorporates significant direct, indirect (or mediated), and moderating relationships. Taking all variables into account, the strongest prediction of dieting status was from peer competitiveness, using a new scale developed specifically for this study. Systematic analyses are crucial for the refinement of models to be used in large-scale multivariate studies. In the short term, the model investigated in this study has been shown to be useful in accounting for cross-sectional differences in dieting status. The refined model will be most powerfully employed in large-scale time-extended studies of the initiation of dieting to lose weight. Copyright 1999 by John Wiley & Sons, Inc.

  16. Modelling characteristics of photovoltaic panels with thermal phenomena taken into account

    International Nuclear Information System (INIS)

    Krac, Ewa; Górecki, Krzysztof

    2016-01-01

    In this paper, a new form of the electrothermal model of photovoltaic panels is proposed. This model takes into account the optical, electrical and thermal properties of the considered panels, as well as the electrical and thermal properties of the protecting circuit and the thermal inertia of the panels. The form of the model is described, and some results of measurements and calculations for mono-crystalline and poly-crystalline panels are presented.

  17. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
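    One building block of hierarchical statistical modeling is partial pooling: group-level estimates are shrunk toward a population mean, weighted by their precision, so that uncertainty at both levels is reflected. A minimal sketch under a two-level normal model (variances assumed known for simplicity; this is a generic illustration, not an example from the article):

    ```python
    import numpy as np

    def shrunken_means(group_means, n_per_group, sigma2_within, mu0, tau2):
        """Partial pooling under a two-level normal hierarchy:
            y_ij ~ N(theta_j, sigma2_within),   theta_j ~ N(mu0, tau2).
        The posterior mean of each theta_j is a precision-weighted blend of the
        group's own mean and the population mean mu0."""
        se2 = sigma2_within / np.asarray(n_per_group, dtype=float)  # sampling variance of each group mean
        w = tau2 / (tau2 + se2)                                     # weight on the group's own data
        return w * np.asarray(group_means, dtype=float) + (1.0 - w) * mu0
    ```

    Groups with few observations are pulled strongly toward mu0, while well-sampled groups keep their own estimates; in a full Bayesian treatment the variances and mu0 would themselves carry priors and be estimated jointly.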

  18. PREDICTIVE CAPACITY OF INSOLVENCY MODELS BASED ON ACCOUNTING NUMBERS AND DESCRIPTIVE DATA

    Directory of Open Access Journals (Sweden)

    Rony Petson Santana de Souza

    2012-09-01

    In Brazil, research into models to predict insolvency started in the 1970s, with most authors using discriminant analysis as the statistical tool in their models. More recently, authors have increasingly tried to verify whether insolvency can be forecast using descriptive data contained in firms’ reports. This study examines the capacity of several insolvency models to predict the failure of Brazilian companies that have gone bankrupt. The study is descriptive in nature with a quantitative approach, based on documentary research. The sample is composed of 13 companies that were declared bankrupt between 1997 and 2003. The results indicate that the majority of the insolvency prediction models tested showed high rates of correct forecasts, and the models relying on descriptive reports were on average more likely to succeed than those based on accounting figures. These findings demonstrate that although some studies question the validity of predictive models created in different business settings, some of these models have good capacity to forecast insolvency in Brazil. We conclude that both models based on accounting numbers and those relying on descriptive reports can predict the failure of firms.
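    The study's models are Brazilian, but the discriminant-analysis tradition it refers to follows the pattern of Altman's (1968) Z-score, whose published coefficients give a feel for how such models combine accounting ratios into a single insolvency score. The firm's ratio values below are hypothetical:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z-score: a linear discriminant over five ratios
    (working capital, retained earnings, EBIT, market value of equity,
    and sales, each scaled by total assets or total liabilities)."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def zone(z):
    """Altman's original cutoffs for the discriminant score."""
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"

# Hypothetical firm ratios (illustrative values only)
z = altman_z(0.1, 0.05, 0.03, 0.4, 0.9)
```

    Each Brazilian model tested in the study uses its own ratios and coefficients; the mechanics of scoring and classifying, however, are the same.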

  19. Kinetic modeling of desorption of Cadmium (ii) ion from ...

    African Journals Online (AJOL)

    Kinetic modeling of desorption of Cadmium (II) ion from mercaptoacetic acid modified and unmodified agricultural adsorbents. A A Abia, E D Asuquo. Abstract: No Abstract. Global Journal of Environmental Science Vol. 6 (2) 2007: pp. 89-98.

  20. Bianchi Type-II inflationary models with constant deceleration ...

    Indian Academy of Sciences (India)

    pp. 707–720. Bianchi Type-II inflationary models with constant deceleration parameter in general relativity. C P SINGH* and S KUMAR. Department of Applied Mathematics, Delhi College of Engineering, Bawana Road,. Delhi 110 042, India. E-mail: cpsphd@rediffmail.com. MS received 24 January 2006; revised 19 January ...

  1. THE HYDRODYNAMICAL MODELS OF THE COMETARY COMPACT H ii REGION

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Feng-Yao; Zhu, Qing-Feng [Astronomy Department, University of Science and Technology of China, Hefei, 230026 (China); Li, Juan; Wang, Jun-Zhi [Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai (China); Zhang, Jiang-Shui, E-mail: zhufya@mail.ustc.edu.cn, E-mail: zhuqf@ustc.edu.cn, E-mail: lijuan@shao.ac.cn, E-mail: jzwang@shao.ac.cn, E-mail: jszhang@gzhu.edu.cn [Center for Astrophysics, Guangzhou University, Guangzhou (China)

    2015-10-10

    We have developed a full numerical method to study the gas dynamics of cometary ultracompact H ii regions, and associated photodissociation regions (PDRs). The bow-shock and champagne-flow models with a 40.9/21.9 M{sub ⊙} star are simulated. In the bow-shock models, the massive star is assumed to move through dense (n = 8000 cm{sup −3}) molecular material with a stellar velocity of 15 km s{sup −1}. In the champagne-flow models, an exponential distribution of density with a scale height of 0.2 pc is assumed. The profiles of the [Ne ii] 12.81 μm and H{sub 2} S(2) lines from the ionized regions and PDRs are compared for two sets of models. In champagne-flow models, emission lines from the ionized gas clearly show the effect of acceleration along the direction toward the tail due to the density gradient. The kinematics of the molecular gas inside the dense shell are mainly due to the expansion of the H ii region. However, in bow-shock models the ionized gas mainly moves in the same direction as the stellar motion. The kinematics of the molecular gas inside the dense shell simply reflects the motion of the dense shell with respect to the star. These differences can be used to distinguish two sets of models.

  2. The Anachronism of the Local Public Accountancy Determinate by the Accrual European Model

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2009-01-01

    Comparing the European accrual model with the cash accounting model presently used in Romania at the level of local communities reveals the anachronism of the latter and concentrates the discussion on whether the accrual model should enter everyday public practice. The bases of the accrual model were first defined in the law on commercial companies adopted in Great Britain in 1985, which determined that all income and charges relating to the financial year “will be taken into consideration without any boundary to the reception or payment date.” Accrual accounting requires recording the non-cash effects of transactions or financial events in the periods in which they occur, not in the periods in which cash is received or paid. The development of business drove the increasing sophistication of the recording of transactions and financial events, a prerequisite for recording debtors’ and creditors’ sums.

  3. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR]: 1.2 [95% confidence interval [CI] = 0.6, 2.3]). The HR for current smoking and therapy [0.4 (95% CI = 0.2, 0.7)] was similar to the HR for no smoking and therapy (0.4; 95% CI = 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
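    The pooling step of multiple imputation combines the analyses of the m completed datasets with Rubin's rules. A minimal sketch of that step (the point estimates and variances below are hypothetical, not the paper's results):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine m multiply-imputed estimates with Rubin's rules.

    Returns the pooled point estimate and its total variance
    (within-imputation plus between-imputation components).
    """
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = est.size
    q_bar = est.mean()               # pooled point estimate
    u_bar = var.mean()               # within-imputation variance
    b = est.var(ddof=1)              # between-imputation variance
    t = u_bar + (1 + 1 / m) * b      # total variance
    return q_bar, t

# Hypothetical log-hazard-ratio estimates from m = 5 imputed datasets
q, t = pool_rubin([0.18, 0.22, 0.20, 0.25, 0.15],
                  [0.010, 0.011, 0.009, 0.012, 0.010])
```

    The between-imputation component is what carries the extra uncertainty due to the misclassified covariate; ignoring it would overstate precision.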

  4. Computing Models of CDF and D0 in Run II

    International Nuclear Information System (INIS)

    Lammel, S.

    1997-05-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and D0, are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II.

  5. Model Application of Accounting Information Systems of Spare Parts Sales and Purchase on Car Service Company

    Directory of Open Access Journals (Sweden)

    Lianawati Christian

    2015-09-01

    The purpose of this research is to analyze accounting information systems for sales and purchases of spare parts in general car service companies and to identify the problems encountered and the information needed. The research used a literature study to collect data, a field study with observation, and a design phase using UML (Unified Modeling Language) with activity diagrams, class diagrams, use case diagrams, database design, form design, display design, and draft reports. The result achieved is an application model of an accounting information system for sales and purchases of spare parts in general car service companies. In conclusion, the accounting information system for sales and purchases makes it easy for management to obtain information quickly, as well as to present reports quickly and accurately.

  6. Spike Neural Models Part II: Abstract Neural Models

    OpenAIRE

    Johnson, Melissa G.; Chartier, Sylvain

    2018-01-01

    Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNNs), though, not all of that complexity is required, so simple, abstract models are often used. These models save time, use fewer computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model, which ...
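    A minimal LIF simulation of the kind the tutorial covers might look like this; the parameter values are typical textbook choices, not taken from the tutorial:

```python
import numpy as np

def lif_simulate(i_input, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    """Leaky Integrate-and-Fire: tau * dV/dt = -(V - V_rest) + R_m * I.

    Forward-Euler integration; V is reset after crossing threshold.
    Returns the membrane trace (mV) and spike times (ms).
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_t in enumerate(i_input):
        v += dt / tau * (-(v - v_rest) + r_m * i_t)
        if v >= v_thresh:                 # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                   # instantaneous reset
        trace.append(v)
    return np.array(trace), spikes

# A constant 2.0 (nA) drive for 100 ms produces a regular spike train,
# since the asymptotic voltage (-45 mV) sits above threshold (-50 mV).
trace, spikes = lif_simulate(np.full(1000, 2.0))
```

    The abstraction is visible in the code: one state variable and two fixed rules replace the full conductance dynamics of a biological neuron.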

  7. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    Energy Technology Data Exchange (ETDEWEB)

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we examine the SALT-II model for Type Ia supernova analysis, which determines distance moduli based on the known absolute standard-candle magnitude of Type Ia supernovae. We look at the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters as a tool for analyzing trends in the model under certain assumptions about the intrinsic error. To find the best standard-candle model, we minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1, and use the simulation to estimate the amount of color smearing indicated by the data for our model. We find that the color-smearing model works as a general estimate of the color smearing, and that the RMS distribution in the variables provides one method of estimating the intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.

  8. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    Science.gov (United States)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a hierarchical Bayesian model updating framework to account for the effects of ambient temperature and excitation amplitude. The proposed approach is applied for model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the updating structural parameter considered, with its mean and variance modeled as functions of temperature and excitation amplitude. The modal parameters identified over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One of the objectives of this study is to show that by increasing the levels of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response prediction. Finally, the calibrated model is used for damage identification of the footbridge.

  9. Accounting for Co-Teaching: A Guide for Policymakers and Developers of Value-Added Models

    Science.gov (United States)

    Isenberg, Eric; Walsh, Elias

    2015-01-01

    We outline the options available to policymakers for addressing co-teaching in a value-added model. Building on earlier work, we propose an improvement to a method of accounting for co-teaching that treats co-teachers as teams, with each teacher receiving equal credit for co-taught students. Hock and Isenberg (2012) described a method known as the…

  10. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels as information disclosure, corporate governance and protection of investors.
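    The Hausman (1978) test the authors build on compares an estimator that is efficient under the null hypothesis with one that remains consistent under the alternative; a sketch of the statistic itself, with hypothetical coefficient values:

```python
import numpy as np

def hausman(b_consistent, v_consistent, b_efficient, v_efficient):
    """Hausman (1978) specification statistic.

    b_consistent is consistent under both hypotheses (e.g., an IV
    estimator); b_efficient is efficient under H0 only (e.g., OLS).
    Returns the chi-square statistic and its degrees of freedom.
    """
    d = np.asarray(b_consistent) - np.asarray(b_efficient)
    v = np.asarray(v_consistent) - np.asarray(v_efficient)
    stat = float(d @ np.linalg.inv(v) @ d)
    return stat, d.size

# Hypothetical 2-coefficient example with a visible gap between estimators
stat, df = hausman([1.10, 0.52], np.diag([0.020, 0.015]),
                   [1.00, 0.50], np.diag([0.010, 0.008]))

# Compare against the 5% chi-square critical value for df = 2 (about 5.991)
reject = stat > 5.991
```

    A large statistic signals that the efficient estimator is contaminated (here, by measurement error in the variables), which is the diagnostic the proposed empirical framework exploits.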

  11. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    Science.gov (United States)

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…
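    The distance effect these models capture can be reproduced with a toy random walk in which the drift rate grows with numerical distance; all parameter values below are illustrative, not fitted to data:

```python
import numpy as np

def random_walk_rt(distance, drift_per_unit=0.1, noise=1.0,
                   threshold=10.0, n_trials=500, seed=0):
    """Mean number of steps for a drifting random walk to reach +/- threshold.

    Drift is proportional to numerical distance, so larger distances
    terminate sooner -- the distance effect in comparison times.
    """
    rng = np.random.default_rng(seed)
    drift = drift_per_unit * distance
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0
        while abs(x) < threshold:
            x += drift + noise * rng.standard_normal()
            t += 1
        times.append(t)
    return float(np.mean(times))

rt_near = random_walk_rt(distance=1)   # e.g., comparing 5 vs. 6
rt_far = random_walk_rt(distance=7)    # e.g., comparing 2 vs. 9
```

    Mean first-passage time falls steeply as drift (distance) increases, mirroring the chronometric data the article models.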

  12. Theoretical analysis of a hybrid traffic model accounting for safe velocity

    Science.gov (United States)

    Wang, Yu-Qing; Zhou, Chao-Fan; Yan, Bo-Wen; Zhang, De-Chen; Wang, Ji-Xin; Jia, Bin; Gao, Zi-You; Wu, Qing-Song

    2017-04-01

    A hybrid traffic-flow model [the Wang-Zhou-Yan (WZY) model] is proposed in this paper. In the WZY model, the global equilibrium velocity is replaced by the local equilibrium one, which emphasizes that the modification of vehicle velocity is based on safe driving rather than global deployment. In the safe-driving view, the effect of drivers’ estimation is taken into account. Moreover, a linear stability analysis of the traffic model has been performed. Furthermore, to test the robustness of the system, the evolution of the density wave and the velocity wave of the traffic flow has been numerically calculated.

  13. Accounting of inter-electron correlations in the model of mobile electron shells

    International Nuclear Information System (INIS)

    Panov, Yu.D.; Moskvin, A.S.

    2000-01-01

    The basic features of the model of mobile electron shells for a multielectron atom or cluster are studied. A variational technique is offered to take account of electron correlations, in which the coordinates of the centers of single-particle atomic orbitals serve as variational parameters. This makes it possible to interpret a dramatic variation of the electron density distribution under an anisotropic external effect in terms of a limited initial basis. Specific correlated states that may make a correlation contribution to the orbital current are studied. The paper presents a generalization of the typical MO-LCAO scheme with a limited set of single-particle functions, enabling additional multipole-multipole interactions in the cluster to be taken into account.

  14. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
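    The resampling bootstrap the authors compare against can be sketched as a percentile interval over patient-level model outputs; the cohort below is simulated, not trial data:

```python
import numpy as np

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for a mean (e.g., mean life expectancy).

    Resampling patients propagates sampling uncertainty that a single
    point estimate from the decision model would hide.
    """
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = np.array([rng.choice(values, size=values.size, replace=True).mean()
                      for _ in range(n_boot)])
    return (float(np.quantile(means, alpha / 2)),
            float(np.quantile(means, 1 - alpha / 2)))

# Hypothetical per-patient life expectancies (years) from a simulated cohort
life_exp = np.random.default_rng(7).gamma(shape=4.0, scale=1.5, size=300)
lo, hi = bootstrap_ci(life_exp)
```

    In the paper's setting the resampled quantity would be the whole RPSFT adjustment plus decision model, so each bootstrap replicate re-runs the analysis end to end; the interval construction, however, is the same.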

  15. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
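    Regression calibration replaces the error-prone covariate with its best linear prediction from the replicate measurements. A sketch using a linear outcome model for clarity (the Cox setting of the paper behaves analogously, and all data here are simulated):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 500, 2                    # subjects, replicate measurements each
beta = 0.8                       # true covariate effect

x = rng.normal(0.0, 1.0, n)                       # true covariate (e.g., log REM)
w = x[:, None] + rng.normal(0.0, 0.8, (n, k))     # replicates with error
y = beta * x + rng.normal(0.0, 0.5, n)            # outcome

w_bar = w.mean(axis=1)
sigma_u2 = np.mean(w.var(axis=1, ddof=1))         # within-subject error variance
sigma_x2 = w_bar.var(ddof=1) - sigma_u2 / k       # between-subject variance
lam = sigma_x2 / (sigma_x2 + sigma_u2 / k)        # reliability ratio

# Calibrated covariate: E[X | mean of replicates]
x_hat = w_bar.mean() + lam * (w_bar - w_bar.mean())

beta_naive = np.polyfit(w_bar, y, 1)[0]   # attenuated toward zero
beta_cal = np.polyfit(x_hat, y, 1)[0]     # bias-reduced
```

    Using the replicates both estimates the reliability ratio and, as the paper notes, reduces the standard error relative to a single error-prone measurement.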

  16. Using a Rasch Model to Account for Guessing as a Source of Low Discrimination.

    Science.gov (United States)

    Humphry, Stephen

    2015-01-01

    The most common approach to modelling item discrimination and guessing for multiple-choice questions is the three parameter logistic (3PL) model. However, proponents of Rasch models generally avoid using the 3PL model because to model guessing entails sacrificing the distinctive property and advantages of Rasch models. One approach to dealing with guessing based on the application of Rasch models is to omit responses in which guessing appears to play a significant role. However, this approach entails loss of information and it does not account for variable item discrimination. It has been shown, though, that provided specific constraints are met, it is possible to parameterize discrimination while preserving the distinctive property of Rasch models. This article proposes an approach that uses Rasch models to account for guessing on standard multiple-choice items simply by treating it as a source of low item discrimination. Technical considerations are noted although a detailed examination of such considerations is beyond the scope of this article.
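    The contrast between the two models can be made concrete: under the 3PL a lower asymptote c keeps the correct-response probability bounded below (guessing), while the Rasch probability falls to zero for low abilities. Parameter values here are illustrative:

```python
import math

def rasch_p(theta, b):
    """Rasch model: P(correct) depends only on ability minus difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def p3pl(theta, a, b, c):
    """3PL model: discrimination a and lower asymptote c for guessing."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A very low-ability examinee still answers correctly with probability
# near c under the 3PL, while the Rasch probability approaches zero.
low_rasch = rasch_p(-4.0, 0.0)
low_3pl = p3pl(-4.0, 1.0, 0.0, 0.25)
```

    The article's proposal keeps the Rasch form and instead absorbs the guessing floor into a lower item discrimination, preserving the model's distinctive measurement property.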

  17. Accounting for Local Dependence with the Rasch Model: The Paradox of Information Increase.

    Science.gov (United States)

    Andrich, David

    Test theories imply statistical, local independence. Where local independence is violated, models of modern test theory that account for it have been proposed. One violation of local independence occurs when the response to one item governs the response to a subsequent item. Expanding on a formulation of this kind of violation between two items in the dichotomous Rasch model, this paper derives three related implications. First, it formalises how the polytomous Rasch model for an item constituted by summing the scores of the dependent items absorbs the dependence in its threshold structure. Second, it shows that as a consequence the unit when the dependence is accounted for is not the same as if the items had no response dependence. Third, it explains the paradox, known, but not explained in the literature, that the greater the dependence of the constituent items the greater the apparent information in the constituted polytomous item when it should provide less information.

  18. Cost accounting models used for price-setting of health services: an international review.

    Science.gov (United States)

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.

    Science.gov (United States)

    Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark

    2011-01-01

    Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations, such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that most influence results are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.

  20. Modeling of Accounting Doctoral Thesis with Emphasis on Solution for Financial Problems

    Directory of Open Access Journals (Sweden)

    F. Mansoori

    2015-02-01

    With the growth of graduate programs and research budgets, accounting knowledge in Iran has moved into the research arena, and a number of accounting projects have been implemented in practice, yielding varied experience in applying accounting standards. These experiences were expected to help solve the country's financial problems, yet despite extensive research effort, many financial and accounting problems remain. PhD theses could be one of the important means of advancing university disciplines, including accounting; they are collective efforts legitimated by supervisory committees in universities. Applied theses should, in principle, solve part of the problems in the accounting field, but in practice they often do not. The question that arises is why the output of applied, knowledge-based projects has not resolved these problems, and why policymakers in difficult situations prefer to rely on their own prior experience in important decisions rather than on knowledge-based recommendations. This research studies the reasons that prevent applied PhD theses from succeeding in the real world, including the view that policy suggestions derived from knowledge-based projects are not qualified enough for implementation. For this purpose, the indicators of an applied PhD thesis were identified, 110 experts categorized these indicators, and applied PhD accounting theses were compared against them in a comprehensive study. As a result, the problems of the studied theses were identified, and a proper, applied model for creating applied research was developed.

  1. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

    On the basis of a microscopic model of self-consistent field, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is built and the quasiparticle equations of motion are obtained. It is shown that the delta-like three-body interaction gives no contribution into the self-consistent field, and the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion's effective mass and the system's equation of state with account of contribution from three-body forces. The effective mass and pressure are numerically calculated for the potential of "semi-transparent sphere" type at zero temperature. Expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, the interaction of repulsive character reduces the quasiparticle effective mass relative to the mass of a free particle, and the attractive interaction raises the effective mass. The question of thermodynamic stability of the Fermi system is considered and the three-body repulsive interaction is shown to extend the region of stability of the system with the interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.

  2. Analysis of a microscopic model of taking into account 2p2h configurations

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.; Tkachev, V.N.

    1986-01-01

    The Green's-function method has been used to obtain a general equation for the effective field in a nucleus, taking into account both 1p1h and 2p2h configurations. This equation has been used as the starting point for derivation of a previously developed microscopic model of taking 1p1h+phonon configurations into account in magic nuclei. The equation for the density matrix is analyzed in this model. It is shown that the number of quasiparticles is conserved. An equation is obtained for the effective field in the coordinate representation, which provides a formulation of the problem in the 1p1h+2p2h+continuum approximation. The equation is derived and quantitatively analyzed in the space of one-phonon states

  3. A Simple Accounting-based Valuation Model for the Debt Tax Shield

    Directory of Open Access Journals (Sweden)

    Andreas Scholze

    2010-05-01

    This paper describes a simple way to integrate the debt tax shield into an accounting-based valuation model. The market value of equity is determined by forecasting residual operating income, which is calculated by charging operating income for the operating assets at a required return that accounts for the tax benefit that comes from borrowing to raise cash for the operations. The model assumes that the firm maintains a deterministic financial leverage ratio, which tends to converge quickly to typical steady-state levels over time. From a practical point of view, this characteristic is of particular help, because it allows a continuing value calculation at the end of a short forecast period.
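    A minimal sketch of the valuation mechanics described above, with residual operating income charged at a required return assumed to already reflect the debt tax shield, and a Gordon-style continuing value at the horizon. All figures are hypothetical:

```python
def residual_income_value(book_value, forecasts, r, growth):
    """Equity value = current book value + PV of residual operating income.

    forecasts: list of (operating_income, opening_operating_assets) per year.
    The required return r is assumed to already account for the debt tax
    shield, per the model described above. The last residual income is
    grown at `growth` in perpetuity for the continuing value.
    """
    value = book_value
    ri_last = 0.0
    for t, (oi, assets) in enumerate(forecasts, start=1):
        ri_last = oi - r * assets            # residual operating income
        value += ri_last / (1 + r) ** t      # discount each year's RI
    cv = ri_last * (1 + growth) / (r - growth)   # continuing value at horizon
    value += cv / (1 + r) ** len(forecasts)
    return value

# Hypothetical three-year forecast, 8% required return, 2% perpetual growth
v = residual_income_value(100.0,
                          [(12.0, 100.0), (13.0, 105.0), (14.0, 110.0)],
                          r=0.08, growth=0.02)
```

    The short forecast horizon plus continuing value mirrors the practical advantage the paper highlights: leverage converging quickly to steady state makes a terminal-value calculation defensible after only a few explicit years.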

  4. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect

    Science.gov (United States)

    Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean-glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor, which does not account for the influence of varying debris thickness on melt and prescribes a constant reduction across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model's empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically based debris energy-balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to that of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the
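    The idea of making temperature-index parameters a function of debris thickness can be sketched as follows. The Ostrem-curve-like shape (melt enhanced under very thin debris, suppressed under thick debris) and every coefficient below are invented for illustration; they are not the calibrated DETI values:

```python
import math

def melt_rate(temp_c, debris_m, tf_clean=6.0, h_crit=0.03, h_star=0.10):
    """Temperature-index melt (mm w.e. per day) with a hypothetical
    debris-thickness scaling. Melt is enhanced for debris thinner than a
    critical thickness h_crit and decays exponentially (scale h_star)
    for thicker covers, so the factor varies with thickness instead of
    being a single constant reduction.
    """
    if temp_c <= 0.0:
        return 0.0                       # no melt below the threshold temperature
    if debris_m <= h_crit:
        factor = 1.0 + 0.5 * debris_m / h_crit   # thin debris enhances melt
    else:
        factor = 1.5 * math.exp(-(debris_m - h_crit) / h_star)
    return tf_clean * temp_c * factor

melt_thin = melt_rate(5.0, 0.02)    # near the critical thickness
melt_thick = melt_rate(5.0, 0.50)   # half a metre of debris
```

    Replacing the single melt-reduction factor with a thickness-dependent one is exactly what lets an empirical model reproduce the nonlinear melt-thickness relationship the paper describes.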

  5. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and number of primary care consultations. Three methods (overdispersed Poisson, a robust estimator, and negative binomial regression) were used to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals over the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared their parameter estimates to those given by the Poisson regression model. The variance of the number of primary care consultations (Var[Y]=21.03) was greater than the mean (E[Y]=5.93) and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. Interpretation of estimates from two variables (using the Internet to seek health information and single-parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (robust estimator) and 0.45 and 0.13 (negative binomial), respectively. Different methods exist to solve the problem of underestimating variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems to be particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
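The overdispersion check used above (sum of squared Pearson residuals over degrees of freedom) can be sketched on simulated counts. The data below are illustrative, not the study's consultation data: a negative binomial draw has Var > mean, unlike the Poisson, for which Var = mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated consultation-like counts with extra-Poisson variation
# (negative binomial: Var > mean). Illustrative data only.
y = rng.negative_binomial(n=2, p=0.25, size=500)

# Intercept-only Poisson fit: the MLE of lambda is simply the sample mean.
lam = y.mean()

# Pearson dispersion statistic: sum of squared Pearson residuals over
# residual degrees of freedom. Values well above 1 indicate that Poisson
# standard errors would be underestimated.
pearson_chi2 = np.sum((y - lam) ** 2 / lam)
dispersion = pearson_chi2 / (len(y) - 1)
```

A dispersion ratio near 1 is consistent with the Poisson assumption; here it lands well above 1, the situation in which an overdispersed Poisson, robust, or negative binomial model is warranted.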

  6. Modeling Laterally Loaded Single Piles Accounting for Nonlinear Soil-Pile Interactions

    Directory of Open Access Journals (Sweden)

    Maryam Mardfekri

    2013-01-01

    Full Text Available The nonlinear behavior of a laterally loaded monopile foundation is studied using the finite element method (FEM) to account for soil-pile interactions. Three-dimensional (3D) finite element modeling is a convenient and reliable approach to account for the continuity of the soil mass and the nonlinearity of the soil-pile interactions. Existing simple methods for predicting the deflection of laterally loaded single piles in sand and clay (e.g., beam on elastic foundation, p-y method, and SALLOP) are assessed using linear and nonlinear finite element analyses. The results indicate that for the specific case considered here the p-y method provides reasonable accuracy, in spite of its simplicity, in predicting the lateral deflection of single piles. A simplified linear finite element (FE) analysis of piles, often used in the literature, is also investigated, and the influence of accounting for the pile diameter in the simplified linear FE model is evaluated. It is shown that modeling the pile as a line with beam-column elements results in a reduced contribution of the surrounding soil to the lateral stiffness of the pile and an increase of up to 200% in the predicted maximum lateral displacement of the pile head.
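Among the simple methods listed, the beam-on-elastic-foundation idea has a classical closed form for a long pile with a lateral load at the ground line (Hetenyi's solution for a semi-infinite beam on a Winkler foundation). A sketch with illustrative pile and soil properties, not those of the monopile in the study:

```python
import numpy as np

# Hetenyi's semi-infinite beam-on-Winkler-foundation solution for a long
# elastic pile with lateral load H at the ground line:
#   y(x) = (2*H*beta/k) * exp(-beta*x) * cos(beta*x)
# All property values below are illustrative assumptions.
E = 30e9                       # Pa, Young's modulus (concrete)
d = 1.0                        # m, pile diameter
I = np.pi * d**4 / 64          # m^4, second moment of area
k = 20e6 * d                   # N/m^2, subgrade reaction modulus x diameter
H = 500e3                      # N, lateral load at the ground line

beta = (k / (4 * E * I)) ** 0.25         # 1/m, characteristic inverse length
x = np.linspace(0.0, 10.0, 201)          # m, depth below ground line
y = (2 * H * beta / k) * np.exp(-beta * x) * np.cos(beta * x)

head_deflection = y[0]                   # m, deflection at the ground line
```

The exponential envelope shows why only the upper few characteristic lengths of a long pile matter, and why truncating the soil reaction (as in the line-element FE idealization above) softens the response.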

  7. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE)

    Directory of Open Access Journals (Sweden)

    Brentani Helena

    2004-08-01

    Full Text Available Abstract Background An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern" or Massively Parallel Signature Sequencing (MPSS), is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results We introduce a Bayesian model that accounts for the within-class variability by means of a mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression to a more reliable one. Our method is freely available, under GPL/GNU copyleft, through a user-friendly web-based on-line tool or as R language scripts at a supplemental web-site.

  8. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE).

    Science.gov (United States)

    Vêncio, Ricardo Z N; Brentani, Helena; Patrão, Diogo F C; Pereira, Carlos A B

    2004-08-31

    An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern" or Massively Parallel Signature Sequencing (MPSS), is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. We introduce a Bayesian model that accounts for the within-class variability by means of a mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression to a more reliable one. Our method is freely available, under GPL/GNU copyleft, through a user-friendly web-based on-line tool or as R language scripts at a supplemental web-site.
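The contrast between a plain Binomial model (technical sampling error only) and the Beta-Binomial model (a particular case of the mixture approach, allowing the tag proportion to vary between libraries) can be sketched with hypothetical tag counts. The counts and the Beta parameters below are illustrative assumptions, not values from the paper:

```python
from math import lgamma, log

def log_betabinom(x, n, a, b):
    """Beta-Binomial log-pmf: the tag proportion varies between libraries
    following a Beta(a, b), capturing within-class biological variability."""
    return (lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
            + lgamma(x + a) + lgamma(n - x + b) - lgamma(n + a + b)
            + lgamma(a + b) - lgamma(a) - lgamma(b))

def log_binom(x, n, p):
    """Plain Binomial log-pmf: sampling error only, common proportion p."""
    return (lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
            + x * log(p) + (n - x) * log(1 - p))

# Hypothetical tag counts (x) in four libraries of size n from one class:
# the spread (5 vs 55 per 10,000) far exceeds binomial sampling error.
counts = [(5, 10000), (40, 10000), (12, 10000), (55, 10000)]
p_hat = sum(x for x, _ in counts) / sum(n for _, n in counts)  # pooled MLE
# Beta(a, b) chosen to match the pooled mean but allow extra spread:
a = 0.5
b = a / p_hat - a        # so that the Beta mean a/(a+b) equals p_hat

ll_bin = sum(log_binom(x, n, p_hat) for x, n in counts)
ll_bb = sum(log_betabinom(x, n, a, b) for x, n in counts)
```

For overdispersed counts like these, the Beta-Binomial log-likelihood dominates the Binomial one, which is exactly why significance judged under a pure sampling-error model can evaporate once within-class variability is accounted for.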

  9. Material control in nuclear fuel fabrication facilities. Part II. Accountability, instrumentation, and measurement techniques in fuel fabrication facilities, P.O.1236909. Final report

    International Nuclear Information System (INIS)

    Borgonovi, G.M.; McCartin, T.J.; McDaniel, T.; Miller, C.L.; Nguyen, T.

    1978-12-01

    This report describes the measurement techniques, the instrumentation, and the procedures used in accountability and control of nuclear materials, as they apply to fuel fabrication facilities. Some of the material included has appeared elsewhere and has been summarized. An extensive bibliography is included. A specific example of application of the accountability methods to a model fuel fabrication facility which is based on the Westinghouse Anderson design

  10. Insurance: Accounting, Regulation, Actuarial Science

    OpenAIRE

    Alain Tosetti; Thomas Behar; Michel Fromenteau; Stéphane Ménart

    2001-01-01

    We shall be examining the following topics: (i) basic frameworks for accounting and for statutory insurance rules; and (ii) actuarial principles of insurance; for both life and nonlife (i.e. casualty and property) insurance. Section 1 introduces insurance terminology, regarding what an operation must include in order to be an insurance operation (the legal, statistical, financial or economic aspects), and introduces the accounting and regulation frameworks and the two actuarial models of insur...

  11. A multiscale active structural model of the arterial wall accounting for smooth muscle dynamics.

    Science.gov (United States)

    Coccarelli, Alberto; Edwards, David Hughes; Aggarwal, Ankush; Nithiarasu, Perumal; Parthimos, Dimitris

    2018-02-01

    Arterial wall dynamics arise from the synergy of passive mechano-elastic properties of the vascular tissue and the active contractile behaviour of smooth muscle cells (SMCs) that form the media layer of vessels. We have developed a computational framework that incorporates both these components to account for vascular responses to mechanical and pharmacological stimuli. To validate the proposed framework and demonstrate its potential for testing hypotheses on the pathogenesis of vascular disease, we have employed a number of pharmacological probes that modulate the arterial wall contractile machinery by selectively inhibiting a range of intracellular signalling pathways. Experimental probes used on ring segments from the rabbit central ear artery are: phenylephrine, a selective α1-adrenergic receptor agonist that induces vasoconstriction; cyclopiazonic acid (CPA), a specific inhibitor of sarcoplasmic/endoplasmic reticulum Ca2+-ATPase; and ryanodine, a diterpenoid that modulates Ca2+ release from the sarcoplasmic reticulum. These interventions were able to delineate the role of membrane versus intracellular signalling, previously identified as main factors in smooth muscle contraction and the generation of vessel tone. Each SMC was modelled by a system of nonlinear differential equations that account for intracellular ionic signalling, and in particular Ca2+ dynamics. Cytosolic Ca2+ concentrations formed the catalytic input to a cross-bridge kinetics model. Contractile output from these cellular components forms the input to the finite-element model of the arterial rings under isometric conditions that reproduces the experimental conditions. The model does not account for the role of the endothelium, as the nitric oxide production was suppressed by the action of L-NAME, and also due to the absence of shear stress on the arterial ring, as the experimental set-up did not involve flow. Simulations generated by the integrated model closely matched experimental

  12. Modeling 2-alternative forced-choice tasks: Accounting for both magnitude and difference effects.

    Science.gov (United States)

    Ratcliff, Roger; Voskuilen, Chelsea; Teodorescu, Andrei

    2018-03-01

    We present a model-based analysis of two-alternative forced-choice tasks in which two stimuli are presented side by side and subjects must make a comparative judgment (e.g., which stimulus is brighter). Stimuli can vary on two dimensions: the difference in strength between the two stimuli and the magnitude of each stimulus. Differences between the two stimuli produce typical RT and accuracy effects (i.e., subjects respond more quickly and more accurately when there is a larger difference between the two). However, the overall magnitude of the pair of stimuli also affects RT and accuracy. In the more common two-choice task, a single stimulus is presented and the stimulus varies on only one dimension. In this two-stimulus task, if the standard diffusion decision model is fit to the data with only drift rate (evidence accumulation rate) differing among conditions, the model cannot fit the data. However, if either of two variability parameters is allowed to change with stimulus magnitude, the model can fit the data. This results in two models that are extremely constrained, with about one-tenth as many parameters as data points, while at the same time accounting for accuracy and for correct and error RT distributions. While both of these versions of the diffusion model can account for the observed data, the version that allows across-trial variability in drift to vary with magnitude might be preferred for theoretical reasons. The diffusion model fits are compared to the leaky competing accumulator model, which did not perform as well. Copyright © 2018 Elsevier Inc. All rights reserved.
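The difference effect described above (larger drift, faster and more accurate responses) together with across-trial drift variability can be illustrated with a minimal simulation of the diffusion decision model. All parameter values here are illustrative, not fitted ones from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_ddm(drift, drift_sd=0.05, n_trials=800, a=0.1, dt=0.001, s=0.1):
    """Sketch of a two-alternative diffusion decision model with
    across-trial drift variability (drift_sd), one of the two variability
    parameters discussed in the abstract. Evidence starts midway between
    bounds 0 and a and diffuses with within-trial noise s.
    Returns (accuracy, mean RT in seconds). Illustrative parameters."""
    sdt = s * np.sqrt(dt)
    rts, correct = [], []
    for _ in range(n_trials):
        v = rng.normal(drift, drift_sd)   # drift varies trial to trial
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            x += v * dt + sdt * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= a)            # upper bound = correct response
    return float(np.mean(correct)), float(np.mean(rts))

# A larger stimulus difference (higher drift) speeds and improves choices:
acc_small, rt_small = simulate_ddm(drift=0.05)
acc_large, rt_large = simulate_ddm(drift=0.25)
```

Letting `drift_sd` grow with stimulus magnitude, rather than holding it fixed as here, is the mechanism the abstract invokes to capture the magnitude effect.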

  13. Monolithic Controlled Delivery Systems: Part II. Basic Mathematical Models

    Directory of Open Access Journals (Sweden)

    Rumiana Blagoeva

    2006-12-01

    Full Text Available The article presents a brief but comprehensive review of the large variety of mathematical models of drug controlled release from polymeric monoliths in the last 25 years. The models are considered systematically, from the first simple empirical models up to the most comprehensive theoretical ones taking into account the main release mechanisms (diffusion, swelling, dissolution or erosion) simultaneously. Their advantages and limitations are briefly discussed and some applications are outlined. The present review shows that the choice of an appropriate mathematical model for a particular controlled-release system design depends mainly on the desired predictive ability and accuracy of the model. This aspect is connected with the need to identify in advance the main factors influencing the release kinetics, especially the basic controlling mechanisms.
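One of the simple empirical models in this lineage, the Korsmeyer-Peppas power law, can be sketched directly; the rate constants and exponents below are illustrative, not taken from any system in the review:

```python
import numpy as np

def power_law_release(t, k, n):
    """Semi-empirical Korsmeyer-Peppas model for fractional drug release
    from a monolithic matrix: M_t / M_inf = k * t**n (commonly applied for
    M_t/M_inf < 0.6). An exponent n near 0.5 indicates Fickian diffusion
    (Higuchi-like); larger n points toward swelling-controlled transport.
    k and n values used below are purely illustrative."""
    return np.minimum(k * np.power(t, n), 1.0)   # cap at full release

t = np.linspace(0.0, 8.0, 100)                   # h
fickian = power_law_release(t, k=0.25, n=0.5)    # diffusion-controlled
swelling = power_law_release(t, k=0.12, n=0.85)  # swelling-controlled
```

The two curves show the qualitative distinction the review draws between the basic controlling mechanisms: the diffusion-controlled profile rises steeply at early times, while the swelling-controlled one is closer to linear.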

  14. Model of inventory replenishment in periodic review accounting for the occurrence of shortages

    Directory of Open Access Journals (Sweden)

    Stanisław Krzyżaniak

    2014-03-01

    Full Text Available Background: Despite the development of alternative concepts of goods flow management, inventory management under conditions of random variations of demand is still an important issue, both from the point of view of inventory keeping and replenishment costs and the service level measured as the level of inventory availability. There are a number of inventory replenishment systems used in these conditions, but they are mostly developments of two basic systems: reorder point-based and periodic review-based. The paper deals with the latter system. Numerous studies indicate the need to improve the classical models describing that system, mainly because the model needs to be better adapted to actual conditions. This allows a correct selection of the parameters that control the inventory replenishment system in use and, as a result, the achievement of the expected economic effects. Methods: This research aimed at building a model of the periodic review system to reflect the relations (observed during simulation tests) between the volume of inventory shortages and the degree of accounting for so-called deferred demand, and the service level expressed as the probability of satisfying the demand in the review and the inventory replenishment cycle. The following model building and testing method has been applied: numerical simulation of inventory replenishment - detailed analysis of simulation results - construction of the model taking into account the regularities observed during the simulations - determination of principles of solving the system of relations creating the model - verification of the results obtained from the model using the results from simulation. Results: Presented are selected results of calculations based on classical formulas and using the developed model, which describe the relations between the service level and the parameters controlling the discussed inventory replenishment system. 
The results are compared to the simulation
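The periodic review system under random demand, with deferred (backordered) demand, can be sketched as a small simulation. The order-up-to levels, review period, lead time, and Poisson demand are all illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(3)

def periodic_review_fill_rate(order_up_to, review_period=5, lead_time=2,
                              mean_demand=10.0, n_days=20000):
    """Sketch of a periodic review, order-up-to (T, S) policy with deferred
    (backordered) demand. Every `review_period` days the inventory position
    is raised to `order_up_to`; orders arrive `lead_time` days later.
    Returns the fill rate: the fraction of demand met immediately from
    stock. All parameter values are illustrative."""
    on_hand, backlog = float(order_up_to), 0.0
    pipeline = {}                        # arrival day -> quantity on order
    served, total = 0.0, 0.0
    for day in range(n_days):
        on_hand += pipeline.pop(day, 0.0)
        cleared = min(on_hand, backlog)  # arrivals first clear deferred demand
        on_hand -= cleared
        backlog -= cleared
        if day % review_period == 0:
            position = on_hand - backlog + sum(pipeline.values())
            pipeline[day + lead_time] = max(order_up_to - position, 0.0)
        demand = rng.poisson(mean_demand)
        met = min(demand, on_hand)
        served += met
        total += demand
        on_hand -= met
        backlog += demand - met          # unmet demand is deferred
    return served / total

fill_low = periodic_review_fill_rate(order_up_to=60)
fill_high = periodic_review_fill_rate(order_up_to=90)
```

Raising the order-up-to level above the mean demand over the review-plus-lead-time window sharply lifts the service level, the kind of relation between control parameters and service level that the paper's model formalizes.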

  15. A two-phase moisture transport model accounting for sorption hysteresis in layered porous building constructions

    DEFF Research Database (Denmark)

    Johannesson, Björn; Janz, Mårten

    2009-01-01

    and exhibits different transport properties. A successful model of such a case may shed light on the performance of different constructions with regard to, for example, mould growth and freeze-thaw damage. For this purpose a model has been developed which is based on a two-phase flow, vapor and liquid water......, with account also taken of sorption hysteresis. The different materials in the considered layered construction are assigned different properties, i.e. vapor and liquid water diffusivities and boundary (wetting and drying) sorption curves. Further, the scanning behavior between wetting and drying boundary curves...

  16. @AACAnatomy twitter account goes live: A sustainable social media model for professional societies.

    Science.gov (United States)

    Benjamin, Hannah K; Royer, Danielle F

    2018-05-01

    Social media, with its capabilities of fast, global information sharing, provides a useful medium for professional development, connecting and collaborating with peers, and outreach. The goals of this study were to describe a new, sustainable model for Twitter use by professional societies, and analyze its impact on @AACAnatomy, the Twitter account of the American Association of Clinical Anatomists. Under supervision of an Association committee member, an anatomy graduate student developed a protocol for publishing daily tweets for @AACAnatomy. Five tweet categories were used: Research, Announcements, Replies, Engagement, and Community. Analytics from the 6-month pilot phase were used to assess the impact of the new model. @AACAnatomy had a steady average growth of 33 new followers per month, with less than 10% likely representing Association members. Research tweets, based on Clinical Anatomy articles with an abstract link, were the most shared, averaging 5,451 impressions, 31 link clicks, and nine #ClinAnat hashtag clicks per month. However, tweets from non-Research categories accounted for the highest impression and engagement metrics in four out of six months. For all tweet categories, monthly averages show consistent interaction of followers with the account. Daily tweet publication resulted in a 103% follower increase. An active Twitter account successfully facilitated regular engagement with @AACAnatomy followers and the promotion of clinical anatomy topics within a broad community. This Twitter model has the potential for implementation by other societies as a sustainable medium for outreach, networking, collaboration, and member engagement. Clin. Anat. 31:566-575, 2018. © 2017 Wiley Periodicals, Inc.

  17. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    Science.gov (United States)

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled Individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  18. Accounting for standard errors of vision-specific latent trait in regression models.

    Science.gov (United States)

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of the Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analysis performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and the multiple linear regression model for the assessment of the association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; its effectiveness was compared in our simulation study with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration SEs of the estimated Rasch-scaled scores. The two models on real data differed in effect size estimates and in the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (average of 5-fold decrease in bias) with comparable power and precision in estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Association analyses of patient-reported data scaled using Rasch analysis techniques typically do not take into account the SE of the latent trait. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits
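Why ignoring the SE of an estimated latent trait distorts association estimates can be shown with a deliberately simplified simulation: when a noisy trait estimate enters a regression as a predictor, the naive slope is attenuated by the reliability ratio, and scaling by it (a regression-calibration step, not the paper's full HB joint model) restores the association. All values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

# Simplified illustration, not the HB joint model: the "Rasch estimate"
# is the true latent trait plus measurement noise with a known SE.
n, true_beta, se_trait = 5000, 0.8, 0.7
theta = rng.normal(0.0, 1.0, n)                     # true latent trait
theta_hat = theta + rng.normal(0.0, se_trait, n)    # estimate with SE
y = true_beta * theta + rng.normal(0.0, 1.0, n)     # associated outcome

# Naive slope from regressing y on the noisy estimate: attenuated.
naive = np.cov(theta_hat, y)[0, 1] / np.var(theta_hat)

# Reliability = var(theta) / var(theta_hat); dividing by it corrects
# the attenuation (regression calibration).
reliability = 1.0 / (1.0 + se_trait**2)
corrected = naive / reliability
```

The HB one-stage approach achieves the same end more generally, by propagating the latent-trait uncertainty through the joint posterior rather than correcting after the fact.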

  19. Comparing dark matter models, modified Newtonian dynamics and modified gravity in accounting for galaxy rotation curves

    Science.gov (United States)

    Li, Xin; Tang, Li; Lin, Hai-Nan

    2017-05-01

    We compare six models (including the baryonic model, two dark matter models, two modified Newtonian dynamics models and one modified gravity model) in accounting for galaxy rotation curves. For the dark matter models, we assume NFW profile and core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom’s MOND theory with two different interpolation functions, the standard and the simple interpolation functions. For the modified gravity, we focus on Moffat’s MSTG theory. We fit these models to the observed rotation curves of 9 high-surface brightness and 9 low-surface brightness galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies can be best fitted by the baryonic model without involving nonluminous dark matter. MOND can fit the largest number of galaxies, and only one galaxy can be best fitted by the MSTG model. Core-modified model fits about half the LSB galaxies well, but no HSB galaxies, while the NFW model fits only a small fraction of HSB galaxies but no LSB galaxies. This may imply that the oversimplified NFW and core-modified profiles cannot model the postulated dark matter haloes well. Supported by Fundamental Research Funds for the Central Universities (106112016CDJCR301206), National Natural Science Fund of China (11305181, 11547305 and 11603005), and Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)
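MOND with the "simple" interpolation function mu(x) = x/(1+x) admits a closed-form acceleration for a point-mass baryonic model, which already reproduces the flat outer rotation curve that dark matter profiles are otherwise invoked to explain. The galaxy mass below is an illustrative assumption, not a fitted value from the sample:

```python
import numpy as np

G = 6.674e-11            # m^3 kg^-1 s^-2
A0 = 1.2e-10             # m s^-2, MOND acceleration scale
M = 1.0e41               # kg, baryonic mass of a toy point-mass galaxy

r = np.logspace(19.5, 21.5, 60)          # m, roughly 1-100 kpc
g_newton = G * M / r**2

# Simple interpolation function mu(x) = x / (1 + x): solving
# g * mu(g / a0) = g_N for g gives a quadratic with positive root
#   g = g_N / 2 + sqrt(g_N**2 / 4 + g_N * a0).
g_mond = 0.5 * g_newton + np.sqrt(0.25 * g_newton**2 + g_newton * A0)

v_newton = np.sqrt(g_newton * r)         # Keplerian decline ~ r**-1/2
v_mond = np.sqrt(g_mond * r)             # flattens where g_N << a0
```

In the deep-MOND limit g approaches sqrt(g_N * a0), so v**4 tends to G*M*a0 and the curve goes flat, which is why MOND fits the largest number of galaxies in the comparison above.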

  20. An analytical model accounting for tip shape evolution during atom probe analysis of heterogeneous materials.

    Science.gov (United States)

    Rolland, N; Larson, D J; Geiser, B P; Duguay, S; Vurpillot, F; Blavette, D

    2015-12-01

    An analytical model describing the field evaporation dynamics of a tip made of a thin layer deposited on a substrate is presented in this paper. The difference in evaporation field between the materials is taken into account in this approach, in which the tip shape is modeled at a mesoscopic scale. It was found that the absence of a sharp edge on the surface is a sufficient condition to derive the morphological evolution during successive evaporation of the layers. This modeling gives an instantaneous and smooth analytical representation of the surface that shows good agreement with finite difference simulation results, and a specific regime of evaporation was highlighted when the substrate is a low evaporation field phase. In addition, the model makes it possible to theoretically calculate the analyzed volume of the tip, potentially opening up new horizons for atom probe tomographic reconstruction. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Regional Balance Model of Financial Flows through Sectoral Approaches System of National Accounts

    Directory of Open Access Journals (Sweden)

    Ekaterina Aleksandrovna Zaharchuk

    2017-03-01

    Full Text Available The main purpose of the study reported in this article is the theoretical and methodological substantiation of the possibility of building a regional balance model of financial flows consistent with the principles of constructing the System of National Accounts (SNA). The paper summarizes the international experience of building regional accounts in the SNA and reflects the advantages and disadvantages of the existing techniques for constructing a Social Accounting Matrix. The authors propose an approach to building the regional balance model of financial flows based on disaggregated tables of the formation, distribution and use of the value added of a territory within the institutional sectors of the SNA (corporations, public administration, households). To resolve the problem of transferring value added from industries to sectors, the authors offer an approach to accounting for the formation, distribution and use of value added within the institutional sectors of the territories. The methods of calculation are based on the publicly available information of statistics agencies and federal services. The authors provide a scheme of the interrelations of the indicators of the regional balance model of financial flows. It makes it possible to mutually coordinate the movement of regional resources across the «corporations», «public administration» and «households» sectors, and the cash flows of the region by sector and direction of use. As a result, they form a single account of the formation and distribution of territorial financial resources, which is a regional balance model of financial flows. This matrix shows the distribution of financial resources by income sources and sectors, where the components of the formation (compensation, taxes and gross profit), distribution (transfers and payments) and use (final consumption, accumulation) of value added are

  2. Accounting for Zero Inflation of Mussel Parasite Counts Using Discrete Regression Models

    Directory of Open Access Journals (Sweden)

    Emel Çankaya

    2017-06-01

    Full Text Available In many ecological applications, the absence of species is inevitable, due either to detection faults in samples or to conditions uninhabitable for their existence, resulting in a high number of zero counts or abundances. The usual practice for modelling such data is regression modelling of log(abundance+1), and it is well known that the resulting model is inadequate for prediction purposes. New discrete models accounting for zero abundances, namely zero-inflated regression (ZIP and ZINB), Hurdle-Poisson (HP) and Hurdle-Negative Binomial (HNB), amongst others, are widely preferred to the classical regression models. Because mussels are among the most economically important aquatic products of Turkey, the purpose of this study is to examine the performance of these four models in determining the significant biotic and abiotic factors affecting the occurrence of the parasite Nematopsis legeri, which harms Mediterranean mussels (Mytilus galloprovincialis L.). The data collected from three coastal regions of Sinop city in Turkey showed that, on average, more than 50% of parasite counts are zero-valued, and model comparisons were based on information criteria. The results showed that the probability of occurrence of this parasite is best formulated here by the ZINB or HNB models, and the influential factors in the models were found to correspond to the ecological differences among the regions.
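The excess-zero mechanism behind the ZIP model can be sketched and compared with a plain Poisson fit via AIC on simulated counts. The inflation probability and Poisson rate below are illustrative (not the mussel data), and the grid-search MLE is a crude stand-in for proper numerical optimization:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(5)

# Zero-inflated counts: with probability pi the host is structurally
# parasite-free (an "excess" zero); otherwise counts follow Poisson(lam).
pi_true, lam_true, n = 0.55, 4.0, 1000
excess_zero = rng.random(n) < pi_true
y = np.where(excess_zero, 0, rng.poisson(lam_true, n))
lgam = np.array([lgamma(v + 1.0) for v in y])   # log(y!) precomputed

def loglik_poisson(lam):
    return np.sum(y * np.log(lam) - lam - lgam)

def loglik_zip(pi, lam):
    zero = y == 0
    ll_zero = zero.sum() * np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.sum(np.log(1.0 - pi) + y[~zero] * np.log(lam)
                    - lam - lgam[~zero])
    return ll_zero + ll_pos

# Crude grid-search MLE for ZIP; the Poisson MLE is just the sample mean.
best = max((loglik_zip(p, l), p, l)
           for p in np.linspace(0.01, 0.99, 99)
           for l in np.linspace(0.5, 8.0, 76))
aic_zip = 2 * 2 - 2 * best[0]
aic_poisson = 2 * 1 - 2 * loglik_poisson(y.mean())
```

With more than half the counts zero, the information criterion strongly favors the zero-inflated formulation, mirroring the model-selection outcome reported for the parasite data.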

  3. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together, our current work

  4. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together...
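A toy alternation mechanism in the spirit of the account above (residual evidence for the suppressed percept accumulating as a prediction error until it overcomes perceptual stabilization) can be sketched as follows; this is an illustrative caricature with hypothetical parameters, not the paper's fitted model:

```python
import numpy as np

def simulate_bistable(steps=20_000, dt=0.01, gain=1.0, stab=2.0, noise=0.3):
    """Toy sketch: residual evidence for the suppressed percept accumulates
    as a prediction error; when it exceeds the dominant percept's
    stabilization level, perception switches and the error resets.
    All parameter values are hypothetical."""
    rng = np.random.default_rng(5)
    percept, err, t_last, durations = 0, 0.0, 0.0, []
    for i in range(1, steps + 1):
        # noisy accumulation of prediction error from residual evidence
        err = max(err + gain * dt + noise * np.sqrt(dt) * rng.normal(), 0.0)
        if err > stab:                      # error outweighs stabilization
            durations.append(i * dt - t_last)
            t_last, percept, err = i * dt, 1 - percept, 0.0
    return np.array(durations)

durations = simulate_bistable()
print(len(durations) > 5)                  # spontaneous alternations occur
print(durations.std() > 0)                 # dominance times are variable
```

In this caricature, raising `stab` (the stabilization parameter) lengthens dominance durations, mirroring the correlation with transition frequency reported in the abstract.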

  5. Assessing and accounting for the effects of model error in Bayesian solutions to hydrogeophysical inverse problems

    Science.gov (United States)

    Koepke, C.; Irving, J.; Roubinet, D.

    2014-12-01

    Geophysical methods have gained much interest in hydrology over the past two decades because of their ability to provide estimates of the spatial distribution of subsurface properties at a scale that is often relevant to key hydrological processes. Because of an increased desire to quantify uncertainty in hydrological predictions, many hydrogeophysical inverse problems have recently been posed within a Bayesian framework, such that estimates of hydrological properties and their corresponding uncertainties can be obtained. With the Bayesian approach, it is often necessary to make significant approximations to the associated hydrological and geophysical forward models such that stochastic sampling from the posterior distribution, for example using Markov-chain-Monte-Carlo (MCMC) methods, is computationally feasible. These approximations lead to model structural errors, which, so far, have not been properly treated in hydrogeophysical inverse problems. Here, we study the inverse problem of estimating unsaturated hydraulic properties, namely the van Genuchten-Mualem (VGM) parameters, in a layered subsurface from time-lapse, zero-offset-profile (ZOP) ground penetrating radar (GPR) data, collected over the course of an infiltration experiment. In particular, we investigate the effects of assumptions made for computational tractability of the stochastic inversion on model prediction errors as a function of depth and time. These assumptions are that (i) infiltration is purely vertical and can be modeled by the 1D Richards equation, and (ii) the petrophysical relationship between water content and relative dielectric permittivity is known. Results indicate that model errors for this problem are far from Gaussian and independently identically distributed, which has been the common assumption in previous efforts in this domain. In order to develop a more appropriate likelihood formulation, we use (i) a stochastic description of the model error that is obtained through

  6. Accounting for and predicting the influence of spatial autocorrelation in water quality modeling

    Science.gov (United States)

    Miralha, L.; Kim, D.

    2017-12-01

    Although many studies have attempted to investigate the spatial trends of water quality, more attention is yet to be paid to the consequences of considering and ignoring the spatial autocorrelation (SAC) that exists in water quality parameters. Several studies have mentioned the importance of accounting for SAC in water quality modeling, as well as the differences in outcomes between models that account for and ignore SAC. However, the capacity to predict the magnitude of such differences is still ambiguous. In this study, we hypothesized that SAC inherently possessed by a response variable (i.e., water quality parameter) influences the outcomes of spatial modeling. We evaluated whether the level of inherent SAC is associated with changes in R-Squared, Akaike Information Criterion (AIC), and residual SAC (rSAC), after accounting for SAC during modeling procedure. The main objective was to analyze if water quality parameters with higher Moran's I values (inherent SAC measure) undergo a greater increase in R² and a greater reduction in both AIC and rSAC. We compared a non-spatial model (OLS) to two spatial regression approaches (spatial lag and error models). Predictor variables were the principal components of topographic (elevation and slope), land cover, and hydrological soil group variables. We acquired these data from federal online sources (e.g. USGS). Ten watersheds were selected, each in a different state of the USA. Results revealed that water quality parameters with higher inherent SAC showed substantial increase in R² and decrease in rSAC after performing spatial regressions. However, AIC values did not show significant changes. Overall, the higher the level of inherent SAC in water quality variables, the greater improvement of model performance. This indicates a linear and direct relationship between the spatial model outcomes (R² and rSAC) and the degree of SAC in each water quality variable. Therefore, our study suggests that the inherent level of
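The inherent SAC measure used above, global Moran's I, can be computed directly; the following minimal sketch uses a hypothetical row-standardized weight matrix and toy data rather than the study's watershed data:

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x under spatial weight matrix W.

    W is row-standardized here so that each row sums to 1 (a common
    convention); positive I indicates spatial clustering of similar values.
    """
    n = len(x)
    z = x - x.mean()
    num = z @ W @ z          # weighted cross-products of deviations
    den = z @ z              # total sum of squared deviations
    return (n / W.sum()) * num / den

# Toy example: 4 sites on a line, neighbours share an edge.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)   # row-standardize

x_clustered = np.array([1.0, 1.1, 3.0, 3.2])  # similar values adjacent
x_mixed     = np.array([1.0, 3.0, 1.1, 3.2])  # similar values separated

print(morans_i(x_clustered, W) > morans_i(x_mixed, W))  # clustering scores higher
```

In practice a library such as PySAL's `esda.Moran` would also supply significance tests; the point here is only that the statistic rises when neighbouring sites hold similar values.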

  7. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion.
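The projection step described above can be sketched as follows; the forward models, dictionary construction and `estimate_model_error` helper are hypothetical stand-ins for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_model_error(residual, theta, dictionary, k=3):
    """Project the residual onto a local basis of model-error samples.

    dictionary: list of (theta_i, error_i) pairs, where error_i is the
    difference between a detailed and an approximate forward run."""
    thetas = np.array([t for t, _ in dictionary])
    errors = np.array([e for _, e in dictionary])
    # K-nearest dictionary entries in parameter space
    idx = np.argsort(np.linalg.norm(thetas - theta, axis=1))[:k]
    B = errors[idx].T                      # columns span the local error basis
    coef, *_ = np.linalg.lstsq(B, residual, rcond=None)
    return B @ coef                        # model-error component of residual

# Hypothetical 1D forward models: detailed vs a fast linearized approximation
grid = np.linspace(0, 3, 20)
detailed = lambda th: np.sin(grid * th[0]) + th[1]
approx   = lambda th: grid * th[0] + th[1]

dictionary = []
for _ in range(30):
    th = rng.uniform(0.1, 1.0, size=2)
    dictionary.append((th, detailed(th) - approx(th)))

th_test = np.array([0.5, 0.5])
data = detailed(th_test) + rng.normal(0, 0.01, 20)   # "measured" data
residual = data - approx(th_test)
err_hat = estimate_model_error(residual, th_test, dictionary, k=5)
# Removing the estimated model error shrinks the residual
print(np.linalg.norm(residual - err_hat) < np.linalg.norm(residual))
```

Because the estimate is a least-squares projection onto the local basis, the corrected residual can never be larger than the raw one; how much it shrinks depends on how well the dictionary neighbours represent the true model error.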

  8. A synthesis of literature on evaluation of models for policy applications, with implications for forest carbon accounting

    Science.gov (United States)

    Stephen P. Prisley; Michael J. Mortimer

    2004-01-01

    Forest modeling has moved beyond the realm of scientific discovery into the policy arena. The example that motivates this review is the application of models for forest carbon accounting. As negotiations determine the terms under which forest carbon will be accounted, reported, and potentially traded, guidelines and standards are being developed to ensure consistency,...

  9. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    International Nuclear Information System (INIS)

    Ahlroth, S.

    2001-01-01

This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO2 and NOx emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  11. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    Science.gov (United States)

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties for each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Mathematical modeling of halophytic plant productivity taking into account the temperature factor and soil salinity level

    Science.gov (United States)

    Natalia, Slyusar; Pisman, Tamara; Pechurkin, Nikolai S.

    Among the most challenging tasks faced by contemporary ecology is the modeling of the biological production process in different plant communities. The difficulty of the task is determined by the complexity of the study material. Models showing the influence of climate and climate change on plant growth, which would also involve soil site parameters, could be of both practical and theoretical interest. In this work a mathematical model has been constructed to describe the growth dynamics of different plant communities of halophytic meadows as dependent upon the temperature factor and soil salinity level, which could be further used to predict yields of these plant communities. The study was performed on plants of halophytic meadows in the coastal area of a lake in the Republic of Khakasia in 2004 - 2006. Every plant community grew on soil of a different level of salinity - the amount of the solid residue of the saline soil aqueous extract. The mathematical model was analyzed using field data of 2004 and 2006, the years of contrasting air temperatures. Results of model investigations show that there is a correlation between plant growth and the air temperature for plant communities growing on soils containing the lowest (0.1 ...). Thus, results of our study, in which we used a mathematical model describing the development of plant communities of halophytic meadows and field measurements, suggest that both climate conditions (temperature) and ecological factors of the plants' habitat (soil salinity level) should be taken into account when constructing models for predicting crop yields.

  13. The Dynamics of the Accounting Models and Their Impact upon the Financial Risk Evaluation

    Directory of Open Access Journals (Sweden)

    Victor Munteanu

    2015-03-01

    Full Text Available All companies are exposed to risks, and circumstances can take an unexpected turn at some point in time. What a company can control is how these risks are managed and, first of all, the steps taken to avoid them. The way in which scientific expertise, data and advice on devising risk strategies are understood, represented and incorporated into a structured system has evolved visibly from the 19th century to the present, along with the accounting models and the main factors that have triggered greater concern in this sector.

  14. System modeling of spent fuel transfers at EBR-II

    International Nuclear Information System (INIS)

    Imel, G.R.; Houshyar, A.

    1994-01-01

    The unloading of spent fuel from the Experimental Breeder Reactor-II (EBR-II) for interim storage and subsequent processing in the Fuel Cycle Facility (FCF) is a multi-stage process, involving complex operations at a minimum of four different facilities at the Argonne National Laboratory-West (ANL-W) site. Each stage typically has complicated handling and/or cooling equipment that must be periodically maintained, leading to both planned and unplanned downtime. A program was initiated in October 1993 to replace the 330 depleted uranium blanket subassemblies (S/As) with stainless steel reflectors. Routine operation of the reactor for fuels performance and materials testing occurred simultaneously in FY 1994 with the blanket unloading. In the summer of 1994, Congress dictated the October 1, 1994 shutdown of EBR-II. Consequently, all blanket S/As and fueled drivers will be removed from the reactor tank and replaced with stainless steel assemblies (which are needed to maintain a precise configuration within the grid so that the under-sodium fuel handling equipment can function). A system modeling effort was conducted to determine the means to achieve the objective for the blanket and fuel unloading program, which under the current plan requires complete unloading of the primary tank of all fueled assemblies in 2 1/2 years. A simulation model of the fuel handling system at ANL-W was developed and used to analyze different unloading scenarios; the model has provided valuable information about required resources and modifications to equipment and procedures. This paper reports the results of this modeling effort.

  15. A three-dimensional model of mammalian tyrosinase active site accounting for loss of function mutations.

    Science.gov (United States)

    Schweikardt, Thorsten; Olivares, Concepción; Solano, Francisco; Jaenicke, Elmar; García-Borrón, José Carlos; Decker, Heinz

    2007-10-01

    Tyrosinases are the first and rate-limiting enzymes in the synthesis of melanin pigments responsible for colouring hair, skin and eyes. Mutation of tyrosinases often decreases melanin production resulting in albinism, but the effects are not always understood at the molecular level. Homology modelling of mouse tyrosinase based on recently published crystal structures of non-mammalian tyrosinases provides an active site model accounting for loss-of-function mutations. According to the model, the copper-binding histidines are located in a helix bundle comprising four densely packed helices. A loop containing residues M374, S375 and V377 connects the CuA and CuB centres, with the peptide oxygens of M374 and V377 serving as hydrogen acceptors for the NH-groups of the imidazole rings of the copper-binding His367 and His180. Therefore, this loop is essential for the stability of the active site architecture. A double substitution (374)MS(375) → (374)GG(375) or a single M374G mutation leads to a local perturbation of the protein matrix at the active site affecting the orientation of the H367 side chain, which may be unable to bind CuB reliably, resulting in loss of activity. The model also accounts for loss of function in two naturally occurring albino mutations, S380P and V393F. The hydroxyl group in S380 contributes to the correct orientation of M374, and the substitution of V393 for a bulkier phenylalanine sterically impedes correct side chain packing at the active site. Therefore, our model explains the mechanistic necessity for conservation of not only active site histidines but also adjacent amino acids in tyrosinase.

  16. Palaeomagnetic dating method accounting for post-depositional remanence and its application to geomagnetic field modelling

    Science.gov (United States)

    Nilsson, A.; Suttie, N.

    2016-12-01

    Sedimentary palaeomagnetic data may exhibit some degree of smoothing of the recorded field due to the gradual processes by which the magnetic signal is 'locked in' over time. Here we present a new Bayesian method to construct age-depth models based on palaeomagnetic data, taking into account and correcting for potential lock-in delay. The age-depth model is built on the widely used "Bacon" dating software by Blaauw and Christen (2011, Bayesian Analysis 6, 457-474) and is designed to combine both radiocarbon and palaeomagnetic measurements. To our knowledge, this is the first palaeomagnetic dating method that addresses the potential problems related to post-depositional remanent magnetisation acquisition in age-depth modelling. Age-depth models, including site-specific lock-in depth and lock-in filter function, produced with this method are shown to be consistent with independent results based on radiocarbon wiggle-match dated sediment sections. Besides its primary use as a dating tool, our new method can also be used specifically to identify the most likely lock-in parameters for a specific record. We explore the potential to use these results to construct high-resolution geomagnetic field models based on sedimentary palaeomagnetic data, adjusting for smoothing induced by post-depositional remanent magnetisation acquisition. Potentially, this technique could enable reconstructions of the Holocene geomagnetic field with the same amplitude of variability observed in archaeomagnetic field models for the past three millennia.
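The smoothing induced by gradual lock-in can be pictured as a convolution of the field history with a lock-in filter over depth. The sketch below assumes a simple triangular filter and synthetic data; the paper's Bayesian machinery is far richer, so this only illustrates why short-lived field features come out attenuated:

```python
import numpy as np

def lockin_smooth(signal, filter_depth):
    """Smooth a depth series as a post-depositional lock-in process: the
    remanence recorded at a given depth is a weighted average of the field
    over the interval in which the signal was progressively locked in.
    A simple triangular lock-in filter (in samples) is assumed here."""
    w = np.arange(filter_depth, 0, -1, dtype=float)
    w /= w.sum()                            # normalize filter weights
    # each recorded value mixes the field over the lock-in interval
    return np.convolve(signal, w, mode="valid")

# Hypothetical declination record: a sharp feature is attenuated by lock-in
depth = np.arange(100)
field = np.where((depth > 40) & (depth < 50), 10.0, 0.0)  # short-lived swing
recorded = lockin_smooth(field, filter_depth=15)

print(recorded.max() < field.max())   # amplitude is damped by lock-in
```

Inverting for the lock-in parameters, as the method above does, amounts to estimating the shape and length of `w` jointly with the age-depth model, so that the damping can be corrected for.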

  17. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    Directory of Open Access Journals (Sweden)

    Czoli Christine

    2011-10-01

    Full Text Available Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the pediatric researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude either that physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles, or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment.

  18. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-02-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
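The classification of individual responses by response time can be sketched with a two-process Bernoulli mixture: a fast "guessing" process and a slow "solution" process with different success probabilities, where the prior weight of the slow process grows with response time. All parameter values (`p_fast`, `p_slow`, the logistic weights) are hypothetical, and the real model estimates them from data:

```python
import numpy as np

def process_posterior(correct, log_rt, p_fast=0.25, p_slow=0.85,
                      w0=0.0, w1=2.0):
    """Posterior probability that a single item response came from the slow
    'solution' process rather than a fast 'guessing' process.

    The prior weight of the slow process grows with log response time via a
    logistic link; w0 and w1 are hypothetical mixing parameters."""
    prior_slow = 1.0 / (1.0 + np.exp(-(w0 + w1 * log_rt)))
    lik_slow = p_slow if correct else 1.0 - p_slow
    lik_fast = p_fast if correct else 1.0 - p_fast
    return (prior_slow * lik_slow
            / (prior_slow * lik_slow + (1 - prior_slow) * lik_fast))

# A correct answer after a long RT is most plausibly from the solution process
print(process_posterior(True, log_rt=1.5) > process_posterior(True, log_rt=-1.5))
print(process_posterior(True, log_rt=0.0) > process_posterior(False, log_rt=0.0))
```

This element-wise posterior is what distinguishes response mixture modeling from traditional mixture modeling: each response in the vector, not the whole vector, gets its own class probability.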

  19. A Modified Model to Estimate Building Rental Multipliers Accounting for Ad Valorem Operating Expenses

    Directory of Open Access Journals (Sweden)

    Smolyak S.A.

    2016-09-01

    Full Text Available To develop the ideas on building element valuation contained in the first article on the subject published in REMV, we propose an elaboration of the approach accounting for ad valorem expenses incidental to property management, such as land taxes, income/capital gains tax, and insurance premium costs; all such costs, being of an ad valorem nature in the first instance, cause circularity in the logic of the model, which, however, is not intractable under the proposed approach. The resulting formulas for carrying out practical estimation of building rental multipliers and, in consequence, of building values, turn out to be somewhat modified, and we demonstrate the sensitivity of the developed approach to the impact of these ad valorem factors. On the other hand, it is demonstrated that building depreciation charges, which should seemingly be included among the considered ad valorem factors, cancel out and do not have any impact on the resulting estimates. However, treating the depreciation of buildings in quantifiable economic terms as a reduction in derivable operating benefits over time (instead of mere physical indications, such as age), we also demonstrate that the approach has implications for estimating the economic service lives of buildings and can be practical when used in conjunction with the market-related approach to valuation - from which the requisite model inputs can be extracted as shown in the final part of the paper.
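The circularity caused by ad valorem charges can be made concrete with a toy direct-capitalization example: value depends on net income, which itself depends on value through the ad valorem rate. Under these simplified assumptions the fixed point has a closed form; all figures are hypothetical and the paper's actual multiplier formulas are more elaborate:

```python
def building_value(gross_rent, fixed_costs, cap_rate, advalorem_rate):
    """Value under ad valorem operating expenses.

    Net income depends on value through ad valorem charges (tax, insurance):
        V = (gross_rent - fixed_costs - advalorem_rate * V) / cap_rate
    Solving the circularity algebraically:
        V = (gross_rent - fixed_costs) / (cap_rate + advalorem_rate)
    """
    return (gross_rent - fixed_costs) / (cap_rate + advalorem_rate)

def building_value_iterative(gross_rent, fixed_costs, cap_rate,
                             advalorem_rate, n=50):
    """Same circularity resolved by fixed-point iteration, for illustration."""
    v = gross_rent / cap_rate          # naive starting guess ignoring taxes
    for _ in range(n):
        v = (gross_rent - fixed_costs - advalorem_rate * v) / cap_rate
    return v

v_closed = building_value(120_000, 20_000, 0.08, 0.02)
v_iter = building_value_iterative(120_000, 20_000, 0.08, 0.02)
print(round(v_closed))                 # 1000000 under these hypothetical inputs
print(abs(v_closed - v_iter) < 1e-6)   # iteration converges to the same value
```

The closed form shows why the circularity is "not intractable": the ad valorem rate simply adds to the capitalization rate in the denominator.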

  20. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    Science.gov (United States)

    Rozanov, V. V.; Dinter, T.; Rozanov, A. V.; Wolanin, A.; Bracher, A.; Burrows, J. P.

    2017-06-01

    SCIATRAN is a comprehensive software package which is designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18-40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean-atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which accurately takes into account inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean-atmosphere radiative transfer solver presented by Rozanov et al. [61] we have implemented the simulation of rotational Raman scattering, vibrational Raman scattering, chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how some selected radiative transfer problems are solved using the SCIATRAN package. In addition we present selected comparisons of SCIATRAN simulations with published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package along with a detailed User's Guide is made available for scientists and students, who are undertaking their own research typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen: http://www.iup.physik.uni-bremen.de.

  1. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    Science.gov (United States)

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
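The underestimation the author describes is the classical regression-dilution (attenuation) effect. The simulated sketch below is not SEM itself and uses hypothetical effect sizes; it shows how measurement error in a proxy shrinks the estimated slope toward zero, and how a known reliability would correct it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Latent "true" lifetime reproductive effort and its causal effect on survival
effort = rng.normal(0, 1, n)
beta_true = -0.5
survival = beta_true * effort + rng.normal(0, 1, n)

# The observed proxy measures effort with error -> the slope is attenuated
proxy = effort + rng.normal(0, 1, n)        # reliability approx. 0.5 here

slope_naive = np.polyfit(proxy, survival, 1)[0]
print(abs(slope_naive) < abs(beta_true))    # attenuated toward zero

# With known reliability, the classical correction recovers the effect
reliability = np.var(effort) / np.var(proxy)
slope_corrected = slope_naive / reliability
print(abs(slope_corrected - beta_true) < 0.1)
```

SEM with multiple indicators achieves the same end without needing the reliability to be known in advance: the latent variable's measurement model estimates it from the covariances among indicators.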

  2. Implementation of a cost-accounting model in a biobank: practical implications.

    Science.gov (United States)

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use these resources more efficiently is making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences on the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real/active biobank, including a comprehensive explanation on how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. Global analysis and discussion provides valuable information for internal biobank management and even for strategic decisions at the research and development governmental policies level.

  3. Accounting for rainfall evaporation using dual-polarization radar and mesoscale model data

    Science.gov (United States)

    Pallardy, Quinn; Fox, Neil I.

    2018-02-01

    Implementation of dual-polarization radar should allow for improvements in quantitative precipitation estimates, as dual-polarization capability allows for the retrieval of the second moment of the gamma drop size distribution. Knowledge of the shape of the DSD can then be used in combination with mesoscale model data to estimate the motion and evaporation of each size of drop falling from the height at which precipitation is observed by the radar to the surface. Using data from Central Missouri at a range between 130 and 140 km from the operational National Weather Service radar, a raindrop tracing scheme was developed to account for the effects of evaporation, whereby individual raindrops hitting the ground were traced to the point in space and time where they interacted with the radar beam. The results indicated evaporation played a significant role in radar rainfall estimation in situations where the atmosphere was relatively dry. Improvements in radar-estimated rainfall were also found in these situations by accounting for evaporation. The conclusion was made that the effects of raindrop evaporation were significant enough to warrant further research into the inclusion of high-resolution model data in the radar rainfall estimation process for appropriate locations.

  4. An extended car-following model accounting for the average headway effect in intelligent transportation system

    Science.gov (United States)

    Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming

    2017-04-01

    In this paper, an extended car-following model is proposed to simulate traffic flow by considering the average headway of the preceding vehicles group in an intelligent transportation systems environment. The stability condition of this model is obtained by using linear stability analysis. The phase diagram can be divided into three regions, classified as stable, metastable and unstable. The theoretical result shows that the average headway plays an important role in improving the stability of the traffic system. The mKdV equation near the critical point is derived to describe the evolution properties of traffic density waves by applying the reductive perturbation method. Furthermore, the simulation of the space-time evolution of the vehicle headway shows that traffic jams can be suppressed efficiently by taking into account the average headway effect, and the analytical result is consistent with the simulation.
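A minimal numerical sketch of an optimal-velocity-type model with an averaged-headway stimulus is given below. The OV function is the classic Bando form and the mixing weight, group size and sensitivities are illustrative, not the paper's; the sketch only demonstrates the stable/unstable regimes that the linear stability analysis predicts:

```python
import numpy as np

def V(h):
    """Optimal velocity function (Bando et al. tanh form)."""
    return np.tanh(h - 2.0) + np.tanh(2.0)

def simulate(a, p=0.2, m=3, n_cars=100, road=200.0, dt=0.05, steps=6000):
    """Circular-road Euler simulation of an OV-type model whose stimulus
    mixes each car's own headway with the average of the m headways ahead
    (weight p); p=0 recovers the standard OV model."""
    rng = np.random.default_rng(7)
    x = np.linspace(0.0, road, n_cars, endpoint=False) + rng.normal(0, 0.1, n_cars)
    v = np.full(n_cars, V(road / n_cars))          # start at uniform flow
    for _ in range(steps):
        h = (np.roll(x, -1) - x) % road            # headway to the car ahead
        h_avg = np.mean([np.roll(h, -j) for j in range(m)], axis=0)
        v += a * (V((1 - p) * h + p * h_avg) - v) * dt
        v = np.maximum(v, 0.0)                     # no reversing
        x = (x + v * dt) % road
    h = (np.roll(x, -1) - x) % road
    return h.std()                                 # headway spread ~ jam strength

jam = simulate(a=1.0)    # sensitivity below the stability threshold -> jams
free = simulate(a=2.5)   # above the threshold -> uniform flow persists
print(jam > free)
```

For the standard OV model the long-wave stability condition is a > 2V'(h*); averaging over the preceding group adds damping and shifts this threshold, which is the stabilizing effect the abstract reports.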

  5. MODELLING OF THERMOELASTIC TRANSIENT CONTACT INTERACTION FOR BINARY BEARING TAKING INTO ACCOUNT CONVECTION

    Directory of Open Access Journals (Sweden)

    Igor KOLESNIKOV

    2016-12-01

    Full Text Available The serviceability of metal-polymer "dry-friction" sliding bearings depends on many parameters, including the rotational speed, the friction coefficient, the thermal and mechanical properties of the bearing system and, as a result, the contact temperature. The objective of this study is to develop a computational model of the metal-polymer bearing, to determine from this model the temperature distribution and the equivalent and contact stresses for the elements of the bearing arrangement, and to select optimal parameters of the bearing system to achieve thermal balance. The static problem for a combined sliding bearing, accounting for heat generation due to friction, was studied in [1]; the dynamic thermoelastic problem of shaft rotation in single- and double-layer bronze bearings was investigated in [2, 3].

  6. Accounting for genetic interactions improves modeling of individual quantitative trait phenotypes in yeast.

    Science.gov (United States)

    Forsberg, Simon K G; Bloom, Joshua S; Sadhu, Meru J; Kruglyak, Leonid; Carlborg, Örjan

    2017-04-01

    Experiments in model organisms report abundant genetic interactions underlying biologically important traits, whereas quantitative genetics theory predicts, and data support, the notion that most genetic variance in populations is additive. Here we describe networks of capacitating genetic interactions that contribute to quantitative trait variation in a large yeast intercross population. The additive variance explained by individual loci in a network is highly dependent on the allele frequencies of the interacting loci. Modeling of phenotypes for multilocus genotype classes in the epistatic networks is often improved by accounting for the interactions. We discuss the implications of these results for attempts to dissect genetic architectures and to predict individual phenotypes and long-term responses to selection.

  7. Calibration of an experimental model of tritium storage bed designed for 'in situ' accountability

    International Nuclear Information System (INIS)

    Bidica, Nicolae; Stefanescu, Ioan; Bucur, Ciprian; Bulubasa, Gheorghe; Deaconu, Mariea

    2009-01-01

    Full text: Objectives: Tritium accountancy of the storage beds in tritium facilities is an important issue for tritium inventory control. The purpose of our work was to calibrate an experimental model of a tritium storage bed of special design, using electric heaters to simulate tritium decay, and to evaluate the detection limit of the accountancy method. The objective of this paper is to present the experimental method used for calibration of the storage bed and the experimental results, consisting of calibration curves and the detection limit. Our method is based on a 'self-assaying' tritium storage bed. The design of our storage bed consists, in principle, of a uniform distribution of the storage material on several thin copper fins (in order to obtain a uniform temperature field inside the bed), an electrical heat source to simulate the tritium decay heat, a system of thermocouples for measuring the temperature field inside the bed, and good thermal insulation of the bed from the external environment. With this design, the tritium accounting method is based on determining the decay heat of tritium by measuring the temperature increase of the insulated storage bed. The experimental procedure consisted of measuring the temperature field inside the bed for a few values of the power injected with the electrical heat source. Data were collected for a few hours and the rate of temperature increase was determined for each value of injected power. A graphical representation of temperature rise versus injected power was obtained. This accounting method for tritium stored as metal tritide is a reliable solution for in-situ tritium accountability in a tritium handling facility. Several improvements can be made to the design of the storage bed in order to improve the measurement accuracy and to obtain a lower detection limit, for instance the use of more accurate thermocouples or special

  8. A Global Model of The Light Curves and Expansion Velocities of Type II-plateau Supernovae

    Science.gov (United States)

    Pejcha, Ondřej; Prieto, Jose L.

    2015-02-01

    We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ~230 velocity and ~6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with standard R_V ∼ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.

  9. A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE

    Energy Technology Data Exchange (ETDEWEB)

    Pejcha, Ondřej [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08540 (United States); Prieto, Jose L., E-mail: pejcha@astro.princeton.edu [Núcleo de Astronomía de la Facultad de Ingeniería, Universidad Diego Portales, Av. Ejército 441 Santiago (Chile)

    2015-02-01

    We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with standard R_V ∼ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.

  10. Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.

    Science.gov (United States)

    Martínez, C A; Khare, K; Rahman, S; Elzo, M A

    2017-10-01

    Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, an area that has recently experienced great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit accommodation of general structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow the incorporation of biological information into the prediction process through its use in constructing the graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
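
    The defining constraint of a GCovGM can be illustrated directly: the covariance between two marker effects is zero whenever the corresponding vertices are not adjacent in G. The sketch below only imposes that zero pattern on a given matrix; it is not the Bayes GCov estimation machinery of the study, and naive truncation like this does not guarantee positive definiteness.

```python
def impose_graph_zeros(cov, edges):
    """Zero out covariances for variable pairs that are non-adjacent in G."""
    p = len(cov)
    adj = {frozenset(e) for e in edges}
    out = [row[:] for row in cov]
    for i in range(p):
        for j in range(p):
            if i != j and frozenset((i, j)) not in adj:
                out[i][j] = 0.0
    return out

# three markers: only markers 0 and 1 are assumed to have correlated effects
sample_cov = [[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.2],
              [0.1, 0.2, 1.0]]
constrained = impose_graph_zeros(sample_cov, edges=[(0, 1)])
print(constrained[0][2], constrained[0][1])  # 0.0 0.4
```

    A proper GCovGM estimator searches over matrices that satisfy this pattern while remaining positive definite, which is what the Bayesian samplers in the study are for.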

  11. A New Evapotranspiration Model Accounting for Advection and Its Validation during SMEX02

    Directory of Open Access Journals (Sweden)

    Yongmin Yang

    2013-01-01

    Full Text Available Based on the crop water stress index (CWSI) concept, a new model was proposed that accounts for advection in estimating evapotranspiration. Both a local-scale evaluation against site observations and a regional-scale evaluation against remotely sensed data from Landsat 7 ETM+ were carried out to assess the performance of this model. The local-scale evaluation indicates that the newly developed model can effectively characterize the daily variations of evapotranspiration, and the predicted results show good agreement with the site observations. For all 6 corn sites, the coefficient of determination (R2) is 0.90 and the root mean square difference (RMSD) is 58.52 W/m2. For all 6 soybean sites, the R2 and RMSD are 0.85 and 49.46 W/m2, respectively. The regional-scale evaluation shows that the model can capture the spatial variations of evapotranspiration at the Landsat scale. Clear spatial patterns were observed at the Landsat scale and are closely related to the dominant land covers, corn and soybean. Furthermore, the surface resistance derived from the instantaneous CWSI was applied to the Penman-Monteith equation to estimate daily evapotranspiration. Overall, the results indicate that this newly developed model is capable of estimating reliable surface heat fluxes from remotely sensed data.
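
    The CWSI scaling at the heart of the model can be illustrated in its textbook form (the paper's advection-accounting formulation adds terms not reproduced here; the temperatures and potential flux below are invented numbers):

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop water stress index: 0 for a well-watered canopy,
    1 for a fully stressed (non-transpiring) one."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

def latent_heat_flux(le_potential, t_canopy, t_wet, t_dry):
    """Scale the potential latent heat flux (W/m2) down by the stress index."""
    return (1.0 - cwsi(t_canopy, t_wet, t_dry)) * le_potential

# canopy at 28 C between wet (25 C) and dry (35 C) reference limits
print(latent_heat_flux(400.0, 28.0, 25.0, 35.0))
```

    The wet and dry reference temperatures bound the observable canopy temperature; remote sensing supplies `t_canopy` per pixel, which is what makes the regional-scale maps in the study possible.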

  12. A model proposal concerning balance scorecard application integrated with resource consumption accounting in enterprise performance management

    Directory of Open Access Journals (Sweden)

    ORHAN ELMACI

    2014-06-01

    Full Text Available The present study investigates the Balanced Scorecard (BSC) model integrated with Resource Consumption Accounting (RCA), which helps to evaluate the enterprise as a matrix structure in all of its parts. It aims to measure how much the tangible and intangible values (assets) of enterprises contribute to those enterprises: in other words, how effectively, actively and efficiently these values (assets) are used. In short, it aims to measure the sustainable competency of enterprises. Because expressing the effect of an enterprise's tangible and intangible values (assets) on performance with mathematical and statistical methods alone is insufficient, the RCA method integrated with the BSC model is based on matrix structure and control models. The effects of all complex factors in the enterprise on performance (productivity and efficiency) are estimated algorithmically with a cause-and-effect diagram. The contribution of matrix structures to reaching the management's functional targets in enterprises that operate in an increasingly competitive market environment is discussed. Thus, in the context of modern management theories, and as a contribution to the BSC approach that is prominent in today's administrative science for enterprises with matrix organizational structures, a multidimensional performance evaluation model, an RCA-integrated BSC model proposal, is presented as a strategic planning and strategic evaluation instrument.

  13. A Model for Urban Environment and Resource Planning Based on Green GDP Accounting System

    Directory of Open Access Journals (Sweden)

    Linyu Xu

    2013-01-01

    Full Text Available The urban environment and resources are currently on a course that is unsustainable in the long run, owing to the excessive human pursuit of economic goals. It is therefore important to develop a model for analysing the relationship between urban economic development and the protection of environmental resources during rapid urbanisation. This paper proposes a model to identify the key factors in urban environment and resource regulation based on a green GDP accounting system consisting of four parts: economy, society, resources and environment. In this model, the analytic hierarchy process (AHP) method and a modified Pearl curve model were combined to allow dynamic evaluation, with a higher green GDP value as the planning target. The model was applied to the environmental and resource planning problem of Wuyishan City, and the results showed that energy use was a key factor influencing urban environment and resource development. Biodiversity and air quality were the most sensitive factors influencing the value of green GDP in the city. According to the analysis, urban environment and resource planning could be improved to promote sustainable development in Wuyishan City.
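
    The AHP step mentioned above reduces to extracting the principal eigenvector of a pairwise-comparison matrix. The sketch below uses power iteration and an invented 3x3 judgment matrix; it is not the paper's actual comparison data.

```python
def ahp_weights(A, iters=100):
    """Principal-eigenvector weights of a pairwise-comparison matrix A,
    computed by power iteration and normalized to sum to 1."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w2 = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w2)
        w = [x / s for x in w2]
    return w

# hypothetical judgments: environment 3x as important as resources,
# 5x as important as economy; resources 2x as important as economy
A = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_weights(A)
print([round(x, 3) for x in w])
```

    In practice AHP also checks the consistency ratio of the judgment matrix before the weights are trusted; that step is omitted here.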

  14. Air quality modeling for accountability research: Operational, dynamic, and diagnostic evaluation

    Science.gov (United States)

    Henneman, Lucas R. F.; Liu, Cong; Hu, Yongtao; Mulholland, James A.; Russell, Armistead G.

    2017-10-01

    Photochemical grid models play a central role in air quality regulatory frameworks, including in air pollution accountability research, which seeks to demonstrate the extent to which regulations causally impacted emissions, air quality, and public health. There is a need, however, to develop and demonstrate appropriate practices for model application and evaluation in an accountability framework. We employ a combination of traditional and novel evaluation techniques to assess four years (2001-02, 2011-12) of simulated pollutant concentrations across a decade of major emissions reductions using the Community Multiscale Air Quality (CMAQ) model. We have grouped our assessments into three categories: operational evaluation investigates how well CMAQ captures absolute concentrations; dynamic evaluation investigates how well CMAQ captures changes in concentrations across the decade of changing emissions; diagnostic evaluation investigates how CMAQ attributes variability in concentrations and sensitivities to emissions between meteorology and emissions, and how well this attribution compares to empirical statistical models. In this application, CMAQ captures O3 and PM2.5 concentrations and their change over the decade in the Eastern United States similarly to past CMAQ applications and in line with model evaluation guidance; however, some PM2.5 species (EC, OC, and sulfate in particular) exhibit high biases in various months. CMAQ-simulated PM2.5 has a high bias in winter months and a low bias in the summer, mainly due to a high bias in OC during the cold months and low biases in OC and sulfate during the summer. Simulated O3 and PM2.5 changes across the decade have normalized mean biases of less than 2.5% and 17%, respectively. Detailed comparisons suggest biased EC emissions, negative wintertime SO42- sensitivities to mobile source emissions, and incomplete capture of OC chemistry in the summer and winter. Photochemical grid model-simulated O3 and PM2.5 responses to emissions and
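
    The evaluation statistics quoted above follow standard definitions; for instance, normalized mean bias can be computed as below (the concentration values are synthetic, chosen only to exercise the formula):

```python
def normalized_mean_bias(modeled, observed):
    """NMB = sum(M - O) / sum(O); multiply by 100 to express as percent."""
    return sum(m - o for m, o in zip(modeled, observed)) / sum(observed)

obs = [40.0, 55.0, 60.0]   # e.g. observed daily PM2.5, ug/m3
mod = [44.0, 56.0, 62.0]   # e.g. simulated values at the same sites
print(round(100.0 * normalized_mean_bias(mod, obs), 1))  # percent
```

    Unlike mean bias, NMB is normalized by the observed total, so it is comparable across pollutants with different concentration scales, which is why it is a common headline metric in operational evaluation.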

  15. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    International Nuclear Information System (INIS)

    Rozanov, V.V.; Dinter, T.; Rozanov, A.V.; Wolanin, A.; Bracher, A.; Burrows, J.P.

    2017-01-01

    SCIATRAN is a comprehensive software package designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18–40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean–atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which accurately takes into account inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean–atmosphere radiative transfer solver presented by Rozanov et al. we have implemented the simulation of rotational Raman scattering, vibrational Raman scattering, and chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how selected radiative transfer problems are solved using the SCIATRAN package. In addition we present selected comparisons of SCIATRAN simulations with published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package, along with a detailed User's Guide, is made available to scientists and students undertaking their own research, typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen (http://www.iup.physik.uni-bremen.de). - Highlights: • A new version of the software package SCIATRAN is presented. • Inelastic scattering in water and atmosphere is implemented in SCIATRAN. • Raman scattering and fluorescence can be included in radiative transfer calculations. • Comparisons to other radiative transfer models show excellent agreement. • Comparisons to observations show consistent results.

  16. Modeling of ethylbenzene dehydrogenation kinetics process taking into account deactivation of catalyst bed of the reactor

    Directory of Open Access Journals (Sweden)

    V. K. Bityukov

    2017-01-01

    Full Text Available The styrene synthesis process occurring in a two-stage continuous adiabatic reactor is a complex chemical engineering system: it is characterized by uncertainty and non-stationarity, and it is subject to permanent uncontrolled disturbances. The task of developing a predictive control system that maintains the concentration of the main product of the dehydrogenation reaction, styrene, within a predetermined range throughout the period of operation is therefore important. This is impossible without a process model based on a revised kinetic scheme that takes into account the drop in activity of the reactor's catalytic bed due to coke formation on its surface. The article justifies and proposes a dependence describing the drop in catalyst bed activity as a function of the operating time of the reactor block, together with an improved model of the chemical reaction kinetics. The resulting mathematical model of the process is a system of ordinary differential equations; it allows calculation of the concentration profiles of the reaction-mixture components as the charge passes through each adiabatic reactor stage, and determination of the contact gas composition at the outlet of the reactor stages throughout the cycle of the catalytic system, taking into account temperature changes and the drop in catalyst bed activity. The decrease in catalyst bed activity is compensated by raising the temperature in the reactor block over the course of operation. The values of the chemical reaction rate constants are estimated, and the concentrations of the main and by-products of the dehydrogenation reactions at the outlet of the reactor plant are calculated and analyzed. Simulation results show that changing the reactor temperature according to an exponential law that accounts for deactivation of the catalyst bed keeps the yield within the range specified by the technological regulations throughout the operating cycle of the reactor block.
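
    The coupling between kinetics and deactivation can be sketched structurally as a rate term multiplied by a time-decaying activity. The single first-order step, the exponential activity law and all constants below are assumptions for illustration; the paper's model is a full ODE system for the whole reaction mixture.

```python
import math

def simulate(k=0.8, kd=0.05, c0=1.0, t_end=5.0, dt=0.001):
    """Euler-integrate dc/dt = -k * a(t) * c, where the catalyst-bed
    activity a(t) = exp(-kd * t) decays with time on stream."""
    c, t = c0, 0.0
    while t < t_end:
        a = math.exp(-kd * t)   # bed activity drops as coke accumulates
        c += -k * a * c * dt
        t += dt
    return c

fresh = simulate(kd=0.0)   # no deactivation
aged = simulate(kd=0.5)    # strong deactivation: less feed converted
print(fresh < aged)
```

    Raising the temperature (and hence k) over time, as the abstract describes, is exactly what offsets the shrinking factor a(t) so that outlet conversion stays in its specified range.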

  17. A margin model to account for respiration-induced tumour motion and its variability

    International Nuclear Information System (INIS)

    Coolens, Catherine; Webb, Steve; Evans, Phil M; Shirato, H; Nishioka, K

    2008-01-01

    In order to reduce the sensitivity of radiotherapy treatments to organ motion, compensation methods are being investigated, such as gating of treatment delivery, tracking of tumour position, and 4D scanning and planning of the treatment. An outstanding problem common to all these methods is the assumption that breathing motion is reproducible throughout the planning and delivery of treatment. This is not a realistic assumption, and it introduces errors. A dynamic internal margin model (DIM) is presented that is designed to follow the tumour trajectory and account for the variability in respiratory motion. The model statistically describes the variation of the breathing cycle over time, i.e. the uncertainty in motion amplitude and phase reproducibility, in a polar coordinate system from which margins can be derived. This allows an additional gating-window parameter to be accounted for in gated treatment delivery, as well as minimizing the area of normal tissue irradiated. The model was illustrated with abdominal motion for a patient with liver cancer and tested with internal 3D lung tumour trajectories. The results confirm that the respiratory phases around exhale are the most reproducible and have the smallest variation in motion amplitude and phase (approximately 2 mm). More importantly, the margin area covering normal tissue is significantly reduced by using trajectory-specific margins (as opposed to conventional margins), as the angular component is by far the largest contributor to the margin area. The statistical approach to margin calculation also offers the possibility of advanced online verification and updating of breathing variation as more data become available.

  18. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez-Ramírez, J. C.; Raga, A. C. [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ap. 70-543, 04510 D.F., México (Mexico); Lora, V. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Cantó, J., E-mail: juan.rodriguez@nucleares.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ap. 70-468, 04510 D. F., México (Mexico)

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.
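
    The two balance equations described above can be written schematically, in notation introduced here for illustration (the authors' exact forms and approximations appear in the paper): n(r) is the gas density, T the nebular temperature, σ_d the dust absorption cross section per nucleus, α_B the case-B recombination coefficient, F(r) the stellar flux, S_* the ionizing photon rate, and f_ion(r) the momentum deposited per unit volume by photoionization.

```latex
% Hydrostatic balance (schematic): the gas-pressure gradient supports the
% radiation pressure deposited by dust absorption and photoionization.
\frac{dP_{\rm gas}}{dr}
  \;=\; \frac{n(r)\,\sigma_d}{c}\,F(r) \;+\; f_{\rm ion}(r),
  \qquad P_{\rm gas}\propto n(r)\,k\,T

% Photon budget (schematic): the ionizing photon rate S_* is consumed by
% recombinations and by dust absorption inside the nebular radius R.
S_{*} \;=\; \int_0^{R} 4\pi r^2\, n^{2}(r)\,\alpha_B \,dr
      \;+\; \int_0^{R} 4\pi r^2\, n(r)\,\sigma_d\,
            \frac{F(r)}{\langle h\nu\rangle}\,dr
```

    The outward radiation force in the first equation is what carves the centrally evacuated density stratification the analytic solution describes, while the second equation fixes the ionized radius once the dust cross section and stellar luminosity are specified.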

  19. FPLUME-1.0: An integral volcanic plume model accounting for ash aggregation

    Science.gov (United States)

    Folch, Arnau; Costa, Antonio; Macedonio, Giovanni

    2016-04-01

    Eruption Source Parameters (ESP) characterizing volcanic eruption plumes are crucial inputs for atmospheric tephra dispersal models, used for hazard assessment and risk mitigation. We present FPLUME-1.0, a steady-state 1D cross-section-averaged eruption column model based on Buoyant Plume Theory (BPT). The model accounts for plume bending by wind, entrainment of ambient moisture, effects of water phase changes, particle fallout and re-entrainment, a new parameterization for the air entrainment coefficients, and a model for wet aggregation of ash particles in the presence of liquid water or ice. When wet aggregation occurs, the model predicts an "effective" grain size distribution depleted in fines with respect to that erupted at the vent. Given a wind profile, the model can be used to determine the column height from the eruption mass flow rate or vice versa. The ultimate goal is to improve ash cloud dispersal forecasts by better constraining the ESP (column height, eruption rate and vertical distribution of mass) and the "effective" particle grain size distribution resulting from any wet aggregation within the plume. As test cases we apply the model to eruptive phase B of the 4 April 1982 El Chichón volcano eruption (México) and to the 6 May 2010 phase of the Eyjafjallajökull eruption (Iceland). The modular structure of the code facilitates the implementation, in future code versions, of more quantitative ash aggregation parameterizations as further observational and experimental data become available for better constraining ash aggregation processes.

  20. Material Protection, Accounting, and Control Technologies (MPACT): Modeling and Simulation Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dunn, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Durbin, Samual [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Durkee, Joe W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); England, Jeff [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, Robert [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Ketusky, Edward [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Li, Shelly [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lindgren, Eric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meier, David [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Miller, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Osburn, Laura Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pereira, Candido [Argonne National Lab. (ANL), Argonne, IL (United States); Rauch, Eric Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Scaglione, John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Scherer, Carolynn P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sprinkle, James K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yoo, Tae-Sic [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-05

    The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy’s (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal. This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility (a distributed test bed) that connects the individual tools being developed at national laboratories and university research establishments is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling. To aid in framing its long-term goal, a modeling and simulation roadmap was developed during FY16 for three major areas of investigation: (1) radiation transport and sensors, (2) process and chemical models, and (3) shock physics and assessments. For each area, current modeling approaches are described, and gaps and needs are identified.

  1. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    Science.gov (United States)

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

    Collecting and analyzing high-frequency emission measurements has become common during the past decade, as significantly more information with respect to formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application using data collected from a diesel bus under real-world driving conditions.
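
    The core idea can be sketched as follows: the transport delay is a smooth function of exhaust flow rate (here simply inverse in flow), characterized by a coefficient pair chosen by least squares. This is an illustration with synthetic signals and a crude grid search, not the authors' smooth transition regression estimator.

```python
import math

def delay(q, c1, c2):
    """Transport delay (s) as a smooth function of exhaust flow rate q:
    residence time shrinks as the flow rate grows."""
    return c1 + c2 / q

def sse(engine, flow, tailpipe, c1, c2, dt=1.0):
    """Sum of squared errors after shifting each tailpipe sample back in
    time by the flow-dependent delay implied by (c1, c2)."""
    err = 0.0
    for t in range(len(tailpipe)):
        shift = int(round(delay(flow[t], c1, c2) / dt))
        if 0 <= t - shift < len(engine):
            err += (tailpipe[t] - engine[t - shift]) ** 2
    return err

# synthetic data generated with a true delay of delay(q) = 1 + 20/q
n = 200
flow = [5.0 + 4.0 * math.sin(0.05 * t) for t in range(n)]
engine = [math.sin(0.1 * t) for t in range(n)]
tailpipe = [engine[max(0, t - int(round(1 + 20 / flow[t])))] for t in range(n)]

# grid-search least squares recovers the generating coefficients
best = min(((sse(engine, flow, tailpipe, c1, c2), c1, c2)
            for c1 in (0, 1, 2) for c2 in (0, 10, 20, 30)))
print(best[1], best[2])
```

    A real estimator would optimize (c1, c2) continuously and jointly with the regression coefficients, but the alignment-by-flow-dependent-shift step shown here is the mechanism the abstract describes.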

  2. Modelling of gas-metal arc welding taking into account metal vapour

    Energy Technology Data Exchange (ETDEWEB)

    Schnick, M; Fuessel, U; Hertel, M; Haessler, M [Institute of Surface and Manufacturing Technology, Technische Universitaet Dresden, D-01062 Dresden (Germany); Spille-Kohoff, A [CFX Berlin Software GmbH, Karl-Marx-Allee 90, 10243 Berlin (Germany); Murphy, A B [CSIRO Materials Science and Engineering, PO Box 218, Lindfield NSW 2070 (Australia)

    2010-11-03

    The most advanced numerical models of gas-metal arc welding (GMAW) neglect vaporization of metal and assume an argon atmosphere for the arc region, as is also common practice for models of gas-tungsten arc welding (GTAW). These models predict temperatures above 20 000 K and a temperature distribution similar to GTAW arcs. However, spectroscopic temperature measurements in GMAW arcs demonstrate much lower arc temperatures. In contrast to measurements of GTAW arcs, they show a central local minimum in the radial temperature distribution. This paper presents a GMAW model that takes into account metal vapour and that is able to predict the local central minimum in the radial distributions of temperature and electric current density. The influence of different values for the net radiative emission coefficient of iron vapour, which vary by up to a factor of one hundred, is examined. It is shown that these net emission coefficients cause differences in the magnitudes, but not in the overall trends, of the radial distributions of temperature and current density. Further, the influence of the metal vaporization rate is investigated. We present evidence that, for higher vaporization rates, the central flow velocity inside the arc is decreased and can even change direction, so that it is directed from the workpiece towards the wire, although the outer plasma flow is still directed towards the workpiece. In support of this finding, we have attempted to reproduce numerically the measurements of Zielinska et al. for spray-transfer mode GMAW, and have obtained reasonable agreement.

  3. Research on mouse model of grade II corneal alkali burn

    Directory of Open Access Journals (Sweden)

    Jun-Qiang Bai

    2016-04-01

Full Text Available AIM: To choose an appropriate concentration of sodium hydroxide (NaOH) solution to establish a stable and consistent grade II corneal alkali burn mouse model. METHODS: The mice (n=60) were randomly divided into four groups of 15 mice each. Corneal alkali burns were induced by placing circular filter paper soaked with NaOH solution on the right central cornea for 30s. The concentrations of the NaOH solutions in groups A, B, C, and D were 0.1 mol/L, 0.15 mol/L, 0.2 mol/L, and 1.0 mol/L respectively. The corneas were then irrigated with 20 mL physiological saline (0.9% NaCl). On day 7 postburn, a slit lamp microscope was used to observe corneal opacity, the corneal epithelial sodium fluorescein staining positive rate, and the incidence of corneal ulcer and corneal neovascularization; pictures of the anterior eyes were also taken. Cirrus spectral domain optical coherence tomography was used to scan the cornea to observe corneal epithelial defects and corneal ulcers. RESULTS: Corneal opacity scores were not significantly different between group A and group B (P=0.097). The incidence of corneal ulcer in group B was significantly higher than that in group A (P=0.035). The incidence of corneal ulcer and the perforation rate in group B were lower than those in group C. Groups C and D developed corneal neovascularization, and its incidence in group D was significantly higher than that in group C (P=0.000). CONCLUSION: Using 0.15 mol/L NaOH can establish a grade II mouse model of corneal alkali burns.

  4. Integrated Approach Model of Risk, Control and Auditing of Accounting Information Systems

    Directory of Open Access Journals (Sweden)

    Claudiu BRANDAS

    2013-01-01

Full Text Available The use of IT in financial and accounting processes is growing fast, and this leads to an increase in research and professional concerns about the risks, control and audit of Accounting Information Systems (AIS). In this context, the risk and control of AIS approach is a central component of processes for IT audit, financial audit and IT Governance. Recent studies in the literature on the concepts of risk, control and auditing of AIS outline two approaches: (1) a professional approach in which we can fit ISA, COBIT, IT Risk, COSO and SOX, and (2) a research-oriented approach in which we emphasize research on continuous auditing and fraud using information technology. Starting from the limits of existing approaches, our study aims to develop and test an Integrated Approach Model of Risk, Control and Auditing of AIS on three cycles of business processes: the purchases cycle, sales cycle and cash cycle, in order to improve the efficiency of IT Governance, as well as ensuring the integrity, reality, accuracy and availability of financial statements.

  5. Equilibrium modeling of mono and binary sorption of Cu(II and Zn(II onto chitosan gel beads

    Directory of Open Access Journals (Sweden)

    Nastaj Józef

    2016-12-01

Full Text Available The objective of this work is an in-depth experimental study of Cu(II) and Zn(II) ion removal on chitosan gel beads from both one- and two-component water solutions at a temperature of 303 K. The optimal process conditions, such as pH value, dose of sorbent and contact time, were determined. Based on the optimal process conditions, equilibrium and kinetic studies were carried out. The maximum sorption capacities were 191.25 mg/g and 142.88 mg/g for Cu(II) and Zn(II) ions respectively, when the sorbent dose was 10 g/L and the pH of the solution was 5.0 for both heavy metal ions. One-component sorption equilibrium data were successfully represented by six of the most useful three-parameter equilibrium models: Langmuir-Freundlich, Redlich-Peterson, Sips, Koble-Corrigan, Hill and Toth. Extended forms of the Langmuir-Freundlich, Koble-Corrigan and Sips models were also well fitted to the two-component equilibrium data obtained for different concentration ratios of Cu(II) and Zn(II) ions (1:1, 1:2, 2:1). Experimental sorption data were described by two kinetic models, of the pseudo-first and pseudo-second order. Furthermore, an attempt was undertaken to explain the mechanisms of the divalent metal ion sorption process on chitosan gel beads.
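The three-parameter isotherms named above can be fitted by nonlinear least squares. A minimal sketch for the Sips model, using synthetic equilibrium data loosely shaped around the reported Cu(II) capacity (all data points and parameter values below are hypothetical, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def sips(C, qm, Ks, n):
    """Sips (Langmuir-Freundlich) isotherm: q = qm*(Ks*C)**n / (1 + (Ks*C)**n)."""
    return qm * (Ks * C) ** n / (1.0 + (Ks * C) ** n)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g), shaped to plateau
# near the ~191 mg/g Cu(II) capacity mentioned in the abstract.
Ce = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)
qe = sips(Ce, 191.25, 0.03, 0.9) + np.array([1.2, -0.8, 0.5, -1.5, 0.9, -0.4, 1.1])

# Nonlinear least-squares fit of the three parameters
popt, _ = curve_fit(sips, Ce, qe, p0=[150.0, 0.01, 1.0], maxfev=10000)
qm_fit, Ks_fit, n_fit = popt
print(f"qm = {qm_fit:.1f} mg/g, Ks = {Ks_fit:.4f} L/mg, n = {n_fit:.2f}")
```

The same pattern applies to the other three-parameter models (Redlich-Peterson, Toth, etc.) by swapping the model function.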

  6. PHYSICS OF ECLIPSING BINARIES. II. TOWARD THE INCREASED MODEL FIDELITY

    Energy Technology Data Exchange (ETDEWEB)

    Prša, A.; Conroy, K. E.; Horvat, M.; Kochoska, A.; Hambleton, K. M. [Villanova University, Dept. of Astrophysics and Planetary Sciences, 800 E Lancaster Avenue, Villanova PA 19085 (United States); Pablo, H. [Université de Montréal, Pavillon Roger-Gaudry, 2900, boul. Édouard-Montpetit Montréal QC H3T 1J4 (Canada); Bloemen, S. [Radboud University Nijmegen, Department of Astrophysics, IMAPP, P.O. Box 9010, 6500 GL, Nijmegen (Netherlands); Giammarco, J. [Eastern University, Dept. of Astronomy and Physics, 1300 Eagle Road, St. Davids, PA 19087 (United States); Degroote, P. [KU Leuven, Instituut voor Sterrenkunde, Celestijnenlaan 200D, B-3001 Heverlee (Belgium)

    2016-12-01

The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures, and luminosities), yet the models are not capable of reproducing observed data well, either because of missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, thus far buried in noise, started showing up routinely in the data, but were not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. In particular, we discuss triangulation as a superior surface discretization algorithm, meshing of rotating single stars, light travel time effects, advanced phase computation, volume conservation in eccentric orbits, and improved computation of local intensity across the stellar surfaces that includes the photon-weighted mode, an enhanced limb darkening treatment, a better reflection treatment, and Doppler boosting. Here we present the concepts on which PHOEBE is built and proofs of concept that demonstrate the increased model fidelity.

  7. Modeling Degradation in Solid Oxide Electrolysis Cells - Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Manohar Motwani

    2011-09-01

Idaho National Laboratory has an ongoing project to generate hydrogen from steam using solid oxide electrolysis cells (SOECs). To accomplish this, technical and degradation issues associated with the SOECs will need to be addressed. This report covers various approaches being pursued to model degradation issues in SOECs. An electrochemical model for degradation of SOECs is presented. The model is based on the concept of local thermodynamic equilibrium in systems otherwise in global thermodynamic non-equilibrium. It is shown that electronic conduction through the electrolyte, however small, must be taken into account when determining the local oxygen chemical potential within the electrolyte. This chemical potential may lie out of bounds in relation to its values at the electrodes in the electrolyzer mode. Under certain conditions, high pressures can develop in the electrolyte just near the oxygen electrode/electrolyte interface, leading to oxygen electrode delamination. These predictions are in accordance with the reported literature on the subject. The development of high pressures may be avoided by introducing some electronic conduction into the electrolyte. By combining equilibrium thermodynamics, non-equilibrium (diffusion) modeling, and first-principles atomic-scale calculations, the degradation mechanisms were investigated and practical recommendations provided on how to inhibit and/or completely mitigate them.

  8. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and of using inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated
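The IPW idea discussed here can be sketched in a small simulation (not the authors' code; all distributions and effect sizes are invented for illustration): treatment is assigned with a known propensity that rises with risk and halves the event probability, and the observed:expected (O:E) calibration ratio is computed naively on the whole set and, for comparison, with inverse-probability weights among the untreated only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

x = rng.normal(size=n)
risk_untreated = 1 / (1 + np.exp(-(x - 1)))    # the model's target: risk WITHOUT treatment
propensity = 1 / (1 + np.exp(-(x - 0.5)))      # higher-risk individuals are treated more often
treated = rng.random(n) < propensity
risk_actual = np.where(treated, 0.5 * risk_untreated, risk_untreated)  # treatment halves risk
event = rng.random(n) < risk_actual

pred = risk_untreated  # a perfectly calibrated model for untreated risk

# Naive O:E on the whole validation set: treatment makes the model appear to overestimate risk
oe_naive = event.mean() / pred.mean()

# IPW: keep untreated individuals, weight each by 1 / P(untreated | x)
w = 1.0 / (1.0 - propensity[~treated])
oe_ipw = np.average(event[~treated], weights=w) / np.average(pred[~treated], weights=w)

print(f"naive O:E = {oe_naive:.3f}, IPW O:E = {oe_ipw:.3f}")
```

In this sketch the naive ratio falls well below 1 (apparent overestimation of risk), while the weighted ratio among the untreated recovers a value near 1, mirroring the behavior the abstract describes when IPW's assumptions hold.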

  9. Accounting for treatment use when validating a prognostic model: a simulation study

    Directory of Open Access Journals (Sweden)

    Romin Pajouheshnia

    2017-07-01

Full Text Available Abstract Background Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. Methods We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and of using inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Results Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. Conclusions When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and

  10. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

Full Text Available Imperfect detection (i.e., failure to detect a species when the species is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability in the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on the time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling the factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large-scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth and to a lesser extent by stream width, and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. These species presented a widespread distribution with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provides a practical framework to incorporate variation in the detection rate in fish distribution models.
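The time-to-first-detection idea can be sketched with a simple parametric form: a site is occupied with probability ψ and, if occupied, the detection time is exponential with rate λ, so a survey of length T either records a detection time t (density ψ·λ·e^(−λt)) or ends with no detection (probability 1−ψ+ψ·e^(−λT)). The sketch below fits this by maximum likelihood on simulated single-visit data; the abstract's actual model is Bayesian with covariates, and all numbers here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, T = 500, 30.0                 # number of sites, maximum sampling time per site
psi_true, lam_true = 0.6, 0.15   # true occupancy probability and detection rate

occupied = rng.random(n) < psi_true
t_detect = rng.exponential(1 / lam_true, size=n)
detected = occupied & (t_detect <= T)
t_obs = np.where(detected, t_detect, T)

def negloglik(theta):
    psi = 1 / (1 + np.exp(-theta[0]))  # occupancy on the logit scale
    lam = np.exp(theta[1])             # detection rate on the log scale
    ll_det = np.log(psi) + np.log(lam) - lam * t_obs   # detected at time t
    ll_non = np.log(1 - psi + psi * np.exp(-lam * T))  # never detected by T
    return -np.sum(np.where(detected, ll_det, ll_non))

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat = 1 / (1 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])
print(f"psi = {psi_hat:.2f} (true {psi_true}), lambda = {lam_hat:.3f} (true {lam_true})")
```

Covariate effects (e.g., depth on λ, stream order on ψ) would enter through link functions on these two parameters, which is essentially what the joint occupancy/survival formulation does.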

  11. A Thermodamage Strength Theoretical Model of Ceramic Materials Taking into Account the Effect of Residual Stress

    Directory of Open Access Journals (Sweden)

    Weiguo Li

    2012-01-01

Full Text Available A thermodamage strength theoretical model taking into account the effect of residual stress was established and applied to each temperature phase, based on a study of the effects of various physical mechanisms on the fracture strength of ultrahigh-temperature ceramics. The effects of SiC particle size, crack size, and SiC particle volume fraction on strength at different temperatures were studied in detail. This study showed that when the flaw size is not large, a bigger SiC particle size results in a greater effect of tensile residual stress in the matrix grains on strength reduction, a prediction that coincides with experimental results; the residual stress and the combined effect of particle size and crack size play important roles in controlling material strength.

  12. A hybrid mode choice model to account for the dynamic effect of inertia over time

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Börjesson, Maria; Bierlaire, Michel

The influence of habits, giving rise to an inertia effect in the choice process, has been intensely debated in the literature. Typically, inertia is accounted for by letting the indirect utility functions of the alternatives of the choice situation at time t depend on the outcome of the choice made...... gathered over a continuous period of time, six weeks, to study both inertia and the influence of habits. The tendency to stick with the same alternative is measured through lagged variables that link the current choice with the previous trip made with the same purpose, mode and time of day. However, the lagged...... effect of the previous trips is not constant but depends on the individual propensity to undertake habitual trips, which is captured by the individual-specific latent variable. The frequency of trips in the previous week is used as an indicator of habitual behavior. The model estimation...

  13. REGRESSION MODEL FOR RISK REPORTING IN FINANCIAL STATEMENTS OF ACCOUNTING SERVICES ENTITIES

    Directory of Open Access Journals (Sweden)

    Mirela NICHITA

    2015-06-01

Full Text Available The purpose of financial reports is to provide useful information to users; the utility of information is defined through its qualitative characteristics (fundamental and enhancing). The financial crisis emphasized the limits of financial reporting, which was unable to warn investors about the risks they were facing. Due to the current changes in the business environment, managers have been highly motivated to rethink and improve the risk governance philosophy, processes and methodologies. The lack of quality, timely data and adequate systems to capture, report and measure the right information across the organization is a fundamental challenge for implementing and sustaining all aspects of effective risk management. Starting with the 1980s, investors have been more interested in narratives (the Notes to financial statements) than in the primary reports (financial position and performance). The research applies a regression model for the assessment of risk reporting by professional (accounting and taxation) services firms for major companies from Romania during the period 2009-2013.

  14. Hydrodynamic modeling of urban flooding taking into account detailed data about city infrastructure

    Science.gov (United States)

    Belikov, Vitaly; Norin, Sergey; Aleksyuk, Andrey; Krylenko, Inna; Borisova, Natalya; Rumyantsev, Alexey

    2017-04-01

Flood waves moving across urban areas have specific features. Linear objects of infrastructure (such as embankments, roads and dams) can change the direction of flow or block the water movement. On the contrary, paved avenues and wide streets in cities contribute to the concentration of flood waters. Buildings create an additional resistance to the movement of water, which depends on the urban density and the type of constructions; this effect cannot be completely described by Manning's resistance law. In addition, the part of the earth's surface occupied by buildings is excluded from the flooded area, which results in a substantial (relative to undeveloped areas) increase in the depth of flooding, especially under unsteady flow conditions. An approach to the numerical simulation of urban-area flooding that consists in directly allocating all buildings and structures on the computational grid is proposed. This can be done almost fully automatically with modern software. The real geometry of all objects of infrastructure can be taken into account on the basis of highly detailed digital maps and satellite images. The calculations are based on the two-dimensional Saint-Venant equations on irregular adaptive computational meshes, which can contain millions of cells and take into account tens of thousands of buildings and other objects of infrastructure. Flood maps obtained as a result of the modeling are the basis for damage and risk assessment for urban areas. The main advantages of the developed method are high-precision calculations, realistic modeling results and appropriate graphical display of the flood dynamics and dam-break wave propagation in urban areas. Verification of this method has been done on experimental data and real event simulations, including the catastrophic flooding of the city of Krymsk in 2012.

  15. Accounting for seasonal isotopic patterns of forest canopy intercepted precipitation in streamflow modeling

    Science.gov (United States)

    Stockinger, Michael P.; Lücke, Andreas; Vereecken, Harry; Bogena, Heye R.

    2017-12-01

Forest canopy interception alters the isotopic tracer signal of precipitation, leading to significant isotopic differences between open precipitation (δOP) and throughfall (δTF). This has important consequences for the tracer-based modeling of streamwater transit times. Some studies have suggested applying a simple static correction to δOP by uniformly increasing it, because δTF is rarely available for hydrological modeling. Here, we used data from a 38.5 ha spruce-forested headwater catchment where three years of δOP and δTF were available to develop a data-driven method that accounts for canopy effects on δOP. Changes in isotopic composition, defined as the difference δTF-δOP, varied seasonally, with higher values during winter and lower values during summer. We used this pattern to derive a corrected δOP time series and analyzed the impact of using (1) δOP, (2) reference throughfall data (δTFref) and (3) the corrected δOP time series (δOPSine) in estimating the fraction of young water (Fyw), i.e., the percentage of streamflow younger than two to three months. We found that Fyw derived from δOPSine came closer to δTFref in comparison to δOP. Thus, a seasonally varying correction for δOP can successfully be used to infer δTF where it is not available, and is superior to the method of using a fixed correction factor. Seasonal isotopic enrichment patterns should be accounted for when estimating Fyw and, more generally, in catchment hydrology studies using other tracer methods, to reduce uncertainty.
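A seasonally varying correction of the kind described can be sketched as fitting a single annual harmonic to the throughfall-minus-open-precipitation difference and adding the fitted curve to the δOP series. The data below are synthetic (winter-maximum difference, arbitrary δ values), not the catchment's records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily series: day-of-year and the δTF-δOP difference,
# higher in winter and lower in summer as described in the abstract.
doy = np.arange(1, 366)
true_diff = 0.8 + 0.5 * np.cos(2 * np.pi * doy / 365.25)   # permil, winter maximum
obs_diff = true_diff + rng.normal(0, 0.2, size=doy.size)

# Least-squares fit of one annual harmonic: diff ~ c + a*cos + b*sin
X = np.column_stack([np.ones_like(doy, dtype=float),
                     np.cos(2 * np.pi * doy / 365.25),
                     np.sin(2 * np.pi * doy / 365.25)])
coef, *_ = np.linalg.lstsq(X, obs_diff, rcond=None)

# Apply the fitted seasonal correction to a (hypothetical) δOP series
delta_op = rng.normal(-9.0, 1.5, size=doy.size)
delta_op_sine = delta_op + X @ coef   # the corrected series (analogous to δOPSine)
print(f"fitted offset {coef[0]:.2f} permil, amplitude {np.hypot(coef[1], coef[2]):.2f} permil")
```

The cos/sin pair is a standard trick for fitting a sinusoid of unknown phase with ordinary linear least squares.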

  16. Context-Specific Proportion Congruency Effects: An Episodic Learning Account and Computational Model.

    Science.gov (United States)

    Schmidt, James R

    2016-01-01

    In the Stroop task, participants identify the print color of color words. The congruency effect is the observation that response times and errors are increased when the word and color are incongruent (e.g., the word "red" in green ink) relative to when they are congruent (e.g., "red" in red). The proportion congruent (PC) effect is the finding that congruency effects are reduced when trials are mostly incongruent rather than mostly congruent. This PC effect can be context-specific. For instance, if trials are mostly incongruent when presented in one location and mostly congruent when presented in another location, the congruency effect is smaller for the former location. Typically, PC effects are interpreted in terms of strategic control of attention in response to conflict, termed conflict adaptation or conflict monitoring. In the present manuscript, however, an episodic learning account is presented for context-specific proportion congruent (CSPC) effects. In particular, it is argued that context-specific contingency learning can explain part of the effect, and context-specific rhythmic responding can explain the rest. Both contingency-based and temporal-based learning can parsimoniously be conceptualized within an episodic learning framework. An adaptation of the Parallel Episodic Processing model is presented. This model successfully simulates CSPC effects, both for contingency-biased and contingency-unbiased (transfer) items. The same fixed-parameter model can explain a range of other findings from the learning, timing, binding, practice, and attentional control domains.

  17. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    Science.gov (United States)

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both. © 2015 Society for Risk Analysis.
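The projection step can be sketched as a Monte Carlo draw of smoking status conditional on case status, repeated many times so that each projection can feed the downstream model and the spread of resulting estimates reflects imputation uncertainty. The prevalences here are invented for illustration; only the cohort and case counts come from the abstract:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical smoking prevalences from a nested case-control study
p_smoker_case, p_smoker_noncase = 0.85, 0.60

# Cohort with known case status but unknown smoking habits
n_cohort, n_cases = 35_084, 461
is_case = np.zeros(n_cohort, dtype=bool)
is_case[:n_cases] = True

def project_smoking(rng):
    """One Monte Carlo projection: draw smoking status conditional on case status."""
    p = np.where(is_case, p_smoker_case, p_smoker_noncase)
    return rng.random(n_cohort) < p

# Repeat the projection; in the study, each projection is followed by a full
# model fit, and the distribution of fitted parameters is examined.
prevalences = [project_smoking(rng)[is_case].mean() for _ in range(200)]
print(f"smoking prevalence among cases: {np.mean(prevalences):.3f} "
      f"+/- {np.std(prevalences):.3f} across 200 projections")
```

In the actual study the resampling is stratified on more than case status; this sketch only illustrates the mechanics of projecting case-control information onto a cohort.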

  18. The Models of Distance Forms of Learning in National Academy of Statistics, Accounting and Audit

    Directory of Open Access Journals (Sweden)

    L. V.

    2017-03-01

Full Text Available Finding solutions to the problems faced by the Ukrainian education system requires an adequate organizing structure for the education system, enabling the transition to the principle of life-long education. The best option for this is a distance learning system (DLS), which leading Ukrainian universities consider a high-performance information technology in modern education, envisaged by the National Informatization Program, in line with the goals of reforming higher education in Ukraine in the context of joining the European educational area. The experience of implementing the DLS “Prometheus” and Moodle and the main directions of distance learning development at the National Academy of Statistics, Accounting and Audit (NASAA) are analyzed and summed up. The emphasis is placed on the need to improve the skills of teachers through open distance courses and the gradual preparation of students for the learning process in the new conditions. The structure of distance courses for different forms of education (full-time, part-time, and blended) is built. The forms of learning (face-to-face driver, rotation model, flex model, etc.) are analyzed. A dynamic version of implementing blended learning models in NASAA using the DLS “Prometheus” and Moodle is presented. It is concluded that the experience of NASAA shows that the blended form of distance learning based on the Moodle platform is the most adequate to the requirements of Ukraine's development within the framework of European education.

  19. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    Science.gov (United States)

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-03-21

    Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range, color, filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance, gaussian, signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased 10-15 times the estimate of the internal noise level for a 3-fold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model accounts for both the appearance and discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, the results have implications for comparing perceptual and neural response functions.
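Under the equal-variance Gaussian model that underlies MLDS, a perceptual scale expressed in units of internal noise directly predicts paired-comparison performance: the probability of judging stimulus j stronger than stimulus i is Φ((ψj−ψi)/(σ√2)). A minimal sketch with hypothetical scale values (the actual WCE scale values are not reproduced here):

```python
import math

def p_choose_stronger(psi_i, psi_j, sigma=1.0):
    """Equal-variance Gaussian prediction: probability that stimulus j is judged
    stronger than stimulus i in a paired comparison, Phi((psi_j - psi_i)/(sigma*sqrt(2)))."""
    z = (psi_j - psi_i) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF via erf

# Hypothetical MLDS scale values (in units of internal noise) for increasing
# luminance ratios of the WCE-inducing contour
scale = [0.0, 2.5, 5.0, 8.0, 12.0]

for i, j in [(0, 1), (1, 2), (3, 4)]:
    print(f"P(level {j} judged stronger than level {i}) = "
          f"{p_choose_stronger(scale[i], scale[j]):.3f}")
```

This is the sense in which a single signal detection model accounts for both appearance (the scale) and discrimination (the predicted choice probabilities).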

  20. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forested trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under the current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the actual and future modelled probability of presence. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  1. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based

  2. The cyclicality of loan loss provisions under three different accounting models: the United Kingdom, Spain, and Brazil

    Directory of Open Access Journals (Sweden)

    Antônio Maria Henri Beyle de Araújo

    2017-11-01

Full Text Available ABSTRACT A controversy involving loan loss provisions in banks concerns their relationship with the business cycle. While the international accounting standard for recognizing provisions (the incurred loss model) would presumably be pro-cyclical, accentuating the effects of the current economic cycle, an alternative model, the expected loss model, has countercyclical characteristics, acting as a buffer against economic imbalances caused by expansionary or contractionary phases in the economy. In Brazil, a mixed accounting model exists, whose behavior is not known to be pro-cyclical or countercyclical. The aim of this research is to analyze the behavior of these accounting models in relation to the business cycle, using an econometric model consisting of financial and macroeconomic variables. The study allowed us to identify the impact of credit risk behavior, earnings management, capital management, Gross Domestic Product (GDP) behavior, and the behavior of the unemployment rate on provisions in countries that use different accounting models. Data from commercial banks in the United Kingdom (incurred loss), Spain (expected loss), and Brazil (mixed model) were used, covering the period from 2001 to 2012. Despite the accounting models of the three countries being formed by very different rules regarding possible effects on the business cycle, the results revealed a pro-cyclical behavior of provisions in each country, indicating that when GDP grows, provisions tend to fall, and vice versa. The results also revealed other factors influencing the behavior of loan loss provisions, such as earnings management.
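Pro-cyclicality in the sense used here (provisions fall when GDP grows) reduces to the sign of a regression coefficient. A toy OLS sketch on simulated bank-year data, not the study's panel, estimator, or control variables:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bank-year panel: provisions (as % of loans) fall when GDP grows
n = 600
gdp_growth = rng.normal(2.0, 2.5, size=n)                    # percent per year
provisions = 1.8 - 0.15 * gdp_growth + rng.normal(0, 0.3, n)

# OLS of provisions on GDP growth with an intercept
X = np.column_stack([np.ones(n), gdp_growth])
beta, *_ = np.linalg.lstsq(X, provisions, rcond=None)
print(f"GDP-growth coefficient = {beta[1]:.3f} "
      f"(negative => pro-cyclical provisioning)")
```

The study's actual specification is a multivariate panel model with credit risk, earnings management, capital management and unemployment controls; this sketch only shows how the sign test on the GDP term captures the pro-cyclical/countercyclical distinction.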

  3. On unified field theories, dynamical torsion and geometrical models: II

    International Nuclear Information System (INIS)

    Cirilo-Lombardo, D.J.

    2011-01-01

    We analyze in this letter the same space-time structure as that presented in our previous reference (Part. Nucl. Lett. 2010. V. 7, No. 5. P. 299-307), but now relaxing the a priori condition that a potential exists for the torsion. Through exact cosmological solutions of this model, where the geometry is Euclidean R×O(3) ∼ R×SU(2), we show the relation between the space-time geometry and the structure of the gauge group. Precisely this relation is directly connected with the relation between the spin and torsion fields. The solution of this model is explicitly compared with our previous ones, and we find that: i) the torsion is not identified directly with the Yang-Mills field strength; ii) there exists a compatibility condition connected with the identification of the gauge group with the geometric structure of the space-time: this fact leads to the identification of derivatives of the scale factor a with the components of the torsion, in order to allow the Hosoya-Ogura ansatz (namely, the alignment of the isospin with the frame geometry of the space-time); and iii) of the two possible structures of the torsion, the 'tratorial' form (the only one studied here) forbids wormhole configurations, leading only to cosmological instanton space-times in eternal expansion.

  4. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    DEFF Research Database (Denmark)

    Branlard, Emmanuel; Gaunaa, Mac; Machefaux, Ewan

    2014-01-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum(BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw-models. The model and its implementation are presented. Data...

  5. Dimensional and hierarchical models of depression using the Beck Depression Inventory-II in an Arab college student sample

    Directory of Open Access Journals (Sweden)

    Ohaeri Jude U

    2010-07-01

    Full Text Available Abstract Background An understanding of depressive symptomatology from the perspective of confirmatory factor analysis (CFA) could facilitate valid and interpretable comparisons across cultures. The objectives of the study were: (i) using the responses of a sample of Arab college students to the Beck Depression Inventory (BDI-II) in CFA, to compare the "goodness of fit" indices of the original dimensional three- and two-factor first-order models, and their modifications, with the corresponding hierarchical models (i.e., higher-order and bifactor models); (ii) to assess the psychometric characteristics of the BDI-II, including convergent/discriminant validity with the Hopkins Symptom Checklist (HSCL-25). Method Participants (N = 624) were Kuwaiti national college students, who completed the questionnaires in class. CFA was done with AMOS, version 16. Eleven models were compared using eight fit indices. Results In CFA, all the models met most fit criteria. While the higher-order model did not provide improved fit over the dimensional first-order factor models, the bifactor model (BFM) had the best fit indices (CMIN/DF = 1.73; GFI = 0.96; RMSEA = 0.034). All regression weights of the dimensional models were significantly different from zero. Conclusion The broadly adequate fit of the various models indicates that they have some merit and implies that the relationship between the domains of depression probably contains hierarchical and dimensional elements. The bifactor model is emerging as the best way to account for the clinical heterogeneity of depression. The psychometric characteristics of the BDI-II lend support to our CFA results.

  6. A mass-density model can account for the size-weight illusion

    Science.gov (United States)

    Bergmann Tiest, Wouter M.; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion, and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: one estimate derived from the object's mass, and the other from the object's density, with the estimates weighted by their relative reliabilities. While information about mass can be perceived directly, information about density will in some cases first have to be derived from mass and volume. However, according to our model, at the crucial perceptual level heaviness judgments will be biased by the object's density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: objects of the same density were perceived as more similar, and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced-choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness
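The core of the model above is classic reliability-weighted cue combination. A minimal sketch, with hypothetical numbers, and simplified by assuming independent (rather than correlated) noise between the two estimates:

```python
def combine_estimates(mu_mass, var_mass, mu_density, var_density):
    """Reliability-weighted average of two heaviness estimates.

    Maximum-likelihood cue combination: each estimate is weighted by its
    inverse variance (its reliability). The paper allows correlated noise
    between the estimates; this sketch assumes independence for brevity.
    """
    w_mass = (1.0 / var_mass) / (1.0 / var_mass + 1.0 / var_density)
    mu = w_mass * mu_mass + (1.0 - w_mass) * mu_density
    var = 1.0 / (1.0 / var_mass + 1.0 / var_density)
    return mu, var, 1.0 - w_mass

# Hypothetical numbers: the same mass-based estimate, a higher density-based
# estimate, and improving volume information (smaller var_density) pulling
# the combined heaviness judgment toward the density estimate.
blurry = combine_estimates(mu_mass=1.0, var_mass=0.1, mu_density=1.5, var_density=0.4)
clear = combine_estimates(mu_mass=1.0, var_mass=0.1, mu_density=1.5, var_density=0.05)
print(blurry[0], clear[0])  # the clearer the volume cue, the larger the density bias
```

This reproduces the qualitative prediction tested in Experiments 1-3: higher-quality volume information increases the density weight and hence the illusion strength.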

  7. Modeling the dynamic behavior of railway track taking into account the occurrence of defects in the system wheel-rail

    OpenAIRE

    Loktev Alexey; Sychev Vyacheslav; Gluzberg Boris; Gridasova Ekaterina

    2017-01-01

    This paper investigates the influence of wheel defects on the development of rail defects, up to a state where prompt rail replacement becomes necessary, taking into account different models of the dynamic contact between a wheel and a rail: in particular, the quasistatic Hertz model, the linear elastic model, and the elastoplastic Aleksandrov-Kadomtsev model. Based on the model of the wheel-rail contact, the maximum stresses are determined which take place in the rail in the presence of wheel de...

  8. Historical Account to the State of the Art in Debris Flow Modeling

    Science.gov (United States)

    Pudasaini, Shiva P.

    2013-04-01

    In this contribution, I present a historical account of debris flow modelling leading to the state of the art in simulations and applications. A generalized two-phase model is presented that unifies existing avalanche and debris flow theories. The new model (Pudasaini, 2012) covers both the single-phase and two-phase scenarios and includes many essential and observable physical phenomena. In this model, the solid-phase stress is closed by Mohr-Coulomb plasticity, while the fluid stress is modeled as a non-Newtonian viscous stress that is enhanced by the solid-volume-fraction gradient. A generalized interfacial momentum transfer includes viscous drag, buoyancy and virtual mass forces, and a new generalized drag force is introduced to cover both solid-like and fluid-like drags. Strong couplings between solid and fluid momentum transfer are observed. The two-phase model is further extended to describe the dynamics of rock-ice avalanches with new mechanical models. This model explains dynamic strength weakening and includes internal fluidization, basal lubrication, and exchanges of mass and momentum. The advantages of the two-phase model over classical (effectively single-phase) models are discussed. Advection and diffusion of the fluid through the solid are associated with non-linear fluxes. Several exact solutions are constructed, including the non-linear advection-diffusion of fluid, kinematic waves of debris flow front and deposition, phase-wave speeds, and velocity distribution through the flow depth and through the channel length. The new model is employed to study two-phase subaerial and submarine debris flows, the tsunami generated by the debris impact at lakes/oceans, and rock-ice avalanches. Simulation results show that buoyancy enhances flow mobility. The virtual mass force alters flow dynamics by increasing the kinetic energy of the fluid. Newtonian viscous stress substantially reduces flow deformation, whereas non-Newtonian viscous stress may change the

  9. ACCOUNTING HARMONIZATION AND HISTORICAL COST ACCOUNTING

    OpenAIRE

    Valentin Gabriel CRISTEA

    2017-01-01

    There is a huge interest in accounting harmonization and historical cost accounting, and in what they offer us. In this article, different valuation models are discussed. Although one notices the movement from historical cost accounting to fair value accounting, each one has its advantages.

  10. ACCOUNTING HARMONIZATION AND HISTORICAL COST ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Valentin Gabriel CRISTEA

    2017-05-01

    Full Text Available There is a huge interest in accounting harmonization and historical cost accounting, and in what they offer us. In this article, different valuation models are discussed. Although one notices the movement from historical cost accounting to fair value accounting, each one has its advantages.

  11. Development of the Mathematical Model of Diesel Fuel Catalytic Dewaxing Process Taking into Account Factors of Nonstationarity

    Directory of Open Access Journals (Sweden)

    Frantsina Evgeniya

    2016-01-01

    Full Text Available The paper describes the results of mathematical modelling of the diesel fuel catalytic dewaxing process, performed taking into account the factors of process nonstationarity driven by changes in process technological parameters, feedstock composition, and catalyst deactivation. The error of hydrocarbon content calculation using the developed model does not exceed 1.6 wt %. This makes it possible to apply the model to the optimization and forecasting problems that occur in catalytic systems under industrial conditions. Model calculations showed that the temperature in the dewaxing reactor would be 19 °C lower than the actual value if the catalyst did not deactivate, and that the degree of catalyst deactivation reaches 32%.

  12. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    Science.gov (United States)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied with different spatial resolutions, including 1° to 5° grids as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to fewer errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) across all cases with respect to groundwater in-situ measurements. Validating the assimilated results against groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS

  13. Short-run analysis of fiscal policy and the current account in a finite horizon model

    OpenAIRE

    Heng-fu Zou

    1995-01-01

    This paper utilizes a technique developed by Judd to quantify the short-run effects of fiscal policies and income shocks on the current account in a small open economy. It is found that: (1) a future increase in government spending improves the short-run current account; (2) a future tax increase worsens the short-run current account; (3) a present increase in government spending worsens the short-run current account dollar for dollar, while a present increase in income improves the cu...

  14. A near-real-time material accountancy model and its preliminary demonstration in the Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ikawa, K.; Ihara, H.; Nishimura, H.; Tsutsumi, M.; Sawahata, T.

    1983-01-01

    The study of a near-real-time (n.r.t.) material accountancy system as applied to small or medium-sized spent fuel reprocessing facilities has been carried out since 1978 under the TASTEX programme. In this study, a model of the n.r.t. accountancy system, called the ten-day-detection-time model, was developed and demonstrated in an actual operating plant. The programme closed in May 1981, but the study has been extended. The effectiveness of the proposed n.r.t. accountancy model was evaluated by means of simulation techniques. The results showed that weekly material balances covering the entire process MBA could provide sufficient information to satisfy the IAEA guidelines for small or medium-sized facilities. The applicability of the model to the actual plant has been evaluated by a series of field tests covering four campaigns. In addition to the material accountancy data, many valuable operational data have been obtained, for example on additional locations for an in-process inventory and the time needed to take one. A CUMUF (cumulative MUF) chart of the resulting MUF data from the C-1 and C-2 campaigns clearly showed that there had been a measurement bias across the process MBA. This chart gave a dramatic picture of the power of the n.r.t. accountancy concept by revealing the nature of this bias, which was not clearly shown in the conventional material accountancy data. (author)
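A CUMUF chart of the kind described above is just the running sum of per-balance MUF values: with unbiased measurements it wanders around zero, while a constant measurement bias shows up as a straight-line trend whose slope estimates the per-period bias. A sketch with purely illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical weekly material balances: a constant 0.4 kg measurement bias
# plus random measurement noise. Individually the MUF values look unremarkable;
# cumulated, the bias appears as a clear linear trend.
n_balances = 30
bias = 0.4                                       # per-period bias (kg), hypothetical
muf = bias + rng.normal(0.0, 0.2, n_balances)    # weekly MUF values
cumuf = np.cumsum(muf)                           # the CUMUF chart ordinate

periods = np.arange(1, n_balances + 1)
slope = np.polyfit(periods, cumuf, 1)[0]         # trend of the CUMUF chart
print(f"estimated per-period bias: {slope:.2f} kg")
```

This is the visual effect noted for the C-1 and C-2 campaigns: the trend is obvious in the cumulative chart even when each individual balance stays within its uncertainty.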

  15. A mathematical multiscale model of bone remodeling, accounting for pore space-specific mechanosensation.

    Science.gov (United States)

    Pastrama, Maria-Ioana; Scheiner, Stefan; Pivonka, Peter; Hellmich, Christian

    2018-02-01

    While bone tissue is a hierarchically organized material, mathematical formulations of bone remodeling are often defined on the level of a millimeter-sized representative volume element (RVE), "smeared" over all types of bone microstructures seen at lower observation scales. Thus, there is no explicit consideration of the fact that the biological cells and biochemical factors driving bone remodeling are actually located in differently sized pore spaces: active osteoblasts and osteoclasts can be found in the vascular pores, whereas the lacunar pores host osteocytes - bone cells originating from former osteoblasts which were then "buried" in newly deposited extracellular bone matrix. We here propose a mathematical description which considers the size and shape of the pore spaces where the biological and biochemical events take place. In particular, a previously published systems biology formulation, accounting for biochemical regulatory mechanisms such as the RANK-RANKL-OPG pathway, is cast into a multiscale framework coupled to a poromicromechanical model. The latter gives access to the vascular and lacunar pore pressures arising from macroscopic loading. Extensive experimental data on the biological consequences of this loading strongly suggest that the aforementioned pore pressures, together with the loading frequency, are essential drivers of bone remodeling. The novel approach presented here allows for satisfactory simulation of the evolution of bone tissue under various loading conditions and for different species, including scenarios such as mechanical dis- and overuse of murine and human bone, or in osteocyte-free bone. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Modeling Occupancy of Hosts by Mistletoe Seeds after Accounting for Imperfect Detectability

    Science.gov (United States)

    Fadini, Rodrigo F.; Cintra, Renato

    2015-01-01

    The detection of an organism in a given site is widely used as a state variable in many metapopulation and epidemiological studies. However, failure to detect the species does not necessarily mean that it is absent. Assessing detectability is important for occupancy (presence-absence) surveys, and identifying the factors reducing detectability may help improve survey precision and efficiency. A method was used to estimate the occupancy status of host trees colonized by mistletoe seeds of Psittacanthus plagiophyllus as a function of host covariates: host size and the presence of mistletoe infections on the same or on the nearest neighboring host (the cashew tree Anacardium occidentale). The technique also evaluated the effect of taking detectability into account when estimating host occupancy by mistletoe seeds. Individual host trees were surveyed for the presence of mistletoe seeds with the aid of two or three observers to estimate detectability and occupancy. Detectability was, on average, 17% higher in focal-host trees with infected neighbors, while it decreased by about 23 to 50% from the smallest to the largest hosts. The presence of mistletoe plants in the sample tree had a negligible effect on detectability. Failure to detect hosts as occupied decreased occupancy estimates by 2.5% on average, with a maximum of 10% for large and isolated hosts. The method presented in this study has potential for use in metapopulation studies of mistletoes, especially those focusing on the seed stage, and for improving the accuracy of the occupancy estimates often used in metapopulation dynamics of tree-dwelling plants in general. PMID:25973754
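The correction underlying such occupancy models can be sketched in its simplest closed form. Assuming n independent observers who each detect an occupied host with probability p (a strong simplification of the covariate-dependent detectability estimated in the study, with all numbers hypothetical):

```python
def corrected_occupancy(naive_occupancy, p_detect, n_observers):
    """Correct a naive presence-absence occupancy estimate for imperfect
    detection. An occupied host is missed by all n observers with
    probability (1 - p)**n, so the naive estimate understates occupancy
    by the factor 1 - (1 - p)**n.
    """
    p_detected_at_least_once = 1.0 - (1.0 - p_detect) ** n_observers
    return naive_occupancy / p_detected_at_least_once

# Hypothetical numbers: 60% of hosts recorded as occupied by two observers
# who each detect mistletoe seeds with probability 0.8.
naive = 0.60
corrected = corrected_occupancy(naive, p_detect=0.8, n_observers=2)
print(f"corrected occupancy: {corrected:.3f}")
```

The study's pattern follows directly: when p falls (large hosts, no infected neighbors), the gap between naive and corrected occupancy widens, which is why the largest correction (10%) occurred for large, isolated hosts.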

  17. Associative account of self-cognition: extended forward model and multi-layer structure

    Directory of Open Access Journals (Sweden)

    Motoaki eSugiura

    2013-08-01

    Full Text Available The neural correlates of self identified by neuroimaging studies differ depending on which aspects of self are addressed. Here, three categories of self are proposed based on neuroimaging findings and an evaluation of the likely underlying cognitive processes. The physical self, representing self-agency of action, body ownership, and bodily self-recognition, is supported by the sensory and motor association cortices located primarily in the right hemisphere. The interpersonal self, representing the attention or intentions of others directed at the self, is supported by several amodal association cortices in the dorsomedial frontal and lateral posterior cortices. The social self, representing the self as a collection of context-dependent social values, is supported by the ventral aspect of the medial prefrontal cortex and the posterior cingulate cortex. Despite differences in the underlying cognitive processes and neural substrates, all three categories of self are likely to share the computational characteristics of the forward model, which is underpinned by internal schema or learned associations between one’s behavioral output and the consequential input. Additionally, these three categories exist within a hierarchical layer structure based on developmental processes that updates the schema through the attribution of prediction error. In this account, most of the association cortices critically contribute to some aspect of the self through associative learning while the primary regions involved shift from the lateral to the medial cortices in a sequence from the physical to the interpersonal to the social self.

  18. Taking into account hydrological modelling uncertainty in Mediterranean flash-floods forecasting

    Science.gov (United States)

    Edouard, Simon; Béatrice, Vincendon; Véronique, Ducrocq

    2015-04-01

    Mediterranean intense weather events often lead to devastating flash-floods (FF). Increasing the lead time of FF forecasts would make it possible to better anticipate their catastrophic consequences. These events are one part of the Mediterranean hydrological cycle. HyMeX (HYdrological cycle in the Mediterranean EXperiment) aims at a better understanding and quantification of the hydrological cycle and related processes in the Mediterranean. In order to gather data, measurement campaigns were conducted. The first special observing period (SOP1) of these campaigns served as a test-bed for a real-time hydrological ensemble prediction system (HEPS) dedicated to FF forecasting. It produced an ensemble of quantitative discharge forecasts (QDF) using the ISBA-TOP system. ISBA-TOP is a coupling between the surface scheme ISBA and a version of TOPMODEL dedicated to fast-responding Mediterranean rivers. ISBA-TOP was driven with several quantitative precipitation forecast (QPF) ensembles based on the AROME atmospheric convection-permitting model. This made it possible to take into account the uncertainty that affects QPF and propagates up to the QDF. This uncertainty is major for discharge forecasting, especially in the case of Mediterranean flash-floods. But other sources of uncertainty need to be sampled in HEPS systems. One of them is inherent to the hydrological modelling. The ISBA-TOP coupled system has been improved since the initial version that was used, for instance, during the HyMeX SOP1. The initial ISBA-TOP consisted of coupling a TOPMODEL approach with ISBA-3L, which represented the soil stratification with 3 layers. The new version consists of coupling the same TOPMODEL approach with a version of ISBA where more than ten layers describe the soil vertical

  19. Romanian accounting policies between international accounting convergence and corporate governance regulation

    Directory of Open Access Journals (Sweden)

    Elena Dobre

    2012-06-01

    Full Text Available Our paper aims to look at the impact of accounting on Romanian corporate governance systems. The purpose is to discover research leads at the intersection of corporate governance and financial accounting. The hypothesis is that corporate governance is influenced by accounting policies monitored by internal control. The empirical study focuses on several points: (i) concepts and specific terms; (ii) elements to be considered in establishing accounting policies; (iii) change and remodelling of accounting policies; (iv) the influence of enterprise accounting policies on indicator levels. We present the role of accounting policies in generating future economic benefits and the intricacy of accounting valuation. We conclude on the configuration and modelling of accounting policies in terms of business engineering.

  20. Design of a Competency-Based Assessment Model in the Field of Accounting

    Science.gov (United States)

    Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús

    2012-01-01

    This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…

  1. Pharmacokinetic Modeling of Manganese III. Physiological Approaches Accounting for Background and Tracer Kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Teeguarden, Justin G.; Gearhart, Jeffrey; Clewell, III, H. J.; Covington, Tammie R.; Nong, Andy; Anderson, Melvin E.

    2007-01-01

    assessments (Dixit et al., 2003). With most exogenous compounds, there is often no background exposure, and body concentrations are not under active control from homeostatic processes as occurs with essential nutrients. Any complete Mn PBPK model would include the homeostatic regulation of Mn as an essential nutrient and the additional exposure routes by inhalation. Two companion papers discuss the kinetic complexities of the quantitative dose-dependent alterations in hepatic and intestinal processes that control uptake and elimination of Mn (Teeguarden et al., 2006a, b). Radioactive 54Mn has been used to investigate the behavior of the more common 55Mn isotope in the body, because the distribution and elimination of tracer doses reflect the overall distributional characteristics of Mn. In this paper, we take the first steps in developing a multi-route PBPK model for Mn. Here we develop a PBPK model to account for tissue concentrations and tracer kinetics of Mn under normal dietary intake. This model for normal levels of Mn will serve as the starting point for more complete model descriptions that include dose-dependencies in both oral uptake and biliary excretion. Material and Methods Experimental Data Two studies using 54Mn tracer were employed in model development (Furchner et al. 1966; Wieczorek and Oberdorster 1989). In Furchner et al. (1966), male Sprague-Dawley rats received an ip injection of carrier-free 54MnCl2 while maintained on standard rodent feed containing ~45 ppm Mn. Tissue radioactivity of 54Mn was measured by liquid scintillation counting between post-injection days 1 and 89 and reported as percent of administered dose per kg tissue. 54Mn time courses were reported for liver, kidney, bone, brain, muscle, blood, lung and whole body. Because ip uptake is via the portal circulation to the liver, this data set had information on the distribution and clearance behaviors of Mn entering the systemic circulation from the liver.
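The tracer-kinetic structure described above can be sketched with a minimal two-compartment model: an ip dose enters the liver via the portal circulation, exchanges with a lumped "rest of body" compartment, and is eliminated in bile. All rate constants here are hypothetical placeholders, not the fitted PBPK parameters:

```python
# Minimal two-compartment sketch of 54Mn tracer kinetics after an ip dose.
# Rate constants are illustrative only (1/day).
k_liver_to_body = 0.5
k_body_to_liver = 0.1
k_biliary = 0.3          # elimination from the liver into bile

dt, t_end = 0.01, 89.0              # days, matching the 1-89 day study window
n_steps = int(t_end / dt)
liver, body = 100.0, 0.0            # percent of administered dose
whole_body = [liver + body]

for _ in range(n_steps):            # forward-Euler integration
    d_liver = (-k_liver_to_body * liver - k_biliary * liver
               + k_body_to_liver * body)
    d_body = k_liver_to_body * liver - k_body_to_liver * body
    liver += d_liver * dt
    body += d_body * dt
    whole_body.append(liver + body)

print(f"whole-body retention at day {t_end:.0f}: {whole_body[-1]:.1f}% of dose")
```

Fitting curves of this form to the tissue time courses of Furchner et al. (1966) is, in spirit, how the distribution and clearance behaviors entering the full multi-route PBPK model are constrained.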

  2. Materials measurement and accounting in an operating plutonium conversion and purification process. Phase I. Process modeling and simulation

    International Nuclear Information System (INIS)

    Thomas, C.C. Jr.; Ostenak, C.A.; Gutmacher, R.G.; Dayem, H.A.; Kern, E.A.

    1981-04-01

    A model of an operating conversion and purification process for the production of reactor-grade plutonium dioxide was developed as the first component in the design and evaluation of a nuclear materials measurement and accountability system. The model accurately simulates process operation and can be used to identify process problems and to predict the effect of process modifications

  3. Quantification of Uncertainties in Integrated Spacecraft System Models, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective for the Phase II effort will be to develop a comprehensive, efficient, and flexible uncertainty quantification (UQ) framework implemented within a...

  4. Fracture criteria under creep with strain history taken into account, and long-term strength modelling

    Science.gov (United States)

    Khokhlov, A. V.

    2009-08-01

    In the present paper, we continue to study the nonlinear constitutive relation (CR) between stress and strain proposed in [1] to describe one-dimensional isothermal rheological processes in the case of monotone variation of the strain (in particular, relaxation, creep, plasticity, and superplasticity). We show that this CR, together with the strain fracture criterion (FC), leads to theoretical long-term strength curves (LSC) with the same qualitative properties as the typical experimental LSC of viscoelastoplastic materials. We propose two parametric families of fracture criteria for the case of monotone uniaxial strain, which are related to the strain fracture criterion (SFC) but take into account the strain increase history and the dependence of the critical strain on the stress. Instead of the current strain, they use other measures of damage, related to the strain history by time-dependent integral operators. For any values of the material parameters, analytic studies of these criteria allowed us to find several useful properties, which confirm that they can be used to describe the creep fracture of different materials. In particular, we prove that, together with the proposed constitutive relations, these FC lead to theoretical long-term strength curves (TLSC) with the same qualitative properties as the experimental LSC. It is important that each of the constructed families of FC forms a monotone and continuous scale of criteria (monotonously and continuously depending on a real parameter) that contains the SFC as the limit case. Moreover, the criteria in the first family always provide a fracture time greater than that given by the SFC, the criteria in the second family always provide a smaller fracture time, and the difference can be made arbitrarily small by choosing the values of the control parameter near the scale end. This property is very useful in finding a more accurate adjustment of the model to the existing experimental data describing the
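A damage measure defined by an integral operator over the strain history can be sketched numerically. This toy example uses an exponential memory kernel; the kernel, the strain law, and the critical value are illustrative placeholders, not the paper's actual criteria:

```python
import numpy as np

tau = 2.0               # memory time of the kernel (hypothetical)
eps_critical = 0.05     # critical value triggering fracture (hypothetical)

t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
strain = 0.008 * t      # monotonically growing creep strain

# Damage measure: omega(t) = (1/tau) * integral_0^t exp(-(t-s)/tau) strain(s) ds
omega = np.empty_like(t)
for i, ti in enumerate(t):
    kernel = np.exp(-(ti - t[: i + 1]) / tau)
    omega[i] = np.sum(kernel * strain[: i + 1]) * dt / tau

# The damage measure lags the current strain, so this criterion predicts a
# later fracture time than the plain strain criterion, as the first family does.
t_frac_strain = t[np.argmax(strain >= eps_critical)]
t_frac_damage = t[np.argmax(omega >= eps_critical)]
print(t_frac_strain, t_frac_damage)
```

Replacing the exponential kernel with one that over-weights recent strain would instead shorten the predicted fracture time, mirroring the second family; tuning the memory parameter moves the prediction continuously toward the SFC limit.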

  5. A thermoelectric power generating heat exchanger: Part II – Numerical modeling and optimization

    International Nuclear Information System (INIS)

    Sarhadi, Ali; Bjørk, Rasmus; Lindeburg, Niels; Viereck, Peter; Pryds, Nini

    2016-01-01

    Highlights: • A comprehensive model was developed to optimize the integrated TEG-heat exchanger. • The developed model was validated with the experimental data. • The effect of using different interface materials on the output power was assessed. • The influence of TEG arrangement on the power production was investigated. • Optimized geometrical parameters and proper interface materials were suggested. - Abstract: In Part I of this study, the performance of an experimental integrated thermoelectric generator (TEG)-heat exchanger was presented. In the current study, Part II, the obtained experimental results are compared with those predicted by a finite element (FE) model. In the simulation of the integrated TEG-heat exchanger, the thermal contact resistance between the TEG and the heat exchanger is modeled assuming either an ideal thermal contact or using a combined Cooper–Mikic–Yovanovich (CMY) and parallel plate gap formulation, which takes into account the contact pressure, roughness and hardness of the interface surfaces as well as the air gap thermal resistance at the interface. The combined CMY and parallel plate gap model is then further developed to simulate the thermal contact resistance for the case of an interface material. The numerical results show good agreement with the experimental data with an average deviation of 17% for the case without interface material and 12% in the case of including additional material at the interfaces. The model is then employed to evaluate the power production of the integrated system using different interface materials, including graphite, aluminum (Al), tin (Sn) and lead (Pb) in a form of thin foils. The numerical results show that lead foil at the interface has the best performance, with an improvement in power production of 34% compared to graphite foil. Finally, the model predicts that for a certain flow rate, increasing the parallel TEG channels for the integrated systems with 4, 8, and 12 TEGs

  6. Accounting for water management issues within hydrological simulation: Alternative modelling options and a network optimization approach

    Science.gov (United States)

    Efstratiadis, Andreas; Nalbantis, Ioannis; Rozos, Evangelos; Koutsoyiannis, Demetris

    2010-05-01

In mixed natural and artificialized river basins, many complexities arise due to anthropogenic interventions in the hydrological cycle, including abstractions from surface water bodies, groundwater pumping or recharge, and water returns through drainage systems. Typical engineering approaches adopt a multi-stage modelling procedure, with the aim of handling the complexity of process interactions and the lack of measured abstractions. In this context, the entire hydrosystem is separated into natural and artificial sub-systems or components; the natural ones are modelled individually, and their predictions (i.e. hydrological fluxes) are transferred to the artificial components as inputs to a water management scheme. To account for the interactions between the various components, an iterative procedure is essential, whereby the outputs of the artificial sub-systems (i.e. abstractions) become inputs to the natural ones. However, this strategy suffers from multiple shortcomings, since it presupposes that purely natural sub-systems can be located and that sufficient information is available for each sub-system modelled, including suitable, i.e. "unmodified", data for calibrating the hydrological component. In addition, implementing such a strategy is ineffective when the entire scheme runs in stochastic simulation mode. To cope with the above drawbacks, we developed a generalized modelling framework following a network optimization approach. This originates from graph theory, which has been successfully implemented within some advanced computer packages for water resource systems analysis. The user formulates a unified system comprising the hydrographical network and the typical components of a water management network (aqueducts, pumps, junctions, demand nodes etc.). Input data for the latter include hydraulic properties, constraints, targets, priorities and operation costs. The real-world system is described through a conceptual graph, whose dummy properties
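The priority-driven allocation logic such network schemes encode can be caricatured in a few lines. The sketch below is purely illustrative: the node names and volumes are invented, and real water-management packages solve this as a minimum-cost network flow problem rather than greedily.

```python
# Hypothetical sketch of priority-based water allocation on a conceptual
# network; names and figures are invented for illustration only.

def allocate(supply, demands):
    """Satisfy demand nodes in priority order (1 = highest priority).

    supply  -- total water volume available at the source node
    demands -- list of (name, priority, target) tuples
    Returns a dict mapping demand name -> delivered volume.
    """
    delivered = {}
    remaining = supply
    for name, _prio, target in sorted(demands, key=lambda d: d[1]):
        delivered[name] = min(target, remaining)  # serve as much as is left
        remaining -= delivered[name]
    return delivered

# Irrigation is served only after the higher-priority urban demand.
result = allocate(100.0, [("city", 1, 60.0), ("irrigation", 2, 70.0)])
```

A genuine min-cost-flow formulation would additionally respect conduit capacities and operation costs, but the priority ordering shown here is the core of how targets compete for limited water.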

  7. Process Accounting

    OpenAIRE

    Gilbertson, Keith

    2002-01-01

    Standard utilities can help you collect and interpret your Linux system's process accounting data. Describes the uses of process accounting, standard process accounting commands, and example code that makes use of process accounting utilities.

  8. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards

    International Nuclear Information System (INIS)

    Xavier, Roberto Salles

    2014-01-01

This research examines accountability regimes, global governance organizations, and the institutional arrangements of the global governance of nuclear non-proliferation, with a focus on the Brazilian-Argentine model of mutual vigilance for nuclear safeguards. The starting point is the importance of the institutional model of global governance for the effective control of the non-proliferation of nuclear weapons. In this context, the research investigates how the current international nuclear non-proliferation arrangements are structured and how the Brazilian-Argentine mutual vigilance model for nuclear safeguards performs with respect to the accountability regimes of global governance. To this end, the current literature was surveyed along three theoretical dimensions: accountability, global governance, and global governance organizations. The research method was the case study, with content analysis as the data treatment technique. The results made it possible to establish an evaluation model based on accountability mechanisms; to assess how the Brazilian-Argentine mutual vigilance model for nuclear safeguards behaves with respect to the proposed accountability regime; and to measure the degree to which regional arrangements that work with systems of global governance can strengthen those international systems. (author)

  9. Modeling transducer impulse responses for predicting calibrated pressure pulses with the ultrasound simulation program Field II

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2010-01-01

FIELD II is simulation software capable of predicting the pressure field in front of transducers having any complicated geometry. A calibrated prediction with this program is, however, dependent on an exact voltage-to-surface-acceleration impulse response of the transducer. Such impulse response...... is not calculated by FIELD II. This work investigates the usability of combining a one-dimensional multilayer transducer modeling principle with the FIELD II software. Multilayer here refers to a transducer composed of several material layers. Measurements of pressure and current from Pz27 piezoceramic disks...... transducer model and the FIELD II software in combination give good agreement with measurements....

  10. Optimal dose selection accounting for patient subpopulations in a randomized Phase II trial to maximize the success probability of a subsequent Phase III trial.

    Science.gov (United States)

    Takahashi, Fumihiro; Morita, Satoshi

    2018-02-08

    Phase II clinical trials are conducted to determine the optimal dose of the study drug for use in Phase III clinical trials while also balancing efficacy and safety. In conducting these trials, it may be important to consider subpopulations of patients grouped by background factors such as drug metabolism and kidney and liver function. Determining the optimal dose, as well as maximizing the effectiveness of the study drug by analyzing patient subpopulations, requires a complex decision-making process. In extreme cases, drug development has to be terminated due to inadequate efficacy or severe toxicity. Such a decision may be based on a particular subpopulation. We propose a Bayesian utility approach (BUART) to randomized Phase II clinical trials which uses a first-order bivariate normal dynamic linear model for efficacy and safety in order to determine the optimal dose and study population in a subsequent Phase III clinical trial. We carried out a simulation study under a wide range of clinical scenarios to evaluate the performance of the proposed method in comparison with a conventional method separately analyzing efficacy and safety in each patient population. The proposed method showed more favorable operating characteristics in determining the optimal population and dose.

  11. Reference methodologies for radioactive controlled discharges an activity within the IAEA's Program Environmental Modelling for Radiation Safety II (EMRAS II)

    International Nuclear Information System (INIS)

    Stocki, T.J.; Bergman, L.; Tellería, D.M.; Proehl, G.; Amado, V.; Curti, A.; Bonchuk, I.; Boyer, P.; Mourlon, C.; Chyly, P.; Heling, R.; Sági, L.; Kliaus, V.; Krajewski, P.; Latouche, G.; Lauria, D.C.; Newsome, L.; Smith, J.

    2011-01-01

In January 2009, the IAEA EMRAS II (Environmental Modelling for Radiation Safety II) program was launched. The goals of the program are to develop, compare and test models for the assessment of radiological impacts to the public and the environment due to radionuclides being released or already existing in the environment; to help countries build and harmonize their capabilities; and to model the movement of radionuclides in the environment. Within EMRAS II, nine working groups are active; this paper focuses on the activities of Working Group 1: Reference Methodologies for Controlling Discharges of Routine Releases. Within this working group, environmental transfer and dose assessment models are tested under different scenarios by the participating countries and the results compared. This process allows each participating country to identify characteristics of its models that need to be refined. The goal of this working group is to identify reference methodologies for the assessment of exposures to the public due to routine discharges of radionuclides to the terrestrial and aquatic environments. Several different models are being applied to estimate the transfer of radionuclides in the environment for various scenarios. The first phase of the project involves a scenario of a nuclear power reactor at a coastal location which routinely (continuously) discharges 60Co, 85Kr, 131I, and 137Cs to the atmosphere and 60Co, 137Cs, and 90Sr to the marine environment. In this scenario many of the parameters and characteristics of the representative group were given to the modelers and cannot be altered. Various models have been used by the different participants in this inter-comparison (PC-CREAM, CROM, IMPACT, CLRP POSEIDON, SYMBIOSE and others). This first scenario is intended to enable a comparison of the radionuclide transport and dose modelling. These scenarios will facilitate the development of reference methodologies for controlled discharges. (authors)

  12. Single-arm phase II trial design under parametric cure models.

    Science.gov (United States)

    Wu, Jianrong

    2015-01-01

    The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
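For readers unfamiliar with parametric cure models, the standard mixture formulation can be sketched in a few lines. This is a didactic sketch with illustrative parameter values, not the test statistic or sample size formula proposed in the paper.

```python
import math

def cure_survival(t, pi, lam):
    """Mixture cure model survival function: a fraction pi of patients is
    cured (never experiences the event), while the remaining 1 - pi follow
    an exponential distribution with hazard rate lam."""
    return pi + (1.0 - pi) * math.exp(-lam * t)

# With a 30% cure fraction the survival curve plateaus at 0.30 instead of
# decaying to zero, which is why the exponential design is inappropriate.
s_early = cure_survival(1.0, 0.3, 0.5)
s_late = cure_survival(50.0, 0.3, 0.5)
```

The plateau at `pi` is the feature that breaks sample size formulas derived under the pure exponential model, motivating the cure-model-based design.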

  13. [Optimization of ecological footprint model based on environmental pollution accounts: a case study in Pearl River Delta urban agglomeration].

    Science.gov (United States)

    Bai, Yu; Zeng, Hui; Wei, Jian-bing; Zhang, Wen-juan; Zhao, Hong-wei

    2008-08-01

To address the fact that traditional ecological footprint accounting ignores environmental pollution, this paper puts forward an optimized ecological footprint (EF) model that takes the pollution footprint into account. At the same time, the calculation of environmental capacity was added to the ecological capacity accounts, and the model was then used for an ecological assessment of the Pearl River Delta urban agglomeration in 2005. The results showed close agreement between the ecological footprint and the region's development characteristics and spatial pattern, and illustrated that the optimized EF model gives a better orientation for environmental pollution within the system and can more fully explain the environmental effects of human activity. The optimized ecological footprint model showed better completeness and objectivity than traditional models.
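The optimized accounting idea, a conventional land-type footprint plus a pollution footprint obtained by dividing emissions by the absorption capacity of the corresponding land, can be sketched as follows. All figures are hypothetical and are not the paper's data.

```python
def ecological_footprint(land_demands, emissions, absorption_rates):
    """Optimized EF sketch: the conventional land-type footprint plus a
    pollution footprint, where each pollutant's footprint is its emission
    divided by the per-hectare absorption capacity of the absorbing land
    type (all figures hypothetical, in global hectares per capita)."""
    conventional = sum(land_demands.values())
    pollution = sum(e / absorption_rates[p] for p, e in emissions.items())
    return conventional + pollution

ef = ecological_footprint(
    {"cropland": 0.5, "forest": 0.2, "built_up": 0.1},  # gha per capita
    {"SO2": 40.0, "COD": 10.0},                         # kg per capita
    {"SO2": 80.0, "COD": 50.0},                         # kg absorbed per gha
)
```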

  14. (I) A Declarative Framework for ERP Systems(II) Reactors: A Data-Driven Programming Model for Distributed Applications

    DEFF Research Database (Denmark)

    Stefansen, Christian Oskar Erik

    , namely the general ledger and accounts receivable. The result is an event-based approach to designing ERP systems and an abstract-level sketch of the architecture. • Compositional Specification of Commercial Contracts. The paper describes the design, multiple semantics, and use of a domain...... on the idea of soft constraints the paper explains the design, semantics, and use of a language for allocating work in business processes. The language lets process designers express both hard constraints and soft constraints. (II) The Reactors programming model: • Reactors: A Data-Oriented Synchronous....../Asynchronous Programming Model for Distributed Applications. The paper motivates, explains, and defines a distributed data-driven programming model. In the model a reactor is a stateful unit of distribution. A reactor specifies constructive, declarative constraints on its data and the data of other reactors in the style...

  15. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model-dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures reasonably well the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent; these biases are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  16. Bianchi Type-II inflationary models with constant deceleration ...

    Indian Academy of Sciences (India)

    Einstein's field equations are considered for a locally rotationally symmetric Bianchi Type-II space–time in the presence of a massless scalar field with a scalar potential. Exact solutions of scale factors and other physical parameters are obtained by using a special law of variation for Hubble's parameter that yields a constant ...

  17. Molecular Models of Ruthenium(II) Organometallic Complexes

    Science.gov (United States)

    Coleman, William F.

    2007-01-01

    This article presents the featured molecules for the month of March, which appear in the paper by Ozerov, Fafard, and Hoffman, and which are related to the study of the reactions of a number of "piano stool" complexes of ruthenium(II). The synthesis of compound 2a offers students an alternative to the preparation of ferrocene if they are only…

  18. Carbon footprint estimator, phase II : volume I - GASCAP model & volume II - technical appendices [technical brief].

    Science.gov (United States)

    2014-03-01

    This study resulted in the development of the GASCAP model (the Greenhouse Gas Assessment : Spreadsheet for Transportation Capital Projects). This spreadsheet model provides a user-friendly interface for determining the greenhouse gas (GHG) emissions...

  19. Closing the Gaps: Taking into Account the Effects of Heat Stress and Fatigue Modeling in an Operational Analysis

    NARCIS (Netherlands)

    Woodill, G.; Barbier, R.R.; Fiamingo, C.

    2010-01-01

Traditional combat-model-based analysis of Dismounted Combatant Operations (DCO) has focused on the ‘lethal’ aspects of an engagement and, to a limited extent, on the environment in which the engagement takes place. These are, however, only two of the factors that should be taken into account when

  20. Accounting for Test Variability through Sizing Local Domains in Sequential Design Optimization with Concurrent Calibration-Based Model Validation

    Science.gov (United States)

    2013-08-01

Proceedings of IDETC/CIE 2013, ASME 2013 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference ... Accounting for Test Variability through Sizing Local Domains in Sequential Design Optimization with Concurrent Calibration-Based Model Validation. Dorin Drignei, Mathematics and Statistics Department ... insufficient to achieve the desired validity level. In this paper, we introduce a technique to determine the number of tests required to account for their

  1. Structural equation models using partial least squares: an example of the application of SmartPLS® in accounting research

    Directory of Open Access Journals (Sweden)

    João Carlos Hipólito Bernardes do Nascimento

    2016-08-01

Given the Accounting academy's increasing interest in investigating latent phenomena, researchers have turned to robust multivariate techniques. Although Structural Equation Models are frequently used in the international literature, the Accounting academy has made little use of the variant based on Partial Least Squares (PLS-SEM), mostly due to a lack of knowledge about the applicability and benefits of its use in Accounting research. While the PLS-SEM approach is regularly used in surveys, the method is also appropriate for modeling complex relations involving multiple dependence and independence relationships between latent variables, which makes it very useful in experiments and archival data studies. A literature review of Accounting studies that used the PLS-SEM technique is therefore presented. Next, as no publications were found that exemplify the application of the technique in Accounting, a PLS-SEM application is developed using the software SmartPLS® to encourage exploratory research; it should be particularly useful to graduate students. The main contribution of this article is thus methodological, as it aims to clearly identify guidelines for the appropriate use of PLS. By presenting an example of how to conduct exploratory research using PLS-SEM, the article seeks to enhance researchers' understanding of how to use and report the technique in their research.
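As a minimal illustration of the partial least squares machinery underlying PLS-SEM, a single-component PLS1 regression can be written in a few lines: the weight vector is proportional to the covariance of each predictor with the response. This is a didactic sketch, not SmartPLS® or a full path-modeling algorithm.

```python
def pls1_one_component(X, y):
    """Single-component PLS1 regression on centred data.

    The weight vector w is proportional to X'y (predictor-response
    covariances); scores t = Xw are then regressed on y, and the result
    is mapped back to coefficients on the original predictors."""
    n, p = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]                      # unit-length weights
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return [b * wj for wj in w]

# Tiny centred example with two perfectly collinear predictors, a case
# where ordinary least squares would be ill-conditioned but PLS is not.
X = [[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]]
y = [-2.0, 0.0, 2.0]
coef = pls1_one_component(X, y)
```

Full PLS-SEM additionally estimates a structural model between latent variables, but this covariance-weighted projection is the common computational core.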

  2. Accounting Department Chairpersons' Perceptions of Business School Performance Using a Market Orientation Model

    Science.gov (United States)

    Webster, Robert L.; Hammond, Kevin L.; Rothwell, James C.

    2013-01-01

    This manuscript is part of a stream of continuing research examining market orientation within higher education and its potential impact on organizational performance. The organizations researched are business schools and the data collected came from chairpersons of accounting departments of AACSB member business schools. We use a reworded Narver…

  3. 76 FR 29249 - Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications

    Science.gov (United States)

    2011-05-20

    ... application process and selection criteria are described in Section IV of the Request for Applications but in... suppliers with a mechanism for shared governance that have formed an Accountable Care Organization (ACO..., leadership, and commitment to outcomes-based contracts with non- Medicare purchasers. Final selection will be...

  4. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology.

    Science.gov (United States)

    Tompkins, Adrian M; Ermert, Volker

    2013-02-18

The relative roles of climate variability and population-related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles, which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Model simulations of entomological inoculation rate and circumsporozoite protein rate compare well to data from field studies from a wide range of locations in West Africa that encompass both seasonal endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine spatial resolution regional integrations for Eastern Africa reproduce the malaria atlas project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model broadly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities, probably due to the neglect of population migration. A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions.
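As an example of the temperature dependence such dynamical models resolve, the sporogonic (extrinsic incubation) period of the parasite is often estimated with a Detinova-type degree-day formula. The sketch below assumes the commonly quoted constants of 111 degree-days above a 16 °C threshold for P. falciparum; the model described above may use a different parameterization.

```python
def sporogonic_period_days(temp_c):
    """Degree-day estimate of the extrinsic incubation period of
    P. falciparum: 111 degree-days accumulated above a 16 degC threshold
    (a textbook Detinova-type formula, assumed here for illustration)."""
    if temp_c <= 16.0:
        return float("inf")  # parasite development halts below threshold
    return 111.0 / (temp_c - 16.0)

eip_warm = sporogonic_period_days(28.0)  # ~9.25 days
eip_cool = sporogonic_period_days(20.0)  # ~27.75 days
```

The steep lengthening of the incubation period as temperature falls is one reason the timing between rains, temperature and the malaria season must be finely resolved.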

  5. Financial Accounting: Classifications and Standard Terminology for Local and State School Systems. State Educational Records and Reports Series: Handbook II, Revised.

    Science.gov (United States)

    Roberts, Charles T., Comp.; Lichtenberger, Allan R., Comp.

    This handbook has been prepared as a vehicle or mechanism for program cost accounting and as a guide to standard school accounting terminology for use in all types of local and intermediate education agencies. In addition to classification descriptions, program accounting definitions, and proration of cost procedures, some units of measure and…

  6. NSLS-II: Nonlinear Model Calibration for Synchrotrons

    Energy Technology Data Exchange (ETDEWEB)

    Bengtsson, J.

    2010-10-08

This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods in the field of particle accelerators has been naive and misleading, i.e., it ignores the impact of noise, we elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ~1 x 10^-5 for 1024 turns (to calibrate the linear optics) and ~1 x 10^-4 for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ~0.1, and the transverse damping time is ~20 msec, i.e., ~4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy of these methods in the field of particle accelerators has been naive, i.e., has ignored the impact of noise, we have also derived explicit formulae, from first principles, for a quantitative statement. For example, for N = 256 and 5% noise we obtain δν ~ 1 x 10^-5. A comparison with the state of the art in, e.g., telecom and electrical engineering since the 60s is quite revealing: for example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s, or Claude Shannon et al. since the 40s for that matter.
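The quoted resolution limits can be illustrated with a naive DFT-peak tune estimate from noisy turn-by-turn data: without interpolation or NAFF-style refinement, the accuracy is bounded by the bin spacing 1/N, well short of the 10^-4 to 10^-5 accuracies discussed in the note. The signal parameters below are invented for illustration.

```python
import math
import random

def estimate_tune(x):
    """Estimate the fractional betatron tune from turn-by-turn data by
    locating the tallest DFT bin.  Resolution is ~1/N; refinement (bin
    interpolation, windowing, NAFF) is needed for higher accuracy."""
    n = len(x)
    best_k, best_a = 0, -1.0
    for k in range(1, n // 2):
        re = sum(x[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
        im = sum(x[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
        a = re * re + im * im
        if a > best_a:
            best_k, best_a = k, a
    return best_k / n

random.seed(1)
nu = 0.205  # "true" tune, hypothetical
turns = [math.sin(2 * math.pi * nu * j) + 0.05 * random.gauss(0, 1)
         for j in range(256)]
nu_hat = estimate_tune(turns)  # accurate only to within ~1/256
```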

  8. Reduced Order Aeroservoelastic Models with Rigid Body Modes, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Complex aeroelastic and aeroservoelastic phenomena can be modeled on complete aircraft configurations generating models with millions of degrees of freedom. Starting...

  9. Scaled Model Technology for Flight Research of General Aviation Aircraft, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Our proposed future Phase II activities are aimed at developing a scientifically based "tool box" for flight research using scaled models. These tools will be of...

  10. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    Science.gov (United States)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae connecting the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
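The Kermack-McKendrick dynamics referred to above can be sketched with a forward-Euler integration of the classic SIR equations. The parameters below are illustrative only and are not fitted to the Dutch outbreak data.

```python
def sir_outbreak(s0, i0, beta, gamma, dt=0.1, steps=5000):
    """Forward-Euler integration of the Kermack-McKendrick SIR model:
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I."""
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # newly infected this step
        new_rec = gamma * i * dt      # newly recovered this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# An unvaccinated susceptible pool of 10,000, with R0 = beta*N/gamma = 5:
# the outbreak burns through almost the entire susceptible population.
s_end, i_end, r_end = sir_outbreak(s0=10000.0, i0=1.0, beta=5e-5, gamma=0.1)
```

In the periodic-outbreak setting of the paper, births in the unvaccinated communities slowly replenish the susceptible pool between epidemics, which is what sets the roughly twelve-year period.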

  11. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    Science.gov (United States)

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…

  12. Alternative biosphere modeling for safety assessment of HLW disposal taking account of geosphere-biosphere interface of marine environment

    International Nuclear Information System (INIS)

    Kato, Tomoko; Ishiguro, Katsuhiko; Naito, Morimasa; Ikeda, Takao; Little, Richard

    2001-03-01

In the safety assessment of a high-level radioactive waste (HLW) disposal system, it is required to estimate the radiological impacts on future human beings arising from potential radionuclide releases from a deep repository into the surface environment. In order to estimate these impacts, a biosphere model is developed by reasonably assuming radionuclide migration processes in the surface environment and relevant human lifestyles. It is important to modify the present biosphere models, or to develop alternative biosphere models, according to the quality and quantity of the information acquired through the siting process for constructing the repository. In this study, alternative biosphere models were developed taking the geosphere-biosphere interface of the marine environment into account. Moreover, the flux-to-dose conversion factors calculated by these alternative biosphere models were compared with those from the present basic biosphere models. (author)

  13. Theory of extended stellar atmospheres. II. A grid of static spherical models for O stars and planetary nebula nuclei

    International Nuclear Information System (INIS)

    Kunasz, P.B.; Hummer, D.G.; Mihalas, D.

    1975-01-01

Spherical static non-LTE model atmospheres are presented for stars with M/M_sun = 30 and 60 at various points on their evolutionary tracks, and for some nuclei of planetary nebulae at two points of a modified Harman-Seaton sequence. The method of Mihalas and Hummer was employed, which uses a parametrized radiation force multiplier to simulate the force of radiation arising from the entire line spectrum. However, in the present work the density structure computed in the LTE models was held fixed in the calculation of the corresponding non-LTE models; in addition, the opacity of an ''average light ion'' was taken into account. The temperatures for the non-LTE models are generally lower, at a given depth, than for the corresponding LTE models when T_eff < 45,000 K, while the situation is reversed at higher temperatures. The continuous energy distributions are generally flattened by extension. The Lyman jump is in emission for extended models of massive stars, but never for the models of nuclei of planetary nebulae (this is primarily a temperature effect). The Balmer jumps are always in absorption. The Lyman lines are in emission, and the Balmer lines in absorption; He II λ4686 comes into emission in the most extended models without hydrogen line pumping, showing that it is an indicator of atmospheric extension. Very severe limb darkening is found for extended models, which have apparent angular sizes significantly smaller than expected from the geometrical size of the star. Extensive tables are given of monochromatic magnitudes, continuum jumps and gradients, Strömgren-system colors, monochromatic extensions, and the profiles and equivalent widths of the hydrogen lines for all models, and of the He II lines for some of the 60 M_sun models.

  14. A Model for a Level II Emergency Room

    Science.gov (United States)

    1989-05-02

[Scanned-document text; only fragments are legible:] Technologist on call for Radiology, Laboratory, and Blood Bank; trained personnel available on call to take electrocardiograms; ICU/CCU located on the second floor; flight medicine, which has some collateral roles with the emergency department, is located in the south basement. [The remainder of the extract is an illegible floor-plan diagram.]

  15. Efficient modeling of sun/shade canopy radiation dynamics explicitly accounting for scattering

    Science.gov (United States)

    Bodin, P.; Franklin, O.

    2012-04-01

The separation of global radiation (Rg) into its direct (Rb) and diffuse (Rd) constituents is important when modeling plant photosynthesis because a high Rd:Rg ratio has been shown to enhance Gross Primary Production (GPP). To include this effect in vegetation models, the plant canopy must be separated into sunlit and shaded leaves. However, because such models are often too intractable and computationally expensive for theoretical or large scale studies, simpler sun-shade approaches are often preferred. A widely used and computationally efficient sun-shade model was developed by Goudriaan (1977) (GOU). However, compared to more complex models, this model's realism is limited by its lack of explicit treatment of radiation scattering. Here we present a new model based on the GOU model, but which in contrast explicitly simulates radiation scattering by sunlit leaves and the absorption of this radiation by the canopy layers above and below (2-stream approach). Compared to the GOU model our model predicts significantly different profiles of scattered radiation that are in better agreement with measured profiles of downwelling diffuse radiation. With respect to these data our model's performance is equal to a more complex and much slower iterative radiation model while maintaining the simplicity and computational efficiency of the GOU model.
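The sunlit/shaded separation underlying both the GOU scheme and the 2-stream variant can be illustrated with a minimal sketch. The Beer-Lambert extinction form, the extinction coefficients, and the irradiance values below are illustrative assumptions, not the paper's actual formulation:

```python
import math

def sun_shade_partition(lai, k_b=0.5, k_d=0.7, r_b=400.0, r_d=100.0, n_layers=5):
    """Toy sun/shade canopy partition in the spirit of the Goudriaan (GOU)
    scheme, without explicit scattering: Beer-Lambert extinction determines
    the sunlit fraction and the beam/diffuse radiation absorbed per layer.
    Returns per-layer tuples (sunlit_fraction, absorbed_beam, absorbed_diffuse)."""
    d_lai = lai / n_layers
    layers = []
    for i in range(n_layers):
        lai_above = i * d_lai
        f_sun = math.exp(-k_b * lai_above)                        # sunlit fraction at layer top
        a_beam = r_b * f_sun * (1.0 - math.exp(-k_b * d_lai))     # beam absorbed in layer
        a_diff = r_d * math.exp(-k_d * lai_above) * (1.0 - math.exp(-k_d * d_lai))
        layers.append((f_sun, a_beam, a_diff))
    return layers
```

The sunlit fraction decays with cumulative leaf area index above each layer, which is the mechanism that makes a high diffuse share enhance whole-canopy GPP.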

  16. Physiologically motivated time-delay model to account for mechanisms underlying enterohepatic circulation of piroxicam in human beings.

    Science.gov (United States)

    Tvrdonova, Martina; Dedik, Ladislav; Mircioiu, Constantin; Miklovicova, Daniela; Durisova, Maria

    2009-01-01

The study was conducted to formulate a physiologically motivated time-delay (PM TD) mathematical model for human beings, which incorporates disintegration of a drug formulation, dissolution, discontinuous gastric emptying and enterohepatic circulation (EHC) of a drug. Piroxicam, administered to 24 European, healthy individuals in 20 mg capsules Feldene Pfizer, was used as a model drug. Plasma was analysed for piroxicam by a validated high-performance liquid chromatography method. The PM TD mathematical model was developed using measured plasma piroxicam concentration-time profiles of the individuals and tools of a computationally efficient mathematical analysis and modeling, based on the theory of linear dynamic systems. The constructed model was capable of (i) quantifying the different fractions of the piroxicam dose sequentially available for absorption and (ii) estimating the time delays between the time when the piroxicam dose reaches the stomach and the times when individual fractions of the dose become available for absorption. The model was verified through a formal proof, based on comparisons of observed and model-predicted plasma piroxicam concentration-time profiles. The verification showed adequate model performance and agreement between the compared profiles. Accordingly, it confirmed that the developed model was an appropriate representation of the piroxicam fate in the individuals enrolled. The presented model provides valuable information on factors that control the dynamic mechanisms of EHC, that is, information unobtainable with the models previously proposed for EHC analysis.
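The core idea of dose fractions that become available for absorption only after individual time delays can be sketched as a sum of delayed Bateman terms. The rate constants, distribution volume, and the two-fraction split below are assumptions for illustration, not the fitted piroxicam model:

```python
import math

def concentration(t, fractions, delays, ka=1.2, ke=0.05, v_d=10.0, dose=20.0):
    """Plasma concentration when the dose is split into fractions that each
    become available for first-order absorption only after its own time delay.
    ka: absorption rate, ke: elimination rate (ka != ke assumed), v_d: volume
    of distribution. Each active fraction contributes a delayed Bateman term."""
    c = 0.0
    for f, tau in zip(fractions, delays):
        if t > tau:
            dt = t - tau
            c += (dose * f / v_d) * ka / (ka - ke) * (math.exp(-ke * dt) - math.exp(-ka * dt))
    return c
```

A second, strongly delayed fraction is one simple way a model of this family can reproduce the secondary concentration peaks characteristic of enterohepatic circulation.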

  17. Internet accounting

    NARCIS (Netherlands)

    Pras, Aiko; van Beijnum, Bernhard J.F.; Sprenkels, Ron; Parhonyi, R.

    2001-01-01

    This article provides an introduction to Internet accounting and discusses the status of related work within the IETF and IRTF, as well as certain research projects. Internet accounting is different from accounting in POTS. To understand Internet accounting, it is important to answer questions like

  18. Factors accounting for youth suicide attempt in Hong Kong: a model building.

    Science.gov (United States)

    Wan, Gloria W Y; Leung, Patrick W L

    2010-10-01

This study aimed at proposing and testing a conceptual model of youth suicide attempt. We proposed a model that began with family factors such as a history of physical abuse and parental divorce/separation. Family relationship, presence of psychopathology, life stressors, and suicide ideation were postulated as mediators, leading to youth suicide attempt. The stepwise entry of the risk factors to a logistic regression model defined their proximity as related to suicide attempt. Path analysis further refined our proposed model of youth suicide attempt. Our originally proposed model was largely confirmed. The main revision was dropping parental divorce/separation as a risk factor in the model due to its lack of significant contribution when examined alongside other risk factors. This model was cross-validated by gender. This study moved research on youth suicide from identification of individual risk factors to model building, integrating separate findings of the past studies.

  19. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...

  20. Accounting for the influence of the Earth's sphericity in three-dimensional density modelling

    Science.gov (United States)

    Martyshko, P. S.; Byzov, D. D.; Chernoskutov, A. I.

    2017-11-01

    A method for transformation of the three-dimensional regional "flat" density models of the Earth's crust and upper mantle to the "spherical" models and vice versa is proposed. A computation algorithm and a method of meaningful comparison of the vertical component of the gravity field of both models are presented.

  1. Accounting for imperfect forward modeling in geophysical inverse problems — Exemplified for crosshole tomography

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Holm Jacobsen, Bo

    2014-01-01

    of the modeling error was inferred in the form of a correlated Gaussian probability distribution. The key to the method was the ability to generate many realizations from a statistical description of the source of the modeling error, which in this case is the a priori model. The methodology was tested for two...

  2. Cost accounting in radiation oncology: a computer-based model for reimbursement.

    Science.gov (United States)

    Perez, C A; Kobeissi, B; Smith, B D; Fox, S; Grigsby, P W; Purdy, J A; Procter, H D; Wasserman, T H

    1993-04-02

The skyrocketing cost of medical care in the United States has resulted in multiple efforts in cost containment. The present work offers a rational computer-based cost accounting approach to determine the actual use of resources in providing a specific service in a radiation oncology center. A procedure-level cost accounting system was developed by using recorded information on actual time and effort spent by individual staff members performing various radiation oncology procedures, and analyzing direct and indirect costs related to staffing (labor), facilities and equipment, supplies, etc. Expenditures were classified as direct or indirect and fixed or variable. A relative value unit was generated to allocate specific cost factors to each procedure. Different costs per procedure were identified according to complexity. Whereas there was no significant difference in the treatment time between low-energy (4 and 6 MV) and high-energy (18 MV) accelerators, there were significantly higher costs identified in the operation of a high-energy linear accelerator, a reflection of initial equipment investment, quality assurance and calibration procedures, maintenance costs, service contract, and replacement parts. Utilization of resources was related to the complexity of the procedures performed and whether the treatments were delivered to inpatients or outpatients. In analyzing time-motion data for physicians and other staff, it was apparent that a greater effort must be made to train the staff to accurately record all times involved in a given procedure, and it is strongly recommended that each institution perform its own time-motion studies to more accurately determine operating costs. Sixty-six percent of our facility's global costs were for labor, 20% for other operating expenses, 10% for space, and 4% for equipment. Significant differences were noted in the cost allocation for professional or technical functions, as labor, space, and equipment costs are higher in the latter.
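The relative-value-unit allocation described above can be sketched in a few lines: pooled indirect costs are distributed across procedures in proportion to their RVU weights and added to each procedure's directly traceable cost. The procedure names and figures below are hypothetical:

```python
def procedure_costs(direct_costs, indirect_total, rvu):
    """Allocate a pooled indirect cost (space, equipment, administration) to
    procedures in proportion to their relative value units (RVUs), then add
    each procedure's directly traceable cost to obtain a full cost per
    procedure."""
    total_rvu = sum(rvu.values())
    return {p: direct_costs[p] + indirect_total * rvu[p] / total_rvu
            for p in direct_costs}

# Hypothetical example: a simple and a complex procedure sharing $1000 of
# indirect cost, weighted 1:3 by RVU.
costs = procedure_costs({"simple": 100.0, "complex": 300.0},
                        indirect_total=1000.0,
                        rvu={"simple": 1.0, "complex": 3.0})
```

With these numbers the simple procedure carries 100 + 250 = 350 and the complex one 300 + 750 = 1050, mirroring how complexity drives the cost differences reported above.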

  3. ISLSCP II IGBP NPP Output from Terrestrial Biogeochemistry Models

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This data set contains modeled annual net primary production (NPP) for the land biosphere from seventeen different global models. Annual NPP is defined as...

  4. ISLSCP II IGBP NPP Output from Terrestrial Biogeochemistry Models

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set contains modeled annual net primary production (NPP) for the land biosphere from seventeen different global models. Annual NPP is defined as the net...

  5. Development of a coal shrinkage-swelling model accounting for water content in the micropores

    Energy Technology Data Exchange (ETDEWEB)

    Prob Thararoop; Zuleima T. Karpyn; Turgay Ertekin [Pennsylvania State University, University Park, PA (United States). Petroleum and Natural Gas Engineering

    2009-07-01

Changes in cleat permeability of coal seams are influenced by internal stress and by the release or adsorption of gas in the coal matrix during production/injection processes. Coal shrinkage-swelling models have been proposed to quantify such changes; however, none of the existing models incorporates the effect of the presence of water in the micropores on the gas sorption of coalbeds. This paper proposes a model of coal shrinkage and swelling, incorporating the effect of water in the micropores. The proposed model was validated using field permeability data from San Juan basin coalbeds and compared with coal shrinkage and swelling models existing in the literature.
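The class of shrinkage-swelling models the paper extends can be sketched generically: a pore-compaction term plus a Langmuir-type matrix shrinkage term give a porosity ratio, which is cubed to yield the permeability ratio. This is a Palmer-Mansoori-style stand-in under assumed parameter values; it does not include the paper's water-in-micropores effect:

```python
def cleat_permeability_ratio(p, p0, c_m=1e-4, eps_max=0.01, p_eps=5.0):
    """Generic cleat porosity/permeability change with pressure depletion:
    a mechanical compaction term (pore compressibility c_m) plus a
    Langmuir-type matrix shrinkage term (maximum strain eps_max, Langmuir
    pressure p_eps), cubed to give k/k0 via the cubic law. All parameter
    values are illustrative assumptions."""
    compaction = c_m * (p - p0)                                   # closes cleats on depletion
    shrinkage = eps_max * (p0 / (p0 + p_eps) - p / (p + p_eps))   # opens cleats on desorption
    phi_ratio = max(1.0 + compaction + shrinkage, 0.0)
    return phi_ratio ** 3
```

Depending on which term dominates, depletion can either enhance permeability (matrix shrinkage wins) or reduce it (compaction wins), which is the rebound behavior such models are fitted to in San Juan basin data.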

  6. Modeling FAMA ion beam diagnostics based on the Ptolemy II model

    Energy Technology Data Exchange (ETDEWEB)

    Balvanovic, R., E-mail: broman@vinca.rs [Laboratory of Physics, Vinca Institute of Nuclear Sciences, University of Belgrade, PO Box 522, 11001 Belgrade (Serbia); Belicev, P. [Laboratory of Physics, Vinca Institute of Nuclear Sciences, University of Belgrade, PO Box 522, 11001 Belgrade (Serbia); Radjenovic, B. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia)

    2012-10-21

The previously developed model of ion beam transport control for the FAMA facility is further enhanced by equipping it with a model of ion beam diagnostics. The control model, which previously executed once, is adjusted to run in iterative mode: each iteration samples the input beam, normally distributed over the initial phase space, and calculates a single trajectory through the facility beam lines. The model takes into account only the particles that manage to pass through all the beam-line apertures, emulating in this way a Faraday cup and a beam profile meter. Beam phase-space distributions and horizontal and vertical beam profiles are also generated at the end of the beam transport lines that the FAMA facility consists of. Adding the model of ion beam diagnostics to the model of ion beam transport control simplifies and speeds up the process of determining optimal ion beam control parameters, and improves the understanding of how the control parameters influence the ion beam characteristics.

  7. THE CURRENT ACCOUNT DEFICIT AND THE FIXED EXCHANGE RATE. ADJUSTING MECHANISMS AND MODELS.

    Directory of Open Access Journals (Sweden)

    HATEGAN D.B. Anca

    2010-07-01

Full Text Available The main purpose of the paper is to explain what measures can be taken in order to fix the trade deficit, and the pressure that is upon a country by imposing such measures. The international and the national supply and demand conditions change rapidly, and if a country doesn’t succeed in keeping a tight control over its deficit, a lot of factors will affect its wellbeing. In order to reduce the external trade deficit, the government needs to resort to several techniques. The desired result is to have a balanced current account, and therefore, the government is free to use measures such as fixing its exchange rate, reducing government spending etc. We have shown that all these measures will have a certain impact upon an economy, by allowing its exports to thrive and eliminate the danger from excessive imports, or vice versa. The main conclusion of our paper is that government intervention is allowed in order to maintain the balance of the current account.

  8. Nonlinear analysis of a new car-following model accounting for the global average optimal velocity difference

    Science.gov (United States)

    Peng, Guanghan; Lu, Weizhen; He, Hongdi

    2016-09-01

In this paper, a new car-following model is proposed by considering the global average optimal velocity difference effect on the basis of the full velocity difference (FVD) model. We investigate the influence of the global average optimal velocity difference on the stability of traffic flow by making use of linear stability analysis. It indicates that the stable region will be enlarged by taking the global average optimal velocity difference effect into account. Subsequently, the mKdV equation near the critical point and its kink-antikink soliton solution, which can describe the traffic jam transition, is derived from nonlinear analysis. Furthermore, numerical simulations confirm that the effect of the global average optimal velocity difference can efficiently improve the stability of traffic flow, which shows that our new consideration should be taken into account to suppress traffic congestion in car-following theory.
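An FVD-type update of the kind extended here can be sketched on a ring road. The extra relaxation term toward the average optimal velocity over all headways is an illustrative reading of the "global average" idea; the optimal-velocity function, parameter names, and values are assumptions, not the paper's calibration:

```python
import math

def optimal_velocity(dx, v_max=30.0, h_c=25.0):
    # A typical optimal-velocity function used in car-following studies
    return v_max / 2.0 * (math.tanh(dx - h_c) + math.tanh(h_c))

def fvd_step(x, v, dt=0.1, alpha=0.5, lam=0.3, gamma=0.1, road=1000.0):
    """One Euler step of an FVD-type model on a ring road of length `road`,
    extended with a relaxation term toward the global average optimal
    velocity. x, v: per-vehicle positions and speeds."""
    n = len(x)
    dx = [(x[(i + 1) % n] - x[i]) % road for i in range(n)]   # headways
    v_opt = [optimal_velocity(d) for d in dx]
    v_opt_avg = sum(v_opt) / n                                # global average
    acc = [alpha * (v_opt[i] - v[i])                          # OV relaxation
           + lam * (v[(i + 1) % n] - v[i])                    # velocity difference
           + gamma * (v_opt_avg - v[i])                       # global-average term
           for i in range(n)]
    x_new = [(x[i] + v[i] * dt) % road for i in range(n)]
    v_new = [max(v[i] + acc[i] * dt, 0.0) for i in range(n)]
    return x_new, v_new
```

Linear stability analyses of such models perturb a uniform-headway equilibrium of exactly this update and ask whether the perturbation grows, which is where the enlarged stable region is established.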

  9. Accounting for spatial effects in land use regression for urban air pollution modeling.

    Science.gov (United States)

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty; and statistical methods for resolving these effects--e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models--may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R(2) values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
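The GWR step can be illustrated as a local weighted least-squares fit at each observation site, with Gaussian kernel weights decaying with distance. This is a minimal sketch of generic GWR, not the authors' specification (which additionally includes the wind variable):

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Geographically weighted regression: at each observation location, fit
    a weighted least-squares model whose Gaussian kernel weights decay with
    distance to that location, yielding spatially varying coefficients
    (one row of estimates per site)."""
    n = len(y)
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to site i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
        XtW = X.T * w                                    # weight each observation
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)     # local WLS solution
    return betas
```

Because each site gets its own coefficient vector, the fitted surface can vary over space, which is how GWR absorbs the non-stationarity that a single global LUR fit cannot.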

  10. Dynamic modeling and simulation of EBR-II steam generator system

    International Nuclear Information System (INIS)

    Berkan, R.C.; Upadhyaya, B.R.

    1989-01-01

This paper presents a low-order dynamic model of the Experimental Breeder Reactor-II (EBR-II) steam generator system. The model development includes the application of energy, mass and momentum balance equations in state-space form. The model also includes a three-element controller for the drum water level control problem. The simulation results for low-level perturbations exhibit the inherently stable characteristics of the steam generator. The predictions of test transients also verify the consistency of this low-order model.
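The three-element drum level control mentioned above combines three measurements: drum level, steam flow, and feedwater flow. A minimal proportional-only sketch with assumed gains (not the EBR-II tuning) looks like this:

```python
def three_element_control(level_sp, level, steam_flow, feed_flow,
                          kp_level=0.8, kp_ff=1.0, kp_trim=0.5):
    """Three-element drum level control: (1) the level error corrects a flow
    demand, (2) measured steam flow is fed forward as the base feedwater
    demand, and (3) the measured feedwater flow error trims the valve
    command. Gains are illustrative assumptions."""
    flow_demand = kp_ff * steam_flow + kp_level * (level_sp - level)
    valve_cmd = kp_trim * (flow_demand - feed_flow)
    return flow_demand, valve_cmd
```

The steam-flow feedforward is what lets the controller ride through shrink/swell transients, where the level measurement briefly moves opposite to the true inventory.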

  11. JOMAR - A model for accounting the environmental loads from building constructions

    Energy Technology Data Exchange (ETDEWEB)

    Roenning, Anne; Nereng, Guro; Vold, Mie; Bjoerberg, Svein; Lassen, Niels

    2008-07-01

The objective of this project was to develop a model for calculating the environmental profile of whole building constructions, based upon data from databases and general LCA software, in addition to the model structure from the Nordic project on LCC assessment of buildings. The model has been tested on three building constructions: timber-based, flexible and heavy, and heavy. Total energy consumption and emissions contributing to climate change are calculated in a total life-cycle perspective. The developed model and the exemplifying case assessments have shown that a holistic model including the operation phase is both important and possible to implement. The project has shown that the operation phase causes the highest environmental loads for the exemplified impact categories. A suggestion for further development of the model along two different axes, in collaboration with a broader representation from the building sector, is given in the report (author)

  12. Rock Fracture Toughness Under Mode II Loading: A Theoretical Model Based on Local Strain Energy Density

    Science.gov (United States)

    Rashidi Moghaddam, M.; Ayatollahi, M. R.; Berto, F.

    2018-01-01

    The values of mode II fracture toughness reported in the literature for several rocks are studied theoretically by using a modified criterion based on strain energy density averaged over a control volume around the crack tip. The modified criterion takes into account the effect of T-stress in addition to the singular terms of stresses/strains. The experimental results are related to mode II fracture tests performed on the semicircular bend and Brazilian disk specimens. There are good agreements between theoretical predictions using the generalized averaged strain energy density criterion and the experimental results. The theoretical results reveal that the value of mode II fracture toughness is affected by the size of control volume around the crack tip and also the magnitude and sign of T-stress.

  13. Hydrogeological modelling of the eastern region of Areco river locally detailed on Atucha I and II nuclear power plants area

    International Nuclear Information System (INIS)

    Grattone, Natalia I.; Fuentes, Nestor O.

    2009-01-01

Water flow behaviour of the Pampeano aquifer was modeled using the Visual MODFLOW software package 2.8.1 under the assumption of an unconfined (free) aquifer, within the region of the Areco river and extending to the 'Canada Honda' and 'de la Cruz' rivers. A steady-state regime was simulated, and grid refinement allowed locally detailed calculation in the area of the Atucha I and II nuclear power plants, in order to compute unsteady situations resulting from water flow variations from and to the aquifer, enabling the model to study the movement of possible contaminant particles in the hydrogeologic system. In this work the effects of the rivers' action, the recharge conditions and the flow lines are analyzed, always taking into account the range of reliability of the obtained results, considering the incidence of uncertainties introduced by the data input system and by the estimates and interpolation of the parameters used. (author)

  14. An extended two-lane car-following model accounting for inter-vehicle communication

    Science.gov (United States)

    Ou, Hui; Tang, Tie-Qiao

    2018-04-01

In this paper, we develop a novel car-following model with inter-vehicle communication to explore each vehicle's movement in a two-lane traffic system when an incident occurs on a lane. The numerical results show that the proposed model can perfectly describe each vehicle's motion when an incident occurs, i.e., no collisions occur, while the classical full velocity difference (FVD) model produces collisions on each lane, which shows that the proposed model is more reasonable. The above results can help drivers to reasonably adjust their driving behaviors when an incident occurs in a two-lane traffic system.

  15. Reconstruction of Arabidopsis metabolic network models accounting for subcellular compartmentalization and tissue-specificity.

    Science.gov (United States)

    Mintz-Oron, Shira; Meir, Sagit; Malitsky, Sergey; Ruppin, Eytan; Aharoni, Asaph; Shlomi, Tomer

    2012-01-03

    Plant metabolic engineering is commonly used in the production of functional foods and quality trait improvement. However, to date, computational model-based approaches have only been scarcely used in this important endeavor, in marked contrast to their prominent success in microbial metabolic engineering. In this study we present a computational pipeline for the reconstruction of fully compartmentalized tissue-specific models of Arabidopsis thaliana on a genome scale. This reconstruction involves automatic extraction of known biochemical reactions in Arabidopsis for both primary and secondary metabolism, automatic gap-filling, and the implementation of methods for determining subcellular localization and tissue assignment of enzymes. The reconstructed tissue models are amenable for constraint-based modeling analysis, and significantly extend upon previous model reconstructions. A set of computational validations (i.e., cross-validation tests, simulations of known metabolic functionalities) and experimental validations (comparison with experimental metabolomics datasets under various compartments and tissues) strongly testify to the predictive ability of the models. The utility of the derived models was demonstrated in the prediction of measured fluxes in metabolically engineered seed strains and the design of genetic manipulations that are expected to increase vitamin E content, a significant nutrient for human health. Overall, the reconstructed tissue models are expected to lay down the foundations for computational-based rational design of plant metabolic engineering. The reconstructed compartmentalized Arabidopsis tissue models are MIRIAM-compliant and are available upon request.
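The constraint-based analysis such tissue models are amenable to is, at its core, flux balance analysis: a linear program maximizing an objective flux subject to steady-state mass balance and flux bounds. The toy two-reaction network below is a hypothetical illustration, not part of the Arabidopsis reconstruction:

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, c, bounds):
    """Minimal flux balance analysis: maximize c.v subject to the steady-state
    constraint S v = 0 and per-reaction flux bounds. S is the stoichiometric
    matrix (metabolites x reactions); returns the optimal flux vector."""
    res = linprog(-np.asarray(c),                     # linprog minimizes, so negate
                  A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=bounds, method="highs")
    return res.x

# Toy network: an uptake reaction (v0) produces metabolite A,
# and a hypothetical 'biomass' reaction (v1) consumes it.
S = np.array([[1.0, -1.0]])
v = fba(S, c=[0.0, 1.0], bounds=[(0.0, 10.0), (0.0, None)])
```

Because steady state forces v1 = v0, the biomass flux is driven to the uptake bound of 10; design questions like "which manipulation raises vitamin E flux" are answered by re-solving such LPs with modified bounds or objectives.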

  16. Development of nonfibrotic left ventricular hypertrophy in an ANG II-induced chronic ovine hypertension model

    DEFF Research Database (Denmark)

    Klatt, Niklas; Scherschel, Katharina; Schad, Claudia

    2016-01-01

setting. Therefore, the aim of this study was to establish a minimally invasive ovine hypertension model using chronic angiotensin II (ANG II) treatment and to characterize its effects on cardiac remodeling after 8 weeks. Sheep were implanted with osmotic minipumps filled with either vehicle control (n = 7) or ANG II (n = 9) for 8 weeks. Mean arterial blood pressure in the ANG II-treated group increased from 87.4 ± 5.3 to 111.8 ± 6.9 mmHg (P = 0.00013). Cardiovascular magnetic resonance imaging showed an increase in left ventricular mass from 112 ± 12.6 g to 131 ± 18.7 g after 7 weeks (P = 0…) … any differences in epicardial conduction velocity and heterogeneity. These data demonstrate that chronic ANG II treatment using osmotic minipumps presents a reliable, minimally invasive approach to establish hypertension and nonfibrotic LVH in sheep.

  17. A Social Audit Model for Agro-biotechnology Initiatives in Developing Countries: Accounting for Ethical, Social, Cultural, and Commercialization Issues

    Directory of Open Access Journals (Sweden)

    Obidimma Ezezika

    2009-10-01

Full Text Available There is skepticism and resistance to innovations associated with agro-biotechnology projects, leading to the possibility of failure. The source of the skepticism is complex, but partly traceable to how local communities view genetically engineered crops, public perception of the technology’s implications, and views on the role of the private sector in public health and agriculture, especially in the developing world. We posit that a governance and management model in which ethical, social, cultural, and commercialization issues are accounted for and addressed is important in mitigating the risk of project failure and improving the appropriate adoption of agro-biotechnology in sub-Saharan Africa. We introduce a social audit model, which we term Ethical, Social, Cultural and Commercialization (ESC2) auditing and which we developed based on feedback from a number of stakeholders. We lay the foundation for its importance in agro-biotechnology development projects and show how the model can be applied to projects run by Public Private Partnerships. We argue that the implementation of the audit model can help to build public trust by facilitating project accountability and transparency. The model also provides evidence on how ESC2 issues are perceived by various stakeholders, which enables project managers to effectively monitor and improve project performance. Although this model was specifically designed for agro-biotechnology initiatives, we show how it can also be applied to other development projects.

  18. A constitutive model accounting for strain ageing effects on work-hardening. Application to a C-Mn steel

    Science.gov (United States)

    Ren, Sicong; Mazière, Matthieu; Forest, Samuel; Morgeneyer, Thilo F.; Rousselier, Gilles

    2017-12-01

    One of the most successful models for describing the Portevin-Le Chatelier effect in engineering applications is the Kubin-Estrin-McCormick model (KEMC). In the present work, the influence of dynamic strain ageing on dynamic recovery due to dislocation annihilation is introduced in order to improve the KEMC model. This modification accounts for additional strain hardening rate due to limited dislocation annihilation by the diffusion of solute atoms and dislocation pinning at low strain rate and/or high temperature. The parameters associated with this novel formulation are identified based on tensile tests for a C-Mn steel at seven temperatures ranging from 20 °C to 350 °C. The validity of the model and the improvement compared to existing models are tested using 2D and 3D finite element simulations of the Portevin-Le Chatelier effect in tension.

  19. Accounting for sex differences in PTSD: A multi-variable mediation model

    DEFF Research Database (Denmark)

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

ABSTRACT Background: Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used … methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually … and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N = 450) and 6 months after the robbery (T2, N = 368; 61.1% females). Mediation was examined using an analysis designed

  20. Computational Models for Nonlinear Aeroelastic Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  1. Physical Modeling for Anomaly Diagnostics and Prognostics, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Ridgetop developed an innovative, model-driven anomaly diagnostic and fault characterization system for electromechanical actuator (EMA) systems to mitigate...

  2. Model Updating Nonlinear System Identification Toolbox, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology (ZONA) proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology that utilizes flight data with...

  3. Taking individual scaling differences into account by analyzing profile data with the Mixed Assessor Model

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Schlich, Pascal; Skovgaard, Ib

    2015-01-01

    Scale range differences between individual assessors will often constitute a non-trivial part of the assessor-by-product interaction in sensory profile data (Brockhoff, 2003, 1998; Brockhoff and Skovgaard, 1994). We suggest a new mixed model ANOVA analysis approach, the Mixed Assessor Model (MAM...

  4. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization there is increased mobility, leading to a higher amount of traffic-related activity on a global scale. ...

  5. Assessing and accounting for time heterogeneity in stochastic actor oriented models

    NARCIS (Netherlands)

    Lospinoso, Joshua A.; Schweinberger, Michael; Snijders, Tom A. B.; Ripley, Ruth M.

    This paper explores time heterogeneity in stochastic actor oriented models (SAOM) proposed by Snijders (Sociological methodology. Blackwell, Boston, pp 361-395, 2001) which are meant to study the evolution of networks. SAOMs model social networks as directed graphs with nodes representing people,

  6. An individual-based model of Zebrafish population dynamics accounting for energy dynamics

    DEFF Research Database (Denmark)

    Beaudouin, Remy; Goussen, Benoit; Piccini, Benjamin

    2015-01-01

    Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget for individual zebrafish (DEB model) w...

  7. Bioeconomic Modelling of Wetlands and Waterfowl in Western Canada: Accounting for Amenity Values

    NARCIS (Netherlands)

    Kooten, van G.C.; Whitey, P.; Wong, L.

    2011-01-01

    This study reexamines and updates an original bioeconomic model of optimal duck harvest and wetland retention by Hammack and Brown (1974, Waterfowl and Wetlands: Toward Bioeconomic Analysis. Washington, DC: Resources for the Future). It then extends the model to include the nonmarket (in situ) value

  8. Accounting for correlated observations in an age-based state-space stock assessment model

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders

    2016-01-01

    Fish stock assessment models often rely on size- or age-specific observations that are assumed to be statistically independent of each other. In reality, these observations are not raw observations, but rather estimates from a catch-standardization model or similar summary statistics based...

  9. Design, development, and application of LANDIS-II, a spatial landscape simulation model with flexible temporal and spatial resolution

    Science.gov (United States)

    Robert M. Scheller; James B. Domingo; Brian R. Sturtevant; Jeremy S. Williams; Arnold Rudy; Eric J. Gustafson; David J. Mladenoff

    2007-01-01

    We introduce LANDIS-II, a landscape model designed to simulate forest succession and disturbances. LANDIS-II builds upon and preserves the functionality of previous LANDIS forest landscape simulation models. LANDIS-II is distinguished by the inclusion of variable time steps for different ecological processes; our use of a rigorous development and testing process used...

  10. Two-Higgs-doublet model of type II confronted with the LHC run I and run II data

    Science.gov (United States)

    Wang, Lei; Zhang, Feng; Han, Xiao-Fang

    2017-06-01

    We examine the parameter space of the two-Higgs-doublet model of type II after imposing the relevant theoretical and experimental constraints from the precision electroweak data, B-meson decays, and the LHC run I and run II data. We find that the searches for Higgs bosons via the τ+τ-, WW, ZZ, γγ, hh, hZ, HZ, and AZ channels can give strong constraints on the CP-odd Higgs A and the heavy CP-even Higgs H, and the parameter space excluded by each channel is carved out in detail assuming that either mA or mH is fixed to 600 or 700 GeV in the scans. The surviving samples are discussed in two different regions. (i) In the standard-model-like coupling region of the 125 GeV Higgs, mA is allowed to be as low as 350 GeV, and a strong upper limit is imposed on tan β. mH is allowed to be as low as 200 GeV for appropriate values of tan β, sin(β-α), and mA, but is required to be larger than 300 GeV for mA = 700 GeV. (ii) In the wrong-sign Yukawa coupling region of the 125 GeV Higgs, the bb̄ → A/H → τ+τ- channel can impose upper limits on tan β and sin(β-α), and the A → hZ channel can give lower limits on tan β and sin(β-α). mA and mH are allowed to be as low as 60 and 200 GeV, respectively, but 320 GeV

  11. A Nonlinear Transmission Line Model of the Cochlea With Temporal Integration Accounts for Duration Effects in Threshold Fine Structure

    DEFF Research Database (Denmark)

    Verhey, Jesko L.; Mauermann, Manfred; Epp, Bastian

    2017-01-01

    For normal-hearing listeners, auditory pure-tone thresholds in quiet often show quasi-periodic fluctuations when measured with a high frequency resolution, referred to as threshold fine structure. Threshold fine structure is dependent on the stimulus duration, with smaller fluctuations for short than for long signals. The present study demonstrates how this effect can be captured by a nonlinear and active model of the cochlea in combination with a temporal integration stage. Since this cochlear model also accounts for fine structure and connected level-dependent effects, it is superior...

  12. Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite

    Science.gov (United States)

    Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-12-01

    Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption onto minerals (e.g. diatomite) is an important means of controlling aqueous environmental pollution. Thus, it is essential to understand the surface adsorptive behavior and mechanism. In this work, the apparent surface complexation reaction equilibrium constants of Pb(II) on the calcined diatomite and the distributions of Pb(II) surface species were investigated through modeling calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and of ionic strength (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) can be well described by the Freundlich isotherm model. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using the PEST 13.0 and PHREEQC 3.1.2 codes, and there is good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite calculated with the PHREEQC 3.1.2 program indicates that impurity cations (e.g. Al3+, Fe3+, etc.) in the diatomite play a leading role in the Pb(II) adsorption, and that dominant formation of complexes together with additional electrostatic interaction are the main adsorption mechanisms of Pb(II) on the diatomite under weakly acidic conditions.
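The Freundlich fit reported in this record has the simple closed form qe = Kf·Ce^(1/n); a minimal sketch of it, with illustrative parameters (not the fitted values from the study):

```python
def freundlich(ce, kf, n):
    """Freundlich isotherm: qe = Kf * Ce**(1/n).
    ce -- equilibrium concentration (mg/L)
    kf -- Freundlich capacity constant
    n  -- heterogeneity exponent (n > 1 indicates favorable adsorption)
    """
    return kf * ce ** (1.0 / n)

# Illustrative (assumed) parameters -- not fitted values from the record
concentrations = [1.0, 5.0, 10.0, 20.0]               # equilibrium Pb(II), mg/L
uptakes = [freundlich(c, kf=3.2, n=2.0) for c in concentrations]
print(uptakes)  # adsorbed amount per gram of sorbent, increasing with Ce
```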

  13. The Adsorption of Cd(II) on Manganese Oxide Investigated by Batch and Modeling Techniques

    Directory of Open Access Journals (Sweden)

    Xiaoming Huang

    2017-09-01

    Manganese (Mn) oxide is a ubiquitous metal oxide in sub-environments. The adsorption of Cd(II) on Mn oxide as a function of adsorption time, pH, ionic strength, temperature, and initial Cd(II) concentration was investigated by batch techniques. The adsorption kinetics showed that the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by a pseudo-second-order kinetic model with high correlation coefficients (R² > 0.999). The adsorption of Cd(II) on Mn oxide significantly decreased with increasing ionic strength at pH < 5.0, whereas Cd(II) adsorption was independent of ionic strength at pH > 6.0, which indicated that outer-sphere and inner-sphere surface complexation dominated the adsorption of Cd(II) on Mn oxide at pH < 5.0 and pH > 6.0, respectively. The maximum adsorption capacity of Mn oxide for Cd(II) calculated from the Langmuir model was 104.17 mg/g at pH 6.0 and 298 K. The thermodynamic parameters showed that the adsorption of Cd(II) on Mn oxide was an endothermic and spontaneous process. According to the results of surface complexation modeling, the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by ion exchange sites (X2Cd) at low pH and inner-sphere surface complexation sites (SOCd+ and SO2CdOH− species) at high pH. The findings presented herein play an important role in understanding the fate and transport of heavy metals at the water–mineral interface.
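The pseudo-second-order kinetic model mentioned in this record has a closed-form uptake curve; a minimal sketch, where the rate constant is an assumption and the equilibrium capacity reuses the Langmuir figure quoted above purely for illustration:

```python
def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order uptake q(t) = k2*qe^2*t / (1 + k2*qe*t),
    which approaches the equilibrium capacity qe as t grows."""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)

QE = 104.17   # mg/g, the Langmuir capacity quoted in the record (illustrative reuse)
K2 = 0.005    # g/(mg*min), assumed rate constant

for t in (1, 10, 60, 600):   # contact time in minutes
    print(t, round(pseudo_second_order(t, QE, K2), 2))
```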

  14. [Collaboration among health professionals (II). Usefulness of a model].

    Science.gov (United States)

    D'Amour, Danielle; San Martín Rodríguez, Leticia

    2006-09-01

    This second article provides a model which helps one to better understand the process of collaboration by interprofessional teams and makes it possible to evaluate the quality of that collaboration. To this end, the authors first present a structural model of interprofessional collaboration, followed by a typology of collaboration derived from the functionality of the model. The model is composed of four interrelated dimensions, whose functionality gives rise to a typology of collaboration at three intensities: collaboration in action, in construction, and in inertia. The model and the typology constitute a useful tool for managers and for health professionals, since they help to better understand, manage and develop collaboration among the various professionals within a single organization as well as among those belonging to different organizations.

  15. Filament winding cylinders. II - Validation of the process model

    Science.gov (United States)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.

  16. An improved car-following model accounting for the preceding car's taillight

    Science.gov (United States)

    Zhang, Jian; Tang, Tie-Qiao; Yu, Shao-Wei

    2018-02-01

    During the deceleration process, the preceding car's taillight may influence the following car's driving behavior. In this paper, we propose an extended car-following model that takes the preceding car's taillight into consideration. Two typical situations are used to simulate each car's movement and to study the effects of the preceding car's taillight on driving behavior. Meanwhile, a sensitivity analysis of the model parameter is discussed in detail. The numerical results show that the proposed model can improve the stability of traffic flow, and that traffic safety can be enhanced without a decrease in efficiency, especially when cars pass through a signalized intersection.

  17. An agent-based simulation model of patient choice of health care providers in accountable care organizations.

    Science.gov (United States)

    Alibrahim, Abdullah; Wu, Shinyi

    2018-03-01

    Accountable care organizations (ACO) in the United States show promise in controlling health care costs while preserving patients' choice of providers. Understanding the effects of patient choice is critical in novel payment and delivery models like ACO that depend on continuity of care and accountability. The financial, utilization, and behavioral implications associated with a patient's decision to forego local health care providers for more distant ones to access higher quality care remain unknown. To study this question, we used an agent-based simulation model of a health care market composed of providers able to form ACO serving patients and embedded it in a conditional logit decision model to examine patients capable of choosing their care providers. This simulation focuses on Medicare beneficiaries and their congestive heart failure (CHF) outcomes. We place the patient agents in an ACO delivery system model in which provider agents decide if they remain in an ACO and perform a quality improving CHF disease management intervention. Illustrative results show that allowing patients to choose their providers reduces the yearly payment per CHF patient by $320, reduces mortality rates by 0.12 percentage points and hospitalization rates by 0.44 percentage points, and marginally increases provider participation in ACO. This study demonstrates a model capable of quantifying the effects of patient choice in a theoretical ACO system and provides a potential tool for policymakers to understand implications of patient choice and assess potential policy controls.
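The conditional logit decision model embedded in the simulation above turns provider utilities into choice probabilities; a minimal sketch with hypothetical utilities (the distance and quality coefficients of the actual study are not reproduced here):

```python
import math

def conditional_logit_probs(utilities):
    """Probability of choosing each provider under a conditional logit:
    P_j = exp(V_j) / sum_k exp(V_k), with max-subtraction for numerical stability."""
    m = max(utilities)
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical systematic utilities for three providers
# (e.g., combining distance and quality terms)
probs = conditional_logit_probs([1.0, 0.5, -0.2])
print(probs)  # probabilities sum to 1; higher utility -> higher choice probability
```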

  18. Accounting for partial sleep deprivation and cumulative sleepiness in the Three-Process Model of alertness regulation.

    Science.gov (United States)

    Akerstedt, Torbjörn; Ingre, Michael; Kecklund, Göran; Folkard, Simon; Axelsson, John

    2008-04-01

    Mathematical models designed to predict alertness or performance have been developed primarily as tools for evaluating work and/or sleep-wake schedules that deviate from the traditional daytime orientation. In general, these models cope well with the acute changes resulting from an abnormal sleep but have difficulties handling sleep restriction across longer periods. The reason is that the function representing recovery is too steep--usually exponentially so--and with increasing sleep loss, the steepness increases, resulting in too rapid recovery. The present study focused on refining the Three-Process Model of alertness regulation. We used an experiment with 4 h of sleep/night (nine participants) that included subjective self-ratings of sleepiness every hour. To evaluate the model at the individual subject level, a set of mixed-effect regression analyses were performed using subjective sleepiness as the dependent variable. These mixed models estimate a fixed effect (group mean) and a random effect that accounts for heterogeneity between participants in the overall level of sleepiness (i.e., a random intercept). Using this technique, a point was sought on the exponential recovery function that would explain maximum variance in subjective sleepiness by switching to a linear function. The resulting point explaining the highest amount of variance was 12.2 on the 1-21 unit scale. It was concluded that the accumulation of sleep loss effects on subjective sleepiness may be accounted for by making the recovery function linear below a certain point on the otherwise exponential function.
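The switch from an exponential to a linear recovery function below the breakpoint of 12.2 can be sketched as follows; the time constant, asymptote, and starting level are illustrative assumptions, not the fitted Three-Process Model parameters:

```python
import math

BREAKPOINT = 12.2   # switch point on the 1-21 sleepiness scale (from the record)

def recovery(t, s0=21.0, asymptote=1.0, tau=4.0):
    """Sleepiness after t hours of recovery: exponential decay toward the
    asymptote until the breakpoint, then linear with the slope at the switch."""
    s_exp = asymptote + (s0 - asymptote) * math.exp(-t / tau)
    if s_exp >= BREAKPOINT:
        return s_exp
    # time at which the exponential reaches the breakpoint
    t_star = -tau * math.log((BREAKPOINT - asymptote) / (s0 - asymptote))
    slope = -(BREAKPOINT - asymptote) / tau   # derivative of the exponential at t_star
    return BREAKPOINT + slope * (t - t_star)
```

The linear continuation slows the modeled recovery at low sleepiness levels, which is the mechanism the study uses to keep cumulative sleep-loss effects from being erased too quickly.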

  19. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    Science.gov (United States)

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. Goal-directed behaviors, however, are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
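The model-free (habitual) component described above learns action values by trial and error; a minimal tabular Q-learning sketch on a hypothetical two-choice probabilistic task (the reward probabilities, learning rate, and exploration rate are all assumptions):

```python
import random

ALPHA = 0.1            # learning rate (assumed)
REWARD_P = [0.8, 0.2]  # reward probability of each action (assumed task)

random.seed(1)
q = [0.0, 0.0]         # action values, learned without any model of the task
for _ in range(2000):
    # epsilon-greedy action selection (10% exploration)
    a = random.randrange(2) if random.random() < 0.1 else q.index(max(q))
    r = 1.0 if random.random() < REWARD_P[a] else 0.0
    q[a] += ALPHA * (r - q[a])   # trial-and-error value update

print(q)  # q[0] should approach 0.8 and q[1] approach 0.2
```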

  20. Toward a formalized account of attitudes: The Causal Attitude Network (CAN) Model

    NARCIS (Netherlands)

    Dalege, J.; Borsboom, D.; Harreveld, F. van; Berg, H. van den; Conner, M.; Maas, H.L.J. van der

    2016-01-01

    This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions

  1. Conceptual Modeling in the Time of the Revolution: Part II

    Science.gov (United States)

    Mylopoulos, John

    Conceptual Modeling was a marginal research topic at the very fringes of Computer Science in the 60s and 70s, when the discipline was dominated by topics focusing on programs, systems and hardware architectures. Over the years, however, the field has moved to centre stage and has come to claim a central role both in Computer Science research and practice in diverse areas, such as Software Engineering, Databases, Information Systems, the Semantic Web, Business Process Management, Service-Oriented Computing, Multi-Agent Systems, Knowledge Management, and more. The transformation was greatly aided by the adoption of standards in modeling languages (e.g., UML), and model-based methodologies (e.g., Model-Driven Architectures) by the Object Management Group (OMG) and other standards organizations. We briefly review the history of the field over the past 40 years, focusing on the evolution of key ideas. We then note some open challenges and report on-going research, covering topics such as the representation of variability in conceptual models, capturing model intentions, and models of laws.

  2. Mechanistic Physiologically Based Pharmacokinetic (PBPK) Model of the Heart Accounting for Inter-Individual Variability: Development and Performance Verification.

    Science.gov (United States)

    Tylutki, Zofia; Mendyk, Aleksander; Polak, Sebastian

    2018-04-01

    Modern model-based approaches to cardiac safety and efficacy assessment require accurate drug concentration-effect relationship establishment. Thus, knowledge of the active concentration of drugs in heart tissue is desirable along with inter-subject variability influence estimation. To that end, we developed a mechanistic physiologically based pharmacokinetic model of the heart. The models were described with literature-derived parameters and written in R, v.3.4.0. Five parameters were estimated. The model was fitted to amitriptyline and nortriptyline concentrations after an intravenous infusion of amitriptyline. The cardiac model consisted of 5 compartments representing the pericardial fluid, heart extracellular water, and epicardial intracellular, midmyocardial intracellular, and endocardial intracellular fluids. Drug cardiac metabolism, passive diffusion, active efflux, and uptake were included in the model as mechanisms involved in the drug disposition within the heart. The model accounted for inter-individual variability. The estimates of optimized parameters were within physiological ranges. The model performance was verified by simulating 5 clinical studies of amitriptyline intravenous infusion, and the simulated pharmacokinetic profiles agreed with clinical data. The results support the model feasibility. The proposed structure can be tested with the goal of improving the patient-specific model-based cardiac safety assessment and offers a framework for predicting cardiac concentrations of various xenobiotics. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  3. Accounting for scattering in the Landauer-Datta-Lundstrom transport model

    Directory of Open Access Journals (Sweden)

    Юрій Олексійович Кругляк

    2015-03-01

    Scattering of carriers in the LDL transport model is considered qualitatively, with attention to the changes of the scattering times in the collision processes. The basic relationship between the transmission coefficient T and the average mean free path λ is derived for a 1D conductor. As an example, experimental data for a Si MOSFET are analyzed with the use of various models of reliability.
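In the Landauer-Datta-Lundstrom picture, the T-λ relationship referred to in this record takes the form T = λ/(λ + L) for a 1D conductor of length L; a minimal sketch:

```python
def transmission(mfp, length):
    """Transmission coefficient of a 1D conductor in the LDL picture:
    T = lambda / (lambda + L), where lambda is the mean free path."""
    return mfp / (mfp + length)

# Ballistic limit (L << lambda): T -> 1; diffusive limit (L >> lambda): T -> lambda/L
print(transmission(40e-9, 1e-9))     # near-ballistic channel
print(transmission(40e-9, 4000e-9))  # diffusive channel
```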

  4. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    Science.gov (United States)

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-03

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined--US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel--95% of the results occurred within ±20 g CO2e MJ(-1) of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
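The kind of parametric uncertainty propagation described here can be illustrated with a toy Monte Carlo; the response function and parameter distributions below are assumptions for demonstration, not the study's paired economic and emissions models:

```python
import random
import statistics

# Toy Monte Carlo propagation of parameter uncertainty through a stylized
# ILUC emissions calculation. The response function and lognormal spreads
# are illustrative assumptions only.
random.seed(42)

def iluc_intensity(yield_elasticity, new_land_productivity):
    # Stylized response: a weaker yield response and less productive newly
    # converted land both push emissions intensity up (g CO2e per MJ).
    return 30.0 / yield_elasticity / new_land_productivity

draws = []
for _ in range(10_000):
    ye = random.lognormvariate(0.0, 0.25)    # yield-related parameter
    nlp = random.lognormvariate(0.0, 0.25)   # productivity of converted land
    draws.append(iluc_intensity(ye, nlp))

mean = statistics.fmean(draws)
cv = statistics.stdev(draws) / mean          # coefficient of variation
print(f"mean = {mean:.1f} g CO2e/MJ, CV = {cv:.0%}")
```

Ranking the draws by each input's contribution to the output variance is how a study like this one identifies yield and new-land productivity as the dominant uncertainty drivers.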

  5. An extended continuum model accounting for the driver's timid and aggressive attributions

    International Nuclear Information System (INIS)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-01-01

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. Applying the linear stability theory, we present an analysis of the new model's linear stability. Through nonlinear analysis, the KdV–Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving is better than timid driving because the aggressive driver adjusts his speed in time according to the leading car's speed. The key improvement of this new model is that it captures how timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship of energy consumption between aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption. - Highlights: • A new continuum model is developed with the consideration of the driver's timid and aggressive behaviors simultaneously. • Applying the linear stability theory, the new model's linear stability is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.

  6. An extended continuum model accounting for the driver's timid and aggressive attributions

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Rongjun; Ge, Hongxia [Faculty of Maritime and Transportation, Ningbo University, Ningbo 315211 (China); Jiangsu Province Collaborative Innovation Center for Modern Urban Traffic Technologies, Nanjing 210096 (China); National Traffic Management Engineering and Technology Research Centre Ningbo University Sub-centre, Ningbo 315211 (China); Wang, Jufeng, E-mail: wjf@nit.zju.edu.cn [Ningbo Institute of Technology, Zhejiang University, Ningbo 315100 (China)

    2017-04-18

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. Applying the linear stability theory, we present an analysis of the new model's linear stability. Through nonlinear analysis, the KdV–Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving is better than timid driving because the aggressive driver adjusts his speed in time according to the leading car's speed. The key improvement of this new model is that it captures how timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship of energy consumption between aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption. - Highlights: • A new continuum model is developed with the consideration of the driver's timid and aggressive behaviors simultaneously. • Applying the linear stability theory, the new model's linear stability is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.

  7. Operational Modelling of the Aerospace Propagation Environment. Volume II

    Science.gov (United States)

    1978-11-01

    Radiative transfer models are rarely available in a battlefield environment; only secondary ECNET parameters may be available. Hence, current modeling and... The approach adopted in the processing was to replace each value X by its rank; that is, Zj is replaced by the number of samples of X that... a statistical model relevant to those areas. If the user chooses a terrain type from the list given above, a statistical 'irregular terrain'...

  8. Microscopic Analysis and Modeling of Airport Surface Sequencing, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Although a number of airportal surface models exist and have been successfully used for analysis of airportal operations, only recently has it become possible to...

  9. Fixed site neutralization model programmer's manual. Volume II

    International Nuclear Information System (INIS)

    Engi, D.; Chapman, L.D.; Judnick, W.; Blum, R.; Broegler, L.; Lenz, J.; Weinthraub, A.; Ballard, D.

    1979-12-01

    This report relates to protection of nuclear materials at nuclear facilities. This volume presents the source listings for the Fixed Site Neutralization Model and its supporting modules, the Plex Preprocessor and the Data Preprocessor

  10. Carbon footprint estimator, phase II : volume I - GASCAP model.

    Science.gov (United States)

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG : emissions associated with the construction and maintenance of transportation projects. This phase : of development included techniques for estimating emiss...

  11. Integrated Visualization Environment for Science Mission Modeling, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA is emphasizing the use of larger, more integrated models in conjunction with systems engineering tools and decision support systems. These tools place a...

  12. Physics-Based Pneumatic Hammer Instability Model, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to develop a physics-based pneumatic hammer instability model that accurately predicts the stability of hydrostatic bearings...

  13. Supersymmetric standard model from the heterotic string (II)

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, W. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Hamaguchi, K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Tokyo Univ. (Japan). Dept. of Physics; Lebedev, O.; Ratz, M. [Bonn Univ. (Germany). Physikalisches Inst.

    2006-06-15

    We describe in detail a Z₆ orbifold compactification of the heterotic E₈ × E₈ string which leads to the (supersymmetric) standard model gauge group and matter content. The quarks and leptons appear as three 16-plets of SO(10), two of which are localized at fixed points with local SO(10) symmetry. The model has supersymmetric vacua without exotics at low energies and is consistent with gauge coupling unification. Supersymmetry can be broken via gaugino condensation in the hidden sector. The model has a large vacuum degeneracy. Certain vacua with approximate B-L symmetry have attractive phenomenological features. The top quark Yukawa coupling arises from gauge interactions and is of the order of the gauge couplings. The other Yukawa couplings are suppressed by powers of standard model singlet fields, similarly to the Froggatt-Nielsen mechanism.

  14. A Critical Examination of the Models Proposed to Account for Baryon-Antibaryon Segregation Following the Quark-Hadron Transition

    Science.gov (United States)

    Garfinkle, Moishe

    2015-04-01

    The major concern of the Standard Cosmological Model (SCM) is to account for the continuing existence of the universe in spite of the Standard Particle Model (SPM). According to the SPM, below the quark-hadron temperature (~150 ± 50 MeV) the rate of baryon-antibaryon pair creation from γ radiation is in equilibrium with the rate of pair annihilation. At freeze-out (~20 ± 10 MeV) pair creation ceases; henceforth only annihilation occurs below this temperature, resulting in a terminal pair ratio B⁺/γ = B⁻/γ ~ 10⁻¹⁸, insufficient to account for the present universe, which would require a pair ratio of at least B⁺/γ = B⁻/γ ~ 10⁻¹⁰. The present universe could not exist according to the SPM unless a mechanism segregated baryons from antibaryons before freeze-out. The SPM can be tweaked to accommodate the first two conditions, but all of the mechanisms proposed over the past sixty years for the third condition have failed: all baryon-number excursions devised were found to be reversible. It is the examination of these possible mechanisms that is the subject of this work.

  15. Artificial neural network (ANN) approach for modeling Zn(II) adsorption in batch process

    Energy Technology Data Exchange (ETDEWEB)

    Yildiz, Sayiter [Engineering Faculty, Cumhuriyet University, Sivas (Turkey)]

    2017-09-15

    Artificial neural networks (ANN) were applied to predict the adsorption efficiency of peanut shells for the removal of Zn(II) ions from aqueous solutions. The effects of initial pH, Zn(II) concentration, temperature, contact duration, and adsorbent dosage were determined in batch experiments. The sorption capacities of the sorbents were predicted with the aid of equilibrium and kinetic models. Zn(II) adsorption onto peanut shell was better defined by the pseudo-second-order kinetic model for both initial pH and temperature. The highest R² value in the isotherm studies was obtained from the Freundlich isotherm for the inlet concentration and from the Temkin isotherm for the sorbent amount. The high R² values show that modeling the adsorption process with an ANN is a satisfactory approach. The experimental results and the results predicted by the ANN model were found to be highly compatible with each other.

  16. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards have witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed

  17. Accounting outsourcing

    OpenAIRE

    Klečacká, Tereza

    2009-01-01

    This thesis gives a comprehensive view of accounting outsourcing, covering the outsourcing process from its beginning (conditions of collaboration, drafting of the contract), through the collaboration itself, to its possible ending. The work defines outsourcing and outlines the main advantages, disadvantages, and arguments for its use. The main focus of the thesis is the practical side of accounting outsourcing and the provision of high-quality accounting services.

  18. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.

  19. Mathematical modeling of Fe(II), Cu(II), Ni(II) and Zn(II) removal in a horizontal rotating tubular bioreactor.

    Science.gov (United States)

    Rezić, Tonči; Zeiner, Michaela; Santek, Božidar; Novak, Srđan

    2011-11-01

    Industrial wastewaters polluted with toxic heavy metals are a serious ecological and environmental problem. In this study, the removal of multiple heavy metals (Fe²⁺, Cu²⁺, Ni²⁺ and Zn²⁺) by a mixed microbial culture was therefore examined in a horizontal rotating tubular bioreactor (HRTB) under different combinations of process parameters. Hydrodynamic conditions and biomass sorption capacity have the main impact on the removal efficiency of the heavy metals: Fe²⁺ 95.5-79.0%, Ni²⁺ 92.7-54.8%, Cu²⁺ 87.7-54.9% and Zn²⁺ 81.8-38.1%. On the basis of the experimental results, an integral mathematical model of heavy-metal removal in the HRTB was established. It combines hydrodynamics (mixing), mass transfer and kinetics to describe bioprocess conduct in the HRTB. Mixing in the HRTB was described by a structured cascade model, and metal-ion removal by two combined diffusion-adsorption models. For the Langmuir model, the average variances between experimental and simulated metal-ion concentrations were in the range of 1.22-10.99 × 10⁻³, and for the Freundlich model 0.12-3.98 × 10⁻³. It is thus clear that the integral bioprocess model with the Freundlich isotherm predicts the metal-ion concentrations in the HRTB more accurately. Furthermore, the results also indicate that the established model is both accurate and robust, and therefore has great potential for use in scale-up procedures.
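    The Langmuir and Freundlich adsorption models compared in this abstract can both be fitted by their classical linearizations (a minimal sketch with invented equilibrium data; the authors' integral HRTB model couples these isotherms to hydrodynamics and is not reproduced here):

```python
import math

def linreg(x, y):
    """Ordinary least squares for y = a + b*x (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - b * mx, b

# Invented equilibrium data: concentration C (mg/L) vs uptake q (mg/g)
C = [5.0, 10.0, 20.0, 40.0, 80.0]
q = [8.1, 13.0, 18.6, 23.5, 26.9]

# Langmuir q = qmax*K*C/(1+K*C): linearize as C/q = 1/(qmax*K) + C/qmax
a, b = linreg(C, [c / qi for c, qi in zip(C, q)])
qmax, K = 1 / b, b / a

# Freundlich q = kF*C**(1/n): linearize as ln q = ln kF + (1/n) ln C
a2, b2 = linreg([math.log(c) for c in C], [math.log(qi) for qi in q])
kF, n_inv = math.exp(a2), b2

# Mean squared deviation between data and each fitted isotherm
lang = [qmax * K * c / (1 + K * c) for c in C]
freu = [kF * c ** n_inv for c in C]
var_lang = sum((qi - si) ** 2 for qi, si in zip(q, lang)) / len(q)
var_freu = sum((qi - si) ** 2 for qi, si in zip(q, freu)) / len(q)
```

    Comparing `var_lang` and `var_freu` on real equilibrium data is the same kind of variance comparison the abstract reports for the two isotherms.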

  20. Accounting for misclassified outcomes in binary regression models using multiple imputation with internal validation data.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Troester, Melissa A; Richardson, David B

    2013-05-01

    Outcome misclassification is widespread in epidemiology, but methods to account for it are rarely used. We describe the use of multiple imputation to reduce bias when validation data are available for a subgroup of study participants. This approach is illustrated using data from 308 participants in the multicenter Herpetic Eye Disease Study between 1992 and 1998 (48% female; 85% white; median age, 49 years). The odds ratio comparing the acyclovir group with the placebo group on the gold-standard outcome (physician-diagnosed herpes simplex virus recurrence) was 0.62 (95% confidence interval (CI): 0.35, 1.09). We masked ourselves to physician diagnosis except for a 30% validation subgroup used to compare methods. Multiple imputation (odds ratio (OR) = 0.60; 95% CI: 0.24, 1.51) was compared with naive analysis using self-reported outcomes (OR = 0.90; 95% CI: 0.47, 1.73), analysis restricted to the validation subgroup (OR = 0.57; 95% CI: 0.20, 1.59), and direct maximum likelihood (OR = 0.62; 95% CI: 0.26, 1.53). In simulations, multiple imputation and direct maximum likelihood had greater statistical power than did analysis restricted to the validation subgroup, yet all 3 provided unbiased estimates of the odds ratio. The multiple-imputation approach was extended to estimate risk ratios using log-binomial regression. Multiple imputation has advantages regarding flexibility and ease of implementation for epidemiologists familiar with missing data methods.
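    The imputation strategy described here can be sketched on simulated data (all numbers are illustrative, and the imputation model below is a simplified one that ignores uncertainty in its own parameters, unlike fully proper multiple imputation): the gold-standard outcome is kept only in a 30% validation subgroup, imputed elsewhere from the self-report, and the per-imputation odds ratios are pooled with Rubin's rules:

```python
import math, random

random.seed(7)

def log_or(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table (0.5 continuity correction)."""
    a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return math.log(a * d / (b * c)), 1/a + 1/b + 1/c + 1/d

# Simulate: exposure x, gold-standard outcome t, misclassified self-report s,
# and a 30% validation subgroup v in which t is observed (illustrative numbers)
data = []
for _ in range(4000):
    x = random.random() < 0.5
    t = random.random() < (0.30 if x else 0.15)   # true OR about 2.4
    s = random.random() < (0.85 if t else 0.10)   # imperfect self-report
    v = random.random() < 0.30
    data.append((x, t, s, v))

# Imputation model: P(t = 1 | x, s), estimated in the validation subgroup only
p_t = {}
for x in (0, 1):
    for s in (0, 1):
        cell = [t for (xi, t, si, v) in data if v and xi == x and si == s]
        p_t[(x, s)] = (sum(cell) + 0.5) / (len(cell) + 1.0)

# Multiple imputation: impute t where unvalidated, then pool with Rubin's rules
M, logors, variances = 20, [], []
for _ in range(M):
    tab = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for (x, t, s, v) in data:
        ti = t if v else (random.random() < p_t[(x, s)])
        tab[(int(x), int(ti))] += 1
    lo, va = log_or(tab[(1, 1)], tab[(1, 0)], tab[(0, 1)], tab[(0, 0)])
    logors.append(lo)
    variances.append(va)

qbar = sum(logors) / M                          # pooled log odds ratio
within = sum(variances) / M
between = sum((q - qbar) ** 2 for q in logors) / (M - 1)
total_var = within + (1 + 1/M) * between        # Rubin's total variance
or_mi = math.exp(qbar)
```

    The pooled odds ratio recovers roughly the value used to simulate the data, while the naive self-report analysis would be attenuated toward the null, mirroring the contrast the abstract reports.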

  1. Unsupervised machine learning account of magnetic transitions in the Hubbard model

    Science.gov (United States)

    Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan

    2018-01-01

    We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
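    The paper's central idea, that a low-dimensional embedding of raw spin configurations tracks a physical order parameter, can be imitated with a far simpler method: the leading principal component of independent-spin "configurations" (a crude stand-in for Monte Carlo samples; no Ising dynamics, autoencoder, or t-SNE here) correlates almost perfectly with the magnetization:

```python
import random

random.seed(0)
N, M = 36, 300   # spins per configuration, number of configurations

def config(p_up):
    """A configuration of independent spins with P(spin = +1) = p_up
    (illustrative stand-in for Monte Carlo samples)."""
    return [1 if random.random() < p_up else -1 for _ in range(N)]

# Ordered (low-temperature-like) and disordered (high-temperature-like) samples
X = ([config(0.9) for _ in range(M // 3)] +
     [config(0.1) for _ in range(M // 3)] +
     [config(0.5) for _ in range(M // 3)])

# Centre the data and form the covariance matrix
means = [sum(x[j] for x in X) / len(X) for j in range(N)]
Xc = [[x[j] - means[j] for j in range(N)] for x in X]
cov = [[sum(r[a] * r[b] for r in Xc) / len(Xc) for b in range(N)]
       for a in range(N)]

# Leading principal component via power iteration
v = [1.0] * N
for _ in range(100):
    w = [sum(cov[a][b] * v[b] for b in range(N)) for a in range(N)]
    norm = sum(wi * wi for wi in w) ** 0.5
    v = [wi / norm for wi in w]

# The projection onto the first PC tracks the magnetization per sample
proj = [sum(vi * xi for vi, xi in zip(v, r)) for r in Xc]
mag = [sum(x) / N for x in X]
corr = (sum(p * m for p, m in zip(proj, mag)) /
        ((sum(p * p for p in proj) * sum(m * m for m in mag)) ** 0.5))
```

    The sign of the principal component is arbitrary, so only |corr| is meaningful; the paper's t-SNE indicator plays an analogous role for the quantum model, where such a linear projection fails.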

  2. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    OpenAIRE

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts, allowing the inclusion of a broader set of ecosystem service types such as regulating services and cultural services. Consistent with the principles of national accounts, ecosystem accounting focuses on asse...

  3. A comparison of land use change accounting methods: seeking common grounds for key modeling choices in biofuel assessments

    DEFF Research Database (Denmark)

    de Bikuna Salinas, Koldo Saez; Hamelin, Lorie; Hauschild, Michael Zwicky

    2018-01-01

    Five currently used methods to account for the global warming (GW) impact of the induced land-use change (LUC) greenhouse gas (GHG) emissions have been applied to four biofuel case studies. Two of the investigated methods attempt to avoid the need to consider a definite occupation -thus...... amortization period by considering ongoing LUC trends as a dynamic baseline. This leads to the accounting of a small fraction (0.8%) of the related emissions from the assessed LUC, thus their validity is disputed. The comparison of methods and contrasting case studies illustrated the need for clearly...... distinguishing between the different time horizons involved in life cycle assessments (LCA) of land-demanding products like biofuels. Absent in ISO standards, and giving rise to several confusions, definitions for the following time horizons have been proposed: technological scope, inventory model, impact...
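    The "amortization period" choice discussed in this record can be made concrete with straight-line accounting of a one-off land-use-change carbon debt (all numbers below are invented for illustration, not taken from the study):

```python
# Straight-line amortization of a one-off land-use-change (LUC) carbon debt
# over a chosen accounting period (all numbers invented for illustration)
luc_debt_t_co2_per_ha = 150.0     # emissions released at conversion, t CO2/ha
amortization_years = 20.0         # chosen accounting (amortization) period
fuel_yield_gj_per_ha_yr = 60.0    # annual biofuel energy yield

annual_luc = luc_debt_t_co2_per_ha / amortization_years        # t CO2/ha/yr
luc_intensity = annual_luc * 1000.0 / fuel_yield_gj_per_ha_yr  # kg CO2/GJ fuel
```

    Halving the amortization period doubles the LUC intensity attributed to every unit of fuel, which is exactly why the choice of time horizon dominates the comparison of accounting methods.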

  4. Model of Environmental Development of the Urbanized Areas: Accounting of Ecological and other Factors

    Science.gov (United States)

    Abanina, E. N.; Pandakov, K. G.; Agapov, D. A.; Sorokina, Yu V.; Vasiliev, E. H.

    2017-05-01

    Modern cities and towns are often characterized by poor administration, which can cause environmental degradation, growing poverty, declining economic growth and social isolation. In these circumstances it is important to conduct fresh research into new ways of achieving sustainable development of administrative districts. Such development of urban areas depends on many interdependent factors: ecological, economic and social. In this article we present some theoretical aspects of forming a model of environmental progress of urbanized areas. We propose a model containing four levels, covering the natural resource capacities of the territory, its social features, economic growth and human impact, and we describe the interrelations of the model's elements. A program of environmental development of a city is offered that could be used in any urban area.

  5. Mathematical modeling of pigment dispersion taking into account the full agglomerate particle size distribution

    DEFF Research Database (Denmark)

    Kiil, Søren

    2017-01-01

    The only adjustable parameter used was an apparent rate constant for the linear agglomerate erosion rate. Model simulations, at selected values of time, for the full agglomerate particle size distribution were in good qualitative agreement with the measured values. A quantitative match of the experimental...... particle size distribution was simulated. Data from two previous experimental investigations were used for model validation. The first concerns two different yellow organic pigments dispersed in nitrocellulose/ethanol vehicles in a ball mill and the second a red organic pigment dispersed in a solvent...... particle size distributions could be obtained using time-dependent fragment distributions, but this resulted in a very slight improvement in the simulated transient mean diameter only. The model provides a mechanistic understanding of the agglomerate breakage process that can be used, e...

  6. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    Energy Technology Data Exchange (ETDEWEB)

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axi-symmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  7. Discriminating neutrino mass models using Type-II see-saw formula

    Indian Academy of Sciences (India)

    An attempt has been made to discriminate theoretically the three possible patterns of neutrino mass models, viz., degenerate, inverted hierarchical and normal hierarchical models, within the framework of the Type-II see-saw formula. From detailed numerical analysis we are able to arrive at a conclusion that the inverted ...

  8. Programming Models for Three-Dimensional Hydrodynamics on the CM-5 (Part II)

    International Nuclear Information System (INIS)

    Amala, P.A.K.; Rodrigue, G.H.

    1994-01-01

    This is a two-part presentation of a timing study on the Thinking Machines Corp. CM-5 computer. Part II, given in this study, covers domain-decomposition and message-passing models. Part I described computational problems using a SIMD model and Connection Machine Fortran (CMF).

  9. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Directory of Open Access Journals (Sweden)

    Rahmatollah Beheshti

    Full Text Available Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.

  10. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jones-Smith, Jessica C; Igusa, Takeru

    2017-01-01

    Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.
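    The income elasticities reported here are, in their simplest form, the percent change in consumption per percent change in income; a midpoint (arc) formula with invented consumption figures:

```python
def arc_elasticity(q0, q1, i0, i1):
    """Midpoint (arc) income elasticity of demand:
    percent change in quantity over percent change in income."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    di = (i1 - i0) / ((i0 + i1) / 2)
    return dq / di

# Invented weekly figures for a 20% income rise (500 -> 600)
fruit_elast = arc_elasticity(3.0, 3.05, 500.0, 600.0)    # nearly flat demand
staple_elast = arc_elasticity(10.0, 12.0, 500.0, 600.0)  # responsive demand
```

    An elasticity near zero, as in the first case, corresponds to the abstract's finding that fruit and fat consumption barely respond to income changes.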

  11. Improved signal model for confocal sensors accounting for object depending artifacts.

    Science.gov (United States)

    Mauch, Florian; Lyda, Wolfram; Gronle, Marc; Osten, Wolfgang

    2012-08-27

    The conventional signal model of confocal sensors is well established and has proven to be exceptionally robust especially when measuring rough surfaces. Its physical derivation however is explicitly based on plane surfaces or point like objects, respectively. Here we show experimental results of a confocal point sensor measurement of a surface standard. The results illustrate the rise of severe artifacts when measuring curved surfaces. On this basis, we present a systematic extension of the conventional signal model that is proven to be capable of qualitatively explaining these artifacts.

  12. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    Science.gov (United States)

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case-study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similar for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
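    The first approach, non-parametric bootstrapping of the parameters of a distribution that itself represents stochastic uncertainty, can be sketched for a simple exponential time-to-event fit (illustrative data; the study's actual distributions and health economic outcomes are richer):

```python
import random

random.seed(42)

# Illustrative patient-level times-to-event (months), roughly exponential
times = [random.expovariate(1 / 12.0) for _ in range(200)]

def fit_rate(sample):
    """Maximum-likelihood exponential rate: 1 / mean."""
    return len(sample) / sum(sample)

# Parameter uncertainty: non-parametric bootstrap of the fitted rate
B = 500
boot = sorted(fit_rate([random.choice(times) for _ in times]) for _ in range(B))
ci = (boot[int(0.025 * B)], boot[int(0.975 * B)])

# Stochastic uncertainty: draw individual event times from the fitted
# distribution, using one bootstrap replicate of the rate
rate = random.choice(boot)
patient_times = [random.expovariate(rate) for _ in range(1000)]
```

    Redrawing `rate` from the bootstrap replicates in every probabilistic-sensitivity-analysis iteration is what propagates the parameter uncertainty into the patient-level simulation, rather than fixing the distribution at its point estimate.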

  13. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    Directory of Open Access Journals (Sweden)

    Koen Degeling

    2017-12-01

    Full Text Available Abstract Background Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Methods Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case-study. Results Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Conclusions Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.

  14. New hexadentate macrocyclic ligand and their copper(II) and nickel(II) complexes: Spectral, magnetic, electrochemical, thermal, molecular modeling and antimicrobial studies

    Science.gov (United States)

    Chandra, Sulekh; Ruchi; Qanungo, Kushal; Sharma, Saroj. K.

    Ni(II) and Cu(II) complexes were synthesized with a hexadentate macrocyclic ligand [3,4,8,9-tetraoxo-2,5,7,10-tetraaza-1,6-dithio-(3,4,8,9)-dipyridinedodecane (L)] and characterized by elemental analysis, molar conductance measurements, mass, NMR, IR, electronic and EPR spectral, thermal and molecular modeling studies. All the complexes are 1:2 electrolytes in nature and may be formulated as [M(L)]X₂ [where M = Ni(II) or Cu(II) and X = Cl⁻, NO₃⁻, ½SO₄²⁻ or CH₃COO⁻]. On the basis of IR, electronic and EPR spectral studies, an octahedral geometry has been assigned to the Ni(II) complexes and a tetragonal geometry to the Cu(II) complexes. The antimicrobial activities and LD50 values of the ligand and its complexes, as growth-inhibiting agents, have been screened in vitro against two different species of bacteria and plant pathogenic fungi.

  15. Shunted-Josephson-junction model. II. The nonautonomous case

    DEFF Research Database (Denmark)

    Belykh, V. N.; Pedersen, Niels Falsig; Sørensen, O. H.

    1977-01-01

    The shunted-Josephson-junction model with a monochromatic ac current drive is discussed employing the qualitative methods of the theory of nonlinear oscillations. As in the preceding paper dealing with the autonomous junction, the model includes a phase-dependent conductance and a shunt capacitance. The mathematical discussion makes use of the phase-space representation of the solutions to the differential equation. The behavior of the trajectories in phase space is described for different characteristic regions in parameter space, and the associated features of the junction IV curve to be expected are pointed out. The main objective is to provide a qualitative understanding of the junction behavior, to clarify which kinds of properties may be derived from the shunted-junction model, and to specify the relative arrangement of the important domains in the parameter-space decomposition.

  16. Marginal production in the Gulf of Mexico - II. Model results

    International Nuclear Information System (INIS)

    Kaiser, Mark J.; Yu, Yunke

    2010-01-01

    In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depth less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the Gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl oil and 13.3 Tcf gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and limitations of the analysis. (author)

  17. Marginal production in the Gulf of Mexico - II. Model results

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Mark J.; Yu, Yunke [Center for Energy Studies, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2010-08-15

    In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depth less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the Gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl oil and 13.3 Tcf gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and limitations of the analysis. (author)

  18. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia

    Directory of Open Access Journals (Sweden)

    Supriyati Supriyati

    2017-01-01

    Full Text Available Bank Indonesia demands that national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks' financial statements submitted to Bank Indonesia have become the basis for determining the status of their soundness. In fact, the banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. However, for the users of financial statements it may differ, for example with respect to the value of the company, the length of the financial audit, and aspects of tax evasion by the banks. This study tries to find out 1) the effect of GCG on earnings management, 2) the effect of earnings management on company value, the audit report lag, and taxation, and 3) the effect of audit report lag on corporate value and taxation. This is a quantitative study with data collected from the banks' financial statements, GCG implementation reports, and annual reports for 2003-2013. A purposive sample of 41 banks listed on the Indonesia Stock Exchange was taken. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the presentation of financial statements to the public. This research is expected to provide managerial implications regarding the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks' competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  19. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia

    Directory of Open Access Journals (Sweden)

    Supriyati

    2015-12-01

    Full Text Available Bank Indonesia demands that national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks' financial statements submitted to Bank Indonesia have become the basis for determining the status of their soundness. In fact, the banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. However, for the users of financial statements it may differ, for example with respect to the value of the company, the length of the financial audit, and aspects of tax evasion by the banks. This study tries to find out 1) the effect of GCG on earnings management, 2) the effect of earnings management on company value, the audit report lag, and taxation, and 3) the effect of audit report lag on corporate value and taxation. This is a quantitative study with data collected from the banks' financial statements, GCG implementation reports, and annual reports for 2003-2013. A purposive sample of 41 banks listed on the Indonesia Stock Exchange was taken. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the presentation of financial statements to the public. This research is expected to provide managerial implications regarding the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks' competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  20. Current-account effects of a devaluation in an optimizing model with capital accumulation

    DEFF Research Database (Denmark)

    Nielsen, Søren Bo

    1991-01-01

    This article explores the consequences of a devaluation in the context of a ‘real’, optimizing model of a small open economy. What provides for real effects of the devaluation is the existence of nominal wage stickiness during a contract period. We show that if this contract period is relatively...... assets and of capital...

  1. Summary of model to account for inhibition of CAM corrosion by porous ceramic coating

    Energy Technology Data Exchange (ETDEWEB)

    Hopper, R., LLNL

    1998-03-31

    Corrosion occurs during five characteristic periods or regimes. These are summarized below. For more detailed discussion, see the attached memorandum by Robert Hopper entitled 'Ceramic Barrier Performance Model, Version 1.0, Description of Initial PA Input', dated March 30, 1998.

  2. Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper

    Science.gov (United States)

    Hock, Heinrich; Isenberg, Eric

    2012-01-01

    Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…

  3. Accounting for false-positive acoustic detections of bats using occupancy models

    Science.gov (United States)

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    1. Acoustic surveys have become a common survey method for bats and other vocal taxa. Previous work shows that bat echolocation may be misidentified, but common analytic methods, such as occupancy models, assume that misidentifications do not occur. Unless rare, such misidentifications could lead to incorrect inferences with significant management implications.
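    A false-positive occupancy model of the kind referred to here mixes, per site, a detection probability for occupied sites with a false-positive probability for unoccupied ones. Below is a toy single-season likelihood maximized by grid search (simulated data; the constraint p10 < p11 is one common way to keep such models identifiable, and none of this reproduces the authors' analysis):

```python
import math, random

random.seed(3)

S, V = 400, 5                          # sites, repeat visits per site
psi_t, p11_t, p10_t = 0.6, 0.5, 0.1    # true occupancy, detection, false-positive

# Simulate per-site detection counts (number of visits with a recorded call)
counts = [0] * (V + 1)
for _ in range(S):
    occupied = random.random() < psi_t
    p = p11_t if occupied else p10_t
    counts[sum(random.random() < p for _ in range(V))] += 1

def loglik(psi, p11, p10):
    """Log-likelihood of the detection counts under a single-season
    occupancy model with false positives (constant probabilities)."""
    ll = 0.0
    for d, n_d in enumerate(counts):
        if n_d == 0:
            continue
        c = math.comb(V, d)
        l_occ = c * p11 ** d * (1 - p11) ** (V - d)   # site occupied
        l_un = c * p10 ** d * (1 - p10) ** (V - d)    # site unoccupied
        ll += n_d * math.log(psi * l_occ + (1 - psi) * l_un)
    return ll

# Crude grid-search maximum likelihood with the constraint p10 < p11
grid = [i / 20 for i in range(1, 20)]
psi_hat, p11_hat, p10_hat = max(
    ((a, b, c) for a in grid for b in grid for c in grid if c < b),
    key=lambda t: loglik(*t))
```

    Ignoring the false-positive term (setting p10 = 0, as standard occupancy models implicitly do) would inflate the occupancy estimate here, which is the inferential risk the abstract warns about.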

  4. Practical Model for First Hyperpolarizability Dispersion Accounting for Both Homogeneous and Inhomogeneous Broadening Effects.

    Science.gov (United States)

    Campo, Jochen; Wenseleers, Wim; Hales, Joel M; Makarov, Nikolay S; Perry, Joseph W

    2012-08-16

    A practical yet accurate dispersion model for the molecular first hyperpolarizability β is presented, incorporating both homogeneous and inhomogeneous line broadening because these affect the β dispersion differently, even if they are indistinguishable in linear absorption. Consequently, combining the absorption spectrum with one free shape-determining parameter Γinhom, the inhomogeneous line width, turns out to be necessary and sufficient to obtain a reliable description of the β dispersion, requiring no information on the homogeneous (including vibronic) and inhomogeneous line broadening mechanisms involved, providing an ideal model for practical use in extrapolating experimental nonlinear optical (NLO) data. The model is applied to the efficient NLO chromophore picolinium quinodimethane, yielding an excellent fit of the two-photon resonant wavelength-dependent data and a dependable static value β0 = 316 × 10⁻³⁰ esu. Furthermore, we show that including a second electronic excited state in the model does yield an improved description of the NLO data at shorter wavelengths but has only limited influence on β0.

  5. Accountability in Training Transfer: Adapting Schlenker's Model of Responsibility to a Persistent but Solvable Problem

    Science.gov (United States)

    Burke, Lisa A.; Saks, Alan M.

    2009-01-01

    Decades have been spent studying training transfer in organizational environments in recognition of a transfer problem in organizations. Theoretical models of various antecedents, empirical studies of transfer interventions, and studies of best practices have all been advanced to address this continued problem. Yet a solution may not be so…

  6. Small strain multiphase-field model accounting for configurational forces and mechanical jump conditions

    Science.gov (United States)

    Schneider, Daniel; Schoof, Ephraim; Tschukin, Oleg; Reiter, Andreas; Herrmann, Christoph; Schwab, Felix; Selzer, Michael; Nestler, Britta

    2017-08-01

    Computational models based on the phase-field method have become an essential tool in material science and physics in order to investigate materials with complex microstructures. The models typically operate on a mesoscopic length scale resolving structural changes of the material and provide valuable information about the evolution of microstructures and mechanical property relations. For many interesting and important phenomena, such as martensitic phase transformation, mechanical driving forces play an important role in the evolution of microstructures. In order to investigate such physical processes, an accurate calculation of the stresses and the strain energy in the transition region is indispensable. We recall a multiphase-field elasticity model based on the force balance and the Hadamard jump condition at the interface. We show the quantitative characteristics of the model by comparing the stresses, strains and configurational forces with theoretical predictions in two-phase cases and with results from sharp interface calculations in a multiphase case. As an application, we choose the martensitic phase transformation process in multigrain systems and demonstrate the influence of the local homogenization scheme within the transition regions on the resulting microstructures.

  7. Shadow Segmentation and Augmentation Using α-overlay Models that Account for Penumbra

    DEFF Research Database (Denmark)

    Nielsen, Michael; Madsen, Claus B.

    2006-01-01

    This paper introduces a new concept within shadow segmentation. Previously, an image is considered to consist of shadow and non-shadow regions. Thus, a binary mask is estimated using various heuristics regarding structural and retinex/color constancy theories. We wish to model natural shadows so...

  8. An Exemplar-Model Account of Feature Inference from Uncertain Categorizations

    Science.gov (United States)

    Nosofsky, Robert M.

    2015-01-01

    In a highly systematic literature, researchers have investigated the manner in which people make feature inferences in paradigms involving uncertain categorizations (e.g., Griffiths, Hayes, & Newell, 2012; Murphy & Ross, 1994, 2007, 2010a). Although researchers have discussed the implications of the results for models of categorization and…

  9. Evaluation of alternative surface runoff accounting procedures using the SWAT model

    Science.gov (United States)

    For surface runoff estimation in the Soil and Water Assessment Tool (SWAT) model, the curve number (CN) procedure is commonly adopted to calculate surface runoff by utilizing antecedent soil moisture condition (SCSI) in field. In the recent version of SWAT (SWAT2005), an alternative approach is ava...

  10. Using state-and-transition modeling to account for imperfect detection in invasive species management

    Science.gov (United States)

    Frid, Leonardo; Holcombe, Tracy; Morisette, Jeffrey T.; Olsson, Aaryn D.; Brigham, Lindy; Bean, Travis M.; Betancourt, Julio L.; Bryan, Katherine

    2013-01-01

    Buffelgrass, a highly competitive and flammable African bunchgrass, is spreading rapidly across both urban and natural areas in the Sonoran Desert of southern and central Arizona. Damages include increased fire risk, losses in biodiversity, and diminished revenues and quality of life. Feasibility of sustained and successful mitigation will depend heavily on rates of spread, treatment capacity, and cost–benefit analysis. We created a decision support model for the wildland–urban interface north of Tucson, AZ, using a spatial state-and-transition simulation modeling framework, the Tool for Exploratory Landscape Scenario Analyses. We addressed the issues of undetected invasions, identifying potentially suitable habitat and calibrating spread rates, while answering questions about how to allocate resources among inventory, treatment, and maintenance. Inputs to the model include a state-and-transition simulation model to describe the succession and control of buffelgrass, a habitat suitability model, management planning zones, spread vectors, estimated dispersal kernels for buffelgrass, and maps of current distribution. Our spatial simulations showed that without treatment, buffelgrass infestations that started with as little as 80 ha (198 ac) could grow to more than 6,000 ha by the year 2060. In contrast, applying unlimited management resources could limit 2060 infestation levels to approximately 50 ha. The application of sufficient resources toward inventory is important because undetected patches of buffelgrass will tend to grow exponentially. In our simulations, areas affected by buffelgrass may increase substantially over the next 50 yr, but a large, upfront investment in buffelgrass control could reduce the infested area and overall management costs.
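
The growth-versus-treatment trade-off described above can be illustrated with a minimal exponential-spread sketch. The growth and treatment rates below are assumptions chosen only to reproduce the reported order of magnitude (80 ha growing past 6,000 ha in roughly 50 yr); they are not the calibrated outputs of the state-and-transition model:

```python
def project_area(a0_ha, growth_rate, years, treat_frac=0.0):
    """Project infested area under exponential spread, with an optional
    fraction of the infested area found and treated each year.
    All rates here are illustrative, not calibrated model values."""
    area = float(a0_ha)
    for _ in range(years):
        area *= (1.0 + growth_rate)   # undetected patches grow exponentially
        area *= (1.0 - treat_frac)    # detected patches are removed
    return area

untreated = project_area(80, 0.095, 50)                   # grows past 6,000 ha
treated = project_area(80, 0.095, 50, treat_frac=0.05)    # modest annual treatment
```

Even a small annual detection-and-treatment fraction compounds over 50 years, which is the intuition behind the upfront-investment result.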

  11. The Adsorption of Cd(II) on Manganese Oxide Investigated by Batch and Modeling Techniques.

    Science.gov (United States)

    Huang, Xiaoming; Chen, Tianhu; Zou, Xuehua; Zhu, Mulan; Chen, Dong; Pan, Min

    2017-09-28

    Manganese (Mn) oxide is a ubiquitous metal oxide in sub-environments. The adsorption of Cd(II) on Mn oxide as a function of adsorption time, pH, ionic strength, temperature, and initial Cd(II) concentration was investigated by batch techniques. The adsorption kinetics showed that the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by a pseudo-second-order kinetic model with high correlation coefficients (R² > 0.999). The adsorption of Cd(II) on Mn oxide significantly decreased with increasing ionic strength at pH < 6.0, whereas the adsorption was independent of ionic strength at pH > 6.0, which indicated that outer-sphere and inner-sphere surface complexation dominated the adsorption of Cd(II) on Mn oxide at pH < 6.0 and pH > 6.0, respectively. The maximum adsorption capacity of Mn oxide for Cd(II) calculated from the Langmuir model was 104.17 mg/g at pH 6.0 and 298 K. The thermodynamic parameters showed that the adsorption of Cd(II) on Mn oxide was an endothermic and spontaneous process. According to the results of surface complexation modeling, the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by ion exchange sites (X₂Cd) at low pH and inner-sphere surface complexation sites (SOCd⁺ and (SO)₂CdOH⁻ species) at high pH conditions. The findings presented herein play an important role in understanding the fate and transport of heavy metals at the water-mineral interface.
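
To make the Langmuir calculation concrete, here is a hedged sketch that fits the linearized Langmuir isotherm to synthetic equilibrium data. The capacity q_max is taken from the abstract (104.17 mg/g), while K_L and the concentration grid are illustrative assumptions:

```python
import numpy as np

q_max, K_L = 104.17, 0.05                # q_max from the abstract; K_L assumed
Ce = np.linspace(5, 200, 20)             # equilibrium concentrations (mg/L), illustrative
qe = q_max * K_L * Ce / (1 + K_L * Ce)   # Langmuir isotherm: qe = q_max*K_L*Ce/(1+K_L*Ce)

# Linearized form: Ce/qe = Ce/q_max + 1/(K_L*q_max), fit by least squares
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1 / slope
K_L_fit = slope / intercept
```

With real batch data the same two-line fit yields the reported maximum adsorption capacity directly from the slope.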

  12. Making Collaborative Innovation Accountable

    DEFF Research Database (Denmark)

    Sørensen, Eva

    The public sector is increasingly expected to be innovative, but the price for a more innovative public sector might be that it becomes difficult to hold public authorities to account for their actions. The article explores the tensions between innovative and accountable governance, describes the foundation for these tensions in different accountability models, and suggests directions to take in analyzing the accountability of collaborative innovation processes.

  13. Modeling multibody systems with uncertainties. Part II: Numerical applications

    Energy Technology Data Exchange (ETDEWEB)

    Sandu, Corina, E-mail: csandu@vt.edu; Sandu, Adrian; Ahmadian, Mehdi [Virginia Polytechnic Institute and State University, Mechanical Engineering Department (United States)

    2006-04-15

    This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects'. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte-Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties.
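
A minimal sketch of the collocation idea: propagate a standard-normal uncertain parameter through a toy nonlinear response (standing in for the quarter-car model) using probabilists' Gauss-Hermite nodes, and compare with plain Monte Carlo. The response function is an assumption chosen only for illustration:

```python
import numpy as np

f = lambda xi: np.exp(0.3 * xi)   # toy nonlinear response of one uncertain parameter

# Collocation at probabilists' Gauss-Hermite nodes (weight exp(-x^2/2));
# the weights sum to sqrt(2*pi), hence the normalization below.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
pc_mean = np.sum(weights * f(nodes)) / np.sqrt(2 * np.pi)

# Monte Carlo estimate of the same mean, for comparison
rng = np.random.default_rng(0)
mc_mean = f(rng.standard_normal(2000)).mean()

true_mean = np.exp(0.3**2 / 2)    # E[exp(a*Z)] = exp(a^2/2) for Z ~ N(0,1)
```

Eight deterministic model evaluations match the analytic mean to near machine precision, while 2000 random samples still carry visible sampling error, which mirrors the efficiency claim above.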

  14. Modeling multibody systems with uncertainties. Part II: Numerical applications

    International Nuclear Information System (INIS)

    Sandu, Corina; Sandu, Adrian; Ahmadian, Mehdi

    2006-01-01

    This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects'. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte-Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties.

  15. Causality in 1+1-dimensional Yukawa model-II

    Indian Academy of Sciences (India)

    2013-10-01

    Oct 1, 2013 ... shown that the effective model can be interpreted as a field theory of a bound state. We study causality in such a ... the motivation pertaining to causality violation in the bound states. In §3 condition of .... Consider a diagram with n external scalars, L fermion loops, V vertices, IF internal fermion lines and IB ...

  16. Simplicial models for trace spaces II: General higher dimensional automata

    DEFF Research Database (Denmark)

    Raussen, Martin

    Higher Dimensional Automata (HDA) are topological models for the study of concurrency phenomena. The state space for an HDA is given as a pre-cubical complex in which a set of directed paths (d-paths) is singled out. The aim of this paper is to describe a general method that determines the space...

  17. Bianchi Type-II inflationary models with constant deceleration ...

    Indian Academy of Sciences (India)

    Singh, C P; Kumar, S

    …mechanism at the early stages of evolution to explain the flat, homogeneous and isotropic nature of the present-day Universe. In these models, the Universe undergoes a phase transition characterized by the evolution of a Higgs field φ. Inflation will take place if the potential V(φ) has a 'flat' ...

  18. Algebraic models of hadron structure II. Strange baryons

    International Nuclear Information System (INIS)

    Bijker, R.; Iachello, F.; Leviatan, A.

    2000-01-01

    The algebraic treatment of baryons is extended to strange resonances. Within this framework we study a collective string-like model in which the radial excitations are interpreted as rotations and vibrations of the strings. We derive a mass formula and closed expressions for strong and electromagnetic decay widths and use these to analyze the available experimental data

  19. Open Business Models (Latin America) - Phase II | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Open business is a different way of doing business related to information, knowledge and culture, in which intellectual property does not play the role of being either the primary incentive or the primary source of remuneration. Open business models include, for example, making content or services available free of charge ...

  20. Conditional models accounting for regression to the mean in observational multi-wave panel studies on alcohol consumption.

    Science.gov (United States)

    Ripatti, Samuli; Mäkelä, Pia

    2008-01-01

    To develop statistical methodology, needed for studying whether the effects of an acute-onset intervention differ by consumption group, that accounts correctly for the effect of regression to the mean (RTM) in observational panel studies with three or more measurement waves. A general statistical modelling framework, based on conditional models, is presented for analysing alcohol panel data with three or more measurements that models the dependence between initial drinking level and change in consumption while controlling for RTM. The method is illustrated by panel data from Finland, southern Sweden and Denmark, where the effects of large changes in alcohol taxes and travellers' allowances were studied. The suggested model allows for drawing statistical inference on the parameters of interest and also for identifying non-linear effects of an intervention by initial consumption using standard statistical software modelling tools. There was no evidence in any of the countries that the changes were larger among heavy drinkers, but in southern Sweden there was evidence that light drinkers raised their level of consumption. Conditional models are a versatile modelling framework that offers a flexible tool for modelling and testing intervention-induced changes in consumption by initial consumption while controlling simultaneously for RTM.
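
Regression to the mean arises here even without any intervention effect: because wave-1 consumption contains occasion-specific noise, regressing change on the initial level yields a negative slope. A hedged simulation, with all distributions hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_level = rng.normal(10, 3, n)           # stable individual drinking levels
wave1 = true_level + rng.normal(0, 2, n)    # baseline measurement with occasion noise
wave2 = true_level + rng.normal(0, 2, n)    # follow-up, no intervention effect at all

change = wave2 - wave1
slope = np.polyfit(wave1, change, 1)[0]     # negative: heavy "drinkers" at wave 1
                                            # appear to cut down, purely from RTM
```

The theoretical slope is -sigma_noise^2 / (sigma_true^2 + sigma_noise^2) = -4/13 ≈ -0.31 here, which is why a naive change-by-baseline analysis would wrongly suggest heavy drinkers respond most; the conditional models above are designed to separate this artifact from real intervention effects.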

  1. Tree biomass in the Swiss landscape: nationwide modelling for improved accounting for forest and non-forest trees.

    Science.gov (United States)

    Price, B; Gomez, A; Mathys, L; Gardi, O; Schellenberger, A; Ginzler, C; Thürig, E

    2017-03-01

    Trees outside forest (TOF) can perform a variety of social, economic and ecological functions including carbon sequestration. However, detailed quantification of tree biomass is usually limited to forest areas. Taking advantage of structural information available from stereo aerial imagery and airborne laser scanning (ALS), this research models tree biomass using national forest inventory data and linear least-square regression and applies the model both inside and outside of forest to create a nationwide model for tree biomass (above ground and below ground). Validation of the tree biomass model against TOF data within settlement areas shows relatively low model performance (R² of 0.44) but still a considerable improvement on current biomass estimates used for greenhouse gas inventory and carbon accounting. We demonstrate an efficient and easily implementable approach to modelling tree biomass across a large heterogeneous nationwide area. The model offers significant opportunity for improved estimates on land use combination categories (CC) where tree biomass has either not been included or only roughly estimated until now. The ALS biomass model also offers the advantage of providing greater spatial resolution and greater within CC spatial variability compared to the current nationwide estimates.

  2. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    NARCIS (Netherlands)

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts,

  4. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards; Accountability e regime de nao proliferacao nuclear: uma avaliacao do modelo de vigilancia mutua brasileiro-argentina de salvaguardas nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Xavier, Roberto Salles

    2014-08-01

    The accountability regimes, organizations of global governance, and institutional arrangements of the global governance of nuclear non-proliferation and of the Brazilian-Argentine Mutual Vigilance model of Nuclear Safeguards are the subject of this research. The starting point is the importance of the institutional model of global governance for the effective control of the non-proliferation of nuclear weapons. In this context, the research investigates how the current arrangements of international nuclear non-proliferation are structured and how the Brazilian-Argentine Mutual Vigilance model of Nuclear Safeguards performs in relation to the accountability regimes of global governance. To that end, the current literature was reviewed along three theoretical dimensions: accountability, global governance and global governance organizations. The research method was the case study, and the data were treated by content analysis. The results made it possible to establish an evaluation model based on accountability mechanisms; to assess how the Brazilian-Argentine Mutual Vigilance model of Nuclear Safeguards behaves in relation to the proposed accountability regime; and to measure the degree to which regional arrangements that work with systems of global governance can strengthen these international systems. (author)

  5. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect this data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.

  6. Why does placing the question before an arithmetic word problem improve performance? A situation model account.

    Science.gov (United States)

    Thevenot, Catherine; Devidal, Michel; Barrouillet, Pierre; Fayol, Michel

    2007-01-01

    The aim of this paper is to investigate the controversial issue of the nature of the representation constructed by individuals to solve arithmetic word problems. More precisely, we consider the relevance of two different theories: the situation or mental model theory (Johnson-Laird, 1983; Reusser, 1989) and the schema theory (Kintsch & Greeno, 1985; Riley, Greeno, & Heller, 1983). Fourth-graders who differed in their mathematical skills were presented with problems that varied in difficulty and with the question either before or after the text. We obtained the classic effect of the position of the question, with better performance when the question was presented prior to the text. In addition, this effect was more marked in the case of children who had poorer mathematical skills and in the case of more difficult problems. We argue that this pattern of results is compatible only with the situation or mental model theory, and not with the schema theory.

  7. Mathematical modeling of pigment dispersion taking into account the full agglomerate particle size distribution

    DEFF Research Database (Denmark)

    Kiil, Søren

    2017-01-01

    The purpose of this work is to develop a mathematical model that can quantify the dispersion of pigments, with a focus on the mechanical breakage of pigment agglomerates. The underlying physical mechanism was assumed to be surface erosion of spherical pigment agglomerates. The full agglomerate......-based acrylic vehicle in a three-roll mill. When the linear rate of agglomerate surface erosion was taken to be proportional to the external agglomerate surface area, simulations of the volume-moment mean diameter over time were in good quantitative agreement with experimental data for all three pigments....... The only adjustable parameter used was an apparent rate constant for the linear agglomerate erosion rate. Model simulations, at selected values of time, for the full agglomerate particle size distribution were in good qualitative agreement with the measured values. A quantitative match of the experimental...
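
The assumed mechanism, surface erosion of spherical agglomerates with a volume loss rate proportional to the external surface area, implies a constant linear shrinkage rate of the radius. A small sketch of that consequence, with a hypothetical rate constant and starting sizes:

```python
import numpy as np

def erode(diameters_um, k_um_per_s, dt_s, steps):
    """Shrink spherical agglomerates by surface erosion: dV/dt = -k_a*A
    implies dr/dt = -const, so each diameter decreases linearly (to zero).
    k_um_per_s is an assumed apparent linear erosion rate constant."""
    d = np.asarray(diameters_um, dtype=float).copy()
    for _ in range(steps):
        d = np.maximum(d - 2.0 * k_um_per_s * dt_s, 0.0)  # d = 2r
    return d

def d43(d):
    """Volume-moment (De Brouckere) mean diameter, as tracked in the model."""
    return np.sum(d**4) / np.sum(d**3)

d0 = np.array([10.0, 20.0, 30.0])   # hypothetical agglomerate diameters (µm)
d_end = erode(d0, k_um_per_s=0.5, dt_s=1.0, steps=10)
```

Tracking `d43` over such a shrinking distribution is the quantity compared against the measured volume-moment mean diameter in the study.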

  8. Modelling of saturated soil slopes equilibrium with an account of the liquid phase bearing capacity

    Directory of Open Access Journals (Sweden)

    Maltseva Tatyana

    2017-01-01

    The paper presents an original method of solving the problem of uniformly distributed load action on a two-phase elastic half-plane with the use of a kinematic model. The kinematic model (Maltsev L.E.) of a two-phase medium is based on two new hypotheses according to which the stress and strain state of the two-phase body is described by a system of linear elliptic equations. These equations differ from the Lamé equations of elasticity theory by two terms in each equation. The terms describe the bearing capacity of the liquid phase, or a decrease in stress in the solid phase. The finite element method has been chosen as the solution method.

  9. Model application of Murabahah financing acknowledgement statement of Sharia accounting standard No 59 Year 2002

    Science.gov (United States)

    Muda, Iskandar; Panjaitan, Rohdearni; Erlina; Ginting, Syafruddin; Maksum, Azhar; Abubakar

    2018-03-01

    The purpose of this research is to observe a murabahah financing implementation model. Observations were made at one of the sharia banks going public in Indonesia. In this implementation, financing is granted with appropriate facilities and a maximum financing amount, so the provision of financing should be adjusted to the type, business conditions and business plan of the prospective mudharib. If the financing provided is too low, the mudharib's requirements may not reach the target and the financing may not be refundable.

  10. Reductive dechlorination of DNAPL mixtures with Fe(II/III)-L and Fe(II)-C: Evaluation using a kinetic model for the competitions.

    Science.gov (United States)

    Do, Si-Hyun; Jo, Se-Hee; Roh, Ji Soo; Im, Hye Jin; Park, Ho Bum; Batchelor, Bill

    2018-05-15

    A kinetic model for the competitions was applied to understand the reductive dechlorination of ternary DNAPL mixtures containing PCE, TCE, and 1,1,1-TCA. The model assumed that the mass transfer rates were sufficiently rapid that the target compounds in the solution and the DNAPL mixture were in phase equilibrium. Dechlorination was achieved using either a mixture of Fe(II), Fe(III), and Ca(OH)₂ (Fe(II/III)-L) or a mixture of Fe(II) and Portland cement (Fe(II)-C). PCE in the DNAPL mixtures was gradually reduced, and it was reduced more rapidly by Fe(II)-C than by Fe(II/III)-L. A constant total TCE concentration in the DNAPL mixtures was observed, which implied that the rate of loss of TCE by dechlorination and possibly other processes was equal to the rate of production of TCE by PCE dechlorination. On the other hand, 1,1,1-TCA in the DNAPL mixtures was removed rapidly, and its degradation rate by Fe(II/III)-L was faster than by Fe(II)-C. The coefficients in the kinetic model (kᵢ, Kᵢ) were observed to decrease in the order 1,1,1-TCA > PCE > TCE for both Fe(II/III)-L and Fe(II)-C. The concentrations of target compounds in solution were the effective solubilities, because of the assumption of phase equilibrium, and were calculated with Raoult's law. The concentration changes observed were an increase and then a decrease for PCE, a sharp and then gradual increase for TCE, and a dramatic decrease for 1,1,1-TCA. The fraction of initial and theoretical reductive capacity revealed that Fe(II)-C had the ability to degrade the target compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
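
The effective-solubility step can be sketched directly: under the phase-equilibrium assumption, Raoult's law gives each component's aqueous concentration as its mole fraction in the DNAPL times its pure-compound solubility. The solubility values and mole fractions below are rough illustrative numbers, not the study's measured data:

```python
def effective_solubility(x, S):
    """Raoult's law for a DNAPL mixture: C_i = x_i * S_i, where x_i is the
    mole fraction in the NAPL phase and S_i the pure-compound solubility."""
    return {comp: x[comp] * S[comp] for comp in x}

# Approximate pure-compound aqueous solubilities (mg/L), illustrative only
S_pure = {"PCE": 200.0, "TCE": 1100.0, "1,1,1-TCA": 1290.0}
x0 = {"PCE": 0.4, "TCE": 0.3, "1,1,1-TCA": 0.3}   # hypothetical NAPL composition

C = effective_solubility(x0, S_pure)
```

As dechlorination depletes a component, its mole fraction (and hence its effective solubility) changes, which is what drives the rise-then-fall concentration patterns reported above.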

  11. Homology modeling and docking of AahII-Nanobody complexes reveal the epitope binding site on AahII scorpion toxin.

    Science.gov (United States)

    Ksouri, Ayoub; Ghedira, Kais; Ben Abderrazek, Rahma; Shankar, B A Gowri; Benkahla, Alia; Bishop, Ozlem Tastan; Bouhaouala-Zahar, Balkiss

    2018-02-19

    Scorpion envenoming and its treatment is a public health problem in many parts of the world due to highly toxic venom polypeptides diffusing rapidly within the body of severely envenomed victims. Recently, 38 AahII-specific nanobody sequences (Nbs) were retrieved, from which the ability of the nanobody candidate NbAahII10 to neutralize the most poisonous venom compound, namely AahII, which acts on sodium channels, was established. Herein, a structural computational approach is conducted to elucidate the Nb-AahII interactions that support the biological characteristics, using Nb multiple sequence alignment (MSA) followed by modeling and molecular docking investigations (RosettaAntibody, ZDOCK software tools). Sequence and structural analysis showed two dissimilar residues in the NbAahII10 CDR1 (Tyr27 and Tyr29) and an inserted polar residue Ser30 that appear to play an important role. Indeed, the CDR3 region of NbAahII10 is characterized by a specific Met104 and two negatively charged residues, Asp115 and Asp117. Complex dockings reveal that NbAahII17 and NbAahII38 share one common binding site on the surface of the AahII toxin, divergent from that of NbAahII10. At least a couple of NbAahII10 - AahII residue interactions (Gln38 - Asn44 and Arg62, His64, respectively) are mainly involved in the toxic AahII binding site. Altogether, this study gives valuable insights into the design and development of the next generation of antivenoms. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. An extended heterogeneous car-following model accounting for anticipation driving behavior and mixed maximum speeds

    Science.gov (United States)

    Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia

    2018-02-01

    The optimal driving speeds of different vehicles may differ for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed vmax is an important parameter determining the optimal driving speed. A vehicle with a higher maximum speed is more willing to drive faster than one with a lower maximum speed in a similar situation. By incorporating the anticipation driving behavior of relative velocity and mixed maximum speeds of different percentages into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the cooperation between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results all demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, and that increasing the lowest value among the mixed maximum speeds will result in more instability, whereas increasing the value or proportion of the part already having a higher maximum speed will cause different stabilities at high or low traffic densities.
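
The role of vmax can be seen in a Bando-type optimal velocity function, where two vehicles at the same headway get different optimal speeds. This is a generic OV form commonly used in such models, not necessarily the exact function of this paper, and the numeric values are hypothetical:

```python
import numpy as np

def optimal_velocity(headway, v_max, h_c=4.0):
    """Bando-type OV function: V(h) = v_max/2 * (tanh(h - h_c) + tanh(h_c)).
    h_c is a safety-distance parameter (assumed value); the heterogeneity
    in the extended model enters through per-vehicle v_max."""
    return v_max / 2.0 * (np.tanh(headway - h_c) + np.tanh(h_c))

# Same 5 m headway, different maximum speeds (illustrative units)
v_fast = optimal_velocity(5.0, v_max=33.0)
v_slow = optimal_velocity(5.0, v_max=25.0)
```

At identical headway the high-vmax vehicle's optimal speed is strictly larger, which is the heterogeneity the mixed-maximum-speed model builds into the flow.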

  13. Taking error into account when fitting models using Approximate Bayesian Computation.

    Science.gov (United States)

    van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M

    2018-03-01

    Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
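
A hedged sketch of the probabilistic acceptance idea: with normally distributed, independent measurement errors, a prior draw is accepted with probability given by the normal likelihood of the observed summary, rather than by a hard distance threshold. This is a toy mean-estimation example under those assumptions, not the authors' earthworm model or their exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)
obs = rng.normal(3.0, 0.5, 20)   # repeated measures of one quantity, known error sd

def abc_error_calibrated(obs, sigma_err, n_draws=20_000):
    """Probabilistic ABC under a normal error model: keep a prior draw with
    probability equal to the normal density (normalized to max 1) of the
    observed mean given the simulated mean."""
    se2 = sigma_err**2 / len(obs)                  # variance of the sample mean
    theta = rng.uniform(0.0, 6.0, n_draws)         # flat prior over the mean
    sim_means = rng.normal(theta, np.sqrt(se2))    # simulate the noisy summary
    accept_p = np.exp(-(sim_means - obs.mean())**2 / (2 * se2))
    return theta[rng.uniform(size=n_draws) < accept_p]

post = abc_error_calibrated(obs, sigma_err=0.5)
```

The accepted draws form an approximate posterior sample whose spread honestly reflects the measurement error, which is the property the paper's coverage test checks.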

  14. A new lattice model accounting for multiple optimal current differences' anticipation effect in two-lane system

    Science.gov (United States)

    Li, Xiaoqin; Fang, Kangling; Peng, Guanghan

    2017-11-01

    This paper extends a two-lane lattice hydrodynamic traffic flow model to take into account the driver's anticipation effect in sensing multiple optimal current differences. Based on the proposed model, we derive analytically the effect of this anticipation on the stability of the traffic dynamics. Phase diagrams are plotted and show that the stable region enlarges with the anticipation effect in sensing multiple optimal current differences. Through simulation, it is found that the oscillation of the density wave around the critical density decreases as the lattice number and anticipation time increase, for both transient and steady states. The simulation results are in good agreement with the theoretical analysis, showing that considering the driver's anticipation of multiple optimal current differences in the two-lane lattice model stabilizes the traffic flow and suppresses traffic jams efficiently.

  15. Differential geometry based solvation model II: Lagrangian formulation.

    Science.gov (United States)

    Chen, Zhan; Baker, Nathan A; Wei, G W

    2011-12-01

    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation models. The Lagrangian representation of biomolecular surfaces offers several advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential maps and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories, so many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The optimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential-driven geometric flow and PB equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of

  16. Mathematical model II. Basic particle and special relativity

    Directory of Open Access Journals (Sweden)

    Nitin Ramchandra Gadre

    2011-03-01

    Full Text Available The basic particle, the electron, obeys various theories such as electrodynamics, quantum mechanics and special relativity. Under different experimental conditions the particle behaves differently, allowing us to observe the different characteristics that become the basis for these theories. In this paper, we try to identify the requirements of special relativity and suggest a mathematical particle model that can satisfy them. The basic presumption is that the particle should have some structural characteristics that make it obey the postulates of these theories. As it is experimentally ‘difficult’ to determine the structure of the basic particle, the electron, we make a mathematical attempt. We call this model a logically and mathematically probable structure of the basic particle, the electron.

  17. MODELING OF TARGETED DRUG DELIVERY PART II. MULTIPLE DRUG ADMINISTRATION

    Directory of Open Access Journals (Sweden)

    A. V. Zaborovskiy

    2017-01-01

    Full Text Available In oncology practice, despite significant advances in early cancer detection, surgery, radiotherapy, laser therapy, targeted therapy, etc., chemotherapy is unlikely to lose its relevance in the near future. In this context, the development of new antitumor agents is one of the most important problems of cancer research. Yet however important the search for new compounds with antitumor activity is, the possibilities of the “old” agents have not been fully exhausted: targeted delivery of antitumor agents can give them a “second life”. In developing new targeted drugs and introducing them into clinical practice, changes in their pharmacodynamics and pharmacokinetics play a special role. The paper describes a pharmacokinetic model of targeted drug delivery. The conditions under which it is meaningful to search for a delivery vehicle for an active substance are described. Primary screening of antitumor agents was undertaken to identify candidates for modification for targeted delivery, based on the underlying assumptions of the model.

  18. Solving seismological problems using sgraph program: II-waveform modeling

    International Nuclear Information System (INIS)

    Abdelwahed, Mohamed F.

    2012-01-01

    One of the seismological programs for manipulating seismic data is the SGRAPH program. It consists of integrated tools to perform advanced seismological techniques. SGRAPH is a stand-alone Windows-based application for maintaining and analyzing seismic waveform data that handles a wide range of data formats. SGRAPH was described in detail in the first part of this paper. In this part, I discuss the advanced techniques included in the program and their applications in seismology. Because of the numerous tools included, SGRAPH alone is sufficient to perform basic waveform analysis and to solve advanced seismological problems. The first part of this paper presented applications to source parameter estimation and hypocentral location. Here, I discuss the SGRAPH waveform modeling tools. This paper exhibits examples of how to apply these tools to waveform modeling for estimating the focal mechanisms and crustal structure of local earthquakes.

  19. Mathematical model II. Basic particle and special relativity

    OpenAIRE

    Nitin Ramchandra Gadre

    2011-01-01

    The basic particle, the electron, obeys various theories such as electrodynamics, quantum mechanics and special relativity. Under different experimental conditions the particle behaves differently, allowing us to observe the different characteristics that become the basis for these theories. In this paper, we try to identify the requirements of special relativity and suggest a mathematical particle model that can satisfy these requirements. The basic presumption is that the particle should have some structu...

  20. Model of comet comae. II. Effects of solar photodissociative ionization

    International Nuclear Information System (INIS)

    Huebner, W.F.; Giguere, P.T.

    1980-01-01

    Improvements to our computer model of coma photochemistry are described. These include an expansion of the chemical reaction network and new rate constants that have only recently been measured. Photolytic reactions of additional molecules are incorporated, and photolytic branching ratios are treated in far greater detail than in our previous work. A total of 25 photodissociative ionization (PDI) reactions are now considered (compared to only 3 PDI reactions previously). Solar PDI of the mother molecule CO2 is shown to compete effectively with photoionization of CO in the production of the observed CO+. The CO+ density peak predicted by our improved model, for CO2 or CO mother molecules, is deep in the inner coma, in better agreement with observation than our old CO2 model. However, neither the CO2 nor the CO mother-molecule calculations reproduce the CO+/H2O+ ratio observed in comet Kohoutek. PDI products of the CO2, CO, CH4, and NH3 mother molecules fuel a complex chemistry scheme, producing inner-coma abundances of CN, C2, and C3 much greater than previously calculated

  1. Slag Behavior in Gasifiers. Part II: Constitutive Modeling of Slag

    Directory of Open Access Journals (Sweden)

    Mehrdad Massoudi

    2013-02-01

    Full Text Available The viscosity of slag and the thermal conductivity of ash deposits are two of the most important constitutive parameters that need to be studied. The accurate formulation or representation of the (transport) properties of coal presents a special challenge for modeling efforts in computational fluid dynamics applications. Studies have indicated that, for tapping and for the membrane wall to be accessible, the slag viscosity must lie within a certain range; for example, between 1,300 °C and 1,500 °C the viscosity is approximately 25 Pa·s. As the operating temperature decreases, the slag cools and solid crystals begin to form. Since slag behaves as a non-linear fluid, we discuss the constitutive modeling of slag and the important parameters that must be studied. We propose a new constitutive model in which the stress tensor has not only a yield-stress part but also a viscous part with a shear-rate-dependent viscosity, along with temperature and concentration dependency, while allowing for the possibility of normal stress effects. In Part I, we reviewed, identified and discussed the key coal ash properties and the operating conditions impacting slag behavior.
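The yield-stress-plus-viscous constitutive form described above can be illustrated with a Herschel-Bulkley-type stress whose consistency depends on temperature. This is a hedged sketch only: the functional form and every parameter value below are illustrative assumptions, not the authors' fitted model (which also includes concentration dependence and normal stress effects):

```python
import math

def slag_stress(shear_rate, temp_k, tau_y=50.0, k0=1e-6, e_over_r=2.0e4, n=0.8):
    """Herschel-Bulkley-type stress: a yield stress tau_y plus a power-law
    viscous part whose consistency k grows as the slag cools (Arrhenius form).
    All parameter values are placeholders, not fitted slag data."""
    k = k0 * math.exp(e_over_r / temp_k)   # consistency rises as temperature drops
    return tau_y + k * shear_rate ** n

def apparent_viscosity(shear_rate, temp_k, **kw):
    """Stress divided by shear rate; apparent viscosity falls with rate
    because the fixed yield stress is spread over a larger shear rate."""
    return slag_stress(shear_rate, temp_k, **kw) / shear_rate
```

The qualitative behavior matches the abstract: cooling raises the stress at a given shear rate, and the material is shear-thinning in apparent viscosity.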

  2. Slag Behavior in Gasifiers. Part II: Constitutive Modeling of Slag

    Energy Technology Data Exchange (ETDEWEB)

    Massoudi, Mehrdad [National Energy Technology Laboratory; Wang, Ping

    2013-02-07

    The viscosity of slag and the thermal conductivity of ash deposits are two of the most important constitutive parameters that need to be studied. The accurate formulation or representation of the (transport) properties of coal presents a special challenge for modeling efforts in computational fluid dynamics applications. Studies have indicated that, for tapping and for the membrane wall to be accessible, the slag viscosity must lie within a certain range; for example, between 1,300 °C and 1,500 °C the viscosity is approximately 25 Pa·s. As the operating temperature decreases, the slag cools and solid crystals begin to form. Since slag behaves as a non-linear fluid, we discuss the constitutive modeling of slag and the important parameters that must be studied. We propose a new constitutive model in which the stress tensor has not only a yield-stress part but also a viscous part with a shear-rate-dependent viscosity, along with temperature and concentration dependency, while allowing for the possibility of normal stress effects. In Part I, we reviewed, identified and discussed the key coal ash properties and the operating conditions impacting slag behavior.

  3. Biosphere modelling for safety assessment of geological disposal taking account of denudation of contaminated soils. Research document

    International Nuclear Information System (INIS)

    Kato, Tomoko

    2003-03-01

    Biosphere models for safety assessment of geological disposal have been developed on the assumption that repository-derived radionuclides reach the surface environment via groundwater. In such modelling, rivers, deep wells and the marine environment have been considered as geosphere-biosphere interfaces (GBIs), and some Japanese-specific ''reference biospheres'' have been developed using an approach consistent with the BIOMOVS II/BIOMASS Reference Biosphere Methodology. In this study, it is assumed that repository-derived radionuclides reach the surface environment in the solid phase through uplift and erosion of contaminated soil and sediment. Radionuclides entering the surface environment by these processes could be distributed between solid and liquid phases and could spread within the biosphere via both phases. Based on these concepts, a biosphere model that treats the variably saturated zone under the surface soil (VSZ) as a GBI was developed, following the Reference Biosphere Methodology, for calculating the flux-to-dose conversion factors of three exposure groups (farming, freshwater fishing, marine fishing). The flux-to-dose conversion factors for the farming exposure group were the highest, and ''inhalation of dust'', ''external irradiation from soil'' and ''ingestion of soil'' were the dominant exposure pathways for most of the radionuclides considered in this model. The flux-to-dose conversion factors calculated by the biosphere model in this study cannot be compared directly with those calculated by previously developed biosphere models, because the migration processes assumed once the radionuclides enter the surface environment differ among the models; in the previous biosphere models it was assumed that repository-derived radionuclides entered GBIs such as rivers, deep wells and the marine environment via groundwater without dilution or retardation in the aquifer. Consequently, it is necessary to model the migration of

  4. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    International Nuclear Information System (INIS)

    Beckon, William N.

    2016-01-01

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
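The lag-selection procedure summarized above (test a large array of candidate lags and keep the one giving the best regression between exposure and tissue concentration) can be sketched as follows. Scoring by squared Pearson correlation and the synthetic data are illustrative assumptions, not the study's dataset:

```python
def pearson_r2(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def best_lag(exposure, tissue, max_lag):
    """Test an array of candidate lag times; keep the lag for which tissue
    concentration regresses best on the lagged ambient exposure."""
    scores = {lag: pearson_r2(exposure[:len(exposure) - lag], tissue[lag:])
              for lag in range(max_lag + 1)}
    return max(scores, key=scores.get), scores

# Synthetic check: tissue tracks exposure with a 3-step delay.
exposure = [1, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13]
tissue = [0.0, 0.0, 0.0] + [2 * e for e in exposure[:-3]]
lag, scores = best_lag(exposure, tissue, max_lag=6)
```

With the built-in 3-step delay, the scan recovers lag = 3 with a perfect fit, while unlagged regression scores much worse, which is the confounding the abstract describes.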

  5. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    Energy Technology Data Exchange (ETDEWEB)

    Beckon, William N., E-mail: William_Beckon@fws.gov

    2016-07-15

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).

  6. The occupant response to autonomous braking: a modeling approach that accounts for active musculature.

    Science.gov (United States)

    Östh, Jonas; Brolin, Karin; Carlsson, Stina; Wismans, Jac; Davidsson, Johan

    2012-01-01

    The aim of this study is to model occupant kinematics in an autonomous braking event by using a finite element (FE) human body model (HBM) with active muscles as a step toward HBMs that can be used for injury prediction in integrated precrash and crash simulations. Trunk and neck musculature was added to an existing FE HBM. Active muscle responses were achieved using a simplified implementation of 3 feedback controllers for head angle, neck angle, and angle of the lumbar spine. The HBM was compared with volunteer responses in sled tests with 10 m/s² deceleration over 0.2 s and in 1.4-s autonomous braking interventions with a peak deceleration of 6.7 m/s². The HBM captures the characteristics of the kinematics of volunteers in sled tests. Peak forward displacements have the same timing as for the volunteers, and lumbar muscle activation timing matches data from one of the volunteers. The responses of volunteers in autonomous braking interventions are mainly small head rotations and translational motions. This is captured by the HBM controller objective, which is to maintain the initial angular positions. The HBM response with active muscles is within ±1 standard deviation of the average volunteer response with respect to head displacements and angular rotation. With the implementation of feedback control of active musculature in an FE HBM it is possible to model the occupant response to autonomous braking interventions. The lumbar controller is important for simulations of lap-belt-restrained occupants; it is less important for the kinematics of occupants with a modern 3-point seat belt. Increasing head and neck controller gains provides a better correlation for head rotation, whereas it reduces the vertical head displacement and introduces oscillations.

  7. What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman

    2018-01-01

    Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While the different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from the temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken at Mount Wilson Observatory in California, the University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data with their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher-accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.
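The standard refraction treatment such calculators start from can be sketched as follows: a conventional standard altitude of -0.833° (about 34 arcminutes of horizontal refraction plus the Sun's roughly 16-arcminute semidiameter) substituted into the geometric sunset hour-angle formula. This is a generic textbook sketch, not the paper's interchangeable-refraction calculator:

```python
import math

def sunset_hour_angle(lat_deg, dec_deg, h0_deg=-0.833):
    """Hours between local solar noon and sunset for a given latitude and
    solar declination. The conventional standard altitude h0 = -0.833 deg
    folds ~34' of horizontal refraction and the Sun's ~16' semidiameter
    into the geometric formula."""
    lat, dec, h0 = (math.radians(v) for v in (lat_deg, dec_deg, h0_deg))
    cos_h = (math.sin(h0) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    if not -1.0 <= cos_h <= 1.0:
        return None            # polar night (cos_h > 1) or polar day (cos_h < -1)
    return math.degrees(math.acos(cos_h)) / 15.0   # 15 deg of hour angle per hour
```

At the equator on an equinox this gives about 6.06 h rather than the geometric 6 h, i.e. refraction lengthens the apparent day by a few minutes; more refined models replace the fixed h0 with temperature- and pressure-dependent refraction, which is exactly the component the paper varies.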

  8. An extended macro traffic flow model accounting for multiple optimal velocity functions with different probabilities

    Science.gov (United States)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-08-01

    Because the maximum velocities and safe headway distances of different vehicles are not exactly the same, an extended macro model of traffic flow that considers multiple optimal velocity functions with probabilities is proposed in this paper. By means of linear stability theory, the new model's linear stability condition accounting for multiple-probability optimal velocities is obtained. The KdV-Burgers equation is derived through nonlinear analysis to describe the propagation behavior of the traffic density wave near the neutral stability line. Numerical simulations of the influences of multiple maximum velocities and multiple safety distances on the model's stability and traffic capacity are carried out, covering three cases: two different maximum speeds with the same safe headway distance, two different safe headway distances with the same maximum speed, and two different maximum velocities combined with two different time gaps. The first case demonstrates that as the proportion of vehicles with a larger vmax increases, the traffic tends to become unstable, meaning that sharp acceleration and braking are not conducive to traffic stability and more easily result in stop-and-go phenomena. The second case shows that as the proportion of vehicles with greater safety spacing increases, the traffic also tends to become unstable, meaning that overly cautious assumptions or weak driving skills are not conducive to traffic stability. The last case indicates that increasing the maximum speed is not conducive to traffic stability, while reducing the safe headway distance is conducive to it. Numerical simulation shows that mixed driving and traffic diversion have no effect on traffic capacity when the traffic density is low or heavy.
    Numerical results also show that mixed driving should be chosen to increase traffic capacity when the traffic density is lower, while traffic diversion should be chosen to increase traffic capacity when

  9. Where's the problem? Considering Laing and Esterson's account of schizophrenia, social models of disability, and extended mental disorder.

    Science.gov (United States)

    Cooper, Rachel

    2017-08-01

    In this article, I compare and evaluate R. D. Laing and A. Esterson's account of schizophrenia as developed in Sanity, Madness and the Family (1964), social models of disability, and accounts of extended mental disorder. These accounts claim that some putative disorders (schizophrenia, disability, certain mental disorders) should not be thought of as reflecting biological or psychological dysfunction within the afflicted individual, but instead as external problems (to be located in the family, or in the material and social environment). In this article, I consider the grounds on which such claims might be supported. I argue that problems should not be located within an individual putative patient in cases where there is some acceptable test environment in which there is no problem. A number of cases where such an argument can show that there is no internal disorder are discussed. I argue, however, that Laing and Esterson's argument, that schizophrenia is not located within the diagnosed patients, does not work. The problem with their argument is that they fail to show that the diagnosed women in their study function adequately in any environment.

  10. Method for determining the duration of construction basing on evolutionary modeling taking into account random organizational expectations

    Directory of Open Access Journals (Sweden)

    Alekseytsev Anatoliy Viktorovich

    2016-10-01

    Full Text Available One of the problems of construction planning is the failure to meet time constraints and the resulting increase in workflow duration. In recent years, information technologies have been used effectively to estimate construction periods. This article considers the problem of optimally estimating the duration of construction while taking into account possible organizational expectations. To solve it, an iterative evolutionary-modeling scheme is developed in which random values of organizational expectations are used as variable parameters. Adjustable genetic operators are used to improve the efficiency of the search for solutions. The reliability of the proposed approach is illustrated by an example of forming construction schedules for monolithic building foundations, taking into account possible disruptions in the supply of concrete and reinforcement cages. The presented methodology enables automated generation of several alternative construction schedules in accordance with a standard or directive duration. This computational procedure also has the prospect of accounting for construction downtime due to weather, accidents related to construction machinery breakdowns, or local emergency collapses of the structures being erected.
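A minimal evolutionary-search sketch of the kind of iteration scheme described above. The selection scheme, operators, and the toy crew-sizing fitness (with random delays standing in for organizational expectations) are all illustrative assumptions, not the article's adjustable genetic operators:

```python
import random

def evolve(fitness, gene_space, pop_size=30, generations=60, p_mut=0.2, seed=0):
    """Minimal evolutionary search: truncation selection, one-point
    crossover and per-gene mutation; `fitness` is minimized."""
    rng = random.Random(seed)
    n = len(gene_space)
    pop = [[rng.choice(gene_space[i]) for i in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]          # keep the best half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n):
                if rng.random() < p_mut:       # per-gene mutation
                    child[i] = rng.choice(gene_space[i])
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Toy schedule fitness: choose per-task crew sizes; each task takes
# work[i]/crew days plus a random delay standing in for organizational
# expectations, averaged over Monte Carlo samples.
work = [12.0, 8.0, 20.0]

def expected_duration(genes, samples=50):
    rng = random.Random(42)                    # fixed seed keeps fitness deterministic
    total = 0.0
    for _ in range(samples):
        total += sum(w / c + rng.uniform(0.0, 1.0) for w, c in zip(work, genes))
    return total / samples

best = evolve(expected_duration, gene_space=[[1, 2, 3, 4]] * 3)
```

Averaging the random delays inside the fitness is one simple way to fold stochastic expectations into the objective; the article's scheme instead treats the random values themselves as variable parameters of the search.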

  11. Optimization model of energy mix taking into account the environmental impact

    International Nuclear Information System (INIS)

    Gruenwald, O.; Oprea, D.

    2012-01-01

    At present, the energy system in the Czech Republic faces some important decisions regarding limited fossil resources, greater efficiency in producing electrical energy and reducing emission levels of pollutants. These problems can be resolved only by formulating and implementing an energy mix that is rational, reliable, sustainable and competitive. The aim of this article is to find a new way of determining an optimal mix for the energy system in the Czech Republic. To achieve this aim, a linear optimization model comprising several economic, environmental and technical aspects is applied. (Authors)
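The linear optimization described above can be illustrated by brute-forcing a tiny mix problem: minimize generation cost subject to meeting demand and an emissions ceiling. All source names, costs and emission factors below are invented for illustration, not Czech system data:

```python
def optimal_mix(demand, sources, emission_cap):
    """Brute-force a small linear program on an integer grid: minimize
    generation cost subject to meeting demand exactly and keeping total
    emissions under the cap."""
    names = list(sources)
    best, best_cost = None, float("inf")
    for a in range(demand + 1):
        for b in range(demand + 1):
            c = demand - a - b
            if c < 0:
                continue
            mix = dict(zip(names, (a, b, c)))
            emissions = sum(sources[n]["em"] * q for n, q in mix.items())
            cost = sum(sources[n]["cost"] * q for n, q in mix.items())
            if emissions <= emission_cap and cost < best_cost:
                best, best_cost = mix, cost
    return best, best_cost

# Illustrative per-unit costs and emission factors (invented figures).
sources = {
    "coal":      {"cost": 2.0, "em": 1.0},
    "nuclear":   {"cost": 3.0, "em": 0.0},
    "renewable": {"cost": 4.0, "em": 0.0},
}
mix, cost = optimal_mix(10, sources, emission_cap=4.0)
```

The cap binds the cheap, emitting source to 4 units and the remainder is filled by the cheapest clean source, exactly the trade-off a full-scale linear model resolves over many more technologies and constraints.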

  12. Research on the destruction of ice under dynamic loading. Part 1. Modeling of explosive loading of the ice cover taking into account temperature

    Directory of Open Access Journals (Sweden)

    Bogomolov Gennady N.

    2017-01-01

    Full Text Available In this research, the behavior of ice under shock and explosive loads is analyzed. Full-scale experiments were carried out; it is established that the results of 2013 practically coincide with those of 2017, which is explained by the formation temperature of the river ice. Two research objects are considered: freshwater ice and a river ice cover. The Taylor test was simulated numerically and its results are presented. Ice is described by an elastoplastic model of continuum mechanics. The process of explosive loading of ice by emulsion explosives is numerically simulated, and the destruction of the ice cover under the detonation products is analyzed in detail.

  13. Modeling liquid-vapor equilibria with an equation of state taking into account dipolar interactions and association by hydrogen bonding

    International Nuclear Information System (INIS)

    Perfetti, E.

    2006-11-01

    Modelling fluid-rock interactions as well as mixing and unmixing phenomena in geological processes requires robust equations of state (EOS) applicable to systems containing water and gases over a broad range of temperatures and pressures. Cubic equations of state based on the Van der Waals theory (e.g., Soave-Redlich-Kwong or Peng-Robinson) allow simple modelling from the critical parameters of the fluid components under study. However, the accuracy of such equations becomes poor when water is a major component of the fluid, since neither association through hydrogen bonding nor dipolar interactions are accounted for. The Helmholtz energy of a fluid may be written as the sum of different energetic contributions by factorization of the partition function. The model developed in this thesis for pure H2O and H2S considers three contributions. The first represents the reference Van der Waals fluid, modelled by the SRK cubic EOS. The second accounts for association through hydrogen bonding and is modelled by a term derived from the Cubic Plus Association (CPA) theory. The third corresponds to dipolar interactions and is modelled by the Mean Spherical Approximation (MSA) theory. The resulting CPA-MSA equation has six adjustable parameters, three of which represent physical terms whose values are close to their experimental counterparts. This equation reproduces the thermodynamic properties of pure water along the vapour-liquid equilibrium better than the classical CPA equation, and extrapolation to higher temperatures and pressures is satisfactory. Similarly, taking dipolar interactions into account together with the SRK cubic equation of state when calculating the molar volume of H2S as a function of pressure and temperature results in a significant improvement compared to the SRK equation alone. Simple mixing rules between dipolar molecules are proposed to model the H2O-H2S

  14. Photoproduction of pions on nucleons in the chiral bag model with account of recoil nucleon motion effects

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Kanokov, Z.; Musakhanov, M.M.; Rakhimov, A.M.

    1989-01-01

    Pion production on a nucleon is studied in the chiral bag model (CBM). A CBM version is investigated in which the pions penetrate into the bag and interact with the quarks in a pseudovector way throughout the entire volume. Charged-pion photoproduction amplitudes are found taking into account the recoil nucleon motion effects. Angular and energy distributions of charged pions, polarization of the recoil nucleon, and multipoles are calculated. The recoil effects are shown to give an additional contribution of order 10-20% to the static approximation. At a bag radius value R=1 the calculations are consistent with the experimental data

  15. Accounting emergy flows to determine the best production model of a coffee plantation

    Energy Technology Data Exchange (ETDEWEB)

    Giannetti, B.F.; Ogura, Y.; Bonilla, S.H. [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil); Almeida, C.M.V.B., E-mail: cmvbag@terra.com.br [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil)

    2011-11-15

    Cerrado, a savannah region, is Brazil's second largest ecosystem after the Amazon rainforest and is also threatened with imminent destruction. In the present study, emergy synthesis was applied to assess the environmental performance of a coffee farm located in Coromandel, Minas Gerais, in the Brazilian Cerrado. The effects of land use on sustainability were evaluated by comparing the emergy indices over ten years in order to assess the energy flows driving the production process and to determine the best production model combining productivity and environmental performance. The emergy indices are presented as a function of the annual crop. Results show that Santo Inacio farm should produce approximately 20 bags of green coffee per hectare to achieve its best performance regarding both production efficiency and the environment. The evaluation of the coffee trade complements the results obtained by contrasting productivity and environmental performance, and despite the variation in market prices, the optimum interval for Santo Inacio farm is between 10 and 25 coffee bags/ha. - Highlights: > Emergy synthesis is used to assess the environmental performance of a coffee farm in Brazil. > The effects of land use on sustainability were evaluated over ten years. > The energy flows driving the production process were assessed. > The best production model combining productivity and environmental performance was determined.

  16. Accounting emergy flows to determine the best production model of a coffee plantation

    International Nuclear Information System (INIS)

    Giannetti, B.F.; Ogura, Y.; Bonilla, S.H.; Almeida, C.M.V.B.

    2011-01-01

    Cerrado, a savannah region, is Brazil's second largest ecosystem after the Amazon rainforest and is likewise threatened with imminent destruction. In the present study emergy synthesis was applied to assess the environmental performance of a coffee farm located in Coromandel, Minas Gerais, in the Brazilian Cerrado. The effects of land use on sustainability were evaluated by comparing the emergy indices over ten years in order to assess the energy flows driving the production process and to determine the best production model combining productivity and environmental performance. The emergy indices are presented as a function of the annual crop. Results show that Santo Inacio farm should produce approximately 20 bags of green coffee per hectare to achieve its best performance regarding both production efficiency and the environment. The evaluation of the coffee trade complements the results obtained by contrasting productivity and environmental performance, and despite market price variations the optimum interval for Santo Inacio's farm is between 10 and 25 coffee bags/ha. - Highlights: → Emergy synthesis is used to assess the environmental performance of a coffee farm in Brazil. → The effects of land use on sustainability were evaluated over ten years. → The energy flows driving the production process were assessed. → The best production model combining productivity and environmental performance was determined.

  17. The Influence of Feedback on Task-Switching Performance: A Drift Diffusion Modeling Account

    Directory of Open Access Journals (Sweden)

    Russell Cohen Hoffing

    2018-02-01

    Task-switching is an important cognitive skill that facilitates our ability to choose appropriate behavior in a varied and changing environment. Task-switching training studies have sought to improve this ability by practicing switching between multiple tasks. However, an efficacious training paradigm has been difficult to develop, in part due to findings that small differences in task parameters influence switching behavior in a non-trivial manner. Here, for the first time, we employ the Drift Diffusion Model (DDM) to understand the influence of feedback on task-switching and investigate how drift diffusion parameters change over the course of task-switch training. We trained 316 participants on a simple task where they alternated sorting stimuli by color or by shape. Feedback differed in six different ways between subject groups, ranging from No Feedback (NFB) to a variety of manipulations addressing trial-wise vs. Block Feedback (BFB), rewards vs. punishments, payment bonuses, and different payouts depending upon the trial type (switch/non-switch). While overall performance was found to be affected by feedback, no effect of feedback was found on task-switching learning. Drift diffusion modeling revealed that the reductions in reaction time (RT) switch cost over the course of training were driven by a continually decreasing decision boundary. Furthermore, feedback effects on RT switch cost were also driven by differences in decision boundary, but not in drift rate. These results reveal that participants systematically modified their task-switching performance without yielding an overall gain in performance.
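The decision-boundary mechanism invoked in this record can be illustrated with a minimal two-boundary drift diffusion simulation (a sketch only, not the authors' fitted model; the drift, boundary, and noise values are assumed for illustration):

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=1000, dt=0.002, noise=1.0, seed=0):
    """Simulate a symmetric two-boundary drift diffusion process.

    Evidence starts at 0 and accumulates with the given drift rate plus
    Gaussian noise until it crosses +boundary (correct response) or
    -boundary (error). Returns mean reaction time and accuracy."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= boundary)
    return float(np.mean(rts)), float(np.mean(correct))

# Lowering the decision boundary trades accuracy for speed, the pattern the
# record attributes to training-related reductions in RT switch cost.
rt_wide, acc_wide = simulate_ddm(drift=1.5, boundary=1.0)
rt_narrow, acc_narrow = simulate_ddm(drift=1.5, boundary=0.5)
```

A narrower boundary yields faster but less accurate responses, which is why a decreasing boundary can reduce RT switch cost without an overall performance gain.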

  18. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    Science.gov (United States)

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls, which are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
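The core problem this record addresses, that non-detection does not imply absence, can be sketched with a toy single-species simulation (illustrative only; here the detection probability is assumed known, whereas the two-species occupancy model estimates it jointly with occupancy):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_surveys = 5000, 5
psi, p = 0.6, 0.4          # true occupancy and per-survey detection probability

# Simulate occupancy and detection histories over repeated surveys.
occupied = rng.random(n_sites) < psi
detections = (rng.random((n_sites, n_surveys)) < p) & occupied[:, None]

# The naive estimate (fraction of sites with at least one detection)
# underestimates true occupancy because some occupied sites are never detected.
naive_occupancy = float(np.mean(detections.any(axis=1)))

# Correcting for imperfect detection: an occupied site is detected at least
# once with probability 1 - (1 - p)^K.
p_star = 1 - (1 - p) ** n_surveys
corrected = naive_occupancy / p_star
```

With these assumed values the naive estimate sits near 0.55 while the corrected one recovers the true 0.6, mirroring why the authors insist on modeling detection probability explicitly.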

  19. On the influence of debris in glacier melt modelling: a new temperature-index model accounting for the debris thickness feedback

    Science.gov (United States)

    Carenzo, Marco; Mabillard, Johan; Pellicciotti, Francesca; Reid, Tim; Brock, Ben; Burlando, Paolo

    2013-04-01

    The increase of rockfalls from the surrounding slopes and of englacial melt-out material has led to an increase of the debris cover extent on Alpine glaciers. In recent years, distributed debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are often unavailable, especially in remote mountain areas such as the Himalaya. Some input data, such as wind or temperature, are also difficult to extrapolate from station measurements. Due to their lower data requirements, empirical models have been used in glacier melt modelling. However, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of debris thickness on melt. In this paper, we present a new temperature-index model accounting for the debris thickness feedback in the computation of melt rates at the debris-ice interface. The empirical parameters (temperature factor, shortwave radiation factor, and lag factor accounting for the energy transfer through the debris layer) are optimized at the point scale for several debris thicknesses against melt rates simulated by a physically based debris energy-balance model. The latter has been validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. The new model is developed on Miage Glacier, Italy, a debris-covered glacier whose ablation area is mantled in a near-continuous layer of rock. Subsequently, its transferability is tested on Haut Glacier d'Arolla, Switzerland, where debris is thinner and its extent has expanded in recent decades. 
The results show that the performance of the new debris temperature-index model (DETI) in simulating the glacier melt rate at the point scale
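The debris-thickness feedback described above can be sketched as an enhanced temperature-index melt formula whose factors are damped as the debris layer thickens (all parameter values and the damping form are assumptions for illustration, not the calibrated values from the paper):

```python
def deti_melt(temp_c, swrad_wm2, debris_m, tf=0.05, srf=0.0094):
    """Toy enhanced temperature-index melt rate (arbitrary melt units).

    tf is an assumed temperature factor and srf an assumed shortwave
    radiation factor; both are scaled down with debris thickness to mimic
    the insulating effect of a thick debris layer."""
    damping = 1.0 / (1.0 + 10.0 * debris_m)   # assumed thickness feedback
    if temp_c <= 0:
        return 0.0
    return damping * (tf * temp_c + srf * swrad_wm2)
```

Melt rises with temperature and shortwave radiation but is progressively suppressed as debris thickens, which is the behaviour a single melt-reduction factor cannot capture.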

  20. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections

    Directory of Open Access Journals (Sweden)

    Fei Guo

    2016-06-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model, whereas the traditional model provides only correction values and omits the precision indexes entirely. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform those with the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be greatly removed, and the resulting wide-lane ambiguities are more likely to be fixed. 
Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the
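The Melbourne-Wübbena combination mentioned in the record is a standard construction: wide-lane phase minus narrow-lane code, which cancels geometry, clocks, and ionosphere. A sketch with synthetic, error-free observations (the BeiDou B1I/B2I frequencies and all observation values are assumptions for illustration):

```python
C = 299_792_458.0                   # speed of light (m/s)
F1, F2 = 1561.098e6, 1207.140e6     # BeiDou B1I / B2I carrier frequencies (Hz)

def melbourne_wubbena(l1_m, l2_m, p1_m, p2_m):
    """Melbourne-Wubbena combination (metres): wide-lane phase minus
    narrow-lane code. On ideal data only the wide-lane ambiguity term
    survives; on real data code noise and code biases remain on top."""
    phase_wl = (F1 * l1_m - F2 * l2_m) / (F1 - F2)
    code_nl = (F1 * p1_m + F2 * p2_m) / (F1 + F2)
    return phase_wl - code_nl

# Synthetic, error-free observations: a common range rho plus integer
# carrier ambiguities on each frequency (phase expressed in metres).
rho, n1, n2 = 22_345_678.901, 11_000_000, 10_999_993
lam1, lam2 = C / F1, C / F2
mw = melbourne_wubbena(rho + lam1 * n1, rho + lam2 * n2, rho, rho)
lam_wl = C / (F1 - F2)              # wide-lane wavelength, about 0.85 m
```

Dividing `mw` by the wide-lane wavelength recovers the wide-lane ambiguity `n1 - n2`; on real BeiDou data the satellite-induced code bias perturbs exactly this quantity, which is why the correction models improve ambiguity fixing.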

  1. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it gives a detailed presentation of all well-known algorithms, elucidating their merits and weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  2. Modeling of the core of Atucha II nuclear power plant

    International Nuclear Information System (INIS)

    Blanco, Anibal

    2007-01-01

    This work is part of a Nuclear Engineering degree thesis at the Instituto Balseiro and was carried out within the development of an Argentinean nuclear power plant simulator. To obtain the best representation of the reactor's physical behavior using state-of-the-art tools, this simulator should couple a 3D neutronic core calculation code with a thermal-hydraulic system code. Focusing on the neutronic side of this work, we modeled the core of the Atucha 2 nuclear power plant and performed calculations using PARCS. Wherever possible, we compare our results against results obtained with PUMA (the official core code for Atucha 2). (author) [es

  3. PIO I-II tendencies. Part 2. Improving the pilot modeling

    Directory of Open Access Journals (Sweden)

    Ioan URSU

    2011-03-01

    The study is conceived in two parts and aims to contribute to the problem of PIO aircraft susceptibility analysis. Part I, previously published in this journal, highlighted the main steps of deriving a complex model of the human pilot. The current Part II of the paper considers a proper procedure for the synthesis of the human pilot mathematical model in order to analyze the PIO II-type susceptibility of a VTOL-type aircraft, related to the presence of a position- and rate-limited actuator. The mathematical tools are those of the semi-global stability theory developed in recent works.

  4. Tropospheric ozone and the environment II. Effects, modeling and control

    International Nuclear Information System (INIS)

    Berglund, R.L.

    1992-01-01

    This was the sixth International Specialty Conference on ozone for the Air & Waste Management Association since 1978 and the first to be held in the Southeast. Of the preceding five conferences, three were held in Houston, one in New England, and one in Los Angeles. The changing location continues to support the understanding that tropospheric ozone is a nationwide problem, requiring understanding and participation by representatives of all regions. Yet questions such as the following continue to be raised over all aspects of the nation's efforts to control ozone. Are the existing primary and secondary National Ambient Air Quality Standards (NAAQS) for ozone the appropriate targets for the ozone control strategy, or should they be modified to more effectively accommodate new health or ecological effects information, or to better fit statistical analyses of ozone modeling data? Are the modeling tools presently available adequate to predict ozone concentrations for future precursor emission trends? What ozone attainment strategy will be the best means of meeting the ozone standard? To best answer these and other questions, there needs to be a continued sharing of information among researchers working on them. While answers to these questions will often be qualitative and location-specific, they will help focus future research programs and assist in developing future regulatory strategies.

  5. MODELING THE 1958 LITUYA BAY MEGA-TSUNAMI, II

    Directory of Open Access Journals (Sweden)

    Charles L. Mader

    2002-01-01

    Lituya Bay, Alaska is a T-shaped bay, 7 miles long and up to 2 miles wide. The two arms at the head of the bay, Gilbert and Crillon Inlets, are part of a trench along the Fairweather Fault. On July 8, 1958, a magnitude 7.5 earthquake occurred along the Fairweather Fault with an epicenter near Lituya Bay. A mega-tsunami wave was generated that washed out trees to a maximum altitude of 520 meters at the entrance of Gilbert Inlet. Much of the rest of the shoreline of the bay was denuded by the tsunami from 30 to 200 meters altitude. In the previous study it was determined that if the 520-meter-high run-up was 50 to 100 meters thick, the observed inundation in the rest of Lituya Bay could be numerically reproduced. It was also concluded that further studies would require full Navier-Stokes modeling similar to that required for asteroid-generated tsunami waves. During the summer of 2000, Hermann Fritz conducted experiments that reproduced the Lituya Bay 1958 event. The laboratory experiments indicated that the 1958 Lituya Bay 524-meter run-up on the spur ridge of Gilbert Inlet could be caused by a landslide impact. The Lituya Bay impact-landslide-generated tsunami was modeled with the full Navier-Stokes AMR Eulerian compressible hydrodynamic code called SAGE, which includes the effect of gravity.

  6. Model of investment appraisal of high-rise construction with account of cost of land resources

    Science.gov (United States)

    Okolelova, Ella; Shibaeva, Marina; Trukhina, Natalya

    2018-03-01

    The article considers the problems and potential of high-rise construction as part of global urbanization. The results of theoretical and practical studies on the appraisal of investments in high-rise construction are provided. High-rise construction has a number of apparent upsides in modern conditions of megapolis development, and primarily it is economically efficient. Amid a serious lack of construction sites, skyscrapers successfully meet the need for manufacturing, office, and living premises. Nevertheless, there are plenty of issues related to high-rise construction, and only thorough scrutiny of them allows estimating the real economic efficiency of this branch. The article focuses on the question of the economic efficiency of high-rise construction. The suggested model allows adjusting the parameters of a facility under construction, setting the tone for market value as well as the coefficient for appreciation of the construction net cost, which depends on the number of storeys, in the form of a function or discrete values.

  7. A coupled surface/subsurface flow model accounting for air entrapment and air pressure counterflow

    DEFF Research Database (Denmark)

    Delfs, Jens Olaf; Wang, Wenqing; Kalbacher, Thomas

    2013-01-01

    This work introduces the soil air system into integrated hydrology by simulating the flow processes and interactions of surface runoff, soil moisture and air in the shallow subsurface. The numerical model is formulated as a coupled system of partial differential equations for hydrostatic (diffusive wave) shallow flow and two-phase flow in a porous medium. The simultaneous mass transfer between the soil, overland, and atmosphere compartments is achieved by upgrading a fully established leakance concept for overland-soil liquid exchange to an air exchange flux between soil and atmosphere. In a new ... the mass exchange between compartments. A benchmark test, which is based on a classic experimental data set on infiltration excess (Horton) overland flow, identified a feedback mechanism between surface runoff and soil air pressures. Our study suggests that air compression in soils amplifies surface runoff...

  8. Does Reading Cause Later Intelligence? Accounting for Stability in Models of Change.

    Science.gov (United States)

    Bailey, Drew H; Littlefield, Andrew K

    2017-11-01

    This study reanalyzes data presented by Ritchie, Bates, and Plomin (2015) who used a cross-lagged monozygotic twin differences design to test whether reading ability caused changes in intelligence. The authors used data from a sample of 1,890 monozygotic twin pairs tested on reading ability and intelligence at five occasions between the ages of 7 and 16, regressing twin differences in intelligence on twin differences in prior intelligence and twin differences in prior reading ability. Results from a state-trait model suggest that reported effects of reading ability on later intelligence may be artifacts of previously uncontrolled factors, both environmental in origin and stable during this developmental period, influencing both constructs throughout development. Implications for cognitive developmental theory and methods are discussed. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  9. Modelling the distribution of fish accounting for spatial correlation and overdispersion

    DEFF Research Database (Denmark)

    Lewy, Peter; Kristensen, Kasper

    2009-01-01

    The spatial distribution of cod (Gadus morhua) in the North Sea and the Skagerrak was analysed over a 24-year period using the Log Gaussian Cox Process (LGCP). In contrast to other spatial models of the distribution of fish, LGCP avoids problems with zero observations and includes the spatial correlation between observations. It is therefore possible to predict and interpolate unobserved densities at any location in the area. This is important for obtaining unbiased estimates of stock concentration and other measures depending on the distribution in the entire area. Results show that the spatial correlation and dispersion of cod catches remained unchanged during winter throughout the period, in spite of a drastic decline in stock abundance and a movement of the centre of gravity of the distribution towards the northeast in the same period. For the age groups considered, the concentration of the stock...
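The overdispersion that motivates the LGCP can be reproduced in a few lines: counts are Poisson conditional on a log-intensity given by a latent Gaussian field (a toy 1-D version with assumed covariance parameters, not the authors' spatio-temporal model of catches):

```python
import numpy as np

rng = np.random.default_rng(42)

# Latent Gaussian random field on a 1-D grid with exponential covariance.
n, sigma2, scale = 200, 1.0, 10.0
x = np.arange(n)
cov = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / scale)
gfield = np.linalg.cholesky(cov + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

# LGCP: counts are Poisson with log-intensity mu plus the Gaussian field.
mu = 2.0
counts = rng.poisson(np.exp(mu + gfield))

# The latent field induces overdispersion (variance well above the mean)
# and spatial clustering, unlike a plain Poisson sample with a fixed rate.
```

Zero counts arise naturally wherever the latent intensity is low, which is how the LGCP sidesteps the zero-observation problem of other spatial models.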

  10. The application of multilevel modelling to account for the influence of walking speed in gait analysis.

    Science.gov (United States)

    Keene, David J; Moe-Nilssen, Rolf; Lamb, Sarah E

    2016-01-01

    Differences in gait performance can be explained by variations in walking speed, which is a major analytical problem. Some investigators have standardised speed during testing, but this can result in an unnatural control of gait characteristics. Other investigators have developed test procedures where participants walk at their self-selected slow, preferred, and fast speeds, with computation of gait characteristics at a standardised speed. However, this analysis depends upon an overlap in the ranges of gait speed observed within and between participants, and this is difficult to achieve under self-selected conditions. In this report a statistical analysis procedure is introduced that utilises multilevel modelling to analyse data from walking tests at self-selected speeds, without requiring an overlap in the range of speeds observed or the routine use of data transformations. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

    KAUST Repository

    De La Garza Martinez, Pablo

    2016-05-01

    Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility that fluids remain trapped inside the pore space. In this work I propose a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not perturb the accuracy of the results. Additionally, I include spatial correlation in the generation of pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values of irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implement the algorithm of Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
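The matrix decomposition method mentioned above for spatially correlated pore sizes is commonly a Cholesky factorisation of a correlation matrix; a minimal sketch (the pore layout, correlation length, and lognormal size distribution are all assumed here for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Pore centres on a line; correlation decays with separation distance.
n, corr_len = 100, 5.0
pos = np.arange(n, dtype=float)
corr = np.exp(-np.abs(pos[:, None] - pos[None, :]) / corr_len)

# Matrix (Cholesky) decomposition turns i.i.d. standard normals into a
# vector with exactly the prescribed correlation structure.
L = np.linalg.cholesky(corr + 1e-12 * np.eye(n))
z = L @ rng.standard_normal(n)

# Map the correlated normals to lognormal pore radii (assumed distribution,
# ~20 micron median chosen for illustration).
radii = np.exp(np.log(20e-6) + 0.5 * z)

# Neighbouring pores now receive similar sizes; the lag-1 correlation of the
# underlying field should be close to exp(-1 / corr_len) ~ 0.82.
near = np.corrcoef(z[:-1], z[1:])[0, 1]
```

This is the correlation whose omission, the record argues, biases relative permeability and irreducible saturation estimates.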

  12. Accounting for disturbance history in models: using remote sensing to constrain carbon and nitrogen pool spin-up.

    Science.gov (United States)

    Hanan, Erin J; Tague, Christina; Choate, Janet; Liu, Mingliang; Kolden, Crystal; Adam, Jennifer

    2018-03-24

    Disturbances such as wildfire, insect outbreaks, and forest clearing, play an important role in regulating carbon, nitrogen, and hydrologic fluxes in terrestrial watersheds. Evaluating how watersheds respond to disturbance requires understanding mechanisms that interact over multiple spatial and temporal scales. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by uncertainties in the initial state of plant carbon and nitrogen stores. Watershed models typically use one of two methods to initialize these stores: spin-up to steady state, or remote sensing with allometric relationships. Spin-up involves running a model until vegetation reaches equilibrium based on climate; this approach assumes that vegetation across the watershed has reached maturity and is of uniform age, which fails to account for landscape heterogeneity and non-steady state conditions. By contrast, remote sensing can provide data for initializing such conditions. However, methods for assimilating remote sensing into model simulations can also be problematic. They often rely on empirical allometric relationships between a single vegetation variable and modeled carbon and nitrogen stores. Because allometric relationships are species- and region-specific, they do not account for the effects of local resource limitation, which can influence carbon allocation (to leaves, stems, roots, etc.). To address this problem, we developed a new initialization approach using the catchment-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spin-up with the spatial fidelity of remote sensing. It uses remote sensing to define spatially explicit targets for one or several vegetation state variables, such as leaf area index, across a watershed. The model then simulates the growth of carbon and nitrogen stores until the defined targets are met for all locations. 
We evaluated this approach in a mixed pine-dominated watershed in

  13. Retinal ganglion cell neuroprotection by an angiotensin II blocker in an ex vivo retinal explant model.

    Science.gov (United States)

    White, Andrew J R; Heller, Janosch P; Leung, Johahn; Tassoni, Alessia; Martin, Keith R

    2015-12-01

    An ex vivo organotypic retinal explant model was developed to examine retinal survival mechanisms relevant to glaucoma mediated by the renin angiotensin system in the rodent eye. Eyes from adult Sprague Dawley rats were enucleated immediately post-mortem and used to make four retinal explants per eye. Explants were treated either with irbesartan (10 µM), vehicle or angiotensin II (2 μM) for four days. Retinal ganglion cell density was estimated by βIII tubulin immunohistochemistry. Live imaging of superoxide formation with dihydroethidium (DHE) was performed. Protein expression was determined by Western blotting, and mRNA expression was determined by RT-PCR. Irbesartan (10 µM) almost doubled ganglion cell survival after four days. Angiotensin II (2 µM) reduced cell survival by 40%. Sholl analysis suggested that irbesartan improved ganglion cell dendritic arborisation compared to control and angiotensin II reduced it. Angiotensin-treated explants showed an intense DHE fluorescence not seen in irbesartan-treated explants. Analysis of protein and mRNA expression determined that the angiotensin II receptor At1R was implicated in modulation of the NADPH-dependent pathway of superoxide generation. Angiotensin II blockers protect retinal ganglion cells in this model and may be worth further investigation as a neuroprotective treatment in models of eye disease. © The Author(s) 2015.

  14. Neurologic abnormalities in mouse models of the lysosomal storage disorders mucolipidosis II and mucolipidosis III γ.

    Directory of Open Access Journals (Sweden)

    Rachel A Idol

    UDP-GlcNAc:lysosomal enzyme N-acetylglucosamine-1-phosphotransferase is an α2β2γ2 hexameric enzyme that catalyzes the synthesis of the mannose 6-phosphate targeting signal on lysosomal hydrolases. Mutations in the α/β subunit precursor gene cause the severe lysosomal storage disorder mucolipidosis II (ML II) or the more moderate mucolipidosis III alpha/beta (ML III α/β), while mutations in the γ subunit gene cause the mildest disorder, mucolipidosis III gamma (ML III γ). Here we report neurologic consequences of mouse models of ML II and ML III γ. The ML II mice have a total loss of acid hydrolase phosphorylation, which results in depletion of acid hydrolases in mesenchymal-derived cells. The ML III γ mice retain partial phosphorylation. However, in both cases, total brain extracts have normal or near normal activity of many acid hydrolases reflecting mannose 6-phosphate-independent lysosomal targeting pathways. While behavioral deficits occur in both models, the onset of these changes occurs sooner and the severity is greater in the ML II mice. The ML II mice undergo progressive neurodegeneration with neuronal loss, astrocytosis, microgliosis and Purkinje cell depletion which was evident at 4 months whereas ML III γ mice have only mild to moderate astrocytosis and microgliosis at 12 months. Both models accumulate the ganglioside GM2, but only ML II mice accumulate fucosylated glycans. We conclude that in spite of active mannose 6-phosphate-independent targeting pathways in the brain, there are cell types that require at least partial phosphorylation function to avoid lysosomal dysfunction and the associated neurodegeneration and behavioral impairments.

  15. Neurologic abnormalities in mouse models of the lysosomal storage disorders mucolipidosis II and mucolipidosis III γ.

    Science.gov (United States)

    Idol, Rachel A; Wozniak, David F; Fujiwara, Hideji; Yuede, Carla M; Ory, Daniel S; Kornfeld, Stuart; Vogel, Peter

    2014-01-01

    UDP-GlcNAc:lysosomal enzyme N-acetylglucosamine-1-phosphotransferase is an α2β2γ2 hexameric enzyme that catalyzes the synthesis of the mannose 6-phosphate targeting signal on lysosomal hydrolases. Mutations in the α/β subunit precursor gene cause the severe lysosomal storage disorder mucolipidosis II (ML II) or the more moderate mucolipidosis III alpha/beta (ML III α/β), while mutations in the γ subunit gene cause the mildest disorder, mucolipidosis III gamma (ML III γ). Here we report neurologic consequences of mouse models of ML II and ML III γ. The ML II mice have a total loss of acid hydrolase phosphorylation, which results in depletion of acid hydrolases in mesenchymal-derived cells. The ML III γ mice retain partial phosphorylation. However, in both cases, total brain extracts have normal or near normal activity of many acid hydrolases reflecting mannose 6-phosphate-independent lysosomal targeting pathways. While behavioral deficits occur in both models, the onset of these changes occurs sooner and the severity is greater in the ML II mice. The ML II mice undergo progressive neurodegeneration with neuronal loss, astrocytosis, microgliosis and Purkinje cell depletion which was evident at 4 months whereas ML III γ mice have only mild to moderate astrocytosis and microgliosis at 12 months. Both models accumulate the ganglioside GM2, but only ML II mice accumulate fucosylated glycans. We conclude that in spite of active mannose 6-phosphate-independent targeting pathways in the brain, there are cell types that require at least partial phosphorylation function to avoid lysosomal dysfunction and the associated neurodegeneration and behavioral impairments.

  16. Hysteresis modelling of GO laminations for arbitrary in-plane directions taking into account the dynamics of orthogonal domain walls

    Energy Technology Data Exchange (ETDEWEB)

    Baghel, A.P.S.; Sai Ram, B. [Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai 400076 (India); Chwastek, K. [Department of Electrical Engineering Czestochowa University of Technology (Poland); Daniel, L. [Group of Electrical Engineering-Paris (GeePs), CNRS(UMR8507)/CentraleSupelec/UPMC/Univ Paris-Sud, 11 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Kulkarni, S.V. [Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai 400076 (India)

    2016-11-15

    The anisotropy of magnetic properties in grain-oriented steels is related to their microstructure. It results from the anisotropy of the single-crystal properties combined with crystallographic texture. The magnetization process along arbitrary directions can be explained using phase equilibrium for domain patterns, which can be described using Néel's phase theory. According to the theory, the fractions of 180° and 90° domain walls depend on the direction of magnetization. This paper presents an approach to model hysteresis loops of grain-oriented steels along arbitrary in-plane directions. The considered description is based on a modification of the Jiles–Atherton model. It includes a modified expression for the anhysteretic magnetization which takes into account the contributions of the two types of domain walls. The computed hysteresis curves for different directions are in good agreement with experimental results. - Highlights: • An extended Jiles–Atherton description is used to model hysteresis loops in GO steels. • The model stresses the role of material anisotropy and the different contributions of the two types of domain walls. • Hysteresis loops can be modeled along arbitrary in-plane directions. • Modeling results are in good agreement with experiments.
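An anhysteretic magnetization combining two domain-wall populations can be sketched as a weighted sum of Langevin terms, loosely mirroring the modified Jiles–Atherton expression described above (the weights and shape parameters below are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with its small-x limit x/3."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    xs = np.where(small, 1.0, x)           # placeholder avoids division by zero
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

def anhysteretic(h, ms=1.6e6, a180=50.0, a90=400.0, w180=0.7):
    """Sketch of an anhysteretic curve with two domain-wall contributions.

    The 180-degree walls (weight w180, smaller shape parameter a180) respond
    to the field more easily than the 90-degree walls (a90); the weights
    would vary with the in-plane magnetization direction in the full model."""
    return ms * (w180 * langevin(h / a180) + (1 - w180) * langevin(h / a90))
```

The resulting curve is zero at zero field, rises monotonically, and saturates at `ms`, which is the qualitative shape any valid anhysteretic expression must reproduce.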

  17. Modelling of a mecanum wheel taking into account the geometry of road rollers

    Science.gov (United States)

    Hryniewicz, P.; Gwiazda, A.; Banaś, W.; Sękala, A.; Foit, K.

    2017-08-01

    During process planning in a company, one of the basic factors in production cost is the operation time of particular technological jobs. The operation time consists of the time spent on the machining tasks of a workpiece as well as the time spent loading, unloading, and transporting the workpiece between machining stands. Full automation of manufacturing in industrial companies tends toward a maximal reduction in machine downtime, thereby simultaneously decreasing fixed costs. A new construction of wheeled vehicles, using Mecanum wheels, reduces the transport time of materials and workpieces between machining stands. These vehicles can move simultaneously along two axes and can therefore position themselves more rapidly relative to a machining stand. The Mecanum wheel construction places free rollers around the wheel, mounted at an angle of 45°, which allow the vehicle to move not only along its axis but also perpendicular to it. Improper selection of the rollers can cause unwanted vertical movement of the vehicle, which may make positioning the vehicle relative to the machining stand difficult and create a need for stabilisation. Hence the proper design of the free rollers is essential in designing the whole Mecanum wheel construction, as it avoids the disadvantageous and unwanted vertical vibrations of a vehicle with these wheels. The article presents the process of modelling the free rollers in order to obtain the desired unchanging, horizontal trajectory of the vehicle. This shape depends on the desired diameter of the whole Mecanum wheel, together with the road rollers, and on the width of the drive wheel. Another factor related to the curvature of the trajectory is the length of the road roller and the way its diameter decreases with position relative to its centre. The additional factor, limiting construction of
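The omnidirectional motion that the 45° rollers permit can be illustrated with the standard inverse kinematics of a four-wheel Mecanum platform; the wheel radius and chassis geometry below are assumed values, not taken from the paper:

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.05, half_length=0.2, half_width=0.15):
    """Inverse kinematics of a 4-wheel Mecanum platform with 45-degree rollers.

    Returns angular speeds (rad/s) for the front-left, front-right, rear-left,
    rear-right wheels given a desired body velocity (vx forward, vy sideways,
    in m/s) and yaw rate wz (rad/s). Geometry values are illustrative.
    """
    k = half_length + half_width
    fl = (vx - vy - k * wz) / r
    fr = (vx + vy + k * wz) / r
    rl = (vx + vy - k * wz) / r
    rr = (vx - vy + k * wz) / r
    return fl, fr, rl, rr
```

Pure forward motion drives all four wheels equally, while pure sideways motion counter-rotates the wheel pairs; it is the 45° roller angle that converts these combinations into net body motion.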

  18. APPLYING THE THINK-PAIR-SHARE MODEL TO IMPROVE THE WRITING SKILLS OF CLASS II STUDENTS AT SDN 3 BANJAR JAWA

    Directory of Open Access Journals (Sweden)

    Ningsi Soisana Lakilaf

    2017-12-01

    Full Text Available This study aims to improve students' writing skills through the application of the picture-assisted Think-Pair-Share learning model to Class II students in Semester I at SD Negeri 3 Banjar Jawa in the 2017/2018 school year. The study was carried out as classroom action research (PTK) conducted in two cycles, each cycle consisting of two meetings, with stages of (1) planning, (2) implementation, (3) observation, and (4) reflection. The subjects of this study were the teacher and the Class II students of SD Negeri 3 Banjar Jawa; data were collected using test and non-test techniques. The results show that with the picture-assisted Think-Pair-Share learning model, student learning mastery improved: the percentage of students able to produce a written description was 27% before the intervention, 77% in Cycle I, and 90% in Cycle II. Learning that applies the picture-assisted Think-Pair-Share model can thus improve writing skills. The conclusion of this study is that applying the picture-assisted Think-Pair-Share model can improve the writing skills of the Class II students of SD Negeri 3 Banjar Jawa. It is recommended that teachers be more active and creative in implementing innovative and enjoyable learning. Keywords: writing skills, Think-Pair-Share model

  19. Probabilistic modelling in urban drainage – two approaches that explicitly account for temporal variation of model errors

    DEFF Research Database (Denmark)

    Löwe, Roland; Del Giudice, Dario; Mikkelsen, Peter Steen

    of input uncertainties observed in the models. The explicit inclusion of such variations in the modelling process will lead to a better fulfilment of the assumptions made in formal statistical frameworks, thus reducing the need to resort to informal methods. The two approaches presented here

  20. Prediction of the binding affinities of peptides to class II MHC using a regularized thermodynamic model

    Directory of Open Access Journals (Sweden)

    Mittelmann Hans D

    2010-01-01

    Full Text Available Abstract Background The binding of peptide fragments of extracellular peptides to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can utilize the limited experimental data to predict the binding affinities of peptides to class II MHC. Results We have developed the Regularized Thermodynamic Average, or RTA, method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA was shown to achieve higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined. RTA also gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register of 17 out of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, made significant contributions of at least 50% of the total binding energy for approximately 20% of the peptides. Conclusions The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/.

  1. Modeling of Cd(II) sorption on mixed oxide

    International Nuclear Information System (INIS)

    Waseem, M.; Mustafa, S.; Naeem, A.; Shah, K.H.; Hussain, S.Y.; Safdar, M.

    2011-01-01

    A mixed oxide of iron and silicon (0.75 M Fe(OH)₃ : 0.25 M SiO₂) was synthesized and characterized by various techniques: surface area analysis, point of zero charge (PZC), energy dispersive X-ray (EDX) spectroscopy, thermogravimetric and differential thermal analysis (TG-DTA), Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) analysis. The uptake of Cd²⁺ ions on the mixed oxide increased with pH, temperature and metal ion concentration. The sorption data were interpreted in terms of both the Langmuir and Freundlich models. The Xm values at pH 7 were found to be almost twice those at pH 5. The values of both ΔH and ΔS were positive, indicating that the sorption process is endothermic and accompanied by the dehydration of Cd²⁺; further, the negative value of ΔG confirms the spontaneity of the reaction. An ion exchange mechanism was suggested for Cd²⁺ at pH 5, whereas at pH 7 ion exchange was found to be coupled with non-specific adsorption of metal cations. (author)
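A linearized Langmuir fit of the kind used to extract Xm can be sketched as follows; the equilibrium data are synthetic, generated from assumed parameters rather than taken from the paper's Cd(II) measurements:

```python
def fit_langmuir(conc, q):
    """Fit q = Xm*K*C / (1 + K*C) via the linearized form
    C/q = C/Xm + 1/(K*Xm), i.e. a least-squares line of C/q against C.
    Returns (Xm, K)."""
    ys = [c / qi for c, qi in zip(conc, q)]
    n = len(conc)
    sx, sy = sum(conc), sum(ys)
    sxx = sum(c * c for c in conc)
    sxy = sum(c * y for c, y in zip(conc, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    xm = 1.0 / slope           # monolayer capacity
    k = slope / intercept      # Langmuir constant
    return xm, k

# Synthetic isotherm generated from Xm = 2.0, K = 0.5 (assumed values);
# the fit should recover them.
cs = [0.5, 1.0, 2.0, 4.0, 8.0]
qs = [2.0 * 0.5 * c / (1.0 + 0.5 * c) for c in cs]
xm, k = fit_langmuir(cs, qs)
```

The same regression machinery applies to the Freundlich model after taking logarithms of both q and C.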

  2. Biomimetic model systems of rigid hair beds: Part II - Experiment

    Science.gov (United States)

    Jammalamadaka, Mani S. S.; Hood, Kaitlyn; Hosoi, Anette

    2017-11-01

    Crustaceans - such as lobsters, crabs and stomatopods - have hairy appendages that they use to recognize and track odorants in the surrounding fluid. An array of rigid hairs impedes flow at different rates depending on the spacing between hairs and the Reynolds number, Re. At larger Reynolds number (Re > 1), fluid travels through the hairs rather than around them, a phenomenon called leakiness. Crustaceans flick their appendages at different speeds in order to manipulate the leakiness between the hairs, allowing the hairs either to detect the odors in a sample of fluid or to collect a new sample. Theoretical and numerical studies predict that there is a fast-flow region near the hairs that moves closer to the hairs as Re increases. Here, we test this theory experimentally. We 3D printed rigid hairs with an aspect ratio of 30:1 in rectangular arrays with different hair packing fractions. We custom built an experimental setup that establishes Poiseuille flow at intermediate Re (Re ≤ 200). We track the flow dynamics through the hair beds using tracer particles and particle image velocimetry. We will then compare the modelling predictions with the experimental outcomes.

  3. Control of gravitropic orientation. II. Dual receptor model for gravitropism

    Science.gov (United States)

    LaMotte, Clifford E.; Pickard, Barbara G.

    2004-01-01

    Gravitropism of vascular plants has been assumed to require a single gravity receptor mechanism. However, based on the evidence in Part I of this study, we propose that maize roots require two. The first mechanism is without a directional effect and, by itself, cannot give rise to tropism. Its role is quantitative facilitation of the second mechanism, which is directional like the gravitational force itself and provides the impetus for tropic curvature. How closely coupled the two mechanisms may be is, as yet, unclear. The evidence for dual receptors supports a general model for roots. When readiness for gravifacilitation, or gravifacilitation itself, is constitutive, orthogravitropic curvature can go to completion. If not constitutively enabled, gravifacilitation can be weak in the absence of light and water deficit or strong in the presence of light and water deficit. In either case, it can decay and permit roots to assume reproducible non-vertical orientations (plagiogravitropic or plagiotropic orientations) without using non-vertical setpoints. In this way roots are deployed in a large volume of soil. Gravitropic behaviours in shoots are more diverse than in roots, utilising oblique and horizontal as well as vertical setpoints. As a guide to future experiments, we assess how constitutive v. non-constitutive modes of gravifacilitation might contribute to behaviours based on each kind of setpoint.

  4. A 2D finite element procedure for magnetic field analysis taking into account a vector Preisach model

    Directory of Open Access Journals (Sweden)

    Dupré Luc R.

    1997-01-01

    Full Text Available The main purpose of this paper is to incorporate a refined hysteresis model, viz. a vector Preisach model, in 2D magnetic field computations. To this end the governing Maxwell equations are rewritten in a suitable way, which allows one to take into account the proper magnetic material parameters and, moreover, to pass to a variational formulation. The variational problem is solved numerically by a FE approximation, using a quadratic mesh, followed by a time discretisation based upon a modified Crank–Nicolson algorithm. The latter includes a suitable iteration procedure to deal with the nonlinear hysteresis behaviour. Finally, the effectiveness of the presented mathematical tool has been confirmed by several numerical experiments.
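The elementary ingredient of any Preisach description is the hysteron: a relay with distinct up/down switching thresholds whose state persists in between. A minimal scalar sketch (the paper's vector model and its FE coupling are far richer) is:

```python
def preisach_magnetization(field_history, hysterons):
    """Minimal scalar Preisach model with uniformly weighted hysterons.

    Each hysteron is a (down, up) threshold pair: it switches to +1 when the
    applied field reaches `up`, to -1 when it falls to `down`, and otherwise
    keeps its previous state -- which is what produces hysteresis memory.
    Uniform weighting is an illustrative simplification.
    """
    states = [-1.0] * len(hysterons)  # start from negative saturation
    for h in field_history:
        for i, (down, up) in enumerate(hysterons):
            if h >= up:
                states[i] = 1.0
            elif h <= down:
                states[i] = -1.0
            # else: state is retained (memory)
    return sum(states) / len(states)
```

Driving the model with a rising-then-falling field history traces out a minor loop rather than retracing the same curve, which is the defining hysteretic behaviour.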

  5. An EMG-driven biomechanical model that accounts for the decrease in moment generation capacity during a dynamic fatigued condition.

    Science.gov (United States)

    Rao, Guillaume; Berton, Eric; Amarantini, David; Vigouroux, Laurent; Buchanan, Thomas S

    2010-07-01

    Although it is well known that fatigue can greatly reduce muscle forces, it is not generally included in biomechanical models. The aim of the present study was to develop an electromyographic-driven (EMG-driven) biomechanical model to estimate the contributions of flexor and extensor muscle groups to the net joint moment during a nonisokinetic functional movement (squat exercise) performed in nonfatigued and in fatigued conditions. A methodology that aims at balancing the decreased muscle moment production capacity following fatigue was developed. During an isometric fatigue session, a linear regression was created linking the decrease in force production capacity of the muscle (normalized force/EMG ratio) to the EMG mean frequency. Using the decrease in mean frequency estimated through wavelet transforms between dynamic squats performed before and after the fatigue session as input to the previous linear regression, a coefficient accounting for the presence of fatigue in the quadriceps group was computed. This coefficient was used to constrain the moment production capacity of the fatigued muscle group within an EMG-driven optimization model dedicated to estimate the contributions of the knee flexor and extensor muscle groups to the net joint moment. During squats, our results showed significant increases in the EMG amplitudes with fatigue (+23.27% in average) while the outputs of the EMG-driven model were similar. The modifications of the EMG amplitudes following fatigue were successfully taken into account while estimating the contributions of the flexor and extensor muscle groups to the net joint moment. These results demonstrated that the new procedure was able to estimate the decrease in moment production capacity of the fatigued muscle group.
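The fatigue correction described above can be sketched as a mean-frequency-driven scaling of the muscle group's moment capacity; the regression slope, intercept, and capacity value below are illustrative assumptions, not the study's calibrated numbers:

```python
def fatigue_coefficient(mnf_fresh, mnf_fatigued, slope=0.01, intercept=0.0):
    """Map the drop in EMG mean frequency (Hz) between non-fatigued and
    fatigued squats to a multiplicative loss in moment-generating capacity,
    via a linear regression calibrated in an isometric fatigue session.
    Slope and intercept are illustrative. Returns a factor in [0, 1] used
    to scale the fatigued muscle group's maximal moment."""
    drop = max(0.0, mnf_fresh - mnf_fatigued)
    loss = intercept + slope * drop  # fractional capacity lost
    return max(0.0, 1.0 - loss)

capacity_fresh = 150.0  # N*m, assumed maximal extensor moment
coeff = fatigue_coefficient(mnf_fresh=85.0, mnf_fatigued=65.0)
capacity_fatigued = capacity_fresh * coeff
```

In an EMG-driven optimization, the fatigued group's contribution would then be constrained by `capacity_fatigued` rather than `capacity_fresh`.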

  6. Accounting for intracell flow in models with emphasis on water table recharge and stream-aquifer interaction: 2. A procedure

    Science.gov (United States)

    Jorgensen, Donald G.; Signor, Donald C.; Imes, Jeffrey L.

    1989-01-01

    Intercepted intracell flow, especially if the cell includes water table recharge and a stream (sink), can result in significant model error if not accounted for. A procedure utilizing net flow per cell (Fn) that accounts for intercepted intracell flow can be used for both steady state and transient simulations. Germane to the procedure is the determination of the ratio of the area of influence of the interior sink to the area of the cell (Ai/Ac). Ai is the area in which water table recharge has the potential to be intercepted by the sink. Determining Ai/Ac requires either a detailed water table map or observation of stream conditions within the cell. A proportioning parameter M, which is equal to 1 or slightly less and is a function of cell geometry, is used to determine how much of the water that has potential for interception is intercepted by the sink within the cell. Also germane to the procedure is the determination of the flow across the streambed (Fs), which is not directly a function of cell size, due to the difference in head between the water level in the stream and the potentiometric surface of the aquifer underlying the streambed. The use of Fn for steady state simulations allows simulation of water levels without utilizing head-dependent or constant head boundary conditions, which tend to constrain the model-calculated water levels - an undesirable result if a comparison of measured and calculated water levels is being made. Transient simulations of streams usually utilize a head-dependent boundary condition and a leakance value to model a stream. Leakance values for each model cell can be determined from a steady state simulation that used the net flow per cell procedure. For transient simulation, Fn would not include Fs. Also, for transient simulation it is necessary to check Fn at different time intervals because M and Ai/Ac are not constant and change with time. The procedure was used successfully in two different models of the aquifer system
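One plausible reading of the Fn bookkeeping, for illustration only (the signs and the exact combination of terms are assumptions, not the published procedure), is:

```python
def net_flow_per_cell(recharge_rate, cell_area, ai_over_ac, m, stream_leakage):
    """Illustrative net-flow-per-cell (Fn) bookkeeping, assumed form only.

    Recharge falling on the cell, minus the part intercepted by the interior
    sink (the fraction M of recharge falling on the sink's area of influence
    Ai), combined with streambed flow Fs. For a transient simulation Fs
    would be handled separately, per the abstract.
    """
    total_recharge = recharge_rate * cell_area
    intercepted = m * ai_over_ac * total_recharge
    return total_recharge - intercepted + stream_leakage
```

With M = 1 and Ai/Ac = 0.5, half of the cell's recharge is intercepted before reaching the regional flow system.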

  7. Modeling the dynamic behavior of railway track taking into account the occurrence of defects in the system wheel-rail

    Directory of Open Access Journals (Sweden)

    Loktev Alexey

    2017-01-01

    Full Text Available This paper investigates the influence of wheel defects on the development of rail defects, up to a state where prompt rail replacement becomes necessary, taking into account different models of the dynamic contact between a wheel and a rail: in particular, the quasistatic Hertz model, the linear elastic model and the elastoplastic Aleksandrov-Kadomtsev model. Based on the model of the wheel-rail contact, the maximum stresses are determined which occur in the rail in the presence of wheel defects (e.g. flat spot, weld-on deposit, etc.). The paper also presents the solution of the inverse problem, i.e., an investigation of the influence of the strength of a wheel impact upon the rails on wheel defects, as well as an evaluation of the stresses emerging in the rails. During the motion of a railway vehicle, the position of the wheel pair in relation to the rails changes significantly, which causes various combinations of wheel-rail contact areas. Even under a constant axial load, the normal stresses will change substantially due to the differences in the radii of curvature of the contact surfaces of these areas, as well as the movement velocities of the railway vehicles.
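For the quasistatic Hertz case mentioned above, the peak contact pressure for a sphere-on-half-space idealization of the wheel-rail contact can be sketched as follows; the steel properties are typical textbook values, not the paper's:

```python
import math

def hertz_max_pressure(force, radius, e1=210e9, nu1=0.3, e2=210e9, nu2=0.3):
    """Quasistatic Hertz contact of a sphere (wheel) on a half-space (rail).

    Contact radius a = (3*F*R / (4*E*))^(1/3) and peak pressure
    p_max = 3*F / (2*pi*a^2), with the effective modulus
    1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2. Illustrative material values.
    """
    e_star = 1.0 / ((1.0 - nu1**2) / e1 + (1.0 - nu2**2) / e2)
    a = (3.0 * force * radius / (4.0 * e_star)) ** (1.0 / 3.0)
    return 3.0 * force / (2.0 * math.pi * a * a)
```

Note that p_max grows only as F^(1/3): an impact that multiplies the effective normal force by eight merely doubles the peak Hertzian pressure, which is one reason elastoplastic refinements matter for defect-induced impacts.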

  8. Diarrhea Morbidities in Small Areas: Accounting for Non-Stationarity in Sociodemographic Impacts using Bayesian Spatially Varying Coefficient Modelling.

    Science.gov (United States)

    Osei, F B; Stein, A

    2017-08-30

    Model-based estimation of diarrhea risk and understanding of its dependency on sociodemographic factors are important for prioritizing interventions. It is unsuitable to calibrate a regression model with a single set of coefficients, especially for large spatial domains. For this purpose, we developed a Bayesian hierarchical varying coefficient model to account for non-stationarity in the covariates, using the integrated nested Laplace approximation for parameter estimation. Diarrhea morbidities in Ghana motivated our empirical study. Results indicated improvements in model fit as well as epidemiological benefits. The findings highlighted substantial spatial, temporal, and spatio-temporal heterogeneities in both diarrhea risk and the coefficients of the sociodemographic factors. Diarrhea risk in peri-urban and urban districts was 13.2% and 10.8% higher, respectively, than in rural districts. The varying coefficient model revealed further detail, as the coefficients varied across districts. A unit increase in the proportion of inhabitants with unsafe liquid waste disposal was found to increase diarrhea risk by 11.5%, with higher percentages within the south-central through to the south-western parts. Districts with safe and unsafe drinking water sources unexpectedly had a similar risk, as did districts with safe and unsafe toilets. The findings show that site-specific interventions need to consider the varying effects of sociodemographic factors.

  9. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili.

    Science.gov (United States)

    Xiao, Ke; Malvankar, Nikhil S; Shu, Chuanjun; Martz, Eric; Lovley, Derek R; Sun, Xiao

    2016-03-22

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine whether it is feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic-like conductivity of the pili that has been attributed to overlapping π-π orbitals of aromatic amino acids. These atomic resolution models, capable of explaining the observed conductive properties of Geobacter pili, are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics.

  10. Diagnostic and prognostic simulations with a full Stokes model accounting for superimposed ice of Midtre Lovénbreen, Svalbard

    Directory of Open Access Journals (Sweden)

    T. Zwinger

    2009-11-01

    Full Text Available We present steady state (diagnostic) and transient (prognostic) simulations of Midtre Lovénbreen, Svalbard, performed with the thermo-mechanically coupled full-Stokes code Elmer. This glacier has an extensive data set of geophysical measurements available spanning several decades, which allows for constraints on model descriptions. Consistent with this data set, we included a simple model accounting for the formation of superimposed ice. Diagnostic results indicated that a dynamic adaptation of the free surface is necessary to prevent non-physically high velocities in a region of underdetermined bedrock depths. Observations from ground penetrating radar of the basal thermal state agree very well with model predictions, while the dip angles of isochrones in radar data also match reasonably well with modelled isochrones, despite the numerical deficiencies of estimating ages with a steady state model.

    Prognostic runs for 53 years, using a constant accumulation/ablation pattern and starting from the steady state solution obtained from the configuration of the 1977 DEM, show that: (1) the unrealistic velocities in the underdetermined parts of the DEM quickly damp out; (2) the free surface evolution matches the measured elevation changes well; (3) the retreat of the glacier under this scenario continues, with the glacier tongue in a projection to 2030 being situated ≈500 m behind its position in 1977.

  11. Collaborative Russian-US work in nuclear material protection, control and accounting at the Institute of Physics and Power Engineering. II. extension to additional facilities

    International Nuclear Information System (INIS)

    Kuzin, V.V.; Pshakin, G.M.; Belov, A.P.

    1996-01-01

    During 1995, collaborative Russian-US nuclear material protection, control and accounting (MPC&A) tasks at the Institute of Physics and Power Engineering (IPPE) in Obninsk, Russia focused on improving the protection of nuclear materials at the BFS Fast Critical Facility. BFS has thousands of fuel disks containing highly enriched uranium and weapons-grade plutonium that are used to simulate the core configurations of experimental reactors in two critical assemblies. Completed tasks culminated in demonstrations of newly implemented equipment and methods that enhanced the MPC&A at BFS through computerized accounting, nondestructive inventory verification measurements, personnel identification and access control, physical inventory taking, physical protection, and video surveillance. The collaborative work is now being extended. The additional tasks encompass communications and tamper-indicating devices; new storage alternatives; and systematization of the MPC&A elements that are being implemented

  12. A Parameter Study for Modeling Mg ii h and k Emission during Solar Flares

    Energy Technology Data Exchange (ETDEWEB)

    Rubio da Costa, Fatima [Department of Physics, Stanford University, Stanford, CA 94305 (United States); Kleint, Lucia, E-mail: frubio@stanford.edu [University of Applied Sciences and Arts Northwestern Switzerland, 5210, Windisch (Switzerland)

    2017-06-20

    Solar flares show highly unusual spectra in which the thermodynamic conditions of the solar atmosphere are encoded. Current models are unable to fully reproduce the spectroscopic flare observations, especially the single-peaked spectral profiles of the Mg ii h and k lines. We aim to understand the formation of the chromospheric and optically thick Mg ii h and k lines in flares through radiative transfer calculations. We take a flare atmosphere obtained from a simulation with the radiative hydrodynamic code RADYN as input for radiative transfer modeling with the RH code. By iteratively changing this model atmosphere and varying thermodynamic parameters such as temperature, electron density, and velocity, we study their effects on the emergent intensity spectra. We reproduce the typical single-peaked Mg ii h and k flare spectral shape and approximate the intensity ratios to the subordinate Mg ii lines by increasing either densities, temperatures, or velocities at the line core formation height range. Additionally, by combining unresolved upflows and downflows up to ∼250 km s⁻¹ within one resolution element, we reproduce the widely broadened line wings. While we cannot unambiguously determine which mechanism dominates in flares, future modeling efforts should investigate unresolved components, additional heat dissipation, larger velocities, and higher densities, and combine the analysis of multiple spectral lines.

  13. Understanding variability of the Southern Ocean overturning circulation in CORE-II models

    Science.gov (United States)

    Downes, S. M.; Spence, P.; Hogg, A. M.

    2018-03-01

    The current generation of climate models exhibit a large spread in the steady-state and projected Southern Ocean upper and lower overturning circulation, with mechanisms for deep ocean variability remaining less well understood. Here, common Southern Ocean metrics in twelve models from the Coordinated Ocean-ice Reference Experiment Phase II (CORE-II) are assessed over a 60 year period. Specifically, stratification, surface buoyancy fluxes, and eddies are linked to the magnitude of the strengthening trend in the upper overturning circulation, and a decreasing trend in the lower overturning circulation across the CORE-II models. The models evolve similarly in the upper 1 km and the deep ocean, with an almost equivalent poleward intensification trend in the Southern Hemisphere westerly winds. However, the models differ substantially in their eddy parameterisation and surface buoyancy fluxes. In general, models with a larger heat-driven water mass transformation where deep waters upwell at the surface ( ∼ 55°S) transport warmer waters into intermediate depths, thus weakening the stratification in the upper 2 km. Models with a weak eddy induced overturning and a warm bias in the intermediate waters are more likely to exhibit larger increases in the upper overturning circulation, and more significant weakening of the lower overturning circulation. We find the opposite holds for a cool model bias in intermediate depths, combined with a more complex 3D eddy parameterisation that acts to reduce isopycnal slope. In summary, the Southern Ocean overturning circulation decadal trends in the coarse resolution CORE-II models are governed by biases in surface buoyancy fluxes and the ocean density field, and the configuration of the eddy parameterisation.

  14. Accounting for rigid support at the boundary in a mixed finite element model for problems of ice cover destruction

    Directory of Open Access Journals (Sweden)

    V. V. Knyazkov

    2014-01-01

    Full Text Available Evaluating the force needed to damage an ice cover is necessary for estimating the icebreaking capability of vessels and the hull strength of icebreakers, and for the navigation of ships in ice conditions. On the other hand, the use of the ice cover as a support for construction works carried out from the ice is also of practical interest. By the present moment a great many investigations of ice cover deformation have been carried out, usually resulting in approximate calculation formulas obtained after making a variety of assumptions. Nevertheless, we believe that further improvement in the calculations is possible. The application of numerical methods, for example the FEM, makes it possible to avoid numerous drawbacks of analytical methods in dealing with complex boundaries, load application areas and other problem peculiarities. The article considers an application of mixed FEM models for investigating ice cover deformation. A simple flexible triangular element of mixed type was taken to solve this problem. The vector of generalized coordinates of the element contains the apex deflections and the normal bending moments at the midpoints of its sides. Compared to other elements, mixed models easily satisfy compatibility requirements on the boundary of adjacent elements and do not require numerical differentiation of displacements to define bending moments, because the bending moments are included in the vector of generalized coordinates of the element. A method of accounting for rigid support of the plate is proposed. The resulting relation, taking the "stiffening" into account, reduces the order of the resolving system of equations by the number of elements on the plate contour. To evaluate the results of the numerical realization of the ice cover stress-strain problem, it is necessary and correct to check whether the calculation results correspond to the accurate solution. Using the example of a circular plate, the convergence of the numerical solution to the analytical solution is shown. The article

  15. Model cortical association fields account for the time course and dependence on target complexity of human contour perception.

    Directory of Open Access Journals (Sweden)

    Vadas Gintautas

    2011-10-01

    Full Text Available Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas distributed among groups of randomly rotated fragments (clutter. The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms, followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least [Formula: see text] ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
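The reported sigmoidal time course with 30-91 ms exponential time constants can be sketched with a saturating-exponential psychometric function; the exact functional form and the asymptote value below are illustrative assumptions, not the fitted model from the study:

```python
import math

def psychometric(t_ms, tau_ms=60.0, p_inf=0.95):
    """Saturating-exponential psychometric function for a 2AFC task.

    Accuracy starts at chance (0.5) for vanishing presentation time and
    rises toward an asymptote p_inf with time constant tau_ms. The study
    reports tau in the range 30-91 ms depending on amoeba complexity;
    the specific form and p_inf here are illustrative.
    """
    return 0.5 + (p_inf - 0.5) * (1.0 - math.exp(-t_ms / tau_ms))
```

Harder (more complex) targets correspond to larger tau, i.e. a slower climb from chance toward the asymptote over the 20-200 ms presentation range.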

  16. Accounting for geochemical alterations of caprock fracture permeability in basin-scale models of leakage from geologic CO2 reservoirs

    Science.gov (United States)

    Guo, B.; Fitts, J. P.; Dobossy, M.; Bielicki, J. M.; Peters, C. A.

    2012-12-01

    Climate mitigation, public acceptance and energy, markets demand that the potential CO2 leakage rates from geologic storage reservoirs are predicted to be low and are known to a high level of certainty. Current approaches to predict CO2 leakage rates assume constant permeability of leakage pathways (e.g., wellbores, faults, fractures). A reactive transport model was developed to account for geochemical alterations that result in permeability evolution of leakage pathways. The one-dimensional reactive transport model was coupled with the basin-scale Estimating Leakage Semi-Analytical (ELSA) model to simulate CO2 and brine leakage through vertical caprock pathways for different CO2 storage reservoir sites and injection scenarios within the Mt. Simon and St. Peter sandstone formations of the Michigan basin. Mineral dissolution in the numerical reactive transport model expands leakage pathways and increases permeability as a result of calcite dissolution by reactions driven by CO2-acidified brine. A geochemical model compared kinetic and equilibrium treatments of calcite dissolution within each grid block for each time step. For a single fracture, we investigated the effect of the reactions on leakage by performing sensitivity analyses of fracture geometry, CO2 concentration, calcite abundance, initial permeability, and pressure gradient. Assuming that calcite dissolution reaches equilibrium at each time step produces unrealistic scenarios of buffering and permeability evolution within fractures. Therefore, the reactive transport model with a kinetic treatment of calcite dissolution was coupled to the ELSA model and used to compare brine and CO2 leakage rates at a variety of potential geologic storage sites within the Michigan basin. The results are used to construct maps based on the susceptibility to geochemically driven increases in leakage rates. 
These maps should provide useful and easily communicated inputs into decision-making processes for siting geologic CO2
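    The permeability-evolution mechanism described above can be sketched in a few lines: CO2-acidified brine dissolves calcite, the fracture aperture widens, and permeability follows the parallel-plate law. All rate constants and geometry values below are illustrative placeholders, not values from the study.

```python
# Minimal sketch: kinetic calcite dissolution widening a fracture aperture,
# with permeability from the parallel-plate (cubic) law. The rate constant,
# molar volume, undersaturation and geometry are hypothetical placeholders.

def simulate_aperture_growth(b0, rate, molar_volume, undersaturation, dt, steps):
    """Explicit-Euler aperture evolution: db/dt = 2 * rate * Vm * (1 - Omega)."""
    b = b0
    history = [b]
    for _ in range(steps):
        b += 2.0 * rate * molar_volume * undersaturation * dt  # both walls retreat
        history.append(b)
    return history

def permeability(b):
    """Parallel-plate law: k = b^2 / 12 (fracture transmissivity scales as b^3)."""
    return b * b / 12.0

apertures = simulate_aperture_growth(b0=1e-4, rate=1e-6, molar_volume=3.69e-5,
                                     undersaturation=0.8, dt=3600.0, steps=24)
ks = [permeability(b) for b in apertures]
```

    Because aperture growth feeds back into permeability, even a small dissolution rate compounds into a substantial leakage-rate increase over long simulation times.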

  17. Predictive models of adjuvant chemotherapy for patients with stage ii colorectal cancer: A retrospective study.

    Science.gov (United States)

    Wei, Bo; Zheng, Xiao-Ming; Lei, Pu-Run; Huang, Yong; Zheng, Zong-Heng; Chen, Tu-Feng; Huang, Jiang-Long; Fang, Jia-Feng; Liang, Cheng-Hua; Wei, Hong-Bo

    2017-09-05

    It remains controversial whether patients with Stage II colorectal cancer benefit from adjuvant chemotherapy after radical resection. The aim of this study was to establish two mathematical models to identify suitable patients for adjuvant chemotherapy. The current study comprised two steps. In the first step, 353 patients with Stage II colorectal cancer who underwent surgical procedures at the Third Affiliated Hospital of Sun Yat-sen University between June 2006 and December 2015 were entered and followed up for 6-120 months. Their clinical data were collected and entered into the database. We established two mathematical models by univariate and multivariate Cox regression analysis to identify the target patients; in the second step, 230 patients enrolled under the same criteria between January 2012 and December 2016 were entered and followed up for 3-62 months to validate the two models. In the first step, a total of 340 surgical patients with Stage II colorectal cancer were finally enrolled in this study. Statistical analysis showed that tumor differentiation (TD) (P models: (1) OS risk score = 1.116 × TD + 2.202 × LVI + 3.676 × UPM + 1.438 × LN - 0.493; (2) DFS risk score = 0.789 × TD + 2.074 × LVI + 3.183 × UPM + 1.329 × LN - 0.432. According to the models and cutoff points [(0.07, 1.33) and (-0.04, 1.30), respectively], patients can be divided into three groups: low-risk, moderate-risk, and high-risk. Moreover, the high-risk group patients could benefit from adjuvant chemotherapy. In the second step, a total of 221 patients were finally used to validate the models. The results proved that the models were accurate and feasible (Ppredictive models, patients with Stage II colorectal cancer in the high-risk group are strongly recommended for adjuvant chemotherapy, thus facilitating individualized and precise treatment.
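    The two published risk scores can be computed directly from the coefficients and cutoff points quoted above. One assumption in this sketch: TD, LVI, UPM and LN are coded as 0/1 indicators, since the abstract does not state the exact coding.

```python
# Risk-score stratification from the abstract's coefficients and cutoffs.
# Assumption (not stated in the abstract): predictors are 0/1 indicators.

OS_CUTOFFS = (0.07, 1.33)
DFS_CUTOFFS = (-0.04, 1.30)

def os_risk_score(td, lvi, upm, ln):
    """Overall-survival risk score, model (1)."""
    return 1.116 * td + 2.202 * lvi + 3.676 * upm + 1.438 * ln - 0.493

def dfs_risk_score(td, lvi, upm, ln):
    """Disease-free-survival risk score, model (2)."""
    return 0.789 * td + 2.074 * lvi + 3.183 * upm + 1.329 * ln - 0.432

def stratify(score, cutoffs):
    """Three groups: below the lower cutoff, between cutoffs, above the upper."""
    low, high = cutoffs
    if score < low:
        return "low"
    return "moderate" if score <= high else "high"
```

    For example, a patient with no adverse factors scores -0.493 on the OS model and falls in the low-risk group, while a patient with all four factors scores well above the upper cutoff.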

  18. Serviceability limit state related to excessive lateral deformations to account for infill walls in the structural model

    Directory of Open Access Journals (Sweden)

    G. M. S. ALVA

    Full Text Available Brazilian Codes NBR 6118 and NBR 15575 provide practical values for interstory drift limits applied to conventional modeling in order to prevent negative effects in masonry infill walls caused by excessive lateral deformability; however, these codes do not account for infill walls in the structural model. The inclusion of infill walls in the proposed model allows for a quantitative evaluation of structural stresses in these walls and an assessment of cracking in these elements (sliding shear, diagonal tension and diagonal compression cracking). This paper presents the results of simulations of single-story, one-bay infilled R/C frames. The main objective is to show how to check the serviceability limit states under lateral loads when the infill walls are included in the modeling. The results of numerical simulations allowed for an evaluation of stresses and the probable cracking pattern in infill walls. The results also allowed an identification of some advantages and limitations of the NBR 6118 practical procedure based on interstory drift limits.

  19. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Peixin; Chai, Feng [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Bi, Yunlong [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Pei, Yulong, E-mail: peiyulong1@163.com [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Cheng, Shukang [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China)

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is calculated by an analytical method for the first time. • The magnetic circuit model and iterative method are employed to calculate the permeability. • The analytical expression for each subdomain is derived. • The proposed method can effectively reduce the duration of the predesign stages.
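    The lumped-circuit iteration mentioned above can be illustrated as a fixed-point loop: guess the bridge flux density, evaluate the nonlinear permeability, recompute the reluctance, and repeat until the flux converges. The saturation curve and all circuit values below are assumptions for illustration, not the paper's data.

```python
# Sketch of a fixed-point iteration for a saturating magnetic bridge in a
# lumped-parameter circuit. The B-mu curve and circuit values are hypothetical.
import math

MU0 = 4e-7 * math.pi  # permeability of free space, H/m

def mu_r(B, mu_i=4000.0, B_sat=1.8):
    """Toy saturation law: relative permeability falls off as B approaches B_sat."""
    return 1.0 + (mu_i - 1.0) / (1.0 + (B / B_sat) ** 4)

def solve_bridge_flux(mmf, area, length, r_rest, tol=1e-9, max_iter=200):
    """Iterate: flux density -> permeability -> bridge reluctance -> new flux."""
    B = 0.5  # initial guess, T
    for _ in range(max_iter):
        r_bridge = length / (MU0 * mu_r(B) * area)   # reluctance of the bridge
        flux = mmf / (r_rest + r_bridge)             # series magnetic circuit
        B_new = flux / area
        if abs(B_new - B) < tol:
            return B_new
        B = B_new
    return B

B_bridge = solve_bridge_flux(mmf=1000.0, area=1e-4, length=2e-3, r_rest=5e6)
```

    With these illustrative values the bridge settles near 2 T, i.e. deep in saturation, which is exactly the operating condition that motivates an iterative permeability calculation.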

  20. Ensemble-based flash-flood modelling: Taking into account hydrodynamic parameters and initial soil moisture uncertainties

    Science.gov (United States)

    Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique

    2018-05-01

    Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainties, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty comes from the rainfall forcing, and convective-scale meteorological ensemble prediction systems can manage it for forecasting purposes. But other sources are related to the hydrological modelling part of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and the initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulating fast-responding Mediterranean rivers, the ISBA-TOP coupled system. The first step consists in identifying the parameters that have the strongest influence on FF simulations by assuming perfect precipitation. A sensitivity study is carried out, first using a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as the initial soil moisture allow the design of an ensemble-based version of ISBA-TOP. The first results of this system on some real events are presented. The direct perspective of this work will be to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system to design a complete HEPS for FF forecasting.
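    The perturbation idea can be sketched generically: each ensemble member gets the sensitive parameters and the initial soil moisture multiplied by a random factor within a prescribed spread. The parameter names, spread and toy setup are illustrative, not the real ISBA-TOP configuration.

```python
# Sketch of building a perturbed-parameter hydrological ensemble.
# Parameter names and the 20% spread are illustrative assumptions.
import random

def make_ensemble(n_members, base_params, frac_spread, seed=0):
    """Each member perturbs every parameter by a uniform factor in +/- frac_spread."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        member = {k: v * (1.0 + rng.uniform(-frac_spread, frac_spread))
                  for k, v in base_params.items()}
        members.append(member)
    return members

base = {"hydraulic_conductivity": 1e-5, "initial_soil_moisture": 0.30}
ensemble = make_ensemble(16, base, frac_spread=0.2)
```

    Each member would then drive one hydrological simulation; the spread of the simulated discharges quantifies the uncertainty contributed by parameters and initial soil moisture.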

  1. A statistical human rib cage geometry model accounting for variations by age, sex, stature and body mass index.

    Science.gov (United States)

    Shi, Xiangnan; Cao, Libo; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2014-07-18

    In this study, we developed a statistical rib cage geometry model accounting for variations by age, sex, stature and body mass index (BMI). Thorax CT scans were obtained from 89 subjects approximately evenly distributed among 8 age groups and both sexes. Threshold-based CT image segmentation was performed to extract the rib geometries, and a total of 464 landmarks on the left side of each subject's ribcage were collected to describe the size and shape of the rib cage as well as the cross-sectional geometry of each rib. Principal component analysis and multivariate regression analysis were conducted to predict rib cage geometry as a function of age, sex, stature, and BMI, all of which showed strong effects on rib cage geometry. Except for BMI, all parameters also showed significant effects on rib cross-sectional area using a linear mixed model. This statistical rib cage geometry model can serve as a geometric basis for developing a parametric human thorax finite element model for quantifying effects from different human attributes on thoracic injury risks. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
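    The within-cluster resampling idea underlying RCIC can be illustrated with a toy estimator: repeatedly draw one observation per cluster, compute the statistic on that subsample, and average over resamples, which removes the bias induced by informative cluster size. The example data are illustrative.

```python
# Sketch of within-cluster resampling (Hoffman, Sen, and Weinberg, 2001):
# draw one observation per cluster, compute the statistic, average over draws.
import random

def wcr_mean(clusters, n_resamples=2000, seed=1):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_resamples):
        sub = [rng.choice(c) for c in clusters]   # one observation per cluster
        totals.append(sum(sub) / len(sub))
    return sum(totals) / len(totals)

# Informative cluster size: the cluster with larger outcomes has more observations,
# so the naive pooled mean is pulled upward while the WCR mean weights clusters equally.
clusters = [[1.0], [1.0], [5.0, 5.0, 5.0, 5.0]]
naive = sum(x for c in clusters for x in c) / sum(len(c) for c in clusters)
wcr = wcr_mean(clusters)
```

    RCIC applies this resampling idea to a model-selection criterion rather than a point estimate, but the bias mechanism it guards against is the same.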

  3. Multivariate space-time modelling of multiple air pollutants and their health effects accounting for exposure uncertainty.

    Science.gov (United States)

    Huang, Guowen; Lee, Duncan; Scott, E Marian

    2018-03-30

    The long-term health effects of air pollution are often estimated using a spatio-temporal ecological areal unit study, but this design leads to the following statistical challenges: (1) how to estimate spatially representative pollution concentrations for each areal unit; (2) how to allow for the uncertainty in these estimated concentrations when estimating their health effects; and (3) how to simultaneously estimate the joint effects of multiple correlated pollutants. This article proposes a novel 2-stage Bayesian hierarchical model for addressing these 3 challenges, with inference based on Markov chain Monte Carlo simulation. The first stage is a multivariate spatio-temporal fusion model for predicting areal level average concentrations of multiple pollutants from both monitored and modelled pollution data. The second stage is a spatio-temporal model for estimating the health impact of multiple correlated pollutants simultaneously, which accounts for the uncertainty in the estimated pollution concentrations. The novel methodology is motivated by a new study of the impact of both particulate matter and nitrogen dioxide concentrations on respiratory hospital admissions in Scotland between 2007 and 2011, and the results suggest that both pollutants exhibit substantial and independent health effects. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  4. MODEL OF DISTRIBUTION OF THE BUDGET OF THE PORTFOLIO OF IT PROJECTS TAKING IN-TO ACCOUNT THEIR PRIORITY

    Directory of Open Access Journals (Sweden)

    Anita V. Sotnikova

    2015-01-01

    Full Text Available The article addresses the problem of effectively distributing the overall budget of a portfolio among its constituent IT projects while taking their priority into account. The problem is topical in view of the poor performance of consulting companies in the information technology sector. To determine the priority of IT projects, the analytic network process developed by T. Saaty is used. For this purpose, a system of criteria (indicators) reflecting the influence of the portfolio's IT projects on the most significant goals of their implementation is developed. The key performance indicators defined when developing the Balanced Scorecard, which meet the above requirements, are used as this system of criteria. The essence of the analytic network process is the pairwise comparison of key performance indicators with respect to the goal of implementing the portfolio and the IT projects it comprises. The result of applying the method is a priority coefficient for each IT project in the portfolio. These priority coefficients are then used in the proposed model for distributing the portfolio budget among IT projects. The portfolio budget is thus distributed taking into account not only the income from implementing each IT project, but also other criteria important to the IT company, for example: the degree to which an IT project conforms to the company's strategic objectives, which determines whether its implementation is expedient; and the implementation deadline set by the customer. The developed model for distributing the portfolio budget among IT projects is tested on the example of a portfolio consisting of three IT projects. Taking into account the received
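    The final allocation step described above reduces to splitting the budget in proportion to the priority coefficients. The three projects and their priority values below are illustrative numbers, not taken from the article.

```python
# Sketch of priority-proportional budget allocation; once the analytic network
# process yields priority coefficients, the split is a normalized weighting.
# Project names and priorities are hypothetical.

def allocate_budget(total_budget, priorities):
    """Distribute the budget in proportion to (possibly unnormalized) priorities."""
    s = sum(priorities.values())
    return {name: total_budget * p / s for name, p in priorities.items()}

shares = allocate_budget(1_000_000.0, {"IT-1": 0.5, "IT-2": 0.3, "IT-3": 0.2})
```

    The priority coefficients themselves come from the pairwise comparisons of the analytic network process; only the last, mechanical step is shown here.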

  5. Population Modeling of Modified Risk Tobacco Products Accounting for Smoking Reduction and Gradual Transitions of Relative Risk.

    Science.gov (United States)

    Poland, Bill; Teischinger, Florian

    2017-11-01

    As suggested by the Food and Drug Administration (FDA) Modified Risk Tobacco Product (MRTP) Applications Draft Guidance, we developed a statistical model based on public data to explore the effect on population mortality of an MRTP resulting in reduced conventional cigarette smoking. Many cigarette smokers who try an MRTP persist as dual users while smoking fewer conventional cigarettes per day (CPD). Lower-CPD smokers have lower mortality risk based on large cohort studies. However, with little data on the effect of smoking reduction on mortality, predictive modeling is needed. We generalize prior assumptions of gradual, exponential decay of Excess Risk (ER) of death, relative to never-smokers, after quitting or reducing CPD. The same age-dependent slopes are applied to all transitions, including initiation to conventional cigarettes and to a second product (MRTP). A Monte Carlo simulation model generates random individual product use histories, including CPD, to project cumulative deaths through 2060 in a population with versus without the MRTP. Transitions are modeled to and from dual use, which affects CPD and cigarette quit rates, and to MRTP use only. Results in a hypothetical scenario showed high sensitivity of long-run mortality to CPD reduction levels and moderate sensitivity to ER transition rates. Models to project population effects of an MRTP should account for possible mortality effects of reduced smoking among dual users. In addition, studies should follow dual-user CPD histories and quit rates over long time periods to clarify long-term usage patterns and thereby improve health impact projections. We simulated mortality effects of a hypothetical MRTP accounting for cigarette smoking reduction by smokers who add MRTP use. Data on relative mortality risk versus CPD suggest that this reduction may have a substantial effect on mortality rates, unaccounted for in other models. This effect is weighed with additional hypothetical effects in an example.
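    The "gradual, exponential decay of Excess Risk" assumption generalized above can be written as a single function: after a product-use transition, ER relaxes from its current level toward the target level implied by the new CPD. The half-life value below is an illustrative placeholder, not a figure from the paper.

```python
# Sketch of the gradual excess-risk (ER) transition: exponential relaxation
# from the pre-transition ER toward the target ER of the new product-use state.
# The 10-year half-life is a hypothetical placeholder.
import math

def excess_risk(t_years, er_start, er_target, half_life_years=10.0):
    """ER(t) = target + (start - target) * 2^(-t / half_life)."""
    decay = math.exp(-math.log(2.0) * t_years / half_life_years)
    return er_target + (er_start - er_target) * decay
```

    In the Monte Carlo model, each simulated individual's mortality hazard at a given age would be scaled by the current ER along their randomly generated product-use history.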

  6. Tritium accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.; Spannagel, G.

    1995-01-01

    Conventional accountancy means that for a given material balance area and a given interval of time the tritium balance is established so that at the end of that interval of time the book inventory is compared with the measured inventory. In this way, an optimal effectiveness of accountancy is achieved. However, there are still further objectives of accountancy, namely the timely detection of anomalies as well as the localization of anomalies in a major system. It can be shown that each of these objectives can be optimized only at the expense of the others. Recently, Near-Real-Time Accountancy procedures have been studied; their methodological background as well as their merits will be discussed. (orig.)
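    The conventional balance check described above compares the book inventory, propagated from the opening inventory and recorded transfers, with the measured ending inventory; the difference is commonly called MUF (material unaccounted for) and is tested against a threshold derived from measurement uncertainty. The numbers below are illustrative.

```python
# Minimal sketch of one balance-period accountancy check. The alarm threshold
# (k-sigma) and all quantities are illustrative, not from the article.

def material_balance(opening, receipts, shipments, measured_ending):
    """Book inventory from recorded flows, and the resulting MUF."""
    book = opening + sum(receipts) - sum(shipments)
    muf = book - measured_ending          # material unaccounted for
    return book, muf

def anomaly(muf, sigma, k=3.0):
    """Flag when |MUF| exceeds k standard deviations of the balance uncertainty."""
    return abs(muf) > k * sigma

book, muf = material_balance(opening=100.0, receipts=[20.0, 5.0],
                             shipments=[10.0], measured_ending=114.2)
```

    Near-real-time accountancy repeats this test on much shorter balance intervals, trading per-interval detection power for timeliness, which is the trade-off the abstract describes.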

  7. Flight Model Development of Tokyo Tech Nano-Satellite Cute-1.7 + APD II

    Science.gov (United States)

    Ashida, Hiroki; Nishida, Junichi; Omagari, Kuniyuki; Fujiwara, Ken; Konda, Yasumi; Yamanaka, Tomio; Tanaka, Yohei; Maeno, Masaki; Fujihashi, Kota; Inagawa, Shinichi; Miura, Yoshiyuki; Matunaga, Saburo

    The Laboratory for Space Systems at the Tokyo Institute of Technology has developed the nano-satellite Cute-1.7+APD. The satellite was launched by JAXA M-V-8 rocket on February 22, 2006 and operated for about a month. A successor to the Cute-1.7+APD was developed and is named Cute-1.7+APD II. This new satellite is based on its predecessor but has some modifications. In this paper an overview of the Cute-1.7 series and flight model development of Cute-1.7+APD II are introduced.

  8. LRS Bianchi type-II dark energy model in a scalar-tensor theory of gravitation

    Science.gov (United States)

    Naidu, R. L.; Satyanarayana, B.; Reddy, D. R. K.

    2012-04-01

    A locally rotationally symmetric Bianchi type-II (LRS B-II) space-time with a variable equation of state (EoS) parameter and a constant deceleration parameter has been investigated in the scalar-tensor theory proposed by Saez and Ballester (Phys. Lett. A 113:467, 1986). The scalar-tensor field equations have been solved by applying the variation law for the generalized Hubble parameter given by Berman (Nuovo Cimento B 74:182, 1983). The physical and kinematical properties of the model are also discussed.

  9. DIVWAG Model Documentation. Volume II. Programmer/Analyst Manual. Part 1.

    Science.gov (United States)

    1976-07-01

    [Garbled OCR from the scanned report: only routine names (GCLAST, GUPDAT, DMPTOE), manual index references, and fragments of a Ground Combat Model sample-output figure caption are recoverable; no coherent abstract text survives.]

  10. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing, among others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may also affect the pollution-health estimate, such as the estimation of pollution concentrations and the choice of spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
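    A common way to combine per-model effect estimates, sketched below, weights each model by its approximate posterior probability (e.g., from BIC differences) and adds the between-model spread to the within-model variances. The weighting scheme and inputs are illustrative; the paper's exact averaging scheme may differ.

```python
# Sketch of Bayesian model averaging over per-model effect estimates.
# BIC-based weights are a standard approximation, assumed here for illustration.
import math

def bma_combine(estimates, variances, bics):
    """Pool estimates with BIC weights; variance adds between-model spread."""
    b0 = min(bics)
    w = [math.exp(-0.5 * (b - b0)) for b in bics]
    total = sum(w)
    w = [x / total for x in w]
    mean = sum(wi * e for wi, e in zip(w, estimates))
    var = sum(wi * (v + (e - mean) ** 2) for wi, e, v in zip(w, estimates, variances))
    return mean, var, w
```

    The extra between-model term is what makes the averaged interval honestly wider when the candidate models disagree about the pollution-health effect.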

  11. Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, J. [Pennsylvania U.; Guy, J. [LBL, Berkeley; Kessler, R. [Chicago U., KICP; Astier, P. [Paris U., VI-VII; Marriner, J. [Fermilab; Betoule, M. [Paris U., VI-VII; Sako, M. [Pennsylvania U.; El-Hage, P. [Paris U., VI-VII; Biswas, R. [Argonne; Pain, R. [Paris U., VI-VII; Kuhlmann, S. [Argonne; Regnault, N. [Paris U., VI-VII; Frieman, J. A. [Fermilab; Schneider, D. P. [Penn State U.

    2014-08-29

    We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w (input) – w (recovered)) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.

  12. Prediction of MHC class II binding peptides based on an iterative learning model

    Science.gov (United States)

    Murugan, Naveen; Dai, Yang

    2005-01-01

    Background Prediction of the binding ability of antigen peptides to major histocompatibility complex (MHC) class II molecules is important in vaccine development. The variable length of each binding peptide complicates this prediction. Motivated by a text mining model designed for building a classifier from labeled and unlabeled examples, we have developed an iterative supervised learning model for the prediction of MHC class II binding peptides. Results A linear programming (LP) model was employed for the learning task at each iteration, since it is fast and can re-optimize the previous classifier when the training sets are altered. The performance of the new model has been evaluated with benchmark datasets. The outcome demonstrates that the model achieves an accuracy of prediction that is competitive with advanced predictors (the Gibbs sampler and TEPITOPE). The average areas under the ROC curve obtained from one variant of our model are 0.753 and 0.715 for the original and homology reduced benchmark sets, respectively. The corresponding values are respectively 0.744 and 0.673 for the Gibbs sampler and 0.702 and 0.667 for TEPITOPE. Conclusion The iterative learning procedure appears to be effective in prediction of MHC class II binders. It offers an alternative approach to this important prediction problem. PMID:16351712
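    The iterative scheme can be illustrated with a toy self-training loop: train on the labeled peptides, label the unlabeled pool with the current classifier, then retrain on the union. A perceptron stands in here for the paper's linear-programming classifier, and the data are synthetic.

```python
# Toy sketch of iterative learning from labeled + unlabeled examples.
# A perceptron replaces the paper's LP classifier for simplicity.

def train_perceptron(data, labels, epochs=50):
    """Classic perceptron updates; the last weight is the bias term."""
    w = [0.0] * (len(data[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(data, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            if y * score <= 0:                      # misclassified: update
                for i, xi in enumerate(x):
                    w[i] += y * xi
                w[-1] += y
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + w[-1] > 0 else -1

def iterative_fit(labeled, labels, unlabeled, iterations=3):
    """Self-training: pseudo-label the unlabeled pool, retrain, repeat."""
    w = train_perceptron(labeled, labels)
    for _ in range(iterations):
        pseudo = [predict(w, x) for x in unlabeled]
        w = train_perceptron(labeled + unlabeled, labels + pseudo)
    return w
```

    The LP formulation in the paper plays the same role as `train_perceptron` here, with the advantage that it can warm-start from the previous iteration's solution when the pseudo-labels change.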

  13. Accounting assessment

    Directory of Open Access Journals (Sweden)

    Kafka S.М.

    2017-03-01

    Full Text Available The proper valuation of accounting objects has an essential influence on the reliability of assessing a company's financial position, so the problem of accounting measurement is highly relevant. The methodological basis for the research is the work of domestic and foreign scholars on the valuation of accounting objects, together with the regulatory and legal acts of Ukraine governing accounting and financial reporting. The author uses theoretical methods of cognition (abstraction and generalization, analysis and synthesis, induction and deduction, and other methods producing conceptual knowledge) to synthesize the theoretical and methodological principles of valuing assets, liabilities and equity in accounting. Tabular presentation and information-comparison methods are used for the analytical research. The article considers modern approaches to the valuation of accounting objects and financial statement items. It argues that records should be kept at historical cost, while financial statement items should be presented according to their valuation at the reporting date. In this connection, the depreciation of fixed assets is considered as a process of systematically returning into circulation the funds previously advanced for the purchase (production, improvement) of fixed assets and intangible assets, by including the amount of wear in production costs. It is therefore proposed to amortize only the actual costs incurred, i.e. not to depreciate fixed assets received free of charge or revaluation surpluses of various kinds.

  14. LRS Bianchi Type II Massive String Cosmological Models with Magnetic Field in Lyra's Geometry

    Directory of Open Access Journals (Sweden)

    Raj Bali

    2013-01-01

    Full Text Available Bianchi type II massive string cosmological models with a magnetic field and a time-dependent gauge function in the framework of Lyra's geometry are investigated. The magnetic field is in the -plane. To obtain the deterministic solution, we have assumed that the shear is proportional to the expansion. This leads to a relation between the metric potentials involving a constant. We find that the models start with a big bang at the initial singularity and the expansion decreases with the lapse of time. The anisotropy is maintained throughout but the model isotropizes when . The physical and geometrical aspects of the model in the presence and absence of the magnetic field are also discussed.

  15. Controlling FAMA by the Ptolemy II model of ion beam transport

    Energy Technology Data Exchange (ETDEWEB)

    Balvanovic, R. [Vinca Institute of Nuclear Sciences, P.O. Box 522, Belgrade 11001 (Serbia)], E-mail: broman@vinca.rs; Radenovic, B. [Institute of Physics, Pregrevica 118, Belgrade 11080 (Serbia); Belicev, P.; Neskovic, N. [Vinca Institute of Nuclear Sciences, P.O. Box 522, Belgrade 11001 (Serbia)

    2009-08-11

    FAMA is a facility for the modification and analysis of materials with ion beams. Due to the wide range of ion beams and energies used in the facility and its future expansion, the need has arisen for faster tuning of ion beam transport control parameters. With this aim, a new approach to modeling the ion-beam transport system was developed, based on the Ptolemy II modeling and design framework. A model in Ptolemy II is a hierarchical aggregation of components called actors, which communicate with other actors using tokens, or pieces of data. Each ion optical element is modeled by a composite actor implementing a beam matrix transformation function, while tokens carry the beam matrix data. A basic library of models of typical ion optical elements was developed, and a complex model of the FAMA ion beam transport system was hierarchically integrated with a bottom-up approach. The model was extended to include control functions. The developed model is modular, flexible and extensible. The results obtained by simulation on the model demonstrate easy and efficient tuning of beam line control parameters. Fine tuning of the control parameters, due to uncertainties inherent to modeling, still has to be performed on-line.
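    The actor idea, where each beam-line element transforms a token carrying the beam matrix, can be sketched with standard first-order transfer matrices acting on (x, x') phase-space coordinates; chaining elements composes the optics. The element values are illustrative, not FAMA parameters.

```python
# Sketch of chained beam-line elements as matrix transformations on (x, x').
# A drift of length L and a thin lens of focal length f; values are illustrative.

def drift(length):
    return [[1.0, length], [0.0, 1.0]]

def thin_lens(f):
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def compose(elements):
    """Total transfer matrix: later elements multiply from the left."""
    total = [[1.0, 0.0], [0.0, 1.0]]
    for m in elements:
        total = matmul(m, total)
    return total

def propagate(state, elements):
    """Send a (x, x') 'token' through the chain of element 'actors'."""
    m = compose(elements)
    return [m[0][0] * state[0] + m[0][1] * state[1],
            m[1][0] * state[0] + m[1][1] * state[1]]
```

    A parallel ray through a lens of focal length f crosses the axis after a drift of length f, which gives a quick sanity check on the composition order.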

  16. The theoretical and computational models of the GASFLOW-II code

    International Nuclear Information System (INIS)

    Travis, J.R.

    1999-01-01

    GASFLOW-II is a finite-volume computer code that solves the time-dependent compressible Navier-Stokes equations for multiple gas species in a dispersed liquid water two-phase medium. The fluid-dynamics algorithm is coupled to the chemical kinetics of combusting gases to simulate diffusion or propagating flames in complex geometries of nuclear containments. GASFLOW-II is therefore able to predict gaseous distributions and thermal and pressure loads on containment structures and safety related equipment in the event combustion occurs. Current developments of GASFLOW-II are focused on hydrogen distribution, mitigation measures including carbon dioxide inerting, and possible combustion events in nuclear reactor containments. Fluid turbulence is calculated to enhance the transport and mixing of gases in rooms and volumes that may be connected by a ventilation system. Condensation, vaporization, and heat transfer to walls, floors, ceilings, internal structures, and within the fluid are calculated to model the appropriate mass and energy sinks. (author)

  17. Modeling the Photoelectron Spectra of MoNbO2(-) Accounting for Spin Contamination in Density Functional Theory.

    Science.gov (United States)

    Thompson, Lee M; Hratchian, Hrant P

    2015-08-13

    Spin contamination in density functional studies has been identified as a cause of discrepancies between theoretical and experimental spectra of metal oxide clusters such as MoNbO2. We perform calculations to simulate the photoelectron spectra of the MoNbO2 anion using broken-symmetry density functional theory incorporating recently developed approximate projection methods. These calculations are able to account for the presence of contaminating spin states at single-reference computational cost. Results using these new tools demonstrate the significant effect of spin-contamination on geometries and force constants and show that the related errors in simulated spectra may be largely overcome by using an approximate projection model.

  18. Mathematical Modeling and a Hybrid NSGA-II Algorithm for Process Planning Problem Considering Machining Cost and Carbon Emission

    Directory of Open Access Journals (Sweden)

    Jin Huang

    2017-09-01

    Full Text Available Process planning is an important function in a manufacturing system; it specifies the manufacturing requirements and details for the shop floor to convert a part from raw material to the finished form. However, considering only an economic criterion with technological constraints is not enough in sustainable manufacturing practice; until now, criteria concerning low-carbon-emission awareness have seldom been taken into account in process planning optimization. In this paper, a mathematical model that considers both machining cost reduction and carbon emission reduction is established for the process planning problem. However, due to various flexibilities together with complex precedence constraints between operations, the process planning problem is non-deterministic polynomial-time (NP) hard. Aiming at the distinctive features of multi-objective process planning optimization, we then developed a hybrid non-dominated sorting genetic algorithm (NSGA-II) to tackle this problem. A local search method that considers both the total cost criterion and the carbon emission criterion is introduced into the proposed algorithm to avoid being trapped in local optima. Moreover, the technique for order preference by similarity to an ideal solution (TOPSIS) method is adopted to determine the best solution from the Pareto front. Experiments have been conducted using Kim's benchmark. Computational results show that process plan schemes with low carbon emission can be captured, and, more importantly, the proposed hybrid NSGA-II algorithm can obtain a more promising Pareto front than the plain NSGA-II algorithm. Meanwhile, according to the computational results on Kim's benchmark, we find that both the total machining cost and the carbon emission are roughly proportional to the number of operations, and a process plan with fewer operations may be more satisfactory. This study will draw references for the further research on green
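    The TOPSIS step that picks the final plan from the Pareto front can be sketched directly: normalize the (cost, carbon) matrix, find the ideal and anti-ideal points, and rank alternatives by relative closeness. Both criteria are treated as costs (lower is better); the example matrix is illustrative.

```python
# Sketch of TOPSIS selection over a Pareto front of (machining cost, carbon
# emission) pairs; both criteria are minimized. Data are illustrative.
import math

def topsis_best(matrix):
    """Return the index of the alternative with the highest relative closeness."""
    n_crit = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    nm = [[row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [min(col) for col in zip(*nm)]   # cost criteria: smaller is better
    anti = [max(col) for col in zip(*nm)]
    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
    closeness = [dist(r, anti) / (dist(r, anti) + dist(r, ideal)) for r in nm]
    return max(range(len(matrix)), key=closeness.__getitem__)
```

    In the paper's workflow, equal criterion weights are implicit in this sketch; a weighted variant would scale each normalized column before computing distances.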

  19. Accounting for pH heterogeneity and variability in modelling human health risks from cadmium in contaminated land

    International Nuclear Information System (INIS)

    Gay, J. Rebecca; Korre, Anna

    2009-01-01

    The authors have previously published a methodology which combines quantitative probabilistic human health risk assessment and spatial statistical methods (geostatistics) to produce an assessment, incorporating uncertainty, of risks to human health from exposure to contaminated land. The model assumes a constant soil-to-plant concentration factor (CF_veg) when calculating intake of contaminants. This model is modified here to enhance its use in a situation where CF_veg varies according to soil pH, as is the case for cadmium. The original methodology uses sequential indicator simulation (SIS) to map soil concentration estimates for one contaminant across a site. A real, age-stratified population is mapped across the contaminated area, and intake of soil contaminants by individuals is calculated probabilistically using an adaptation of the Contaminated Land Exposure Assessment (CLEA) model. The proposed improvement involves not only the geostatistical estimation of the contaminant concentration, but also that of soil pH, which in turn leads to a variable CF_veg estimate which influences the human intake results. The results presented demonstrate that taking pH into account can greatly influence the outcome of the risk assessment. It is proposed that a similar adaptation could be used for other combinations of soil variables which influence CF_veg.
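The pH-dependent intake calculation can be illustrated with a toy Monte Carlo sketch. The functional form of CF_veg(pH) and every constant below are hypothetical placeholders, chosen only so that cadmium uptake falls as pH rises; the actual CLEA adaptation uses site-specific, geostatistically estimated inputs.

```python
import random

def cf_veg(pH, a=2.7, b=0.35):
    """Hypothetical soil-to-plant concentration factor for Cd that
    falls as pH rises (Cd is more plant-available in acidic soil).
    The form and the constants a, b are illustrative only."""
    return a * 10 ** (-b * pH)

def simulated_intake(n, soil_cd, pH_mean, pH_sd, veg_rate=0.3, seed=1):
    """Mean intake (mg/day) over n Monte Carlo draws of soil pH."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        pH = rng.gauss(pH_mean, pH_sd)
        total += soil_cd * cf_veg(pH) * veg_rate
    return total / n

# Same soil concentration, different pH regimes.
acid = simulated_intake(5000, soil_cd=3.0, pH_mean=5.0, pH_sd=0.3)
neutral = simulated_intake(5000, soil_cd=3.0, pH_mean=7.0, pH_sd=0.3)
# The more acidic soil yields a larger estimated intake.
```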

  20. Explanations, mechanisms, and developmental models: Why the nativist account of early perceptual learning is not a proper mechanistic model

    Directory of Open Access Journals (Sweden)

    Radenović Ljiljana

    2013-01-01

    Full Text Available In the last several decades a number of studies on perceptual learning in early infancy have suggested that even infants seem to be sensitive to the way objects move and interact in the world. In order to explain the early emergence of infants’ sensitivity to causal patterns in the world, some psychologists have proposed that core knowledge of objects and causal relations is innate (Leslie & Keeble, 1987; Carey & Spelke, 1994; Keil, 1995; Spelke et al., 1994). The goal of this paper is to examine the nativist developmental model by investigating the criteria that a mechanistic model needs to fulfill if it is to be explanatory. Craver (2006) put forth a number of such criteria and developed a few very useful distinctions between explanation sketches and proper mechanistic explanations. By applying these criteria to the nativist developmental model I aim to show, firstly, that nativists only partially characterize the phenomenon at stake, without giving us the details of when and under which conditions perception and attention in early infancy take place. Secondly, nativists start off with a description of the phenomena to be explained (even if it is only a partial description) but import into it a particular theory of perception that requires further empirical evidence and further defense on its own. Furthermore, I argue that innate knowledge is a good candidate for a filler term (a term that is used to name still-unknown processes and parts of the mechanism and is likely to become redundant). Recent extensive research on early intermodal perception indicates that the mechanism enabling the perception of regularities and causal patterns in early infancy is grounded in our neurophysiology. However, this mechanism is fairly basic and does not involve highly sophisticated cognitive structures or innate core knowledge. I conclude with a remark that a closer examination of the mechanisms involved in early perceptual learning indicates that the nativism

  1. Goals and Psychological Accounting

    DEFF Research Database (Denmark)

    Koch, Alexander Karl; Nafziger, Julia

    We model how people formulate and evaluate goals to overcome self-control problems. People often attempt to regulate their behavior by evaluating goal-related outcomes separately (in narrow psychological accounts) rather than jointly (in a broad account). To explain this evidence, our theory of endogenous narrow or broad psychological accounts combines insights from the literatures on goals and mental accounting with models of expectations-based reference-dependent preferences. By formulating goals the individual creates expectations that induce reference points for task outcomes. These goal-induced reference points make substandard performance psychologically painful and motivate the individual to stick to his goals. How strong the commitment to goals is depends on the type of psychological account. We provide conditions under which it is optimal to evaluate goals in narrow accounts. The key intuition...

  2. An empirical test of Birkett’s competency model for management accountants : A confirmative study conducted in the Netherlands

    NARCIS (Netherlands)

    Bots, J.M.; Groenland, E.A.G.; Swagerman, D.

    2009-01-01

    In 2002, the Accountants-in-Business section of the International Federation of Accountants (IFAC) issued the Competency Profiles for Management Accounting Practice and Practitioners report. This “Birkett Report” presents a framework for competency development during the careers of management

  3. A fundamentalist perspective on accounting and implications for accounting

    OpenAIRE

    Guohua Jiang; Stephen Penman

    2013-01-01

    This paper presents a framework for addressing normative accounting issues for reporting to shareholders. The framework is an alternative to the emerging Conceptual Framework of the International Accounting Standards Board and the Financial Accounting Standards Board. The framework can be broadly characterized as a utilitarian approach to accounting standard setting. It has two main features. First, accounting is linked to valuation models under which shareholders use accounting information t...

  4. Atucha II NPP full scope simulator modelling with the thermal hydraulic code TRACRT

    International Nuclear Information System (INIS)

    Alonso, Pablo Rey; Ruiz, Jose Antonio; Rivero, Norberto

    2011-01-01

    In February 2010 NA-SA (Nucleoelectrica Argentina S.A.) awarded Tecnatom the Atucha II full scope simulator project. NA-SA is a public company that owns the Argentinean nuclear power plants. Atucha II is due to enter into operation shortly. Atucha II NPP is a PHWR-type plant cooled by the water of the Parana River and has the same design as the Atucha I unit, doubling its power capacity. Atucha II will produce 745 MWe utilizing heavy water as coolant and moderator, and natural uranium as fuel. A singular feature of the plant is its permanent core refueling. TRACRT is the first real-time thermal hydraulic six-equation code used in the training simulation industry for NSSS modeling. It is the result of adapting the best-estimate code TRACG to real time. TRACRT is based on first-principle conservation equations for mass, energy and momentum for the liquid and steam phases, with two-phase flows under non-homogeneous and non-equilibrium conditions. At present, it has been successfully implemented in twelve full scope replica simulators in different training centers throughout the world. To ease the modeling task, TRACRT includes a graphical pre-processing tool designed to optimize this process and alleviate the burden of entering alphanumerical data in an input file. (author)

  5. The murine angiotensin II-induced abdominal aortic aneurysm model: rupture risk and inflammatory progression patterns

    Directory of Open Access Journals (Sweden)

    Richard Y Cao

    2010-07-01

    Full Text Available An abdominal aortic aneurysm (AAA) is an enlargement of the largest artery in the body, defined as an increase in diameter of 1.5-fold. AAAs are common in the elderly population and thousands die each year from their complications. The most commonly used mouse model to study the pathogenesis of AAA is the angiotensin II (Ang II) infusion method, delivered via osmotic mini-pump for 28 days. Here, we studied the site-specificity and onset of aortic rupture, characterized three-dimensional (3D) images and flow patterns in developing AAAs by ultrasound imaging, and examined macrophage infiltration in the Ang II model using 65 apolipoprotein E deficient mice. Aortic rupture occurred in 16 mice (25%), was nearly as prevalent at the aortic arch (44%) as in the suprarenal region (56%), and was most common within the first seven days after Ang II infusion (12 of 16; 75%). Longitudinal ultrasound screening was found to correlate well with histological analysis, and AAA volume renderings showed a significant relationship with the AAA severity index. Aortic dissection preceded altered flow patterns, and macrophage infiltration was a prominent characteristic of developing AAAs. Targeting the inflammatory component of AAA disease with novel therapeutics will hopefully lead to new strategies to attenuate aneurysm growth and aortic rupture.

  6. Regulatory activity based risk model identifies survival of stage II and III colorectal carcinoma.

    Science.gov (United States)

    Liu, Gang; Dong, Chuanpeng; Wang, Xing; Hou, Guojun; Zheng, Yu; Xu, Huilin; Zhan, Xiaohui; Liu, Lei

    2017-11-17

    Clinical and pathological indicators are inadequate for prognosis of stage II and III colorectal carcinoma (CRC). In this study, we utilized the activity of regulatory factors, univariate Cox regression and random forest for variable selection, and developed a multivariate Cox model to predict the overall survival of stage II/III colorectal carcinoma in the GSE39582 dataset (469 samples). Patients in the low-risk group showed significantly longer overall survival and recurrence-free survival times than those in the high-risk group. This finding was further validated in five other independent datasets (GSE14333, GSE17536, GSE17537, GSE33113, and GSE37892). In addition, associations between clinicopathological information and the risk score were analyzed. A nomogram including the risk score was plotted to facilitate its utilization. The risk score model is also demonstrated to be effective in predicting both overall and recurrence-free survival of patients who received chemotherapy. After performing Gene Set Enrichment Analysis (GSEA) between the high- and low-risk groups, several cell-cell interaction KEGG pathways were identified. Funnel plot results showed that there was no publication bias in these datasets. In summary, by utilizing regulatory activity in stage II and III colorectal carcinoma, the risk score successfully predicts the survival of 1021 stage II/III CRC patients in six independent datasets.
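The risk-score idea in this record can be sketched in a few lines: a fitted Cox model reduces each patient's covariates to a linear predictor, and patients are then split into low- and high-risk groups at the median score. The features, coefficients, and patient values below are hypothetical, not those of the GSE39582 model.

```python
import numpy as np

def cox_risk_scores(X, beta):
    """Linear predictor of a fitted Cox model: score_i = x_i . beta.
    Higher scores mean higher estimated hazard."""
    return np.asarray(X, float) @ np.asarray(beta, float)

def dichotomize(scores):
    """Split patients into low/high risk at the median score, as is
    commonly done before comparing survival curves."""
    cut = np.median(scores)
    return np.where(scores > cut, "high", "low")

# Hypothetical regulatory-activity features (rows: 4 patients) and
# hypothetical Cox coefficients.
X = np.array([[0.2, 1.1], [1.5, 0.3], [0.9, 0.8], [2.0, 1.4]])
beta = np.array([0.8, -0.5])
scores = cox_risk_scores(X, beta)
groups = dichotomize(scores)
```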

  7. Reactive Transport Modeling of Microbe-mediated Fe (II) Oxidation for Enhanced Oil Recovery

    Science.gov (United States)

    Surasani, V.; Li, L.

    2011-12-01

    Microbially Enhanced Oil Recovery (MEOR) aims to improve the recovery of entrapped heavy oil in depleted reservoirs using microbe-based technology. Reservoir ecosystems often contain diverse microbial communities that can interact with subsurface fluids and minerals through a network of nutrient and energy fluxes. Microbe-mediated reaction products include gases, biosurfactants, and biopolymers that can alter the properties of oil and the interfacial interactions between oil, brine, and rock. In addition, the produced biomass and mineral precipitates can change the reservoir permeability profile and increase sweeping efficiency. Under subsurface conditions, the injection of nitrate and Fe(II) as the electron acceptor and donor allows bacteria to grow. The reaction products include minerals such as Fe(OH)3 and nitrogen-containing gases. These reaction products can have a large impact on oil and reservoir properties and can enhance the recovery of trapped oil. This work aims to understand Fe(II) oxidation by nitrate under conditions relevant to MEOR. Reactive transport modeling is used to simulate the fluid flow, transport, and reactions involved in this process. Here we developed a complex reaction network for microbially mediated nitrate-dependent Fe(II) oxidation that involves both thermodynamically controlled aqueous reactions and a kinetically controlled Fe(II) mineral reaction. Reactive transport modeling is used to understand and quantify the coupling between flow, transport, and reaction processes. Our results identify key parameter controls that are important for the alteration of the permeability profile under field conditions.

  8. Modelling of the physico-chemical behaviour of clay minerals with a thermo-kinetic model taking into account particles morphology in compacted material.

    Science.gov (United States)

    Sali, D.; Fritz, B.; Clément, C.; Michau, N.

    2003-04-01

    Modelling of fluid-mineral interactions is widely used in Earth Sciences studies to better understand the physicochemical processes involved and their long-term effect on the behaviour of materials. Numerical models simplify the processes but try to preserve their main characteristics. The modelling results therefore strongly depend on the quality of the data describing the initial physicochemical conditions for rock materials, fluids and gases, and on how realistically the processes are represented. Current geochemical models do not adequately take into account rock porosity and permeability or the particle morphology of clay minerals. In compacted materials like those considered as barriers in waste repositories, low-permeability rocks like mudstones or compacted powders will be used: they contain mainly fine particles, and the geochemical models used for predicting their interactions with fluids tend to misjudge their surface areas, which are fundamental parameters in kinetic modelling. The purpose of this study was to improve how particle morphology is taken into account in the thermo-kinetic code KINDIS and the reactive transport code KIRMAT. A new function was integrated into these codes, treating the reaction surface area as a volume-dependent parameter, and the calculated evolution of the mass balance in the system was coupled with the evolution of reactive surface areas. We carried out application exercises for numerical validation of these new versions of the codes, and the results were compared with those of the pre-existing thermo-kinetic code KINDIS. Several points are highlighted. Taking reactive surface area evolution into account during simulation modifies the predicted mass transfers related to fluid-mineral interactions. Different secondary mineral phases are also observed during modelling. The evolution of the reactive surface parameter helps to solve the competition effects between the different phases present in the system which are all able to fix the chemical

  9. Computerized transportation model for the NRC Physical Protection Project. Versions I and II

    International Nuclear Information System (INIS)

    Anderson, G.M.

    1978-01-01

    Details on two versions of a computerized model for the transportation system of the NRC Physical Protection Project are presented. The Version I model permits scheduling of all types of transport units associated with a truck fleet, including truck trailers, truck tractors, escort vehicles and crews. A fixed-fleet itinerary construction process is used in which iterations on fleet size are required until the service requirements are satisfied. The Version II model adds an aircraft mode capability and provides for a more efficient non-fixed-fleet itinerary generation process. Test results using both versions are included

  10. ABACC - Brazil-Argentina Agency for Accounting and Control of Nuclear Materials, a model of integration and transparence

    International Nuclear Information System (INIS)

    Oliveira, Antonio A.; Do Canto, Odilon Marcusso

    2013-01-01

    Argentina and Brazil began their activities in the nuclear area at about the same time, in the 1950s. The existence of an international nuclear non-proliferation treaty (NPT), seen by Brazil and Argentina as discriminatory and prejudicial to the interests of countries without nuclear weapons, led to the need for a common system of control of nuclear material between the two countries, to somehow provide assurances to the international community of the exclusively peaceful purpose of their nuclear programs. The creation of a common system assured the establishment of uniform procedures to implement safeguards in Argentina and Brazil, so that the same safeguards requirements and procedures took effect in both countries, and the operators of nuclear facilities began to follow the same rules for control of nuclear materials and were subjected to the same type of verification and control. On July 18, 1991, the Bilateral Agreement for the Exclusively Peaceful Use of Nuclear Energy created a binational body, the Argentina-Brazil Agency for Accounting and Control of Nuclear Materials (ABACC), to implement the so-called Common System of Accounting and Control of Nuclear Materials (SCCC). The agreement provided a permanent and clear commitment to use exclusively for peaceful purposes all material and nuclear facilities under the jurisdiction or control of the two countries. The Quadripartite Agreement, signed in December of that year between the two countries, ABACC and the IAEA, completed the legal framework for the implementation of the comprehensive safeguards system. The 'ABACC model' now represents a paradigmatic framework in the long process of economic, political, technological and cultural integration of the two countries. Argentina and Brazil were able to establish a guarantee system that is unique in the world today and that, consolidated and matured over more than twenty years, has earned the respect of the international community

  11. Comprehensive Revenue and Expense Data Collection Methodology for Teaching Health Centers: A Model for Accountable Graduate Medical Education Financing.

    Science.gov (United States)

    Regenstein, Marsha; Snyder, John E; Jewers, Mariellen Malloy; Nocella, Kiki; Mullan, Fitzhugh

    2018-04-01

    Despite considerable federal investment, graduate medical education financing is neither transparent for estimating residency training costs nor accountable for effectively producing a physician workforce that matches the nation's health care needs. The Teaching Health Center Graduate Medical Education (THCGME) program's authorization in 2010 provided an opportunity to establish a more transparent financing mechanism. We developed a standardized methodology for quantifying the necessary investment to train primary care physicians in high-need communities. The THCGME Costing Instrument was designed utilizing guidance from site visits, financial documentation, and expert review. It collects educational outlays, patient service expenses and revenues from residents' ambulatory and inpatient care, and payer mix. The instrument was fielded from April to November 2015 in 43 THCGME-funded residency programs of varying specialties and organizational structures. Of the 43 programs, 36 (84%) submitted THCGME Costing Instruments. The THCGME Costing Instrument collected standardized, detailed cost data on residency labor (n = 36), administration and educational outlays (n = 33), ambulatory care visits and payer mix (n = 30), patient service expenses (n = 26), and revenues generated by residents (n = 26), in contrast to Medicare cost reports, which include only costs incurred by residency programs. The THCGME Costing Instrument provides a model for calculating evidence-based costs and revenues of community-based residency programs, and it enhances accountability by offering an approach that estimates residency costs and revenues in a range of settings. The instrument may have feasibility and utility for application in other residency training settings.

  12. Signaling and Accounting Information

    OpenAIRE

    Stewart C. Myers

    1989-01-01

    This paper develops a signaling model in which accounting information improves real investment decisions. Pure cash flow reporting is shown to lead to underinvestment when managers have superior information but are acting in shareholders' interests. Accounting by prespecified, "objective" rules alleviates the underinvestment problem.

  13. Towards ecosystem accounting

    NARCIS (Netherlands)

    Duku, C.; Rathjens, H.; Zwart, S.J.; Hein, L.

    2015-01-01

    Ecosystem accounting is an emerging field that aims to provide a consistent approach to analysing environment-economy interactions. One of the specific features of ecosystem accounting is the distinction between the capacity and the flow of ecosystem services. Ecohydrological modelling to support

  14. Basis of accountability system

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    The first part of this presentation describes in an introductory manner the accountability design approach which is used for the Model Plant in order to meet US safeguards requirements. The general requirements for the US national system are first presented. Next, the approach taken to meet each general requirement is described. The general concepts and principles of the accountability system are introduced. The second part of this presentation describes some basic concepts and techniques used in the model plant accounting system and relates them to US safeguards requirements. The specifics and mechanics of the model plant accounting system are presented in the third part. The purpose of this session is to enable participants to: (1) understand how the accounting system is designed to meet safeguards criteria for both IAEA and State Systems; (2) understand the principles of materials accounting used to account for element and isotope in the model plant; (3) understand how the computer-based accounting system operates to meet the above objectives.

  15. Eye movement control in reading: accounting for initial fixation locations and refixations within the E-Z Reader model.

    Science.gov (United States)

    Reichle, E D; Rayner, K; Pollatsek, A

    1999-10-01

    Reilly and O'Regan (1998, Vision Research, 38, 303-317) used computer simulations to evaluate how well several different word-targeting strategies could account for results which show that the distributions of fixation locations in reading are systematically related to low-level oculomotor variables, such as saccade distance and launch site [McConkie, Kerr, Reddix & Zola, (1988). Vision Research, 28, 1107-1118]. Their simulation results suggested that fixation locations are primarily determined by word length information, and that the processing of language, such as the identification of words, plays only a minimal role in deciding where to move the eyes. This claim appears to be problematic for our model of eye movement control in reading, E-Z Reader [Rayner, Reichle & Pollatsek (1998). Eye movement control in reading: an overview and model. In G. Underwood, Eye guidance in reading and scene perception (pp. 243-268). Oxford, UK: Elsevier; Reichle, Pollatsek, Fisher & Rayner (1998). Psychological Review, 105, 125-157], because it assumes that lexical access is the engine that drives the eyes forward during reading. However, we show that a newer version of E-Z Reader which still assumes that lexical access is the engine driving eye movements also predicts the locations of fixations and within-word refixations, and therefore provides a viable framework for understanding how both linguistic and oculomotor variables affect eye movements in reading.

  16. Accounts Assistant

    Indian Academy of Sciences (India)

    CHITRA

    (Not more than three months old). Annexure 1. Indian Academy of Sciences. C V Raman Avenue, Bengaluru 560 080. Application for the Post of: Accounts Assistant / Administrative Assistant Trainee / Assistant – Official Language. Implementation Policy / Temporary Copy Editor and Proof Reader / Social Media Manager. 1.

  17. Structural Model of RNA Polymerase II Elongation Complex with Complete Transcription Bubble Reveals NTP Entry Routes.

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2015-07-01

    Full Text Available The RNA polymerase II (Pol II) is a eukaryotic enzyme that catalyzes the synthesis of messenger RNA using a DNA template. Despite numerous biochemical and biophysical studies, it remains elusive whether the "secondary channel" is the only route for NTP to reach the active site of the enzyme or if the "main channel" could be an alternative. In this regard, crystallographic structures of Pol II have been extremely useful for understanding the structural basis of transcription; however, the conformation of the unpaired non-template DNA part of the full transcription bubble (TB) is still unknown. Since diffusion routes of the nucleoside triphosphate (NTP) substrate through the main channel might overlap with the TB region, gaining structural information on the full TB is critical for a complete understanding of the Pol II transcription process. In this study, we have built a structural model of Pol II with a complete transcription bubble based on multiple sources of existing structural data and used Molecular Dynamics (MD) simulations together with structural analysis to shed light on NTP entry pathways. Interestingly, we found that although both channels have enough space to allow NTP loading, the percentage of MD conformations containing enough space for NTP loading through the secondary channel is twice as high as that of the main channel. A further energetic study based on MD simulations with NTP loaded in the channels revealed that diffusion of the NTP through the main channel is greatly disfavored by electrostatic repulsion between the NTP and the highly negatively charged backbones of nucleotides in the non-template DNA strand. Taken together, our results suggest that the secondary channel is the major route for NTP entry during Pol II transcription.

  18. Paraquat-induced injury of type II alveolar cells. An in vitro model of oxidant injury

    International Nuclear Information System (INIS)

    Skillrud, D.M.; Martin, W.J.

    1984-01-01

    Paraquat, a widely used herbicide, causes severe, often fatal lung damage. In vivo studies suggest the alveolar epithelial cells (types I and II) are specific targets of paraquat toxicity. This study used 51Cr-labeled type II cells to demonstrate that paraquat (10(-5) M) resulted in type II cell injury in vitro, independent of interacting immune effector agents. With 51Cr release expressed as the cytotoxic index (CI), type II cell injury was found to accelerate with increasing paraquat concentrations (10(-5) M, 10(-4) M, and 10(-3) M, resulting in a CI of 12.5 +/- 2.2, 22.8 +/- 1.8, and 35.1 +/- 1.9, respectively). Paraquat-induced cytotoxicity (10(-4) M, with a CI of 22.8 +/- 1.8) was effectively reduced by catalase, 1,100 U/ml (CI 8.0 +/- 3.2, p less than 0.001), superoxide dismutase, 300 U/ml (CI 17.4 +/- 1.7, p less than 0.05), and alpha-tocopherol, 10 micrograms/ml (CI 17.8 +/- 1.6, p less than 0.05). Paraquat toxicity (10(-3) M) was potentiated in the presence of 95% O2, with an increase in CI from 31.1 +/- 1.7 to 36.4 +/- 2.3 (p less than 0.05). Paraquat-induced type II cell injury was noted as early as 4 h of incubation by electron microscopic evidence of swelling of mitochondrial cristae and dispersion of nuclear chromatin. Thus, this in vitro model indicates that paraquat-induced type II cell injury can be quantified, confirmed by morphologic ultrastructural changes, significantly reduced by antioxidants, and potentiated by hyperoxia
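The cytotoxic index used in this record is conventionally computed as percent specific 51Cr release relative to spontaneous and maximum release. A minimal sketch with hypothetical counts-per-minute values:

```python
def cytotoxic_index(experimental, spontaneous, maximum):
    """Percent specific 51Cr release: the conventional cytotoxic index
    (CI) used to quantify cell injury in release assays."""
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

# Hypothetical counts-per-minute from a 51Cr release assay.
ci = cytotoxic_index(experimental=850.0, spontaneous=400.0, maximum=2400.0)
# (850 - 400) / (2400 - 400) = 0.225, i.e. a CI of 22.5
```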

  19. Application of Zr/Ti-Pic in the adsorption process of Cu(II), Co(II) and Ni(II) using adsorption physico-chemical models and thermodynamics of the process; Aplicacao de Zr/Ti-PILC no processo de adsorcao de Cu(II), Co(II) e Ni(II) utilizando modelos fisico-quimicos de adsorcao e termodinamica do processo

    Energy Technology Data Exchange (ETDEWEB)

    Guerra, Denis Lima; Airoldi, Claudio [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Inst. de Quimica. Dept. de Quimica Inorganica]. E-mail: dlguerra@iqm.unicamp.br; Lemos, Vanda Porpino; Angelica, Romulo Simoes [Universidade Federal do Para (UFPa), Belem (Brazil); Viana, Rubia Ribeiro [Universidade Federal do Mato Grosso (UFMT), Cuiaba (Brazil). Inst. de Ciencias Exatas e da Terra. Dept. de Recursos Minerais

    2008-07-01

    The aim of this investigation is to study how Zr/Ti-Pic adsorbs metals. The physico-chemical properties of Zr/Ti-Pic have been optimized through pillarization processes, and Cu(II), Ni(II) and Co(II) adsorption from aqueous solution has been carried out, with maximum adsorption values of 8.85, 8.30 and 7.78 x 10⁻¹ mmol g⁻¹, respectively. The Langmuir, Freundlich and Temkin adsorption isotherm models have been applied to fit the experimental data with a linear regression process. The energetic effect caused by metal interaction was determined through calorimetric titration at the solid-liquid interface and gave a net thermal effect that enabled the calculation of the exothermic values and the equilibrium constant. (author)
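A common way to obtain maximum adsorption capacities like those reported in this record is to fit the linearized Langmuir isotherm, Ce/qe = Ce/q_max + 1/(K_L·q_max), by linear regression. The sketch below uses synthetic equilibrium data generated from an assumed q_max and K_L (the K_L value is invented; only the q_max magnitude echoes the Cu(II) result), so the fit simply recovers the assumed parameters.

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Fit the linearized Langmuir isotherm
        Ce/qe = Ce/q_max + 1/(K_L * q_max)
    by least squares; returns (q_max, K_L)."""
    Ce = np.asarray(Ce, float)
    qe = np.asarray(qe, float)
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    q_max = 1.0 / slope
    K_L = slope / intercept
    return q_max, K_L

# Synthetic equilibrium data from assumed parameters.
true_qmax, true_KL = 8.85, 0.6
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # equilibrium conc.
qe = true_qmax * true_KL * Ce / (1.0 + true_KL * Ce)  # Langmuir uptake
q_max, K_L = fit_langmuir(Ce, qe)
```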

  20. Relaxin Treatment in an Ang-II-Based Transgenic Preeclamptic-Rat Model.

    Directory of Open Access Journals (Sweden)

    Nadine Haase

    Full Text Available Relaxin is a pregnancy-related peptide that induces nitric oxide-related and gelatinase-related effects, allowing vasodilation and the pregnancy-related adjustments that permit parturition to occur. Relaxin controls the hemodynamic and renovascular adaptive changes that occur during pregnancy. Interest has evolved regarding relaxin as a therapeutic principle in preeclampsia and heart failure. Preeclampsia is a pregnancy disorder featuring hypertension, proteinuria and placental anomalies. We investigated relaxin in an established transgenic rat model of preeclampsia, in which the phenotype is induced by angiotensin II (Ang-II) production in mid-pregnancy. We gave recombinant relaxin to preeclamptic rats at day 9 of gestation. Hypertension and proteinuria were not ameliorated by relaxin administration. Intrauterine growth retardation of the fetus was unaltered by relaxin. Heart-rate responses and relaxin levels documented drug effects. In this Ang-II-based model of preeclampsia, we could not show a salubrious effect on preeclampsia.