WorldWideScience

Sample records for model ii accounts

  1. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    International Nuclear Information System (INIS)

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-01-01

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
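The distinction the abstract draws can be made concrete with a small nonparametric sketch: the cumulative incidence function that the subdistribution hazard models directly, built from the cause-specific event counts. This is our own illustrative helper (assuming distinct event times, with 0 marking censoring), not the authors' code.

```python
def cumulative_incidence(times, causes, target_cause, at_time):
    """Aalen-Johansen-style cumulative incidence for one competing
    event type. `causes` uses 0 for censoring and 1..K for event
    types. Illustrative sketch assuming distinct event times."""
    pairs = sorted(zip(times, causes))
    n = len(pairs)
    surv = 1.0   # overall Kaplan-Meier survival just before t
    cif = 0.0
    for i, (t, c) in enumerate(pairs):
        if t > at_time:
            break
        at_risk = n - i
        if c == target_cause:
            cif += surv / at_risk   # mass assigned to this cause
        if c != 0:
            surv *= 1.0 - 1.0 / at_risk  # any event depletes survival
    return cif
```

Because every event type depletes the shared survival term, a treatment that reduces competing events can raise the cumulative incidence of the main event even when its cause-specific hazard falls, which is exactly the divergence reported for Event A.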

  2. Accounting Issues: An Essay Series. Part II--Accounts Receivable

    Science.gov (United States)

    Laux, Judith A.

    2007-01-01

    This is the second in a series of articles designed to help academics refocus the introductory accounting course on the theoretical underpinnings of accounting. Intended as a supplement for the principles course, this article connects the asset Accounts Receivable to the essential theoretical constructs, discusses the inherent tradeoffs and…

  3. SARP-II: Safeguards Accounting and Reports Program, Revised

    International Nuclear Information System (INIS)

    Kempf, C.R.

    1994-01-01

A computer code, SARP (Safeguards Accounting and Reports Program), which generates and maintains at-facility safeguards accounting records and produces IAEA safeguards reports from accounting data input by the user, was completed in 1990 by the Safeguards, Safety, and Nonproliferation Division (formerly the Technical Support Organization) at Brookhaven National Laboratory as a task under the US Program of Technical Support to IAEA Safeguards. The code was based on a State System of Accounting for and Control of Nuclear Material (SSAC) for off-load refueled power reactor facilities, with the model facility and safeguards accounting regime described in IAEA Safeguards Publication STR-165. Since 1990, improvements in computing capabilities, together with comments and suggestions from users, have engendered a revision of the original code. The result is an updated, revised version, SARP-II, which is discussed in this report.

  4. A geometrical description of visual sensation II:A complemented model for visual sensation explicitly taking into account the law of Fechner, and its application to Plateau's irradiation

    OpenAIRE

    Ons, Bart; Verstraelen, Pol

    2010-01-01

    Plateau’s irradiation phenomenon in particular describes what one sees when observing a brighter object on a darker background and a physically congruent darker object on a brighter background: the brighter object is seen as being larger. This phenomenon occurs in many optical visual illusions and it involves some fundamental aspects of human vision. We present a general geometrical model of human visual sensation and perception, hereby taking into account the law of Fechner in addition to th...
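The law of Fechner invoked above states that sensation grows with the logarithm of stimulus intensity, so equal intensity *ratios* produce equal sensation *increments*. A minimal sketch (constants `k` and `i0` are illustrative, not from the paper):

```python
import math

def fechner_sensation(intensity, i0=1.0, k=1.0):
    """Fechner's law: sensation is proportional to the logarithm of
    stimulus intensity relative to a threshold intensity i0."""
    return k * math.log(intensity / i0)

# Doubling the intensity adds the same sensation increment
# whether the starting level is low or high:
d_low = fechner_sensation(20.0) - fechner_sensation(10.0)
d_high = fechner_sensation(200.0) - fechner_sensation(100.0)
```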

  5. Some Determinants of Student Performance in Principles of Financial Accounting (II) – Further Evidence from Kuwait

    OpenAIRE

    Khalid, Abdulla A.

    2012-01-01

    The purpose of this study was to perform an empirical investigation of the influence of select factors on the academic performance of students studying Principles of Financial Accounting (II). This study attempts to fill some of the gaps in the existing local and regional accounting education literature and to provide comparative evidence for the harmonization of international accounting education. A stepwise regression model using a sample of 205 students from the College of B...

  6. Stream II-V5: Revision Of Stream II-V4 To Account For The Effects Of Rainfall Events

    International Nuclear Information System (INIS)

    Chen, K.

    2010-01-01

    STREAM II-V4 is the aqueous transport module currently used by the Savannah River Site emergency response Weather Information Display (WIND) system. The transport model of the Water Quality Analysis Simulation Program (WASP) was used by STREAM II to perform contaminant transport calculations. WASP5 is a US Environmental Protection Agency (EPA) water quality analysis program that simulates contaminant transport and fate through surface water. STREAM II-V4 predicts peak concentration and peak concentration arrival time at downstream locations for releases from the SRS facilities to the Savannah River. The input flows for STREAM II-V4 are derived from the historical flow records measured by the United States Geological Survey (USGS). The stream flow for STREAM II-V4 is fixed and the flow only varies with the month in which the releases are taking place. Therefore, the effects of flow surge due to a severe storm are not accounted for by STREAM II-V4. STREAM II-V4 has been revised to account for the effects of a storm event. The steps used in this method are: (1) generate rainfall hyetographs as a function of total rainfall in inches (or millimeters) and rainfall duration in hours; (2) generate watershed runoff flow based on the rainfall hyetographs from step 1; (3) calculate the variation of stream segment volume (cross section) as a function of flow from step 2; (4) implement the results from steps 2 and 3 into the STREAM II model. The revised model (STREAM II-V5) will find the proper stream inlet flow based on the total rainfall and rainfall duration as input by the user. STREAM II-V5 adjusts the stream segment volumes (cross sections) based on the stream inlet flow. The rainfall based stream flow and the adjusted stream segment volumes are then used for contaminant transport calculations.
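Steps 1 and 2 of the revision can be sketched as follows. The triangular hyetograph shape and the rational-method runoff formula (Q = C·i·A) are our own assumptions for illustration; STREAM II-V5's actual generators may differ.

```python
def triangular_hyetograph(total_in, duration_hr, steps):
    """Step 1 sketch: distribute total_in inches of rain over
    duration_hr hours as a symmetric triangular hyetograph.
    Returns per-step intensities in inches/hour."""
    dt = duration_hr / steps
    mid = (steps - 1) / 2.0
    weights = [1.0 - abs(i - mid) / (mid + 1.0) for i in range(steps)]
    scale = total_in / (sum(weights) * dt)  # preserve total depth
    return [w * scale for w in weights]

def runoff_flow(intensities, area_acres, c=0.5):
    """Step 2 sketch: rational-method runoff Q = C i A (cfs per step);
    runoff coefficient c and drainage area are illustrative."""
    return [c * i * area_acres for i in intensities]
```

The key invariant is that the discretized hyetograph integrates back to the user-supplied total rainfall, so the stream inlet flow in steps 3 and 4 is driven by a consistent water volume.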

  7. Modelling in Accounting. Theoretical and Practical Dimensions

    Directory of Open Access Journals (Sweden)

    Teresa Szot-Gabryś

    2010-10-01

Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and ability of adaptation to the information needs of its recipients. One of the main currents in the development of accounting theory and practice is to cover by economic measurements areas which have not been hitherto covered by any accounting system (it applies, for example, to small businesses, agricultural farms, and human capital), which requires the development of an appropriate theoretical and practical model. The article illustrates the issue of modelling in accounting based on the example of an accounting model developed for small businesses, i.e. economic entities which are not obliged by law to keep accounting records.

  8. Implementing a trustworthy cost-accounting model.

    Science.gov (United States)

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  9. Questioning Stakeholder Legitimacy: A Philanthropic Accountability Model.

    Science.gov (United States)

    Kraeger, Patsy; Robichau, Robbie

    2017-01-01

Philanthropic organizations contribute to important work that solves complex problems to strengthen communities. Many of these organizations are moving toward engaging in public policy work, in addition to funding programs. This paper raises questions of legitimacy for foundations, as well as issues of transparency and accountability in a pluralistic democracy. Measures of civic health also inform how philanthropic organizations can be accountable to stakeholders. We propose a holistic model for philanthropic accountability that combines elements of transparency and performance accountability, as well as practices associated with the American pluralistic model for democratic accountability. We argue that philanthropic institutions should seek stakeholder and public input when shaping any public policy agenda. This paper suggests a new paradigm, called philanthropic accountability, that can be used for legitimacy and democratic governance of private foundations engaged in policy work. The Philanthropic Accountability Model can be empirically tested and used as a governance tool.

  10. Understanding financial crisis through accounting models

    NARCIS (Netherlands)

    Bezemer, D.J.

    2010-01-01

This paper presents evidence that accounting (or flow-of-funds) macroeconomic models helped anticipate the credit crisis and economic recession. Equilibrium models ubiquitous in mainstream policy and research did not. This study traces the intellectual pedigrees of the accounting approach as an

  11. Material control and accounting at Exxon Nuclear, II

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1985-01-01

    In this session the measurements and the associated measurement control program used at the Model Plant are described. The procedures for evaluating MUF and sigma MUF are also discussed. The use of material composition codes and their role in IAEA safeguards under the US/IAEA Safeguards Agreement are described. In addition, the various accounting forms used at the plant are described and the use of tamper-indicating seals is discussed

  12. Modelling of functional systems of managerial accounting

    Directory of Open Access Journals (Sweden)

    O.V. Fomina

    2017-12-01

The modern stage of managerial accounting development takes place under the powerful influence of managerial innovations. The article aims to develop an integration model of budgeting and the balanced scorecard within the managerial accounting system that will increase the relevance of decisions made by managers at different levels of management. As a result of the study, the author proposes a highly pragmatic integration model of budgeting and the balanced scorecard in the managerial accounting system, realized through a system for gathering, consolidating, analyzing, and interpreting financial and non-financial information. It increases the relevance of managerial decisions by coordinating both the strategic and operative resources of an enterprise and orienting them toward its goals. The effective integration of the system components makes it possible to distribute limited resources rationally, taking into account prospective purposes and strategic initiatives, to carry

  13. Media Accountability Systems: Models, proposals and outlooks

    Directory of Open Access Journals (Sweden)

    Luiz Martins da Silva

    2007-06-01

This paper analyzes one of the basic actions of SOS-Imprensa, the mechanism to assure Media Accountability with the goal of proposing a synthesis of models for the Brazilian reality. The article aims to address the possibilities of creating and improving mechanisms to stimulate the democratic press process and to mark out and assure freedom of speech and personal rights with respect to the media. Based on the Press Social Responsibility Theory, the hypothesis is that the experiences analyzed (Communication Council, Press Council, Ombudsman and Readers Council are alternatives for accountability, mediation and arbitration, seeking visibility, trust and public support in favor of fairer media.

  14. Guidelines for School Property Accounting in Colorado, Part II--General Fixed Asset Accounts.

    Science.gov (United States)

    Stiverson, Clare L.

    The second publication of a series of three issued by the Colorado Department of Education is designed as a guide for local school districts in the development of a property accounting system. It defines and classifies groups of accounts whereby financial information, taken from inventory records, may be transcribed into debit and credit entries…

  15. Driving Strategic Risk Planning With Predictive Modelling For Managerial Accounting

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

Currently, risk management in management/managerial accounting is treated as deterministic. Although it is well-known that risk estimates are necessarily uncertain or stochastic, until recently the methodology required to handle stochastic risk-based elements appeared impractical and too mathematical. The ultimate purpose of this paper is to “make the risk concept procedural and analytical” and to argue that accountants should now include stochastic risk management as a standard tool. Drawing on mathematical modelling and statistics, this paper methodically develops a risk analysis approach for managerial accounting and shows how it can be used to determine the impact of different types of risk assessment input parameters on the variability of important outcome measures. The purpose is to: (i) point out the theoretical necessity of a stochastic risk framework; (ii) present a stochastic framework…

  16. Fusion of expertise among accounting faculty. Towards an expertise model for academia in accounting.

    NARCIS (Netherlands)

    Njoku, Jonathan C.; van der Heijden, Beatrice; Inanga, Eno L.

    2010-01-01

    This paper aims to portray an accounting faculty expert. It is argued that neither the academic nor the professional orientation alone appears adequate in developing accounting faculty expertise. The accounting faculty expert is supposed to develop into a so-called ‘flexpert’ (Van der Heijden, 2003)

  17. A simulation model for material accounting systems

    International Nuclear Information System (INIS)

    Coulter, C.A.; Thomas, K.E.

    1987-01-01

    A general-purpose model that was developed to simulate the operation of a chemical processing facility for nuclear materials has been extended to describe material measurement and accounting procedures as well. The model now provides descriptors for material balance areas, a large class of measurement instrument types and their associated measurement errors for various classes of materials, the measurement instruments themselves with their individual calibration schedules, and material balance closures. Delayed receipt of measurement results (as for off-line analytical chemistry assay), with interim use of a provisional measurement value, can be accurately represented. The simulation model can be used to estimate inventory difference variances for processing areas that do not operate at steady state, to evaluate the timeliness of measurement information, to determine process impacts of measurement requirements, and to evaluate the effectiveness of diversion-detection algorithms. Such information is usually difficult to obtain by other means. Use of the measurement simulation model is illustrated by applying it to estimate inventory difference variances for two material balance area structures of a fictitious nuclear material processing line
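One quantity the abstract highlights, the inventory difference (ID) variance, can be illustrated with a Monte Carlo sketch for the simplest steady-state balance, ID = (beginning + receipts) − (shipments + ending) under independent relative measurement errors. This simplified stand-in is ours; the paper's simulation also covers non-steady-state operation and delayed measurement results.

```python
import random

def simulate_id_variance(true_vals, rel_sigmas, n_trials=50_000, seed=42):
    """Monte Carlo estimate of Var(ID) for one balance period.
    true_vals = (beginning, receipts, shipments, ending);
    rel_sigmas are the corresponding relative measurement errors."""
    rng = random.Random(seed)
    b, r, s, e = true_vals
    sb, sr, ss, se = rel_sigmas
    ids = []
    for _ in range(n_trials):
        mb = b * (1 + rng.gauss(0, sb))   # measured beginning inventory
        mr = r * (1 + rng.gauss(0, sr))   # measured receipts
        ms = s * (1 + rng.gauss(0, ss))   # measured shipments
        me = e * (1 + rng.gauss(0, se))   # measured ending inventory
        ids.append((mb + mr) - (ms + me))
    mean = sum(ids) / n_trials
    return sum((x - mean) ** 2 for x in ids) / (n_trials - 1)
```

For independent errors the analytic value is the sum of (value × relative sigma)² over the four terms, which the simulation reproduces; the simulation approach pays off precisely where such closed forms break down.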

  18. Modelling in Accounting. Theoretical and Practical Dimensions

    OpenAIRE

Teresa Szot-Gabryś

    2010-01-01

    Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and ability of adaptation to information needs of information recipients. One of the main currents in the development of accounting theory and practice is to cover by economic...

  19. Model of accounting regulation in Lithuania

    OpenAIRE

    Rudžionienė, Kristina; Gipienė, Gailutė

    2008-01-01

This paper analyses the regulation of the accounting system in Lithuania. There are different approaches to accounting regulation. The "free market" approach, for example, opposes any regulation, holding that every process in the market will reach equilibrium by itself. It is nevertheless widely accepted that regulation is important and useful, especially in financial accounting: it makes the information in financial reports understandable and comparable within one country or across countries. There are three theories ...

  20. An object-oriented model for ex ante accounting information

    NARCIS (Netherlands)

    Verdaasdonk, P.J.A.

    2003-01-01

Present accounting data models such as the Resource-Event-Agent (REA) model merely focus on the modeling of static accounting phenomena. In this paper, it is argued that these models are not able to provide relevant ex ante accounting data for operations management decisions. These decisions require

  1. River water quality modelling: II

    DEFF Research Database (Denmark)

    Shanahan, P.; Henze, Mogens; Koncsos, L.

    1998-01-01

The U.S. EPA QUAL2E model is currently the standard for river water quality modelling. While QUAL2E is adequate for the regulatory situation for which it was developed (the U.S. wasteload allocation process), there is a need for a more comprehensive framework for research and teaching. Moreover, QUAL2E and similar models do not address a number of practical problems such as stormwater-flow events, nonpoint source pollution, and transient streamflow. Limitations in model formulation affect the ability to close mass balances, to represent sessile bacteria and other benthic processes, and to achieve robust model calibration. Mass balance problems arise from failure to account for mass in the sediment as well as in the water column and from the fundamental imprecision of BOD as a state variable. (C) 1998 IAWQ. Published by Elsevier Science Ltd. All rights reserved.
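The mass-balance closure problem mentioned in the abstract can be shown with a toy two-compartment model: BOD mass leaves the water column by decay and by settling, and the balance only closes if the sediment store is tracked. Rates and units below are illustrative, not from any specific model.

```python
def simulate_bod(days, k_decay=0.2, k_settle=0.1, m_water0=100.0):
    """Toy BOD mass balance. Water-column mass decays at k_decay/day
    and settles to the sediment at k_settle/day (explicit Euler,
    1-day steps). Returns (water, sediment, decayed) masses."""
    m_w, m_sed, m_decayed = m_water0, 0.0, 0.0
    dt = 1.0
    for _ in range(days):
        decay = k_decay * m_w * dt
        settle = k_settle * m_w * dt
        m_w -= decay + settle
        m_sed += settle     # mass a water-only model would "lose"
        m_decayed += decay
    return m_w, m_sed, m_decayed
```

A model that reports only the water-column term would appear to violate conservation by exactly the sediment inventory, which is the closure failure the review describes.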

  2. DMFCA Model as a Possible Way to Detect Creative Accounting and Accounting Fraud in an Enterprise

    Directory of Open Access Journals (Sweden)

    Jindřiška Kouřilová

    2013-05-01

The quality of reported accounting data, as well as the quality and behaviour of their users, influences the efficiency of an enterprise’s management, and its assessment may therefore change as well. Several methods and tools have been used to identify creative accounting and fraud. In this paper we present our proposal of the DMFCA (Detection Model based on Material Flow Cost Accounting) balance model, built on environmental accounting and on MFCA (Material Flow Cost Accounting) as its method. The following balance areas are included: material, financial and legislative. Using an analysis of the model’s strengths and weaknesses, its possible use within a production and business company was assessed, as was its applicability to detecting selected creative accounting techniques. The model is developed in detail for practical use, and its theoretical aspects are also described.

  3. Display of the information model accounting system

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2011-12-01

This paper presents the accounting information system in public companies, business technology matrix and data flow diagram. The paper describes the purpose and goals of the accounting process, matrix sub-process and data class. Data flow in the accounting process and the so-called general ledger module are described in detail. Activities of the financial statements and determining the financial statements of the companies are mentioned as well. It is stated how the general ledger module should function and what characteristics it must have. Line graphs will depict indicators of the company’s business success, indebtedness and company’s efficiency coefficients based on financial balance reports, and profit and loss report.
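The general-ledger module described above rests on two invariants: every journal entry must balance (debits equal credits), and the trial balance drawn from posted accounts must balance in turn. A minimal sketch, with class and account names of our own choosing:

```python
from collections import defaultdict

class GeneralLedger:
    """Minimal general-ledger sketch: balanced journal entries are
    posted to debit-positive account balances, from which a trial
    balance can be drawn. Illustrative only."""

    def __init__(self):
        self.balances = defaultdict(float)

    def post(self, entries):
        """entries: list of (account, debit, credit) tuples."""
        if abs(sum(d for _, d, _ in entries) -
               sum(c for _, _, c in entries)) > 1e-9:
            raise ValueError("journal entry does not balance")
        for account, debit, credit in entries:
            self.balances[account] += debit - credit

    def trial_balance(self):
        debits = sum(b for b in self.balances.values() if b > 0)
        credits = -sum(b for b in self.balances.values() if b < 0)
        return debits, credits
```

Usage: posting an owner's contribution and a purchase leaves the trial balance in equilibrium, and an unbalanced entry is rejected before it can corrupt the ledger.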

  4. 25 CFR 547.9 - What are the minimum technical standards for Class II gaming system accounting functions?

    Science.gov (United States)

    2010-04-01

... gaming system accounting functions? 547.9 Section 547.9 Indians NATIONAL INDIAN GAMING COMMISSION... accounting functions? This section provides standards for accounting functions used in Class II gaming systems. (a) Required accounting data. The following minimum accounting data, however named, shall be...

  5. Anisotropic models to account for large borehole washouts to estimate gas hydrate saturations in the Gulf of Mexico Gas Hydrate Joint Industry Project Leg II Alaminos 21 B well

    Science.gov (United States)

    Lee, M.W.; Collett, T.S.; Lewis, K.A.

    2012-01-01

Through the use of 3-D seismic amplitude mapping, several gas hydrate prospects were identified in the Alaminos Canyon (AC) area of the Gulf of Mexico. Two locations were drilled as part of the Gulf of Mexico Gas Hydrate Joint Industry Project Leg II (JIP Leg II) in May of 2009, and a comprehensive set of logging-while-drilling (LWD) logs was acquired at each well site. LWD logs indicated that resistivity in the range of ~2 ohm-m and P-wave velocity in the range of ~1.9 km/s were measured in the target sand interval between 515 and 645 feet below sea floor. These values were slightly elevated relative to those measured in the sediment above and below the target sand. However, the initial well log analysis was inconclusive regarding the presence of gas hydrate in the logged sand interval, mainly because large washouts caused by drilling in the target interval degraded confidence in the well log measurements. To assess gas hydrate saturations in the sedimentary section drilled in the Alaminos Canyon 21B (AC21-B) well, a method of compensating for the effect of washouts on the resistivity and acoustic velocities was developed. The proposed method models the washed-out portion of the borehole as a vertical layer filled with sea water (drilling fluid), and the apparent anisotropic resistivity and velocities caused by a vertical layer are used to correct the measured log values. By incorporating the conventional marine seismic data into the well log analysis, the average gas hydrate saturation in the target sand section of the AC21-B well can be constrained to the range of 8–28%, with 20% being our best estimate.
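The flavor of such a washout correction can be sketched with a parallel-conductance mixing rule: treat the reading as a conductive mud-filled fraction of the borehole in parallel with the formation, then invert for the formation resistivity. This mixing law is our own simplification for illustration, not the authors' exact anisotropic vertical-layer formulation.

```python
def correct_resistivity(r_measured, r_mud, washout_frac):
    """Invert a parallel-conductance mix: the measured conductivity is
    washout_frac / r_mud + (1 - washout_frac) / r_formation.
    Returns the implied formation resistivity (ohm-m)."""
    cond_formation = 1.0 / r_measured - washout_frac / r_mud
    if cond_formation <= 0:
        raise ValueError("washout conductance dominates the reading")
    return (1.0 - washout_frac) / cond_formation
```

Because the conductive mud pulls the apparent resistivity down, a raw ~2 ohm-m reading in a badly washed-out interval can be consistent with a noticeably more resistive (possibly hydrate-bearing) formation, which is why the uncorrected logs were inconclusive.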

  6. The Financial Accounting Model from a System Dynamics' Perspective

    OpenAIRE

    Melse, Eric

    2006-01-01

    This paper explores the foundation of the financial accounting model. We examine the properties of the accounting equation as the principal algorithm for the design and the development of a System Dynamics model. Key to the perspective is the foundational requirement that resolves the temporal conflict that resides in a stock and flow model. Through formal analysis the accounting equation is redefined as a cybernetic model by expressing the temporal and dynamic properties of its terms. Articu...
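The temporal point the abstract makes can be sketched as a discrete stock-and-flow run of the accounting equation A = L + E: each flow must itself satisfy dA = dL + dE, so the identity holds at every step. This tuple encoding is our own illustration, not the paper's System Dynamics model.

```python
def run_flows(flows):
    """Apply (d_assets, d_liabilities, d_equity) flows to zero-valued
    stocks, checking that each flow preserves the accounting identity
    A = L + E. Returns the final stocks."""
    assets = liabilities = equity = 0.0
    for da, dl, de in flows:
        if abs(da - (dl + de)) > 1e-9:
            raise ValueError("flow violates the accounting identity")
        assets += da
        liabilities += dl
        equity += de
    return assets, liabilities, equity
```

Usage: an owner contribution, a loan, and a cash expense each balance on their own, so the stocks satisfy the identity at every point in time, not only at the reporting date.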

  7. Accountability

    Science.gov (United States)

    Fielding, Michael; Inglis, Fred

    2017-01-01

    This contribution republishes extracts from two important articles published around 2000 concerning the punitive accountability system suffered by English primary and secondary schools. The first concerns the inspection agency Ofsted, and the second managerialism. Though they do not directly address assessment, they are highly relevant to this…

  8. Testing of a one dimensional model for Field II calibration

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2008-01-01

Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated, a model of the transducer’s electro-mechanical impulse response must be included. We examine an adapted one-dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed… to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show…

  9. The financial accounting model from a system dynamics' perspective

    NARCIS (Netherlands)

    Melse, E.

    2006-01-01

    This paper explores the foundation of the financial accounting model. We examine the properties of the accounting equation as the principal algorithm for the design and the development of a System Dynamics model. Key to the perspective is the foundational requirement that resolves the temporal

  10. Modelling adversary actions against a nuclear material accounting system

    International Nuclear Information System (INIS)

    Lim, J.J.; Huebel, J.G.

    1979-01-01

    A typical nuclear material accounting system employing double-entry bookkeeping is described. A logic diagram is used to model the interactions of the accounting system and the adversary when he attempts to thwart it. Boolean equations are derived from the logic diagram; solution of these equations yields the accounts and records through which the adversary may disguise a SSNM theft and the collusion requirements needed to accomplish this feat. Some technical highlights of the logic diagram are also discussed
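The information the Boolean solution yields, i.e. which combinations of accounts, records, and colluders suffice to disguise a theft, can be recovered by brute force on a small example: enumerate event subsets and keep the minimal ones that satisfy the success logic. The event names and the example logic below are hypothetical, not from the paper.

```python
from itertools import combinations

def minimal_cut_sets(events, success):
    """Return the minimal subsets of `events` for which success(set)
    is True - the same information obtained by solving the Boolean
    equations derived from the logic diagram."""
    cuts = []
    for r in range(1, len(events) + 1):
        for combo in combinations(events, r):
            s = set(combo)
            # keep only sets with no already-found proper subset
            if success(s) and not any(c < s for c in cuts):
                cuts.append(s)
    return cuts

def theft_disguised(s):
    """Hypothetical logic: the theft is hidden by falsifying both the
    ledger and the inventory record, or by two insiders colluding."""
    return {"ledger", "inventory"} <= s or {"insiderA", "insiderB"} <= s
```

Each minimal cut set corresponds to one way the adversary can defeat the double-entry cross-checks, and its size is the collusion requirement the paper derives.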

  11. The Accounting Class as Accounting Firm: A Model Program for Developing Technical and Managerial Skills

    Science.gov (United States)

    Docherty, Gary

    1976-01-01

    One way to bring the accounting office into the classroom is to conduct the class as a "company." Such a class is aimed at developing students' technical and managerial skills, as well as their career awareness and career goals. Performance goals, a course description, and overall objectives of the course are given and might serve as a model.…

  12. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Niculae Feleaga

    2006-04-01

The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be confused with cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework serves as a reference matrix, a standard of standards, a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: the historical cost, the current cost, the realisable (settlement) value, and the present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The various accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  13. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Liliana Feleaga

    2006-06-01

The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be confused with cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework serves as a reference matrix, a standard of standards, a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: the historical cost, the current cost, the realisable (settlement) value, and the present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The various accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information, and therefore accountants (the preparers of financial statements) must try to balance these two main qualitative characteristics of financial information.

  14. Modelling solar cells with thermal phenomena taken into account

    International Nuclear Information System (INIS)

    Górecki, K; Górecki, P; Paduch, K

    2014-01-01

The paper is devoted to modelling the properties of solar cells. The authors' electrothermal model of such cells is described. This model takes into account the influence of temperature on the cells' characteristics. Some results of calculations and measurements of selected solar cells are presented and discussed. Good agreement between the results of calculations and measurements was obtained, which confirms the correctness of the elaborated model.
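A common way temperature enters such a model is through the single-diode equation, where the thermal voltage kT/q and the photocurrent both depend on cell temperature. The sketch below (series and shunt resistance omitted, all parameter values illustrative and not from the paper) shows the structure:

```python
import math

def cell_current(v, temp_c, i_ph25=3.0, i_s=1e-9, n=1.3, temp_coeff=0.0006):
    """Single-diode solar-cell sketch with simple temperature
    dependence: photocurrent drifts linearly with temperature and
    the diode term scales with the thermal voltage kT/q."""
    k_over_q = 8.617e-5           # Boltzmann constant / electron charge, V/K
    vt = k_over_q * (temp_c + 273.15)
    i_ph = i_ph25 * (1 + temp_coeff * (temp_c - 25.0))
    return i_ph - i_s * math.expm1(v / (n * vt))
```

At short circuit (v = 0) the output equals the photocurrent, and the exponential diode term pulls the current down as voltage rises, reproducing the familiar I-V shape whose temperature sensitivity the electrothermal model captures.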

  15. PROFESSIONAL COMPETITIVE EVOLUTION AND QUANTIFICATION MODELS IN ACCOUNTING SERVICE ELABORATION

    Directory of Open Access Journals (Sweden)

    Gheorghe FATACEAN

    2013-12-01

    Full Text Available The objective of this article consists in using an assessment framework for the elaboration of accounting services. The purpose of this model is the identification and evaluation of an elite group of expert accountants from Romania, who should provide solutions to the most complex matters in the criminal, tax, civil, or commercial fields that are the object of lawsuits.

  16. Individual Learning Accounts and Other Models of Financing Lifelong Learning

    Science.gov (United States)

    Schuetze, Hans G.

    2007-01-01

    To answer the question "Financing what?" this article distinguishes several models of lifelong learning as well as a variety of lifelong learning activities. Several financing methods are briefly reviewed, however the principal focus is on Individual Learning Accounts (ILAs) which were seen by some analysts as a promising model for…

  17. Can An Amended Standard Model Account For Cold Dark Matter?

    International Nuclear Information System (INIS)

    Goldhaber, Maurice

    2004-01-01

    It is generally believed that one has to invoke theories beyond the Standard Model to account for cold dark matter particles. However, there may be undiscovered universal interactions that, if added to the Standard Model, would lead to new members of the three generations of elementary fermions that might be candidates for cold dark matter particles.

  18. The CARE model of social accountability: promoting cultural change.

    Science.gov (United States)

    Meili, Ryan; Ganem-Cuenca, Alejandra; Leung, Jannie Wing-sea; Zaleschuk, Donna

    2011-09-01

    On the 10th anniversary of Health Canada and the Association of Faculties of Medicine of Canada's publication in 2001 of Social Accountability: A Vision for Canadian Medical Schools, the authors review the progress at one Canadian medical school, the College of Medicine at the University of Saskatchewan, in developing a culture of social accountability. They review the changes that have made the medical school more socially accountable and the steps taken to make those changes possible. In response to calls for socially accountable medical schools, the College of Medicine created a Social Accountability Committee to oversee the integration of these principles into the college. The committee developed the CARE model (Clinical activity, Advocacy, Research, Education and training) as a guiding tool for social accountability initiatives toward priority health concerns and as a means of evaluation. Diverse faculty and student committees have emerged as a result and have had far-reaching impacts on the college and communities: from changes in curricula and admissions to community programming and international educational experiences. Although a systematic assessment of the CARE model is needed, early evidence shows that the most significant effects can be found in the cultural shift in the college, most notably among students. The CARE model may serve as an important example for other educational institutions in the development of health practitioners and research that is responsive to the needs of their communities.

  19. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

    Full Text Available This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants and using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of the oil trade balance results in worsening of other components (probably the non-oil trade balance) of the CA and (iii) that the positive influence of terms of trade reveals functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in the case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals citizens' high expectations of future income growth, which has a negative impact on the CA.
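Jackknife Model Averaging selects weights over candidate models by minimising the leave-one-out cross-validated error of the weighted prediction. The study averages 48 models, but the mechanics can be sketched with two candidate OLS models on synthetic data (the regressors and coefficients below are illustrative stand-ins, not the Serbian series):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quarterly data standing in for a current-account series
# (illustrative only; not the data used in the study).
n = 44
x1 = rng.normal(size=n)                      # e.g. fiscal balance
x2 = rng.normal(size=n)                      # e.g. terms of trade
y = 0.6 * x1 + 0.3 * x2 + rng.normal(scale=0.5, size=n)

def loo_residuals(X, y):
    """Leave-one-out OLS residuals via the hat-matrix shortcut:
    e_i / (1 - h_ii), where e_i are in-sample residuals."""
    H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix
    e = y - H @ y
    return e / (1.0 - np.diag(H))

# Two candidate models: x1 only, and x1 + x2
X_a = np.column_stack([np.ones(n), x1])
X_b = np.column_stack([np.ones(n), x1, x2])
e_a = loo_residuals(X_a, y)
e_b = loo_residuals(X_b, y)

# Jackknife Model Averaging: choose the weight on the unit simplex that
# minimises the cross-validated squared error of the averaged model.
grid = np.linspace(0.0, 1.0, 101)
cv = [float(np.mean((w * e_a + (1.0 - w) * e_b) ** 2)) for w in grid]
w_best = grid[int(np.argmin(cv))]
```

With many candidate models the grid search becomes a quadratic program over the simplex, but the criterion (cross-validated error of the average) is the same.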

  20. A static world model. II

    International Nuclear Information System (INIS)

    Sundman, S.

    1981-01-01

    The static particle model of Part I requires creation of ether proportional to the energy of the particle. It is shown that this ether creation leads to gravitation and a forever expanding universe in agreement with the large-number hypothesis. The age, mass and size of the universe are calculated from atomic constants and G. The model predicts scale-invariance with different scales for gravitational matter, nucleons and electrons. This leads to a fine structure constant decreasing very slowly with time. For each scale there is a different type of dynamic balance governing the expansion of the universe. The model indicates that the universe was initially densely packed with τ leptons. It suggests a program for calculating the gravitational constant and the muon-electron mass ratio from other universal constants. Tentative numerological derivation gives these quantities with a higher accuracy than has been achieved experimentally. (Auth.)

  1. Supo Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Wass, Alexander Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-14

    This report describes the continuation of the Computational Fluid Dynamics (CFD) model of the Supo cooling system described in the report Supo Thermal Model Development, by Cynthia Buechler. The goal of this report is to estimate the natural convection heat transfer coefficient (HTC) of the system using the CFD results and to compare those results to the remaining past operational data. Also, the correlation for determining radiolytic gas bubble size is reevaluated using the larger simulation sample size. The background, solution vessel geometry, mesh, material properties, and boundary conditions are developed in the same manner as in the previous report. However, the material properties and boundary conditions are determined using the appropriate experiment results for each individual power level.

  2. Stochastic models in risk theory and management accounting

    NARCIS (Netherlands)

    Brekelmans, R.C.M.

    2000-01-01

    This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds of the probability of ruin for the classical risk process extended with a constant interest
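The classical risk process that the thesis extends can be made concrete with a Monte Carlo estimate of the ruin probability. This is a minimal sketch with exponential claims and illustrative parameters; the interest-rate extension and the analytical bounds the thesis develops are beyond it:

```python
import numpy as np

rng = np.random.default_rng(3)

def ruin_probability(u0, c=1.2, lam=1.0, mu=1.0, horizon=100.0, n_paths=3000):
    """Monte Carlo estimate of P(ruin before `horizon`) for the classical
    (Cramer-Lundberg) risk process: initial capital u0, premium rate c,
    Poisson claim arrivals with rate lam, exponential claim sizes, mean mu.
    Ruin can only occur at claim instants, so we step claim to claim."""
    ruined = 0
    for _ in range(n_paths):
        u, t = float(u0), 0.0
        while True:
            w = rng.exponential(1.0 / lam)      # inter-claim waiting time
            t += w
            if t > horizon:
                break                            # survived the horizon
            u += c * w - rng.exponential(mu)    # premiums earned minus claim
            if u < 0.0:
                ruined += 1
                break
    return ruined / n_paths

# With safety loading theta = c/(lam*mu) - 1 = 0.2, the known closed form
# for exponential claims gives psi(0) = 1/(1+theta), about 0.833.
psi0 = ruin_probability(0.0)
```

Comparing the estimate against the exponential-claims closed form is a standard sanity check before moving to the extended processes, where such closed forms are unavailable and bounds are needed.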

  3. Application of a predictive Bayesian model to environmental accounting.

    Science.gov (United States)

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than for parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB-containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporating environmental risk management into a company's overall risk management strategy is discussed.
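The flavor of such a predictive assessment, propagating uncertainty in both accident frequency and severity into a distribution over total contingent cost, can be sketched with a generic Monte Carlo simulation. Every parameter below is a made-up illustration, not the authors' spreadsheet model or data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generic predictive simulation of contingent accident costs; all
# parameters are hypothetical illustrations, not from the cited study.
n_sim, years = 20_000, 10

# Epistemic uncertainty about the accident rate: a gamma distribution
# over the Poisson frequency makes the predictive count over-dispersed.
rate = rng.gamma(shape=2.0, scale=0.05, size=n_sim)   # accidents per year
counts = rng.poisson(rate * years)

# Aleatory severity per accident: lognormal cost (in $M)
total_cost = np.array(
    [rng.lognormal(1.0, 1.0, size=k).sum() for k in counts]
)

phaseout_cost = 4.0                      # hypothetical up-front phase-out cost ($M)
expected_contingent = total_cost.mean()  # compare against phaseout_cost
p_exceed = (total_cost > phaseout_cost).mean()
```

The resulting distribution over `total_cost` (not just its mean) is what supports comparisons under different risk measures, e.g. expected value versus exceedance probability.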

  4. Stochastic scalar mixing models accounting for turbulent frequency multiscale fluctuations

    International Nuclear Information System (INIS)

    Soulard, Olivier; Sabel'nikov, Vladimir; Gorokhovski, Michael

    2004-01-01

    Two new scalar micromixing models accounting for a turbulent frequency scale distribution are investigated. These models were derived by Sabel'nikov and Gorokhovski [Second International Symposium on Turbulence and Shear Flow Phenomena, Royal Institute of Technology (KTH), Stockholm, Sweden, June 27-29, 2001] using a multiscale extension of the classical interaction by exchange with the mean (IEM) and Langevin models. They are, respectively, called the Extended IEM (EIEM) and Extended Langevin (ELM) models. The EIEM and ELM models are tested against DNS results in the case of the decay of a homogeneous scalar field in homogeneous turbulence. This comparison leads to a reformulation of the law governing the mixing frequency distribution. Finally, the asymptotic behaviour of the modeled PDF is discussed.
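The classical IEM model that the EIEM and ELM extend relaxes each notional particle's scalar toward the ensemble mean at half the mixing frequency, so the scalar variance decays exponentially. A minimal single-frequency sketch (illustrative values; the extended models instead draw the frequency from a multiscale distribution):

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical IEM micromixing: d(phi_i)/dt = -(omega/2) * (phi_i - <phi>),
# which makes the scalar variance decay as exp(-omega * t).
n_particles = 10_000
omega = 2.0                 # single mixing frequency (1/s)
dt, t_end = 1e-3, 1.0

phi = rng.choice([0.0, 1.0], size=n_particles)   # initially unmixed scalar
var0 = phi.var()

for _ in range(int(round(t_end / dt))):          # explicit Euler steps
    phi += -0.5 * omega * (phi - phi.mean()) * dt

decay = phi.var() / var0    # should be close to exp(-omega * t_end)
```

The known limitation this exposes is that IEM relaxes every particle identically, so the scalar PDF never tends toward a Gaussian; accounting for a distribution of mixing frequencies, as in the EIEM/ELM, changes that asymptotic behaviour.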

  5. The shift of accounting models and accounting quality: the case of Norwegian GAAP

    OpenAIRE

    Stenheim, Tonny; Madsen, Dag Øivind

    2017-01-01

    This is an Open Access journal available from http://www.virtusinterpress.org This paper investigates the change in accounting quality when firms shift from a revenue-oriented historical cost accounting regime such as Norwegian GAAP (NGAAP) to a balance-oriented fair value accounting regime such as International Financial Reporting Standards (IFRS). Previous studies have demonstrated mixed effects on the accounting quality upon IFRS adoption. One possible reason is that the investigated domest...

  6. Accounting for Business Models: Increasing the Visibility of Stakeholders

    Directory of Open Access Journals (Sweden)

    Colin Haslam

    2015-01-01

    Full Text Available Purpose: This paper conceptualises a firm’s business model employing stakeholder theory as a central organising element to help inform the purpose and objective(s) of business model financial reporting and disclosure. Framework: Firms interact with a complex network of primary and secondary stakeholders to secure the value proposition of a firm’s business model. This value proposition is itself a complex amalgam of value creating, value capturing and value manipulating arrangements with stakeholders. From a financial accounting perspective, the purpose of the value proposition for a firm’s business model is to sustain liquidity and solvency as a going concern. Findings: This article argues that stakeholder relations impact upon the financial viability of a firm’s business model value proposition. However, current financial reporting by function of expenses and the central organising objectives of the accounting conceptual framework conceal firm-stakeholder relations and their impact on reported financials. Practical implications: The practical implication of our paper is that ‘Business Model’ financial reporting would require a reorientation in the accounting conceptual framework that defines the objectives and purpose of financial reporting. This reorientation would involve reporting about stakeholder relations and their impact on a firm’s financials, not simply reporting financial information to ‘investors’. Social implications: Business model financial reporting has the potential to be stakeholder inclusive because the numbers and narratives reported by firms in their annual financial statements will increase the visibility of stakeholder relations and how these are being managed. Originality/value: This paper’s original perspective is that it argues that a firm’s business model is structured out of stakeholder relations. It presents the firm’s value proposition as the product of value creating, capturing and

  7. Accounting for microbial habitats in modeling soil organic matter dynamics

    Science.gov (United States)

    Chenu, Claire; Garnier, Patricia; Nunan, Naoise; Pot, Valérie; Raynaud, Xavier; Vieublé, Laure; Otten, Wilfred; Falconer, Ruth; Monga, Olivier

    2017-04-01

    The extreme heterogeneity of soil constituents, architecture and inhabitants at the microscopic scale is increasingly recognized. Microbial communities exist and are active in a complex 3-D physical framework of mineral and organic particles defining pores of various sizes, more or less inter-connected. This results in a frequent spatial disconnection between soil carbon, energy sources and the decomposer organisms, and in a variety of microhabitats that are more or less suitable for microbial growth and activity. However, current biogeochemical models account for C dynamics at the macroscale (cm, m) and consider time- and spatially averaged relationships between microbial activity and soil characteristics. Different modelling approaches have attempted to account for this microscale heterogeneity, based on considering either aggregates or pores as surrogates for microbial habitats. Innovative modelling approaches are based on an explicit representation of soil structure at the fine scale, i.e. at µm to mm scales: pore architecture and its saturation with water, and the localization of organic resources and of microorganisms. Three recent models are presented here, which describe the heterotrophic activity of either bacteria or fungi and are based upon different strategies to represent the complex soil pore system (Mosaic, LBios and µFun). These models allow factors of microbial activity in the soil's heterogeneous architecture to be hierarchized. The current limits of these approaches and the challenges ahead are presented, regarding the extensive information required on soils at the microscale and the up-scaling of microbial functioning from the pore to the core scale.

  8. Accounting for small scale heterogeneity in ecohydrologic watershed models

    Science.gov (United States)

    Burke, W.; Tague, C.

    2017-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account both for the role of flow network topology and for fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach.

  9. Accounting for household heterogeneity in general equilibrium economic growth models

    International Nuclear Information System (INIS)

    Melnikov, N.B.; O'Neill, B.C.; Dalton, M.G.

    2012-01-01

    We describe and evaluate a new method of aggregating heterogeneous households that allows for the representation of changing demographic composition in a multi-sector economic growth model. The method is based on a utility and labor supply calibration that takes into account time variations in demographic characteristics of the population. We test the method using the Population-Environment-Technology (PET) model by comparing energy and emissions projections employing the aggregate representation of households to projections representing different household types explicitly. Results show that the difference between the two approaches in terms of total demand for energy and consumption goods is negligible for a wide range of model parameters. Our approach allows the effects of population aging, urbanization, and other forms of compositional change on energy demand and CO2 emissions to be estimated and compared in a computationally manageable manner using a representative household under assumptions and functional forms that are standard in economic growth models.

  10. Principles of Public School Accounting. State Educational Records and Reports Series: Handbook II-B.

    Science.gov (United States)

    Adams, Bert K.; And Others

    This handbook discusses the following primary aspects of school accounting: Definitions and principles; opening the general ledger; recording the approved budget; a sample month of transactions; the balance sheet, monthly, and annual reports; subsidiary journals; payroll procedures; cafeteria fund accounting; debt service accounting; construction…

  11. Accounting for Trust: A Conceptual Model for the Determinants of Trust in the Australian Public Accountant – SME Client Relationship

    Directory of Open Access Journals (Sweden)

    Michael Cherry

    2016-06-01

    Full Text Available This paper investigates trust as it relates to the relationship between Australia’s public accountants and their SME clients. It describes the contribution of the accountancy profession to the SME market, as well as the key challenges faced by accountants and their SME clients. Following the review of prior scholarly studies, a working definition of trust as it relates to this important relationship is also developed and presented. A further consequence of prior academic work is the development of a comprehensive conceptual model to describe the determinants of trust in the Australian public accountant – SME client relationship, which requires testing via empirical studies.

  12. Accounting for Water Insecurity in Modeling Domestic Water Demand

    Science.gov (United States)

    Galaitsis, S. E.; Huber-lee, A. T.; Vogel, R. M.; Naumova, E.

    2013-12-01

    Water demand management uses price elasticity estimates to predict consumer demand in relation to water pricing changes, but studies have shown that many additional factors affect water consumption. Development scholars document the need for water security; however, much of the water security literature focuses on broad policies which can influence water demand. Previous domestic water demand studies have not considered how water security can affect a population's consumption behavior. This study is the first to model the influence of water insecurity on water demand. A subjective indicator scale measuring water insecurity among consumers in the Palestinian West Bank is developed and included as a variable to explore how perceptions of control, or lack thereof, impact consumption behavior and the resulting estimates of price elasticity. A multivariate regression model demonstrates the significance of a water insecurity variable for data sets encompassing disparate water access. When accounting for insecurity, the R-squared value improves and the marginal price a household is willing to pay becomes a significant predictor of household quantity consumption. The model denotes that, with all other variables held equal, a household will buy more water when the users are more water insecure. Though the reasons behind this trend require further study, the findings suggest broad policy implications by demonstrating that water distribution practices in scarcity conditions can promote consumer welfare and efficient water use.
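The specification described, a demand regression augmented with a subjective insecurity scale, can be sketched on synthetic data. All variable names, coefficients and the data below are hypothetical, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical household data sketching the study's specification:
# quantity consumed regressed on marginal price plus a subjective
# water-insecurity scale (all values here are illustrative).
n = 500
price = rng.uniform(0.5, 3.0, n)
insecurity = rng.uniform(0.0, 1.0, n)     # 0 = secure, 1 = insecure
quantity = 10.0 - 1.5 * price + 3.0 * insecurity + rng.normal(0.0, 1.0, n)

def ols_r2(X, y):
    """OLS fit returning coefficients and R-squared."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, 1.0 - resid.var() / y.var()

X_base = np.column_stack([np.ones(n), price])             # price only
X_full = np.column_stack([np.ones(n), price, insecurity])  # + insecurity
_, r2_base = ols_r2(X_base, quantity)
beta_full, r2_full = ols_r2(X_full, quantity)
# Adding the insecurity variable raises R^2, and its positive coefficient
# mirrors the finding that more insecure households buy more water.
```

The comparison of `r2_base` and `r2_full` is the synthetic analogue of the improvement in fit the abstract reports when insecurity is accounted for.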

  13. Accrual based accounting implementation: An approach for modelling major decisions

    OpenAIRE

    Ratno Agriyanto; Abdul Rohman; Dwi Ratmono; Imam Ghozali

    2016-01-01

    Over the last three decades, the implementation of accrual based accounting in government institutions has been one of the main issues in Indonesia. Implementation of accrual based accounting in government institutions proceeds amid debate about the usefulness of accounting information for decision-making. Empirical studies show that accrual based accounting information in government institutions is not used for decision making. The research objective was to determine the impact of the implementation of the accrual...

  14. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)

    2001-12-15

    EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region, in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full featured Hauser-Feshbach model. The Heavy Ion fusion cross section can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and γ-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data. Relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes linked to the rest of the system through bash-shell (UNIX) scripts. A graphic user interface written in Tcl/Tk is provided. (author)

  15. Business risks of the accountant and the Audit Risk Model

    NARCIS (Netherlands)

    Wallage, Ph.; Klijnsmit, P.; Sodekamp, M.

    2003-01-01

    In recent years, the business risk of the auditing accountant has increased sharply. The accountant's business risks are increasingly becoming an obstacle to accepting engagements. This article considers the way in which the business risks

  16. Accrual based accounting implementation: An approach for modelling major decisions

    Directory of Open Access Journals (Sweden)

    Ratno Agriyanto

    2016-12-01

    Full Text Available Over the last three decades, the implementation of accrual based accounting in government institutions has been one of the main issues in Indonesia. Implementation of accrual based accounting in government institutions proceeds amid debate about the usefulness of accounting information for decision-making. Empirical studies show that accrual based accounting information in government institutions is not used for decision making. The research objective was to determine the impact of the implementation of accrual based accounting on the use of accrual based accounting information for decision-making. We used survey questionnaires. The data were processed by SEM using the statistical software WarpPLS. The results showed that the implementation of accrual based accounting in the City Government of Semarang is significantly and positively associated with decision-making. Another important finding is that City Government of Semarang officials exhibit a personality trait, low tolerance of ambiguity, which negatively affects the relationship between the implementation of accrual based accounting and decision making.

  17. Solar photocatalytic removal of Cu(II), Ni(II), Zn(II) and Pb(II): Speciation modeling of metal-citric acid complexes

    International Nuclear Information System (INIS)

    Kabra, Kavita; Chaudhary, Rubina; Sawhney, R.L.

    2008-01-01

    The present study is targeted at solar photocatalytic removal of metal ions from wastewater. Photoreductive deposition and dark adsorption of the metal ions Cu(II), Ni(II), Pb(II) and Zn(II), using solar energy irradiated TiO2, have been investigated. Citric acid has been used as a hole scavenger. Modeling of metal species has been performed and speciation is used as a tool for discussing the photodeposition trends. Ninety-seven percent reductive deposition was obtained for copper. The deposition values of the other metals were significantly low [nickel (36.4%), zinc (22.2%) and lead (41.4%)], indicating that the photocatalytic treatment process, using solar energy, was more suitable for wastewater containing Cu(II) ions. In the absence of citric acid, deposition followed the decreasing order Cu(II) > Ni(II) > Pb(II) > Zn(II), which confirms the theoretical thermodynamic predictions for these metals.

  18. A generic accounting model to support operations management decisions

    NARCIS (Netherlands)

    Verdaasdonk, P.J.A.; Wouters, M.J.F.

    2001-01-01

    Information systems are generally unable to generate information about the financial consequences of operations management decisions. This is because the procedures for determining the relevant accounting information for decision support are not formalised in ways that can be implemented in

  19. Material control in nuclear fuel fabrication facilities. Part II. Accountability, instrumentation and measurement techniques in fuel fabrication facilities

    International Nuclear Information System (INIS)

    Borgonovi, G.M.; McCartin, T.J.; McDaniel, T.; Miller, C.L.; Nguyen, T.

    1978-01-01

    This report describes the measurement techniques, the instrumentation, and the procedures used in accountability and control of nuclear materials, as they apply to fuel fabrication facilities. A general discussion is given of instrumentation and measurement techniques which are presently used or being considered for fuel fabrication facilities. Those aspects which are most significant from the point of view of satisfying regulatory constraints have been emphasized. Sensors and measurement devices have been discussed, together with their interfacing into a computerized system designed to permit real-time data collection and analysis. Estimates of accuracy and precision of measurement techniques have been given, and, where applicable, estimates of associated costs have been presented. A general description of material control and accounting is also included. In this section, the general principles of nuclear material accounting have been reviewed first (closure of material balance). After a discussion of the most current techniques used to calculate the limit of error on inventory difference, a number of advanced statistical techniques are reviewed. The rest of the section deals with some regulatory aspects of data collection and analysis, for accountability purposes, and with the overall effectiveness of accountability in detecting diversion attempts in fuel fabrication facilities. A specific example of application of the accountability methods to a model fuel fabrication facility is given. The effect of random and systematic errors on the total material uncertainty has been discussed, together with the effect on uncertainty of the length of the accounting period.

  1. Accounting for Epistemic and Aleatory Uncertainty in Early System Design, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — This project extends Probability Bounds Analysis to model epistemic and aleatory uncertainty during early design of engineered systems in an Integrated Concurrent...

  2. MAIN COORDINATES OF ACCOUNTING PROFESSION CO-OPETITIONAL MODEL

    Directory of Open Access Journals (Sweden)

    MARIOARA AVRAM

    2012-01-01

    Full Text Available The accounting profession fulfills a vital role in the development of the modern economy, contributing to a thorough knowledge of the business environment, the improvement of economic performance and the solving of some of the many problems the post-modern society is facing. Currently, the accounting profession is characterized by the expansion of information technology, the internationalization of businesses and professional specialization, which has made possible the creation of several professional bodies. Against this background, it becomes urgent to discover new perspectives on strategies able to maintain and increase business success, based on the simultaneous combination of the elements of cooperation and competition, which involves a new type of relation, called by the North-American literature "co-opetition".

  3. SEBAL-A: A Remote Sensing ET Algorithm that Accounts for Advection with Limited Data. Part II: Test for Transferability

    Directory of Open Access Journals (Sweden)

    Mcebisi Mkhwanazi

    2015-11-01

    Full Text Available Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET when there is advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected energy, which required the development of a wind function. In Part I, the modified SEBAL model (SEBAL-A) was developed and validated on well-watered alfalfa of a standard height of 40–60 cm. In this Part II, SEBAL-A was tested on different crops and irrigation treatments in order to determine its performance under varying conditions. The crops used for the transferability test were beans (Phaseolus vulgaris L.), wheat (Triticum aestivum L.) and corn (Zea mays L.). The estimated ET using SEBAL-A was compared to actual ET measured using a Bowen Ratio Energy Balance (BREB) system. Results indicated that SEBAL-A estimated ET fairly well for beans and wheat, only showing some slight underestimation of a Mean Bias Error (MBE) of −0.7 mm·d−1 (−11.3%), a Root Mean Square Error (RMSE) of 0.82 mm·d−1 (13.9%) and a Nash Sutcliffe Coefficient of Efficiency (NSCE) of 0.64. On corn, SEBAL-A resulted in an ET estimation error MBE of −0.7 mm·d−1 (−9.9%), a RMSE of 1.59 mm·d−1 (23.1%) and NSCE = 0.24. This result shows an improvement on the original SEBAL model, which for the same data resulted in an ET MBE of −1.4 mm·d−1 (−20.4%), a RMSE of 1.97 mm·d−1 (28.8%) and a NSCE of −0.18. When SEBAL-A was tested on only fully irrigated corn, it performed well, resulting in no bias, i.e., MBE of 0.0 mm·d−1; RMSE of 0.78 mm·d−1 (10.7%) and NSCE of 0.82. The SEBAL-A model showed less or no improvement on corn that was either water-stressed or at early stages of growth. The errors incurred under these conditions were not due to advection not accounted for but rather were due to the nature of SEBAL and SEBAL-A being single-source energy balance models and

  4. Models and Rules of Evaluation in International Accounting

    OpenAIRE

    Liliana Feleaga; Niculae Feleaga

    2006-01-01

    The accounting procedures cannot be analyzed without a previous evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made to a specific asset, group of assets or entities, or to some rendered services. Within the economic sciences, value comes from its very own deep history. In accounting, the concept of value had a late and fragile start. The term of value must not be misinterpreted as being the same thing with cost, even though value is freque...

  5. Accounting for heterogeneity of public lands in hedonic property models

    Science.gov (United States)

    Charlotte Ham; Patricia A. Champ; John B. Loomis; Robin M. Reich

    2012-01-01

    Open space lands, national forests in particular, are usually treated as homogeneous entities in hedonic price studies. Failure to account for the heterogeneous nature of public open spaces may result in inappropriate inferences about the benefits of proximate location to such lands. In this study the hedonic price method is used to estimate the marginal values for...

  6. Uranium accountability for ATR fuel fabrication: Part II. A computer simulation

    International Nuclear Information System (INIS)

    Dolan, C.A.; Nieschmidt, E.B.; Vegors, S.H. Jr.; Wagner, E.P. Jr.

    1977-08-01

    A stochastic computer model has been designed to simulate the material control system used during the production of fuel plates for the Advanced Test Reactor. Great care has been taken to see that this model follows the manufacturing and measuring processes used. The model is designed so that manufacturing process and measurement parameters are fed in as input; hence, changes in the manufacturing process and measurement procedures are easily simulated. Individual operations in the plant are described by program subroutines. By varying the calling sequence of these subroutines, variations in the manufacturing process may be simulated. By using this model values for MUF and LEMUF may be calculated for predetermined plant operating conditions. Furthermore the effect on MUF and LEMUF produced by changing plant operating procedures and measurement techniques may also be examined. A sample calculation simulating one inventory period of the plant's operation is included
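
    The stochastic approach described above can be sketched in miniature: sample random measurement errors around true flow values and observe the resulting distribution of MUF. All parameter values below are illustrative, not taken from the ATR model:

    ```python
    import random

    def simulate_muf(true_flows, rel_sigma, n_trials=10000, seed=42):
        """Monte Carlo sketch of a material-balance simulation.

        true_flows: (beginning_inv, receipts, removals, ending_inv) true values
        rel_sigma:  relative random measurement error applied to each term
        Returns the sample mean and standard deviation of the simulated MUF.
        """
        rng = random.Random(seed)
        bi, rc, rm, ei = true_flows
        mufs = []
        for _ in range(n_trials):
            # Each term is measured with an independent Gaussian relative error
            meas = [v * (1.0 + rng.gauss(0.0, rel_sigma)) for v in (bi, rc, rm, ei)]
            mufs.append((meas[0] + meas[1]) - (meas[2] + meas[3]))
        mean = sum(mufs) / n_trials
        var = sum((x - mean) ** 2 for x in mufs) / (n_trials - 1)
        return mean, var ** 0.5

    # Perfectly balanced true flows: simulated MUF should scatter around zero
    mean_muf, sd_muf = simulate_muf((100.0, 50.0, 50.0, 100.0), 0.005)
    ```

    Varying `rel_sigma` per measurement step, or reordering the sampled operations, mimics the effect of changing measurement techniques or plant procedures on MUF and LEMUF, which is the use the report describes.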

  7. Modelling Financial-Accounting Decisions by Means of OLAP Tools

    Directory of Open Access Journals (Sweden)

    Diana Elena CODREAN

    2011-03-01

    Full Text Available At present, one can say that a company’s good running largely depends on the quantity and quality of the information it relies on when making decisions. The information needed to underlie decisions can be obtained thanks to a high-performing information system which makes it possible for the data to be presented quickly, synthetically and accurately, also providing the opportunity for complex analyses and predictions. In such circumstances, computerized accounting systems, too, have grown in complexity by means of data-analysis solutions such as OLAP and Data Mining, which help perform a multidimensional analysis of financial-accounting data; potential frauds can be detected, information hidden in the data can be revealed, and trends for certain indicators can be established, thus ensuring useful information for a company’s decision making.

  8. Higgs potential in the type II seesaw model

    International Nuclear Information System (INIS)

    Arhrib, A.; Benbrik, R.; Chabab, M.; Rahili, L.; Ramadan, J.; Moultaka, G.; Peyranere, M. C.

    2011-01-01

    The standard model Higgs sector, extended by one weak gauge triplet of scalar fields with a very small vacuum expectation value, is a very promising setting to account for neutrino masses through the so-called type II seesaw mechanism. In this paper we consider the general renormalizable doublet/triplet Higgs potential of this model. We perform a detailed study of its main dynamical features that depend on five dimensionless couplings and two mass parameters after spontaneous symmetry breaking, and highlight the implications for the Higgs phenomenology. In particular, we determine (i) the complete set of tree-level unitarity constraints on the couplings of the potential and (ii) the exact tree-level boundedness from below constraints on these couplings, valid for all directions. When combined, these constraints delineate precisely the theoretically allowed parameter space domain within our perturbative approximation. Among the seven physical Higgs states of this model, the mass of the lighter (heavier) CP-even state h^0 (H^0) will always satisfy a theoretical upper (lower) bound that is reached for a critical value μ_c of μ (the mass parameter controlling triple couplings among the doublet/triplet Higgses). Saturating the unitarity bounds, we find an upper bound on m_{h^0}, and two distinct regimes depending on whether μ ≲ μ_c or μ ≳ μ_c. In the first regime the Higgs sector is typically very heavy, and only h^0, which becomes SM-like, could be accessible to the LHC. In contrast, in the second regime, somewhat overlooked in the literature, most of the Higgs sector is light. In particular, the heaviest state H^0 becomes SM-like, the lighter states being the CP-odd Higgs, the (doubly) charged Higgses, and a decoupled h^0, possibly leading to a distinctive phenomenology at the colliders.

  9. Accounting for ecosystem services in Life Cycle Assessment, Part II: toward an ecologically based LCA.

    Science.gov (United States)

    Zhang, Yi; Baral, Anil; Bakshi, Bhavik R

    2010-04-01

    Despite the essential role of ecosystem goods and services in sustaining all human activities, they are often ignored in engineering decision making, even in methods that are meant to encourage sustainability. For example, conventional Life Cycle Assessment focuses on the impact of emissions and consumption of some resources. While aggregation and interpretation methods are quite advanced for emissions, similar methods for resources have been lagging, and most ignore the role of nature. Such oversight may even result in perverse decisions that encourage reliance on deteriorating ecosystem services. This article presents a step toward including the direct and indirect role of ecosystems in LCA, and a hierarchical scheme to interpret their contribution. The resulting Ecologically Based LCA (Eco-LCA) includes a large number of provisioning, regulating, and supporting ecosystem services as inputs to a life cycle model at the process or economy scale. These resources are represented in diverse physical units and may be compared via their mass, fuel value, industrial cumulative exergy consumption, or ecological cumulative exergy consumption or by normalization with total consumption of each resource or their availability. Such results at a fine scale provide insight about relative resource use and the risk and vulnerability to the loss of specific resources. Aggregate indicators are also defined to obtain indices such as renewability, efficiency, and return on investment. An Eco-LCA model of the 1997 economy is developed and made available via the web (www.resilience.osu.edu/ecolca). An illustrative example comparing paper and plastic cups provides insight into the features of the proposed approach. The need for further work in bridging the gap between knowledge about ecosystem services and their direct and indirect role in supporting human activities is discussed as an important area for future work.

  10. Resource Allocation Models and Accountability: A Jamaican Case Study

    Science.gov (United States)

    Nkrumah-Young, Kofi K.; Powell, Philip

    2008-01-01

    Higher education institutions (HEIs) may be funded privately, by the state or by a mixture of the two. Nevertheless, any state financing of HE necessitates a mechanism to determine the level of support and the channels through which it is to be directed; that is, a resource allocation model. Public funding, through resource allocation models,…

  11. 76 FR 34712 - Medicare Program; Pioneer Accountable Care Organization Model; Extension of the Submission...

    Science.gov (United States)

    2011-06-14

    ... stakeholders to develop initiatives to test innovative payment and service delivery models to reduce program...] Medicare Program; Pioneer Accountable Care Organization Model; Extension of the Submission Deadlines for... of the Pioneer Accountable Care Organization Model letters of intent to June 30, 2011 and the...

  12. PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II

    International Nuclear Information System (INIS)

    Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E

    2005-01-01

    To successfully tune the DARHT II transport beamline requires the close coupling of a model of the beam transport and the measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system where it can read magnet current settings for real-time modeling. We find this integrated coupling is essential for model verification and the successful development of a tuning aid for the efficient convergence on a usable tune. We show the real-time comparisons of simulation and experiment and explore the successes and limitations of this close-coupled approach.

  13. Accountability: a missing construct in models of adherence behavior and in clinical practice.

    Science.gov (United States)

    Oussedik, Elias; Foy, Capri G; Masicampo, E J; Kammrath, Lara K; Anderson, Robert E; Feldman, Steven R

    2017-01-01

    Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients' motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8-12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on the Self-Determination Theory, accountability can be considered in a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability) to patients' autonomous internal desire to please a respected health care provider (autonomous accountability), the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura's Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well as the testing and refinement of adherence interventions that make use of this critical determinant of human behavior.

  14. Behavioural Procedural Models – a multipurpose mechanistic account

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2012-05-01

    Full Text Available In this paper we outline an epistemological defence of what we call Behavioural Procedural Models (BPMs), which represent the processes of individual decisions that lead to relevant economic patterns as psychologically (rather than rationally) driven. Their general structure, and the way in which they may be incorporated into a multipurpose view of models, where the representational and interventionist goals are combined, is shown. It is argued that BPMs may provide “mechanistic-based explanations” in the sense defended by Hedström and Ylikoski (2010), which involve invariant regularities in Woodward’s sense. Such mechanisms provide a causal sort of explanation of anomalous economic patterns, which allows for extra-market intervention and manipulability in order to correct and improve some key individual decisions. This capability sets the basis for the so-called libertarian paternalism (Sunstein and Thaler 2003).

  15. ACCOUNTING FUNDAMENTALS AND VARIATIONS OF STOCK PRICE: METHODOLOGICAL REFINEMENT WITH RECURSIVE SIMULTANEOUS MODEL

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki

    2015-01-01

    This study investigates the association between accounting fundamentals and variations of stock prices using a recursive simultaneous equation model. The accounting fundamentals consist of earnings yield, book value, profitability, growth opportunities and discount rate. The prior single-relationship model has been investigated by Chen and Zhang (2007), Sumiyana (2011) and Sumiyana et al. (2010). They assume that all accounting fundamentals associate direct-linearly to the stock returns. This study ...

  16. Accounting Fundamentals and Variations of Stock Price: Methodological Refinement with Recursive Simultaneous Model

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki

    2013-01-01

    This study investigates the association between accounting fundamentals and variations of stock prices using a recursive simultaneous equation model. The accounting fundamentals consist of earnings yield, book value, profitability, growth opportunities and discount rate. The prior single-relationship model has been investigated by Chen and Zhang (2007), Sumiyana (2011) and Sumiyana et al. (2010). They assume that all accounting fundamentals associate direct-linearly to the stock returns. This study ...

  17. Asymmetric Gepner models II. Heterotic weight lifting

    Energy Technology Data Exchange (ETDEWEB)

    Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); Schellekens, A.N., E-mail: t58@nikhef.n [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); IMAPP, Radboud Universiteit, Nijmegen (Netherlands)

    2011-05-21

    A systematic study of 'lifted' Gepner models is presented. Lifted Gepner models are obtained from standard Gepner models by replacing one of the N=2 building blocks and the E_8 factor by a modular isomorphic N=0 model on the bosonic side of the heterotic string. The main result is that after this change three family models occur abundantly, in sharp contrast to ordinary Gepner models. In particular, more than 250 new and unrelated moduli spaces of three family models are identified. We discuss the occurrence of fractionally charged particles in these spectra.

  18. Asymmetric Gepner models II. Heterotic weight lifting

    International Nuclear Information System (INIS)

    Gato-Rivera, B.; Schellekens, A.N.

    2011-01-01

    A systematic study of 'lifted' Gepner models is presented. Lifted Gepner models are obtained from standard Gepner models by replacing one of the N=2 building blocks and the E 8 factor by a modular isomorphic N=0 model on the bosonic side of the heterotic string. The main result is that after this change three family models occur abundantly, in sharp contrast to ordinary Gepner models. In particular, more than 250 new and unrelated moduli spaces of three family models are identified. We discuss the occurrence of fractionally charged particles in these spectra.

  19. Applying the International Medical Graduate Program Model to Alleviate the Supply Shortage of Accounting Doctoral Faculty

    Science.gov (United States)

    HassabElnaby, Hassan R.; Dobrzykowski, David D.; Tran, Oanh Thikie

    2012-01-01

    Accounting has been faced with a severe shortage in the supply of qualified doctoral faculty. Drawing upon the international mobility of foreign scholars and the spirit of the international medical graduate program, this article suggests a model to fill the demand in accounting doctoral faculty. The underlying assumption of the suggested model is…

  20. Integrating Seasonal Oscillations into Basel II Behavioural Scoring Models

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-09-01

    Full Text Available The article introduces a new methodology of temporal influence measurement (seasonal oscillations, temporal patterns) for behavioural scoring development purposes. The paper shows how significant temporal variables can be recognised and then integrated into behavioural scoring models in order to improve model performance. Behavioural scoring models are integral parts of the Basel II standard on Internal Ratings-Based Approaches (IRB). The IRB approach reflects a bank's individual risk profile much more precisely. A solution of the problem of how to analyze and integrate macroeconomic and microeconomic factors represented in time series into behavioural scorecard models will be shown in the paper by using the REF II model.
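
    As a rough illustration of the kind of temporal attribute the article discusses (this is not the REF II model itself), a seasonal index can be derived from an account's monthly history and offered to a behavioural scorecard as a candidate variable:

    ```python
    def seasonal_indices(monthly_values):
        """Seasonal index per calendar month: month mean / overall mean.

        monthly_values: list of (month, value) observations, month in 1..12.
        Indices above 1.0 flag months with above-average activity, a candidate
        temporal attribute for a behavioural scorecard.
        """
        by_month = {}
        for month, value in monthly_values:
            by_month.setdefault(month, []).append(value)
        overall = sum(v for _, v in monthly_values) / len(monthly_values)
        return {m: (sum(vs) / len(vs)) / overall for m, vs in by_month.items()}

    # Hypothetical card-utilisation history: December spending runs high
    history = [(11, 100.0), (12, 150.0), (1, 90.0),
               (11, 110.0), (12, 170.0), (1, 80.0)]
    idx = seasonal_indices(history)
    ```

    A scorecard that ignores such indices would treat a seasonal December spike the same as a genuine drift in behaviour; including the index lets the model separate the two.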

  1. Towards accounting for dissolved iron speciation in global ocean models

    Directory of Open Access Journals (Sweden)

    A. Tagliabue

    2011-10-01

    Full Text Available The trace metal iron (Fe) is now routinely included in state-of-the-art ocean general circulation and biogeochemistry models (OGCBMs) because of its key role as a limiting nutrient in regions of the world ocean important for carbon cycling and air-sea CO2 exchange. However, the complexities of the seawater Fe cycle, which impact its speciation and bioavailability, are simplified in such OGCBMs due to gaps in understanding and to avoid high computational costs. In a similar fashion to inorganic carbon speciation, we outline a means by which the complex speciation of Fe can be included in global OGCBMs in a reasonably cost-effective manner. We construct an Fe speciation model based on hypothesised relationships between rate constants and environmental variables (temperature, light, oxygen, pH, salinity) and assumptions regarding the binding strengths of Fe complexing organic ligands and test hypotheses regarding their distributions. As a result, we find that the global distribution of different Fe species is tightly controlled by spatio-temporal environmental variability and the distribution of Fe binding ligands. Impacts on bioavailable Fe are highly sensitive to assumptions regarding which Fe species are bioavailable and how those species vary in space and time. When forced by representations of future ocean circulation and climate we find large changes to the speciation of Fe governed by pH mediated changes to redox kinetics. We speculate that these changes may exert selective pressure on phytoplankton Fe uptake strategies in the future ocean. In future work, more information on the sources and sinks of ocean Fe ligands, their bioavailability, the cycling of colloidal Fe species and kinetics of Fe-surface coordination reactions would be invaluable. We hope our modeling approach can provide a means by which new observations of Fe speciation can be tested against hypotheses of the processes present in governing the ocean Fe cycle in an

  2. Spherical Detector Device Mathematical Modelling with Taking into Account Detector Module Symmetry

    International Nuclear Information System (INIS)

    Batyj, V.G.; Fedorchenko, D.V.; Prokopets, S.I.; Prokopets, I.M.; Kazhmuradov, M.A.

    2005-01-01

    A mathematical model for a spherical detector device that accounts for the symmetry properties of the detector module is considered. An exact algorithm for simulating the measurement procedure with multiple radiation sources is developed. Modelling results are shown to be in excellent agreement with calibration measurements.

  3. A mathematical model of sentimental dynamics accounting for marital dissolution.

    Science.gov (United States)

    Rey, José-Manuel

    2010-03-31

    Marital dissolution is ubiquitous in western societies. It poses major scientific and sociological problems both in theoretical and therapeutic terms. Scholars and therapists agree on the existence of a sort of second law of thermodynamics for sentimental relationships. Effort is required to sustain them. Love is not enough. Building on a simple version of the second law we use optimal control theory as a novel approach to model sentimental dynamics. Our analysis is consistent with sociological data. We show that, when both partners have similar emotional attributes, there is an optimal effort policy yielding a durable happy union. This policy is prey to structural destabilization resulting from a combination of two factors: there is an effort gap because the optimal policy always entails discomfort and there is a tendency to lower effort to non-sustaining levels due to the instability of the dynamics. These mathematical facts implied by the model unveil an underlying mechanism that may explain couple disruption in real scenarios. Within this framework the apparent paradox that a union consistently planned to last forever will probably break up is explained as a mechanistic consequence of the second law.
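
    The "second law" dynamic described in the abstract can be caricatured with a one-line differential equation, dx/dt = −decay·x + gain·effort: feeling decays unless effort is supplied. The following Euler sketch uses purely illustrative parameters, not those of the paper:

    ```python
    def simulate_feeling(x0, effort, decay=0.1, gain=1.0, dt=0.1, steps=200):
        """Euler integration of dx/dt = -decay * x + gain * effort.

        Without effort, feeling decays toward zero; constant effort sustains it
        near the equilibrium gain * effort / decay. All parameter values are
        illustrative, not taken from the paper.
        """
        x = x0
        for _ in range(steps):
            x += dt * (-decay * x + gain * effort)
        return x

    no_effort = simulate_feeling(1.0, effort=0.0)    # decays toward 0
    with_effort = simulate_feeling(1.0, effort=0.2)  # approaches 0.2 / 0.1 = 2.0
    ```

    The paper's point is subtler than this sketch: even when a sustaining effort policy exists, it sits above the comfortable effort level, so the union is structurally prone to sliding toward the no-effort trajectory.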

  4. A mathematical model of sentimental dynamics accounting for marital dissolution.

    Directory of Open Access Journals (Sweden)

    José-Manuel Rey

    Full Text Available BACKGROUND: Marital dissolution is ubiquitous in western societies. It poses major scientific and sociological problems both in theoretical and therapeutic terms. Scholars and therapists agree on the existence of a sort of second law of thermodynamics for sentimental relationships. Effort is required to sustain them. Love is not enough. METHODOLOGY/PRINCIPAL FINDINGS: Building on a simple version of the second law we use optimal control theory as a novel approach to model sentimental dynamics. Our analysis is consistent with sociological data. We show that, when both partners have similar emotional attributes, there is an optimal effort policy yielding a durable happy union. This policy is prey to structural destabilization resulting from a combination of two factors: there is an effort gap because the optimal policy always entails discomfort and there is a tendency to lower effort to non-sustaining levels due to the instability of the dynamics. CONCLUSIONS/SIGNIFICANCE: These mathematical facts implied by the model unveil an underlying mechanism that may explain couple disruption in real scenarios. Within this framework the apparent paradox that a union consistently planned to last forever will probably break up is explained as a mechanistic consequence of the second law.

  5. Nurse-directed care model in a psychiatric hospital: a model for clinical accountability.

    Science.gov (United States)

    E-Morris, Marlene; Caldwell, Barbara; Mencher, Kathleen J; Grogan, Kimberly; Judge-Gorny, Margaret; Patterson, Zelda; Christopher, Terrian; Smith, Russell C; McQuaide, Teresa

    2010-01-01

    The focus on recovery for persons with severe and persistent mental illness is leading state psychiatric hospitals to transform their method of care delivery. This article describes a quality improvement project involving a hospital's administration and multidisciplinary state-university affiliation that collaborated in the development and implementation of a nursing care delivery model in a state psychiatric hospital. The quality improvement project team instituted a new model to promote the hospital's vision of wellness and recovery through utilization of the therapeutic relationship and greater clinical accountability. Implementation of the model was accomplished in 2 phases: first, the establishment of a structure to lay the groundwork for accountability and, second, the development of a mechanism to provide a clinical supervision process for staff in their work with clients. Effectiveness of the model was assessed by surveys conducted at baseline and after implementation. Results indicated improvement in clinical practices and client living environment. As a secondary outcome, these improvements appeared to be associated with increased safety on the units evidenced by reduction in incidents of seclusion and restraint. Restructuring of the service delivery system of care so that clients are the center of clinical focus improves safety and can enhance the staff's attention to work with clients on their recovery. The role of the advanced practice nurse can influence the recovery of clients in state psychiatric hospitals. Future research should consider the impact on clients and their perceptions of the new service models.

  6. Lanchester-Type Models of Warfare. Volume II

    Science.gov (United States)

    1980-10-01

    [OCR residue of a table comparing the attrition-rate notations of HOWES and THRALL (1972), HOLTER (1973), ANDERSON (1979) and SPUDICH (1968) omitted] ... detail can one afford? A recent U.S. General Accounting Office (GAO) report [150, pp. 28-29] points out that there is a strong inconsistency between ... further details). 65. A recent U.S. General Accounting Office (GAO) [150] study has emphasized that empirical study is necessary to strengthen the

  7. A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles

    Science.gov (United States)

    Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.

    2016-12-01

    Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally-weighted multi-model mean as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus, wrong estimation of uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and that we are not fitting noise, thus providing improved observationally-constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, where all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
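
    The selection idea described above, choosing the subset whose climatological mean best matches observations so that opposing biases cancel, can be sketched with an exhaustive search (a toy illustration with made-up numbers, not CMIP5 data):

    ```python
    from itertools import combinations
    from math import sqrt

    def rmse(a, b):
        """Root mean square error between two equal-length sequences."""
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

    def best_subset(models, obs, size):
        """Exhaustively find the subset of model climatologies whose
        equally-weighted mean minimises RMSE against the observations."""
        best = None
        for combo in combinations(range(len(models)), size):
            mean = [sum(models[i][k] for i in combo) / size
                    for k in range(len(obs))]
            err = rmse(mean, obs)
            if best is None or err < best[0]:
                best = (err, combo)
        return best

    # Toy climatologies: the warm and cool biases of models 0 and 1 cancel
    models = [[1.2, 2.2, 3.2],   # warm bias
              [0.8, 1.8, 2.8],   # cool bias
              [1.5, 2.5, 3.5]]   # large warm bias
    obs = [1.0, 2.0, 3.0]
    err, combo = best_subset(models, obs, 2)
    ```

    Note that a ranking-based choice would pick models 0 and 1 here too, but only by luck: ranking optimises individual skill, whereas the subset search optimises the mean, which is the abstract's point about error cancellation.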

  8. MODEL OF ACCOUNTING ENGINEERING IN VIEW OF EARNINGS MANAGEMENT IN POLAND

    Directory of Open Access Journals (Sweden)

    Leszek Michalczyk

    2012-10-01

    Full Text Available The article introduces the theoretical foundations of the author’s original concept of accounting engineering. We assume a theoretical premise whereby accounting engineering is understood as a system of accounting practice utilising differences in economic events resultant from the use of divergent accounting methods. Unlike, for instance, creative or praxeological accounting, accounting engineering is composed only, and under all circumstances, of lawful activities and adheres to the current regulations of the balance sheet law. The aim of the article is to construct a model of accounting engineering that exploits the differences inherently present in variant accounting. These differences result in disparate financial results of identical economic events. Given the fact that regardless of which variant is used in accounting, all settlements are eventually equal to one another, a new class of differences emerges - the accounting engineering potential. It is transferred to subsequent reporting (balance sheet) periods. In the end, the profit “made” in a given period reduces the financial result of future periods. This effect is due to the “transfer” of costs from one period to another. Such actions may have sundry consequences and are especially dangerous whenever many individuals are concerned with the profit of a given company, e.g. on a stock exchange. The reverse may be observed when a company is privatised and its value is being intentionally reduced by a controlled recording of accounting provisions, depending on the degree to which they are justified. The reduction of a company’s goodwill in Balcerowicz’s model of no-tender privatisation makes it possible to justify the low value of the purchased company. These are only some of many manifestations of variant accounting which accounting engineering employs. A theoretical model of the latter is presented in this article.
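
    A minimal numeric illustration of the variant-accounting effect described above (the example and figures are the editor's, not the article's): two lawful depreciation methods charge the same lifetime total but shift profit between periods, and the per-period gap is the "accounting engineering potential" carried into subsequent reporting periods:

    ```python
    def straight_line(cost, years):
        """Equal depreciation charge in each period."""
        return [cost / years] * years

    def sum_of_years_digits(cost, years):
        """Accelerated depreciation: larger charges in early periods."""
        total = years * (years + 1) / 2
        return [cost * (years - y) / total for y in range(years)]

    cost, years = 12000.0, 3
    sl = straight_line(cost, years)          # 4000 per year
    syd = sum_of_years_digits(cost, years)   # 6000, 4000, 2000

    # Per-period gap between the two lawful variants: profit deferred from
    # early periods reappears in later ones, while lifetime totals match.
    potential = [a - b for a, b in zip(syd, sl)]
    ```

    The gap sums to zero over the asset's life, which is exactly the article's observation that all settlements are eventually equal while individual periods diverge.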

  9. Impact of Distance in the Provision of Maternal Health Care Services and Its Accountability in Murarai-II Block, Birbhum District

    Directory of Open Access Journals (Sweden)

    Alokananda Ghosh

    2016-06-01

    Full Text Available The maternal health issue was part of Millennium Development Goals (MDGs) Target 5. It has since been incorporated into Target 3 of the 17 Sustainable Development Goals for 2030 declared by the United Nations in 2015. In India, about 50% of newborn deaths could be prevented by good care of the mother during pregnancy, childbirth and the postpartum period. This requires timely, well-equipped healthcare by trained providers, along with emergency transportation for referral in obstetric emergencies. Governments need to ensure the availability of physicians in rural underserved areas. The utilisation of maternal healthcare services (MHCSs) depends on both the availability and the accessibility of services, along with their accountability. This study is based on an empirical retrospective survey, also called a historic study, to evaluate the influence of distance on the provision of maternal health services and on their accountability in Murarai-II block, Birbhum District. The major objective of the study is to identify the influence of distance on the provision and accountability of the overall MHCSs. The investigation found a strong inverse relationship (-0.75) between the accessibility index and the accountability score, with p-value = 0.05. Tracking of pregnant women, identification of high-risk pregnancies and timely postnatal care (PNC) emerged as the dominant factors of the maternal healthcare services in the first principal component of the Principal Component Analysis (PCA), explaining 49.67% of the variance in the accountability system. Overall, institutional barriers to accessibility are identified as important constraints behind the lower accountability of the services, preventing the anticipated benefit. This study highlights the critical areas where maternal healthcare services are lacking, and the analysis underlines the importance of physical access to health services in shaping their provision. Drawing on empirical observations of the operation of the public distribution system in
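    The statistics quoted above can be illustrated with a hedged sketch on synthetic data (sample size, noise level and indicator variables are assumptions, not the study's data): a Pearson correlation between an accessibility index and an accountability score, and the share of variance explained by the first principal component.

```python
# Hedged sketch on synthetic data of the kind of analysis the study reports:
# a Pearson correlation and the PC1 explained-variance ratio from a PCA.
import numpy as np

rng = np.random.default_rng(0)
n = 120  # hypothetical number of surveyed respondents

accessibility = rng.uniform(0, 1, n)
# Build an inverse relationship plus noise, mimicking the reported r ~ -0.75
accountability = 1 - accessibility + rng.normal(0, 0.25, n)

r = np.corrcoef(accessibility, accountability)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative

# PCA on three standardized (synthetic) accountability indicators
X = np.column_stack([
    accountability,
    accountability * 0.8 + rng.normal(0, 0.3, n),
    rng.normal(0, 1, n),
])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Xs.T))[::-1]   # descending order
explained = eigvals / eigvals.sum()
print(f"PC1 explains {explained[0]:.1%} of the variance")
```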

  10. A Case Study of the Accounting Models for the Participants in an Emissions Trading Scheme

    Directory of Open Access Journals (Sweden)

    Marius Deac

    2013-10-01

    Full Text Available As emissions trading schemes become more popular across the world, accounting has to keep up with these new economic developments. The absence of guidance on accounting for greenhouse gas (GHG) emissions following the withdrawal of IFRIC 3, Emission Rights, is the main reason for the current diversity of accounting practices. This diversity of accounting methods makes the financial statements of companies taking part in emissions trading schemes such as the EU ETS difficult to compare. The present paper uses a case study that assumes the existence of three entities that have chosen three different accounting methods: the IFRIC 3 cost model, the IFRIC 3 revaluation model and the “off balance sheet” approach. It illustrates how the choice of an accounting method for GHG emissions influences the companies’ interim and annual reports through the changes in their balance sheets and financial results.

  11. A new model in achieving Green Accounting at hotels in Bali

    Science.gov (United States)

    Astawa, I. P.; Ardina, C.; Yasa, I. M. S.; Parnata, I. K.

    2018-01-01

    The concept of green accounting remains debated in terms of its implementation in a company. Previous studies indicate that there is no standard model for its implementation to support performance. This research aims to create a green accounting model that differs from other models by using local cultural elements as the variables that build it. The research was conducted in two steps. The first step was designing the model on the basis of theoretical studies, considering the main and supporting elements in building the concept of green accounting. The second step was a model test at 60 five-star hotels, starting with data collection through a questionnaire and followed by data processing using descriptive statistics. The results indicate that the hotel owners have implemented green accounting attributes, which supports previous studies. Another result, a new finding, shows that local culture, government regulation, and the awareness of hotel owners play an important role in the development of the green accounting concept. The results of the research contribute to accounting science in the area of green reporting. Hotel management should adopt local culture in building the character of the accountants hired in the accounting department.

  12. Predictive Models and Computational Toxicology (II IBAMTOX)

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  13. Nyala and Bushbuck II: A Harvesting Model.

    Science.gov (United States)

    Fay, Temple H.; Greeff, Johanna C.

    1999-01-01

    Adds a cropping or harvesting term to the animal overpopulation model developed in Part I of this article. Investigates various harvesting strategies that might suggest a solution to the overpopulation problem without actually culling any animals. (ASK)

  14. Base Flow Model Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  15. Mineral vein dynamics modelling (FRACS II)

    International Nuclear Information System (INIS)

    Urai, J.; Virgo, S.; Arndt, M.

    2016-08-01

    The Mineral Vein Dynamics Modeling group ''FRACS'' started out as a team of 7 research groups in its first phase and continued with a team of 5 research groups at the Universities of Aachen, Tuebingen, Karlsruhe, Mainz and Glasgow during its second phase, ''FRACS II''. The aim of the group was to develop an advanced understanding of the interplay between fracturing, fluid flow and fracture healing, with a special emphasis on the comparison of field data and numerical models. Field areas comprised the Oman Mountains in Oman (already studied in detail in the first phase), a siliciclastic sequence in the Internal Ligurian Units in Italy (close to Sestri Levante) and cores of Zechstein carbonates from a lean gas reservoir in Northern Germany. Numerical models of fracturing, sealing and interaction with fluids that were developed in phase I were expanded in phase II. They were used to model small-scale fracture healing by crystal growth and its influence on flow; medium-scale fracture healing and its influence on successive fracturing and healing; and large-scale dynamic fluid flow through opening and closing fractures and channels as a function of fluid overpressure. The numerical models were compared with structures in the field, and we were able to identify first proxies for mechanical vein-host rock properties and fluid overpressures versus tectonic stresses. Finally, we propose a new classification of stylolites based on numerical models and observations in the Zechstein cores, and we continued to develop a new stress inversion tool that uses stylolites to estimate the depth of their formation.

  17. NGC1300 dynamics - II. The response models

    Science.gov (United States)

    Kalapotharakos, C.; Patsis, P. A.; Grosbøl, P.

    2010-10-01

    We study the stellar response in a spectrum of potentials describing the barred spiral galaxy NGC1300. These potentials were presented in a previous paper and correspond to three different assumptions about the geometry of the galaxy. For each potential we consider a wide range of pattern speed (Ωp) values. Our goal is to discover the geometries and the Ωp values supporting specific morphological features of NGC1300. For this purpose we use the method of response models. In order to compare the images of NGC1300 with the density maps of our models, we define a new index which is a generalization of the Hausdorff distance. This index helps us determine quantitatively, and in an objective way, which cases reproduce specific features of NGC1300. Furthermore, we construct alternative models following a Schwarzschild-type technique. In this method we vary the weights of the various energy levels, and thus the orbital contribution of each energy, in order to minimize the differences between the response density and that deduced from the surface density of the galaxy, under certain assumptions. We find that the models corresponding to Ωp ≈ 16 and 22 km s-1 kpc-1 are able to reproduce certain morphological features of NGC1300 efficiently, each having its advantages and drawbacks. Based on observations collected at the European Southern Observatory, Chile: programme ESO 69.A-0021.
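    The paper's index generalizes the Hausdorff distance; the classical symmetric form it starts from can be sketched as follows (the small point sets below stand in for thresholded density maps and are purely illustrative):

```python
# Classical symmetric Hausdorff distance between two point sets -- the
# quantity the paper's model-vs-image index generalizes. Illustrative data.
import numpy as np

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets A (n,2) and B (m,2)."""
    # pairwise Euclidean distances, shape (n, m)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    h_ab = d.min(axis=1).max()  # sup over A of the distance to nearest B point
    h_ba = d.min(axis=0).max()  # sup over B of the distance to nearest A point
    return max(h_ab, h_ba)

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
print(hausdorff(A, B))  # 1.0  (the single displaced point dominates)
```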

  18. An Integrative Model of the Strategic Management Accounting at the Enterprises of Chemical Industry

    Directory of Open Access Journals (Sweden)

    Aleksandra Vasilyevna Glushchenko

    2016-06-01

    Full Text Available Currently, the issues of the information and analytical support of strategic management, which enables timely and high-quality management decisions, are extremely relevant. Conflicting and poor-quality information, collected haphazardly from unreliable sources in the practice of large companies, hampers the effective implementation of their development strategies and carries the threat of risk, given the increasing instability of the external environment. The chemical industry occupies one of the central places in Russian industry and, of course, has its own specificity in the formation of an information support system. An information system suitable for the development and implementation of strategic directions turns strategic management accounting into a recognized competitive advantage. The lack of requirements for strategic accounting information, and its inconsistency resulting from simultaneous accumulation in different units using different methods of calculation and assessment of indicators, cannot be resolved without a well-constructed model of the organization of strategic management accounting. The purpose of this study is to develop such a model, the implementation of which will make it possible to achieve strategic goals by harmonizing information from the individual objects of strategic accounting, thereby increasing the functional effectiveness of management decisions with a focus on strategy. The case study was based on dialectical logic and methods of system analysis, identifying causal relationships in building a model of strategic management accounting that contributes to forecasts of its development. The study proposes the implementation of an integrative model of the organization of strategic management accounting; its phased implementation defines the objects and tools of strategic management accounting.
Moreover, it is determined that from the point of view of increasing the usefulness of management

  19. Simulation modeling and analysis in safety. II

    International Nuclear Information System (INIS)

    Ayoub, M.A.

    1981-01-01

    The paper introduces and illustrates simulation modeling as a viable approach for dealing with complex issues and decisions in safety and health. The author details two studies: evaluation of employee exposure to airborne radioactive materials and effectiveness of the safety organization. The first study seeks to define a policy to manage a facility used in testing employees for radiation contamination. An acceptable policy is one that would permit the testing of all employees as defined under regulatory requirements, while not exceeding available resources. The second study evaluates the relationship between safety performance and the characteristics of the organization, its management, its policy, and communication patterns among various functions and levels. Both studies use models where decisions are reached based on the prevailing conditions and occurrence of key events within the simulation environment. Finally, several problem areas suitable for simulation studies are highlighted. (Auth.)

  20. A Dynamic Simulation Model of the Management Accounting Information Systems (MAIS)

    Science.gov (United States)

    Konstantopoulos, Nikolaos; Bekiaris, Michail G.; Zounta, Stella

    2007-12-01

    The aim of this paper is to examine the factors that determine the problems and the advantages in the design of management accounting information systems (MAIS). A simulation is carried out with a dynamic model of the MAIS design.

  1. System modeling and simulation at EBR-II

    International Nuclear Information System (INIS)

    Dean, E.M.; Lehto, W.K.; Larson, H.A.

    1986-01-01

    The codes being developed and verified using EBR-II data are NATDEMO, DSNP and CSYRED. NATDEMO is a variation of the Westinghouse DEMO code coupled to the NATCON code, previously used to simulate perturbations of reactor flow and inlet temperature and loss-of-flow transients leading to natural convection in EBR-II. CSYRED uses the Continuous System Modeling Program (CSMP) to simulate the EBR-II core, including power, temperature, control-rod movement reactivity effects and flow, and is used primarily to model reactivity-induced power transients. The Dynamic Simulator for Nuclear Power Plants (DSNP) allows whole-plant, thermal-hydraulic simulation using specific component and system models called from libraries. It has been used to simulate flow coastdown transients, reactivity insertion events and balance-of-plant perturbations

  2. Argonne Bubble Experiment Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.

  3. Horns Rev II, 2-D Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), on behalf of Energy E2 A/S, part of DONG Energy A/S, Denmark. The objective of the tests was to investigate the combined influence of the pile diameter to water depth ratio and the wave height to water depth ratio on wave run-up on piles. The measurements are to be used to design access platforms on piles.

  4. Illusory inferences from a disjunction of conditionals: a new mental models account.

    Science.gov (United States)

    Barrouillet, P; Lecas, J F

    2000-08-14

    Johnson-Laird and Savary (Johnson-Laird, P.N., & Savary, F. (1999). Illusory inferences: a novel class of erroneous deductions. Cognition, 71, 191-229) have recently presented a mental models account, based on the so-called principle of truth, of the occurrence of inferences that are compelling but invalid. This article presents an alternative account of the illusory inferences resulting from a disjunction of conditionals. In accordance with our modified theory of mental models of the conditional, we show that the way individuals represent conditionals leads them to misinterpret the locus of the disjunction and prevents them from drawing conclusions from a false conditional, thus accounting for the compelling character of the illusory inference.

  5. Taking into account of the Pauli principle in the quasiparticle-phonon nuclear model

    International Nuclear Information System (INIS)

    Solov'ev, V.G.

    1979-01-01

    The effect of exactly taking the Pauli principle and ground-state correlations into account in calculations within the quasiparticle-phonon model of the nucleus has been studied. It is elucidated when it is possible to use the random phase approximation (RPA) and when the Pauli principle should be taken into account exactly. It has been shown that calculations in the quasiparticle-phonon model of the nucleus may be performed with an exact account of the Pauli principle. In most problems, calculations can be carried out with RPA phonons

  6. Mutual Calculations in Creating Accounting Models: A Demonstration of the Power of Matrix Mathematics in Accounting Education

    Science.gov (United States)

    Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg

    2016-01-01

    The aim of this paper is to describe the innovative teaching approach used in the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating the technique of teaching to solve accounting cases using mutual calculations to a worldwide audience. The approach taken in this course…
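    A hedged sketch of the matrix bookkeeping idea behind such "mutual calculations" (a generic illustration, not necessarily the course's exact formulation; account names and amounts are hypothetical): each journal entry "debit i, credit j, amount a" adds a to cell (i, j) of a correspondence matrix, and turnovers and balance changes fall out of row and column sums.

```python
# Journal entries as a correspondence matrix: M[i, j] accumulates amounts
# debited to account i and credited to account j. Debit turnover = row sums,
# credit turnover = column sums, balance change = their difference.
import numpy as np

accounts = ["Cash", "Receivables", "Revenue", "Expenses"]
n = len(accounts)
M = np.zeros((n, n))

def post(debit: str, credit: str, amount: float) -> None:
    M[accounts.index(debit), accounts.index(credit)] += amount

post("Receivables", "Revenue", 500.0)   # invoice a customer
post("Cash", "Receivables", 300.0)      # collect part of the invoice
post("Expenses", "Cash", 120.0)         # pay an expense

debit_turnover = M.sum(axis=1)    # row sums
credit_turnover = M.sum(axis=0)   # column sums
balance_change = debit_turnover - credit_turnover

for name, d, c, b in zip(accounts, debit_turnover, credit_turnover, balance_change):
    print(f"{name:12s} Dr {d:7.2f}  Cr {c:7.2f}  change {b:+8.2f}")

# Double-entry invariant: total debits equal total credits
assert debit_turnover.sum() == credit_turnover.sum()
```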

  7. 76 FR 29249 - Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications

    Science.gov (United States)

    2011-05-20

    ... Affordable Care Act, to test innovative payment and service delivery models that reduce spending under.... This Model will test the effectiveness of a combination of the following: Payment arrangements that...] Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications AGENCY: Centers for...

  8. PEP-II vacuum system pressure profile modeling using EXCEL

    International Nuclear Information System (INIS)

    Nordby, M.; Perkins, C.

    1994-06-01

    A generic, adaptable Microsoft EXCEL program to simulate molecular flow in beam line vacuum systems is introduced. Modeling using finite-element approximation of the governing differential equation is discussed, as well as error estimation and program capabilities. The ease of use and flexibility of the spreadsheet-based program is demonstrated. PEP-II vacuum system models are reviewed and compared with analytical models
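    A hedged sketch of the underlying finite-difference idea (not the PEP-II spreadsheet itself; all parameter values are assumed): steady molecular flow in a uniform beam pipe with distributed outgassing and lumped end pumps obeys c d²P/dx² + q = 0 and reduces to a tridiagonal linear system.

```python
# 1-D steady molecular-flow pressure profile: specific conductance c,
# uniform outgassing q per unit length, lumped pumps of speed S at both
# ends. All numbers are illustrative assumptions.
import numpy as np

L, N = 10.0, 101            # pipe length [m], grid points
c = 20.0                    # specific conductance [l*m/s] (assumed)
q = 1e-8                    # outgassing per unit length [Torr*l/s/m] (assumed)
S = 100.0                   # pump speed at each end [l/s] (assumed)
dx = L / (N - 1)

A = np.zeros((N, N))
b = np.full(N, -q * dx * dx / c)     # interior equation: P'' = -q/c

for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

# Pump boundaries (one-sided difference): gas flow into the pump balances
# the pumped throughput, c*(P[1]-P[0])/dx = S*P[0], and symmetrically at x=L.
A[0, 0], A[0, 1] = -(c / dx + S), c / dx
b[0] = 0.0
A[-1, -1], A[-1, -2] = -(c / dx + S), c / dx
b[-1] = 0.0

P = np.linalg.solve(A, b)
print(f"pressure at pump: {P[0]:.3e} Torr, at midpoint: {P[N // 2]:.3e} Torr")
```

    The profile is the familiar parabola peaking midway between pumps; a spreadsheet implementation stores the same tridiagonal stencil cell by cell.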

  9. Measurement and modeling of advanced coal conversion processes, Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G. [and others

    1993-06-01

    A two-dimensional, steady-state model for describing a variety of reactive and nonreactive flows, including pulverized coal combustion and gasification, is presented. The model, referred to as 93-PCGC-2, is applicable to cylindrical, axisymmetric systems. Turbulence is accounted for in both the fluid mechanics equations and the combustion scheme. Radiation from gases, walls, and particles is taken into account using a discrete ordinates method. The particle phase is modeled in a Lagrangian framework, such that the mean paths of particle groups are followed. A new coal-general devolatilization submodel (FG-DVC), with coal swelling and char reactivity submodels, has been added.

  10. LP II--A GOAL PROGRAMMING MODEL FOR MEDIA.

    Science.gov (United States)

    CHARNES, A.; AND OTHERS

    A goal programming model for selecting media is presented which alters the objective and extends previous media models by accounting for cumulative duplicating audiences over a variety of time periods. This permits detailed control of the distribution of message frequencies directed at each of numerous marketing targets over a sequence of…
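    The deviation-variable skeleton that goal programming media models of this kind share can be written as follows (notation illustrative, not the paper's):

```latex
% Generic goal-programming skeleton with under/over-achievement variables
\begin{aligned}
\min\; & \sum_{g} w_g \left( d_g^{+} + d_g^{-} \right) \\
\text{s.t.}\; & \sum_{j} a_{gj}\, x_j + d_g^{-} - d_g^{+} = t_g \quad \forall g, \\
& x_j \ge 0, \qquad d_g^{+},\, d_g^{-} \ge 0 .
\end{aligned}
```

    Here x_j is the number of insertions in medium j, a_{gj} the contribution of medium j to goal g (e.g. a target message frequency for one marketing target in one time period), t_g the goal's target level, and d_g^-, d_g^+ its under- and over-achievement; cumulative audience duplication enters through the coefficients a_{gj}.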

  11. A thermoelectric power generating heat exchanger: Part II – Numerical modeling and optimization

    DEFF Research Database (Denmark)

    Sarhadi, Ali; Bjørk, Rasmus; Lindeburg, N.

    2016-01-01

    In Part I of this study, the performance of an experimental integrated thermoelectric generator (TEG)-heat exchanger was presented. In the current study, Part II, the obtained experimental results are compared with those predicted by a finite element (FE) model. In the simulation of the integrated TEG-heat exchanger, the thermal contact resistance between the TEG and the heat exchanger is modeled assuming either an ideal thermal contact or using a combined Cooper–Mikic–Yovanovich (CMY) and parallel plate gap formulation, which takes into account the contact pressure, roughness and hardness...

  12. Accountability: a missing construct in models of adherence behavior and in clinical practice

    Directory of Open Access Journals (Sweden)

    Oussedik E

    2017-07-01

    Full Text Available Elias Oussedik,1 Capri G Foy,2 E J Masicampo,3 Lara K Kammrath,3 Robert E Anderson,1 Steven R Feldman1,4,5 1Center for Dermatology Research, Department of Dermatology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 2Department of Social Sciences and Health Policy, Wake Forest School of Medicine, Winston-Salem, NC, USA; 3Department of Psychology, Wake Forest University, Winston-Salem, NC, USA; 4Department of Pathology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 5Department of Public Health Sciences, Wake Forest School of Medicine, Winston-Salem, NC, USA Abstract: Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients’ motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and it is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8–12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate it into an existing adherence model framework. Based on Self-Determination Theory, accountability can be considered on a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability) to a patient’s autonomous internal desire to please a respected health care provider (autonomous accountability), the latter being expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura’s Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well

  13. Development and application of a large scale river system model for National Water Accounting in Australia

    Science.gov (United States)

    Dutta, Dushmanta; Vaze, Jai; Kim, Shaun; Hughes, Justin; Yang, Ang; Teng, Jin; Lerat, Julien

    2017-04-01

    Existing global and continental scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion and groundwater seepage/recharge, which operate at much finer resolutions. These models are therefore not suitable for producing water accounts, which have become increasingly important for water resources planning and management at regional and national scales. A continental scale river system model, the Australian Water Resource Assessment River System model (AWRA-R), has been developed and implemented for national water accounting in Australia using a node-link architecture. The model includes the major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. Two key components of the model are an irrigation model that computes water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model that computes overbank flow from river to floodplain and the associated floodplain fluxes and stores. The results in the Murray-Darling Basin show highly satisfactory performance of the model, with a median daily Nash-Sutcliffe Efficiency (NSE) of 0.64 and a median annual bias of less than 1% for the calibration period (1970-1991), and a median daily NSE of 0.69 and a median annual bias of 12% for the validation period (1992-2014). The results demonstrate that the performance of the model is less satisfactory when key processes such as overbank flow, groundwater seepage and irrigation diversion are switched off. The AWRA-R model, which has been operationalised by the Australian Bureau of Meteorology for continental scale water accounting, has contributed to improvements in the national water account by substantially reducing the unaccounted difference volume (gain/loss).
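    The two skill scores quoted for AWRA-R, the Nash-Sutcliffe Efficiency and the relative (percent) bias, are standard and can be sketched directly (the streamflow series below are synthetic, not the model's output):

```python
# Nash-Sutcliffe Efficiency (NSE) and percent bias for a simulated
# streamflow series against observations. Series are synthetic.
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 means no better than the obs mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs: np.ndarray, sim: np.ndarray) -> float:
    """Relative bias in percent (positive = model overestimates total flow)."""
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

obs = np.array([10.0, 12.0, 30.0, 55.0, 22.0, 15.0])
sim = np.array([11.0, 10.0, 28.0, 60.0, 20.0, 14.0])
print(f"NSE = {nse(obs, sim):.2f}, bias = {pbias(obs, sim):+.1f}%")
# NSE = 0.97, bias = -0.7%
```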

  14. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    OpenAIRE

    Mitchell, Lewis; Carrassi, Alberto

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described...
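    One simple way to realize an additive model-error treatment of the kind described, as a sketch under stated assumptions rather than the authors' exact scheme: estimate a model-error covariance from a bank of historical increments, then perturb the forecast ensemble with draws from it before the ETKF analysis update. Dimensions and data below are illustrative.

```python
# Additive model-error inflation sketch: Q is estimated from a bank of
# historical (re)analysis increments; the forecast ensemble is perturbed
# with draws from N(0, Q) so the analysis accounts for unresolved scales.
import numpy as np

rng = np.random.default_rng(1)
n_state, n_ens, n_incr = 8, 10, 200

# Bank of historical reanalysis increments (synthetic stand-in)
increments = rng.normal(0.0, 0.3, size=(n_incr, n_state))
Q = np.cov(increments.T)               # model-error covariance estimate

# Forecast ensemble (synthetic), one member per row
ens = rng.normal(0.0, 1.0, size=(n_ens, n_state))

# Additive treatment: perturb each member with a draw from N(0, Q)
eta = rng.multivariate_normal(np.zeros(n_state), Q, size=n_ens)
ens_infl = ens + eta                   # feed this into the ETKF update

spread = ens.std(axis=0, ddof=1).mean()
spread_infl = ens_infl.std(axis=0, ddof=1).mean()
print(f"mean spread before: {spread:.2f}, after: {spread_infl:.2f}")
# In expectation the inflated ensemble carries the extra variance that
# the unresolved-scale model error represents.
```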

  15. A Social Accountable Model for Medical Education System in Iran: A Grounded-Theory

    Directory of Open Access Journals (Sweden)

    Mohammadreza Abdolmaleki

    2017-10-01

    Full Text Available Social accountability has been increasingly discussed over the past three decades in various fields providing services to the community and has been expressed as a goal for various areas. In the medical education system, as in other areas of social accountability, it is considered one of the main objectives globally. The aim of this study was to seek a theory of social accountability in the medical education system capable of identifying all the standards, norms, and conditions within the country related to the study subject and of recognizing their relationships. In this study, a total of eight experts in the field of social accountability in the medical education system, with executive or research experience, were interviewed personally. After analysis of the interviews, 379 codes, 59 secondary categories, 16 subcategories, and 9 main categories were obtained. The resulting data were collected and analyzed at three levels of open coding, axial coding, and selective coding in the form of the grounded theory study “Accountability model of medical education in Iran”, which can be used in the education system’s policies and planning for social accountability, given that almost all effective components of social accountability in the higher education health system, with their causal and facilitating associations, were determined. Keywords: SOCIAL ACCOUNTABILITY, COMMUNITY-ORIENTED MEDICINE, COMMUNITY MEDICINE, EDUCATION SYSTEM, GROUNDED THEORY

  16. Toward a Useful Model for Group Mentoring in Public Accounting Firms

    Directory of Open Access Journals (Sweden)

    Steven J. Johnson

    2013-07-01

    Full Text Available Today’s public accounting firms face a number of challenges in relation to their most valuable resource and primary revenue generator: human capital. Expanding regulations, technology advances, increased competition and high turnover rates are just a few of the issues confronting public accounting leaders in today’s complex business environment. In recent years, some public accounting firms have attempted to combat low retention and high burnout rates with traditional one-to-one mentoring programs, with varying degrees of success. Many firms have found that they lack the resources necessary to successfully implement and maintain such programs. In other industries, organizations have used a group mentoring approach in an attempt to remove potential barriers to mentoring success. Although the research on group mentoring shows promise for positive organizational outcomes, no cases could be found in the literature regarding its use in a public accounting firm. Because of the unique challenges associated with public accounting firms, this paper attempts to answer two questions: (1) Does group mentoring provide a viable alternative to traditional mentoring in a public accounting firm? (2) If so, what general model might be used for implementing such a program? In answering these questions, a review of the group mentoring literature is provided, along with a suggested model for the implementation of group mentoring in a public accounting firm.

  17. A simulation model of hospital management based on cost accounting analysis according to disease.

    Science.gov (United States)

    Tanaka, Koji; Sato, Junzo; Guo, Jinqiu; Takada, Akira; Yoshihara, Hiroyuki

    2004-12-01

    Since a little before 2000, hospital cost accounting has been increasingly performed at Japanese national university hospitals. At Kumamoto University Hospital, for instance, departmental costs have been analyzed since 2000, and since 2003 the cost balance has been obtained for certain diseases in preparation for Diagnosis-Related Groups and the Prospective Payment System. On the basis of these experiences, we have constructed a simulation model of hospital management based on the financial data of an existing hospital. The program has worked correctly in repeated trials and with satisfactory speed. Although there is room for improvement in the detailed accounts and the cost accounting engine, the basic model has proved satisfactory. We will later improve this program in its construction and by using a wider variety of hospital management data. A prospective outlook may be obtained for the practical application of this hospital management model.

  18. Computing Models of CDF and D0 in Run II

    International Nuclear Information System (INIS)

    Lammel, S.

    1997-05-01

    The next collider run of the Fermilab Tevatron, Run II, is scheduled for autumn of 1999. Both experiments, the Collider Detector at Fermilab (CDF) and the D0 experiment, are being modified to cope with the higher luminosity and shorter bunch spacing of the Tevatron. New detector components, higher event complexity, and an increased data volume require changes from the data acquisition systems up to the analysis systems. In this paper we present a summary of the computing models of the two experiments for Run II.

  20. Facility level SSAC for model country - an introduction and material balance accounting principles

    International Nuclear Information System (INIS)

    Jones, R.J.

    1989-01-01

    A facility level State System of Accounting for and Control of Nuclear Materials (SSAC) for a model country and the principles of materials balance accounting relating to that country are described. The seven principal elements of a SSAC are examined and a facility level system based on them discussed. The seven elements are organization and management; nuclear material measurements; measurement quality; records and reports; physical inventory taking; material balance closing; containment and surveillance. 11 refs., 19 figs., 5 tabs
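
The material balance closing element described above can be illustrated with a toy computation. This is a hedged sketch, not taken from the report: the function name and all figures are invented for illustration, and MUF (material unaccounted for) is computed as book ending inventory minus physical ending inventory.

```python
# Illustrative material-balance closing for one balance period.
# MUF (material unaccounted for) = book ending inventory - physical ending
# inventory, where book ending inventory = beginning + receipts - removals.
# All figures are hypothetical (e.g., kg of nuclear material).

def material_unaccounted_for(beginning, receipts, removals, ending_physical):
    """Return MUF for one material balance period."""
    book_ending = beginning + receipts - removals
    return book_ending - ending_physical

muf = material_unaccounted_for(
    beginning=120.0,        # beginning physical inventory
    receipts=40.0,          # additions to inventory during the period
    removals=35.0,          # shipments and measured discards
    ending_physical=124.6,  # ending physical inventory from inventory taking
)
print(round(muf, 1))  # 0.4  -> (120 + 40 - 35) - 124.6
```

A nonzero MUF does not by itself indicate a loss; in practice it is compared against the combined measurement uncertainty of the inventory and flow measurements.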

  1. Assimilation of tourism satellite accounts and applied general equilibrium models to inform tourism policy analysis

    OpenAIRE

    Rossouw, Riaan; Saayman, Melville

    2011-01-01

    Historically, tourism policy analysis in South Africa has posed challenges to accurate measurement. The primary reason for this is that tourism is not designated as an 'industry' in standard economic accounts. This paper therefore demonstrates the relevance and need for applied general equilibrium (AGE) models to be completed and extended through an integration with tourism satellite accounts (TSAs) as a tool for policy makers (especially tourism policy makers) in South Africa. The paper sets...

  2. Modeling Fe II Emission and Revised Fe II (UV) Empirical Templates for the Seyfert 1 Galaxy I Zw 1

    Science.gov (United States)

    Bruhweiler, F.; Verner, E.

    2008-03-01

    We use the narrow-lined broad-line region (BLR) of the Seyfert 1 galaxy I Zw 1 as a laboratory for modeling the ultraviolet (UV) Fe II 2100-3050 Å emission complex. We calculate a grid of Fe II emission spectra representative of BLR clouds and compare them with the observed I Zw 1 spectrum. Our predicted spectrum for log[n_H/cm⁻³] = 11.0, log[Φ_H/(cm⁻² s⁻¹)] = 20.5, and ξ = 20 km s⁻¹, using Cloudy and an 830-level model atom for Fe II with energies up to 14.06 eV, gives a better fit to the UV Fe II emission than models with fewer levels. Our analysis indicates: (1) the observed UV Fe II emission must be corrected for an underlying Fe II pseudocontinuum; (2) Fe II emission peaks can be misidentified as those of other ions in active galactic nuclei (AGNs) with narrow-lined BLRs, possibly affecting deduced physical parameters; (3) the shape of the 4200-4700 Å Fe II emission in I Zw 1 and other AGNs is a relative indicator of narrow-line region (NLR) and BLR Fe II emission; (4) predicted ratios of Lyα, C III], and Fe II emission relative to Mg II λ2800 agree with extinction-corrected observed I Zw 1 fluxes, except for C IV λ1549; (5) the sensitivity of Fe II emission strength to microturbulence ξ casts doubt on existing relative Fe/Mg abundances derived from Fe II (UV)/Mg II flux ratios. Our calculated Fe II emission spectra, suitable for BLRs in AGNs, are available at http://iacs.cua.edu/people/verner/FeII. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 05-26555.

  3. Shadow Segmentation and Augmentation Using α-overlay Models that Account for Penumbra

    DEFF Research Database (Denmark)

    Nielsen, Michael; Madsen, Claus B.

    2006-01-01

    that an augmented virtual object can cast an exact shadow. The penumbras (half-shadows) must be taken into account so that we can model the soft shadows. We hope to achieve this by modelling the shadow regions (umbra and penumbra alike) with a transparent overlay. This paper reviews the state-of-the-art shadow...

  4. School Board Improvement Plans in Relation to the AIP Model of Educational Accountability: A Content Analysis

    Science.gov (United States)

    van Barneveld, Christina; Stienstra, Wendy; Stewart, Sandra

    2006-01-01

    For this study we analyzed the content of school board improvement plans in relation to the Achievement-Indicators-Policy (AIP) model of educational accountability (Nagy, Demeris, & van Barneveld, 2000). We identified areas of congruence and incongruence between the plans and the model. Results suggested that the content of the improvement…

  5. Lessons learned for spatial modelling of ecosystem services in support of ecosystem accounting

    NARCIS (Netherlands)

    Schroter, M.; Remme, R.P.; Sumarga, E.; Barton, D.N.; Hein, L.G.

    2015-01-01

    Assessment of ecosystem services through spatial modelling plays a key role in ecosystem accounting. Spatial models for ecosystem services try to capture spatial heterogeneity with high accuracy. This endeavour, however, faces several practical constraints. In this article we analyse the trade-offs

  6. A Carbon Monitoring System Approach to US Coastal Wetland Carbon Fluxes: Progress Towards a Tier II Accounting Method with Uncertainty Quantification

    Science.gov (United States)

    Windham-Myers, L.; Holmquist, J. R.; Bergamaschi, B. A.; Byrd, K. B.; Callaway, J.; Crooks, S.; Drexler, J. Z.; Feagin, R. A.; Ferner, M. C.; Gonneea, M. E.; Kroeger, K. D.; Megonigal, P.; Morris, J. T.; Schile, L. M.; Simard, M.; Sutton-Grier, A.; Takekawa, J.; Troxler, T.; Weller, D.; Woo, I.

    2015-12-01

    Despite their high rates of long-term carbon (C) sequestration when compared to upland ecosystems, coastal C accounting is only recently receiving the attention of policy makers and carbon markets. Assessing accuracy and uncertainty in net C flux estimates requires both direct and derived measurements based on both short and long term dynamics in key drivers, particularly soil accretion rates and soil organic content. We are testing the ability of remote sensing products and national scale datasets to estimate biomass and soil stocks and fluxes over a wide range of spatial and temporal scales. For example, the 2013 Wetlands Supplement to the 2006 IPCC GHG national inventory reporting guidelines requests information on development of Tier I-III reporting, which express increasing levels of detail. We report progress toward development of a Carbon Monitoring System for "blue carbon" that may be useful for IPCC reporting guidelines at Tier II levels. Our project uses a current dataset of publically available and contributed field-based measurements to validate models of changing soil C stocks, across a broad range of U.S. tidal wetland types and landuse conversions. Additionally, development of biomass algorithms for both radar and spectral datasets will be tested and used to determine the "price of precision" of different satellite products. We discuss progress in calculating Tier II estimates focusing on variation introduced by the different input datasets. These include the USFWS National Wetlands Inventory, NOAA Coastal Change Analysis Program, and combinations to calculate tidal wetland area. We also assess the use of different attributes and depths from the USDA-SSURGO database to map soil C density. Finally, we examine the relative benefit of radar, spectral and hybrid approaches to biomass mapping in tidal marshes and mangroves. 
While the US currently plans to report GHG emissions at a Tier I level, we argue that a Tier II analysis is possible due to national

  7. Quantitative assessment of evidential weight for a fingerprint comparison. Part II: a generalisation to take account of the general pattern.

    Science.gov (United States)

    Neumann, Cedric; Evett, Ian W; Skerrett, James E; Mateos-Garcia, Ismael

    2012-01-10

    The authors have proposed a quantitative method for assessing weight of evidence in the case where a fingermark from a crime scene is compared with a set of control prints from the ten fingers of a suspect. The approach is based on the notion of calculating a Likelihood Ratio (LR) that addresses a pair of propositions relating to the individual who left the crime mark. The current method considers only information extracted from minutiae, such as location, direction and type. It does not consider other information usually taken into account by fingerprint examiners, such as the general pattern of the ridge flow on the mark and the control prints. In this paper, we propose an improvement to our model that allows a fingerprint examiner to take advantage of pattern information when assessing the evidential weight to be assigned to a fingerprint comparison. We present an extension of the formal analysis proposed earlier and we illustrate our approach with an example.
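
The likelihood-ratio framework the abstract describes can be sketched numerically. This is a minimal illustration, not the authors' statistical model: all probabilities below are invented, and the combination of the minutiae-based LR with a general-pattern factor assumes the two pieces of evidence are conditionally independent.

```python
# Minimal numeric illustration of a likelihood ratio (LR) for a fingermark
# comparison. An LR compares the probability of the observed evidence under
# two propositions; values here are made up for illustration only.

p_minutiae_same = 0.02   # P(observed minutiae | suspect left the mark)
p_minutiae_diff = 1e-6   # P(observed minutiae | someone else left the mark)
lr_minutiae = p_minutiae_same / p_minutiae_diff

# Extending the LR with general-pattern (ridge-flow) information:
p_pattern_same = 0.9     # suspect's control print shows the same general pattern
p_pattern_diff = 0.3     # frequency of that pattern among other potential donors
lr_pattern = p_pattern_same / p_pattern_diff

# Combined LR, assuming conditional independence of the two components
combined_lr = lr_minutiae * lr_pattern
print(f"{combined_lr:.0f}")  # 60000
```

The combined LR multiplies the prior odds to give posterior odds; a value of 60,000 means the evidence is sixty thousand times more probable if the suspect left the mark than if someone else did, under these invented inputs.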

  8. Model Selection and Accounting for Model Uncertainty in Graphical Models Using OCCAM’s Window

    Science.gov (United States)

    1991-07-22

    A, smoking; B, strenuous mental work; C, strenuous physical work; D, systolic blood pressure; E, ratio of beta and alpha lipoproteins; F, family anamnesis of coronary heart disease. The models are shown in Figure 4. [Table 1, "Risk factors for Coronary Heart Disease", is not recoverable from this extract.] The selected models include a link from smoking (A) to systolic blood pressure (D), and there is decisive evidence in favour of the marginal independence of family anamnesis of coronary heart disease.

  9. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    Energy Technology Data Exchange (ETDEWEB)

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model of Type Ia supernova analysis, which determines distance moduli based on the known absolute standard-candle magnitude of Type Ia supernovae. We examine the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters as a tool for analyzing trends in the model under certain assumptions about the intrinsic error. To find the best standard-candle model, we minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1, and we can use the simulation to estimate the amount of color smearing indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that the RMS distribution in the variables provides one method of estimating the correct intrinsic errors needed to obtain the correct results for α and β. We then apply the resultant intrinsic error matrix to the real data and show our results.
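
The role of the α and β coefficients can be illustrated with the standardization relation underlying SALT-II light-curve fits, μ = m_B − M + α·x1 − β·c (the Tripp relation). The sketch below recovers α and β from synthetic, noise-free data by linear least squares; it is an illustration of the idea only, not the SNANA fit, and the true parameter values are invented.

```python
# Sketch of SALT-II-style standardization: mu = mB - M + alpha*x1 - beta*c.
# With noise-free synthetic data, the Hubble residual mB - mu is exactly
# linear in (M, alpha, beta), so least squares recovers the coefficients.
import numpy as np

rng = np.random.default_rng(0)
n = 200
alpha_true, beta_true, M = 0.14, 3.1, -19.3   # invented "true" values
x1 = rng.normal(0.0, 1.0, n)      # shape (stretch) parameter per supernova
c = rng.normal(0.0, 0.1, n)       # color parameter per supernova
mu = rng.uniform(34.0, 38.0, n)   # true distance moduli
mB = mu + M - alpha_true * x1 + beta_true * c  # observed peak magnitudes

# mB - mu = M - alpha*x1 + beta*c, linear in the unknown coefficients
A = np.column_stack([np.ones(n), -x1, c])
M_fit, alpha_fit, beta_fit = np.linalg.lstsq(A, mB - mu, rcond=None)[0]
print(round(alpha_fit, 3), round(beta_fit, 3))  # 0.14 3.1
```

The real analysis differs in that μ is not known independently and an intrinsic-scatter term must be included, which is exactly the complication the abstract studies.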

  10. Accounting for differences in dieting status: steps in the refinement of a model.

    Science.gov (United States)

    Huon, G; Hayne, A; Gunewardene, A; Strong, K; Lunn, N; Piira, T; Lim, J

    1999-12-01

    The overriding objective of this paper is to outline the steps involved in refining a structural model to explain differences in dieting status. Cross-sectional data (representing the responses of 1,644 teenage girls) derive from the preliminary testing in a 3-year longitudinal study. A battery of measures assessed social influence, vulnerability (to conformity) disposition, protective (social coping) skills, and aspects of positive familial context as core components in a model proposed to account for the initiation of dieting. Path analyses were used to establish the predictive ability of those separate components and their interrelationships in accounting for differences in dieting status. Several components of the model were found to be important predictors of dieting status. The model incorporates significant direct, indirect (or mediated), and moderating relationships. Taking all variables into account, the strongest prediction of dieting status was from peer competitiveness, using a new scale developed specifically for this study. Systematic analyses are crucial for the refinement of models to be used in large-scale multivariate studies. In the short term, the model investigated in this study has been shown to be useful in accounting for cross-sectional differences in dieting status. The refined model will be most powerfully employed in large-scale time-extended studies of the initiation of dieting to lose weight.

  11. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance ... product types. This enables comparison of participating life insurance products and unit-linked insurance products, thus building a bridge between the two different ways of formalizing life insurance products. Finally, our model distinguishes itself from the existing literature by taking into account...

  12. Microscopic model accounting of 2p2h configurations in magic nuclei

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.

    1983-01-01

    A model for taking 2p2h configurations into account in magic nuclei is described in the framework of the Green function formalism. The model is formulated in the lowest order in the phonon production amplitude, so that the series are expansions not over pure 2p2h configurations but over configurations of the type "1p1h + phonon". Equations are obtained for the vertex and the density matrix, as well as an expression for the transition probabilities, which extend the corresponding results of the theory of finite Fermi systems and of the random-phase approximation to the case where "1p1h + phonon" configurations are taken into account. Corrections to the one-particle phenomenological basis which arise when complicated configurations are taken into account are obtained. Comparison with other approaches using phonons has shown that they are particular cases of the described model.

  13. Accounting for imperfect forward modeling in geophysical inverse problems — Exemplified for crosshole tomography

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Holm Jacobsen, Bo

    2014-01-01

    forward models, can be more than an order of magnitude larger than the measurement uncertainty. We also found that the modeling error is strongly linked to the spatial variability of the assumed velocity field, i.e., the a priori velocity model. We discovered some general tools by which the modeling error ... synthetic ground-penetrating radar crosshole tomographic inverse problems. Ignoring the modeling error can lead to severe artifacts, which erroneously appear to be well resolved in the solution of the inverse problem. Accounting for the modeling error leads to a solution of the inverse problem consistent...

  14. Modelling characteristics of photovoltaic panels with thermal phenomena taken into account

    International Nuclear Information System (INIS)

    Krac, Ewa; Górecki, Krzysztof

    2016-01-01

    In this paper, a new form of the electrothermal model of photovoltaic panels is proposed. This model takes into account the optical, electrical, and thermal properties of the considered panels, the electrical and thermal properties of the protecting circuit, and the thermal inertia of the panels. The form of the model is described, and some results of measurements and calculations for mono-crystalline and poly-crystalline panels are presented.

  15. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.

  16. Spike Neural Models Part II: Abstract Neural Models

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2018-02-01

    Full Text Available Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNNs), though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use fewer computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate-and-Fire (LIF) model, which is not biologically realistic but does quickly and easily integrate input to produce spikes. Izhikevich's model is based on the Hodgkin-Huxley model but simplified such that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in an SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating an SNN.
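
The one-equation LIF model described above can be sketched in a few lines. This is a generic forward-Euler implementation of the standard membrane equation τ·dV/dt = −(V − V_rest) + R·I with threshold-and-reset; the parameter values are illustrative defaults, not taken from the tutorial.

```python
# Minimal leaky integrate-and-fire (LIF) neuron with spike-and-reset.
# Membrane equation: tau * dV/dt = -(V - V_rest) + R * I.
# Parameter values (mV, ms) are illustrative.
import numpy as np

def lif(I, dt=0.1, tau=10.0, R=1.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Simulate membrane voltage for an input-current array I; return spike times (ms)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(I):
        # Forward-Euler step of the membrane equation
        v += dt / tau * (-(v - v_rest) + R * i_in)
        if v >= v_thresh:          # threshold crossing: record spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant suprathreshold current for 100 ms yields a regular spike train;
# zero input never crosses threshold and yields no spikes.
spike_times = lif(np.full(1000, 20.0))
print(len(spike_times) > 0)  # True
```

With constant input the model fires periodically, which is exactly the "quickly and easily integrate input to produce spikes" behaviour the tutorial attributes to LIF; it cannot, however, reproduce the richer spike patterns of Izhikevich's two-equation model.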

  17. The Anachronism of the Local Public Accountancy Determinate by the Accrual European Model

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2009-01-01

    Full Text Available Placing the European accrual model against the cash accountancy model presently used in Romania at the level of the local communities makes the anachronism of the cash model apparent, focusing the discussion on the model's inclusion in everyday public practice. The bases of the accrual model were first defined in the law regarding commercial societies adopted in Great Britain in 1985, which determined that all income and taxes referring to the financial year "will be taken into consideration without any boundary to the reception or payment date." The accrual model in accountancy requires recording the non-cash effects of transactions or financial events in the periods in which they occur, rather than in the periods in which any cash is received or paid. The development of business was the basis for the "sophistication" of recording transactions and financial events, being a prerequisite for recording debtors' or creditors' sums.

  18. Multiple imputation to account for measurement error in marginal structural models

    Science.gov (United States)

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  19. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR]: 1.2 [95% confidence interval [CI] = 0.6, 2.3]). The HR for current smoking and therapy [0.4 (95% CI = 0.2, 0.7)] was similar to the HR for no smoking and therapy (0.4; 95% CI = 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
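
The multiple-imputation step described in the two records above can be sketched in isolation. This is a hedged illustration, not the authors' full marginal structural model: all data are simulated, the sensitivity and specificity of self-report are invented, and a simple prevalence estimate stands in for the weighted regression coefficient. Misclassification probabilities are estimated from an internal validation subgroup, the true exposure is imputed m times, and the m estimates are pooled with Rubin's rules.

```python
# Sketch of multiple imputation for a misclassified binary exposure.
# A validation subgroup (with gold-standard measurement) supplies
# P(true smoker | reported status); the rest are imputed m times.
import numpy as np

rng = np.random.default_rng(1)
n, m = 5000, 20
true_smoke = rng.binomial(1, 0.4, n)
# Self-report misclassifies: sensitivity 0.85, specificity 0.95 (assumed)
reported = np.where(true_smoke == 1,
                    rng.binomial(1, 0.85, n),
                    rng.binomial(1, 0.05, n))
validated = rng.random(n) < 0.3  # ~30% also have a gold-standard measurement

# P(true smoker | reported status), estimated in the validation subgroup
p1 = true_smoke[validated & (reported == 1)].mean()
p0 = true_smoke[validated & (reported == 0)].mean()

estimates = []
for _ in range(m):
    # Keep validated values; draw the rest from the estimated probabilities
    draw = rng.random(n) < np.where(reported == 1, p1, p0)
    imputed = np.where(validated, true_smoke, draw)
    estimates.append(imputed.mean())  # stand-in for a regression estimate

pooled = float(np.mean(estimates))               # Rubin's rules: pooled estimate
between_var = float(np.var(estimates, ddof=1))   # between-imputation variance
print(abs(pooled - true_smoke.mean()) < 0.05)    # True
```

In the papers' analysis each imputed dataset would instead be run through the inverse-probability-weighted hazard model, and Rubin's rules would also combine the within- and between-imputation variances to give the reported confidence intervals.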

  20. Theoretical models for Type I and Type II supernova

    International Nuclear Information System (INIS)

    Woosley, S.E.; Weaver, T.A.

    1985-01-01

    Recent theoretical progress in understanding the origin and nature of Type I and Type II supernovae is discussed. New Type II presupernova models characterized by a variety of iron core masses at the time of collapse are presented, and the sensitivity to the reaction rate ¹²C(α,γ)¹⁶O explained. Stars heavier than about 20 M☉ must explode by a "delayed" mechanism not directly related to the hydrodynamical core bounce, and a subset is likely to leave black hole remnants. The isotopic nucleosynthesis expected from these massive stellar explosions is in striking agreement with the sun. Type I supernovae result when an accreting white dwarf undergoes a thermonuclear explosion. The critical role of the velocity of the deflagration front in determining the light curve, spectrum, and, especially, isotopic nucleosynthesis in these models is explored. 76 refs., 8 figs

  1. Anisotropic Bianchi II cosmological models with matter and electromagnetic fields

    International Nuclear Information System (INIS)

    Soares, D.

    1978-01-01

    A class of solutions of the Einstein-Maxwell equations is presented, corresponding to anisotropic Bianchi II spatially homogeneous cosmological models with a perfect fluid and an electromagnetic field. A particular model is examined and shown to be unstable for perturbations of the electromagnetic field strength parameter about a particular value. This value defines a threshold unstable case in which the ratio ε of the fluid density to the electromagnetic energy density is monotonically increasing, with a minimum finite value at the singularity. Beyond this threshold, the model has a matter-dominated singularity, and a characteristic stage appears where ε has a minimum at a finite time from the singularity. For large times, the models tend to an exact solution for zero electromagnetic field and a fluid with p = (1/5)ρ. Some cosmological features of the models are calculated, such as the effect of anisotropy on the matter density and expansion time scale factors, compared to the corresponding Friedmann model [pt]

  2. Computer modelling of structures with account of the construction stages and the time dependent material properties

    Directory of Open Access Journals (Sweden)

    Traykov Alexander

    2015-01-01

    Full Text Available Numerical studies are performed on computer models taking into account the stages of construction and time-dependent material properties, defined in two forms. A 2D model of a three-storey, two-span frame is created. In the first form, the material is defined in the usual design-practice way, without taking into account the time-dependent properties of the concrete. In the second form, creep and shrinkage of the concrete are taken into account. Displacements and internal forces in specific elements and sections are reported. The influence of the time-dependent material properties on the displacements and the internal forces in the main structural elements is tracked. The results corresponding to the two forms of material definition are compared with each other, as well as with the results obtained by the usual design calculations. Conclusions are drawn on the influence of concrete creep and shrinkage during construction on structural behaviour.

  3. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    Directory of Open Access Journals (Sweden)

    Ninna Reitzel Jensen

    2015-06-01

    Full Text Available Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our model by conducting scenario analysis based on Monte Carlo simulation, but the model applies to scenarios in general and to worst-case and best-estimate scenarios in particular. In addition to easy computations, our model offers a common framework for the valuation of life insurance payments across product types. This enables comparison of participating life insurance products and unit-linked insurance products, thus building a bridge between the two different ways of formalizing life insurance products. Finally, our model distinguishes itself from the existing literature by taking into account the Markov model for the state of the policyholder and, hereby, facilitating event risk.

  4. Supportive Accountability: A Model for Providing Human Support to Enhance Adherence to eHealth Interventions

    Science.gov (United States)

    2011-01-01

    The effectiveness of and adherence to eHealth interventions are enhanced by human support. However, human support has largely not been manualized and has usually not been guided by clear models. The objective of this paper is to develop a clear theoretical model, based on relevant empirical literature, that can guide research into human support components of eHealth interventions. A review of the literature revealed little relevant information from clinical sciences. Applicable literature was drawn primarily from organizational psychology, motivation theory, and computer-mediated communication (CMC) research. We have developed a model, referred to as “Supportive Accountability.” We argue that human support increases adherence through accountability to a coach who is seen as trustworthy, benevolent, and having expertise. Accountability should involve clear, process-oriented expectations that the patient is involved in determining. Reciprocity in the relationship, through which the patient derives clear benefits, should be explicit. The effect of accountability may be moderated by patient motivation. The more intrinsically motivated patients are, the less support they likely require. The process of support is also mediated by the communications medium (eg, telephone, instant messaging, email). Different communications media each have their own potential benefits and disadvantages. We discuss the specific components of accountability, motivation, and CMC medium in detail. The proposed model is a first step toward understanding how human support enhances adherence to eHealth interventions. Each component of the proposed model is a testable hypothesis. As we develop viable human support models, these should be manualized to facilitate dissemination. PMID:21393123

  5. An Analysis Plan for the ARCOMS II (Armor Combat Operations Model Support II) Experiment.

    Science.gov (United States)

    1983-06-01

    In order to facilitate Armor Combat Modeling, the data analysis should focus upon the methods which transform the data into descriptive or predictive ...discussed in Chapter III to predict the parameter for probability of detection in time Δt. This should be compared with the results of the Night Vision...

  6. Accounting for Co-Teaching: A Guide for Policymakers and Developers of Value-Added Models

    Science.gov (United States)

    Isenberg, Eric; Walsh, Elias

    2015-01-01

    We outline the options available to policymakers for addressing co-teaching in a value-added model. Building on earlier work, we propose an improvement to a method of accounting for co-teaching that treats co-teachers as teams, with each teacher receiving equal credit for co-taught students. Hock and Isenberg (2012) described a method known as the…

  7. Accounting for the influence of vegetation and landscape improves model transferability in a tropical savannah region

    NARCIS (Netherlands)

    Gao, H.; Hrachowitz, M.; Sriwongsitanon, Nutchanart; Fenicia, F.; Gharari, S.; Savenije, H.H.G.

    2016-01-01

    Understanding which catchment characteristics dominate hydrologic response and how to take them into account remains a challenge in hydrological modeling, particularly in ungauged basins. This is even more so in nontemperate and nonhumid catchments, where—due to the combination of seasonality and

  8. The Politics and Statistics of Value-Added Modeling for Accountability of Teacher Preparation Programs

    Science.gov (United States)

    Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas

    2014-01-01

    Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…

  9. Developing a Model for Identifying Students at Risk of Failure in a First Year Accounting Unit

    Science.gov (United States)

    Smith, Malcolm; Therry, Len; Whale, Jacqui

    2012-01-01

    This paper reports on the process involved in attempting to build a predictive model capable of identifying students at risk of failure in a first year accounting unit in an Australian university. Identifying attributes that contribute to students being at risk can lead to the development of appropriate intervention strategies and support…

  10. A creep rupture model accounting for cavitation at sliding grain boundaries

    NARCIS (Netherlands)

    Giessen, Erik van der; Tvergaard, Viggo

    1991-01-01

    An axisymmetric cell model analysis is used to study creep failure by grain boundary cavitation at facets normal to the maximum principal tensile stress, taking into account the influence of cavitation and sliding at adjacent inclined grain boundaries. It is found that the interaction between the

  11. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance, and the protection of investors.

  12. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    Science.gov (United States)

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…

  13. Advanced training course on state systems of accounting for and control of nuclear materials. Volume II. Visual aids

    International Nuclear Information System (INIS)

    Sorenson, R.J.; Schneider, R.A.

    1979-01-01

    The purpose of the course was to provide training in the accounting and control of nuclear materials in a bulk processing facility for international safeguards. The Exxon low-enriched uranium fabrication plant is used as an example. This volume contains the visual aids used for the presentation.

  14. Does Structural Inequality Begin with a Bank Account? Creating a Financial Stake in College: Report II of IV

    Science.gov (United States)

    Elliott, William, III

    2012-01-01

    "Creating a Financial Stake in College" is a four-part series of reports that focuses on the relationship between children's savings and improving college success. This series examines: (1) why policymakers should care about savings, (2) the relationship between inequality and bank account ownership, (3) the connections between savings and college…

  15. Taking into account for the Pauli principle in particle-vibrator model

    International Nuclear Information System (INIS)

    Knyaz'kov, O.M.

    1985-01-01

    To construct the Hamiltonian of the interaction between the particle and the phonons, a semimicroscopic approach developed earlier by the author is used, in which the Pauli principle is taken into account in the local density-matrix formalism. Analytical expressions are derived that permit the problem of taking the Pauli principle into account in the particle-vibrator model to be solved in closed form. Unlike the phenomenological approach, the form factors of inelastic transitions are determined by the parameters of the effective nucleon-nucleon forces and by the central and transition densities, and contain no free parameters

  16. Accounting of inter-electron correlations in the model of mobile electron shells

    International Nuclear Information System (INIS)

    Panov, Yu.D.; Moskvin, A.S.

    2000-01-01

    The basic features of the model of mobile electron shells for a multielectron atom or cluster are studied. A variational technique is proposed for taking electron correlations into account, in which the coordinates of the centres of the single-particle atomic orbitals serve as variational parameters. This makes it possible to interpret, within a limited initial basis, the dramatic change of the electron-density distribution under an anisotropic external perturbation. Specific correlated states that can contribute a correlation part to the orbital current are studied. The paper presents a generalization of the standard MO-LCAO scheme with a limited set of single-particle functions, enabling additional multipole-multipole interactions in the cluster to be taken into account [ru]

  17. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    Science.gov (United States)

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
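The resampling idea can be sketched independently of the RPSFT adjustment itself: bootstrap a summary statistic (here the mean of some hypothetical survival times) and carry the resulting interval, rather than a single point estimate, into the decision model. The data and settings below are invented for illustration.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap sketch: propagate sampling uncertainty in a
    summary statistic (e.g. a mean survival time) into downstream modeling."""
    rng = random.Random(seed)
    n = len(data)
    # resample with replacement, recompute the statistic each time
    stats = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                   for _ in range(n_boot))
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# hypothetical survival times (years)
times = [1.2, 0.8, 2.5, 3.1, 0.4, 1.9, 2.2, 1.1, 0.7, 2.8]
lo, hi = bootstrap_ci(times)
```

In a decision-analytic setting, each bootstrap replicate would rerun the whole model, so that the interval reflects the full chain of uncertainty rather than the point estimate alone.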

  18. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
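The replicate-based correction can be sketched for the simplest case: one error-prone covariate measured twice, where the attenuation of a naive slope estimate is undone by an estimated reliability ratio. This is a generic regression-calibration sketch with invented data, assuming the naive model used the average of the two replicates; it is not the authors' exact likelihood-based procedure.

```python
import random
import statistics

def regression_calibration(w1, w2, beta_naive):
    """Regression-calibration sketch for one error-prone covariate with two
    replicates. The naive slope is attenuated by the reliability ratio
    lambda = var(X) / (var(X) + var(error)); dividing by lambda corrects it."""
    wbar = [(a + b) / 2 for a, b in zip(w1, w2)]
    # within-person error variance from replicate differences: var(U) = var(W1-W2)/2
    var_u = statistics.variance([a - b for a, b in zip(w1, w2)]) / 2
    var_wbar = statistics.variance(wbar)
    var_x = var_wbar - var_u / 2          # the averaged covariate carries var_u/2 noise
    lam = var_x / (var_x + var_u / 2)     # reliability of the averaged covariate
    return beta_naive / lam

# simulate two replicate measurements around a true covariate
rng = random.Random(3)
x = [rng.gauss(0, 1) for _ in range(500)]
w1 = [xi + rng.gauss(0, 0.5) for xi in x]
w2 = [xi + rng.gauss(0, 0.5) for xi in x]
beta_corrected = regression_calibration(w1, w2, beta_naive=0.8)
```

As in the abstract, using the replicates both reduces the noise variance of the covariate and corrects the bias of the naive estimate.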

  19. Modeling phoneme perception. II: A model of stop consonant discrimination.

    Science.gov (United States)

    van Hessen, A J; Schouten, M E

    1992-10-01

    Combining elements from two existing theories of speech sound discrimination, dual process theory (DPT) and trace context theory (TCT), a new theory, called phoneme perception theory, is proposed, consisting of a long-term phoneme memory, a context-coding memory, and a trace memory, each with its own time constants. This theory is tested by means of stop-consonant discrimination data in which interstimulus interval (ISI; values of 100, 300, and 2000 ms) is an important variable. It is shown that discrimination in which labeling plays an important part (2IFC and AX between category) benefits from increased ISI, whereas discrimination in which only sensory traces are compared (AX within category) decreases with increasing ISI. The theory is also tested on speech discrimination data from the literature in which ISI is a variable [Pisoni, J. Acoust. Soc. Am. 36, 277-282 (1964); Cowan and Morse, J. Acoust. Soc. Am. 79, 500-507 (1986)]. It is concluded that the number of parameters in trace context theory is not sufficient to account for most speech-sound discrimination data and that a few additional assumptions are needed, such as a form of sublabeling, in which subjects encode the quality of a stimulus as a member of a category, and which requires processing time.

  20. Predicting NonInertial Effects with Algebraic Stress Models which Account for Dissipation Rate Anisotropies

    Science.gov (United States)

    Jongen, T.; Machiels, L.; Gatski, T. B.

    1997-01-01

    Three types of turbulence models which account for rotational effects in noninertial frames of reference are evaluated for the case of incompressible, fully developed rotating turbulent channel flow. The different types of models are a Coriolis-modified eddy-viscosity model, a realizable algebraic stress model, and an algebraic stress model which accounts for dissipation rate anisotropies. A direct numerical simulation of a rotating channel flow is used for the turbulent model validation. This simulation differs from previous studies in that significantly higher rotation numbers are investigated. Flows at these higher rotation numbers are characterized by a relaminarization on the cyclonic or suction side of the channel, and a linear velocity profile on the anticyclonic or pressure side of the channel. The predictive performance of the three types of models are examined in detail, and formulation deficiencies are identified which cause poor predictive performance for some of the models. Criteria are identified which allow for accurate prediction of such flows by algebraic stress models and their corresponding Reynolds stress formulations.

  1. Accounting for Local Dependence with the Rasch Model: The Paradox of Information Increase.

    Science.gov (United States)

    Andrich, David

    Test theories imply statistical local independence. Where local independence is violated, models of modern test theory that account for it have been proposed. One violation of local independence occurs when the response to one item governs the response to a subsequent item. Expanding on a formulation of this kind of violation between two items in the dichotomous Rasch model, this paper derives three related implications. First, it formalises how the polytomous Rasch model for an item constituted by summing the scores of the dependent items absorbs the dependence in its threshold structure. Second, it shows that, as a consequence, the unit when the dependence is accounted for is not the same as if the items had no response dependence. Third, it explains the paradox, known but not explained in the literature, that the greater the dependence of the constituent items, the greater the apparent information in the constituted polytomous item, when it should provide less information.
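The two-item dependence described above can be made concrete with a small numerical sketch: a correct response to the first item shifts the second item's difficulty by a dependence parameter d, and the summed score then behaves as a polytomous item whose category probabilities absorb that dependence. The parameter values below are arbitrary.

```python
import math

def p_correct(theta, b):
    """Dichotomous Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def score_distribution(theta, b1, b2, d):
    """Distribution of the summed score of two Rasch items where a correct
    response to item 1 shifts item 2's difficulty by -d (response dependence).
    Illustrative sketch of how the polytomous item absorbs the dependence."""
    p1 = p_correct(theta, b1)
    p2_given_wrong = p_correct(theta, b2)      # item 2, item 1 incorrect
    p2_given_right = p_correct(theta, b2 - d)  # item 2 made easier by d
    p0 = (1 - p1) * (1 - p2_given_wrong)
    p1_total = p1 * (1 - p2_given_right) + (1 - p1) * p2_given_wrong
    p2_total = p1 * p2_given_right
    return p0, p1_total, p2_total

probs = score_distribution(theta=0.0, b1=0.0, b2=0.0, d=1.0)
```

Comparing `probs` with the d = 0 case shows the extreme score categories inflated by the dependence, which is the source of the apparent (but spurious) information increase.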

  2. Cost accounting models used for price-setting of health services: an international review.

    Science.gov (United States)

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Accounting for sex differences in PTSD: A multi-variable mediation model

    DEFF Research Database (Denmark)

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

    methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually....... Objective: The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. Method: The study was a quasi-prospective questionnaire survey assessing PTSD...... cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. Conclusion: The findings suggest...
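The headline figure, 83% of the sex difference accounted for, corresponds to one common way of expressing mediation: the proportion of the total effect explained by the mediators. The sketch below uses invented effect sizes chosen only to reproduce that percentage; it is not the authors' estimation procedure.

```python
def proportion_mediated(total_effect, direct_effect):
    """Proportion of a total effect explained by mediators: (c - c') / c,
    where c is the effect of sex on PTSD severity without the mediators and
    c' is the remaining direct effect once the mediators are included."""
    return (total_effect - direct_effect) / total_effect

# hypothetical standardised effects chosen to mirror the reported 83%
share = proportion_mediated(total_effect=0.30, direct_effect=0.051)
```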

  4. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide.

    Science.gov (United States)

    Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark

    2011-01-01

    Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that most influence results are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
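Parameter uncertainty is commonly handled by probabilistic sensitivity analysis: draw each input from its distribution, rerun the model, and summarise the resulting output distribution. The toy model, the distributions, and the willingness-to-pay threshold below are all assumed for illustration only.

```python
import random
import statistics

def run_model(p_response, cost_tx, qaly_gain):
    """Toy decision model: incremental cost and QALYs (illustrative)."""
    inc_cost = cost_tx
    inc_qaly = p_response * qaly_gain
    return inc_cost, inc_qaly

def probabilistic_sensitivity(n_sims=5000, seed=7):
    """Parameter-uncertainty sketch: sample each input, rerun the model,
    and summarise the distribution of net monetary benefit."""
    rng = random.Random(seed)
    wtp = 30000.0   # willingness-to-pay per QALY (assumed)
    nmbs = []
    for _ in range(n_sims):
        p = min(max(rng.gauss(0.4, 0.05), 0.0), 1.0)  # response probability
        c = rng.gauss(10000.0, 1000.0)                # treatment cost
        q = rng.gauss(1.2, 0.2)                       # QALY gain if response
        cost, qaly = run_model(p, c, q)
        nmbs.append(wtp * qaly - cost)                # net monetary benefit
    return statistics.mean(nmbs), statistics.pstdev(nmbs)

mean_nmb, sd_nmb = probabilistic_sensitivity()
```

Structural and methodological uncertainty, as the guide notes, cannot be captured this way and instead require rerunning the analysis under alternative model specifications.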

  5. Modeling of Accounting Doctoral Thesis with Emphasis on Solution for Financial Problems

    Directory of Open Access Journals (Sweden)

    F. Mansoori

    2015-02-01

    Full Text Available With the growth of graduate programs, graduate student numbers, and research budgets, accounting knowledge in Iran has entered the field of research, and a number of accounting projects have been implemented in the real world, yielding varied experience in applying accounting standards. It was therefore expected that this experience would help to solve the country's financial problems; in spite of considerable research effort, however, many financial and accounting problems remain. PhD theses can be considered one of the important means of advancing university disciplines, including accounting; they are team efforts, approved by supervisory teams in universities. Applied theses should, in principle, solve part of the problems in the accounting field, but unfortunately this is not happening in practice. The question that arises is why the output of applied, knowledge-based projects has not cleared up these problems, and why policymakers in difficult situations prefer to rely on their own previous experience in important decision making instead of using knowledge-based consulting suggestions. In this research, the reasons that prevent applied PhD projects from succeeding in the real world are studied, in connection with the view that policy suggestions produced by knowledge-based projects are not qualified enough for implementation. For this purpose, the indicators of an applied PhD thesis were identified and categorized by 110 experts, and in a comprehensive study other applied PhD accounting theses were compared with one another. As a result, the problems of the studied research were identified and a proper, applied model for creating applied research was developed.

  6. Accounting for measurement error in log regression models with applications to accelerated testing.

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    Full Text Available In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  7. Accounting for measurement error in log regression models with applications to accelerated testing.

    Science.gov (United States)

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
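The weighted-regression approximation can be sketched generically: fit a straight line by Iteratively Re-weighted Least Squares, recomputing the weights from the current fit on each pass. The inverse-squared-fitted-value weighting and the data below are illustrative assumptions, not the authors' exact specification.

```python
def irls_line(x, y, n_iter=20):
    """Iteratively Re-weighted Least Squares sketch for a line y ≈ a + b x,
    with weights 1 / fitted^2 (an assumed variance model for illustration)."""
    a, b = 0.0, 0.0
    w = [1.0] * len(x)
    for _ in range(n_iter):
        # weighted normal equations for the two-parameter line
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * sxx - sx * sx
        a = (sxx * sy - sx * sxy) / det
        b = (sw * sxy - sx * sy) / det
        # update weights from the current fitted values (floored to avoid 1/0)
        w = [1.0 / max(a + b * xi, 0.1) ** 2 for xi in x]
    return a, b

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = irls_line(xs, ys)
```

In an accelerated-testing application, x would be the (transformed) stress variable and the fitted line would then be extrapolated to use conditions, which is exactly where small biases are amplified.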

  8. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

    Full Text Available On the basis of a microscopic self-consistent field model, the thermodynamics of a many-particle Fermi system at finite temperatures with account of three-body interactions is constructed and the quasiparticle equations of motion are obtained. It is shown that a delta-like three-body interaction contributes nothing to the self-consistent field, so that the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion effective mass and the system's equation of state with account of the contribution from three-body forces. The effective mass and pressure are numerically calculated for a potential of the "semi-transparent sphere" type at zero temperature, and expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, a repulsive interaction reduces the quasiparticle effective mass relative to the mass of a free particle, while an attractive interaction raises the effective mass. The question of the thermodynamic stability of the Fermi system is considered, and the three-body repulsive interaction is shown to extend the region of stability of a system with pair interparticle attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.

  9. Demonstration of an automated electromanometer for measurement of solution in accountability vessels in the Tokai Reprocessing Plant (part II)

    International Nuclear Information System (INIS)

    Yamonouchi, T.; Fukuari, Y.; Hayashi, M.; Komatsu, M.; Suyama, N.; Uchida, T.

    1982-01-01

    This report describes the results of an operational field test of the automated electromanometer system installed at the input accountability vessel (251V10) and the plutonium product accountability vessel (266V23) in the Tokai Reprocessing Plant. This system has been in use since September 1979, when it was installed in the PNC plant by BNL as part of Task-E, one of the thirteen tasks in the Tokai Advanced Safeguards Technology Exercise (TASTEX) program. The first report on the progress of this task was published by S. Suda, et al., in the Proceedings of the INMM 22nd Annual Meeting. In this paper, further results of measurement and data analysis are shown, and the reliability and applicability of this instrument for accountability, safeguards, and process control purposes are investigated using the data of 106 batches for 251V10 and 40 batches for 266V23 obtained during two campaigns in 1981. There were small but significant differences relative to the plant's measurements for both vessels, 251V10 and 266V23; however, the difference for 251V10 was slightly decreased in the latest vessel calibration. Initially, there were many spurious signals in the raw data, caused by a software error in the system; however, almost normal conditions were obtained after corrections to the program were made.

  10. Material control in nuclear fuel fabrication facilities. Part II. Accountability, instrumentation, and measurement techniques in fuel fabrication facilities, P.O.1236909. Final report

    International Nuclear Information System (INIS)

    Borgonovi, G.M.; McCartin, T.J.; McDaniel, T.; Miller, C.L.; Nguyen, T.

    1978-12-01

    This report describes the measurement techniques, the instrumentation, and the procedures used in accountability and control of nuclear materials as they apply to fuel fabrication facilities. Some of the material included has appeared elsewhere and is summarized here. An extensive bibliography is included. A specific example is given of the application of the accountability methods to a model fuel fabrication facility based on the Westinghouse Anderson design

  11. Material control in nuclear fuel fabrication facilities. Part II. Accountability, instrumentation, and measurement techniques in fuel fabrication facilities, P. O. 1236909. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Borgonovi, G.M.; McCartin, T.J.; McDaniel, T.; Miller, C.L.; Nguyen, T.

    1978-12-01

    This report describes the measurement techniques, the instrumentation, and the procedures used in accountability and control of nuclear materials as they apply to fuel fabrication facilities. Some of the material included has appeared elsewhere and is summarized here. An extensive bibliography is included. A specific example is given of the application of the accountability methods to a model fuel fabrication facility based on the Westinghouse Anderson design.

  12. Two-species occupancy modeling accounting for species misidentification and nondetection

    Science.gov (United States)

    Chambert, Thierry; Grant, Evan H. Campbell; Miller, David A. W.; Nichols, James; Mulder, Kevin P.; Brand, Adrianne B,

    2018-01-01

    1. In occupancy studies, species misidentification can lead to false positive detections, which can cause severe estimator biases. Currently, all models that account for false positive errors only consider omnibus sources of false detections and are limited to single species occupancy. 2. However, false detections for a given species often occur because of the misidentification with another, closely-related species. To exploit this explicit source of false positive detection error, we develop a two-species occupancy model that accounts for misidentifications between two species of interest. As with other false positive models, identifiability is greatly improved by the availability of unambiguous detections at a subset of site-occasions. Here, we consider the case where some of the field observations can be confirmed using laboratory or other independent identification methods (“confirmatory data”). 3. We performed three simulation studies to (1) assess the model’s performance under various realistic scenarios, (2) investigate the influence of the proportion of confirmatory data on estimator accuracy, and (3) compare the performance of this two-species model with that of the single-species false positive model. The model shows good performance under all scenarios, even when only small proportions of detections are confirmed (e.g., 5%). It also clearly outperforms the single-species model.
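The data-generating process the model targets can be sketched with a small simulation: true occupancy states for both species, imperfect detection, occasional misrecording of species B as species A, and a small independently confirmed subset of records. All rates below are assumed, not taken from the paper.

```python
import random

def simulate_two_species(n_sites=200, n_occ=5, psi_a=0.6, psi_b=0.4,
                         p_det=0.5, p_misid=0.1, p_confirm=0.05, seed=11):
    """Simulate detection records for species A in which some species-B
    detections are misrecorded as A, and a small share of records is
    independently confirmed (all rates are illustrative assumptions)."""
    rng = random.Random(seed)
    records = []
    for _ in range(n_sites):
        za = rng.random() < psi_a              # true occupancy of species A
        zb = rng.random() < psi_b              # true occupancy of species B
        for _ in range(n_occ):
            true_det = za and rng.random() < p_det
            false_pos = (zb and rng.random() < p_det
                         and rng.random() < p_misid)   # B misrecorded as A
            obs_a = true_det or false_pos
            confirmed = obs_a and rng.random() < p_confirm
            records.append((obs_a, confirmed))
    return records

recs = simulate_two_species()
n_det = sum(1 for obs, _ in recs if obs)
n_conf = sum(1 for _, conf in recs if conf)
```

Fitting the two-species occupancy model would then estimate psi and the detection and misidentification rates from `recs`, with the confirmed subset anchoring identifiability as described in the abstract.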

  13. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    International Nuclear Information System (INIS)

    Oboh, I.; Aluyor, E.; Audu, T.

    2015-01-01

    The biosorption of Zinc (II) ions onto a biomaterial, Luffa cylindrica, has been studied. This biomaterial was characterized by elemental analysis, surface area, pore-size distribution, and scanning electron microscopy, and the biomaterial before and after sorption was characterized by Fourier Transform Infrared (FTIR) spectrometry. The nonlinear kinetic models fitted were pseudo-first order, pseudo-second order, and intra-particle diffusion. A comparison of non-linear regression methods in selecting the kinetic model was made. Four error functions, namely the coefficient of determination (R²), the hybrid fractional error function (HYBRID), the average relative error (ARE), and the sum of the errors squared (ERRSQ), were used to estimate the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution, particularly in the tropical world, which occurs as a waste material, could be put to effective use as a biosorbent to address a crucial environmental problem
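Fitting such kinetic models amounts to minimising an error function over the model parameters. The sketch below fits the pseudo-first-order form q(t) = qe(1 − exp(−k1 t)) by a crude grid search on the sum of squared errors (ERRSQ); the uptake data are invented to lie near qe = 12 mg/g and k1 = 0.15 min⁻¹, and this is not the authors' fitting procedure.

```python
import math

def pfo(t, qe, k1):
    """Pseudo-first-order kinetic model q(t) = qe * (1 - exp(-k1 * t))."""
    return qe * (1.0 - math.exp(-k1 * t))

def fit_pfo(ts, qs):
    """Crude grid-search fit minimising ERRSQ (a sketch; in practice a
    nonlinear least-squares routine would be used)."""
    best = (float("inf"), None, None)
    for qe in [q / 10 for q in range(10, 301)]:       # qe in 1.0 .. 30.0 mg/g
        for k1 in [k / 100 for k in range(1, 101)]:   # k1 in 0.01 .. 1.00 /min
            sse = sum((pfo(t, qe, k1) - q) ** 2 for t, q in zip(ts, qs))
            if sse < best[0]:
                best = (sse, qe, k1)
    return best

# hypothetical uptake data (mg/g) at times (min), generated near qe=12, k1=0.15
ts = [5, 10, 20, 40, 60, 120]
qs = [6.3, 9.3, 11.4, 12.0, 12.0, 12.0]
sse, qe_hat, k1_hat = fit_pfo(ts, qs)
```

Swapping the objective for HYBRID or ARE changes only the `sse` line, which is how the four error functions in the abstract lead to (possibly) different parameter estimates.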

  14. Analysis of the microscopic model taking into account of the 2p2h configurations

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.; Tkachev, V.N.

    1986-01-01

    A general equation for the effective field inside the nucleus, which takes into account both 1p1h and 2p2h configurations, is derived by the Green function method. This equation is used as a starting point to derive the previously developed microscopic model for accounting for 1p1h+phonon configurations in magic nuclei. The equations for the density matrix are analyzed in this model. It is shown that the quasiparticle number conservation law holds. The equation for the effective field is written in the coordinate representation; as a result, the problem is formulated in the 1p1h+2p2h+continuum approximation. The equation in the space of one-phonon states is derived and quantitatively analyzed.

  15. Analysis of a microscopic model of taking into account 2p2h configurations

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.; Tkachev, V.N.

    1986-01-01

    The Green's-function method has been used to obtain a general equation for the effective field in a nucleus, taking into account both 1p1h and 2p2h configurations. This equation has been used as the starting point for derivation of a previously developed microscopic model of taking 1p1h+phonon configurations into account in magic nuclei. The equation for the density matrix is analyzed in this model. It is shown that the number of quasiparticles is conserved. An equation is obtained for the effective field in the coordinate representation, which provides a formulation of the problem in the 1p1h+2p2h+continuum approximation. The equation is derived and quantitatively analyzed in the space of one-phonon states

  16. A Simple Accounting-based Valuation Model for the Debt Tax Shield

    Directory of Open Access Journals (Sweden)

    Andreas Scholze

    2010-05-01

    Full Text Available This paper describes a simple way to integrate the debt tax shield into an accounting-based valuation model. The market value of equity is determined by forecasting residual operating income, which is calculated by charging operating income for the operating assets at a required return that accounts for the tax benefit that comes from borrowing to raise cash for the operations. The model assumes that the firm maintains a deterministic financial leverage ratio, which tends to converge quickly to typical steady-state levels over time. From a practical point of view, this characteristic is of particular help, because it allows a continuing value calculation at the end of a short forecast period.

  17. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection

    Science.gov (United States)

    Zipkin, Elise F.; Grant, Evan H. Campbell; Fagan, William F.

    2012-01-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multi-species hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions of species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation dataset. We found that wetland hydroperiod (the length of time that a wetland holds water) as well as the occurrence state in the prior year were generally the most important factors in determining occupancy. The model with only habitat covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multi-species models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for
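    The AUC evaluation described above rests on the plain Mann-Whitney computation sketched below on hypothetical labels and scores; this sketch does not reproduce the paper's Bayesian posterior over AUC or its correction for false negatives in the evaluation data:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen occupied site scores higher than an unoccupied one,
    with ties counted as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical observed occupancy (1/0) and model-predicted probabilities.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(labels, scores)
```

    In the paper's Bayesian setting, this computation would be repeated over posterior draws of the occupancy states to yield a posterior distribution of AUC rather than a single value.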

  18. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect

    Science.gov (United States)

    Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more frequent rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt-rate enhancement or reduction due to a thin or thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean-glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor, which does not account for the influence of varying debris thickness on melt, and prescribe a constant reduction for the entire melt across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model's empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically based debris energy-balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to that of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the
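    The structure of such a debris temperature-index model can be sketched as below. The exponential decay of the temperature factor (TF) and shortwave radiation factor (SRF) with debris thickness, and every numeric value, are illustrative placeholders, not the calibrated parameterization of the paper:

```python
import math

def melt_rate(temp_c, swrad, debris_m, tf0=0.04, srf0=0.0009, k=8.0, t_thresh=1.0):
    """Illustrative debris temperature-index melt (mm w.e. per hour).
    TF and SRF decay with debris thickness debris_m (metres); all
    parameter values are hypothetical placeholders."""
    if temp_c <= t_thresh:
        return 0.0
    tf = tf0 * math.exp(-k * debris_m)    # temperature factor, thickness-dependent
    srf = srf0 * math.exp(-k * debris_m)  # shortwave radiation factor, thickness-dependent
    return tf * temp_c + srf * swrad
```

    The thickness-dependent factors reproduce the qualitative behaviour the abstract describes: for the same air temperature and radiation, thicker debris yields less melt at the debris-ice interface.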

  19. Insurance: Accounting, Regulation, Actuarial Science

    OpenAIRE

    Alain Tosetti; Thomas Behar; Michel Fromenteau; Stéphane Ménart

    2001-01-01

    We shall be examining the following topics: (i) basic frameworks for accounting and for statutory insurance rules; and (ii) actuarial principles of insurance; for both life and nonlife (i.e., casualty and property) insurance. Section 1 introduces insurance terminology, regarding what an operation must include in order to be an insurance operation (the legal, statistical, financial or economic aspects), and introduces the accounting and regulation frameworks and the two actuarial models of insur...

  20. Models and error analyses of measuring instruments in accountability systems in safeguards control

    International Nuclear Information System (INIS)

    Dattatreya, E.S.

    1977-05-01

    Essentially, three types of measuring instruments are used in plutonium accountability systems: (1) bubblers, for measuring the total volume of liquid in the holding tanks; (2) coulometers, titration apparatus, and calorimeters, for measuring the concentration of plutonium; and (3) spectrometers, for measuring isotopic composition. These three classes of instruments are modeled and analyzed. Finally, the uncertainty in the estimation of total plutonium in the holding tank is determined.
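    The final step, estimating total plutonium from a volume measurement (bubblers) and a concentration measurement (coulometry or calorimetry), can be illustrated with first-order propagation of independent instrument errors; the numbers below are hypothetical:

```python
import math

def total_pu_with_uncertainty(volume_l, sigma_v, conc_g_per_l, sigma_c):
    """Total plutonium mass (g) and its standard deviation, propagating
    independent volume and concentration errors to first order:
    (sigma_m / m)^2 = (sigma_V / V)^2 + (sigma_C / C)^2."""
    mass = volume_l * conc_g_per_l
    rel = math.sqrt((sigma_v / volume_l) ** 2 + (sigma_c / conc_g_per_l) ** 2)
    return mass, mass * rel

# Hypothetical tank: 1000 L ± 5 L at 2.0 g/L ± 0.02 g/L.
mass, sigma = total_pu_with_uncertainty(1000.0, 5.0, 2.0, 0.02)
```

    Because relative errors add in quadrature, the less precise of the two instruments dominates the uncertainty in the total; here the 1% concentration error outweighs the 0.5% volume error.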

  1. [Application of detecting and taking overdispersion into account in Poisson regression model].

    Science.gov (United States)

    Bouche, G; Lepage, B; Migeot, V; Ingrand, P

    2009-08-01

    Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact, with an illustration based on data from a study investigating the relationship between use of the Internet to seek health information and the number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were performed to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested for overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals to the number of degrees of freedom (chi(2)/df). We then fitted the three models and compared parameter estimates to the estimates given by the Poisson regression model. The variance of the number of primary care consultations (Var[Y] = 21.03) was greater than the mean (E[Y] = 5.93) and the chi(2)/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. The interpretation of estimates for two variables (using the Internet to seek health information, and single-parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (robust estimator) and 0.45 and 0.13 (negative binomial), respectively. Different methods exist to solve the problem of underestimating the variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems to be particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
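    The overdispersion check described above reduces to a short computation. The counts below are hypothetical stand-ins for numbers of consultations, fitted here with an intercept-only model so the fitted mean is simply the sample mean:

```python
def pearson_dispersion(observed, fitted, n_params):
    """Pearson chi-square divided by residual degrees of freedom for a
    Poisson fit; values well above 1 signal overdispersion."""
    chi2 = sum((y - mu) ** 2 / mu for y, mu in zip(observed, fitted))
    return chi2 / (len(observed) - n_params)

counts = [0, 1, 12, 2, 0, 15, 3, 1]     # hypothetical consultation counts
mu = sum(counts) / len(counts)          # intercept-only fitted mean
ratio = pearson_dispersion(counts, [mu] * len(counts), n_params=1)
```

    A ratio near 1 is consistent with the Poisson assumption; here, as in the study (where the ratio was 3.26), it is well above 1, motivating the overdispersed Poisson, robust, or negative binomial alternatives.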

  2. A database model for evaluating material accountability safeguards effectiveness against protracted theft

    International Nuclear Information System (INIS)

    Sicherman, A.; Fortney, D.S.; Patenaude, C.J.

    1993-07-01

    DOE Material Control and Accountability Order 5633.3A requires that facilities handling special nuclear material evaluate their effectiveness against protracted theft (repeated thefts of small quantities of material, typically occurring over an extended time frame, to accumulate a goal quantity). Because a protracted theft attempt can extend over time, material accountability-like (MA) safeguards may help detect a protracted theft attempt in progress. Inventory anomalies and material not being in its authorized location when requested for processing are examples of MA detection mechanisms. Crediting such detection in evaluations, however, requires taking into account potential insider subversion of MA safeguards. In this paper, the authors describe a database model for evaluating MA safeguards effectiveness against protracted theft that addresses potential subversion. The model includes a detailed yet practical structure for characterizing various types of MA activities, lists of potential insider MA defeat methods and access/authority related to MA activities, and an initial implementation of built-in MA detection probabilities. This database model, implemented in the new Protracted Insider module of ASSESS (Analytic System and Software for Evaluating Safeguards and Security), helps facilitate the systematic collection of relevant information about MA activity steps and "standardize" MA safeguards evaluations.

  3. Modelling Job-Related and Personality Predictors of Intention to Pursue Accounting Careers among Undergraduate Students in Ghana

    Science.gov (United States)

    Mbawuni, Joseph; Nimako, Simon Gyasi

    2015-01-01

    This study principally investigates job-related and personality factors that determine Ghanaian accounting students' intentions to pursue careers in accounting. It draws on a rich body of existing literature to develop a research model. Primary data were collected from a cross-sectional survey of 516 final year accounting students in a Ghanaian…

  4. INVESTIGATION INTO ACCOUNT OF A TIME VALUE OF MONEY IN CLASSICAL MULTITOPIC INVENTORY MODELS

    Directory of Open Access Journals (Sweden)

    Natalya A. Chernyaeva

    2013-01-01

    Full Text Available The article describes two types of models: the first is a traditional multitopic inventory model with constant demand, and the second is a model based on the average cost of inventory in optimizing the inventory management system. Taking into account the time value of money in the models, the authors study three possible schemes for the payment of costs: «prenumerando» (at the time of delivery of the general batch order), «postnumerando» (at the time of delivery of the next general batch order), and payment of costs in the mid-term. Maximization of the total intensity of revenue from the outgoing and incoming cash flows occurring in the inventory management system that characterize the analyzed models was adopted as the criterion for optimizing the inventory control strategy.
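    The three payment schemes differ only in when, within each inventory cycle, the cost is discounted; a minimal present-value sketch (the payment size, discount rate, and horizon below are illustrative):

```python
def pv_costs(payment, rate, periods, scheme):
    """Present value of equal per-cycle costs under the three payment
    schemes discussed: 'prenumerando' (start of each cycle),
    'postnumerando' (end of each cycle), 'mid' (middle of each cycle)."""
    offset = {"prenumerando": 0.0, "postnumerando": 1.0, "mid": 0.5}[scheme]
    return sum(payment / (1.0 + rate) ** (t + offset) for t in range(periods))

pv_pre = pv_costs(100.0, 0.1, 5, "prenumerando")
pv_mid = pv_costs(100.0, 0.1, 5, "mid")
pv_post = pv_costs(100.0, 0.1, 5, "postnumerando")
```

    With a positive discount rate, paying at the start of each cycle («prenumerando») always carries the largest present value of costs and paying at the end («postnumerando») the smallest, which is why the choice of scheme affects the optimal replenishment strategy.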

  5. A multiscale active structural model of the arterial wall accounting for smooth muscle dynamics.

    Science.gov (United States)

    Coccarelli, Alberto; Edwards, David Hughes; Aggarwal, Ankush; Nithiarasu, Perumal; Parthimos, Dimitris

    2018-02-01

    Arterial wall dynamics arise from the synergy of passive mechano-elastic properties of the vascular tissue and the active contractile behaviour of smooth muscle cells (SMCs) that form the media layer of vessels. We have developed a computational framework that incorporates both these components to account for vascular responses to mechanical and pharmacological stimuli. To validate the proposed framework and demonstrate its potential for testing hypotheses on the pathogenesis of vascular disease, we have employed a number of pharmacological probes that modulate the arterial wall contractile machinery by selectively inhibiting a range of intracellular signalling pathways. Experimental probes used on ring segments from the rabbit central ear artery are: phenylephrine, a selective α1-adrenergic receptor agonist that induces vasoconstriction; cyclopiazonic acid (CPA), a specific inhibitor of sarcoplasmic/endoplasmic reticulum Ca²⁺-ATPase; and ryanodine, a diterpenoid that modulates Ca²⁺ release from the sarcoplasmic reticulum. These interventions were able to delineate the role of membrane versus intracellular signalling, previously identified as main factors in smooth muscle contraction and the generation of vessel tone. Each SMC was modelled by a system of nonlinear differential equations that account for intracellular ionic signalling, and in particular Ca²⁺ dynamics. Cytosolic Ca²⁺ concentrations formed the catalytic input to a cross-bridge kinetics model. Contractile output from these cellular components forms the input to the finite-element model of the arterial rings under isometric conditions that reproduces the experimental conditions. The model does not account for the role of the endothelium, as the nitric oxide production was suppressed by the action of L-NAME, and also due to the absence of shear stress on the arterial ring, as the experimental set-up did not involve flow. Simulations generated by the integrated model closely matched experimental

  6. Model of inventory replenishment in periodic review accounting for the occurrence of shortages

    Directory of Open Access Journals (Sweden)

    Stanisław Krzyżaniak

    2014-03-01

    Full Text Available Background: Despite the development of alternative concepts of goods flow management, inventory management under conditions of random variations in demand is still an important issue, both from the point of view of inventory keeping and replenishment costs and of the service level measured as the level of inventory availability. There are a number of inventory replenishment systems used in these conditions, but they are mostly developments of two basic systems: reorder point-based and periodic review-based. The paper deals with the latter system. Numerous studies indicate the need to improve the classical models describing that system, mainly because of the necessity to adapt the model better to actual conditions. This allows a correct selection of the parameters that control the inventory replenishment system used and, as a result, the achievement of the expected economic effects. Methods: This research aimed at building a model of the periodic review system reflecting the relations (observed during simulation tests) between the volume of inventory shortages, the degree of accounting for so-called deferred demand, and the service level expressed as the probability of satisfying demand in the review and inventory replenishment cycle. The following model building and testing method was applied: numerical simulation of inventory replenishment - detailed analysis of simulation results - construction of the model taking into account the regularities observed during the simulations - determination of principles for solving the system of relations creating the model - verification of the results obtained from the model using the results from simulation. Results: Presented are selected results of calculations based on classical formulas and on the developed model, which describe the relations between the service level and the parameters controlling the discussed inventory replenishment system. 
The results are compared to the simulation
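    For contrast, the classical periodic-review baseline that the paper refines can be sketched as an order-up-to computation; the demand parameters and service-level quantile below are illustrative, and this textbook relation includes none of the paper's corrections for shortage volumes or deferred demand:

```python
import math

def order_up_to_level(mean_demand, sd_demand, review_period, lead_time, z):
    """Classical periodic-review (R, S) order-up-to level: expected demand
    over the review period plus lead time, plus a safety stock of z standard
    deviations (z is the normal quantile for the target cycle-service level)."""
    horizon = review_period + lead_time
    return mean_demand * horizon + z * sd_demand * math.sqrt(horizon)

# e.g. daily demand ~ N(20, 5), weekly review, 2-day lead time, ~95% service (z = 1.645)
s_level = order_up_to_level(20.0, 5.0, 7, 2, 1.645)
```

    The developed model replaces the normal-quantile safety-stock term with relations, fitted from simulation, that account for shortages and the degree to which unmet demand is deferred.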

  7. Testing the limits of the 'joint account' model of genetic information: a legal thought experiment.

    Science.gov (United States)

    Foster, Charles; Herring, Jonathan; Boyd, Magnus

    2015-05-01

    We examine the likely reception in the courtroom of the 'joint account' model of genetic confidentiality. We conclude that the model, as modified by Gilbar and others, is workable and reflects, better than more conventional legal approaches, both the biological and psychological realities and the obligations owed under Articles 8 and 10 of the European Convention on Human Rights (ECHR). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  8. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    Science.gov (United States)

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities, and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Generation of SEEAW asset accounts based on water resources management models

    Science.gov (United States)

    Pedro-Monzonís, María; Solera, Abel; Andreu, Joaquín

    2015-04-01

    One of the main challenges of the XXI century relates to the sustainable use of water, since water is an essential element for the life of all who inhabit our planet. In many cases, the lack of economic valuation of water resources leads to inefficient water use. In this regard, society expects policymakers and stakeholders to maximise the profit produced per unit of natural resources. Water planning and Integrated Water Resources Management (IWRM) represent the best way to achieve this goal. The System of Environmental-Economic Accounting for Water (SEEAW) is presented as a tool for water allocation which enables the building of water balances in a river basin. The main concern of the SEEAW is to provide a standard approach which allows policymakers to compare results between different territories. But building water accounts is a complex task owing to the difficulty of collecting the required data, and because the components of the hydrological cycle are hard to gauge, simulation models have become an essential tool, extensively employed in recent decades. The aim of this paper is to present the building of a database that enables the combined use of hydrological models and water resources models developed with the AQUATOOL DSSS to fill in the SEEAW tables. This research is framed within the Water Accounting in a Multi-Catchment District (WAMCD) project, financed by the European Union. Its main goal is the development of water accounts in the Mediterranean Andalusian River Basin District, in Spain. The research aims to contribute to the objectives of the "Blueprint to safeguard Europe's water resources". It is noteworthy that, in Spain, a large part of these methodological decisions are included in the Spanish Guideline of Water Planning, with normative status, guaranteeing consistency and comparability of the results.

  10. @AACAnatomy twitter account goes live: A sustainable social media model for professional societies.

    Science.gov (United States)

    Benjamin, Hannah K; Royer, Danielle F

    2018-05-01

    Social media, with its capabilities of fast, global information sharing, provides a useful medium for professional development, connecting and collaborating with peers, and outreach. The goals of this study were to describe a new, sustainable model for Twitter use by professional societies, and analyze its impact on @AACAnatomy, the Twitter account of the American Association of Clinical Anatomists. Under supervision of an Association committee member, an anatomy graduate student developed a protocol for publishing daily tweets for @AACAnatomy. Five tweet categories were used: Research, Announcements, Replies, Engagement, and Community. Analytics from the 6-month pilot phase were used to assess the impact of the new model. @AACAnatomy had a steady average growth of 33 new followers per month, with less than 10% likely representing Association members. Research tweets, based on Clinical Anatomy articles with an abstract link, were the most shared, averaging 5,451 impressions, 31 link clicks, and nine #ClinAnat hashtag clicks per month. However, tweets from non-Research categories accounted for the highest impression and engagement metrics in four out of six months. For all tweet categories, monthly averages show consistent interaction of followers with the account. Daily tweet publication resulted in a 103% follower increase. An active Twitter account successfully facilitated regular engagement with @AACAnatomy followers and the promotion of clinical anatomy topics within a broad community. This Twitter model has the potential for implementation by other societies as a sustainable medium for outreach, networking, collaboration, and member engagement. Clin. Anat. 31:566-575, 2018. © 2017 Wiley Periodicals, Inc.

  11. Directional harmonic theory: a computational Gestalt model to account for illusory contour and vertex formation.

    Science.gov (United States)

    Lehar, Steven

    2003-01-01

    Visual illusions and perceptual grouping phenomena offer an invaluable tool for probing the computational mechanism of low-level visual processing. Some illusions, like the Kanizsa figure, reveal illusory contours that form edges collinear with the inducing stimulus. This kind of illusory contour has been modeled by neural network models by way of cells equipped with elongated spatial receptive fields designed to detect and complete the collinear alignment. There are, however, other illusory groupings which are not so easy to account for in neural network terms. The Ehrenstein illusion exhibits an illusory contour that forms a contour orthogonal to the stimulus instead of collinear with it. Other perceptual grouping effects reveal illusory contours that exhibit a sharp corner or vertex, and still others take the form of vertices defined by the intersection of three, four, or more illusory contours that meet at a point. A direct extension of the collinear completion models to account for these phenomena tends towards a combinatorial explosion, because it would suggest cells with specialized receptive fields configured to perform each of those completion types, each of which would have to be replicated at every location and every orientation across the visual field. These phenomena therefore challenge the adequacy of the neural network approach to account for these diverse perceptual phenomena. I have proposed elsewhere an alternative paradigm of neurocomputation in the harmonic resonance theory (Lehar 1999, see website), whereby pattern recognition and completion are performed by spatial standing waves across the neural substrate. 
The standing waves perform a computational function analogous to that of the spatial receptive fields of the neural network approach, except that, unlike that paradigm, a single resonance mechanism performs a function equivalent to a whole array of spatial receptive fields of different spatial configurations and of different orientations

  12. A Buffer Model Account of Behavioral and ERP Patterns in the Von Restorff Paradigm

    Directory of Open Access Journals (Sweden)

    Siri-Maria Kamp

    2016-06-01

    Full Text Available We combined a mechanistic model of episodic encoding with theories on the functional significance of two event-related potential (ERP components to develop an integrated account for the Von Restorff effect, which refers to the enhanced recall probability for an item that deviates in some feature from other items in its study list. The buffer model of Lehman and Malmberg (2009, 2013 can account for this effect such that items encountered during encoding enter an episodic buffer where they are actively rehearsed. When a deviant item is encountered, in order to re-allocate encoding resources towards this item the buffer is emptied from its prior content, a process labeled “compartmentalization”. Based on theories on their functional significance, the P300 component of the ERP may co-occur with this hypothesized compartmentalization process, while the frontal slow wave may index rehearsal. We derived predictions from this integrated model for output patterns in free recall, systematic variance in ERP components, as well as associations between the two types of measures in a dataset of 45 participants who studied and freely recalled lists of the Von Restorff type. Our major predictions were confirmed and the behavioral and physiological results were consistent with the predictions derived from the model. These findings demonstrate that constraining mechanistic models of episodic memory with brain activity patterns and generating predictions for relationships between brain activity and behavior can lead to novel insights into the relationship between the brain, the mind, and behavior.

  13. Modelling of the PROTO-II crossover network

    International Nuclear Information System (INIS)

    Proulx, G.A.; Lackner, H.; Spence, P.; Wright, T.P.

    1985-01-01

    In order to drive a double-ring, symmetrically fed bremsstrahlung diode, the PROTO-II accelerator was redesigned. The radially converging triplate water line was reconfigured to drive two radially converging triplate lines in parallel. The four output lines were connected to the two input lines via an electrically enclosed tubular crossover network. Low-voltage Time Domain Reflectometry (TDR) experiments were conducted on a full-scale water-immersed model of one section of the crossover network as an aid in this design. A lumped-element analysis of the power flow through the network was inadequate for explaining the observed wave transmission and reflection characteristics. A more detailed analysis was performed with a circuit code in which we considered both localized lumped-element and transmission-line features of the crossover network. Experimental results of the model tests are given and compared with the circuit simulations. 7 figs

  14. Accounting for standard errors of vision-specific latent trait in regression models.

    Science.gov (United States)

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of the Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for the SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analyses performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and the multiple linear regression model for the assessment of association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; in our simulation study, it was compared with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration the SEs of the estimated Rasch-scaled scores. The two models differed on real data in effect size estimates and in the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (on average, a 5-fold decrease in bias) with comparable power and precision in the estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Patient-reported data analyzed using Rasch techniques do not take into account the SE of the latent trait in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits

  15. System modeling of spent fuel transfers at EBR-II

    International Nuclear Information System (INIS)

    Imel, G.R.; Houshyar, A.

    1994-01-01

    The unloading of spent fuel from the Experimental Breeder Reactor-II (EBR-II) for interim storage and subsequent processing in the Fuel Cycle Facility (FCF) is a multi-stage process, involving complex operations at a minimum of four different facilities at the Argonne National Laboratory-West (ANL-W) site. Each stage typically involves complicated handling and/or cooling equipment that must be periodically maintained, leading to both planned and unplanned downtime. A program was initiated in October 1993 to replace the 330 depleted-uranium blanket subassemblies (S/As) with stainless steel reflectors. Routine operation of the reactor for fuels performance and materials testing continued in FY 1994 alongside the blanket unloading. In the summer of 1994, Congress mandated the October 1, 1994 shutdown of EBR-II. Consequently, all blanket S/As and fueled drivers will be removed from the reactor tank and replaced with stainless steel assemblies (which are needed to maintain a precise configuration within the grid so that the under-sodium fuel handling equipment can function). A system modeling effort was conducted to determine how to meet the objectives of the blanket and fuel unloading program, which under the current plan requires complete unloading of all fueled assemblies from the primary tank in 2 1/2 years. A simulation model of the fuel handling system at ANL-W was developed and used to analyze different unloading scenarios; the model has provided valuable information about required resources and about modifications to equipment and procedures. This paper reports the results of this modeling effort.
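    The kind of multi-stage transfer campaign described above can be sketched as a simple flow-shop simulation. The stage names and per-assembly processing times below are hypothetical illustrations, not the actual ANL-W model; the point is that the slowest stage paces the whole campaign:

```python
# Minimal flow-shop sketch of multi-stage spent-fuel transfers (hypothetical
# stage names and process times; not the authors' simulation model). Each
# assembly passes through every stage in order; each stage handles one
# assembly at a time, so the bottleneck stage sets the campaign pace.
N_ASSEMBLIES = 330                      # blanket S/As in the campaign
STAGES = [("reactor unload", 4.0),      # hours per assembly (illustrative)
          ("interim storage", 2.0),
          ("transfer cask", 6.0),       # bottleneck stage
          ("FCF receipt", 3.0)]

free_at = [0.0] * len(STAGES)           # time each stage next becomes free
finish = 0.0
for _ in range(N_ASSEMBLIES):
    t = 0.0                             # assembly available at campaign start
    for i, (_, p) in enumerate(STAGES):
        start = max(t, free_at[i])      # wait until the stage is free
        t = start + p
        free_at[i] = t
    finish = t

total_proc = sum(p for _, p in STAGES)
bottleneck = max(p for _, p in STAGES)
# Flow-shop identity for identical jobs with unlimited buffers:
# makespan = sum of stage times + (N - 1) * bottleneck time
print(finish, total_proc + (N_ASSEMBLIES - 1) * bottleneck)
```

    A real campaign model would add stochastic downtime and maintenance windows to each stage, which is where a simulation earns its keep over the closed-form identity shown in the final comment.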

  16. An analytical model accounting for tip shape evolution during atom probe analysis of heterogeneous materials.

    Science.gov (United States)

    Rolland, N; Larson, D J; Geiser, B P; Duguay, S; Vurpillot, F; Blavette, D

    2015-12-01

    An analytical model describing the field evaporation dynamics of a tip made of a thin layer deposited on a substrate is presented in this paper. The difference in evaporation field between the materials is taken into account in this approach, in which the tip shape is modeled at a mesoscopic scale. It was found that the absence of a sharp edge on the surface is a sufficient condition to derive the morphological evolution during successive evaporation of the layers. This modeling gives an instantaneous and smooth analytical representation of the surface that shows good agreement with finite-difference simulation results, and a specific evaporation regime was highlighted when the substrate is a low-evaporation-field phase. In addition, the model makes it possible to calculate the analyzed volume of the tip theoretically, potentially opening new horizons for atom probe tomographic reconstruction.

  17. Spectral Neugebauer-based color halftone prediction model accounting for paper fluorescence.

    Science.gov (United States)

    Hersch, Roger David

    2014-08-20

    We present a spectral model for predicting the fluorescent emission and the total reflectance of color halftones printed on optically brightened paper. By relying on extended Neugebauer models, the proposed model accounts for the attenuation by the ink halftones of both the incident exciting light in the UV wavelength range and the emerging fluorescent emission in the visible wavelength range. The total reflectance is predicted by adding the predicted fluorescent emission relative to the incident light and the pure reflectance predicted with an ink-spreading enhanced Yule-Nielsen modified Neugebauer reflectance prediction model. The predicted fluorescent emission spectrum as a function of the amounts of cyan, magenta, and yellow inks is very accurate. It can be useful to paper and ink manufacturers who would like to study in detail the contribution of the fluorescent brighteners and the attenuation of the fluorescent emission by ink halftones.
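    The Yule-Nielsen modified spectral Neugebauer prediction underlying the model above can be sketched for a single wavelength. The primary reflectances and the Yule-Nielsen n-value below are made-up illustrative numbers, and the paper's full model additionally includes ink spreading and the fluorescence terms:

```python
# Yule-Nielsen modified spectral Neugebauer (YNSN) sketch for one wavelength
# and cyan/magenta/yellow coverages. Reflectances and n are hypothetical.
def demichel_coverages(c, m, y):
    """Fractional areas of the 8 Neugebauer primaries (Demichel equations)."""
    return {
        "white":   (1 - c) * (1 - m) * (1 - y),
        "cyan":    c * (1 - m) * (1 - y),
        "magenta": (1 - c) * m * (1 - y),
        "yellow":  (1 - c) * (1 - m) * y,
        "blue":    c * m * (1 - y),        # cyan + magenta overprint
        "green":   c * (1 - m) * y,
        "red":     (1 - c) * m * y,
        "black":   c * m * y,
    }

def ynsn_reflectance(c, m, y, R, n=2.0):
    """R: primary reflectances at one wavelength; n: Yule-Nielsen exponent."""
    a = demichel_coverages(c, m, y)
    return sum(a[p] * R[p] ** (1.0 / n) for p in a) ** n

R_primaries = {"white": 0.85, "cyan": 0.45, "magenta": 0.30, "yellow": 0.70,
               "blue": 0.12, "green": 0.35, "red": 0.20, "black": 0.05}

r = ynsn_reflectance(0.5, 0.3, 0.1, R_primaries)
print(r)
```

    The paper's contribution is to apply this kind of halftone attenuation model twice: once to the exciting UV light and once to the emerging fluorescent emission, before adding the pure-reflectance prediction.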

  18. Regional Balance Model of Financial Flows through Sectoral Approaches System of National Accounts

    Directory of Open Access Journals (Sweden)

    Ekaterina Aleksandrovna Zaharchuk

    2017-03-01

    The main purpose of the study whose results are reported in this article is the theoretical and methodological substantiation of a regional balance model of financial flows consistent with the principles of construction of the System of National Accounts (SNA). The paper summarizes international experience in building regional accounts within the SNA and discusses the advantages and disadvantages of existing techniques for constructing a Social Accounting Matrix. The authors propose an approach to building the regional balance model of financial flows based on disaggregated tables of the formation, distribution and use of the value added of a territory within the institutional sectors of the SNA (corporations, public administration, households). To resolve the problem of transferring value added from industries to sectors, the authors offer an approach to accounting for the formation, distribution and use of value added within the institutional sectors of territories. The calculation methods rely on the publicly available information base of statistics agencies and federal services. The authors provide a scheme of the interrelations among the indicators of the regional balance model of financial flows, which makes it possible to mutually reconcile the movement of regional resources across the "corporations", "public administration" and "households" sectors, and the cash flows of the region by sector and direction of use. The result is a single account of the formation and distribution of territorial financial resources, which constitutes the regional balance model of financial flows. This matrix shows the distribution of financial resources by income source and sector, covering the components of the formation (compensation, taxes and gross profit), distribution (transfers and payments) and use (final consumption, accumulation) of value added.

  19. Developing a tuberculosis transmission model that accounts for changes in population health.

    Science.gov (United States)

    Oxlade, Olivia; Schwartzman, Kevin; Benedetti, Andrea; Pai, Madhukar; Heymann, Jody; Menzies, Dick

    2011-01-01

    Simulation models are useful in policy planning for tuberculosis (TB) control. To assess interventions accurately, important modifiers of the epidemic should be accounted for in evaluative models. Improvements in population health were associated with the declining TB epidemic in the pre-antibiotic era and may be relevant today. The objective of this study was to develop and validate a TB transmission model that accounts for changes in population health. We developed a deterministic TB transmission model using reported data from the pre-antibiotic era in England. Change in adjusted life expectancy, a proxy for general population health, was used to determine the rate of change of key epidemiological parameters. Predicted outcomes included risk of TB infection and TB mortality. The model was validated in the setting of the Netherlands and then applied to modern Peru. The model, developed in the setting of England, predicted TB trends in the Netherlands very accurately: the R² values for the correlation between observed and predicted data were 0.97 and 0.95 for TB infection and mortality, respectively. In Peru, the predicted decline in incidence prior to the expansion of the "Directly Observed Treatment, Short Course" (DOTS) strategy was 3.7% per year (observed: 3.9% per year). After DOTS expansion, the predicted decline was very similar to the observed decline of 5.8% per year. We successfully developed and validated a TB model that uses a proxy for population health to estimate changes in key epidemiological parameters. Population health contributed significantly to the improvement in TB outcomes observed in Peru, and changing population health should be incorporated into evaluative models for global TB control.
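    The mechanism described above — epidemiological rates that improve as a population-health proxy rises — can be sketched with a toy deterministic compartment model. The structure and every parameter value below are hypothetical illustrations, not the authors' calibrated model:

```python
# Toy susceptible/latent/infectious TB sketch (not the paper's model): the
# latent-to-active progression rate falls and the cure rate rises as a
# life-expectancy proxy improves from 0 to 1. All constants are illustrative.
def simulate(years=50.0, dt=0.1):
    # start at the endemic equilibrium of the "poor health" parameters
    S, L, I = 0.025, 0.65, 0.325        # population fractions
    beta = 8.0                          # effective contact rate [1/yr]
    incidence = []
    for step in range(int(years / dt)):
        health = step * dt / years      # life-expectancy proxy rising 0 -> 1
        progression = 0.10 * (1.0 - 0.5 * health)  # latent -> active TB
        recovery = 0.20 * (1.0 + 2.0 * health)     # cure / self-cure
        new_inf = beta * S * I * dt
        new_act = progression * L * dt
        cured = recovery * I * dt
        S += cured - new_inf
        L += new_inf - new_act
        I += new_act - cured
        incidence.append(new_act / dt)  # annual rate of new active cases
    return incidence

inc = simulate()
print(inc[0], inc[-1])   # incidence falls as population health improves
```

    Even without any intervention term, the improving-health trend alone drags incidence down, which is the confounding effect the authors argue evaluative models must include.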

  20. Accounting for Zero Inflation of Mussel Parasite Counts Using Discrete Regression Models

    Directory of Open Access Journals (Sweden)

    Emel Çankaya

    2017-06-01

    In many ecological applications, absences of species are inevitable, owing either to detection failures in samples or to conditions uninhabitable for the species, resulting in a high number of zero counts or abundances. The usual practice for modelling such data is regression modelling of log(abundance+1), and it is well known that the resulting model is inadequate for prediction purposes. Discrete models accounting for zero abundances, namely zero-inflated Poisson and negative binomial regression (ZIP and ZINB) and Hurdle-Poisson (HP) and Hurdle-Negative Binomial (HNB) models, amongst others, are widely preferred to the classical regression models. Because mussels are one of the economically most important aquatic products of Turkey, the purpose of this study is to examine the performance of these four models in determining the significant biotic and abiotic factors affecting the occurrence of the Nematopsis legeri parasite, which harms Mediterranean mussels (Mytilus galloprovincialis L.). In the data collected from three coastal regions of Sinop city in Turkey, more than 50% of parasite counts on average were zero-valued, and model comparisons were based on information criteria. The results showed that the probability of occurrence of this parasite is best formulated by the ZINB or HNB models, and the influential factors in the models were found to correspond with the ecological differences of the regions.
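    The zero-inflation idea these models share can be shown in a few lines. The parameter values are illustrative (chosen so that, as in the mussel data, more than half of the counts are zero); this is the ZIP probability mass function, not the study's fitted model:

```python
# Minimal zero-inflated Poisson (ZIP) sketch: a point mass at zero is mixed
# with a Poisson count component, so P(0) exceeds the plain-Poisson zero
# probability. Parameters are illustrative, not fitted values.
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def zip_pmf(k, pi, lam):
    """pi: probability of a 'structural' zero; lam: Poisson mean."""
    p = (1 - pi) * poisson_pmf(k, lam)
    if k == 0:
        p += pi
    return p

pi, lam = 0.5, 2.0        # e.g. >50% zeros, as in the mussel counts
p0_zip = zip_pmf(0, pi, lam)
p0_poisson = poisson_pmf(0, lam)
mean_zip = (1 - pi) * lam                 # ZIP mean
print(p0_zip, p0_poisson, mean_zip)
```

    Hurdle models differ only in structure: they model the zero/non-zero split and the positive counts (via a truncated distribution) as two separate parts.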

  2. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we resorted to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise, or a combination of adaptation and noise was used to validate the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated the superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception.
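    The central mechanism — residual evidence for the suppressed percept drives an accumulating prediction error that eventually forces a transition — can be caricatured in a toy simulation. This is not the paper's fitted model; every constant below is hypothetical:

```python
# Toy illustration of prediction-error-driven bistable switching (hypothetical
# constants, not the authors' model): constant residual evidence for the
# suppressed percept accumulates as prediction error until it flips perception.
def simulate(steps=2000, dt=0.01, gain=1.2, stabilization=0.9):
    percept = 0                      # index of the currently dominant percept
    error = 0.0                      # accumulated prediction error for the
                                     # suppressed interpretation
    transitions = []
    for step in range(steps):
        # stabilization damps the error; gain scales the residual evidence
        error += dt * gain * (1.0 - stabilization * error)
        if error >= 1.0:             # error overwhelms the current percept
            percept = 1 - percept
            error = 0.0
            transitions.append(step * dt)
    return percept, transitions

final_percept, times = simulate()
durations = [b - a for a, b in zip(times, times[1:])]
print(len(times), durations[:3])
```

    In this deterministic caricature the dominance durations are all identical; the full model adds stochastic evidence, which is what produces the characteristic skewed dominance-duration distributions of real bistable perception.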

  3. Accounting for and predicting the influence of spatial autocorrelation in water quality modeling

    Science.gov (United States)

    Miralha, L.; Kim, D.

    2017-12-01

    Although many studies have investigated spatial trends in water quality, more attention has yet to be paid to the consequences of considering or ignoring the spatial autocorrelation (SAC) that exists in water quality parameters. Several studies have noted the importance of accounting for SAC in water quality modeling, as well as the differences in outcomes between models that account for SAC and models that ignore it. However, the capacity to predict the magnitude of such differences is still limited. In this study, we hypothesized that the SAC inherently possessed by a response variable (i.e., a water quality parameter) influences the outcomes of spatial modeling. We evaluated whether the level of inherent SAC is associated with changes in R², the Akaike Information Criterion (AIC), and residual SAC (rSAC) after accounting for SAC in the modeling procedure. The main objective was to test whether water quality parameters with higher Moran's I values (a measure of inherent SAC) undergo a greater increase in R² and a greater reduction in both AIC and rSAC. We compared a non-spatial model (OLS) with two spatial regression approaches (spatial lag and spatial error models). Predictor variables were the principal components of topographic (elevation and slope), land cover, and hydrological soil group variables, acquired from federal online sources (e.g., USGS). Ten watersheds were selected, each in a different state of the USA. Results revealed that water quality parameters with higher inherent SAC showed a substantial increase in R² and decrease in rSAC after spatial regression, whereas AIC values did not change significantly. Overall, the higher the level of inherent SAC in a water quality variable, the greater the improvement in model performance, indicating a direct, linear relationship between the spatial model outcomes (R² and rSAC) and the degree of SAC in each water quality variable. Therefore, our study suggests that the inherent level of SAC in a water quality variable can indicate beforehand how much accounting for SAC will improve the model.
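    Moran's I, the "inherent SAC" measure referenced above, is straightforward to compute. A sketch on toy 4x4 grids with rook-adjacency weights (not the study's watershed data): clustered values yield positive I, an alternating checkerboard yields negative I:

```python
# Moran's I with binary rook-adjacency weights on a small lattice (toy data).
def morans_i(values, neighbors):
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(dev[i] * dev[j] for i, js in neighbors.items() for j in js)
    w_sum = sum(len(js) for js in neighbors.values())   # total weight
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

def rook_neighbors(rows, cols):
    nb = {}
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            nb[i] = [rr * cols + cc
                     for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                     if 0 <= rr < rows and 0 <= cc < cols]
    return nb

nb = rook_neighbors(4, 4)
clustered = [0] * 8 + [1] * 8    # two homogeneous halves: positive SAC
checker = [(r + c) % 2 for r in range(4) for c in range(4)]  # alternating: negative SAC
print(morans_i(clustered, nb), morans_i(checker, nb))
```

    For irregular watershed monitoring sites, the binary lattice weights would be replaced by distance-based or contiguity weights, but the statistic is the same.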

  4. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems depend on a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov chain Monte Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems, geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution, or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identifying the model-error component of the residual through a projection-based approach. To this end, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest-neighbour entries in the dictionary, and this basis is used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion.
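    The local error-basis construction can be sketched on a synthetic toy problem. The forward models, dimensions, and K/rank choices below are hypothetical stand-ins, not the authors' implementation:

```python
# Sketch of a K-nearest-neighbour local model-error basis (toy problem):
# stored (parameters, model-error) pairs form a dictionary; for a new
# parameter set, the residual is projected onto an SVD basis of the K nearest
# dictionary errors to strip the model-error component.
import numpy as np

rng = np.random.default_rng(0)

def detailed(theta):          # stand-in "detailed" forward model
    return np.sin(theta) + 0.1 * theta ** 2

def approximate(theta):       # cheap approximation with structured error
    return np.sin(theta)

# dictionary of stored (parameters, model-error vector) pairs
thetas = [rng.uniform(-2, 2, size=20) for _ in range(200)]
errors = [detailed(t) - approximate(t) for t in thetas]

def strip_model_error(theta_new, K=10, rank=5):
    # find the K nearest dictionary entries in parameter space
    d = [np.linalg.norm(theta_new - t) for t in thetas]
    idx = np.argsort(d)[:K]
    E = np.stack([errors[i] for i in idx], axis=1)   # 20 x K error matrix
    U, _, _ = np.linalg.svd(E, full_matrices=False)
    B = U[:, :rank]                                  # local error basis
    resid = detailed(theta_new) - approximate(theta_new)  # "observed" residual
    return resid - B @ (B.T @ resid)                 # remove error component

theta_new = rng.uniform(-2, 2, size=20)
raw = detailed(theta_new) - approximate(theta_new)
cleaned = strip_model_error(theta_new)
print(np.linalg.norm(raw), np.linalg.norm(cleaned))
```

    In the actual MCMC setting the residual comes from data minus the approximate model, and the dictionary grows during sampling; the projection step, however, is exactly this orthogonal removal of the span of nearby stored errors.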

  5. Simple inflationary quintessential model. II. Power law potentials

    Science.gov (United States)

    de Haro, Jaume; Amorós, Jaume; Pan, Supriya

    2016-09-01

    The present work is a sequel to our previous work [Phys. Rev. D 93, 084018 (2016)], which presented a simple version of an inflationary quintessential model whose inflationary stage was described by a Higgs-type potential while the quintessential phase was driven by an exponential potential. Additionally, the model predicted a universe that is nonsingular in the past but geodesically past incomplete. It was also found to agree with the Planck 2013 data when running is allowed. However, that model yields a theoretical value of the running far smaller than the central value of the best fit in the (ns, r, αs ≡ dns/d ln k) parameter space, where ns, r and αs respectively denote the spectral index, the tensor-to-scalar ratio and the running of the spectral index of an inflationary model; consequently, to assess its viability one must examine the two-dimensional marginalized confidence region in the (ns, r) plane without taking the running into account. Unfortunately, such an analysis shows that the model does not pass this test. In this sequel we therefore propose a family of models, governed by a single parameter α ∈ [0, 1], that constitutes another inflationary quintessential model in which the inflation and quintessence regimes are described by a power-law potential and a cosmological constant, respectively. The model is likewise nonsingular although geodesically past incomplete, as in the cited model. Moreover, the present model is simpler than its predecessor and is in excellent agreement with the observational data: unlike the previous model, a large number of the models in this family with α ∈ [0, 1/2) match both the Planck 2013 and Planck 2015 data without allowing the running. These properties justify the current family as a better cosmological model than its predecessor.
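    For orientation, the textbook single-field slow-roll estimates for a power-law potential V ∝ φ^n connect the (ns, r) discussion above to concrete numbers. These are the standard formulas, not the paper's α-parameterized family, so the values are illustrative only:

```python
# Standard slow-roll estimates for V ~ phi^n with N e-folds before the end of
# inflation (textbook formulas, not the paper's model):
#   ns = 1 - 2(n + 2) / (4N + n),   r = 16 n / (4N + n)
def slow_roll(n, N=60):
    ns = 1.0 - 2.0 * (n + 2.0) / (4.0 * N + n)
    r = 16.0 * n / (4.0 * N + n)
    return ns, r

for n in (2.0 / 3.0, 1.0, 2.0):
    ns, r = slow_roll(n)
    print(f"V ~ phi^{n:.3g}: ns = {ns:.4f}, r = {r:.4f}")
```

    Smaller exponents n predict smaller tensor-to-scalar ratios, which is why low-power potentials sit more comfortably inside the Planck (ns, r) confidence contours.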

  6. Observational tests for H II region models - A 'champagne party'

    Energy Technology Data Exchange (ETDEWEB)

    Alloin, D; Tenorio-Tagle, G

    1979-09-01

    Observations of several neighboring H II regions associated with a molecular cloud were performed in order to test the champagne model of the H II region-molecular cloud interaction that leads to the supersonic expansion of molecular cloud gas. Nine different positions in the Gum 61 nebula were observed using an image dissector scanner attached to a 3.6-m telescope, and the area is found to correspond to a low-excitation, high-density nebula, with electron densities ranging between 1400 and 2800 cm⁻³ and increasing along the boundary of the ionized gas. An observed increase in pressure and density in an interior region of the nebula is interpreted in terms of an area between two rarefaction waves generated, together with a strong isothermal shock responsible for the champagne-like streaming, by a pressure discontinuity between the ionized molecular cloud in which star formation takes place and the intercloud gas. It is noted that a velocity field determination would provide the key to understanding the evolution of such a region.

  7. A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE

    Energy Technology Data Exchange (ETDEWEB)

    Pejcha, Ondřej [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08540 (United States); Prieto, Jose L., E-mail: pejcha@astro.princeton.edu [Núcleo de Astronomía de la Facultad de Ingeniería, Universidad Diego Portales, Av. Ejército 441 Santiago (Chile)

    2015-02-01

    We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with the standard R_V ≈ 3.1 reddening law in the UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.

  8. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    Energy Technology Data Exchange (ETDEWEB)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO₂ and NOₓ emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulphur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  10. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez-Ramírez, J. C.; Raga, A. C. [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ap. 70-543, 04510 D.F., México (Mexico); Lora, V. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Cantó, J., E-mail: juan.rodriguez@nucleares.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ap. 70-468, 04510 D. F., México (Mexico)

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.

  11. SDOF models for reinforced concrete beams under impulsive loads accounting for strain rate effects

    Energy Technology Data Exchange (ETDEWEB)

    Stochino, F., E-mail: fstochino@unica.it [Department of Civil and Environmental Engineering and Architecture, University of Cagliari, Via Marengo 2, 09123 Cagliari (Italy); Carta, G., E-mail: giorgio_carta@unica.it [Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, Via Marengo 2, 09123 Cagliari (Italy)

    2014-09-15

    Highlights: • Flexural failure of reinforced concrete beams under blast and impact loads is studied. • Two single-degree-of-freedom models are formulated to predict the beam response. • Strain rate effects are taken into account in both models. • The theoretical response obtained from each model is compared with experimental data. • The two models give a good estimate of the maximum deflection at collapse. - Abstract: In this paper, reinforced concrete beams subjected to blast and impact loads are examined. Two single-degree-of-freedom models are proposed to predict the response of the beam. The first model (denoted the "energy model") is developed from the law of energy balance and assumes that the deformed shape of the beam is represented by its first vibration mode. In the second model (named the "dynamic model"), the dynamic behavior of the beam is simulated by a spring-mass oscillator. In both formulations, the strain rate dependence of the constitutive properties of the beams is considered by varying the parameters of the models at each time step of the computation according to the strain rates of the materials (i.e., concrete and reinforcing steel). The efficiency of each model is evaluated by comparing the theoretical results with experimental data found in the literature. The comparison shows that the energy model gives a good estimate of the maximum deflection of the beam at collapse, defined as the attainment of the ultimate strain in concrete. The dynamic model, on the other hand, generally yields a smaller value of the maximum displacement. However, both approaches produce reliable results, even though they are based on some approximations. Being also very simple to implement, they may serve as a useful tool in practical applications.
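    The "dynamic model" idea — a spring-mass oscillator driven by an impulsive load — can be sketched with a central-difference time integration. The mass, stiffness, and pulse parameters below are illustrative, and unlike the paper's models this sketch keeps the stiffness constant rather than updating it with strain rate at each step:

```python
# Minimal SDOF sketch: spring-mass oscillator under a short triangular
# impulse, integrated with the explicit central-difference scheme.
# All parameter values are illustrative, not from the paper's experiments.
import math

m = 200.0                 # mass [kg]
k = 5.0e6                 # stiffness [N/m]
dt = 1.0e-5               # time step [s], well below the stability limit 2/omega
t_end = 0.05
td, P0 = 0.002, 4.0e5     # triangular pulse: duration [s] and peak force [N]

def force(t):
    return P0 * (1.0 - t / td) if t < td else 0.0

# central difference: u_{n+1} = 2 u_n - u_{n-1} + dt^2 (F_n - k u_n) / m
u_prev, u = 0.0, 0.0      # at-rest initial conditions
u_max = 0.0
t = 0.0
while t < t_end:
    u_next = 2.0 * u - u_prev + dt * dt * (force(t) - k * u) / m
    u_prev, u = u, u_next
    t += dt
    u_max = max(u_max, abs(u))

omega = math.sqrt(k / m)                 # natural circular frequency [rad/s]
u_static = P0 / k                        # static deflection under peak force
print(u_max, u_max / u_static)           # dynamic amplification factor
```

    Because the pulse here is much shorter than the natural period, the response is impulse-controlled and the peak deflection stays well below the static deflection; a strain-rate-dependent version would recompute k (and the resistance function) inside the loop.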

  12. Equilibrium modeling of mono and binary sorption of Cu(II) and Zn(II) onto chitosan gel beads

    Directory of Open Access Journals (Sweden)

    Nastaj Józef

    2016-12-01

    The objective of this work is an in-depth experimental study of Cu(II) and Zn(II) ion removal on chitosan gel beads from both one- and two-component aqueous solutions at a temperature of 303 K. The optimal process conditions, such as pH, sorbent dose and contact time, were determined, and equilibrium and kinetic studies were carried out under them. The maximum sorption capacities were 191.25 mg/g and 142.88 mg/g for Cu(II) and Zn(II) ions respectively, with a sorbent dose of 10 g/L and a solution pH of 5.0 for both heavy metal ions. One-component sorption equilibrium data were successfully described by six of the most useful three-parameter equilibrium models: Langmuir-Freundlich, Redlich-Peterson, Sips, Koble-Corrigan, Hill and Toth. Extended forms of the Langmuir-Freundlich, Koble-Corrigan and Sips models were also well fitted to the two-component equilibrium data obtained for different concentration ratios of Cu(II) and Zn(II) ions (1:1, 1:2, 2:1). Experimental sorption data were described by pseudo-first-order and pseudo-second-order kinetic models. Furthermore, an attempt was made to explain the mechanisms of divalent metal ion sorption on chitosan gel beads.
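    Fitting one of the named three-parameter isotherms can be sketched as a nonlinear least-squares problem. The data below are synthetic (made-up parameter values roughly on the scale of the reported capacities), not the study's measurements:

```python
# Sketch of fitting a three-parameter Sips (Langmuir-Freundlich) isotherm,
#   q = qm * (Ks * Ce)**n / (1 + (Ks * Ce)**n),
# to synthetic single-component data with illustrative parameters.
import numpy as np
from scipy.optimize import curve_fit

def sips(Ce, qm, Ks, n):
    x = (Ks * Ce) ** n
    return qm * x / (1.0 + x)

qm_true, Ks_true, n_true = 190.0, 0.05, 1.2      # illustrative "true" values
Ce = np.linspace(5.0, 400.0, 25)                 # equilibrium conc. [mg/L]
q_obs = sips(Ce, qm_true, Ks_true, n_true)       # noise-free synthetic data

popt, _ = curve_fit(sips, Ce, q_obs, p0=[150.0, 0.01, 1.0])
qm_fit, Ks_fit, n_fit = popt
print(qm_fit, Ks_fit, n_fit)
```

    With real data one would fit all six candidate isotherms this way and compare them by an error or information criterion, which is how studies like this one rank the models.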

  13. Research on mouse model of grade II corneal alkali burn

    Directory of Open Access Journals (Sweden)

    Jun-Qiang Bai

    2016-04-01

    Full Text Available AIM: To choose an appropriate concentration of sodium hydroxide (NaOH) solution to establish a stable and consistent grade II corneal alkali burn mouse model. METHODS: Mice (n=60) were randomly divided into four groups of 15 mice each. Corneal alkali burns were induced by placing circles of filter paper soaked with NaOH solution on the right central cornea for 30s. The NaOH concentrations in groups A, B, C, and D were 0.1 mol/L, 0.15 mol/L, 0.2 mol/L, and 1.0 mol/L respectively. The corneas were then irrigated with 20 mL physiological saline (0.9% NaCl). On day 7 postburn, a slit lamp microscope was used to assess corneal opacity, the corneal epithelial sodium fluorescein staining positive rate, and the incidence of corneal ulcer and corneal neovascularization, and pictures of the anterior eyes were taken. Cirrus spectral-domain optical coherence tomography was used to scan the cornea for corneal epithelial defects and corneal ulcers. RESULTS: Corneal opacity scores were not significantly different between group A and group B (P=0.097). The incidence of corneal ulcer in group B was significantly higher than that in group A (P=0.035). The incidence of corneal ulcer and the perforation rate in group B were lower than those in group C. Groups C and D showed corneal neovascularization, and its incidence in group D was significantly higher than that in group C (P=0.000). CONCLUSION: Using 0.15 mol/L NaOH can establish a grade II mouse model of corneal alkali burns.

  14. A three-dimensional model of mammalian tyrosinase active site accounting for loss of function mutations.

    Science.gov (United States)

    Schweikardt, Thorsten; Olivares, Concepción; Solano, Francisco; Jaenicke, Elmar; García-Borrón, José Carlos; Decker, Heinz

    2007-10-01

    Tyrosinases are the first and rate-limiting enzymes in the synthesis of the melanin pigments responsible for colouring hair, skin and eyes. Mutation of tyrosinases often decreases melanin production resulting in albinism, but the effects are not always understood at the molecular level. Homology modelling of mouse tyrosinase based on recently published crystal structures of non-mammalian tyrosinases provides an active site model accounting for loss-of-function mutations. According to the model, the copper-binding histidines are located in a helix bundle comprising four densely packed helices. A loop containing residues M374, S375 and V377 connects the CuA and CuB centres, with the peptide oxygens of M374 and V377 serving as hydrogen acceptors for the NH-groups of the imidazole rings of the copper-binding His367 and His180. This loop is therefore essential for the stability of the active site architecture. A double substitution (374)MS(375) --> (374)GG(375) or a single M374G mutation leads to a local perturbation of the protein matrix at the active site affecting the orientation of the H367 side chain, which may then be unable to bind CuB reliably, resulting in loss of activity. The model also accounts for loss of function in two naturally occurring albino mutations, S380P and V393F. The hydroxyl group of S380 contributes to the correct orientation of M374, and the substitution of V393 by the bulkier phenylalanine sterically impedes correct side chain packing at the active site. Our model therefore explains the mechanistic necessity for conservation not only of the active site histidines but also of adjacent amino acids in tyrosinase.

  15. Life system modeling and intelligent computing. Pt. II. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang; Irwin, George W. (eds.) [Belfast Queen's Univ. (United Kingdom). School of Electronics, Electrical Engineering and Computer Science; Fei, Minrui; Jia, Li [Shanghai Univ. (China). School of Mechatronical Engineering and Automation

    2010-07-01

    This book is part II of a two-volume work that contains the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2010, and the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2010, held in Wuxi, China, in September 2010. The 194 revised full papers presented were carefully reviewed and selected from over 880 submissions and recommended for publication by Springer in two volumes of Lecture Notes in Computer Science (LNCS) and one volume of Lecture Notes in Bioinformatics (LNBI). This particular volume of Lecture Notes in Computer Science (LNCS) includes 55 papers covering 7 relevant topics. The papers in this volume are organized in topical sections on advanced evolutionary computing theory and algorithms; advanced neural network and fuzzy system theory and algorithms; modeling and simulation of societies and collective behavior; biomedical signal processing, imaging, and visualization; intelligent computing and control in distributed power generation systems; intelligent methods in power and energy infrastructure development; and intelligent modeling, monitoring, and control of complex nonlinear systems. (orig.)

  16. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    Directory of Open Access Journals (Sweden)

    Czoli Christine

    2011-10-01

    Full Text Available Abstract Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent of these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make reference to something like the weak version of the similarity position, the pediatric physician-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude either that physician-researchers need better training regarding the nature of the accountability relationships that flow from their dual roles, or that the models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment.

  17. Response Mixture Modeling: Accounting for Heterogeneity in Item Characteristics across Response Times.

    Science.gov (United States)

    Molenaar, Dylan; de Boeck, Paul

    2018-06-01

    In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.

  18. A Bidirectional Subsurface Remote Sensing Reflectance Model Explicitly Accounting for Particle Backscattering Shapes

    Science.gov (United States)

    He, Shuangyan; Zhang, Xiaodong; Xiong, Yuanheng; Gray, Deric

    2017-11-01

    The subsurface remote sensing reflectance (rrs, sr^-1), particularly its bidirectional reflectance distribution function (BRDF), depends fundamentally on the angular shape of the volume scattering functions (VSFs, m^-1 sr^-1). Recent technological advancement has greatly expanded the collection, and the knowledge of natural variability, of the VSFs of oceanic particles. This allows us to test Zaneveld's theoretical rrs model, which explicitly accounts for particle VSF shapes. We parameterized the rrs model based on HydroLight simulations using 114 VSFs measured in three coastal waters around the United States and in oceanic waters of the North Atlantic Ocean. With the absorption coefficient (a), backscattering coefficient (bb), and VSF shape as inputs, the parameterized model is able to predict rrs with a root mean square relative error of ~4% for solar zenith angles from 0 to 75°, viewing zenith angles from 0 to 60°, and viewing azimuth angles from 0 to 180°. A test with the field data indicates that the performance of our model, when using only a and bb as inputs and selecting the VSF shape using bb, is comparable to or slightly better than the currently used models by Morel et al. and Lee et al. Explicitly expressing VSF shapes in rrs modeling has great potential to further constrain the uncertainty in ocean color studies as our knowledge of the VSFs of natural particles continues to improve. Our study represents a first effort in this direction.
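    For orientation, the baseline that models of this kind refine is the widely used quadratic approximation rrs ≈ g1·u + g2·u^2, with u = bb/(a + bb) and Gordon-style coefficients. The sketch below implements only this baseline; the paper's parameterization, which adds explicit VSF-shape and viewing-geometry dependence, is not reproduced here.

```python
# Hedged baseline only: the classic quadratic rrs approximation with
# commonly quoted Gordon-style coefficients. This is NOT the VSF-shape-
# explicit model of the paper, just the standard starting point.

def rrs_quadratic(a, bb, g1=0.0949, g2=0.0794):
    """Subsurface remote sensing reflectance (sr^-1) from the
    absorption coefficient a and backscattering coefficient bb
    (both m^-1), via u = bb / (a + bb)."""
    u = bb / (a + bb)
    return g1 * u + g2 * u ** 2

if __name__ == "__main__":
    # Clear-ish water example: a = 0.06 m^-1, bb = 0.004 m^-1
    print(f"rrs = {rrs_quadratic(0.06, 0.004):.5f} sr^-1")
```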

  19. Palaeomagnetic dating method accounting for post-depositional remanence and its application to geomagnetic field modelling

    Science.gov (United States)

    Nilsson, A.; Suttie, N.

    2016-12-01

    Sedimentary palaeomagnetic data may exhibit some degree of smoothing of the recorded field due to the gradual processes by which the magnetic signal is `locked in' over time. Here we present a new Bayesian method to construct age-depth models based on palaeomagnetic data, taking into account and correcting for potential lock-in delay. The age-depth model is built on the widely used "Bacon" dating software by Blaauw and Christen (2011, Bayesian Analysis 6, 457-474) and is designed to combine both radiocarbon and palaeomagnetic measurements. To our knowledge, this is the first palaeomagnetic dating method that addresses the potential problems related to post-depositional remanent magnetisation acquisition in age-depth modelling. Age-depth models, including a site-specific lock-in depth and lock-in filter function, produced with this method are shown to be consistent with independent results based on radiocarbon wiggle-match dated sediment sections. Besides its primary use as a dating tool, our new method can also be used specifically to identify the most likely lock-in parameters for a given record. We explore the potential to use these results to construct high-resolution geomagnetic field models based on sedimentary palaeomagnetic data, adjusting for the smoothing induced by post-depositional remanent magnetisation acquisition. Potentially, this technique could enable reconstructions of the Holocene geomagnetic field with the same amplitude of variability observed in archaeomagnetic field models for the past three millennia.
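    The smoothing that the method corrects for is commonly idealized as a depth-domain filter applied to the geomagnetic signal. The sketch below assumes an exponential lock-in function purely for illustration; the paper estimates the site-specific lock-in depth and filter shape rather than fixing them.

```python
import math

# Assumed exponential lock-in filter (illustrative shape only).
def lockin_filter(half_depth, dz, n):
    """Normalized weights decaying with e-folding depth half_depth."""
    w = [math.exp(-i * dz / half_depth) for i in range(n)]
    s = sum(w)
    return [x / s for x in w]

def smooth(signal, filt):
    """Recorded signal at each depth as a weighted average of the
    field signal over an adjacent depth interval."""
    out = []
    for i in range(len(signal)):
        acc = wsum = 0.0
        for j, w in enumerate(filt):
            if i - j >= 0:
                acc += w * signal[i - j]
                wsum += w
        out.append(acc / wsum)
    return out

if __name__ == "__main__":
    filt = lockin_filter(half_depth=10.0, dz=1.0, n=20)
    spike = [0.0] * 50
    spike[25] = 1.0                  # a sharp field feature
    print(max(smooth(spike, filt)))  # attenuated well below 1.0
```

    Inverting such a filter (rather than merely applying it) is what allows the paper to recover sharper field variability from smoothed sediment records.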

  20. Toward a formalized account of attitudes: The Causal Attitude Network (CAN) model.

    Science.gov (United States)

    Dalege, Jonas; Borsboom, Denny; van Harreveld, Frenk; van den Berg, Helma; Conner, Mark; van der Maas, Han L J

    2016-01-01

    This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions between these reactions arise through direct causal influences (e.g., the belief that snakes are dangerous causes fear of snakes) and mechanisms that support evaluative consistency between related contents of evaluative reactions (e.g., people tend to align their belief that snakes are useful with their belief that snakes help maintain ecological balance). In the CAN model, the structure of attitude networks conforms to a small-world structure: evaluative reactions that are similar to each other form tight clusters, which are connected by a sparser set of "shortcuts" between them. We argue that the CAN model provides a realistic formalized measurement model of attitudes and therefore fills a crucial gap in the attitude literature. Furthermore, the CAN model provides testable predictions for the structure of attitudes and for how they develop, remain stable, and change over time. Attitude strength is conceptualized in terms of the connectivity of attitude networks, and we show that this provides a parsimonious account of the differences between strong and weak attitudes. We discuss the CAN model in relation to possible extensions, implications for the assessment of attitudes, and possibilities for further study. (c) 2015 APA, all rights reserved.

  1. PHYSICS OF ECLIPSING BINARIES. II. TOWARD THE INCREASED MODEL FIDELITY

    Energy Technology Data Exchange (ETDEWEB)

    Prša, A.; Conroy, K. E.; Horvat, M.; Kochoska, A.; Hambleton, K. M. [Villanova University, Dept. of Astrophysics and Planetary Sciences, 800 E Lancaster Avenue, Villanova PA 19085 (United States); Pablo, H. [Université de Montréal, Pavillon Roger-Gaudry, 2900, boul. Édouard-Montpetit Montréal QC H3T 1J4 (Canada); Bloemen, S. [Radboud University Nijmegen, Department of Astrophysics, IMAPP, P.O. Box 9010, 6500 GL, Nijmegen (Netherlands); Giammarco, J. [Eastern University, Dept. of Astronomy and Physics, 1300 Eagle Road, St. Davids, PA 19087 (United States); Degroote, P. [KU Leuven, Instituut voor Sterrenkunde, Celestijnenlaan 200D, B-3001 Heverlee (Belgium)

    2016-12-01

    The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars has strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures, and luminosities), yet the models are not capable of reproducing observed data well, either because of missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, hitherto buried in noise, started showing up routinely in the data, but were not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. In particular, we discuss triangulation as a superior surface discretization algorithm, meshing of rotating single stars, light travel time effects, advanced phase computation, volume conservation in eccentric orbits, and improved computation of local intensity across the stellar surfaces that includes the photon-weighted mode, enhanced limb darkening treatment, better reflection treatment, and Doppler boosting. Here we present the concepts on which PHOEBE is built and proofs of concept that demonstrate the increased model fidelity.

  2. Modeling Degradation in Solid Oxide Electrolysis Cells - Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Manohar Motwani

    2011-09-01

    Idaho National Laboratory has an ongoing project to generate hydrogen from steam using solid oxide electrolysis cells (SOECs). To accomplish this, technical and degradation issues associated with the SOECs will need to be addressed. This report covers various approaches being pursued to model degradation issues in SOECs. An electrochemical model for degradation of SOECs is presented. The model is based on concepts of local thermodynamic equilibrium in systems otherwise in global thermodynamic non-equilibrium. It is shown that electronic conduction through the electrolyte, however small, must be taken into account when determining the local oxygen chemical potential within the electrolyte. The oxygen chemical potential within the electrolyte may lie out of bounds in relation to its values at the electrodes in the electrolyzer mode. Under certain conditions, high pressures can develop in the electrolyte just near the oxygen electrode/electrolyte interface, leading to oxygen electrode delamination. These predictions are in accordance with the reported literature on the subject. The development of high pressures may be avoided by introducing some electronic conduction in the electrolyte. Calculations combining equilibrium thermodynamics, non-equilibrium (diffusion) modeling, and first-principles atomic-scale methods were performed to understand the degradation mechanisms and provide practical recommendations on how to inhibit and/or completely mitigate them.

  3. Accounting for sex differences in PTSD: A multi-variable mediation model.

    Science.gov (United States)

    Christiansen, Dorte M; Hansen, Maj

    2015-01-01

    Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually. The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. The study was a quasi-prospective questionnaire survey assessing PTSD and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N=450) and 6 months after the robbery (T2, N=368; 61.1% females). Mediation was examined using an analysis designed specifically to test a multiple mediator model. Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. The findings suggest that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma-related psychiatric disorders more prevalent in females, such as depression.
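    The mediation logic can be illustrated with a toy single-mediator simulation: the share of the sex difference carried by a mediator is (total effect - direct effect) / total effect, here estimated with ordinary regressions via the Frisch-Waugh step. The data and coefficients below are invented; the study itself used a dedicated multiple-mediator analysis.

```python
import random

# Toy single-mediator sketch on simulated data (all coefficients
# invented; not the study's multiple-mediator analysis).
random.seed(1)
n = 5000
sex = [random.randint(0, 1) for _ in range(n)]               # 1 = female
mediator = [2.0 * s + random.gauss(0.0, 1.0) for s in sex]   # one risk factor
ptsd = [1.5 * m + 0.5 * s + random.gauss(0.0, 1.0)
        for s, m in zip(sex, mediator)]                      # symptom severity

def ols_slope(y, x):
    """Simple OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def resid(y, x):
    """Residuals of y after regressing out x (Frisch-Waugh step)."""
    b = ols_slope(y, x)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

total = ols_slope(ptsd, sex)                      # total sex effect
direct = ols_slope(resid(ptsd, mediator), resid(sex, mediator))
prop_mediated = (total - direct) / total          # cf. the study's 83%
print(f"total={total:.2f}, direct={direct:.2f}, "
      f"mediated share={prop_mediated:.0%}")
```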

  4. Accounting for sex differences in PTSD: A multi-variable mediation model

    Directory of Open Access Journals (Sweden)

    Dorte M. Christiansen

    2015-01-01

    Full Text Available Background: Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used methods that were not ideally suited to test for mediation effects. Prior research has identified a number of individual risk factors that may contribute to sex differences in PTSD severity, although these cannot fully account for the increased symptom levels in females when examined individually. Objective: The present study is the first to systematically test the hypothesis that a combination of pre-, peri-, and posttraumatic risk factors more prevalent in females can account for sex differences in PTSD severity. Method: The study was a quasi-prospective questionnaire survey assessing PTSD and related variables in 73.3% of all Danish bank employees exposed to bank robbery during the period from April 2010 to April 2011. Participants filled out questionnaires 1 week (T1, N=450) and 6 months after the robbery (T2, N=368; 61.1% females). Mediation was examined using an analysis designed specifically to test a multiple mediator model. Results: Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic cognitions about self and the world, and feeling let down. These variables were included in the model as potential mediators. The combination of risk factors significantly mediated the association between sex and PTSD severity, accounting for 83% of the association. Conclusions: The findings suggest that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma-related psychiatric disorders more prevalent in females, such as depression.

  5. Biosorption optimization of lead(II), cadmium(II) and copper(II) using response surface methodology and applicability in isotherms and thermodynamics modeling

    International Nuclear Information System (INIS)

    Singh, Rajesh; Chadetrik, Rout; Kumar, Rajender; Bishnoi, Kiran; Bhatia, Divya; Kumar, Anil; Bishnoi, Narsi R.; Singh, Namita

    2010-01-01

    The present study was carried out to optimize the various environmental conditions for biosorption of Pb(II), Cd(II) and Cu(II), investigated as a function of the initial metal ion concentration, temperature, biosorbent loading and pH, using Trichoderma viride as the adsorbent. Biosorption of ions from aqueous solution was optimized in a batch system using response surface methodology. R^2 values of 0.9716, 0.9699 and 0.9982 for Pb(II), Cd(II) and Cu(II) ions, respectively, indicated the validity of the model. The thermodynamic properties ΔG°, ΔH°, ΔE° and ΔS° of metal ion biosorption were analyzed using the equilibrium constant values obtained from experimental data at different temperatures. The results showed that biosorption of Pb(II) ions by the T. viride adsorbent is more endothermic and spontaneous. The study attempted to offer a better understanding of representative biosorption isotherms and thermodynamics, with a special focus on the binding mechanism probed by FTIR spectroscopy.
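    The thermodynamic quantities listed above follow from standard relations: ΔG° = -RT ln K and the van't Hoff equation ln K = -ΔH°/(RT) + ΔS°/R. The sketch below applies them to invented equilibrium constants at two temperatures (not the paper's data); a positive ΔH° together with a negative ΔG° corresponds to the reported endothermic and spontaneous behavior.

```python
import math

# Standard relations, applied to illustrative equilibrium constants
# (not the paper's data): dG = -R*T*ln(K) and the van't Hoff equation
# ln(K) = -dH/(R*T) + dS/R fitted from K at two temperatures.
R = 8.314  # J/(mol*K)

def vant_hoff(T1, K1, T2, K2):
    """Return (dH, dS) in J/mol and J/(mol*K) from K at two temperatures."""
    dH = R * math.log(K2 / K1) / (1.0 / T1 - 1.0 / T2)
    dS = dH / T1 + R * math.log(K1)  # from ln K1 = -dH/(R*T1) + dS/R
    return dH, dS

def gibbs(T, K):
    """dG = -R*T*ln(K); negative dG indicates a spontaneous process."""
    return -R * T * math.log(K)

if __name__ == "__main__":
    dH, dS = vant_hoff(298.0, 2.0, 318.0, 3.5)
    print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K), "
          f"dG(298 K) = {gibbs(298.0, 2.0) / 1000:.2f} kJ/mol")
```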

  6. Biosorption optimization of lead(II), cadmium(II) and copper(II) using response surface methodology and applicability in isotherms and thermodynamics modeling

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Rajesh; Chadetrik, Rout; Kumar, Rajender; Bishnoi, Kiran; Bhatia, Divya; Kumar, Anil [Department of Environmental Science and Engineering, Guru Jambheshwar University of Science and Technology, Hisar 125001, Haryana (India); Bishnoi, Narsi R., E-mail: nrbishnoi@gmail.com [Department of Environmental Science and Engineering, Guru Jambheshwar University of Science and Technology, Hisar 125001, Haryana (India); Singh, Namita [Department of Bio and Nanotechnology, Guru Jambheshwar University of Science and Technology, Hisar 125001, Haryana (India)

    2010-02-15

    The present study was carried out to optimize the various environmental conditions for biosorption of Pb(II), Cd(II) and Cu(II), investigated as a function of the initial metal ion concentration, temperature, biosorbent loading and pH, using Trichoderma viride as the adsorbent. Biosorption of ions from aqueous solution was optimized in a batch system using response surface methodology. R^2 values of 0.9716, 0.9699 and 0.9982 for Pb(II), Cd(II) and Cu(II) ions, respectively, indicated the validity of the model. The thermodynamic properties ΔG°, ΔH°, ΔE° and ΔS° of metal ion biosorption were analyzed using the equilibrium constant values obtained from experimental data at different temperatures. The results showed that biosorption of Pb(II) ions by the T. viride adsorbent is more endothermic and spontaneous. The study attempted to offer a better understanding of representative biosorption isotherms and thermodynamics, with a special focus on the binding mechanism probed by FTIR spectroscopy.

  7. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    Science.gov (United States)

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of an experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related to inaccurate measurement technique or to the validity of measurements, seem not to be well known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in the lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated to reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
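    The core problem the SEM approach addresses can be shown in a short simulation: measurement error in a predictor attenuates its regression slope by the reliability ratio var(T)/(var(T) + var(E)), and combining several indicators of the latent construct, which SEM does in a more principled way, recovers much of the lost effect. All values below are illustrative.

```python
import random

# Attenuation demo (illustrative values): true slope is 1, each proxy
# has unit error variance, so one proxy has reliability 0.5 and the
# mean of three independent proxies has reliability 0.75.
random.seed(7)
n = 20000
true_effort = [random.gauss(0.0, 1.0) for _ in range(n)]
outcome = [t + random.gauss(0.0, 1.0) for t in true_effort]

def slope(y, x):
    """Simple OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Three noisy indicators of the latent construct.
proxies = [[t + random.gauss(0.0, 1.0) for t in true_effort]
           for _ in range(3)]
avg_proxy = [sum(p[i] for p in proxies) / 3.0 for i in range(n)]

b_single = slope(outcome, proxies[0])  # attenuated toward ~0.5
b_avg = slope(outcome, avg_proxy)      # attenuated only toward ~0.75
print(f"single indicator: {b_single:.2f}, averaged indicators: {b_avg:.2f}")
```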

  8. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    Science.gov (United States)

    Rozanov, V. V.; Dinter, T.; Rozanov, A. V.; Wolanin, A.; Bracher, A.; Burrows, J. P.

    2017-06-01

    SCIATRAN is a comprehensive software package designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18-40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean-atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which accurately takes into account inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean-atmosphere radiative transfer solver presented by Rozanov et al. [61], we have implemented the simulation of rotational Raman scattering, vibrational Raman scattering, and chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how selected radiative transfer problems are solved using the SCIATRAN package. In addition, we present selected comparisons of SCIATRAN simulations with published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package, along with a detailed User's Guide, is made available for scientists and students who are undertaking their own research, typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen: http://www.iup.physik.uni-bremen.de.

  9. Implementation of a cost-accounting model in a biobank: practical implications.

    Science.gov (United States)

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

    Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use them more efficiently are making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences on the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real, active biobank, including a comprehensive explanation of how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. The global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the level of governmental research and development policies.

  10. A Modified Model to Estimate Building Rental Multipliers Accounting for Ad Valorem Operating Expenses

    Directory of Open Access Journals (Sweden)

    Smolyak S.A.

    2016-09-01

    Full Text Available To develop the ideas on building element valuation contained in the first article on the subject published in REMV, we propose an elaboration of the approach that accounts for ad valorem expenses incidental to property management, such as land taxes, income/capital gains taxes, and insurance premium costs. All such costs, being of an ad valorem nature in the first instance, cause circularity in the logic of the model, which, however, is not intractable under the proposed approach. The resulting formulas for the practical estimation of building rental multipliers, and in consequence of building values, turn out to be somewhat modified, and we demonstrate the sensitivity of the developed approach to the impact of these ad valorem factors. On the other hand, it is demonstrated that building depreciation charges, which might seem to belong among the ad valorem factors considered, cancel out and do not have any impact on the resulting estimates. However, treating the depreciation of buildings in quantifiable economic terms as a reduction in derivable operating benefits over time (instead of mere physical indications, such as age), we also demonstrate that the approach has implications for estimating the economic service lives of buildings and can be practical when used in conjunction with the market-related approach to valuation, from which the requisite model inputs can be extracted as shown in the final part of the paper.
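    The circularity described above can be made concrete with a toy example: if an ad valorem charge at rate t on value V enters the operating expenses, then V = (NOI - t·V)/R, which has the closed form V = NOI/(R + t) and also converges under simple fixed-point iteration. The numbers below are illustrative; the article's formulas for rental multipliers are more elaborate.

```python
# Toy illustration of ad valorem circularity (invented numbers, not the
# article's model): value depends on expenses, which depend on value.

def value_iterative(noi, cap_rate, tax_rate, tol=1e-9):
    """Fixed-point iteration V <- (NOI - t*V) / R; converges because
    the contraction factor t/R is below 1 for realistic rates."""
    v = noi / cap_rate  # start by ignoring the ad valorem charge
    while True:
        v_new = (noi - tax_rate * v) / cap_rate
        if abs(v_new - v) < tol:
            return v_new
        v = v_new

def value_closed_form(noi, cap_rate, tax_rate):
    """Solving V = (NOI - t*V)/R directly gives V = NOI / (R + t)."""
    return noi / (cap_rate + tax_rate)

if __name__ == "__main__":
    print(value_iterative(100000.0, 0.08, 0.01),
          value_closed_form(100000.0, 0.08, 0.01))
```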

  11. Standardized facility record and report model system (FARMS) for material accounting and control

    International Nuclear Information System (INIS)

    Nishimura, Hideo; Ihara, Hitoshi; Hisamatsu, Yoshinori.

    1990-07-01

    A facility in which nuclear materials are handled maintains a facility system of accounting for and control of nuclear material. Such a system contains, as one of its key elements, a record and report system. This record and report information system is rather complex because it needs to conform to various requirements from the national and international safeguards authorities and from the plant operator, who has to achieve safe and economical operation of the plant. It is therefore practically mandatory to computerize such an information system. The authors have reviewed these requirements and standardized the book-keeping and reporting procedures in line with their computerization. On the basis of this result the authors have developed a computer system, FARMS, an acronym for standardized facility record and report model system, mainly reflecting the requirements of the national and international safeguards authorities. The development of FARMS has also been carried out as a JASPAS (Japan Support Programme for Agency Safeguards) project since 1985, and the FARMS code was demonstrated as an accountancy tool in the regional SSAC training courses held in Japan in 1985 and 1987. This report describes the standardization of a record and report system at the facility level, its computerization as a model system, and the demonstration of the developed system, FARMS. (author)

  12. MODELLING OF THERMOELASTIC TRANSIENT CONTACT INTERACTION FOR BINARY BEARING TAKING INTO ACCOUNT CONVECTION

    Directory of Open Access Journals (Sweden)

    Igor KOLESNIKOV

    2016-12-01

    Full Text Available The serviceability of metal-polymer "dry-friction" sliding bearings depends on many parameters, including the rotational speed, the friction coefficient, the thermal and mechanical properties of the bearing system and, as a result, the value of the contact temperature. The objective of this study is to develop a computational model of the metal-polymer bearing, to determine on the basis of this model the temperature distribution and the equivalent and contact stresses for the elements of the bearing arrangement, and to select the optimal parameters of the bearing system to achieve thermal balance. The static problem for the combined sliding bearing taking into account heat generation due to friction was studied in [1]; the dynamic thermoelastic problems of shaft rotation in single-layer and double-layer bronze bearings were investigated in [2, 3].

  13. Calibration of an experimental model of tritium storage bed designed for 'in situ' accountability

    International Nuclear Information System (INIS)

    Bidica, Nicolae; Stefanescu, Ioan; Bucur, Ciprian; Bulubasa, Gheorghe; Deaconu, Mariea

    2009-01-01

    Full text: Objectives: Tritium accountancy of the storage beds in tritium facilities is an important issue for tritium inventory control. The purpose of our work was to calibrate an experimental model of a tritium storage bed with a special design, using electric heaters to simulate tritium decay, and to evaluate the detection limit of the accountancy method. The objective of this paper is to present the experimental method used for calibration of the storage bed and the experimental results, consisting of calibration curves and the detection limit. Our method is based on a 'self-assaying' tritium storage bed. The basic characteristics of the design of our storage bed are, in principle, a uniform distribution of the storage material on several thin copper fins (in order to obtain a uniform temperature field inside the bed), an electrical heat source to simulate the tritium decay heat, a system of thermocouples for measuring the temperature field inside the bed, and good thermal isolation of the bed from the external environment. With this design, the tritium accounting method is based on determining the decay heat of tritium by measuring the temperature increase of the isolated storage bed. The experimental procedure consisted of measuring the temperature field inside the bed for several values of the power injected with the aid of the electrical heat source. Data were collected for a few hours and the temperature increase rate was determined for each value of the injected power. A graphical representation of temperature rise versus injected power was obtained. This accounting method for tritium inventory stored as metal tritide is a reliable solution for in-situ tritium accountability in a tritium handling facility. Several improvements can be made to the design of the storage bed in order to improve the measurement accuracy and to obtain a lower detection limit, for instance the use of more accurate thermocouples or special
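
The calibration idea described above can be sketched numerically: fit a line to (injected power, temperature-rise rate) pairs, invert it to convert a measured rise rate into decay power, and divide by tritium's specific decay heat (about 0.324 W per gram) to get inventory. The calibration points below are synthetic, not the paper's data.

```python
# Sketch of the calibration-curve step (synthetic numbers, ~0.2 K/h per W).
powers = [0.5, 1.0, 2.0, 4.0]       # W injected by the electrical heater
rates  = [0.11, 0.21, 0.41, 0.81]   # K/h temperature-rise rate observed

# Ordinary least-squares line: rate = slope * power + intercept.
n = len(powers)
mean_p = sum(powers) / n
mean_r = sum(rates) / n
slope = (sum((p - mean_p) * (r - mean_r) for p, r in zip(powers, rates))
         / sum((p - mean_p) ** 2 for p in powers))
intercept = mean_r - slope * mean_p

# Invert the calibration for a loaded bed, then convert power to grams of tritium.
measured_rate = 0.33                         # K/h from the self-assaying bed
decay_power = (measured_rate - intercept) / slope
grams_T = decay_power / 0.324                # specific decay heat of tritium, W/g
print(round(decay_power, 2), "W ->", round(grams_T, 2), "g T")
```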

  14. A model proposal concerning balance scorecard application integrated with resource consumption accounting in enterprise performance management

    Directory of Open Access Journals (Sweden)

    ORHAN ELMACI

    2014-06-01

    Full Text Available The present study intended to investigate the “Balance Scorecard (BSC model integrated with Resource Consumption Accounting (RCA” which helps to evaluate the enterprise as matrix structure in its all parts. It aims to measure how much tangible and intangible values (assets of enterprises contribute to the enterprises. In other words, it measures how effectively, actively, and efficiently these values (assets are used. In short, it aims to measure sustainable competency of enterprises. As expressing the effect of tangible and intangible values (assets of the enterprise on the performance in mathematical and statistical methods is insufficient, it is targeted that RCA Method integrated with BSC model is based on matrix structure and control models. The effects of all complex factors in the enterprise on the performance (productivity and efficiency estimated algorithmically with cause and effect diagram. The contributions of matrix structures for reaching the management functional targets of the enterprises that operate in market competitive environment increasing day to day, is discussed. So in the context of modern management theories, as a contribution to BSC approach which is in the foreground in today’s administrative science of enterprises in matrix organizational structures, multidimensional performance evaluation model -RCA integrated with BSC Model proposal- is presented as strategic planning and strategic evaluation instrument.

  15. A one-dimensional Q-machine model taking into account charge-exchange collisions

    International Nuclear Information System (INIS)

    Maier, H.; Kuhn, S.

    1992-01-01

    The Q-machine is a nontrivial bounded plasma system which is excellently suited not only for fundamental plasma physics investigations but also for the development and testing of new theoretical methods for modeling such systems. However, although Q-machines have now been around for over thirty years, it appears that there exist no comprehensive theoretical models taking into account their considerable geometrical and physical complexity with a reasonable degree of self-consistency. In the present context we are concerned with the low-density, single-emitter Q-machine, for which the most widely used model is probably the (one-dimensional) ''collisionless plane-diode model'', which has originally been developed for thermionic diodes. Although the validity of this model is restricted to certain ''axial'' phenomena, we consider it a suitable starting point for extensions of various kinds. While a generalization to two-dimensional geometry (with still collisionless plasma) is being reported elsewhere, the present work represents a first extension to collisional plasma (with still one-dimensional geometry). (author) 12 refs., 2 figs

  16. Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.

    Science.gov (United States)

    Martínez, C A; Khare, K; Rahman, S; Elzo, M A

    2017-10-01

    Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, and this is an area that has recently expanded greatly. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian, and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in the correlation between phenotypes and predicted breeding values and in the accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit accommodation of general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow the incorporation of biological information in the prediction process through its use when constructing graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
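
A minimal illustration of the zero-pattern constraint in a covariance graph model (distinct from the inverse-covariance zeros of concentration graphs): absence of an edge in G forces the corresponding covariance entry to zero, while edges license non-zero covariances between marker effects. The graph and covariance values below are invented for the sketch.

```python
# Toy Gaussian covariance graph: edges (0,1) and (2,3) only, so the covariance
# matrix of 4 marker effects is block-diagonal; all other off-diagonals are zero.
import numpy as np

edges = {(0, 1), (2, 3)}          # hypothetical undirected graph G on 4 markers
p = 4
Sigma = np.eye(p)
for (i, j) in edges:
    Sigma[i, j] = Sigma[j, i] = 0.5   # hypothetical edge covariances

# Pairs not joined in G have exactly zero covariance, and Sigma is still a
# valid (positive definite) covariance matrix.
assert Sigma[0, 2] == 0.0 and Sigma[1, 3] == 0.0
assert np.all(np.linalg.eigvalsh(Sigma) > 0)

# Correlated marker effects drawn under this model:
rng = np.random.default_rng(1)
effects = rng.multivariate_normal(np.zeros(p), Sigma, size=5000)
print(np.round(np.corrcoef(effects, rowvar=False), 2))
```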

  17. A Model for Urban Environment and Resource Planning Based on Green GDP Accounting System

    Directory of Open Access Journals (Sweden)

    Linyu Xu

    2013-01-01

    Full Text Available The urban environment and resources are currently on a course that is unsustainable in the long run due to excessive human pursuit of economic goals. It is therefore very important to develop a model to analyse the relationship between urban economic development and environmental resource protection during rapid urbanisation. This paper proposes a model to identify the key factors in urban environment and resource regulation based on a green GDP accounting system, which consists of four parts: economy, society, resource, and environment. In this model, the analytic hierarchy process (AHP) method and a modified Pearl curve model are combined to allow for dynamic evaluation, with a higher green GDP value as the planning target. The model was applied to the environmental and resource planning problem of Wuyishan City, and the results showed that energy use was a key factor influencing urban environment and resource development. Biodiversity and air quality were the most sensitive factors influencing the value of green GDP in the city. According to the analysis, urban environment and resource planning could be improved to promote sustainable development in Wuyishan City.
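
The AHP step the abstract refers to can be sketched as follows: priority weights for the four parts are the normalized principal eigenvector of a pairwise comparison matrix, with a consistency index derived from the principal eigenvalue. The pairwise judgments below are hypothetical, not the paper's.

```python
# AHP priority weights from a pairwise comparison matrix (hypothetical judgments
# among economy, society, resource, environment).
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 2.0],
              [1/2, 1.0, 2.0, 1.0],
              [1/3, 1/2, 1.0, 1/2],
              [1/2, 1.0, 2.0, 1.0]])   # reciprocal matrix: A[j,i] = 1/A[i,j]

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)               # principal eigenvalue lambda_max
w = np.abs(vecs[:, k].real)
w /= w.sum()                           # priority weights, sum to 1
CI = (vals.real[k] - 4) / (4 - 1)      # consistency index, 0 for a consistent matrix
print(np.round(w, 3), round(CI, 4))
```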

  18. An agent-based simulation model to study accountable care organizations.

    Science.gov (United States)

    Liu, Pai; Wu, Shinyi

    2016-03-01

    Creating accountable care organizations (ACOs) has been widely discussed as a strategy to control rapidly rising healthcare costs and improve quality of care; however, building an effective ACO is a complex process involving multiple stakeholders (payers, providers, patients) with their own interests. Implementation of an ACO is also costly in terms of time and money, and an immature design could cause safety hazards. Therefore, there is a need for analytical model-based decision-support tools that can predict the outcomes of different strategies to facilitate ACO design and implementation. In this study, an agent-based simulation model was developed to study ACOs that considers payers, healthcare providers, and patients as agents under the shared savings payment model of care for congestive heart failure (CHF), one of the most expensive causes of sometimes preventable hospitalizations. The agent-based simulation model identified the critical determinants of payment model design that can motivate provider behavior changes to achieve maximum financial and quality outcomes of an ACO. The results show nonlinear provider behavior change patterns corresponding to changes in payment model designs. The outcomes vary by providers with different quality or financial priorities, and are most sensitive to the cost-effectiveness of the CHF interventions that an ACO implements. This study demonstrates an increasingly important method for constructing a healthcare system analytics model that can help inform health policy and healthcare management decisions. The study also points out that the likely success of an ACO is interdependent with payment model design, provider characteristics, and the cost and effectiveness of healthcare interventions.
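
A drastically reduced agent-based sketch of the shared-savings mechanic (not the paper's model): provider agents choose an intervention effort level for CHF care, the ACO returns a fraction of the savings below a spending benchmark, and agents adopt a trial behavior only when it raises their payoff. All parameters are invented; with a cost-effective intervention, effort ratchets up over rounds.

```python
# Minimal agent-based shared-savings sketch (hypothetical parameters).
import random

random.seed(0)
BENCHMARK, SHARE = 100.0, 0.5             # per-patient benchmark and savings share

def payoff(effort):
    care_cost = 100.0 - 30.0 * effort     # effective intervention lowers care cost
    savings = max(0.0, BENCHMARK - care_cost)
    return SHARE * savings - 12.0 * effort  # shared savings minus effort cost

providers = [0.0] * 10                    # agents start with no intervention effort
for _ in range(50):                       # each round, agents try a small change
    for i, e in enumerate(providers):
        trial = min(1.0, max(0.0, e + random.choice([-0.1, 0.1])))
        if payoff(trial) > payoff(e):     # adopt the behavior change if it pays
            providers[i] = trial

print(round(sum(providers) / len(providers), 1))
```

With these numbers the marginal shared saving (15 per unit effort) exceeds the marginal effort cost (12), so every agent converges toward full effort; lowering SHARE below 0.4 reverses that, which is the kind of payment-design sensitivity the study explores.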

  19. On unified field theories, dynamical torsion and geometrical models: II

    International Nuclear Information System (INIS)

    Cirilo-Lombardo, D.J.

    2011-01-01

    We analyze in this letter the same space-time structure as that presented in our previous reference (Part. Nucl. Lett. 2010. V.7, No.5. P.299-307), but now relaxing the a priori condition of the existence of a potential for the torsion. Through exact cosmological solutions of this model, where the geometry is Euclidean R×O(3) ∼ R×SU(2), we show the relation between the space-time geometry and the structure of the gauge group; precisely, this relation is directly connected with the relation of the spin and torsion fields. The solution of this model is explicitly compared with our previous ones and we find that: i) the torsion is not identified directly with the Yang-Mills type strength field; ii) there exists a compatibility condition connected with the identification of the gauge group with the geometric structure of the space-time: this fact leads to the identification between derivatives of the scale factor a and the components of the torsion in order to allow the Hosoya-Ogura ansatz (namely, the alignment of the isospin with the frame geometry of the space-time); and iii) of the two possible structures of the torsion, the 'tratorial' form (the only one studied here) forbids wormhole configurations, leading only to cosmological instanton space-time in eternal expansion.

  20. Air quality modeling for accountability research: Operational, dynamic, and diagnostic evaluation

    Science.gov (United States)

    Henneman, Lucas R. F.; Liu, Cong; Hu, Yongtao; Mulholland, James A.; Russell, Armistead G.

    2017-10-01

    Photochemical grid models play a central role in air quality regulatory frameworks, including in air pollution accountability research, which seeks to demonstrate the extent to which regulations causally impacted emissions, air quality, and public health. There is a need, however, to develop and demonstrate appropriate practices for model application and evaluation in an accountability framework. We employ a combination of traditional and novel evaluation techniques to assess four years (2001-02, 2011-12) of simulated pollutant concentrations across a decade of major emissions reductions using the Community Multiscale Air Quality (CMAQ) model. We have grouped our assessments into three categories: operational evaluation investigates how well CMAQ captures absolute concentrations; dynamic evaluation investigates how well CMAQ captures changes in concentrations across the decade of changing emissions; diagnostic evaluation investigates how CMAQ attributes variability in concentrations and sensitivities to emissions between meteorology and emissions, and how well this attribution compares to empirical statistical models. In this application, CMAQ captures O3 and PM2.5 concentrations and their change over the decade in the Eastern United States similarly to past CMAQ applications and in line with model evaluation guidance; however, some PM2.5 species (EC, OC, and sulfate in particular) exhibit high biases in various months. CMAQ-simulated PM2.5 has a high bias in winter months and a low bias in the summer, mainly due to a high bias in OC during the cold months and low biases in OC and sulfate during the summer. Simulated O3 and PM2.5 changes across the decade have normalized mean biases of less than 2.5% and 17%, respectively. Detailed comparisons suggest biased EC emissions, negative wintertime sulfate sensitivities to mobile source emissions, and incomplete capture of OC chemistry in the summer and winter. Photochemical grid model-simulated O3 and PM2.5 responses to emissions and
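
The normalized mean bias statistic quoted above is simply the summed model-minus-observation difference over the summed observations; the values below are synthetic.

```python
# Normalized mean bias (NMB), a standard operational-evaluation statistic:
# NMB = sum(model - obs) / sum(obs).

def normalized_mean_bias(model, obs):
    return sum(m - o for m, o in zip(model, obs)) / sum(obs)

obs   = [30.0, 42.0, 55.0, 38.0]   # e.g. observed daily O3, ppb (synthetic)
model = [31.0, 40.0, 57.0, 39.0]   # paired model values
print(f"{100 * normalized_mean_bias(model, obs):.1f}%")  # 1.2%
```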

  1. A margin model to account for respiration-induced tumour motion and its variability

    International Nuclear Information System (INIS)

    Coolens, Catherine; Webb, Steve; Evans, Phil M; Shirato, H; Nishioka, K

    2008-01-01

    In order to reduce the sensitivity of radiotherapy treatments to organ motion, compensation methods are being investigated such as gating of treatment delivery, tracking of tumour position, 4D scanning and planning of the treatment, etc. An outstanding problem that would occur with all these methods is the assumption that breathing motion is reproducible throughout the planning and delivery process of treatment. This is obviously not a realistic assumption and is one that will introduce errors. A dynamic internal margin model (DIM) is presented that is designed to follow the tumour trajectory and account for the variability in respiratory motion. The model statistically describes the variation of the breathing cycle over time, i.e. the uncertainty in motion amplitude and phase reproducibility, in a polar coordinate system from which margins can be derived. This allows accounting for an additional gating window parameter for gated treatment delivery as well as minimizing the area of normal tissue irradiated. The model was illustrated with abdominal motion for a patient with liver cancer and tested with internal 3D lung tumour trajectories. The results confirm that the respiratory phases around exhale are most reproducible and have the smallest variation in motion amplitude and phase (approximately 2 mm). More importantly, the margin area covering normal tissue is significantly reduced by using trajectory-specific margins (as opposed to conventional margins) as the angular component is by far the largest contributor to the margin area. The statistical approach to margin calculation, in addition, offers the possibility for advanced online verification and updating of breathing variation as more data become available
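
The geometric intuition behind trajectory-specific margins can be caricatured in polar coordinates: expand the breathing trajectory by a coverage factor times the amplitude and phase standard deviations, and compare the swept annular-sector area with a conventional bounding-box margin around the whole excursion. This is only a toy of the idea, not the paper's DIM formulation, and every number is hypothetical.

```python
# Toy comparison: trajectory-specific (polar) margin area vs a conventional
# rectangular margin around the full excursion. Hypothetical motion statistics.
import math

A, sd_r, sd_theta, k = 10.0, 1.0, 0.15, 2.5   # amplitude mm, sd mm, sd rad, coverage

# Annular sector swept by the expanded trajectory (half breathing arc ~ pi rad,
# widened by the phase uncertainty):
arc = math.pi + 2 * k * sd_theta
r_out = A + k * sd_r
r_in = max(0.0, A - k * sd_r)
area_traj = 0.5 * arc * (r_out**2 - r_in**2)

# Conventional: bounding box of the excursion plus an isotropic margin m:
m = k * sd_r
area_conv = (2 * (A + m)) * (A + m)

print(round(area_traj, 1), "<", round(area_conv, 1), "mm^2")
```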

  2. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    International Nuclear Information System (INIS)

    Rozanov, V.V.; Dinter, T.; Rozanov, A.V.; Wolanin, A.; Bracher, A.; Burrows, J.P.

    2017-01-01

    SCIATRAN is a comprehensive software package designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18–40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean–atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which accurately takes into account inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean–atmosphere radiative transfer solver presented by Rozanov et al. we have implemented the simulation of rotational Raman scattering, vibrational Raman scattering, and chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how selected radiative transfer problems are solved using the SCIATRAN package. In addition we present selected comparisons of SCIATRAN simulations with published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package, along with a detailed User's Guide, is made available for scientists and students, who are undertaking their own research typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen (http://www.iup.physik.uni-bremen.de). - Highlights: • A new version of the software package SCIATRAN is presented. • Inelastic scattering in water and atmosphere is implemented in SCIATRAN. • Raman scattering and fluorescence can be included in radiative transfer calculations. • Comparisons to other radiative transfer models show excellent agreement. • Comparisons to observations show consistent results.

  3. Accounting for Model Uncertainties Using Reliability Methods - Application to Carbon Dioxide Geologic Sequestration System. Final Report

    International Nuclear Information System (INIS)

    Mok, Chin Man; Doughty, Christine; Zhang, Keni; Pruess, Karsten; Kiureghian, Armen; Zhang, Miao; Kaback, Dawn

    2010-01-01

    A new computer code, CALRELTOUGH, which uses reliability methods to incorporate parameter sensitivity and uncertainty analysis into subsurface flow and transport models, was developed by Geomatrix Consultants, Inc. in collaboration with Lawrence Berkeley National Laboratory and the University of California at Berkeley. The CALREL reliability code was developed at the University of California at Berkeley for geotechnical applications, and the TOUGH family of codes was developed at Lawrence Berkeley National Laboratory for subsurface flow and transport applications. The integration of the two codes provides a new approach to dealing with uncertainties in flow and transport modeling of the subsurface, such as those associated with hydrogeologic parameters, boundary conditions, and initial conditions, using data from site characterization and monitoring for conditioning. The new code enables computation of the reliability of a system and of the components that make up the system, instead of calculating the complete probability distributions of model predictions at all locations at all times. The new CALRELTOUGH code has tremendous potential to advance subsurface understanding for a variety of applications including subsurface energy storage, nuclear waste disposal, carbon sequestration, extraction of natural resources, and environmental remediation. The new code was tested on a carbon sequestration problem as part of the Phase I project. Phase II was not awarded.
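
The reliability-method framing (estimate the probability of exceeding a single limit state rather than full predictive distributions everywhere) can be illustrated with a crude Monte Carlo stand-in; this is not CALRELTOUGH, and the toy flow response, distributions, and threshold are all invented.

```python
# Crude Monte Carlo reliability estimate for one limit state: injection pressure
# at a hypothetical caprock must stay below a threshold. Invented parameters.
import random

random.seed(42)

def pressure(perm, rate):
    """Toy flow-model response, MPa: pressure rises with rate, falls with permeability."""
    return 10.0 + 3.0 * rate / perm

THRESHOLD = 16.0                              # "failure" if pressure exceeds this
N = 100_000
failures = 0
for _ in range(N):
    perm = random.lognormvariate(0.0, 0.3)    # uncertain permeability (relative)
    rate = random.gauss(1.0, 0.2)             # uncertain injection rate (relative)
    if pressure(perm, rate) > THRESHOLD:
        failures += 1

print(f"P(failure) ~ {failures / N:.3f}")
```

FORM-style reliability methods, as in CALREL, reach such probabilities far more cheaply than brute-force sampling when each model run is an expensive TOUGH simulation.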

  4. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Science.gov (United States)

    Mastin, Larry G.; Van Eaton, Alexa; Durant, A.J.

    2016-01-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16–17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m−3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ∼2.3 and 2.7φ (0.20–0.15 mm), despite large variations in erupted mass (0.25–50 Tg), plume height (8.5–25 km), mass fraction of fine ( discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
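
The aggregate size distribution described above is Gaussian in φ units, which is log-normal in diameter since d(mm) = 2^(−φ); a small sketch makes the unit conversion and the mass-fraction calculation concrete. The σ value below is an assumption chosen only for illustration; the μ_agg range comes from the abstract.

```python
# Aggregate size distribution in phi units: Gaussian in phi <=> log-normal in
# diameter, with d_mm = 2**-phi. sigma here is an illustrative assumption.
import math

def phi_to_mm(phi):
    return 2.0 ** -phi

def aggregate_mass_fraction(phi_lo, phi_hi, mu=2.5, sigma=0.3):
    """Mass fraction of aggregates between two phi sizes (Gaussian CDF in phi)."""
    cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    return cdf(phi_hi) - cdf(phi_lo)

print(round(phi_to_mm(2.5), 3))                     # median diameter in mm
print(round(aggregate_mass_fraction(2.0, 3.0), 3))  # fraction within 2-3 phi
```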

  5. Modelling of gas-metal arc welding taking into account metal vapour

    Energy Technology Data Exchange (ETDEWEB)

    Schnick, M; Fuessel, U; Hertel, M; Haessler, M [Institute of Surface and Manufacturing Technology, Technische Universitaet Dresden, D-01062 Dresden (Germany); Spille-Kohoff, A [CFX Berlin Software GmbH, Karl-Marx-Allee 90, 10243 Berlin (Germany); Murphy, A B [CSIRO Materials Science and Engineering, PO Box 218, Lindfield NSW 2070 (Australia)

    2010-11-03

    The most advanced numerical models of gas-metal arc welding (GMAW) neglect vaporization of metal, and assume an argon atmosphere for the arc region, as is also common practice for models of gas-tungsten arc welding (GTAW). These models predict temperatures above 20 000 K and a temperature distribution similar to GTAW arcs. However, spectroscopic temperature measurements in GMAW arcs demonstrate much lower arc temperatures. In contrast to measurements of GTAW arcs, they have shown the presence of a central local minimum of the radial temperature distribution. This paper presents a GMAW model that takes into account metal vapour and that is able to predict the local central minimum in the radial distributions of temperature and electric current density. The influence of different values of the net radiative emission coefficient of iron vapour, which vary by up to a factor of one hundred, is examined. It is shown that these net emission coefficients cause differences in the magnitudes, but not in the overall trends, of the radial distributions of temperature and current density. Further, the influence of the metal vaporization rate is investigated. We present evidence that, for higher vaporization rates, the central flow velocity inside the arc is decreased and can even change direction, so that it is directed from the workpiece towards the wire, although the outer plasma flow is still directed towards the workpiece. In support of this thesis, we have attempted to reproduce the measurements of Zielinska et al for spray-transfer mode GMAW numerically, and have obtained reasonable agreement.

  6. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    Science.gov (United States)

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

    Collecting and analyzing high-frequency emission measurements has become very common during the past decade, as significantly more information with respect to formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application using data collected from a diesel bus under real-world driving conditions.
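
The core idea (a transport delay that varies smoothly with exhaust flow) can be sketched with the logistic transition function typical of smooth transition regression: the delay moves between a high value at low flow and a low value at high flow, with a smoothness and a location parameter playing the role of the estimated coefficient pair. All parameter values below are hypothetical, not the paper's estimates.

```python
# Smoothly varying transport delay via a logistic transition function
# (hypothetical parameters; in the paper these are fit by least squares).
import math

def transition(flow, gamma=0.8, c=50.0):
    """Logistic weight in [0, 1]; gamma = transition smoothness, c = location."""
    return 1.0 / (1.0 + math.exp(-gamma * (flow - c)))

def transport_delay(flow, d_low_flow=8.0, d_high_flow=2.0):
    """Delay (in samples): high at low exhaust flow, low at high flow."""
    w = transition(flow)
    return (1 - w) * d_low_flow + w * d_high_flow

for flow in (20.0, 50.0, 80.0):          # exhaust flow rate, arbitrary units
    print(flow, round(transport_delay(flow), 2))
```

Aligning a tailpipe series then means shifting each sample by the delay implied by the concurrent flow, rather than by one fixed lag.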

  7. Material Protection, Accounting, and Control Technologies (MPACT): Modeling and Simulation Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dunn, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Durbin, Samual [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Durkee, Joe W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); England, Jeff [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, Robert [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Ketusky, Edward [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Li, Shelly [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lindgren, Eric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meier, David [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Miller, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Osburn, Laura Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pereira, Candido [Argonne National Lab. (ANL), Argonne, IL (United States); Rauch, Eric Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Scaglione, John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Scherer, Carolynn P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sprinkle, James K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yoo, Tae-Sic [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-05

    The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy’s (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal. This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility, distributed test bed, that connects the individual tools being developed at National Laboratories and university research establishments, is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling. To aid in framing its long-term goal, during FY16, a modeling and simulation roadmap is being developed for three major areas of investigation: (1) radiation transport and sensors, (2) process and chemical models, and (3) shock physics and assessments. For each area, current modeling approaches are described, and gaps and needs are identified.

  8. Integrated Approach Model of Risk, Control and Auditing of Accounting Information Systems

    Directory of Open Access Journals (Sweden)

    Claudiu BRANDAS

    2013-01-01

    Full Text Available The use of IT in financial and accounting processes is growing fast, and this leads to an increase in research and professional concerns about the risks, control and audit of Accounting Information Systems (AIS). In this context, the risk and control of AIS approach is a central component of processes for IT audit, financial audit and IT governance. Recent studies in the literature on the concepts of risk, control and auditing of AIS outline two approaches: (1) a professional approach, in which we can fit ISA, COBIT, IT Risk, COSO and SOX, and (2) a research-oriented approach, which emphasizes research on continuous auditing and fraud detection using information technology. Starting from the limits of existing approaches, our study aims to develop and test an Integrated Approach Model of Risk, Control and Auditing of AIS across three cycles of business processes: the purchases cycle, the sales cycle and the cash cycle, in order to improve the efficiency of IT governance, as well as to ensure the integrity, reality, accuracy and availability of financial statements.

  9. Antecedents and Consequences of Individual Performance Analysis of Turnover Intention Model (Empirical Study of Public Accountants in Indonesia)

    OpenAIRE

    Raza, Hendra; Maksum, Azhar; Erlina; Lumban Raja, Prihatin

    2014-01-01

    This study aims to examine empirically the antecedents of individual performance and its consequences for turnover intention in public accounting firms. Eight variables are measured, consisting of auditors' empowerment, innovation, professionalism, role ambiguity, role conflict, organizational commitment, individual performance and turnover intention. Data analysis is based on 163 public accountants, using Structural Equation Modeling assisted with an appli...

  10. Accounting for treatment use when validating a prognostic model: a simulation study.

    Science.gov (United States)

    Pajouheshnia, Romin; Peelen, Linda M; Moons, Karel G M; Reitsma, Johannes B; Groenwold, Rolf H H

    2017-07-14

    Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated
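The IPW-then-exclude idea described in this abstract can be sketched in a few lines. The simulation below is our own illustration, not the authors' code: a logistic model calibrated for the untreated population is validated on a cohort in which treatment allocation depends on the same marker, and the naïve observed:expected (O:E) ratio is compared against an IPW estimate computed on the untreated subjects only (all variable names and parameter values are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def expit(z):
    return 1 / (1 + np.exp(-z))

x = rng.normal(size=n)                      # a single prognostic marker
risk_untreated = expit(-1.0 + 1.2 * x)      # "true" risk without treatment

# Treatment allocation is moderately associated with risk.
p_treat = expit(-1.0 + 0.8 * x)
treated = rng.random(n) < p_treat

# An effective treatment halves the odds of the outcome.
risk_observed = np.where(
    treated, expit(-1.0 + 1.2 * x + np.log(0.5)), risk_untreated)
y = rng.random(n) < risk_observed

pred = risk_untreated                       # the model predicts risk WITHOUT treatment

def oe_ratio(y, pred, w=None):
    """Observed:expected ratio; w are optional IPW weights."""
    w = np.ones(len(pred)) if w is None else w
    return np.average(y, weights=w) / np.average(pred, weights=w)

# Naive validation on the full, partly treated cohort:
oe_naive = oe_ratio(y, pred)

# IPW, then exclude the treated: reweight untreated subjects by
# 1 / P(untreated | x) so they stand in for the whole target population.
m = ~treated
oe_ipw = oe_ratio(y[m], pred[m], w=1 / (1 - p_treat[m]))
```

In this setup the naïve O:E falls below 1 (the model appears to overestimate risk because treatment removed events), while the IPW estimate on the untreated recovers a ratio near 1.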

  11. Accounting for treatment use when validating a prognostic model: a simulation study

    Directory of Open Access Journals (Sweden)

    Romin Pajouheshnia

    2017-07-01

    Full Text Available Abstract Background Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data. Methods We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and the use of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use. Results Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk when the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder. Conclusions When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and

  12. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    Science.gov (United States)

    Czoli, Christine; Da Silva, Michael; Shaul, Randi Zlotnik; d'Agincourt-Canning, Lori; Simpson, Christy; Boydell, Katherine; Rashkovan, Natalie; Vanin, Sharon

    2011-10-05

    Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the physician-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment. © 2011 Czoli et al; licensee BioMed Central Ltd.

  13. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

    Full Text Available Imperfect detection (i.e., failure to detect a species when the species is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability in the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2 visits) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling the factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large-scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth, to a lesser extent by stream width, and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. The species presented widespread distributions, with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provides a practical framework to incorporate variation in detection rates into fish distribution models.
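The structure of a time-to-first-detection likelihood can be sketched compactly. The abstract describes a Bayesian implementation; the illustration below is a maximum-likelihood stand-in of our own (all values hypothetical): occupancy probability psi with an exponential detection-time of rate lambda, censored at the single-visit survey duration T, so a non-detection is either an occupied site missed within T or a truly unoccupied site.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_sites, T = 2000, 3.0                  # one visit per site, survey length T
psi_true, lam_true = 0.6, 0.8           # occupancy prob., detection rate

occupied = rng.random(n_sites) < psi_true
t_first = rng.exponential(1 / lam_true, size=n_sites)  # latent detection times
detected = occupied & (t_first < T)

def nll(theta):
    """Negative log-likelihood of (logit psi, log lambda)."""
    psi = 1 / (1 + np.exp(-theta[0]))   # inverse logit keeps psi in (0, 1)
    lam = np.exp(theta[1])              # log link keeps the rate positive
    # Detected sites: occupied AND first detection at the observed time.
    ll = np.sum(np.log(psi) + np.log(lam) - lam * t_first[detected])
    # Non-detected sites: occupied but missed within T, or unoccupied.
    ll += (~detected).sum() * np.log(psi * np.exp(-lam * T) + 1 - psi)
    return -ll

fit = minimize(nll, x0=np.zeros(2), method="Nelder-Mead")
psi_hat = 1 / (1 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])
```

Covariates on psi or lambda (stream order, depth, width) would enter through the two link functions in the same way as in any occupancy or survival regression.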

  14. Model of medicines sales forecasting taking into account factors of influence

    Science.gov (United States)

    Kravets, A. G.; Al-Gunaid, M. A.; Loshmanov, V. I.; Rasulov, S. S.; Lempert, L. B.

    2018-05-01

    The article describes a method for forecasting sales of medicines under conditions where the data sample is insufficient for building a model based on historical data alone. The developed method is applicable mainly to new drugs that are already licensed and released for sale but do not yet have stable sales performance in the market. The purpose of this study is to prove the effectiveness of the suggested method for forecasting drug sales, taking into account the selected factors of influence revealed during the review of existing solutions and analysis of the specificity of the area under study. Three experiments were performed on samples of different volumes, which showed an improvement in the accuracy of forecasting sales on small samples.

  15. A Thermodamage Strength Theoretical Model of Ceramic Materials Taking into Account the Effect of Residual Stress

    Directory of Open Access Journals (Sweden)

    Weiguo Li

    2012-01-01

    Full Text Available A thermodamage strength theoretical model taking into account the effect of residual stress was established and applied to each temperature phase, based on a study of the effects of various physical mechanisms on the fracture strength of ultrahigh-temperature ceramics. The effects of SiC particle size, crack size, and SiC particle volume fraction on strength at different temperatures were studied in detail. This study showed that when the flaw size is not large, a bigger SiC particle size results in a greater effect of tensile residual stress in the matrix grains on strength reduction, a prediction that coincides with experimental results; the residual stress and the combined effect of particle size and crack size play important roles in controlling material strength.

  16. REGRESSION MODEL FOR RISK REPORTING IN FINANCIAL STATEMENTS OF ACCOUNTING SERVICES ENTITIES

    Directory of Open Access Journals (Sweden)

    Mirela NICHITA

    2015-06-01

    Full Text Available The purpose of financial reports is to provide useful information to users; the utility of information is defined through its qualitative characteristics (fundamental and enhancing). The financial crisis emphasized the limits of financial reporting, which was unable to warn investors about the risks they were facing. Due to current changes in the business environment, managers have been highly motivated to rethink and improve the risk governance philosophy, processes and methodologies. The lack of quality, timely data and adequate systems to capture, report and measure the right information across the organization is a fundamental challenge for implementing and sustaining all aspects of effective risk management. Since the 1980s, investors have been more interested in narratives (the notes to financial statements) than in the primary reports (financial position and performance). This research applies a regression model for the assessment of risk reporting by professional (accounting and taxation) services firms for major companies in Romania during the period 2009–2013.

  17. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    Science.gov (United States)

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-03-21

    Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range, color filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance, Gaussian, signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased by 10-15 times the estimate of the internal noise level for a threefold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model thus accounts for both the appearance and the discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, these results have implications for comparing perceptual and neural response functions.
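The link from an MLDS scale to predicted discrimination is a one-liner under the equal-variance Gaussian model: the probability of correctly ordering two stimuli in a paired comparison is the normal CDF of their scale difference divided by sqrt(2), because both stimuli contribute independent unit-variance noise. The scale values below are made up for illustration; they are not the paper's estimates.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical MLDS scale: luminance ratio -> perceptual scale value,
# expressed in units of the internal noise SD (illustrative values only).
scale = {1.0: 0.0, 1.5: 4.5, 2.0: 8.0, 3.0: 12.5}

def p_correct(r_low, r_high):
    """Predicted probability of correctly discriminating two stimuli in a
    paired comparison, under the equal-variance Gaussian SDT model."""
    d = scale[r_high] - scale[r_low]
    return norm.cdf(d / np.sqrt(2))   # sqrt(2): both stimuli carry noise
```

Comparing these predictions with observed paired-comparison proportions is, in essence, the test the authors perform.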

  18. An Improved Car-Following Model Accounting for Impact of Strong Wind

    Directory of Open Access Journals (Sweden)

    Dawei Liu

    2017-01-01

    Full Text Available In order to investigate the effect of strong wind on the dynamic characteristics of traffic flow, an improved car-following model based on the full velocity difference model is developed in this paper. Wind force is introduced as an influence factor on car-following behavior. Among the three components of wind force, lift force and side force are taken into account. A linear stability analysis is carried out and the stability condition of the newly developed model is derived. Numerical analysis is performed to explore the effect of strong wind on the spatio-temporal evolution of a small perturbation. The results show that strong wind can significantly affect the stability of traffic flow. Driving safety in strong wind is also studied by comparing the lateral force under different wind speeds with the side friction of vehicles. Finally, the fuel consumption of vehicles in strong wind conditions is explored, and the results show that fuel consumption decreases as wind speed increases.
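The baseline full velocity difference (FVD) dynamics underlying this model can be sketched as follows: each vehicle accelerates toward an optimal velocity V of its headway, plus a term proportional to the velocity difference to its leader. The `wind` factor below is a crude illustrative scaling of the achievable acceleration, not the paper's lift- and side-force formulation, and all parameter values are our own.

```python
import numpy as np

def V(h, vmax=2.0, hc=4.0):
    """Optimal velocity function (Bando form)."""
    return vmax / 2 * (np.tanh(h - hc) + np.tanh(hc))

def simulate(n=20, L=100.0, kappa=1.5, lam=0.5, wind=0.0,
             steps=20000, dt=0.01):
    """Euler integration of the FVD model on a ring road. `wind` in [0, 1)
    is a hypothetical fractional reduction of the effective acceleration."""
    x = np.arange(n) * (L / n)            # equally spaced vehicles
    v = np.full(n, V(L / n))              # start at the equilibrium speed
    x[0] += 0.5                           # small initial perturbation
    for _ in range(steps):
        h = np.roll(x, -1) - x            # headway to the leader
        h[-1] += L                        # ring-road wraparound
        dv = np.roll(v, -1) - v           # velocity difference to the leader
        a = (kappa * (V(h) - v) + lam * dv) * (1 - wind)
        v = v + a * dt
        x = x + v * dt
    return v

v_calm = simulate(wind=0.0)
v_windy = simulate(wind=0.4)              # weaker effective control in wind
```

With these parameters both runs sit in the linearly stable regime, so the perturbation decays and all speeds relax back toward V(L/n); pushing the effective gains lower (stronger `wind`) moves the system toward the instability boundary derived in the paper's stability analysis.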

  19. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    Science.gov (United States)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

    The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two week long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Both the main shock and the Mw=7.7 aftershock did not rupture to the trench and left most of the seismic gap unbroken, leaving the possibility of a future large earthquake in the region.

  20. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems, the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates land use choices and species responses to bioclimatic variables simultaneously. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forested trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under the current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  1. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based

  2. The potential to reduce the risk of manipulation of financial statements using the identification models of creative accounting

    Directory of Open Access Journals (Sweden)

    Zita Drábková

    2013-01-01

    Full Text Available The explanatory power of accounting information is a key question for the decisions of users of financial statements. A whole range of economic indicators is available to the users of financial statements to measure firm productivity. When the accounting statements (and the applied methods) are manipulated, the economic indicators may reveal clearly different results. The users of financial statements should be able to assess the risk of manipulation of accounting statements in time, considering the potential risk of accounting fraud. The aim of this paper, based on a synthesis of knowledge from the review of literature, analysis of the CFEBT model and the Beneish model, was to propose a convenient model for identifying risks of manipulation of financial statements. The paper summarizes the possibilities and limits of identifying manipulated financial statements. The hypothesis tested assesses whether there is a close relation between a loss and an increase in cash flow over a period of 3–5 years, i.e., whether summing the amounts over 3–5 years would reveal the same results. The hypothesis was verified on the accounting statements of the accounting entities in the prepared case studies, respecting the true and fair view of accounting based on Czech accounting standards.
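The intuition behind a CFEBT-style check, that over a 3–5 year window cumulative earnings and cumulative cash flow should roughly agree, can be written down in a few lines. This is our own simplified reading of the indicator, not the published formula, and the threshold interpretation is illustrative only.

```python
def cfebt_score(earnings, cash_flows):
    """Illustrative CFEBT-style indicator (the exact published formula may
    differ): relative gap between cumulative cash flow and cumulative
    earnings over a 3-5 year window, in percent. Values far below zero
    suggest earnings not backed by cash and warrant closer inspection."""
    e, c = sum(earnings), sum(cash_flows)
    return (c - e) / abs(e) * 100

# Three years of hypothetical data (same currency units):
score = cfebt_score([10, 12, 11], [9, 13, 10])
```

A score near zero is consistent with earnings being backed by cash over the window; a strongly negative score is the kind of signal the paper proposes to combine with Beneish-model variables.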

  3. The cyclicality of loan loss provisions under three different accounting models: the United Kingdom, Spain, and Brazil

    Directory of Open Access Journals (Sweden)

    Antônio Maria Henri Beyle de Araújo

    2017-11-01

    Full Text Available A controversy involving loan loss provisions in banks concerns their relationship with the business cycle. While international accounting standards for recognizing provisions (the incurred loss model) are presumably pro-cyclical, accentuating the effects of the current economic cycle, an alternative model, the expected loss model, has countercyclical characteristics, acting as a buffer against economic imbalances caused by expansionary or contractionary phases in the economy. In Brazil, a mixed accounting model exists, whose behavior is not known to be pro-cyclical or countercyclical. The aim of this research is to analyze the behavior of these accounting models in relation to the business cycle, using an econometric model consisting of financial and macroeconomic variables. The study allowed us to identify the impact of credit risk behavior, earnings management, capital management, Gross Domestic Product (GDP) behavior, and the behavior of the unemployment rate on provisions in countries that use different accounting models. Data from commercial banks in the United Kingdom (incurred loss), Spain (expected loss), and Brazil (mixed model) were used, covering the period from 2001 to 2012. Despite the accounting models of the three countries being formed by very different rules regarding possible effects on business cycles, the results revealed a pro-cyclical behavior of provisions in each country, indicating that when GDP grows, provisions tend to fall, and vice versa. The results also revealed other factors influencing the behavior of loan loss provisions, such as earnings management.

  4. Modelling the range expansion of the Tiger mosquito in a Mediterranean Island accounting for imperfect detection.

    Science.gov (United States)

    Tavecchia, Giacomo; Miranda, Miguel-Angel; Borrás, David; Bengoa, Mikel; Barceló, Carlos; Paredes-Esquivel, Claudia; Schwarz, Carl

    2017-01-01

    Aedes albopictus (Diptera; Culicidae) is a highly invasive mosquito species and a competent vector of several arboviral diseases that have spread rapidly throughout the world. The prevalence and dispersal patterns of the mosquito are of central importance for effective control of the species. We used site-occupancy models accounting for false negative detections to estimate the prevalence, turnover, movement pattern and growth rate in the number of sites occupied by the mosquito in 17 localities throughout Mallorca Island. Site-occupancy probability increased from 0.35 in 2012, the year of the first reported observation of the species, to 0.89 in 2015. Despite a steady increase in mosquito presence, the extinction probability was generally high, indicating a high turnover in the occupied sites. We considered two site-dependent covariates, namely the distance from the point of first observation and the estimated yearly occupancy rate in the neighborhood, as predicted by diffusion models. Results suggested that the mosquito distribution during the first year was consistent with that predicted by simple diffusion models, but in subsequent years it was closer to that expected from leapfrog dispersal events. Assuming a single initial colonization event, the spread of Ae. albopictus in Mallorca followed two distinct phases: an early one consistent with diffusion movements and a second consistent with long-distance, 'leapfrog', movements. The colonization of the island was fast, with ~90% of the sites estimated to be occupied 3 years after colonization. The fast spread is likely to have occurred through vectors related to human mobility such as cars or other vehicles. Surveillance and management actions near the introduction point would only be effective during the early steps of the colonization.

  5. A mass-density model can account for the size-weight illusion

    Science.gov (United States)

    Bergmann Tiest, Wouter M.; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object’s mass, and the other from the object’s density, with estimates’ weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model, at the crucial perceptual level heaviness judgments will be biased by the object’s density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object’s density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.

  6. A mass-density model can account for the size-weight illusion.

    Science.gov (United States)

    Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object's mass, and the other from the object's density, with estimates' weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model, at the crucial perceptual level heaviness judgments will be biased by the object's density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
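The reliability-weighted averaging at the heart of this model is easy to make concrete. The sketch below is our own illustration (cue values, noise SDs and the noise correlation are all hypothetical, and we assume both cues are expressed on a common heaviness scale): the MLE weight formula for two correlated Gaussian cues shows how a more reliable density cue, i.e. better volume information, pulls heaviness judgments toward density and so strengthens the illusion.

```python
def weight_on_mass(sig_m, sig_d, rho=0.3):
    """MLE weight on the mass estimate for two correlated Gaussian cues
    with noise SDs sig_m, sig_d and noise correlation rho."""
    return (sig_d**2 - rho * sig_m * sig_d) / (
        sig_m**2 + sig_d**2 - 2 * rho * sig_m * sig_d)

def perceived_heaviness(mass, density, sig_m, sig_d, rho=0.3):
    w = weight_on_mass(sig_m, sig_d, rho)
    # Weighted average of the two cues on a common heaviness scale.
    return w * mass + (1 - w) * density

# Two equal-mass objects; the smaller one is denser (arbitrary units).
big, small = (1.0, 0.5), (1.0, 2.0)       # (mass cue, density cue)

# Reliable volume information -> reliable density cue (small sig_d):
h_big_good = perceived_heaviness(*big, sig_m=1.0, sig_d=0.5)
h_small_good = perceived_heaviness(*small, sig_m=1.0, sig_d=0.5)

# Poor volume information -> unreliable density cue (large sig_d):
h_big_poor = perceived_heaviness(*big, sig_m=1.0, sig_d=3.0)
h_small_poor = perceived_heaviness(*small, sig_m=1.0, sig_d=3.0)
```

The small dense object comes out heavier in both cases, but the perceived difference is much larger when the density cue is reliable, matching the pattern reported across Experiments 1-3.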

  7. Carbon accounting of forest bioenergy: from model calibrations to policy options (Invited)

    Science.gov (United States)

    Lamers, P.

    2013-12-01

    knowledge in the field by comparing different state-of-the-art temporal forest carbon modeling efforts, and discusses whether, or to what extent, a deterministic 'carbon debt' accounting is possible and appropriate. It concludes on the possible scientific and, eventually, political choices in temporal carbon accounting for regulatory frameworks, including alternative options to address unintentional carbon losses within forest ecosystems/bioenergy systems.

  8. Long-term fiscal implications of funding assisted reproduction: a generational accounting model for Spain

    Directory of Open Access Journals (Sweden)

    R. Matorras

    2015-12-01

    Full Text Available The aim of this study was to assess the lifetime economic benefits of assisted reproduction in Spain by calculating the return on this investment. We developed a generational accounting model that simulates the flow of taxes paid by the individual, minus direct government transfers received over the individual’s lifetime. The difference between discounted transfers and taxes, minus the cost of either IVF or artificial insemination (AI), equals the net fiscal contribution (NFC) of a child conceived through assisted reproduction. We conducted sensitivity analyses to test the robustness of our results under various macroeconomic scenarios. A child conceived through assisted reproduction would contribute €370,482 in net taxes to the Spanish Treasury and would receive €275,972 in transfers over their lifetime. Taking into account that only 75% of assisted reproduction pregnancies are successful, the NFC was estimated at €66,709 for IVF-conceived children and €67,253 for AI-conceived children. The return on investment for each euro invested was €15.98 for IVF and €18.53 for AI. The long-term NFC of a child conceived through assisted reproduction could range from €466,379 to €-9,529 (IVF) and from €466,923 to €-8,985 (AI). The return on investment would vary between €-2.28 and €111.75 (IVF) and between €-2.48 and €128.66 (AI) for each euro invested. The break-even point at which the financial position would begin to favour the Spanish Treasury ranges between 29 and 41 years of age. Investment in assisted reproductive techniques may lead to positive discounted future fiscal revenue, notwithstanding its beneficial psychological effect for infertile couples in Spain.
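The generational accounting arithmetic described above reduces to a discounted cash-flow calculation. The sketch below is a minimal illustration with entirely hypothetical age profiles, discount rate and treatment cost; it is not the study's calibrated Spanish data, but it shows how the success rate and discounting combine into the NFC and the return per euro invested.

```python
def net_fiscal_contribution(taxes, transfers, treatment_cost,
                            success_rate=0.75, discount_rate=0.03):
    """Discounted lifetime taxes minus transfers, scaled by the probability
    that treatment leads to a birth, minus the cost of the treatment."""
    pv = sum((tx - tr) / (1 + discount_rate) ** t
             for t, (tx, tr) in enumerate(zip(taxes, transfers)))
    return success_rate * pv - treatment_cost

# Hypothetical flat profiles in euros for ages 0-64:
# net recipient until age 18, net contributor from 18 to 64.
taxes = [0] * 18 + [9000] * 47
transfers = [4000] * 18 + [1500] * 47

cost = 5000                               # hypothetical cost per treatment
nfc = net_fiscal_contribution(taxes, transfers, cost)
roi = nfc / cost                          # return per euro invested
```

Even with these rough numbers the qualitative result of the paper emerges: early years are a net fiscal outflow, so the position turns positive only decades later, which is the break-even-age effect the authors report.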

  9. Study of experimentally undetermined neutrino parameters in the light of baryogenesis considering type I and type II Seesaw models

    International Nuclear Information System (INIS)

    Kalita, Rupam

    2017-01-01

    We study how to connect the experimentally undetermined neutrino parameters, namely the lightest neutrino mass, the neutrino CP phases and the baryon asymmetry of the Universe, within the framework of a model where both type I and type II seesaw mechanisms can contribute to the tiny neutrino masses. In this work we study the effects of the Dirac and Majorana neutrino phases on the origin of the matter-antimatter asymmetry through the mechanism of leptogenesis. The type I seesaw mass matrix is taken to be of tri-bimaximal (TBM) form, which by itself gives a vanishing reactor mixing angle. The type II seesaw mass matrix is then chosen in such a way that the necessary deviation from TBM mixing and the best-fit values of the neutrino parameters are obtained when both type I and type II seesaw contributions are taken into account. We consider different relative contributions from the type I and type II seesaw mechanisms to study the effects of the neutrino CP phases on the baryon asymmetry of the Universe, and thereby connect all these experimentally undetermined neutrino parameters. (author)

  10. User's Guide To CHEAP0 II-Economic Analysis of Stand Prognosis Model Outputs

    Science.gov (United States)

    Joseph E. Horn; E. Lee Medema; Ervin G. Schuster

    1986-01-01

    CHEAP0 II provides supplemental economic analysis capability for users of version 5.1 of the Stand Prognosis Model, including recent regeneration and insect outbreak extensions. Although patterned after the old CHEAP0 model, CHEAP0 II has more features and analytic capabilities, especially for analysis of existing and uneven-aged stands....

  11. ACCOUNTING HARMONIZATION AND HISTORICAL COST ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Valentin Gabriel CRISTEA

    2017-05-01

    There is considerable interest in accounting harmonization and in historical cost accounting, and in what each has to offer. In this article, different valuation models are discussed. Although one can observe a movement from historical cost accounting towards fair value accounting, each model has its advantages.

  12. Accounting for Forest Harvest and Wildfire in a Spatially-distributed Carbon Cycle Process Model

    Science.gov (United States)

    Turner, D. P.; Ritts, W.; Kennedy, R. E.; Yang, Z.; Law, B. E.

    2009-12-01

    Forests are subject to natural disturbance in the form of wildfire, as well as to management-related disturbance in the form of timber harvest. These disturbance events have strong impacts on local and regional carbon budgets, but quantifying the associated carbon fluxes remains challenging. The ORCA Project aims to quantify regional net ecosystem production (NEP) and net biome production (NBP) in Oregon, California, and Washington, and we have adopted an integrated approach based on Landsat imagery and ecosystem modeling. To account for stand-level carbon fluxes, the Biome-BGC model has been adapted to simulate multiple severities of fire and harvest. New variables include snags, direct fire emissions, and harvest removals. New parameters include fire-intensity-specific combustion factors for each carbon pool (based on field measurements) and proportional removal rates for harvest events. To quantify regional fluxes, the model is applied in a spatially-distributed mode over the domain of interest, with disturbance history derived from a time series of Landsat images. In stand-level simulations, the post-disturbance transition from negative (source) to positive (sink) NEP is delayed by approximately a decade in the case of high-severity fire compared to harvest. Simulated direct pyrogenic emissions range from 11 to 25% of total non-soil ecosystem carbon. In spatial-mode application over Oregon and California, the sum of annual pyrogenic emissions and harvest removals was generally less than half of total NEP, resulting in significant carbon sequestration on the land base. Spatially and temporally explicit simulation of disturbance-related carbon fluxes will contribute to our ability to evaluate the effects of management on regional carbon flux and to assess potential biospheric feedbacks to climate change mediated by changing disturbance regimes.
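
The regional bookkeeping described above reduces to a simple identity: net biome production is net ecosystem production minus direct pyrogenic emissions and harvest removals. A minimal sketch with hypothetical numbers (not values from the ORCA Project):

```python
# Minimal sketch of the carbon bookkeeping: NBP = NEP - fire emissions -
# harvest removals. All numbers are hypothetical, for illustration only.

def net_biome_production(nep, pyrogenic_emissions, harvest_removals):
    """Regional carbon balance after disturbance losses (units: TgC/yr)."""
    return nep - pyrogenic_emissions - harvest_removals

nep = 10.0        # hypothetical regional NEP (TgC/yr)
fire = 1.5        # direct pyrogenic emissions
harvest = 3.0     # harvest removals
nbp = net_biome_production(nep, fire, harvest)

# Disturbance losses below half of NEP leave the land base a net carbon sink:
print(fire + harvest < 0.5 * nep, nbp)
```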

  13. The Charitable Trust Model: An Alternative Approach For Department Of Defense Accounting

    Science.gov (United States)

    2016-12-01

    Gerald V. Weers Jr.; Naval Postgraduate School. ... prohibits the incurrence of costs until budget authority is provided, reversing the conditionality of the matching-principle accounting logic. In summary ... the Board did not believe applying depreciation accounting for these assets would contribute to measuring the cost of outputs produced, or to

  14. Modeling transducer impulse responses for predicting calibrated pressure pulses with the ultrasound simulation program Field II

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2010-01-01

    FIELD II is a simulation software package capable of predicting the pressure field in front of transducers of any complicated geometry. A calibrated prediction with this program is, however, dependent on an exact voltage-to-surface-acceleration impulse response of the transducer. Such an impulse response is not calculated by FIELD II. This work investigates the usability of combining a one-dimensional multilayer transducer modelling principle with the FIELD II software; multilayer here refers to a transducer composed of several material layers. Measurements of pressure and current from Pz27 piezoceramic disks ... the transducer model and the FIELD II software in combination give good agreement with measurements.

  15. A thermoelectric power generating heat exchanger: Part II – Numerical modeling and optimization

    International Nuclear Information System (INIS)

    Sarhadi, Ali; Bjørk, Rasmus; Lindeburg, Niels; Viereck, Peter; Pryds, Nini

    2016-01-01

    Highlights: • A comprehensive model was developed to optimize the integrated TEG-heat exchanger. • The developed model was validated with the experimental data. • The effect of using different interface materials on the output power was assessed. • The influence of TEG arrangement on the power production was investigated. • Optimized geometrical parameters and proper interface materials were suggested. - Abstract: In Part I of this study, the performance of an experimental integrated thermoelectric generator (TEG)-heat exchanger was presented. In the current study, Part II, the obtained experimental results are compared with those predicted by a finite element (FE) model. In the simulation of the integrated TEG-heat exchanger, the thermal contact resistance between the TEG and the heat exchanger is modeled assuming either an ideal thermal contact or using a combined Cooper–Mikic–Yovanovich (CMY) and parallel plate gap formulation, which takes into account the contact pressure, roughness and hardness of the interface surfaces as well as the air gap thermal resistance at the interface. The combined CMY and parallel plate gap model is then further developed to simulate the thermal contact resistance for the case of an interface material. The numerical results show good agreement with the experimental data with an average deviation of 17% for the case without interface material and 12% in the case of including additional material at the interfaces. The model is then employed to evaluate the power production of the integrated system using different interface materials, including graphite, aluminum (Al), tin (Sn) and lead (Pb) in a form of thin foils. The numerical results show that lead foil at the interface has the best performance, with an improvement in power production of 34% compared to graphite foil. Finally, the model predicts that for a certain flow rate, increasing the parallel TEG channels for the integrated systems with 4, 8, and 12 TEGs
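
The role of the interface material can be illustrated with a far cruder model than the combined CMY and parallel-plate-gap formulation used in the paper: a one-dimensional series thermal network in which the foil only changes the contact resistance. All values below are hypothetical, not fitted to the experiment:

```python
# Crude 1-D series thermal network for one TEG in a heat exchanger: the
# interface foil only changes the contact resistance on either side of the
# module. All resistances (K/W), the 150 K driving difference and the power
# coefficient are hypothetical; output power is taken to scale with the
# square of the temperature difference across the TEG.

def teg_power(dT_total, R_contact, R_teg=1.0, R_exchanger=0.5, k=0.01):
    R_total = R_exchanger + 2.0 * R_contact + R_teg   # two contact interfaces
    dT_teg = dT_total * R_teg / R_total               # ΔT actually across the TEG
    return k * dT_teg ** 2

p_graphite = teg_power(150.0, R_contact=0.30)   # assumed graphite-foil contact
p_lead = teg_power(150.0, R_contact=0.10)       # assumed lead-foil contact
print(p_lead > p_graphite, round(p_lead / p_graphite, 2))
```

The lower-contact-resistance foil leaves a larger share of the total temperature difference across the TEG itself, which is the qualitative effect the paper quantifies.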

  16. Reference methodologies for radioactive controlled discharges an activity within the IAEA's Program Environmental Modelling for Radiation Safety II (EMRAS II)

    International Nuclear Information System (INIS)

    Stocki, T.J.; Bergman, L.; Tellería, D.M.; Proehl, G.; Amado, V.; Curti, A.; Bonchuk, I.; Boyer, P.; Mourlon, C.; Chyly, P.; Heling, R.; Sági, L.; Kliaus, V.; Krajewski, P.; Latouche, G.; Lauria, D.C.; Newsome, L.; Smith, J.

    2011-01-01

    In January 2009, the IAEA EMRAS II (Environmental Modelling for Radiation Safety II) programme was launched. The goals of the programme are to develop, compare and test models for the assessment of radiological impacts on the public and the environment due to radionuclides being released or already existing in the environment; to help countries build and harmonize their capabilities; and to model the movement of radionuclides in the environment. Within EMRAS II, nine working groups are active; this paper focuses on the activities of Working Group 1: Reference Methodologies for Controlling Discharges of Routine Releases. Within this working group, environmental transfer and dose assessment models are tested under different scenarios by the participating countries and the results compared. This process allows each participating country to identify characteristics of its models that need to be refined. The goal of this working group is to identify reference methodologies for the assessment of exposures to the public due to routine discharges of radionuclides to the terrestrial and aquatic environments. Several different models are being applied to estimate the transfer of radionuclides in the environment for various scenarios. The first phase of the project involves a scenario of a nuclear power reactor at a coastal location which routinely (continuously) discharges 60Co, 85Kr, 131I and 137Cs to the atmosphere and 60Co, 137Cs and 90Sr to the marine environment. In this scenario, many of the parameters and characteristics of the representative group were given to the modellers and cannot be altered. Various models have been used by the different participants in this inter-comparison (PC-CREAM, CROM, IMPACT, CLRP POSEIDON, SYMBIOSE and others). This first scenario is intended to enable a comparison of the radionuclide transport and dose modelling. These scenarios will facilitate the development of reference methodologies for controlled discharges. (authors)

  17. A near-real-time material accountancy model and its preliminary demonstration in the Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ikawa, K.; Ihara, H.; Nishimura, H.; Tsutsumi, M.; Sawahata, T.

    1983-01-01

    The study of a near-real-time (n.r.t.) material accountancy system as applied to small or medium-sized spent fuel reprocessing facilities has been carried out since 1978 under the TASTEX programme. In this study, a model of the n.r.t. accountancy system, called the ten-day-detection-time model, was developed and demonstrated in an actual operating plant. The programme was closed in May 1981, but the study has been extended. The effectiveness of the proposed n.r.t. accountancy model was evaluated by means of simulation techniques. The results showed that weekly material balances covering the entire process MBA could provide sufficient information to satisfy the IAEA guidelines for small or medium-sized facilities. The applicability of the model to the actual plant has been evaluated by a series of field tests covering four campaigns. In addition to the material accountancy data, many valuable operational data have been obtained, for example on additional locations for an in-process inventory and the time needed to take an in-process inventory. A CUMUF (cumulative MUF) chart of the resulting MUF data in the C-1 and C-2 campaigns clearly showed that there had been a measurement bias across the process MBA. This chart gave a dramatic picture of the power of the n.r.t. accountancy concept by revealing the nature of this bias, which was not clearly visible in the conventional material accountancy data. (author)
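
The CUMUF diagnostic mentioned above is easy to illustrate: a constant measurement bias that is lost in the noise of individual weekly MUF values shows up as a steady drift of the cumulative sum. A sketch with simulated balance data (all numbers hypothetical):

```python
# Simulated illustration of a CUMUF chart: weekly MUF values drawn around a
# small constant measurement bias. Individual balances look like noise; the
# cumulative sum drifts roughly linearly, exposing the bias. All numbers are
# hypothetical.

import random

random.seed(1)
BIAS = 0.2    # hypothetical constant bias per balance period (kg)
muf = [random.gauss(BIAS, 1.0) for _ in range(52)]   # one year of weekly MUF

cumuf, total = [], 0.0
for m in muf:
    total += m
    cumuf.append(total)

print(f"last weekly MUF: {muf[-1]:+.2f} kg, CUMUF after one year: {cumuf[-1]:+.2f} kg")
```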

  18. Optimal dose selection accounting for patient subpopulations in a randomized Phase II trial to maximize the success probability of a subsequent Phase III trial.

    Science.gov (United States)

    Takahashi, Fumihiro; Morita, Satoshi

    2018-02-08

    Phase II clinical trials are conducted to determine the optimal dose of the study drug for use in Phase III clinical trials while also balancing efficacy and safety. In conducting these trials, it may be important to consider subpopulations of patients grouped by background factors such as drug metabolism and kidney and liver function. Determining the optimal dose, as well as maximizing the effectiveness of the study drug by analyzing patient subpopulations, requires a complex decision-making process. In extreme cases, drug development has to be terminated due to inadequate efficacy or severe toxicity. Such a decision may be based on a particular subpopulation. We propose a Bayesian utility approach (BUART) to randomized Phase II clinical trials which uses a first-order bivariate normal dynamic linear model for efficacy and safety in order to determine the optimal dose and study population in a subsequent Phase III clinical trial. We carried out a simulation study under a wide range of clinical scenarios to evaluate the performance of the proposed method in comparison with a conventional method separately analyzing efficacy and safety in each patient population. The proposed method showed more favorable operating characteristics in determining the optimal population and dose.
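
The utility-based selection idea can be caricatured in a few lines: score every (dose, subpopulation) pair by efficacy minus weighted toxicity and take the maximizer. This deliberately ignores the paper's bivariate dynamic linear model and posterior uncertainty; the probabilities, subpopulation labels and utility weight below are hypothetical:

```python
# Caricature of utility-based dose and population selection: pick the
# (dose, subpopulation) pair maximizing efficacy minus weighted toxicity.
# The probabilities, subpopulation labels and utility weight are all
# hypothetical, not outputs of the paper's Bayesian model.

W_TOX = 1.5   # assumed penalty per unit toxicity probability

# (dose in mg, subpopulation) -> (P(efficacy), P(toxicity)), hypothetical
outcomes = {
    (10, "normal renal function"):   (0.30, 0.05),
    (20, "normal renal function"):   (0.55, 0.10),
    (40, "normal renal function"):   (0.70, 0.30),
    (10, "impaired renal function"): (0.28, 0.15),
    (20, "impaired renal function"): (0.50, 0.35),
    (40, "impaired renal function"): (0.60, 0.60),
}

def utility(p_eff, p_tox):
    return p_eff - W_TOX * p_tox

best = max(outcomes, key=lambda k: utility(*outcomes[k]))
print(best)
```

Note how the highest dose loses despite the best efficacy: the toxicity penalty dominates, which is the trade-off the paper's utility formalizes.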

  19. Modeling and analyses for an extended car-following model accounting for drivers' situation awareness from cyber physical perspective

    Science.gov (United States)

    Chen, Dong; Sun, Dihua; Zhao, Min; Zhou, Tong; Cheng, Senlin

    2018-07-01

    The driving process is a typical cyber-physical process, in which the cyber factor of traffic information is tightly coupled with the physical components of the vehicles. Meanwhile, drivers exhibit situation awareness in the driving process: they respond not only to the current traffic state but also extrapolate its changing trend. In this paper, an extended car-following model is proposed to account for drivers' situation awareness. The stability criterion of the proposed model is derived via linear stability analysis. The results show that the stable region of the proposed model is enlarged on the phase diagram compared with previous models. By employing the reductive perturbation method, the modified Korteweg-de Vries (mKdV) equation is obtained. The kink-antikink soliton of the mKdV equation reveals theoretically the evolution of traffic jams. Numerical simulations are conducted to verify the analytical results. Two typical traffic scenarios are investigated. The simulation results demonstrate that drivers' situation awareness plays a key role in traffic flow oscillations and the congestion transition.
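
As a reference point for the stability analysis mentioned above, in the classical optimal-velocity car-following model (which models of this family extend) uniform flow on a ring is linearly stable when the driver sensitivity exceeds twice the slope of the optimal-velocity function at the uniform headway. A sketch with an assumed tanh-shaped optimal-velocity function and illustrative parameters:

```python
# Reference point for the stability analysis: in the classical optimal-velocity
# (Bando-type) car-following model, uniform flow on a ring is linearly stable
# when the sensitivity k exceeds 2 * V'(h) at the uniform headway h. The
# tanh-shaped optimal-velocity function and all parameters are illustrative.

import math

def V(h, v_max=30.0, h_c=25.0, w=10.0):
    """Optimal (desired) velocity as a smooth, saturating function of headway."""
    return v_max * (math.tanh((h - h_c) / w) + math.tanh(h_c / w)) / 2.0

def V_prime(h, eps=1e-6):
    return (V(h + eps) - V(h - eps)) / (2.0 * eps)   # numerical slope

def is_linearly_stable(k, h):
    return k > 2.0 * V_prime(h)

h = 25.0   # uniform headway (m); the slope V'(h) peaks near h_c
print(round(V_prime(h), 3), is_linearly_stable(0.5, h), is_linearly_stable(3.5, h))
```

Mechanisms such as situation awareness effectively relax this condition, which is what "enlarging the stable region" means in the abstract.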

  20. Associative account of self-cognition: extended forward model and multi-layer structure

    Directory of Open Access Journals (Sweden)

    Motoaki Sugiura

    2013-08-01

    The neural correlates of self identified by neuroimaging studies differ depending on which aspects of self are addressed. Here, three categories of self are proposed based on neuroimaging findings and an evaluation of the likely underlying cognitive processes. The physical self, representing self-agency of action, body ownership, and bodily self-recognition, is supported by the sensory and motor association cortices located primarily in the right hemisphere. The interpersonal self, representing the attention or intentions of others directed at the self, is supported by several amodal association cortices in the dorsomedial frontal and lateral posterior cortices. The social self, representing the self as a collection of context-dependent social values, is supported by the ventral aspect of the medial prefrontal cortex and the posterior cingulate cortex. Despite differences in the underlying cognitive processes and neural substrates, all three categories of self are likely to share the computational characteristics of the forward model, which is underpinned by internal schema or learned associations between one’s behavioral output and the consequential input. Additionally, these three categories exist within a hierarchical layer structure based on developmental processes that updates the schema through the attribution of prediction error. In this account, most of the association cortices critically contribute to some aspect of the self through associative learning while the primary regions involved shift from the lateral to the medial cortices in a sequence from the physical to the interpersonal to the social self.

  1. Working memory load and the retro-cue effect: A diffusion model account.

    Science.gov (United States)

    Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S

    2018-02-01

    Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process, especially for visual stimuli, and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference.
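
The two mechanisms identified by the diffusion-model analysis, better evidence quality (drift rate) and shorter non-decision time, can be illustrated with a basic Wiener diffusion simulation. This is not the hierarchical model fitted in the paper; bounds, drifts and non-decision times below are invented for illustration:

```python
# Toy Wiener diffusion illustrating the two retro-cue effects reported above:
# a higher drift rate (evidence quality) and a shorter non-decision time.
# This is not the paper's hierarchical model; every parameter is invented.

import random

def simulate_trial(drift, ndt, a=1.0, dt=0.002, s=1.0, rng=random):
    """One trial: accumulate noisy evidence to +/-a, then add non-decision time."""
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += drift * dt + s * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return ndt + t, x > 0   # (response time in s, correct response?)

def summary(drift, ndt, n=500, seed=0):
    rng = random.Random(seed)
    trials = [simulate_trial(drift, ndt, rng=rng) for _ in range(n)]
    rts = [rt for rt, _ in trials]
    acc = sum(ok for _, ok in trials) / n
    return sum(rts) / n, acc

rt_cued, acc_cued = summary(drift=2.0, ndt=0.25)      # retro-cue condition
rt_uncued, acc_uncued = summary(drift=1.0, ndt=0.35)  # no-cue condition
print(rt_cued < rt_uncued, acc_cued > acc_uncued)
```

Raising drift and lowering non-decision time jointly produce the faster, more accurate cued responses the empirical data show.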

  2. The BUMP model of response planning: intermittent predictive control accounts for 10 Hz physiological tremor.

    Science.gov (United States)

    Bye, Robin T; Neilson, Peter D

    2010-10-01

    Physiological tremor during movement is characterized by ∼10 Hz oscillation observed both in the electromyogram activity and in the velocity profile. We propose that this particular rhythm occurs as the direct consequence of a movement response planning system that acts as an intermittent predictive controller operating at discrete intervals of ∼100 ms. The BUMP model of response planning describes such a system. It forms the kernel of Adaptive Model Theory which defines, in computational terms, a basic unit of motor production or BUMP. Each BUMP consists of three processes: (1) analyzing sensory information, (2) planning a desired optimal response, and (3) execution of that response. These processes operate in parallel across successive sequential BUMPs. The response planning process requires a discrete-time interval in which to generate a minimum acceleration trajectory to connect the actual response with the predicted future state of the target and compensate for executional error. We have shown previously that a response planning time of 100 ms accounts for the intermittency observed experimentally in visual tracking studies and for the psychological refractory period observed in double stimulation reaction time studies. We have also shown that simulations of aimed movement, using this same planning interval, reproduce experimentally observed speed-accuracy tradeoffs and movement velocity profiles. Here we show, by means of a simulation study of constant velocity tracking movements, that employing a 100 ms planning interval closely reproduces the measurement discontinuities and power spectra of electromyograms, joint-angles, and angular velocities of physiological tremor reported experimentally. We conclude that intermittent predictive control through sequential operation of BUMPs is a fundamental mechanism of 10 Hz physiological tremor in movement.
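
The core claim, that re-planning at ~100 ms intervals imprints a ~10 Hz rhythm on movement, can be illustrated without the full BUMP machinery: if each planning interval executes a fresh catch-up sub-movement, the steady-state velocity profile repeats every 100 ms and its spectrum has its fundamental at 10 Hz. A toy sketch (the parabolic sub-movement shape and its amplitude are invented):

```python
# Toy spectral demonstration: if each ~100 ms planning interval executes a
# fresh catch-up sub-movement, the steady-state tracking velocity repeats
# every interval and its spectrum peaks at 1 / 0.1 s = 10 Hz. The parabolic
# sub-movement shape and its amplitude are invented for illustration.

import cmath

Tp = 0.1         # planning interval (s)
dt = 0.0025      # 400 Hz sampling
T = 2.0          # 2 s of steady-state tracking
n = int(T / dt)

def velocity(t):
    """Cruise speed plus one repeating, zero-to-zero sub-movement per interval."""
    tau = (t % Tp) / Tp
    return 1.0 + 0.3 * 6.0 * tau * (1.0 - tau)

vel = [velocity(i * dt) for i in range(n)]
mean_v = sum(vel) / n

def amplitude(freq_hz):
    """Magnitude of the Fourier sum of the velocity fluctuations at freq_hz."""
    return abs(sum((vel[j] - mean_v) * cmath.exp(-2j * cmath.pi * freq_hz * j * dt)
                   for j in range(n)))

peak = max(range(1, 41), key=amplitude)   # scan 1..40 Hz in 1 Hz steps
print(peak)   # 10
```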

  3. Design of a Competency-Based Assessment Model in the Field of Accounting

    Science.gov (United States)

    Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús

    2012-01-01

    This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…

  4. Models of infrared emission from dusty and diffuse H II regions

    International Nuclear Information System (INIS)

    Aannestad, P.A.

    1978-01-01

    Models for the infrared emission from amorphous core-mantle dust within diffuse (n_e ∼ 10³ cm⁻³) H II regions with neutral shells that are optically thin in the infrared have been calculated. The icy mantles sublimate only within a fractional radius of 0.2-0.5, affecting the overall gas-to-dust ratio only slightly. A region with variable grain composition may have a much smaller infrared luminosity than a similar region with uniform grain properties. Calculations of the total infrared luminosity, the relative contribution by Lα photons, the infrared spectral distribution, and the size of the dust-depleted regions are presented as functions of the ultraviolet optical depths in the ionized and neutral regions and for stellar temperatures of 35,000 and 48,000 K. Comparison with observations indicates that at least 20% of the Lyman-continuum photons are absorbed by the dust, and that the dust optical depth in the Lyman continuum is likely to be of the order of unity. For core-mantle grains most of the infrared energy is emitted between 30 and 70 μm, relatively independent of whether the dust is within or outside the H II region. Amorphous silicate particles tend to emit more energy below 30 μm, but also emit efficiently at far-infrared wavelengths. To illustrate the model calculations, we present infrared spectra for the Orion A region and compare them with observed fluxes, accounting for beam-width effects. Reasonable agreement is obtained with most of the near- to middle-infrared observations if the total ultraviolet optical depth is about unity and about equally divided between the ionized region and an outside neutral shell. Intensity profiles for Orion A are presented for wavelengths in the range 20-1000 μm, and show a strong increase in width beyond 20 μm

  5. Materials measurement and accounting in an operating plutonium conversion and purification process. Phase I. Process modeling and simulation

    International Nuclear Information System (INIS)

    Thomas, C.C. Jr.; Ostenak, C.A.; Gutmacher, R.G.; Dayem, H.A.; Kern, E.A.

    1981-04-01

    A model of an operating conversion and purification process for the production of reactor-grade plutonium dioxide was developed as the first component in the design and evaluation of a nuclear materials measurement and accountability system. The model accurately simulates process operation and can be used to identify process problems and to predict the effect of process modifications.

  6. Carbon footprint estimator, phase II : volume I - GASCAP model & volume II - technical appendices [technical brief].

    Science.gov (United States)

    2014-03-01

    This study resulted in the development of the GASCAP model (the Greenhouse Gas Assessment : Spreadsheet for Transportation Capital Projects). This spreadsheet model provides a user-friendly interface for determining the greenhouse gas (GHG) emissions...

  7. Accounting for water management issues within hydrological simulation: Alternative modelling options and a network optimization approach

    Science.gov (United States)

    Efstratiadis, Andreas; Nalbantis, Ioannis; Rozos, Evangelos; Koutsoyiannis, Demetris

    2010-05-01

    In mixed natural and artificialized river basins, many complexities arise due to anthropogenic interventions in the hydrological cycle, including abstractions from surface water bodies, groundwater pumping or recharge, and water returns through drainage systems. Typical engineering approaches adopt a multi-stage modelling procedure, with the aim of handling the complexity of process interactions and the lack of measured abstractions. In this context, the entire hydrosystem is separated into natural and artificial sub-systems or components; the natural ones are modelled individually, and their predictions (i.e. hydrological fluxes) are transferred to the artificial components as inputs to a water management scheme. To account for the interactions between the various components, an iterative procedure is essential, whereby the outputs of the artificial sub-systems (i.e. abstractions) become inputs to the natural ones. However, this strategy suffers from multiple shortcomings, since it presupposes that purely natural sub-systems can be located and that sufficient information is available for each sub-system modelled, including suitable, i.e. "unmodified", data for calibrating the hydrological component. In addition, implementing such a strategy is ineffective when the entire scheme runs in stochastic simulation mode. To cope with the above drawbacks, we developed a generalized modelling framework following a network optimization approach. This originates from graph theory, which has been successfully implemented within some advanced computer packages for water resource systems analysis. The user formulates a unified system comprising the hydrographical network and the typical components of a water management network (aqueducts, pumps, junctions, demand nodes, etc.). Input data for the latter include hydraulic properties, constraints, targets, priorities and operation costs. The real-world system is described through a conceptual graph, whose dummy properties
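
The allocation step such a network model performs at each time step can be caricatured as priority-ordered delivery limited by supply and conveyance capacity. A deliberately minimal sketch with hypothetical demands; a real implementation would solve this as a minimum-cost flow problem over the conceptual graph:

```python
# Deliberately minimal priority-driven allocation step: demands are served in
# priority order, each limited by remaining supply and by the capacity of the
# (single, hypothetical) conveyance serving it. A real system would pose this
# as a minimum-cost flow problem on the conceptual graph.

supply = 100.0   # water available at the source node (hm3)

# (demand node, required volume, conveyance capacity, priority: lower served first)
demands = [
    ("irrigation", 60.0, 50.0, 2),
    ("city",       40.0, 45.0, 1),
    ("industry",   30.0, 30.0, 3),
]

allocation = {}
remaining = supply
for name, need, capacity, _priority in sorted(demands, key=lambda d: d[3]):
    granted = min(need, capacity, remaining)
    allocation[name] = granted
    remaining -= granted

print(allocation, "unallocated:", remaining)
```

The lowest-priority demand absorbs whatever shortfall remains, which is the behaviour targets and priorities encode in the full framework.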

  8. Process Accounting

    OpenAIRE

    Gilbertson, Keith

    2002-01-01

    Standard utilities can help you collect and interpret your Linux system's process accounting data. Describes the uses of process accounting, standard process accounting commands, and example code that makes use of process accounting utilities.

  9. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated against an observed reference but are objectively derived from the simulated climatology. The choice of model-dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures reasonably well the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. The blocking underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent; these biases are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that, in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  10. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards

    International Nuclear Information System (INIS)

    Xavier, Roberto Salles

    2014-01-01

    The subjects of this research are accountability regimes, organizations of global governance, and the institutional arrangements for the global governance of nuclear non-proliferation and for the Brazilian-Argentine system of mutual vigilance over nuclear safeguards. The starting point is the importance of the institutional model of global governance for the effective control of the non-proliferation of nuclear weapons. In this context, the research investigates how the current arrangements of international nuclear non-proliferation are structured and how the Brazilian-Argentine mutual-vigilance model for nuclear safeguards performs with respect to the accountability regimes of global governance. To that end, the current literature was surveyed along three theoretical dimensions: accountability, global governance, and global governance organizations. The research method was a case study, with content analysis as the data-treatment technique. The results made it possible to establish an evaluation model based on accountability mechanisms; to assess how the Brazilian-Argentine mutual-vigilance model for nuclear safeguards behaves with respect to the proposed accountability regime; and to measure the degree to which regional arrangements that work with systems of global governance can strengthen those international systems. (author)

  11. Account of the Pauli principle in the quasiparticle-phonon nuclear model

    International Nuclear Information System (INIS)

    Molina, Kh.L.

    1980-01-01

    The effects of correlations in the ground states of even-even deformed nuclei on their one- and two-phonon states are studied in terms of the semimicroscopic nuclear theory. A secular equation for one-phonon excitations is derived which takes into account, on average, the exact commutation relations between quasiparticle operators. It is demonstrated that accounting for the ground-state correlations can significantly influence the values of the two-phonon components of the wave function.

  12. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements.

    Science.gov (United States)

    Heil, Peter; Matysiak, Artur; Neubauer, Heinrich

    2017-09-01

    Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC
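
The Poisson idea sketched above already yields temporal integration in its simplest form: if stimulus-driven events arrive at a rate proportional to intensity and detection requires at least one event, the intensity needed for criterion performance falls inversely with duration. A sketch with a hypothetical rate-intensity mapping (the paper's actual model uses time-varying rates and a 3I-3AFC decision rule):

```python
# Simplest Poisson-detection account of temporal integration: stimulus-driven
# events arrive at a rate proportional to intensity, detection requires at
# least one event, so criterion intensity falls as 1/duration. The linear
# rate-intensity mapping and criterion are hypothetical; the paper's model
# uses time-varying rates and a 3I-3AFC decision rule.

import math

RATE_PER_UNIT = 5.0   # assumed events per second per unit intensity

def p_detect(intensity, duration):
    lam = RATE_PER_UNIT * intensity * duration   # expected stimulus-driven events
    return 1.0 - math.exp(-lam)                  # P(at least one event)

def threshold(duration, criterion=0.75):
    """Intensity at which p_detect reaches the criterion (closed form)."""
    return -math.log(1.0 - criterion) / (RATE_PER_UNIT * duration)

for d in (0.01, 0.1, 1.0):
    print(f"duration {d:>5} s -> threshold {threshold(d):.3f}")
```

Thresholds falling with duration, and doing so differently for different envelopes once the rate varies over time, is the qualitative pattern the full model is built to capture.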

  13. A semiconductor device thermal model taking into account non-linearity and multipathing of the cooling system

    International Nuclear Information System (INIS)

    Górecki, K; Zarȩbski, J

    2014-01-01

    The paper is devoted to modelling the thermal properties of semiconductor devices at the steady state. A dc thermal model of a semiconductor device taking the multipath heat flow into account is proposed. Results of calculations and measurements of the thermal resistance of a power MOSFET operating under different cooling conditions are presented. The calculated results fit the measurements, which supports the validity of the proposed model.
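
A minimal sketch of the multipath idea: several parallel heat-flow paths between junction and ambient, each possibly nonlinear (a thermal resistance that depends on temperature), solved by fixed-point iteration. All resistances and coefficients below are illustrative assumptions, not values from the paper:

```python
def junction_temperature(p_d, t_amb, paths, tol=1e-9):
    """Steady-state junction temperature (deg C) for power p_d (W)
    flowing to ambient t_amb through several parallel heat-flow paths.
    Each path is a callable Rth(t_j) -> K/W, so a temperature-dependent
    (nonlinear) thermal resistance can be modelled; the balance
    t_j = t_amb + p_d / sum(1/Rth_i) is solved by fixed-point iteration."""
    t_j = t_amb
    for _ in range(1000):
        g = sum(1.0 / r(t_j) for r in paths)   # parallel thermal conductances
        t_next = t_amb + p_d / g
        done = abs(t_next - t_j) < tol
        t_j = t_next
        if done:
            break
    return t_j

# illustrative two-path example: a case-to-heatsink path whose resistance
# grows slightly with temperature, plus a weak direct-to-air path
paths = [lambda tj: 2.0 * (1.0 + 0.002 * (tj - 25.0)),  # K/W
         lambda tj: 40.0]                                # K/W
t_j = junction_temperature(p_d=10.0, t_amb=25.0, paths=paths)
```

With a single constant 2 K/W path the answer reduces to the textbook t_amb + P·Rth = 45 °C; the second path and the nonlinearity shift it slightly.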

  14. Accounting for Framing-Effects - an informational approach to intensionality in the Bolker-Jeffrey decision model

    OpenAIRE

    Bourgeois-Gironde , Sacha; Giraud , Raphaël

    2005-01-01

    We subscribe to an account of framing effects in decision theory in terms of an inference to background information by the hearer when a speaker uses a certain frame while other equivalent frames were also available. This account was sketched by Craig McKenzie. We embed it in the Bolker-Jeffrey decision model (or logic of action); one main reason for this is that this latter model makes preferences bear on propositions. We can deduce a given anomaly or cognitive bias (namely framing effects) in...

  15. NSLS-II: Nonlinear Model Calibration for Synchrotrons

    Energy Technology Data Exchange (ETDEWEB)

    Bengtsson, J.

    2010-10-08

    This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods in the field of particle accelerators has been naive and misleading, i.e., it ignores the impact of noise, we elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ~1 x 10^-5 for 1024 turns (to calibrate the linear optics) and ~1 x 10^-4 for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ~0.1, and the transverse damping time is ~20 msec, i.e., ~4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy of these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For, e.g., N = 256 and 5% noise we obtain δν ~ 1 x 10^-5. A comparison with the state of the art in, e.g., telecom and electrical engineering since the 60s is quite revealing. For example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s. Or Claude Shannon et al
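
The frequency-estimation step can be sketched with a windowed DFT plus parabolic interpolation of the peak, a common interpolated-DFT recipe (not the specific method of the note); even on noiseless data it resolves the tune far below the 1/N bin spacing:

```python
import cmath
import math

def estimate_tune(x):
    """Estimate a fractional tune nu in (0, 0.5) from turn-by-turn data:
    Hann-window the mean-subtracted signal, compute DFT magnitudes,
    locate the largest bin and refine with a parabolic fit to the
    log-magnitudes of the three bins around the peak.  Assumes the
    peak does not sit at the edge of the half-spectrum."""
    n = len(x)
    mean = sum(x) / n
    w = [0.5 - 0.5 * math.cos(2.0 * math.pi * i / n) for i in range(n)]
    y = [wi * (xi - mean) for wi, xi in zip(w, x)]
    mag = []
    for k in range(1, n // 2):
        s = sum(yi * cmath.exp(-2j * math.pi * k * i / n)
                for i, yi in enumerate(y))
        mag.append(abs(s))
    j = mag.index(max(mag))          # mag[j] is DFT bin k = j + 1
    a, b, c = math.log(mag[j - 1]), math.log(mag[j]), math.log(mag[j + 1])
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)
    return (j + 1 + delta) / n
```

For a clean 256-turn cosine at tune 0.123 this recovers the tune to well under one part in a thousand, compared with a raw bin spacing of 1/256 ≈ 0.004; noise, as the note stresses, then sets the real limit.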

  16. NSLS-II: Nonlinear Model Calibration for Synchrotrons

    International Nuclear Information System (INIS)

    Bengtsson, J.

    2010-01-01

    This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods in the field of particle accelerators has been naive and misleading, i.e., it ignores the impact of noise, we elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ~1 x 10^-5 for 1024 turns (to calibrate the linear optics) and ~1 x 10^-4 for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ~0.1, and the transverse damping time is ~20 msec, i.e., ~4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy of these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For, e.g., N = 256 and 5% noise we obtain δν ~ 1 x 10^-5. A comparison with the state of the art in, e.g., telecom and electrical engineering since the 60s is quite revealing. For example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s. Or Claude Shannon et al. since the 40s, for that matter. Conclusion: what

  17. Financial Accounting: Classifications and Standard Terminology for Local and State School Systems. State Educational Records and Reports Series: Handbook II, Revised.

    Science.gov (United States)

    Roberts, Charles T., Comp.; Lichtenberger, Allan R., Comp.

    This handbook has been prepared as a vehicle or mechanism for program cost accounting and as a guide to standard school accounting terminology for use in all types of local and intermediate education agencies. In addition to classification descriptions, program accounting definitions, and proration of cost procedures, some units of measure and…

  18. Scaled Model Technology for Flight Research of General Aviation Aircraft, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Our proposed future Phase II activities are aimed at developing a scientifically based "tool box" for flight research using scaled models. These tools will be of...

  19. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    Science.gov (United States)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term that accounts for interactions between resolved scales and unresolved scales, and the Reynolds term that accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models are thus desirable: models that better conform to the physics of the true SGS stress while remaining stable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term: in terms of dissipation spectra, and in terms of the probability density function (pdf) of dissipation in physical space, both positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross-term part which is built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is also the term that produces backscatter. It also has an eddy-viscosity multiscale model part that

  20. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    Directory of Open Access Journals (Sweden)

    Hugues Santin-Janin

    Full Text Available BACKGROUND: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., of density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we present a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which the sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compare our results to those of a standard approach neglecting sampling variance. We show that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at reducing the bias of the classical estimator of the synchrony strength. CONCLUSION/SIGNIFICANCE: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. 
We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for
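
The downward bias of the naive zero-lag correlation under independent sampling error is easy to demonstrate with a toy simulation (purely illustrative, and far simpler than the paper's Bayesian state-space model):

```python
import random

def pearson(x, y):
    """Plain Pearson zero-lag correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def simulate(n=2000, rho=0.8, obs_sd=1.0, seed=42):
    """Two log-abundance series driven by a shared 'Moran' term so their
    true synchrony is rho; independent observation noise of sd obs_sd
    is then added to each, mimicking sampling error."""
    rng = random.Random(seed)
    common = [rng.gauss(0, 1) for _ in range(n)]
    a, b = rho ** 0.5, (1 - rho) ** 0.5
    x = [a * c + b * rng.gauss(0, 1) for c in common]
    y = [a * c + b * rng.gauss(0, 1) for c in common]
    xo = [v + rng.gauss(0, obs_sd) for v in x]
    yo = [v + rng.gauss(0, obs_sd) for v in y]
    return pearson(x, y), pearson(xo, yo)

true_sync, naive_sync = simulate()
```

With unit process variance and unit observation variance, the naive estimate is attenuated toward roughly half the true synchrony, which is exactly the kind of bias the state-space approach is designed to remove.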

  1. Basic equations of quasiparticle-phonon model of nucleus with account of Pauli principle and phonons interactions in ground state

    International Nuclear Information System (INIS)

    Voronov, V.V.; Dang, N.D.

    1984-01-01

    The system of equations enabling one to calculate the energy and the structure of excited states described by a wave function containing one- and two-phonon components was obtained in the framework of the quasiparticle-phonon model. The requirements of the Pauli principle for the two-phonon components and phonon correlations in the ground state of the nucleus are taken into account

  2. Closing the Gaps: Taking into Account the Effects of Heat Stress and Fatigue Modeling in an Operational Analysis

    NARCIS (Netherlands)

    Woodill, G.; Barbier, R.R.; Fiamingo, C.

    2010-01-01

    Traditional, combat-model-based analysis of Dismounted Combatant Operations (DCO) has focused on the 'lethal' aspects of an engagement and, to a limited extent, the environment in which the engagement takes place. These are, however, only two of the factors that should be taken into account when

  3. Current-account effects of a devaluation in an optimizing model with capital accumulation

    DEFF Research Database (Denmark)

    Nielsen, Søren Bo

    1991-01-01

    short, the devaluation is bound to improve the current account on impact, whereas this will deteriorate in the case of a long contract period, and the more so the smaller are adjustment costs in investment. In addition, we study the consequences for the terms of trade and for the stocks of foreign...

  4. Accounting Department Chairpersons' Perceptions of Business School Performance Using a Market Orientation Model

    Science.gov (United States)

    Webster, Robert L.; Hammond, Kevin L.; Rothwell, James C.

    2013-01-01

    This manuscript is part of a stream of continuing research examining market orientation within higher education and its potential impact on organizational performance. The organizations researched are business schools and the data collected came from chairpersons of accounting departments of AACSB member business schools. We use a reworded Narver…

  5. Theory of extended stellar atmospheres. II. A grid of static spherical models for O stars and planetary nebula nuclei

    International Nuclear Information System (INIS)

    Kunasz, P.B.; Hummer, D.G.; Mihalas, D.

    1975-01-01

    Spherical static non-LTE model atmospheres are presented for stars with M/M_sun = 30 and 60 at various points on their evolutionary tracks, and for some nuclei of planetary nebulae at two points of a modified Harman-Seaton sequence. The method of Mihalas and Hummer was employed, which uses a parametrized radiation force multiplier to simulate the force of radiation arising from the entire line spectrum. However, in the present work the density structure computed in the LTE models was held fixed in the calculation of the corresponding non-LTE models; in addition, the opacity of an "average light ion" was taken into account. The temperatures for the non-LTE models are generally lower, at a given depth, than for the corresponding LTE models when T_eff < 45,000 K, while the situation is reversed at higher temperatures. The continuous energy distributions are generally flattened by extension. The Lyman jump is in emission for extended models of massive stars, but never for the models of nuclei of planetary nebulae (this is primarily a temperature effect). The Balmer jumps are always in absorption. The Lyman lines are in emission, and the Balmer lines in absorption; He II λ4686 comes into emission in the most extended models without hydrogen line pumping, showing that it is an indicator of atmospheric extension. Very severe limb darkening is found for extended models, which have apparent angular sizes significantly smaller than expected from the geometrical size of the star. Extensive tables are given of monochromatic magnitudes, continuum jumps and gradients, Strömgren-system colors, monochromatic extensions, and the profiles and equivalent widths of the hydrogen lines for all models, and of the He II lines for some of the 60 M_sun models

  6. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology.

    Science.gov (United States)

    Tompkins, Adrian M; Ermert, Volker

    2013-02-18

    The relative roles of climate variability and population related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Model simulations of entomological inoculation rate and circumsporozoite protein rate compare well to data from field studies from a wide range of locations in West Africa that encompass both seasonal endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine spatial resolution regional integrations for Eastern Africa reproduce the malaria atlas project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model grossly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities probably due to the neglect of population migration. A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions.

  7. Physical and Theoretical Models of Heat Pollution Applied to Cramped Conditions Welding Taking into Account the Different Types of Heat

    Science.gov (United States)

    Bulygin, Y. I.; Koronchik, D. A.; Legkonogikh, A. N.; Zharkova, M. G.; Azimova, N. N.

    2017-05-01

    The standard k-epsilon turbulence model, adapted for welding workshops equipped with fixed workstations with sources of pollution, took into account only the convective component of heat transfer, which is quite reasonable for large-volume rooms (with a low density distribution of pollution sources); indeed, the results of model calculations taking only the convective component into account correlated well with experimental data. For the purposes of this study, where we are dealing with a small confined space in which bodies heated to a high temperature (for welding) and located next to each other must be treated as additional sources of heat, radiative heat exchange can no longer be neglected. The task was to experimentally investigate the various types of heat transfer in a limited closed space for welding, and the behavior of a mathematical model describing the contributions of the various components of heat exchange, including radiation, which influence the formation of fields of concentration, temperature, air movement and thermal stress in the test environment. Field experiments conducted on a model cubic body allowed the model of heat and mass transfer processes to be configured and debugged with the help of the developed approaches; comparing the measured air flow velocities and temperatures with the calculated data showed qualitative and quantitative agreement between process parameters, which is an indicator of the adequacy of the heat and mass transfer model.
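
The point that radiation cannot be neglected for welding-hot bodies follows from the elementary flux formulas: convection scales linearly with the temperature difference while radiation scales with the fourth powers. The heat-transfer coefficient, emissivity and temperatures below are illustrative assumptions, not values from the paper:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def heat_fluxes(t_s, t_air, h=10.0, eps=0.8):
    """Convective vs. radiative heat flux (W/m^2) from a hot surface at
    t_s (K) to surroundings at t_air (K).  h and eps are illustrative
    values for free convection and an oxidised steel surface."""
    q_conv = h * (t_s - t_air)
    q_rad = eps * SIGMA * (t_s ** 4 - t_air ** 4)
    return q_conv, q_rad

# a merely warm wall (50 C) in a 20 C room vs. a welding-hot body (600 C)
qc_warm, qr_warm = heat_fluxes(323.15, 293.15)
qc_hot, qr_hot = heat_fluxes(873.15, 293.15)
```

For the warm wall the convective flux dominates, while for the welding-hot body the radiative flux exceeds the convective one severalfold, which is the regime the abstract argues a confined-space model must resolve.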

  8. Lanchester-Type Models of Warfare, Volume II

    OpenAIRE

    Taylor, James G.

    1980-01-01

    This monograph is a comprehensive treatise on Lanchester-type models of warfare, i.e. differential-equation models of attrition in force-on-force combat operations. Its goal is to provide both an introduction to and a current state-of-the-art overview of Lanchester-type models of warfare, as well as a comprehensive and unified in-depth treatment of them. Both deterministic and stochastic models are considered. Such models have been widely used in the United States and elsewhere for the...
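
The simplest deterministic model of the family such monographs treat, the classical 'aimed fire' square-law pair of ODEs, can be sketched in a few lines; the coefficients and force sizes here are arbitrary illustrations, not taken from the monograph:

```python
def lanchester_square(r0, b0, a, b, dt=1e-4):
    """Classical deterministic Lanchester 'aimed fire' attrition:
        dR/dt = -b * B,   dB/dt = -a * R
    with initial strengths r0, b0 and effectiveness coefficients a, b,
    integrated by forward Euler until one side is annihilated.
    Returns the surviving strengths (R, B)."""
    R, B = float(r0), float(b0)
    while R > 0.0 and B > 0.0:
        R, B = R - b * B * dt, B - a * R * dt
    return max(R, 0.0), max(B, 0.0)
```

The quantity a·R² − b·B² is conserved along trajectories (the 'square law'), so with equal effectiveness a force of 100 against 50 should survive with about sqrt(100² − 50²) ≈ 86.6 units, which the integration reproduces.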

  9. Study of the application of a near-real-time materials accountancy system for a model plutonium conversion plant

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Ikawa, Koji

    1986-11-01

    An assessment was made of the potential capability of a near-real-time materials accountancy system for a model plutonium conversion plant. To this end, a computer simulation system, DYSAS-C, was developed and evaluated through this assessment study. The study showed that the NRTA system could be used not only as a good operator's accounting system but also as a useful inspectorate system for detecting an abrupt diversion. It also showed, however, that a more elaborate NRTA system, not yet evaluated in this study, should be considered if the detection of protracted diversion is to be improved. (author)

  10. Development of nonfibrotic left ventricular hypertrophy in an ANG II-induced chronic ovine hypertension model

    DEFF Research Database (Denmark)

    Klatt, Niklas; Scherschel, Katharina; Schad, Claudia

    2016-01-01

    setting. Therefore, the aim of this study was to establish a minimally invasive ovine hypertension model using chronic angiotensin II (ANG II) treatment and to characterize its effects on cardiac remodeling after 8 weeks. Sheep were implanted with osmotic minipumps filled with either vehicle control (n...... = 7) or ANG II (n = 9) for 8 weeks. Mean arterial blood pressure in the ANG II-treated group increased from 87.4 ± 5.3 to 111.8 ± 6.9 mmHg (P = 0.00013). Cardiovascular magnetic resonance imaging showed an increase in left ventricular mass from 112 ± 12.6 g to 131 ± 18.7 g after 7 weeks (P = 0...... any differences in epicardial conduction velocity and heterogeneity. These data demonstrate that chronic ANG II treatment using osmotic minipumps presents a reliable, minimally invasive approach to establish hypertension and nonfibrotic LVH in sheep....

  11. Dynamic modeling and simulation of EBR-II steam generator system

    International Nuclear Information System (INIS)

    Berkan, R.C.; Upadhyaya, B.R.

    1989-01-01

    This paper presents a low-order dynamic model of the Experimental Breeder Reactor-II (EBR-II) steam generator system. The model development includes the application of energy, mass and momentum balance equations in state-space form. The model also includes a three-element controller for the drum water level control problem. The simulation results for low-level perturbations exhibit the inherently stable characteristics of the steam generator. The predictions of test transients also verify the consistency of this low-order model

  12. Modeling of normal contact of elastic bodies with surface relief taken into account

    Science.gov (United States)

    Goryacheva, I. G.; Tsukanov, I. Yu

    2018-04-01

    An approach to accounting for the surface relief in normal contact problems for rough bodies, on the basis of an additional displacement function for asperities, is considered. The method and analytic expressions for calculating the additional displacement function for one-scale and two-scale wavy relief are presented. The influence of the microrelief geometric parameters, including the number of scales and the asperity density, on the additional displacements of the rough layer is analyzed.

  13. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    Science.gov (United States)

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…
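
The AFM functional form mentioned here is standard: the log-odds of success is a student proficiency term plus, for each required skill, an easiness term and a learning-rate term scaled by prior practice opportunities. The parameter values, skill names, and the simple multiplicative slip adjustment below are illustrative assumptions, not the paper's fitted model:

```python
import math

def afm_probability(theta, skills, opportunities, beta, gamma):
    """Additive Factors Model success probability for one student-step:
        logit(p) = theta + sum over k in skills of (beta[k] + gamma[k] * opportunities[k])
    where theta is student proficiency, beta[k] the skill easiness and
    gamma[k] the learning rate per prior practice opportunity."""
    z = theta + sum(beta[k] + gamma[k] * opportunities[k] for k in skills)
    return 1.0 / (1.0 + math.exp(-z))

beta = {"skill-A": -0.5}   # made-up easiness
gamma = {"skill-A": 0.3}   # made-up learning rate
p_first = afm_probability(0.2, ["skill-A"], {"skill-A": 0}, beta, gamma)
p_fifth = afm_probability(0.2, ["skill-A"], {"skill-A": 4}, beta, gamma)

# one simple way to represent slipping (a false negative): scale the
# success probability by (1 - slip_rate)
p_fifth_slip = (1.0 - 0.1) * p_fifth
```

Practice opportunities raise the predicted success probability monotonically; a slip term then caps observed performance below the latent probability, which is the kind of false-negative effect the paper studies.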

  14. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    Science.gov (United States)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

    In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae connecting the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
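
The recurrence mechanism can be sketched with a minimal Kermack-McKendrick-style SIR model in which births refill the susceptible pool until it again exceeds the epidemic threshold. The parameters are toy values chosen only to produce recurrent outbreaks on a short timescale, not the paper's fitted ones, and a small import term stands in for reintroduction of the virus:

```python
def measles_sir(n=10_000, beta=1.5, gamma=0.125, mu=5e-4,
                imports=0.01, days=1500, dt=0.05):
    """SIR toy for an unvaccinated community of size n: transmission
    rate beta and recovery rate gamma (per day) give R0 = beta/gamma;
    births add mu*n susceptibles per day; a tiny constant import of
    infections avoids the unrealistically deep deterministic troughs.
    Returns the time series of infecteds (one value per Euler step)."""
    s, i = n - 1.0, 1.0
    series = []
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n
        ds = mu * n - new_inf
        di = new_inf - gamma * i + imports
        s, i = s + ds * dt, i + di * dt
        series.append(i)
    return series

def upcrossings(series, level):
    """Count rises through `level` - a crude count of distinct outbreaks."""
    return sum(1 for p, q in zip(series, series[1:]) if p < level <= q)
```

After the first epidemic exhausts the susceptibles, births rebuild the pool past the threshold n/R0 and a second outbreak ignites, so the infected series crosses any moderate level more than once; the interval between crossings plays the role of the twelve-year period in the paper's calibrated model.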

  15. Computational Models for Nonlinear Aeroelastic Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  16. Physical Modeling for Anomaly Diagnostics and Prognostics, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Ridgetop developed an innovative, model-driven anomaly diagnostic and fault characterization system for electromechanical actuator (EMA) systems to mitigate...

  17. Model Updating Nonlinear System Identification Toolbox, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — ZONA Technology (ZONA) proposes to develop an enhanced model updating nonlinear system identification (MUNSID) methodology that utilizes flight data with...

  18. Alternative biosphere modeling for safety assessment of HLW disposal taking account of geosphere-biosphere interface of marine environment

    International Nuclear Information System (INIS)

    Kato, Tomoko; Ishiguro, Katsuhiko; Naito, Morimasa; Ikeda, Takao; Little, Richard

    2001-03-01

    In the safety assessment of a high-level radioactive waste (HLW) disposal system, it is necessary to estimate the radiological impacts on future human beings arising from potential radionuclide releases from a deep repository into the surface environment. In order to estimate these impacts, a biosphere model is developed by reasonably assuming radionuclide migration processes in the surface environment and relevant human lifestyles. It is important to modify the present biosphere models, or to develop alternative ones, according to the quality and quantity of the information acquired through the siting process for constructing the repository. In this study, alternative biosphere models were developed taking the geosphere-biosphere interface of the marine environment into account. Moreover, the flux-to-dose conversion factors calculated with these alternative biosphere models were compared with those from the present basic biosphere models. (author)

  19. Hydrogeological modelling of the eastern region of Areco river locally detailed on Atucha I and II nuclear power plants area

    International Nuclear Information System (INIS)

    Grattone, Natalia I.; Fuentes, Nestor O.

    2009-01-01

    The water flow behaviour of the Pampeano aquifer was modeled using the Visual MODFLOW software package 2.8.1 under the assumption of a free aquifer, within the region of the Areco river extending to the 'Cañada Honda' and 'de la Cruz' rivers. A steady-state regime was simulated, and grid refinement allowed locally detailed calculations in the area of the Atucha I and II nuclear power plants, in order to compute unsteady situations arising from water flow variations from and to the aquifer, enabling the model to study the movement of possible contaminant particles in the hydrogeologic system. In this work the effects of the rivers, the recharge conditions and the flow lines are analyzed, always taking into account the range of reliability of the obtained results and considering the incidence of uncertainties introduced by the data input system and by the estimates and interpolation of the parameters used. (author)

  20. The Adsorption of Cd(II) on Manganese Oxide Investigated by Batch and Modeling Techniques

    Directory of Open Access Journals (Sweden)

    Xiaoming Huang

    2017-09-01

    Full Text Available Manganese (Mn) oxide is a ubiquitous metal oxide in sub-environments. The adsorption of Cd(II) on Mn oxide as a function of adsorption time, pH, ionic strength, temperature, and initial Cd(II) concentration was investigated by batch techniques. The adsorption kinetics showed that the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by a pseudo-second-order kinetic model with high correlation coefficients (R2 > 0.999). The adsorption of Cd(II) on Mn oxide significantly decreased with increasing ionic strength at pH < 5.0, whereas Cd(II) adsorption was independent of ionic strength at pH > 6.0, which indicated that outer-sphere and inner-sphere surface complexation dominated the adsorption of Cd(II) on Mn oxide at pH < 5.0 and pH > 6.0, respectively. The maximum adsorption capacity of Mn oxide for Cd(II) calculated from the Langmuir model was 104.17 mg/g at pH 6.0 and 298 K. The thermodynamic parameters showed that the adsorption of Cd(II) on Mn oxide was an endothermic and spontaneous process. According to the results of surface complexation modeling, the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by ion exchange sites (X2Cd) at low pH and by inner-sphere surface complexation sites (SOCd+ and SO2CdOH− species) at high pH conditions. The findings presented herein play an important role in understanding the fate and transport of heavy metals at the water-mineral interface.
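
The pseudo-second-order fit reported in such batch studies is conventionally done through the linearisation t/q_t = 1/(k2·qe²) + t/qe. A sketch with made-up synthetic data (the equilibrium capacity is chosen near the 104 mg/g reported above, but the numbers are illustrative, not the paper's dataset):

```python
def pso_q(t, qe, k2):
    """Pseudo-second-order uptake: q(t) = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe * qe * t / (1.0 + k2 * qe * t)

def fit_pso(ts, qs):
    """Fit the pseudo-second-order model via its standard linearisation
    t/q = 1/(k2*qe^2) + t/qe: least squares of y = t/q against x = t.
    Returns (qe, k2); slope = 1/qe and intercept = 1/(k2*qe^2)."""
    ys = [t / q for t, q in zip(ts, qs)]
    n = len(ts)
    mx, my = sum(ts) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(ts, ys))
             / sum((x - mx) ** 2 for x in ts))
    intercept = my - slope * mx
    qe = 1.0 / slope
    k2 = slope * slope / intercept
    return qe, k2

# synthetic data from assumed parameters (qe in mg/g, k2 in g/(mg*min))
ts = [5.0, 10.0, 20.0, 40.0, 60.0, 120.0, 240.0]
qs = [pso_q(t, qe=104.0, k2=0.002) for t in ts]
qe_fit, k2_fit = fit_pso(ts, qs)
```

Because the linearisation is exact for noiseless model data, the fit recovers the assumed qe and k2; on real batch data the same regression yields the high R² values quoted in the abstract.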

  1. Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite

    Science.gov (United States)

    Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-12-01

    Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption onto minerals (e.g. diatomite) is an important means of controlling aqueous environmental pollution. Thus, it is essential to understand the surface adsorptive behavior and mechanism. In this work, the apparent surface complexation reaction equilibrium constants of Pb(II) on the calcined diatomite and the distributions of Pb(II) surface species were investigated through modeling calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and at different ionic strengths (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) can be well described by Freundlich isotherm models. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using PEST 13.0 together with the PHREEQC 3.1.2 code, and there is good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite calculated by the PHREEQC 3.1.2 program indicates that the impurity cations (e.g. Al3+, Fe3+, etc.) in the diatomite play a leading role in Pb(II) adsorption, and that the formation of complexes together with additional electrostatic interaction is the main adsorption mechanism of Pb(II) on the diatomite under weakly acidic conditions.
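
The Freundlich description mentioned here is usually fitted through its log-log linearisation, log q_e = log K_f + (1/n)·log C_e. A sketch with invented parameters (the paper's actual fitting uses PEST with PHREEQC, not this shortcut):

```python
import math

def freundlich(ce, kf, n):
    """Freundlich isotherm: q_e = K_f * C_e**(1/n)."""
    return kf * ce ** (1.0 / n)

def fit_freundlich(ces, qes):
    """Least-squares fit of log q against log C; the slope is 1/n and
    the intercept is log K_f.  Returns (kf, n)."""
    xs = [math.log(c) for c in ces]
    ys = [math.log(q) for q in qes]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), 1.0 / slope

# synthetic equilibrium data from assumed parameters (illustrative units)
ces = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]
qes = [freundlich(c, kf=12.0, n=2.5) for c in ces]
kf_fit, n_fit = fit_freundlich(ces, qes)
```

On noiseless model data the regression recovers K_f and n exactly; with batch data the scatter of the log-log plot indicates how well the Freundlich form actually holds.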

  2. Multiscale geometric modeling of macromolecules II: Lagrangian representation

    Science.gov (United States)

    Feng, Xin; Xia, Kelin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Geometric modeling of biomolecules plays an essential role in the conceptualization of biomolecular structure, function, dynamics and transport. Qualitatively, geometric modeling offers a basis for molecular visualization, which is crucial for the understanding of molecular structure and interactions. Quantitatively, geometric modeling bridges the gap between molecular information, such as that from X-ray, NMR and cryo-EM, and theoretical/mathematical models, such as molecular dynamics, the Poisson-Boltzmann equation and the Nernst-Planck equation. In this work, we present a family of variational multiscale geometric models for macromolecular systems. Our models are able to combine multiresolution geometric modeling with multiscale electrostatic modeling in a unified variational framework. We discuss a suite of techniques for molecular surface generation, molecular surface meshing, molecular volumetric meshing, and the estimation of Hadwiger's functionals. Emphasis is given to the multiresolution representations of biomolecules and the associated multiscale electrostatic analyses as well as multiresolution curvature characterizations. The resulting fine-resolution representations of a biomolecular system enable the detailed analysis of solvent-solute interaction and ion channel dynamics, while our coarse-resolution representations highlight the compatibility of protein-ligand binding and the possibility of protein-protein interactions. PMID:23813599

  3. Simplicial models for trace spaces II: General higher dimensional automata

    DEFF Research Database (Denmark)

    Raussen, Martin

    of directed paths with given end points in a pre-cubical complex as the nerve of a particular category. The paper generalizes the results from Raussen [19, 18] in which we had to assume that the HDA in question arises from a semaphore model. In particular, important for applications, it allows for models...

  4. Validation of the CAR II model for Flanders, Belgium; Validatie van het model CAR II voor Vlaanderen

    Energy Technology Data Exchange (ETDEWEB)

    Marien, S.; Celis, D.; Roekens, E.

    2013-04-15

    In Flanders, Belgium, the CAR model (Calculation of Air pollution from Road traffic) for air quality along urban roads was recently extensively validated for NO2. More clarity has been gained about the quality and accuracy of this model.

  5. [Collaboration among health professionals (II). Usefulness of a model].

    Science.gov (United States)

    D'Amour, Danielle; San Martín Rodríguez, Leticia

    2006-09-01

    This second article presents a model that helps one better understand the process of collaboration in interprofessional teams and makes it possible to evaluate the quality of that collaboration. To this end, the authors first present a structural model of interprofessional collaboration, followed by a typology of collaboration derived from the functioning of that model. The model is composed of four interrelated dimensions, whose functioning gives rise to a typology of collaboration at three levels of intensity: collaboration in action, in construction, and in inertia. The model and the typology constitute a useful tool for managers and health professionals, since they help to better understand, manage and develop collaboration among different professionals, both within the same organization and across organizations.

  6. Internet accounting

    NARCIS (Netherlands)

    Pras, Aiko; van Beijnum, Bernhard J.F.; Sprenkels, Ron; Parhonyi, R.

    2001-01-01

    This article provides an introduction to Internet accounting and discusses the status of related work within the IETF and IRTF, as well as certain research projects. Internet accounting is different from accounting in POTS. To understand Internet accounting, it is important to answer questions like

  7. Carbonate-mediated Fe(II) oxidation in the air-cathode fuel cell: a kinetic model in terms of Fe(II) speciation.

    Science.gov (United States)

    Song, Wei; Zhai, Lin-Feng; Cui, Yu-Zhi; Sun, Min; Jiang, Yuan

    2013-06-06

    Due to the high redox activity of Fe(II) and its abundance in natural waters, the electro-oxidation of Fe(II) can be found in many air-cathode fuel cell systems, such as acid mine drainage fuel cells and sediment microbial fuel cells. To deeply understand these iron-related systems, it is essential to elucidate the kinetics and mechanisms involved in the electro-oxidation of Fe(II). This work aims to develop a kinetic model that adequately describes the electro-oxidation process of Fe(II) in air-cathode fuel cells. The speciation of Fe(II) is incorporated into the model, and contributions of individual Fe(II) species to the overall Fe(II) oxidation rate are quantitatively evaluated. The results show that the kinetic model can accurately predict the electro-oxidation rate of Fe(II) in air-cathode fuel cells. FeCO3, Fe(OH)2, and Fe(CO3)2(2-) are the most important species determining the electro-oxidation kinetics of Fe(II). The Fe(II) oxidation rate is primarily controlled by the oxidation of FeCO3 species at low pH, whereas at high pH Fe(OH)2 and Fe(CO3)2(2-) are the dominant species. Solution pH, carbonate concentration, and solution salinity are able to influence the electro-oxidation kinetics of Fe(II) through changing both distribution and kinetic activity of Fe(II) species.
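
    The speciation-based rate law described above can be sketched as a weighted sum of species contributions; the species fractions and rate constants below are illustrative assumptions, not values from the paper:

```python
# Overall rate modelled as r = sum_i k_i * alpha_i * [Fe(II)]_total, where
# alpha_i is the mole fraction of Fe(II) species i. All numbers are assumed.

def overall_rate(fe_total, fractions, rate_constants):
    """fe_total in mmol/L; fractions and rate_constants keyed by species."""
    return sum(rate_constants[s] * fractions[s] * fe_total for s in fractions)

fractions = {"FeCO3": 0.6, "Fe(OH)2": 0.1, "Fe(CO3)2^2-": 0.3}  # at some pH
ks        = {"FeCO3": 0.8, "Fe(OH)2": 2.0, "Fe(CO3)2^2-": 1.5}  # 1/h, assumed

r = overall_rate(1.0, fractions, ks)  # overall oxidation rate, mmol/(L*h)
```

Changing pH or carbonate concentration in such a model shifts the fractions, which is how solution chemistry feeds through to the overall kinetics.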

  8. Account of the effect of nuclear collision cascades in model of radiation damage of RPV steels

    International Nuclear Information System (INIS)

    Kevorkyan, Yu.R.; Nikolaev, Yu.A.

    1997-01-01

    A kinetic model is proposed for describing the effect of collision cascades in a model of radiation damage of reactor pressure vessel steels. It is a closed system of equations which, in the general case, can be solved only by numerical methods.

  9. Factors accounting for youth suicide attempt in Hong Kong: a model building.

    Science.gov (United States)

    Wan, Gloria W Y; Leung, Patrick W L

    2010-10-01

    This study aimed at proposing and testing a conceptual model of youth suicide attempt. We proposed a model that began with family factors such as a history of physical abuse and parental divorce/separation. Family relationship, presence of psychopathology, life stressors, and suicide ideation were postulated as mediators, leading to youth suicide attempt. The stepwise entry of the risk factors to a logistic regression model defined their proximity as related to suicide attempt. Path analysis further refined our proposed model of youth suicide attempt. Our originally proposed model was largely confirmed. The main revision was dropping parental divorce/separation as a risk factor in the model due to lack of significant contribution when examined alongside with other risk factors. This model was cross-validated by gender. This study moved research on youth suicide from identification of individual risk factors to model building, integrating separate findings of the past studies.

  10. Origin and structures of solar eruptions II: Magnetic modeling

    Science.gov (United States)

    Guo, Yang; Cheng, Xin; Ding, MingDe

    2017-07-01

    The topology and dynamics of the three-dimensional magnetic field in the solar atmosphere govern various solar eruptive phenomena and activities, such as flares, coronal mass ejections, and filaments/prominences. We have to observe and model the vector magnetic field to understand the structures and physical mechanisms of these solar activities. Vector magnetic fields on the photosphere are routinely observed via the polarized light, and inferred with the inversion of Stokes profiles. To analyze these vector magnetic fields, we first need to remove the 180° ambiguity of the transverse components and correct the projection effect. Then, after proper preprocessing, the vector magnetic field can serve as the boundary condition for force-free field modeling. The photospheric velocity field can also be derived from a time sequence of vector magnetic fields. The three-dimensional magnetic field can be derived and studied with theoretical force-free field models, numerical nonlinear force-free field models, magnetohydrostatic models, and magnetohydrodynamic models. Magnetic energy can be computed from three-dimensional magnetic field models or from a time series of vector magnetic fields. The magnetic topology is analyzed by pinpointing the positions of magnetic null points, bald patches, and quasi-separatrix layers. As a well-conserved physical quantity, magnetic helicity can be computed with various methods, such as the finite volume method, the discrete flux tube method, and the helicity flux integration method. This quantity serves as a promising parameter characterizing the activity level of solar active regions.

  11. Conceptual Modeling in the Time of the Revolution: Part II

    Science.gov (United States)

    Mylopoulos, John

    Conceptual Modeling was a marginal research topic at the very fringes of Computer Science in the 60s and 70s, when the discipline was dominated by topics focusing on programs, systems and hardware architectures. Over the years, however, the field has moved to centre stage and has come to claim a central role both in Computer Science research and practice in diverse areas, such as Software Engineering, Databases, Information Systems, the Semantic Web, Business Process Management, Service-Oriented Computing, Multi-Agent Systems, Knowledge Management, and more. The transformation was greatly aided by the adoption of standards in modeling languages (e.g., UML), and model-based methodologies (e.g., Model-Driven Architectures) by the Object Management Group (OMG) and other standards organizations. We briefly review the history of the field over the past 40 years, focusing on the evolution of key ideas. We then note some open challenges and report on-going research, covering topics such as the representation of variability in conceptual models, capturing model intentions, and models of laws.

  12. Account of External Cooling Medium Temperature while Modeling Thermal Processes in Power Oil-Immersed Transformers

    OpenAIRE

    Yu. A. Rounov; O. G. Shirokov; D. I. Zalizny; D. M. Los

    2004-01-01

    The paper proposes a thermal model of a power oil-immersed transformer as a system of four homogeneous bodies: winding, oil, core and cooling medium. On the basis of experimental data it is shown that such a model describes the actual thermal processes taking place in a transformer more precisely than the thermal model adopted in GOST 14209-85.

  13. Account of External Cooling Medium Temperature while Modeling Thermal Processes in Power Oil-Immersed Transformers

    Directory of Open Access Journals (Sweden)

    Yu. A. Rounov

    2004-01-01

    The paper proposes a thermal model of a power oil-immersed transformer as a system of four homogeneous bodies: winding, oil, core and cooling medium. On the basis of experimental data it is shown that such a model describes the actual thermal processes taking place in a transformer more precisely than the thermal model adopted in GOST 14209-85.

  14. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...

  15. Modelling of the application of near real time accountancy and process monitoring to plants

    International Nuclear Information System (INIS)

    Huddleston, J.; Stockwell, M.K.

    1983-09-01

    Many statistical tests have been proposed for the analysis of accountancy data from nuclear fuel reprocessing plants. The purpose of this programme was to assess the performance of these tests by applying them to data streams which simulate the information that would be available from a real plant. In addition, the problems of pre-processing the raw data from a plant were considered. A suite of programs to analyse the data has been written, including colour graphical output to allow effective interpretation of the results. The commercial software package VisiCalc has been evaluated and found to be effective for the rapid production of material balances from plant data. (author)

  16. Integrated Visualization Environment for Science Mission Modeling, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA is emphasizing the use of larger, more integrated models in conjunction with systems engineering tools and decision support systems. These tools place a...

  17. Supersymmetric standard model from the heterotic string (II)

    International Nuclear Information System (INIS)

    Buchmueller, W.; Hamaguchi, K.; Tokyo Univ.; Lebedev, O.; Ratz, M.

    2006-06-01

    We describe in detail a Z6 orbifold compactification of the heterotic E8 x E8 string which leads to the (supersymmetric) standard model gauge group and matter content. The quarks and leptons appear as three 16-plets of SO(10), two of which are localized at fixed points with local SO(10) symmetry. The model has supersymmetric vacua without exotics at low energies and is consistent with gauge coupling unification. Supersymmetry can be broken via gaugino condensation in the hidden sector. The model has a large vacuum degeneracy. Certain vacua with approximate B-L symmetry have attractive phenomenological features. The top quark Yukawa coupling arises from gauge interactions and is of the order of the gauge couplings. The other Yukawa couplings are suppressed by powers of standard model singlet fields, similarly to the Froggatt-Nielsen mechanism. (Orig.)

  18. Physics-Based Pneumatic Hammer Instability Model, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to develop a physics-based pneumatic hammer instability model that accurately predicts the stability of hydrostatic bearings...

  19. Fixed site neutralization model programmer's manual. Volume II

    International Nuclear Information System (INIS)

    Engi, D.; Chapman, L.D.; Judnick, W.; Blum, R.; Broegler, L.; Lenz, J.; Weinthraub, A.; Ballard, D.

    1979-12-01

    This report relates to protection of nuclear materials at nuclear facilities. This volume presents the source listings for the Fixed Site Neutralization Model and its supporting modules, the Plex Preprocessor and the Data Preprocessor

  20. EMPIRE-II 2.18, Comprehensive Nuclear Model Code, Nucleons, Ions Induced Cross-Sections

    International Nuclear Information System (INIS)

    Herman, Michal Wladyslaw; Panini, Gian Carlo

    2003-01-01

    1 - Description of program or function: EMPIRE-II is a flexible code for calculation of nuclear reactions in the frame of combined optical, Multi-step Direct (TUL), Multi-step Compound (NVWY) and statistical (Hauser-Feshbach) models. The incident particle can be a nucleon or any nucleus (Heavy Ion). Isomer ratios, residue production cross sections and emission spectra for neutrons, protons, alpha-particles, gamma-rays, and one type of Light Ion can be calculated. The energy range starts just above the resonance region for neutron induced reactions and extends up to several hundreds of MeV for the Heavy Ion induced reactions. IAEA1169/06: This version corrects an error in the Absoft compile procedure. 2 - Method of solution: For projectiles with A<5 EMPIRE calculates the fusion cross section using spherical optical model transmission coefficients. In the case of Heavy Ion induced reactions the fusion cross section can be determined using various approaches, including a simplified coupled channels method (code CCFUS). Pre-equilibrium emission is treated in terms of quantum-mechanical theories (TUL-MSD and NVWY-MSC). The MSC contribution to the gamma emission is taken into account. These calculations are followed by statistical decay with an arbitrary number of subsequent particle emissions. Gamma-ray competition is considered in detail for every decaying compound nucleus. Different options for level densities are available, including a dynamical approach with collective effects taken into account. EMPIRE contains the following third-party codes converted into subroutines: SCAT2 by O. Bersillon; ORION and TRISTAN by H. Lenske and H. Wolter; CCFUS by C.H. Dasso and S. Landowne; and BARMOM by A. Sierk. 3 - Restrictions on the complexity of the problem: The code can be easily adjusted to the problem by changing dimensions in the dimensions.h file. The actual limits are set by the available memory. In the current formulation up to 4 ejectiles plus gamma are allowed; this limit can be relaxed.

  1. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that the vine copula mixed model can improve on the trivariate generalized linear mixed model in fit to data, and it makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.

  2. Artificial neural network (ANN) approach for modeling Zn(II) adsorption in batch process

    Energy Technology Data Exchange (ETDEWEB)

    Yildiz, Sayiter [Engineering Faculty, Cumhuriyet University, Sivas (Turkey)]

    2017-09-15

    Artificial neural networks (ANN) were applied to predict the adsorption efficiency of peanut shells for the removal of Zn(II) ions from aqueous solutions. The effects of initial pH, Zn(II) concentration, temperature, contact duration and adsorbent dosage were determined in batch experiments. The sorption capacities of the sorbents were predicted with the aid of equilibrium and kinetic models. Zn(II) adsorption onto peanut shells was better described by the pseudo-second-order kinetic model for both initial pH and temperature. The highest R² value in the isotherm studies was obtained from the Freundlich isotherm for the inlet concentration and from the Temkin isotherm for the sorbent amount. The high R² values show that modeling the adsorption process with an ANN is a satisfactory approach. The experimental results and the results predicted by the ANN model were found to be highly compatible with each other.
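
    As a hedged sketch of the pseudo-second-order step mentioned above: the model q_t = k2·qe²·t / (1 + k2·qe·t) is linear in the form t/q_t = 1/(k2·qe²) + t/qe, so qe and k2 follow from a straight-line fit. The data and parameter values below are synthetic:

```python
import numpy as np

def fit_pso(t, qt):
    """Fit the linearized pseudo-second-order model; return (qe, k2)."""
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept  # intercept = 1/(k2 * qe**2)
    return qe, k2

# Synthetic kinetics generated from qe = 10 mg/g, k2 = 0.05 g/(mg*min)
t = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])       # min
qt = (0.05 * 10.0 ** 2 * t) / (1.0 + 0.05 * 10.0 * t)  # mg/g

qe, k2 = fit_pso(t, qt)
# Recovers qe = 10 and k2 = 0.05 on this noise-free data.
```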

  3. Artificial neural network (ANN) approach for modeling Zn(II) adsorption in batch process

    International Nuclear Information System (INIS)

    Yildiz, Sayiter

    2017-01-01

    Artificial neural networks (ANN) were applied to predict the adsorption efficiency of peanut shells for the removal of Zn(II) ions from aqueous solutions. The effects of initial pH, Zn(II) concentration, temperature, contact duration and adsorbent dosage were determined in batch experiments. The sorption capacities of the sorbents were predicted with the aid of equilibrium and kinetic models. Zn(II) adsorption onto peanut shells was better described by the pseudo-second-order kinetic model for both initial pH and temperature. The highest R² value in the isotherm studies was obtained from the Freundlich isotherm for the inlet concentration and from the Temkin isotherm for the sorbent amount. The high R² values show that modeling the adsorption process with an ANN is a satisfactory approach. The experimental results and the results predicted by the ANN model were found to be highly compatible with each other.

  4. THE CURRENT ACCOUNT DEFICIT AND THE FIXED EXCHANGE RATE. ADJUSTING MECHANISMS AND MODELS.

    Directory of Open Access Journals (Sweden)

    HATEGAN D.B. Anca

    2010-07-01

    The main purpose of this paper is to explain what measures can be taken to correct a trade deficit, and the pressure such measures place upon a country. International and national supply and demand conditions change rapidly, and if a country does not keep tight control over its deficit, many factors will affect its wellbeing. To reduce the external trade deficit, the government needs to resort to several techniques. The desired result is a balanced current account, and therefore the government is free to use measures such as fixing its exchange rate, reducing government spending, etc. We have shown that all these measures have a certain impact upon an economy, by allowing its exports to thrive and eliminating the danger of excessive imports, or vice versa. The main conclusion of our paper is that government intervention is permissible in order to maintain the balance of the current account.

  5. Accounting for spatial effects in land use regression for urban air pollution modeling.

    Science.gov (United States)

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty; and statistical methods for resolving these effects--e.g., spatially autoregressive (SAR) and geographically weighted regression (GWR) models--may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R² values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
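
    The GWR step can be illustrated as locally weighted least squares with a Gaussian distance-decay kernel. Everything below (site layout, covariate, bandwidth) is an illustrative assumption, not the study's data or model:

```python
# Minimal geographically weighted regression (GWR) sketch: an ordinary
# least-squares fit is localized by down-weighting observations far from the
# prediction point.
import numpy as np

def gwr_fit_at(u, coords, X, y, bandwidth):
    """Return local regression coefficients at location u (2-vector)."""
    d = np.linalg.norm(coords - u, axis=1)   # distances to observation sites
    w = np.exp(-(d / bandwidth) ** 2)        # Gaussian kernel weights
    XtW = X.T * w                            # X^T W, with W = diag(w)
    return np.linalg.solve(XtW @ X, XtW @ y) # solves X^T W X beta = X^T W y

rng = np.random.default_rng(42)
coords = rng.uniform(0.0, 10.0, size=(50, 2))  # hypothetical site locations
x1 = rng.uniform(0.0, 1.0, size=50)            # one covariate (e.g. a wind term)
X = np.column_stack([np.ones(50), x1])
y = 2.0 + 3.0 * x1                             # noise-free surface for clarity

beta = gwr_fit_at(np.array([5.0, 5.0]), coords, X, y, bandwidth=2.0)
# With a spatially constant, noise-free relationship, the local fit recovers
# the global coefficients (intercept 2.0, slope 3.0) at any location.
```

In a real GWR the coefficients vary across locations, which is what lets the model absorb spatial non-stationarity.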

  6. Programming Models for Three-Dimensional Hydrodynamics on the CM-5 (Part II)

    International Nuclear Information System (INIS)

    Amala, P.A.K.; Rodrigue, G.H.

    1994-01-01

    This is a two-part presentation of a timing study on the Thinking Machines Corp. CM-5 computer. Part II, given in this study, covers domain-decomposition and message-passing models. Part I described computational problems using a SIMD model and Connection Machine Fortran (CMF).

  7. Discriminating neutrino mass models using Type-II see-saw formula

    Indian Academy of Sciences (India)

    though a fuller analysis needs the full matrix form when all terms are present. This is followed by the normal hierarchical model (Type [III]) and inverted hierarchical model with opposite CP phase (Type [IIB]). γ ≃ 10−2 for both of them. Our main results on neutrino masses and mixings in Type-II see-saw formula are presented ...

  8. Shunted-Josephson-junction model. II. The nonautonomous case

    DEFF Research Database (Denmark)

    Belykh, V. N.; Pedersen, Niels Falsig; Sørensen, O. H.

    1977-01-01

    The shunted-Josephson-junction model with a monochromatic ac current drive is discussed employing the qualitative methods of the theory of nonlinear oscillations. As in the preceding paper dealing with the autonomous junction, the model includes a phase-dependent conductance and a shunt capacitance. The mathematical discussion makes use of the phase-space representation of the solutions to the differential equation. The behavior of the trajectories in phase space is described for different characteristic regions in parameter space, and the associated features of the junction IV curve to be expected are pointed out. The main objective is to provide a qualitative understanding of the junction behavior, to clarify which kinds of properties may be derived from the shunted-junction model, and to specify the relative arrangement of the important domains in the parameter-space decomposition.
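
    For orientation, the autonomous (dc-driven) limit of such a shunted-junction model can be integrated numerically in normalized RCSJ form. This sketch uses a constant conductance rather than the paper's phase-dependent one, and all parameter values are illustrative:

```python
# Normalized RCSJ-type equation: beta_c * phi'' + phi' + sin(phi) = i_dc,
# integrated with semi-implicit Euler. A constant (unit) conductance replaces
# the phase-dependent one of the paper; parameters are illustrative only.
import math

def average_voltage(i_dc, beta_c=1.0, dt=1e-3, steps=200_000):
    """Return the time-averaged <dphi/dt> (normalized dc voltage)."""
    phi, v = 0.0, 0.0
    v_sum, count = 0.0, 0
    for n in range(steps):
        a = (i_dc - v - math.sin(phi)) / beta_c  # phi'' from the equation
        v += a * dt
        phi += v * dt
        if n > steps // 2:                       # average after the transient
            v_sum += v
            count += 1
    return v_sum / count

# For i_dc > 1 no static solution exists (sin(phi) <= 1), so the phase runs
# and the junction sits on the finite-voltage branch of the IV curve.
v_avg = average_voltage(i_dc=3.0)
```

Sweeping i_dc in such a sketch traces out the IV curve whose characteristic regions the paper classifies in parameter space.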

  9. Production, decay, and mixing models of the iota meson. II

    International Nuclear Information System (INIS)

    Palmer, W.F.; Pinsky, S.S.

    1987-01-01

    A five-channel mixing model for the ground and radially excited isoscalar pseudoscalar states and a glueball is presented. The model extends previous work by including two-body unitary corrections, following the technique of Toernqvist. The unitary corrections include contributions from three classes of two-body intermediate states: pseudoscalar-vector, pseudoscalar-scalar, and vector-vector states. All necessary three-body couplings are extracted from decay data. The solution of the mixing model provides information about the bare mass of the glueball and the fundamental quark-glue coupling. The solution also gives the composition of the wave function of the physical states in terms of the bare quark and glue states. Finally, it is shown how the coupling constants extracted from decay data can be used to calculate the decay rates of the five physical states to all two-body channels

  10. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU). The objective of the tests was to investigate the combined influence of the pile diameter to water depth ratio and the wave height to water depth ratio on wave run-up on piles. The measurements are to be used to design access platforms on piles. The model tests include: calibration of regular and irregular sea states at the location of the pile (without the structure in place), and measurement of wave run-up for the calibrated sea states on the front side of the pile (0 to 90 degrees). These tests were conducted at Aalborg University from 9 October 2006 to 8 November 2006. Unless otherwise mentioned, all values given in this report are in model scale.

  11. GSTARS computer models and their applications, Part II: Applications

    Science.gov (United States)

    Simoes, F.J.M.; Yang, C.T.

    2008-01-01

    In part 1 of this two-paper series, a brief summary of the basic concepts and theories used in developing the Generalized Stream Tube model for Alluvial River Simulation (GSTARS) computer models was presented. Part 2 provides examples that illustrate some of the capabilities of the GSTARS models and how they can be applied to solve a wide range of river and reservoir sedimentation problems. Laboratory and field case studies are used and the examples show representative applications of the earlier and of the more recent versions of GSTARS. Some of the more recent capabilities implemented in GSTARS3, one of the latest versions of the series, are also discussed here with more detail. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  12. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.

    2015-01-01

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: identification of influential phenomena; identification of the associated physical models and parameters, depending on the code used; and quantification of the variation range of the identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters was set up in the Specifications of Phase II of the PREMIUM benchmark, and a set of quantitative criteria was proposed for the identification of influential input parameters and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants except one predict too fast a quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numeric origins. The criteria adopted for the identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications as is, while some modified the quantitative thresholds.

  13. An extended two-lane car-following model accounting for inter-vehicle communication

    Science.gov (United States)

    Ou, Hui; Tang, Tie-Qiao

    2018-04-01

    In this paper, we develop a novel car-following model with inter-vehicle communication to explore each vehicle's movement in a two-lane traffic system when an incident occurs on one lane. The numerical results show that the proposed model properly describes each vehicle's motion when an incident occurs: no collision occurs, whereas the classical full velocity difference (FVD) model produces collisions on each lane, which shows that the proposed model is more reasonable. These results can help drivers adjust their driving behavior appropriately when an incident occurs in a two-lane traffic system.
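
    For reference, the classical FVD model mentioned above computes each follower's acceleration from its headway and relative speed, a = kappa·(V(dx) - v) + lambda·dv. The optimal-velocity function and parameter values below are common illustrative choices, not the paper's calibration:

```python
import math

def fvd_acceleration(dx, v, dv, kappa=0.41, lam=0.5):
    """FVD acceleration: a = kappa*(V(dx) - v) + lam*dv.

    dx: headway to the leader (m), v: own speed (m/s),
    dv: leader speed minus own speed (m/s). Parameters are assumed.
    """
    # A tanh-shaped optimal-velocity function (illustrative parameters).
    v_opt = 0.5 * 33.6 * (math.tanh(0.086 * (dx - 25.0)) + 0.913)
    return kappa * (v_opt - v) + lam * dv

# At the speed matching the optimal velocity for the current headway, with no
# speed difference, the follower neither accelerates nor brakes.
v_eq = 0.5 * 33.6 * (math.tanh(0.086 * (30.0 - 25.0)) + 0.913)
a_eq = fvd_acceleration(30.0, v_eq, 0.0)     # ~0: steady following
a_brake = fvd_acceleration(10.0, 20.0, 0.0)  # negative: too close, too fast
```

Because the acceleration depends only on the immediate leader, a sudden incident ahead reaches a follower late; this is the limitation that inter-vehicle communication extensions aim to remove.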

  14. Modelling reverse characteristics of power LEDs with thermal phenomena taken into account

    International Nuclear Information System (INIS)

    Ptak, Przemysław; Górecki, Krzysztof

    2016-01-01

    This paper concerns the modelling of power LED characteristics, with particular reference to thermal phenomena. Special attention is paid to modelling the characteristics of the circuit protecting the considered device against excessive reverse voltage, and to describing the influence of temperature on optical power. The network form of the developed model is presented, and results of the experimental verification of this model for selected diodes operating under different cooling conditions are described. Very good agreement between the calculated and measured characteristics is obtained.

  15. Accounting for correlated observations in an age-based state-space stock assessment model

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders

    2016-01-01

    Fish stock assessment models often rely on size- or age-specific observations that are assumed to be statistically independent of each other. In reality, these observations are not raw observations, but rather estimates from a catch-standardization model or similar summary statistics base... the independence assumption is rejected. Less fluctuating estimates of the fishing mortality are obtained due to a reduced process error. The improved model does not suffer from correlated residuals, unlike the independent model, and the variance of forecasts is decreased.

  16. Contact Modelling in Resistance Welding, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

    Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...

  17. Mathematical Model Taking into Account Nonlocal Effects of Plasmonic Structures on the Basis of the Discrete Source Method

    Science.gov (United States)

    Eremin, Yu. A.; Sveshnikov, A. G.

    2018-04-01

    The discrete source method is used to develop and implement a mathematical model for solving the problem of scattering of electromagnetic waves by a three-dimensional plasmonic scatterer with nonlocal effects taken into account. Numerical results are presented that demonstrate the features of the scattering properties of plasmonic particles, with allowance for nonlocal effects, as functions of the direction and polarization of the incident wave.

  18. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    OpenAIRE

    Degeling, Koen; IJzerman, Maarten J.; Koopman, Miriam; Koffijberg, Hendrik

    2017-01-01

    Background Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by ...

  19. The Adsorption of Cd(II) on Manganese Oxide Investigated by Batch and Modeling Techniques.

    Science.gov (United States)

    Huang, Xiaoming; Chen, Tianhu; Zou, Xuehua; Zhu, Mulan; Chen, Dong; Pan, Min

    2017-09-28

    Manganese (Mn) oxide is a ubiquitous metal oxide in sub-environments. The adsorption of Cd(II) on Mn oxide as a function of adsorption time, pH, ionic strength, temperature, and initial Cd(II) concentration was investigated by batch techniques. The adsorption kinetics showed that the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by a pseudo-second-order kinetic model with high correlation coefficients (R² > 0.999). The adsorption of Cd(II) on Mn oxide significantly decreased with increasing ionic strength at pH < 6.0, whereas adsorption was independent of ionic strength at pH > 6.0, which indicated that outer-sphere and inner-sphere surface complexation dominated the adsorption of Cd(II) on Mn oxide at pH < 6.0 and pH > 6.0, respectively. The maximum adsorption capacity of Mn oxide for Cd(II) calculated from the Langmuir model was 104.17 mg/g at pH 6.0 and 298 K. The thermodynamic parameters showed that the adsorption of Cd(II) on Mn oxide was an endothermic and spontaneous process. According to the results of surface complexation modeling, the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by ion exchange sites (X₂Cd) at low pH and inner-sphere surface complexation sites (SOCd⁺ and (SO)₂CdOH⁻ species) at high pH. The findings presented herein play an important role in understanding the fate and transport of heavy metals at the water-mineral interface.

  20. The Adsorption of Cd(II) on Manganese Oxide Investigated by Batch and Modeling Techniques

    Science.gov (United States)

    Huang, Xiaoming; Chen, Tianhu; Zou, Xuehua; Zhu, Mulan; Chen, Dong

    2017-01-01

    Manganese (Mn) oxide is a ubiquitous metal oxide in sub-environments. The adsorption of Cd(II) on Mn oxide as a function of adsorption time, pH, ionic strength, temperature, and initial Cd(II) concentration was investigated by batch techniques. The adsorption kinetics showed that the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by a pseudo-second-order kinetic model with high correlation coefficients (R2 > 0.999). The adsorption of Cd(II) on Mn oxide significantly decreased with increasing ionic strength at pH < 6.0, whereas adsorption was independent of ionic strength at pH > 6.0, which indicated that outer-sphere and inner-sphere surface complexation dominated the adsorption of Cd(II) on Mn oxide at pH < 6.0 and pH > 6.0, respectively. The maximum adsorption capacity of Mn oxide for Cd(II) calculated from the Langmuir model was 104.17 mg/g at pH 6.0 and 298 K. The thermodynamic parameters showed that the adsorption of Cd(II) on Mn oxide was an endothermic and spontaneous process. According to the results of surface complexation modeling, the adsorption of Cd(II) on Mn oxide can be satisfactorily simulated by ion exchange sites (X2Cd) at low pH and inner-sphere surface complexation sites (SOCd+ and (SO)2CdOH− species) at high pH. The findings presented herein play an important role in understanding the fate and transport of heavy metals at the water–mineral interface. PMID:28956849
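The pseudo-second-order kinetic model and Langmuir isotherm named in the two records above can be illustrated with a short sketch. This is not the authors' code or data: the parameter values, the synthetic kinetic curve, and the `fit_pseudo_second_order` helper are all hypothetical, and the fit uses the standard linearised form t/q_t = 1/(k2·qe²) + t/qe.

```python
# Hedged illustration (synthetic data, assumed parameter values) of fitting
# the pseudo-second-order kinetic model and evaluating a Langmuir isotherm.
import numpy as np

def fit_pseudo_second_order(t, qt):
    """Return (qe, k2) from a linear fit of t/qt = 1/(k2*qe^2) + t/qe."""
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = slope**2 / intercept          # since intercept = 1/(k2*qe^2)
    return qe, k2

def langmuir(ce, qmax, b):
    """Langmuir isotherm: adsorbed amount vs. equilibrium concentration."""
    return qmax * b * ce / (1.0 + b * ce)

# Synthetic kinetic data generated from known parameters, then recovered.
qe_true, k2_true = 100.0, 0.005
t = np.linspace(1, 300, 50)
qt = k2_true * qe_true**2 * t / (1 + k2_true * qe_true * t)  # exact PSO curve
qe_fit, k2_fit = fit_pseudo_second_order(t, qt)
print(round(qe_fit, 1), round(k2_fit, 4))  # recovers the true (qe, k2)
```

Because the integrated pseudo-second-order solution is exactly linear in the t/q_t coordinates, the fit recovers the generating parameters; with real batch data the quality of this fit is what the reported R² > 0.999 summarises.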

  1. Homology modeling and docking of AahII-Nanobody complexes reveal the epitope binding site on AahII scorpion toxin.

    Science.gov (United States)

    Ksouri, Ayoub; Ghedira, Kais; Ben Abderrazek, Rahma; Shankar, B A Gowri; Benkahla, Alia; Bishop, Ozlem Tastan; Bouhaouala-Zahar, Balkiss

    2018-02-19

    Scorpion envenoming and its treatment is a public health problem in many parts of the world, owing to highly toxic venom polypeptides that diffuse rapidly within the body of severely envenomed victims. Recently, 38 AahII-specific Nanobody sequences (Nbs) were retrieved, from which the ability of the NbAahII10 nanobody candidate to neutralize the most poisonous venom compound, namely AahII, which acts on sodium channels, was established. Herein, a structural computational approach is conducted to elucidate the Nb-AahII interactions that support the biological characteristics, using Nb multiple sequence alignment (MSA) followed by modeling and molecular docking investigations (RosettaAntibody and ZDOCK software tools). Sequence and structural analysis showed two dissimilar residues of the NbAahII10 CDR1 (Tyr27 and Tyr29) and an inserted polar residue, Ser30, that appear to play an important role. Indeed, the CDR3 region of NbAahII10 is characterized by a specific Met104 and two negatively charged residues, Asp115 and Asp117. Complex dockings reveal that NbAahII17 and NbAahII38 share one common binding site on the surface of the AahII toxin, divergent from that of NbAahII10. At least a couple of NbAahII10 - AahII residue interactions (Gln38 - Asn44 and Arg62, His64, respectively) are mainly involved in the toxic AahII binding site. Altogether, this study gives valuable insights into the design and development of the next generation of antivenoms. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Mathematical model of rod oscillations with account of material relaxation behaviour

    Science.gov (United States)

    Kudinov, I. V.; Kudinov, V. A.; Eremin, A. V.; Zhukov, V. V.

    2018-03-01

    Taking into account the bounded velocity of propagation of strains and deformations in Hooke's law, the authors have obtained the differential equation of damped rod oscillations, which includes the first and third time derivatives of displacement as well as a mixed derivative (with respect to the space and time variables). Study of its exact analytical solution, found by means of separation of variables, has shown that recovery of the rod after a disturbance is accompanied by low-amplitude damped oscillations that occur at the start time and only within the range of positive displacement values. The oscillation amplitude decreases as the relaxation factor increases. In the limit, and for any sufficiently large value of the relaxation factor, the rod recovers with virtually no oscillation.

  3. Open Business Models (Latin America) - Phase II | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    English · Français ... Open business is a different way of doing business related to information, knowledge and culture, in which intellectual ... Open business models include, for example, making content or services available free of charge and ...

  4. Modeling multibody systems with uncertainties. Part II: Numerical applications

    International Nuclear Information System (INIS)

    Sandu, Corina; Sandu, Adrian; Ahmadian, Mehdi

    2006-01-01

    This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects'. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties.

  5. Modeling multibody systems with uncertainties. Part II: Numerical applications

    Energy Technology Data Exchange (ETDEWEB)

    Sandu, Corina, E-mail: csandu@vt.edu; Sandu, Adrian; Ahmadian, Mehdi [Virginia Polytechnic Institute and State University, Mechanical Engineering Department (United States)

    2006-04-15

    This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects'. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties.
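The comparison drawn in the two records above, polynomial chaos versus Monte Carlo, can be sketched in a few lines for a scalar toy problem. The sketch below is a minimal non-intrusive (collocation) illustration with a made-up response function and a single Gaussian uncertain parameter; it is not the quarter-car model of the paper.

```python
# Hedged sketch: non-intrusive polynomial-chaos collocation vs. Monte Carlo
# for the mean and variance of a smooth response of one Gaussian parameter.
import numpy as np

def response(xi):
    """Toy model response (illustrative), a smooth map of xi ~ N(0, 1)."""
    return np.exp(0.3 * xi)

# Collocation: Gauss-Hermite quadrature in the standard normal measure.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
weights = weights / np.sqrt(2 * np.pi)     # normalize to a probability measure
mean_pc = np.sum(weights * response(nodes))
var_pc = np.sum(weights * response(nodes)**2) - mean_pc**2

# Monte Carlo needs many more model evaluations for the same statistics.
rng = np.random.default_rng(0)
samples = response(rng.standard_normal(200_000))
mean_mc, var_mc = samples.mean(), samples.var()

# The exact mean of exp(0.3*xi) is exp(0.045); 8 quadrature nodes hit it
# to near machine precision, while Monte Carlo carries sampling noise.
print(round(mean_pc, 6))
```

The efficiency claim in the abstracts corresponds to the node count here: eight deterministic model evaluations match the accuracy that Monte Carlo approaches only with hundreds of thousands of samples.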

  6. Bianchi Type-II inflationary models with constant deceleration ...

    Indian Academy of Sciences (India)

    ginning of the 1980s, nowadays receives a great deal of attention. Guth [1] proposed inflationary model in the context of grand unified theory (GUT), which has been accepted soon as the ..... where m1(> 0) is a constant of integration and n = 3. .... interesting feature of the present solution is that it is possible to exit from expo-.

  7. Demonstrations in Solute Transport Using Dyes: Part II. Modeling.

    Science.gov (United States)

    Butters, Greg; Bandaranayake, Wije

    1993-01-01

    A solution of the convection-dispersion equation is used to describe the solute breakthrough curves generated in the demonstrations in the companion paper. Estimation of the best fit model parameters (solute velocity, dispersion, and retardation) is illustrated using the method of moments for an example data set. (Author/MDH)
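The moment method mentioned in this record can be illustrated with a short sketch applied to a synthetic breakthrough curve. The column length, time grid, and Gaussian pulse below are made-up illustrative values, not the companion paper's data, and the dispersion formula is the standard small-dispersion approximation.

```python
# Hedged sketch of estimating solute velocity and dispersion from temporal
# moments of a breakthrough curve c(t); all values are illustrative.
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def temporal_moments(t, c):
    """Zeroth moment, mean arrival time, and temporal variance of c(t)."""
    m0 = trapezoid(c, t)
    mean_t = trapezoid(t * c, t) / m0
    var_t = trapezoid((t - mean_t) ** 2 * c, t) / m0
    return m0, mean_t, var_t

L = 0.5                                    # travel distance (m), illustrative
t = np.linspace(0.01, 10.0, 2000)          # time (h)
c = np.exp(-0.5 * ((t - 4.0) / 0.6) ** 2)  # synthetic breakthrough pulse
_, mean_t, var_t = temporal_moments(t, c)
velocity = L / mean_t                      # solute velocity estimate (m/h)
# Small-dispersion approximation: var_t / mean_t^2 ~ 2 D / (v L)
dispersion = velocity ** 3 * var_t / (2.0 * L)
print(round(velocity, 3))                  # recovers L / 4 h = 0.125 m/h
```

With measured dye concentrations in place of the synthetic pulse, the same two moments give the best-fit velocity and dispersion; retardation follows from comparing the mean arrival time against that of a conservative tracer.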

  8. Multilayer piezoelectric transducer models combined with Field II

    DEFF Research Database (Denmark)

    Bæk, David; Willatzen, Morten; Jensen, Jørgen Arendt

    2012-01-01

    One-dimensional and three-dimensional axisymmetric transducer models have been compared to determine their feasibility for predicting the volt-to-surface impulse response of a circular Pz27 piezoceramic disc. The ceramic is assumed mounted with silver electrodes, bounded at the outer circular boundary...

  9. Simulation of reactive nanolaminates using reduced models: II. Normal propagation

    Energy Technology Data Exchange (ETDEWEB)

    Salloum, Maher; Knio, Omar M. [Department of Mechanical Engineering, The Johns Hopkins University, Baltimore, MD 21218-2686 (United States)

    2010-03-15

    Transient normal flame propagation in reactive Ni/Al multilayers is analyzed computationally. Two approaches are implemented, based on generalization of earlier methodology developed for axial propagation, and on extension of the model reduction formalism introduced in Part I. In both cases, the formulation accommodates non-uniform layering as well as the presence of inert layers. The equations of motion for the reactive system are integrated using a specially tailored integration scheme that combines extended-stability Runge-Kutta-Chebyshev (RKC) integration of diffusion terms with exact treatment of the chemical source term. The detailed and reduced models are first applied to the analysis of self-propagating fronts in uniformly-layered materials. Results indicate that both the front velocities and the ignition threshold are comparable for normal and axial propagation. Attention is then focused on analyzing the effect of a gap composed of inert material on reaction propagation. In particular, the impacts of gap width and thermal conductivity are briefly addressed. Finally, an example is considered illustrating reaction propagation in reactive composites combining regions corresponding to two bilayer widths. This setup is used to analyze the effect of the layering frequency on the velocity of the corresponding reaction fronts. In all cases considered, good agreement is observed between the predictions of the detailed model and the reduced model, which provides further support for adoption of the latter. (author)
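The integration strategy described in this record, stabilized explicit treatment of diffusion combined with exact treatment of the source term, can be illustrated with a toy splitting scheme. The sketch below uses plain explicit Euler in place of the authors' extended-stability RKC method, and a linear decay source so that the "exact" substep is a simple exponential; all values are illustrative.

```python
# Hedged toy splitting for u_t = D u_xx - k u on [0, 1] with u = 0 at the
# boundaries: explicit diffusion step + exact integration of the source.
import math

D, k = 1.0, 5.0            # diffusivity and linear reaction rate (assumed)
n, dt, steps = 64, 1e-5, 1000
dx = 1.0 / n
u = [math.sin(math.pi * i * dx) for i in range(n + 1)]  # one Fourier mode

for _ in range(steps):
    # explicit diffusion step (interior nodes; boundaries stay at zero)
    lap = [0.0] * (n + 1)
    for i in range(1, n):
        lap[i] = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
    u = [ui + dt * D * li for ui, li in zip(u, lap)]
    # exact treatment of the source term du/dt = -k u over one step
    decay = math.exp(-k * dt)
    u = [ui * decay for ui in u]

# For a single sine mode the exact solution decays at rate D*pi^2 + k.
t = dt * steps
exact_mid = math.sin(math.pi * 0.5) * math.exp(-(D * math.pi**2 + k) * t)
print(abs(u[n // 2] - exact_mid) < 1e-3)
```

For a linear, commuting source term the splitting is exact; the practical gain in the paper's setting comes from treating the stiff, nonlinear chemical source exactly while the RKC scheme relaxes the diffusion stability limit that constrains the plain Euler step used here.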

  10. Storm Water Management Model Reference Manual Volume II – Hydraulics

    Science.gov (United States)

    SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...

  11. PULSATING REVERSE DETONATION MODELS OF TYPE Ia SUPERNOVAE. II. EXPLOSION

    International Nuclear Information System (INIS)

    Bravo, Eduardo; Garcia-Senz, Domingo; Cabezon, Ruben M.; DomInguez, Inmaculada

    2009-01-01

    Observational evidence points to a common explosion mechanism of Type Ia supernovae based on a delayed detonation of a white dwarf (WD). However, all attempts to find a convincing ignition mechanism based on a delayed detonation in a destabilized, expanding white dwarf have been elusive so far. One of the possibilities that has been invoked is that an inefficient deflagration leads to pulsation of a Chandrasekhar-mass WD, followed by formation of an accretion shock that confines a carbon-oxygen rich core while transforming the kinetic energy of the collapsing halo into thermal energy of the core, until an inward-moving detonation is formed. This chain of events has been termed Pulsating Reverse Detonation (PRD). In this work, we present three-dimensional numerical simulations of PRD models from the time of detonation initiation up to homologous expansion. Different models, characterized by the amount of mass burned during the deflagration phase, M_defl, give explosions spanning a range of kinetic energies, K ∼ (1.0-1.2) × 10⁵¹ erg, and ⁵⁶Ni masses, M(⁵⁶Ni) ∼ 0.6-0.8 M_sun, which are compatible with what is expected for typical Type Ia supernovae. Spectra and light curves of angle-averaged, spherically symmetric versions of the PRD models are discussed. Type Ia supernova spectra pose the most stringent requirements on PRD models.

  12. A Social Audit Model for Agro-biotechnology Initiatives in Developing Countries: Accounting for Ethical, Social, Cultural, and Commercialization Issues

    Directory of Open Access Journals (Sweden)

    Obidimma Ezezika

    2009-10-01

    There is skepticism about and resistance to innovations associated with agro-biotechnology projects, leading to the possibility of failure. The source of the skepticism is complex, but partly traceable to how local communities view genetically engineered crops, public perception of the technology's implications, and views on the role of the private sector in public health and agriculture, especially in the developing world. We posit that a governance and management model in which ethical, social, cultural, and commercialization issues are accounted for and addressed is important in mitigating the risk of project failure and improving the appropriate adoption of agro-biotechnology in sub-Saharan Africa. We introduce a social audit model, which we term Ethical, Social, Cultural and Commercialization (ESC2) auditing, and which we developed based on feedback from a number of stakeholders. We lay the foundation for its importance in agro-biotechnology development projects and show how the model can be applied to projects run by public-private partnerships. We argue that the implementation of the audit model can help to build public trust through facilitating project accountability and transparency. The model also provides evidence on how ESC2 issues are perceived by various stakeholders, which enables project managers to effectively monitor and improve project performance. Although this model was specifically designed for agro-biotechnology initiatives, we show how it can also be applied to other development projects.

  13. Assessing and accounting for time heterogeneity in stochastic actor oriented models

    NARCIS (Netherlands)

    Lospinoso, Joshua A.; Schweinberger, Michael; Snijders, Tom A. B.; Ripley, Ruth M.

    This paper explores time heterogeneity in stochastic actor oriented models (SAOM) proposed by Snijders (Sociological methodology. Blackwell, Boston, pp 361-395, 2001) which are meant to study the evolution of networks. SAOMs model social networks as directed graphs with nodes representing people,

  14. Bioeconomic Modelling of Wetlands and Waterfowl in Western Canada: Accounting for Amenity Values

    NARCIS (Netherlands)

    Kooten, van G.C.; Whitey, P.; Wong, L.

    2011-01-01

    This study reexamines and updates an original bioeconomic model of optimal duck harvest and wetland retention by Hammack and Brown (1974, Waterfowl and Wetlands: Toward Bioeconomic Analysis. Washington, DC: Resources for the Future). It then extends the model to include the nonmarket (in situ) value

  15. Supportive Accountability: A model for providing human support for internet and ehealth interventions

    NARCIS (Netherlands)

    Mohr, D.C.; Cuijpers, P.; Lehman, K.A.

    2011-01-01

    The effectiveness of and adherence to eHealth interventions is enhanced by human support. However, human support has largely not been manualized and has usually not been guided by clear models. The objective of this paper is to develop a clear theoretical model, based on relevant empirical

  16. An individual-based model of Zebrafish population dynamics accounting for energy dynamics

    DEFF Research Database (Denmark)

    Beaudouin, Remy; Goussen, Benoit; Piccini, Benjamin

    2015-01-01

    Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget for individual zebrafish (DEB model...

  17. MODELING OF THE DISTRIBUTION OF IMPURITIES IN THE ATMOSPHERE TAKING TERRAIN INTO ACCOUNT

    Directory of Open Access Journals (Sweden)

    P. B. Mashyhina

    2009-03-01

    A 2D numerical model to simulate pollutant dispersion over complex terrain is proposed. The model is based on the equation of potential flow and the equation of admixture transfer. Results of a numerical experiment are presented.

  18. Accounting for perception in random regret choice models: Weberian and generalized Weberian specifications

    NARCIS (Netherlands)

    Jang, S.; Rasouli, S.; Timmermans, H.J.P.

    2016-01-01

    Recently, regret-based choice models have been introduced in the travel behavior research community as an alternative to expected/random utility models. The fundamental proposition underlying regret theory is that individuals minimize the amount of regret they (are expected to) experience when

  19. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization there is increased mobility, leading to a higher amount of traffic-related activity on a global scale. ...

  20. A constitutive model accounting for strain ageing effects on work-hardening. Application to a C-Mn steel

    Science.gov (United States)

    Ren, Sicong; Mazière, Matthieu; Forest, Samuel; Morgeneyer, Thilo F.; Rousselier, Gilles

    2017-12-01

    One of the most successful models for describing the Portevin-Le Chatelier effect in engineering applications is the Kubin-Estrin-McCormick model (KEMC). In the present work, the influence of dynamic strain ageing on dynamic recovery due to dislocation annihilation is introduced in order to improve the KEMC model. This modification accounts for additional strain hardening rate due to limited dislocation annihilation by the diffusion of solute atoms and dislocation pinning at low strain rate and/or high temperature. The parameters associated with this novel formulation are identified based on tensile tests for a C-Mn steel at seven temperatures ranging from 20 °C to 350 °C. The validity of the model and the improvement compared to existing models are tested using 2D and 3D finite element simulations of the Portevin-Le Chatelier effect in tension.

  1. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    International Nuclear Information System (INIS)

    Branlard, E; Gaunaa, M; Machefaux, E

    2014-01-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum (BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw models. The model and its implementation are presented. Data from the MEXICO experiment are used as a basis for validation. Three tools using the same 2D airfoil coefficient data are compared: a BEM code, an Actuator-Line code and a vortex code. The vortex code is further used to validate the results from the newly implemented BEM yaw model. Significant improvements are obtained for the prediction of loads and induced velocities. Further relaxation of the main assumptions of the model is briefly presented and discussed.

  2. Solving seismological problems using sgraph program: II-waveform modeling

    International Nuclear Information System (INIS)

    Abdelwahed, Mohamed F.

    2012-01-01

    One of the seismological programs used to manipulate seismic data is the SGRAPH program. It consists of integrated tools to perform advanced seismological techniques. SGRAPH is considered a new system for maintaining and analyzing seismic waveform data in a stand-alone, Windows-based application that manipulates a wide range of data formats. SGRAPH was described in detail in the first part of this paper. In this part, I discuss the advanced techniques included in the program and its applications in seismology. Because of the numerous tools included in the program, SGRAPH alone is sufficient to perform basic waveform analysis and to solve advanced seismological problems. The first part of this paper presented the application of source parameter estimation and hypocentral location. Here, I discuss the SGRAPH waveform modeling tools. This paper exhibits examples of how to apply the SGRAPH tools to perform waveform modeling for estimating the focal mechanism and crustal structure of local earthquakes.

  3. Unified model of current-hadronic interactions. II

    International Nuclear Information System (INIS)

    Moffat, J.W.; Wright, A.C.D.

    1975-01-01

    An analytic model of current-hadronic interactions is used to make predictions which are compared with recent data for vector-meson electroproduction and for the spin density matrix of photoproduced ρ⁰ mesons. The ρ⁰ and ω electroproduction cross sections are predicted to behave differently as the mass of the virtual photon varies; the diffraction peak broadens with increasing -q² at fixed ν and narrows with increasing energy. The predicted ρ⁰ density matrix elements do not possess the approximate s-channel helicity conservation seen experimentally. The model is continued to the inclusive electron-positron annihilation region, where parameter-free predictions are given for the inclusive process e⁺ + e⁻ → p + hadrons. The annihilation structure functions are found to have nontrivial scale-invariance limits. By using total cross-section data for e⁺e⁻ annihilation into hadrons, we predict the mean multiplicity for the production of nucleons.

  4. MODELING OF TARGETED DRUG DELIVERY PART II. MULTIPLE DRUG ADMINISTRATION

    Directory of Open Access Journals (Sweden)

    A. V. Zaborovskiy

    2017-01-01

    In oncology practice, despite significant advances in early cancer detection, surgery, radiotherapy, laser therapy, targeted therapy, etc., chemotherapy is unlikely to lose its relevance in the near future. In this context, the development of new antitumor agents is one of the most important problems of cancer research. In spite of the importance of searching for new compounds with antitumor activity, the possibilities of the "old" agents have not been fully exhausted. Targeted delivery of antitumor agents can give them a "second life". When developing new targeted drugs and introducing them into clinical practice, the change in their pharmacodynamics and pharmacokinetics plays a special role. The paper describes a pharmacokinetic model of targeted drug delivery. The conditions under which it is meaningful to search for a delivery vehicle for the active substance are described. Primary screening of antitumor agents was undertaken to modify them for targeted delivery based on the underlying assumptions of the model.

  5. Differential geometry based solvation model II: Lagrangian formulation.

    Science.gov (United States)

    Chen, Zhan; Baker, Nathan A; Wei, G W

    2011-12-01

    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation models. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The optimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and PB equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of

  6. Statistical models of a gas diffusion electrode: II. Current resistent

    Energy Technology Data Exchange (ETDEWEB)

    Proksch, D B; Winsel, O W

    1965-07-01

    The authors describe an apparatus for measuring the flow resistance of gas diffusion electrodes which is a mechanical analog of the Wheatstone bridge for measuring electric resistance. The flow resistance of a circular DSK electrode sheet, consisting of two covering layers and a working layer between them, was measured as a function of the gas pressure. While the pressure first was increased and then decreased, a hysteresis occurred, which is discussed and explained by a statistical model of a porous electrode.

  7. Physics Of Eclipsing Binaries. II. Towards the Increased Model Fidelity

    OpenAIRE

    Prša, Andrej; Conroy, Kyle E.; Horvat, Martin; Pablo, Herbert; Kochoska, Angela; Bloemen, Steven; Giammarco, Joseph; Hambleton, Kelly M.; Degroote, Pieter

    2016-01-01

    The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures and luminosities), yet the models are not capable of reproducing observed...

  8. An improved car-following model accounting for the preceding car's taillight

    Science.gov (United States)

    Zhang, Jian; Tang, Tie-Qiao; Yu, Shao-Wei

    2018-02-01

    During deceleration, the preceding car's taillight may influence the following car's driving behavior. In this paper, we propose an extended car-following model that takes the preceding car's taillight into consideration. Two typical situations are used to simulate each car's movement and to study the effects of the preceding car's taillight on driving behavior. Meanwhile, a sensitivity analysis of the model parameter is discussed in detail. The numerical results show that the proposed model can improve the stability of traffic flow, and that traffic safety can be enhanced without a decrease in efficiency, especially when cars pass through a signalized intersection.
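As background to the kind of simulation described in this record, a minimal generic car-following sketch is given below. It implements a plain optimal-velocity model, not the taillight-extended model of the abstract, and every parameter value is illustrative.

```python
# Hedged sketch of a generic optimal-velocity car-following simulation:
# each follower accelerates toward a desired speed set by its headway.
import math

def optimal_velocity(headway, v_max=30.0, h_c=25.0, width=20.0):
    """Desired speed as a smooth, increasing function of headway (m -> m/s)."""
    return 0.5 * v_max * (math.tanh((headway - h_c) / width) + 1.0)

def step(xs, vs, dt=0.1, alpha=2.0):
    """One explicit Euler step; the lead car (index 0) keeps constant speed."""
    accs = [0.0] * len(xs)
    for i in range(1, len(xs)):
        headway = xs[i - 1] - xs[i]
        accs[i] = alpha * (optimal_velocity(headway) - vs[i])
    vs = [v + a * dt for v, a in zip(vs, accs)]
    xs = [x + v * dt for x, v in zip(xs, vs)]
    return xs, vs

# Platoon of five cars 20 m apart; followers start from rest behind a
# leader cruising at 15 m/s.
xs = [100.0 - 20.0 * i for i in range(5)]
vs = [15.0] + [0.0] * 4
for _ in range(2000):                      # 200 s of simulated time
    xs, vs = step(xs, vs)
print(all(abs(v - 15.0) < 0.5 for v in vs))
```

A taillight extension of the kind the paper proposes would add a braking-state term to the acceleration when the preceding car decelerates; the stability claims in the abstract are statements about how such terms damp the velocity perturbations this baseline model propagates down the platoon.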

  9. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Brorsen, Michael

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), Denmark. The starting point for the present report is the previously carried out run-up tests described in Lykke Andersen & Frigaard, 2006. The objective of the tests was to investigate the impact pressures generated on a horizontal platform and a cone platform for selected sea states calibrated by Lykke Andersen & Frigaard, 2006. The measurements should be used for assessment of slamming coefficients for the design of horizontal and cone-shaped access platforms on piles. The model tests include mainly regular waves and a few irregular wave tests. These tests were conducted at Aalborg University from 9 November 2006 to 17 November 2006.

  10. Model of comet comae. II. Effects of solar photodissociative ionization

    International Nuclear Information System (INIS)

    Huebner, W.F.; Giguere, P.T.

    1980-01-01

    Improvements to our computer model of coma photochemistry are described. These include an expansion of the chemical reaction network and new rate constants that have been measured only recently. Photolytic reactions of additional molecules are incorporated, and photolytic branching ratios are treated in far greater detail than in our previous work. A total of 25 photodissociative ionization (PDI) reactions are now considered (as compared to only 3 PDI reactions previously). Solar PDI of the mother molecule CO2 is shown to compete effectively with photoionization of CO in the production of observed CO+. The CO+ density peak predicted by our improved model, for CO2 or CO mother molecules, is deep in the inner coma, in better agreement with observation than our old CO2 model. However, neither CO2 nor CO mother-molecule calculations reproduce the CO+/H2O+ ratio observed in comet Kohoutek. PDI products of CO2, CO, CH4, and NH3 mother molecules fuel a complex chemistry scheme, producing inner coma abundances of CN, C2, and C3 much greater than previously calculated.

  11. Slag Behavior in Gasifiers. Part II: Constitutive Modeling of Slag

    Energy Technology Data Exchange (ETDEWEB)

    Massoudi, Mehrdad [National Energy Technology Laboratory; Wang, Ping

    2013-02-07

    The viscosity of slag and the thermal conductivity of ash deposits are two of the most important constitutive parameters that need to be studied. The accurate formulation or representation of the (transport) properties of coal presents a special challenge for modeling efforts in computational fluid dynamics applications. Studies have indicated that, for tapping and for the membrane wall to be accessible, slag viscosity must remain within a certain range over the operating temperatures; for example, between 1,300 °C and 1,500 °C the viscosity is approximately 25 Pa·s. As the operating temperature decreases, the slag cools and solid crystals begin to form. Since slag behaves as a non-linear fluid, we discuss the constitutive modeling of slag and the important parameters that must be studied. We propose a new constitutive model in which the stress tensor not only has a yield-stress part but also a viscous part with a shear-rate-dependent viscosity, along with temperature and concentration dependency, while allowing for the possibility of normal stress effects. In Part I, we reviewed, identified, and discussed the key coal ash properties and the operating conditions impacting slag behavior.

  12. A GLOBAL MAGNETIC TOPOLOGY MODEL FOR MAGNETIC CLOUDS. II

    Energy Technology Data Exchange (ETDEWEB)

    Hidalgo, M. A., E-mail: miguel.hidalgo@uah.es [Departamento de Fisica, Universidad de Alcala, Apartado 20, E-28871 Alcala de Henares, Madrid (Spain)

    2013-04-01

    In the present work, we extensively use our analytical approach to the global magnetic field topology of magnetic clouds (MCs), introduced in a previous paper, in order to show its potential and to study its physical consistency. The model assumes a toroidal topology with a non-uniform (variable maximum radius) cross-section along the torus. Moreover, it has a non-force-free character and also includes the expansion of the cross-section. As is shown, the model allows us, first, to analyze MC magnetic structures, determining their physical parameters, with a variety of magnetic field shapes, and second, to reconstruct their relative orientation in the interplanetary medium from the observations obtained by several spacecraft. Multipoint spacecraft observations therefore give the opportunity to infer the structure of this large-scale magnetic flux rope in the solar wind. For these tasks, we use data from Helios (A and B), STEREO (A and B), and the Advanced Composition Explorer. We show that the proposed analytical model can explain quite well the topology of several MCs in the interplanetary medium and is a good starting point for understanding the physical mechanisms underlying these phenomena.

  13. A Nonlinear Transmission Line Model of the Cochlea With Temporal Integration Accounts for Duration Effects in Threshold Fine Structure

    DEFF Research Database (Denmark)

    Verhey, Jesko L.; Mauermann, Manfred; Epp, Bastian

    2017-01-01

    For normal-hearing listeners, auditory pure-tone thresholds in quiet often show quasi-periodic fluctuations when measured with a high frequency resolution, referred to as threshold fine structure. Threshold fine structure depends on the stimulus duration, with smaller fluctuations for short than for long signals. The present study demonstrates how this effect can be captured by a nonlinear and active model of the cochlea in combination with a temporal integration stage. Since this cochlear model also accounts for fine structure and connected level-dependent effects, it is superior...

  14. Modelling of L-valine Repeated Fed-batch Fermentation Process Taking into Account the Dissolved Oxygen Tension

    Directory of Open Access Journals (Sweden)

    Tzanko Georgiev

    2009-03-01

    Full Text Available. This article deals with the synthesis of a dynamic unstructured model of a variable-volume fed-batch fermentation process with intensive droppings for L-valine production. The presented approach includes the following main procedures: description of the process by generalized stoichiometric equations; preliminary data processing and calculation of specific rates for the main kinetic variables; identification of the specific rates taking into account the dissolved oxygen tension; establishment and optimisation of a dynamic model of the process; and simulation studies. MATLAB is used as the research environment.

  15. Accounting for scattering in the Landauer-Datta-Lundstrom transport model

    Directory of Open Access Journals (Sweden)

    Юрій Олексійович Кругляк

    2015-03-01

    Full Text Available. Scattering of carriers in the LDL transport model during changes of the scattering times in collision processes is considered qualitatively. The basic relationship between the transmission coefficient T and the average mean free path λ is derived for a 1D conductor. As an example, experimental data for a Si MOSFET are analyzed with the use of various reliability models.
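In the Landauer-Datta-Lundstrom framework the relationship mentioned above has the well-known closed form T = λ/(λ + L) for a 1D conductor of length L and mean free path λ, which can be checked numerically:

```python
def transmission(mfp, length):
    """LDL transmission coefficient for a 1D conductor:
    T = lambda / (lambda + L).
    Interpolates between the ballistic limit (L << lambda, T -> 1)
    and the diffusive limit (L >> lambda, T -> lambda / L)."""
    return mfp / (mfp + length)
```

For example, a conductor exactly one mean free path long transmits half the carriers, while a very short conductor is nearly ballistic.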

  16. Loading Processes Dynamics Modelling Taking into Account the Bucket-Soil Interaction

    Directory of Open Access Journals (Sweden)

    Carmen Debeleac

    2007-10-01

    Full Text Available. The author proposes three dynamic models specialized for analyzing the vibrations and resistive forces that appear during the loading process with different construction equipment, such as front loaders and excavators. The models put into evidence the components of digging: penetration, cutting, and loading. The conclusions of this study highlight the dynamic overloads that appear in the working state and induce self-oscillations in the equipment structure.

  17. A multiphase constitutive model of reinforced soils accounting for soil-inclusion interaction behaviour

    OpenAIRE

    BENNIS, M; DE BUHAN, P

    2003-01-01

    A two-phase continuum description of reinforced soil structures is proposed in which the soil mass and the reinforcement network are treated as mutually interacting superposed media. The equations governing such a model are developed in the context of elastoplasticity, with special emphasis put on the soil/reinforcement interaction constitutive law. As shown in an illustrative example, such a model paves the way for numerically efficient design methods of reinforced soil structures.

  18. Mathematical modeling of pigment dispersion taking into account the full agglomerate particle size distribution

    DEFF Research Database (Denmark)

    Kiil, Søren

    2017-01-01

    The purpose of this work is to develop a mathematical model that can quantify the dispersion of pigments, with a focus on the mechanical breakage of pigment agglomerates. The underlying physical mechanism was assumed to be surface erosion of spherical pigment agglomerates, taking the full agglomerate particle size distribution into account. The model may be used, e.g., in the development of novel dispersion principles and for analysis of dispersion failures. The general applicability of the model, beyond the three pigments considered, needs to be confirmed.

  19. Management Accounting

    OpenAIRE

    John Burns; Martin Quinn; Liz Warren; João Oliveira

    2013-01-01

    Overview of the Book: The textbook comprises six sections which together represent a comprehensive insight into management accounting: its technical attributes, changeable wider context, and the multiple roles of management accountants. The sections cover: (1) an introduction to management accounting, (2) how organizations account for their costs, (3) the importance of tools and techniques which assist organizational planning and control, (4) the various dimensions of making business decisions...

  20. An extended continuum model accounting for the driver's timid and aggressive attributions

    International Nuclear Information System (INIS)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-01-01

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. By applying the linear stability theory, we present the analysis of the new model's linear stability. Through nonlinear analysis, the KdV–Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving is better than timid driving because the aggressive driver adjusts his speed in a timely manner according to the leading car's speed. The key finding of this new model is that timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship of energy consumption between aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption. - Highlights: • A new continuum model is developed with consideration of the driver's timid and aggressive behaviors simultaneously. • Applying the linear stability theory, the new model's linear stability is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.

  1. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    Science.gov (United States)

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-03

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined--US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel--95% of the results occurred within ±20 g CO2e MJ(-1) of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
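The uncertainty-propagation procedure the study describes, sampling model parameters, running the paired modeling system, and summarizing the spread of the resulting emissions intensities, can be sketched generically. The stand-in "emissions model" and its yield parameter below are hypothetical placeholders, not the paper's CGE/emissions system:

```python
import random
import statistics

def monte_carlo_iluc(draw_params, emissions_model, n=5000, seed=42):
    """Propagate parametric uncertainty through a (hypothetical) ILUC
    emissions model: sample parameters, run the model, and summarize the
    distribution of emissions intensity (g CO2e per MJ)."""
    rng = random.Random(seed)
    samples = sorted(emissions_model(draw_params(rng)) for _ in range(n))
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return {
        "mean": mean,
        "cv": sd / mean,  # coefficient of variation
        "ci95": (samples[int(0.025 * n)], samples[int(0.975 * n)]),
    }

# Illustrative stand-in model: intensity falls as crop yield rises.
draw = lambda rng: {"yield_factor": rng.gauss(1.0, 0.1)}
model = lambda p: 30.0 / max(p["yield_factor"], 0.5)
result = monte_carlo_iluc(draw, model)
```

The returned coefficient of variation and 95% interval correspond to the summary statistics the study reports for its three fuel systems.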

  2. An extended continuum model accounting for the driver's timid and aggressive attributions

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Rongjun; Ge, Hongxia [Faculty of Maritime and Transportation, Ningbo University, Ningbo 315211 (China); Jiangsu Province Collaborative Innovation Center for Modern Urban Traffic Technologies, Nanjing 210096 (China); National Traffic Management Engineering and Technology Research Centre Ningbo University Sub-centre, Ningbo 315211 (China); Wang, Jufeng, E-mail: wjf@nit.zju.edu.cn [Ningbo Institute of Technology, Zhejiang University, Ningbo 315100 (China)

    2017-04-18

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. By applying the linear stability theory, we present the analysis of the new model's linear stability. Through nonlinear analysis, the KdV–Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving is better than timid driving because the aggressive driver adjusts his speed in a timely manner according to the leading car's speed. The key finding of this new model is that timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship of energy consumption between aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption. - Highlights: • A new continuum model is developed with consideration of the driver's timid and aggressive behaviors simultaneously. • Applying the linear stability theory, the new model's linear stability is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.

  3. Modelling the long term alteration of concretes: taking carbonation into account

    International Nuclear Information System (INIS)

    Badouix, Franck

    1999-01-01

    After an introduction on the storage and warehousing of wastes from the nuclear industry (principles and objectives, general historical context, classification of radioactive wastes), an overview of studies performed within the CEA on wastes (activities related to the fuel cycle, research on warehousing and storage materials), and an introduction to the development of a general code simulating the degradation of cement-matrix materials and to a model of concrete carbonation under water, this research thesis reports a bibliographical study on the following topics: the case of a non-altered hydrated concrete, expertise performed on altered materials on industrial sites, and the alteration of CPA-CEM I paste (alteration by demineralized water, carbonation). Based on these observations, a simplified model is developed for the cross-diffusion of calcium and carbonates in a semi-infinite inert porous matrix of portlandite. This model is used to simulate degradations performed in the laboratory on a CPA-CEM I paste, and it proves insufficient as far as carbonation is concerned. Tests are performed to study the influence of granulates on a concrete (from an industrial site or elaborated in the laboratory with a known composition) in water with low mineral content. A model is developed to understand the behaviour of paste-granulate interfaces. Concretes are then lixiviated in carbonated water and, using the previous results and the simplified modelling of carbonation, simulations are performed and compared with experimental results. [fr]

  4. An agent-based simulation model of patient choice of health care providers in accountable care organizations.

    Science.gov (United States)

    Alibrahim, Abdullah; Wu, Shinyi

    2018-03-01

    Accountable care organizations (ACOs) in the United States show promise in controlling health care costs while preserving patients' choice of providers. Understanding the effects of patient choice is critical in novel payment and delivery models like the ACO that depend on continuity of care and accountability. The financial, utilization, and behavioral implications associated with a patient's decision to forgo local health care providers for more distant ones to access higher-quality care remain unknown. To study this question, we used an agent-based simulation model of a health care market composed of providers able to form ACOs serving patients, and embedded in it a conditional logit decision model to examine patients capable of choosing their care providers. The simulation focuses on Medicare beneficiaries and their congestive heart failure (CHF) outcomes. We place the patient agents in an ACO delivery system model in which provider agents decide whether they remain in an ACO and whether they perform a quality-improving CHF disease management intervention. Illustrative results show that allowing patients to choose their providers reduces the yearly payment per CHF patient by $320, reduces mortality rates by 0.12 percentage points and hospitalization rates by 0.44 percentage points, and marginally increases provider participation in ACOs. This study demonstrates a model capable of quantifying the effects of patient choice in a theoretical ACO system and provides a potential tool for policymakers to understand the implications of patient choice and assess potential policy controls.
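The conditional logit choice step used by the patient agents can be sketched in a few lines. The utility specification below (quality minus a distance penalty, with coefficients `beta_q` and `beta_d`) is a hypothetical stand-in for the study's estimated model:

```python
import math

def choice_probabilities(utilities):
    """Conditional logit: P(choose i) = exp(U_i) / sum_j exp(U_j)."""
    m = max(utilities)                        # subtract max for numerical stability
    weights = [math.exp(u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def provider_utility(quality, distance_km, beta_q=1.0, beta_d=-0.05):
    """Hypothetical linear utility: quality attracts, travel distance deters."""
    return beta_q * quality + beta_d * distance_km
```

For example, a patient weighing a nearby average provider against a distant high-quality one would evaluate `choice_probabilities([provider_utility(0.5, 2.0), provider_utility(0.9, 30.0)])`; with these illustrative coefficients the nearby provider wins.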

  5. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    Science.gov (United States)

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences, whereas goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely the error-related negativity (ERN) and the feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Using a chemistry transport model to account for the spatial variability of exposure concentrations in epidemiologic air pollution studies.

    Science.gov (United States)

    Valari, Myrto; Menut, Laurent; Chatignoux, Edouard

    2011-02-01

    Environmental epidemiology, and more specifically time-series analysis, has traditionally used area-averaged pollutant concentrations measured at central monitors as exposure surrogates to associate health outcomes with air pollution. However, spatial aggregation has been shown to contribute to the overall bias in the estimation of exposure-response functions. This paper presents the benefit of adding features of the spatial variability of exposure by using concentration fields modeled with a chemistry transport model instead of monitor data and accounting for human activity patterns. On the basis of county-level census data for the city of Paris, France, and a Monte Carlo simulation, a simple activity model was developed accounting for the temporal variability between working and evening hours as well as during transit. By combining activity data with modeled concentrations, the downtown, suburban, and rural spatial patterns in exposure to nitrogen dioxide, ozone, and PM2.5 (particulate matter [PM] with an aerodynamic diameter of 2.5 μm or less) were captured, and the effects of pollution on total nonaccidental mortality were estimated for the 4-yr period from 2001 to 2004. It was shown that the time series of the exposure surrogates developed here are less correlated across co-pollutants than in the case of the area-averaged monitor data. This led to less biased exposure-response functions when all three co-pollutants were inserted simultaneously in the same regression model. This finding yields insight into pollutant-specific health effects that are otherwise masked by the high correlation among co-pollutants.
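The core of the activity-weighted exposure surrogate is a time-weighted average of modeled concentrations over the microenvironments a person occupies. A minimal sketch, with a hypothetical daily schedule and made-up NO2 concentrations:

```python
def weighted_exposure(schedule, concentrations):
    """Time-activity-weighted exposure surrogate: sum over microenvironments
    of (fraction of time spent there) x (modeled concentration there)."""
    total_fraction = sum(frac for _, frac in schedule)
    assert abs(total_fraction - 1.0) < 1e-9, "time fractions must sum to 1"
    return sum(frac * concentrations[env] for env, frac in schedule)

# Hypothetical daily schedule and modeled NO2 concentrations (ug/m3)
schedule = [("home", 0.6), ("work", 0.3), ("transit", 0.1)]
no2 = {"home": 25.0, "work": 40.0, "transit": 60.0}
exposure = weighted_exposure(schedule, no2)  # 0.6*25 + 0.3*40 + 0.1*60
```

Replacing a single area-averaged monitor value with this person- and place-resolved average is what decorrelates the co-pollutant time series in the study.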

  7. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards has witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed companies have been required to report under International Financial Reporting Standards (IFRS).

  8. Accounting outsourcing

    OpenAIRE

    Linhartová, Lucie

    2012-01-01

    This thesis gives a complex view of accounting outsourcing, covering the outsourcing process from its beginning (conditions of collaboration, drafting of the contract), through the collaboration itself, to its possible ending. The work defines outsourcing, indicates its main advantages and disadvantages, and presents arguments for its use. The main focus of the thesis is the practical side of accounting outsourcing and the provision of high-quality accounting services.

  9. Three-body model of deuteron breakup and stripping, II

    International Nuclear Information System (INIS)

    Austern, N.; Vincent, C.M.; Farrell, J.P. Jr.

    1978-01-01

    A previously investigated three-body model of the deuteron-nucleus system, limited to relative angular momentum l=0 for the two active nucleons, is reevaluated. Full attention is given to self-consistency between elastic and breakup channels. Introduction of the reaction of breakup on the elastic channel now reduces the elastic reflection coefficients in low partial waves by nearly a factor of 2 and causes substantial shifts in phase. Breakup amplitudes in low partial waves are also greatly reduced. As before, the breakup part of the wavefunction contains a broad spectrum of n-p continuum states. The breakup part of the wavefunction at zero n-p separation is localized at small radii, within and just outside the target nucleus, where it is comparable in magnitude with the projected elastic channel wavefunction. As a result, the projected elastic channel wavefunction is a poor approximation to the full wavefunction at n-p coincidence. Deuteron stripping theories that use the projected elastic wavefunction in a truncated distorted-waves Born series must correspondingly be quite misleading. To investigate deuteron stripping further, the exact result of the coupled-channels calculation is compared with several standard approximate models. Although there is a close qualitative resemblance among the results of all the approaches, the best single approximation to the coupled-channels result is found from the familiar phenomenological approach, in which a local optical potential is fitted to the elastic scattering ''observed'' in the coupled-channels calculation. The coupled-channels results are also used to analyze the approximations in the Johnson-Soper method. Several formal aspects of the three-body model are discussed.

  10. Mechanistic Physiologically Based Pharmacokinetic (PBPK) Model of the Heart Accounting for Inter-Individual Variability: Development and Performance Verification.

    Science.gov (United States)

    Tylutki, Zofia; Mendyk, Aleksander; Polak, Sebastian

    2018-04-01

    Modern model-based approaches to cardiac safety and efficacy assessment require the accurate establishment of drug concentration-effect relationships. Thus, knowledge of the active concentration of drugs in heart tissue is desirable, along with estimation of the influence of inter-subject variability. To that end, we developed a mechanistic physiologically based pharmacokinetic model of the heart. The model was described with literature-derived parameters and written in R, v.3.4.0. Five parameters were estimated. The model was fitted to amitriptyline and nortriptyline concentrations after an intravenous infusion of amitriptyline. The cardiac model consisted of 5 compartments representing the pericardial fluid, heart extracellular water, and epicardial, midmyocardial, and endocardial intracellular fluids. Drug cardiac metabolism, passive diffusion, active efflux, and uptake were included in the model as mechanisms involved in drug disposition within the heart. The model accounted for inter-individual variability. The estimates of the optimized parameters were within physiological ranges. The model performance was verified by simulating 5 clinical studies of amitriptyline intravenous infusion, and the simulated pharmacokinetic profiles agreed with clinical data. The results support the model's feasibility. The proposed structure can be tested with the goal of improving patient-specific model-based cardiac safety assessment and offers a framework for predicting cardiac concentrations of various xenobiotics. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  11. Modeling of the core of Atucha II nuclear power plant

    International Nuclear Information System (INIS)

    Blanco, Anibal

    2007-01-01

    This work is part of a Nuclear Engineering degree thesis at the Instituto Balseiro and is carried out within the development of an Argentinean nuclear power plant simulator. To obtain the best representation of the reactor's physical behavior using state-of-the-art tools, this simulator should couple a 3D neutronics core calculation code with a thermal-hydraulics system code. Focusing on the neutronic side of this work, we modeled the core of the Atucha II nuclear power plant and performed calculations using PARCS. Whenever possible, we compare our results against results obtained with PUMA (the official core code for Atucha II). (author) [es]

  12. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro...

  13. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
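The finite-mixture/EM idea described above can be illustrated with a toy version of the problem: a binary outcome, a latent binary exposure, and an externally predicted probability of exposure for each subject. This sketch is a deliberately simplified stand-in for the paper's GLM mixture (Bernoulli outcome rates instead of regression coefficients); the starting values and iteration count are arbitrary:

```python
def em_latent_exposure(y, p_exposed, iters=300):
    """Toy EM for a two-component outcome mixture: binary outcome y, latent
    binary exposure whose per-subject prior probability p_exposed comes from
    an external prediction model. Estimates the outcome rate among the
    unexposed (r0) and the exposed (r1)."""
    r0, r1 = 0.2, 0.8                        # starting values
    for _ in range(iters):
        # E-step: posterior probability that each subject is truly exposed
        w = []
        for yi, pi in zip(y, p_exposed):
            like1 = pi * (r1 if yi else 1.0 - r1)
            like0 = (1.0 - pi) * (r0 if yi else 1.0 - r0)
            w.append(like1 / (like1 + like0))
        # M-step: posterior-weighted outcome rates
        r1 = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        r0 = sum((1.0 - wi) * yi for wi, yi in zip(w, y)) / sum(1.0 - wi for wi in w)
    return r0, r1
```

Unlike dichotomizing `p_exposed` at a threshold, the E-step keeps each subject's full membership weight, which is why the mixture approach avoids the misclassification bias the study reports for the proxy approaches.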

  14. PIO I-II tendencies. Part 2. Improving the pilot modeling

    Directory of Open Access Journals (Sweden)

    Ioan URSU

    2011-03-01

    Full Text Available. The study is conceived in two parts and aims to contribute to the problem of PIO aircraft susceptibility analysis. Part I, previously published in this journal, highlighted the main steps of deriving a complex model of the human pilot. The current Part II of the paper considers a proper procedure for the synthesis of the human pilot mathematical model in order to analyze PIO II type susceptibility of a VTOL-type aircraft, related to the presence of a position- and rate-limited actuator. The mathematical tools are those of the semi-global stability theory developed in recent works.

  15. Tropospheric ozone and the environment II. Effects, modeling and control

    International Nuclear Information System (INIS)

    Berglund, R.L.

    1992-01-01

    This was the sixth International Specialty Conference on ozone for the Air & Waste Management Association since 1978 and the first to be held in the Southeast. Of the preceding five conferences, three were held in Houston, one in New England, and one in Los Angeles. The changing location continues to support the understanding that tropospheric ozone is a nationwide problem, requiring understanding and participation by representatives of all regions. Yet questions such as the following continue to be raised over all aspects of the nation's efforts to control ozone. Are the existing primary and secondary National Ambient Air Quality Standards (NAAQS) for ozone the appropriate targets for the ozone control strategy, or should they be modified to more effectively accommodate new health or ecological effects information, or better fit statistical analyses of ozone modeling data? Are the modeling tools presently available adequate to predict ozone concentrations for future precursor emission trends? What ozone attainment strategy will be the best means of meeting the ozone standard? To best answer these and other questions, there needs to be a continued sharing of information among researchers working on them. While answers to these questions will often be qualitative and location-specific, they will help focus future research programs and assist in developing future regulatory strategies.

  16. MODELING THE 1958 LITUYA BAY MEGA-TSUNAMI, II

    Directory of Open Access Journals (Sweden)

    Charles L. Mader

    2002-01-01

    Full Text Available Lituya Bay, Alaska is a T-shaped bay, 7 miles long and up to 2 miles wide. The two arms at the head of the bay, Gilbert and Crillon Inlets, are part of a trench along the Fairweather Fault. On July 8, 1958, a magnitude 7.5 earthquake occurred along the Fairweather fault with an epicenter near Lituya Bay. A mega-tsunami wave was generated that washed out trees to a maximum altitude of 520 meters at the entrance of Gilbert Inlet. Much of the rest of the shoreline of the Bay was denuded by the tsunami from 30 to 200 meters altitude. In the previous study it was determined that if the 520 meter high run-up was 50 to 100 meters thick, the observed inundation in the rest of Lituya Bay could be numerically reproduced. It was also concluded that further studies would require full Navier-Stokes modeling similar to that required for asteroid-generated tsunami waves. During the summer of 2000, Hermann Fritz conducted experiments that reproduced the Lituya Bay 1958 event. The laboratory experiments indicated that the 1958 Lituya Bay 524 meter run-up on the spur ridge of Gilbert Inlet could be caused by a landslide impact. The Lituya Bay impact landslide generated tsunami was modeled with the full Navier-Stokes AMR Eulerian compressible hydrodynamic code called SAGE, which includes the effect of gravity.

  17. Spin and Wind Directions II: A Bell State Quantum Model.

    Science.gov (United States)

    Aerts, Diederik; Arguëlles, Jonito Aerts; Beltran, Lester; Geriente, Suzette; Sassoli de Bianchi, Massimiliano; Sozzo, Sandro; Veloz, Tomas

    2018-01-01

    In the first half of this two-part article (Aerts et al. in Found Sci. doi:10.1007/s10699-017-9528-9, 2017b), we analyzed a cognitive psychology experiment where participants were asked to select pairs of directions that they considered to be the best example of Two Different Wind Directions, and showed that the data violate the CHSH version of Bell's inequality with the same magnitude as in typical Bell-test experiments in physics. In this second part, we complete our analysis by presenting a symmetrized version of the experiment, still violating the CHSH inequality but now also obeying the marginal law, for which we provide a full quantum modeling in Hilbert space, using a singlet state and suitably chosen product measurements. We also address some of the criticisms that have recently been directed at experiments of this kind, according to which they would not highlight the presence of genuine forms of entanglement. We explain that these criticisms are based on a view of entanglement that is too restrictive, thus unable to capture all possible ways physical and conceptual entities can connect and form systems behaving as a whole. We also provide an example of a mechanical model showing that the violations of the marginal law and Bell inequalities are generally to be associated with different mechanisms.
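
    The CHSH quantity discussed above can be illustrated with the textbook quantum prediction for a singlet state, E(a, b) = -cos(a - b). A minimal sketch (the measurement angles below are the standard choice that maximizes the quantum value; they are illustrative, not taken from the experiment in the paper):

```python
import math

def correlation(a, b):
    # Singlet-state quantum prediction for the correlation between
    # measurements along angles a and b: E(a, b) = -cos(a - b)
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    return (correlation(a, b) - correlation(a, b2)
            + correlation(a2, b) + correlation(a2, b2))

# Angles chosen to maximize the quantum value of |S|
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
# |S| reaches 2*sqrt(2), beyond the classical CHSH bound of 2
```

    Any local hidden-variable model is constrained to |S| ≤ 2, which is what makes the violation reported above meaningful.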

  18. Translating Institutional Templates: A Historical Account of the Consequences of Importing Policing Models into Argentina

    Directory of Open Access Journals (Sweden)

    Matías Dewey

    2017-01-01

    Full Text Available This article focuses on the translation of the French and English law enforcement models into Argentina and analyzes its consequences in terms of social order. Whereas in the former two models the judiciary and police institutions originated in large-scale processes of historical consolidation, in the latter these institutions were implanted without the antecedents present in their countries of origin. The empirical references are Argentine police institutions, particularly the police of the Buenos Aires Province, observed at two moments in which the institutional import was particularly intense: towards the end of the nineteenth and beginning of the twentieth centuries, and at the end of the twentieth century. By way of tracing these processes of police constitution and reform, we show how new models of law enforcement and policing interacted with indigenous political structures and cultural frames, as well as how this constellation produced a social order in which legality and illegality are closely interwoven. The article is an attempt to go beyond the common observations regarding how an imported model failed; instead, it dissects the effects the translation actually produced and how the translated models transform into resources that reshape the new social order. A crucial element, the article shows, is that these resources can be instrumentalized according to »idiosyncrasies«, interests, and quotas of power.

  19. Unsupervised machine learning account of magnetic transitions in the Hubbard model

    Science.gov (United States)

    Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan

    2018-01-01

    We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows near-perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
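
    As a rough illustration of the dimensionality-reduction idea, the sketch below uses plain PCA (via NumPy) as a stand-in for the autoencoders and t-SNE used in the paper, and synthetic spin snapshots rather than Monte Carlo data. Everything here (sample counts, flip probabilities) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_samples = 64, 100

# Synthetic stand-ins for Monte Carlo snapshots: "low temperature" samples are
# almost fully aligned (in either direction), "high temperature" ones are random
ordered = np.where(rng.random((n_samples, n_sites)) < 0.95, 1, -1)
ordered[::2] *= -1  # include both symmetry-broken orientations
disordered = np.where(rng.random((n_samples, n_sites)) < 0.5, 1, -1)
X = np.vstack([ordered, disordered]).astype(float)

# Linear dimensionality reduction: project every configuration onto the
# leading principal component of the pooled data
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ vt[0]

# In this toy setting the leading component essentially tracks the
# magnetization, so the 1D output already separates the two "phases"
```

    The point mirrored by the abstract is that a low-dimensional embedding of raw configurations can recover a physical order parameter without supervision; nonlinear methods like t-SNE extend this to cases where no single linear direction suffices.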

  20. An extension of a high temperature creep model to account for fuel sheath oxidation

    International Nuclear Information System (INIS)

    Boccolini, G.; Valli, G.

    1983-01-01

    Starting from NIRVANA, the high-temperature creep model for Zircaloy fuel sheathing developed by AECL, a multilayer model is proposed in this paper: it includes the outer oxide plus retained-alpha layers and the inner core of beta or alpha-plus-beta material, all constrained to deform at the same creep rate. The model has been incorporated into the SPARA fuel computer code developed for the transient analysis of fuel rod behaviour in the CIRENE prototype reactor, but it is in principle valid for all Zircaloy fuel sheathings. Its predictions are compared with experimental results from burst tests on BWR- and PWR-type sheaths; the tests were carried out at CNEN under two research contracts, with Ansaldo Meccanico Nucleare and Sigen-Sopren, respectively.

  1. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    Energy Technology Data Exchange (ETDEWEB)

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments, including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axisymmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  2. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jones-Smith, Jessica C; Igusa, Takeru

    2017-01-01

    Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.
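
    The income elasticities simulated here follow the standard definition: the percent change in quantity demanded per percent change in income. A minimal sketch using the arc (midpoint) formula, with hypothetical numbers not taken from the study:

```python
def income_elasticity(q0, q1, i0, i1):
    """Arc (midpoint) income elasticity of demand:
    percent change in quantity per percent change in income."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2.0)
    pct_i = (i1 - i0) / ((i0 + i1) / 2.0)
    return pct_q / pct_i

# Hypothetical numbers: income rises ~10% while fruit consumption barely
# moves, giving a near-zero elasticity, the habit-dominated pattern the
# model predicts for this population
e_fruit = income_elasticity(q0=100.0, q1=100.5, i0=1000.0, i1=1100.0)
```

    An elasticity near zero means the good is insensitive to income changes, which is how the abstract characterizes fruit and fat consumption in this population.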

  3. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Directory of Open Access Journals (Sweden)

    Rahmatollah Beheshti

    Full Text Available Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in 1) quantifying the influence of prior diet preferences when food budgets are increased and 2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.

  4. Improved signal model for confocal sensors accounting for object depending artifacts.

    Science.gov (United States)

    Mauch, Florian; Lyda, Wolfram; Gronle, Marc; Osten, Wolfgang

    2012-08-27

    The conventional signal model of confocal sensors is well established and has proven to be exceptionally robust, especially when measuring rough surfaces. Its physical derivation, however, is explicitly based on plane surfaces or point-like objects. Here we show experimental results of a confocal point sensor measurement of a surface standard. The results illustrate the rise of severe artifacts when measuring curved surfaces. On this basis, we present a systematic extension of the conventional signal model that is proven to be capable of qualitatively explaining these artifacts.

  5. Multiphoton production at high energies in the standard model. II

    International Nuclear Information System (INIS)

    Mahlon, G.

    1993-01-01

    We examine multiphoton production in the electroweak sector of the standard model in the high-energy limit using the equivalence theorem in combination with spinor helicity techniques. We utilize currents consisting of a charged scalar, spinor, or vector line that radiates n photons. Only one end of the charged line is off shell in these currents, which are known for the cases of like-helicity and one unlike-helicity photons. We obtain a wide variety of helicity amplitudes for processes involving two pairs of charged particles by considering combinations of four currents. We examine the situation with respect to currents which have both ends of the charged line off shell, and present solutions for the case of like-helicity photons. These new currents may be combined with two of the original currents to produce additional amplitudes involving Higgs bosons, longitudinal Z bosons, or neutrino pairs.

  6. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    OpenAIRE

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts, allowing the inclusion of a broader set of ecosystem service types such as regulating services and cultural services. Consistent with the principles of national accounts, ecosystem accounting focuses on asse...

  7. A comparison of land use change accounting methods: seeking common grounds for key modeling choices in biofuel assessments

    DEFF Research Database (Denmark)

    de Bikuna Salinas, Koldo Saez; Hamelin, Lorie; Hauschild, Michael Zwicky

    2018-01-01

    Five currently used methods to account for the global warming (GW) impact of induced land-use change (LUC) greenhouse gas (GHG) emissions have been applied to four biofuel case studies. Two of the investigated methods attempt to avoid the need of considering a definite occupation (thus amortization) period by considering ongoing LUC trends as a dynamic baseline. This leads to the accounting of only a small fraction (0.8%) of the emissions from the assessed LUC, thus their validity is disputed. The comparison of methods and contrasting case studies illustrated the need of clearly distinguishing between the different time horizons involved in life cycle assessments (LCA) of land-demanding products like biofuels. Absent in ISO standards, and giving rise to several confusions, definitions for the following time horizons have been proposed: technological scope, inventory model, impact...

  8. Behavioral health and health care reform models: patient-centered medical home, health home, and accountable care organization.

    Science.gov (United States)

    Bao, Yuhua; Casalino, Lawrence P; Pincus, Harold Alan

    2013-01-01

    Discussions of health care delivery and payment reforms have largely been silent about how behavioral health could be incorporated into reform initiatives. This paper draws attention to four patient populations defined by the severity of their behavioral health conditions and insurance status. It discusses the potentials and limitations of three prominent models promoted by the Affordable Care Act to serve populations with behavioral health conditions: the Patient-Centered Medical Home, the Health Home initiative within Medicaid, and the Accountable Care Organization. To incorporate behavioral health into health reform, policymakers and practitioners may consider embedding in the reform efforts explicit tools (accountability measures and payment designs) to improve access to and quality of care for patients with behavioral health needs.

  9. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia

    Directory of Open Access Journals (Sweden)

    Supriyati

    2015-12-01

    Full Text Available Bank Indonesia demands that the national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks' financial statements have become the basis for Bank Indonesia to determine the status of their soundness. In fact, banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. However, for the users of financial statements, the impact may differ, for example for the value of the company, the length of the financial audit, and other aspects such as tax evasion by the banks. This study tries to find out 1) the effect of GCG on earnings management, 2) the effect of earnings management on company value, the audit report lag, and taxation, and 3) the effect of the audit report lag on corporate value and taxation. This is quantitative research with data collected from the bank financial statements, GCG implementation reports, and the banks' annual reports of 2003-2013. There were 41 banks listed on the Indonesia Stock Exchange, taken using purposive sampling. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the financial statements presentation to the public. This research is expected to provide managerial implications in order to consider the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks' competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  10. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia (P.197-207)

    Directory of Open Access Journals (Sweden)

    Supriyati Supriyati

    2017-01-01

    Full Text Available Bank Indonesia demands that the national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks' financial statements have become the basis for Bank Indonesia to determine the status of their soundness. In fact, banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. However, for the users of financial statements, the impact may differ, for example for the value of the company, the length of the financial audit, and other aspects such as tax evasion by the banks. This study tries to find out 1) the effect of GCG on earnings management, 2) the effect of earnings management on company value, the audit report lag, and taxation, and 3) the effect of the audit report lag on corporate value and taxation. This is quantitative research with data collected from the bank financial statements, GCG implementation reports, and the banks' annual reports of 2003-2013. There were 41 banks listed on the Indonesia Stock Exchange, taken using purposive sampling. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the financial statements presentation to the public. This research is expected to provide managerial implications in order to consider the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks' competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  11. Small strain multiphase-field model accounting for configurational forces and mechanical jump conditions

    Science.gov (United States)

    Schneider, Daniel; Schoof, Ephraim; Tschukin, Oleg; Reiter, Andreas; Herrmann, Christoph; Schwab, Felix; Selzer, Michael; Nestler, Britta

    2018-03-01

    Computational models based on the phase-field method have become an essential tool in material science and physics in order to investigate materials with complex microstructures. The models typically operate on a mesoscopic length scale resolving structural changes of the material and provide valuable information about the evolution of microstructures and mechanical property relations. For many interesting and important phenomena, such as martensitic phase transformation, mechanical driving forces play an important role in the evolution of microstructures. In order to investigate such physical processes, an accurate calculation of the stresses and the strain energy in the transition region is indispensable. We recall a multiphase-field elasticity model based on the force balance and the Hadamard jump condition at the interface. We show the quantitative characteristics of the model by comparing the stresses, strains and configurational forces with theoretical predictions in two-phase cases and with results from sharp interface calculations in a multiphase case. As an application, we choose the martensitic phase transformation process in multigrain systems and demonstrate the influence of the local homogenization scheme within the transition regions on the resulting microstructures.

  12. An analytical model for CDMA downlink rate optimization taking into account uplink coverage restriction

    NARCIS (Netherlands)

    Endrayanto, A.I.; van den Berg, Hans Leo; Boucherie, Richardus J.

    2003-01-01

    This paper models and analyzes downlink and uplink power assignment in Code Division Multiple Access (CDMA) mobile networks. By discretizing the area into small segments, the power requirements are characterized via a matrix representation that separates user and system characteristics. We obtain a

  13. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    Science.gov (United States)

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  14. Accounting for false-positive acoustic detections of bats using occupancy models

    Science.gov (United States)

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    1. Acoustic surveys have become a common survey method for bats and other vocal taxa. Previous work shows that bat echolocation may be misidentified, but common analytic methods, such as occupancy models, assume that misidentifications do not occur. Unless rare, such misidentifications could lead to incorrect inferences with significant management implications.
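
    One common way to fold false positives into an occupancy likelihood (a Royle–Link-style formulation; the specific model used in this paper may differ) is to mix the detection processes for occupied and unoccupied sites. A minimal sketch with illustrative parameter values:

```python
def detection_history_likelihood(d, K, psi, p11, p10):
    """Marginal likelihood of d detections in K surveys at one site under a
    false-positive occupancy model (binomial coefficient omitted; it is the
    same in both branches and cancels in posterior calculations):
      psi -- probability the site is occupied
      p11 -- per-survey detection probability given occupancy
      p10 -- per-survey false-positive probability given non-occupancy
    """
    occupied = psi * p11**d * (1.0 - p11) ** (K - d)
    unoccupied = (1.0 - psi) * p10**d * (1.0 - p10) ** (K - d)
    return occupied + unoccupied

# With a non-zero false-positive rate, one detection in five surveys is no
# longer conclusive evidence of occupancy (values are illustrative)
psi, p11, p10 = 0.6, 0.3, 0.05
total = detection_history_likelihood(d=1, K=5, psi=psi, p11=p11, p10=p10)
posterior_occupied = psi * p11 * (1.0 - p11) ** 4 / total
```

    Setting p10 = 0 recovers the standard occupancy model that assumes misidentifications never occur, which is exactly the assumption the abstract questions.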

  15. Accounting for change of support in spatial accuracy assessment of modelled soil mineral phosphorous concentration

    NARCIS (Netherlands)

    Leopold, U.; Heuvelink, G.B.M.; Tiktak, A.; Finke, P.A.; Schoumans, O.F.

    2006-01-01

    Agricultural activities in the Netherlands cause high nitrogen and phosphorous fluxes from soil to ground- and surface water. A model chain (STONE) has been developed to study and predict the magnitude of the resulting ground- and surface water pollution under different environmental conditions.

  16. Accounting for heterogeneity in travel episode satisfaction using a random parameters panel effects regression model

    NARCIS (Netherlands)

    Rasouli, Soora; Timmermans, Harry

    2014-01-01

    Rasouli & Timmermans [1] suggested a model of travel episode satisfaction that includes the degree and nature of multitasking, activity envelope, transport mode, travel party, duration and a set of contextual and socio-economic variables. In this sequel, the focus of attention shifts to the analysis of

  17. Methods for Accounting for Co-Teaching in Value-Added Models. Working Paper

    Science.gov (United States)

    Hock, Heinrich; Isenberg, Eric

    2012-01-01

    Isolating the effect of a given teacher on student achievement (value-added modeling) is complicated when the student is taught the same subject by more than one teacher. We consider three methods, which we call the Partial Credit Method, Teacher Team Method, and Full Roster Method, for estimating teacher effects in the presence of co-teaching.…

  18. Practical Model for First Hyperpolarizability Dispersion Accounting for Both Homogeneous and Inhomogeneous Broadening Effects.

    Science.gov (United States)

    Campo, Jochen; Wenseleers, Wim; Hales, Joel M; Makarov, Nikolay S; Perry, Joseph W

    2012-08-16

    A practical yet accurate dispersion model for the molecular first hyperpolarizability β is presented, incorporating both homogeneous and inhomogeneous line broadening, because these affect the β dispersion differently even if they are indistinguishable in linear absorption. Consequently, combining the absorption spectrum with one free shape-determining parameter Γinhom, the inhomogeneous line width, turns out to be necessary and sufficient to obtain a reliable description of the β dispersion, requiring no information on the homogeneous (including vibronic) and inhomogeneous line broadening mechanisms involved, providing an ideal model for practical use in extrapolating experimental nonlinear optical (NLO) data. The model is applied to the efficient NLO chromophore picolinium quinodimethane, yielding an excellent fit of the two-photon resonant wavelength-dependent data and a dependable static value β0 = 316 × 10^(-30) esu. Furthermore, we show that including a second electronic excited state in the model does yield an improved description of the NLO data at shorter wavelengths but has only limited influence on β0.

  19. Accounting for religious sensibilities in social intervention: A supplement to Donkers’ models of change

    Directory of Open Access Journals (Sweden)

    Timothy Schilling

    2006-06-01

    Full Text Available This contribution examines to what extent Donkers' three models of change are adequate for positioning social interventions that are inspired by faith. Drawing on the work of the Brothers of the Christian Schools in Manhattan, the author argues that the social-technological model, the person-oriented model, and the society-critical model are insufficient to place this kind of work. None of these models makes room for a horizon of experience that reaches beyond the person in relation to society, whereas for a believer a horizon of eternity, or the concept of God, plays an important role in the motivation, purpose, and understanding of a social intervention. Through prayer, God is involved in the relationship between worker and client and in the process of change. The author therefore proposes a fourth, faith-based model which, he hopes, can provide a sound interpretive framework for social interventions undertaken from a religious background. In this way, more insight may be gained into how and why faith is at work in the commitment to social change, not only the Christian faith, to which his case study relates, but possibly other religions as well.

  20. Summary of model to account for inhibition of CAM corrosion by porous ceramic coating

    Energy Technology Data Exchange (ETDEWEB)

    Hopper, R., LLNL

    1998-03-31

    Corrosion occurs during five characteristic periods or regimes. These are summarized below. For more detailed discussion, see the attached memorandum by Robert Hopper entitled "Ceramic Barrier Performance Model, Version 1.0, Description of Initial PA Input" and dated March 30, 1998.

  1. Evaluation of alternative surface runoff accounting procedures using the SWAT model

    Science.gov (United States)

    For surface runoff estimation in the Soil and Water Assessment Tool (SWAT) model, the curve number (CN) procedure is commonly adopted to calculate surface runoff by utilizing the antecedent soil moisture condition (SCSI) in the field. In the recent version of SWAT (SWAT2005), an alternative approach is ava...
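
    The curve number procedure mentioned here follows the standard SCS runoff equation, Q = (P - Ia)² / (P - Ia + S) with S = 25400/CN - 254 (mm) and Ia = 0.2S. A minimal sketch (the storm depth and curve number below are illustrative, not from the SWAT source):

```python
def scs_runoff(p_mm, cn):
    """Daily surface runoff (mm) from the SCS curve number method, the
    procedure SWAT adopts; cn is the curve number reflecting the current
    soil moisture condition."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction (standard 0.2 * S)
    if p_mm <= ia:
        return 0.0             # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 50 mm storm on a cover/soil combination with CN = 75
q = scs_runoff(50.0, 75.0)   # roughly 9.3 mm of runoff
```

    Adjusting CN with antecedent soil moisture is what links this formula to the soil-moisture accounting alternatives the abstract compares.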

  2. An analytical model for CDMA downlink rate optimization taking into account uplink coverage restrictions

    NARCIS (Netherlands)

    Endrayanto, A.I.; van den Berg, Hans Leo; Boucherie, Richardus J.

    2005-01-01

    This paper models and analyzes downlink and uplink power assignment in code division multiple access (CDMA) mobile networks. By discretizing the area into small segments, the power requirements are characterized via a matrix representation that separates user and system characteristics. We obtain a

  3. Revisiting Kappa to account for change in the accuracy assessment of land-use models

    NARCIS (Netherlands)

    Vliet, van J.; Bregt, A.K.; Hagen-Zanker, A.

    2011-01-01

    Land-use change models are typically calibrated to reproduce known historic changes. Calibration results can then be assessed by comparing two datasets: the simulated land-use map and the actual land-use map at the same time. A common method for this is the Kappa statistic, which expresses the
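
    The Kappa statistic used for such map comparisons is Cohen's Kappa, computed from a confusion matrix of simulated versus actual land-use classes. A minimal sketch with toy counts (not from the study):

```python
def cohens_kappa(confusion):
    """Cohen's Kappa from a square confusion matrix
    (rows: simulated land-use class, columns: actual class)."""
    k = len(confusion)
    n = float(sum(sum(row) for row in confusion))
    # Observed agreement: fraction of cells classified identically
    p_observed = sum(confusion[i][i] for i in range(k)) / n
    # Chance agreement: expected overlap given the marginal class totals
    p_chance = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion) for i in range(k)
    ) / n**2
    return (p_observed - p_chance) / (1.0 - p_chance)

# Toy two-class comparison of a simulated map against the actual map
kappa = cohens_kappa([[40, 5], [10, 45]])
```

    Kappa corrects raw percentage agreement for the agreement expected by chance, which is why it is preferred over simple cell-by-cell accuracy when class proportions are uneven.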

  4. Demographic Accounting and Model-Building. Education and Development Technical Reports.

    Science.gov (United States)

    Stone, Richard

    This report describes and develops a model for coordinating a variety of demographic and social statistics within a single framework. The framework proposed, together with its associated methods of analysis, serves both general and specific functions. The general aim of these functions is to give numerical definition to the pattern of society and…

  5. Models of language: towards a practice-based account of information in natural language

    NARCIS (Netherlands)

    Andrade-Lotero, E.J.

    2012-01-01

    Edgar Andrade-Lotero investigated two models of linguistic information, focusing in particular on the philosophical presuppositions of these models. One of these models originates from formal semantics; the other is based on a specific investigation into the role of signs in

  6. Using state-and-transition modeling to account for imperfect detection in invasive species management

    Science.gov (United States)

    Frid, Leonardo; Holcombe, Tracy; Morisette, Jeffrey T.; Olsson, Aaryn D.; Brigham, Lindy; Bean, Travis M.; Betancourt, Julio L.; Bryan, Katherine

    2013-01-01

    Buffelgrass, a highly competitive and flammable African bunchgrass, is spreading rapidly across both urban and natural areas in the Sonoran Desert of southern and central Arizona. Damages include increased fire risk, losses in biodiversity, and diminished revenues and quality of life. Feasibility of sustained and successful mitigation will depend heavily on rates of spread, treatment capacity, and cost–benefit analysis. We created a decision support model for the wildland–urban interface north of Tucson, AZ, using a spatial state-and-transition simulation modeling framework, the Tool for Exploratory Landscape Scenario Analyses. We addressed the issues of undetected invasions, identifying potentially suitable habitat and calibrating spread rates, while answering questions about how to allocate resources among inventory, treatment, and maintenance. Inputs to the model include a state-and-transition simulation model to describe the succession and control of buffelgrass, a habitat suitability model, management planning zones, spread vectors, estimated dispersal kernels for buffelgrass, and maps of current distribution. Our spatial simulations showed that without treatment, buffelgrass infestations that started with as little as 80 ha (198 ac) could grow to more than 6,000 ha by the year 2060. In contrast, applying unlimited management resources could limit 2060 infestation levels to approximately 50 ha. The application of sufficient resources toward inventory is important because undetected patches of buffelgrass will tend to grow exponentially. In our simulations, areas affected by buffelgrass may increase substantially over the next 50 yr, but a large, upfront investment in buffelgrass control could reduce the infested area and overall management costs.

  7. A single-trace dual-process model of episodic memory: a novel computational account of familiarity and recollection.

    Science.gov (United States)

    Greve, Andrea; Donaldson, David I; van Rossum, Mark C W

    2010-02-01

    Dual-process theories of episodic memory state that retrieval is contingent on two independent processes: familiarity (providing a sense of oldness) and recollection (recovering events and their context). A variety of studies have reported distinct neural signatures for familiarity and recollection, supporting dual-process theory. One outstanding question is whether these signatures reflect the activation of distinct memory traces or the operation of different retrieval mechanisms on a single memory trace. We present a computational model that uses a single neuronal network to store memory traces, but two distinct and independent retrieval processes access the memory. The model is capable of performing familiarity- and recollection-based discrimination between old and new patterns, demonstrating that dual-process models need not rely on multiple independent memory traces, but can use a single trace. Importantly, our putative familiarity and recollection processes exhibit distinct characteristics analogous to those found in empirical data; they diverge in capacity and sensitivity to sparse and correlated patterns, exhibit distinct ROC curves, and account for performance on both item and associative recognition tests. The demonstration that a single-trace, dual-process model can account for a range of empirical findings highlights the importance of distinguishing between neuronal processes and the neuronal representations on which they operate.

  8. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models

    Directory of Open Access Journals (Sweden)

    Koen Degeling

    2017-12-01

    Background: Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Methods: Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Results: Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Conclusions: Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.

  9. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    Science.gov (United States)

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
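    The bootstrap approach can be sketched for a simple one-parameter exponential time-to-event distribution (hypothetical data and distribution choice are ours; the study itself concerns correlated multi-parameter distributions):

```python
import random
random.seed(1)

# hypothetical patient-level times-to-event (e.g. months)
times = [2.1, 3.4, 0.8, 5.6, 1.2, 4.3, 2.9, 6.1, 0.5, 3.8]

def fit_exponential(sample):
    """MLE rate of an exponential distribution: lambda = n / sum(t)."""
    return len(sample) / sum(sample)

# Non-parametric bootstrap over patients: each replicate yields one plausible
# parameter set for the distribution that represents stochastic uncertainty.
boot_rates = sorted(
    fit_exponential([random.choice(times) for _ in times]) for _ in range(2000)
)
lo, hi = boot_rates[50], boot_rates[1949]  # ~95% interval
print(f"rate point estimate: {fit_exponential(times):.3f}")
print(f"bootstrap 95% interval: ({lo:.3f}, {hi:.3f})")
```

    In a probabilistic sensitivity analysis, each bootstrap replicate would be carried through the patient-level model so that parameter uncertainty in the fitted distribution propagates to the health economic outcomes.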

  10. Application of the Think-Pair-Share Model to Improve Writing Skills in Class II of SDN 3 Banjar Jawa

    Directory of Open Access Journals (Sweden)

    Ningsi Soisana Lakilaf

    2017-12-01

    This study aims to improve students' writing skills through the application of the picture-mediated Think-Pair-Share learning model to class II students in Semester I at SD Negeri 3 Banjar Jawa, academic year 2017/2018. The study was carried out as classroom action research (PTK) in two cycles, each consisting of two meetings, with stages of (1) planning, (2) implementation, (3) observation, and (4) reflection. The subjects were the teacher and the class II students of SD Negeri 3 Banjar Jawa; data were collected using test and non-test techniques. The results show that with the picture-mediated Think-Pair-Share model, student learning mastery improved: the proportion of students achieving mastery in written description was 27% before the intervention, 77% in cycle I, and 90% in cycle II. Learning with the picture-mediated Think-Pair-Share model can thus improve writing skills. The conclusion of this study is that applying the picture-mediated Think-Pair-Share model can improve the writing skills of class II students at SD Negeri 3 Banjar Jawa. It is suggested that teachers be more active and creative in delivering innovative and enjoyable lessons. Keywords: writing skills, Think-Pair-Share model

  11. Biomimetic model systems of rigid hair beds: Part II - Experiment

    Science.gov (United States)

    Jammalamadaka, Mani S. S.; Hood, Kaitlyn; Hosoi, Anette

    2017-11-01

    Crustaceans - such as lobsters, crabs and stomatopods - have hairy appendages that they use to recognize and track odorants in the surrounding fluid. An array of rigid hairs impedes flow at different rates depending on the spacing between hairs and the Reynolds number, Re. At larger Reynolds number (Re>1), fluid travels through the hairs rather than around them, a phenomenon called leakiness. Crustaceans flick their appendages at different speeds in order to manipulate the leakiness between the hairs, allowing the hairs to either detect the odors in a sample of fluid or collect a new sample. Theoretical and numerical studies predict that there is a fast flow region near the hairs that moves closer to the hairs as Re increases. Here, we test this theory experimentally. We 3D printed rigid hairs with an aspect ratio of 30:1 in rectangular arrays with different hair packing fractions. We custom built an experimental setup which establishes Poiseuille flow at intermediate Re, Re ≤ 200. We track the flow dynamics through the hair beds using tracer particles and Particle Image Velocimetry. We will then compare the modelling predictions with the experimental outcomes.

  12. Modeling of Cd(II) sorption on mixed oxide

    International Nuclear Information System (INIS)

    Waseem, M.; Mustafa, S.; Naeem, A.; Shah, K.H.; Hussain, S.Y.; Safdar, M.

    2011-01-01

    Mixed oxide of iron and silicon (0.75 M Fe(OH)3 : 0.25 M SiO2) was synthesized and characterized by various techniques including surface area analysis, point of zero charge (PZC), energy dispersive X-ray (EDX) spectroscopy, thermogravimetric and differential thermal analysis (TG-DTA), Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) analysis. The uptake of Cd2+ ions on the mixed oxide increased with pH, temperature and metal ion concentration. Sorption data have been interpreted in terms of both the Langmuir and Freundlich models. The Xm values at pH 7 are found to be almost twice those at pH 5. The values of both ΔH and ΔS were found to be positive, indicating that the sorption process was endothermic and accompanied by the dehydration of Cd2+. Further, the negative value of ΔG confirms the spontaneity of the reaction. An ion exchange mechanism was suggested for Cd2+ ions at pH 5, whereas at pH 7 ion exchange was found to be coupled with non-specific adsorption of metal cations. (author)
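    For reference, the two isotherms used to interpret the sorption data have the standard forms below; the linearized Langmuir fit is one common way to obtain Xm and K (our sketch, with illustrative names, not the authors' fitting procedure):

```python
def langmuir(c, xm, k):
    """Langmuir isotherm: q = Xm*K*C / (1 + K*C)."""
    return xm * k * c / (1.0 + k * c)

def freundlich(c, kf, n):
    """Freundlich isotherm: q = Kf * C**(1/n)."""
    return kf * c ** (1.0 / n)

def fit_langmuir(cs, qs):
    """Least-squares fit of the linearized form C/q = C/Xm + 1/(K*Xm)."""
    ys = [c / q for c, q in zip(cs, qs)]
    n = len(cs)
    mx, my = sum(cs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(cs, ys)) / \
        sum((x - mx) ** 2 for x in cs)
    intercept = my - slope * mx
    return 1.0 / slope, slope / intercept  # Xm, K
```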

  13. Modified Feddes type stress reduction function for modeling root water uptake: Accounting for limited aeration and low water potential

    Science.gov (United States)

    Peters, Andre; Durner, Wolfgang; Iden, Sascha C.

    2017-04-01

    Modeling water flow in the soil-plant-atmosphere continuum with the Richards equation requires a model for the sink term describing water uptake by plant roots. Despite recent progress in developing process-based models of water uptake by plant roots and water flow in aboveground parts of vegetation, effective models of root water uptake are widely applied and necessary for large-scale applications. Modeling root water uptake consists of three steps, (i) specification of the spatial distribution of potential uptake, (ii) reduction of uptake due to various stress sources, and (iii) enhancement of uptake in part of the simulation domain to describe compensation. We discuss the conceptual shortcomings of the frequently used root water uptake model of Feddes and suggest a simple but effective improvement of the model. The improved model parametrizes water stress in wet soil by a reduction scheme formulated as a function of air content, while water stress due to low soil water potential is described by the original approach of Feddes. The improved model is physically more consistent than Feddes' model because water uptake in wet soil is limited by aeration, which is a function of water content. The suggested modification is particularly relevant for simulations in heterogeneous soils, because stress parameters are uniquely defined for the entire simulation domain, irrespective of soil texture. Numerical simulations of water flow and root water uptake in homogeneous and stochastic heterogeneous soils illustrate the effect of the new model on root water uptake and actual transpiration. For homogeneous fine-textured soils, root water uptake never achieves its potential rate. In stochastic heterogeneous soil, water uptake is more pronounced at the interfaces between fine and coarse regions, which has potential implications for plant growth, nutrient uptake and depletion.
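    A minimal sketch of such a combined reduction factor (the parameter values, the linear forms and the min-combination are our illustrative assumptions, not the authors' exact parametrization):

```python
def alpha_dry(h, h3=-4.0, h4=-80.0):
    """Feddes-type dry-side reduction: 1 above h3, falling linearly to 0 at
    the wilting-point pressure head h4 (heads negative, h4 < h3 < 0)."""
    if h >= h3:
        return 1.0
    if h <= h4:
        return 0.0
    return (h - h4) / (h3 - h4)

def alpha_wet(air_content, ac_crit=0.05):
    """Aeration-limited wet-side reduction formulated on air content:
    full uptake above a critical air content, linear below it."""
    return min(1.0, air_content / ac_crit)

def stress_reduction(h, air_content):
    """Combined factor: the more limiting of the two stresses applies."""
    return min(alpha_dry(h), alpha_wet(air_content))
```

    Because the wet-side limit depends on air content rather than pressure head, the same ac_crit applies across soil textures, which is the property the abstract highlights for heterogeneous soils.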

  14. Prediction of the binding affinities of peptides to class II MHC using a regularized thermodynamic model

    Directory of Open Access Journals (Sweden)

    Mittelmann Hans D

    2010-01-01

    Background: The binding of peptide fragments of extracellular peptides to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can utilize the limited experimental data to predict the binding affinities of peptides to class II MHC. Results: We have developed the Regularized Thermodynamic Average, or RTA, method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA was shown to achieve higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined. RTA also gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register of 17 out of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, made significant contributions of at least 50% of the total binding energy for approximately 20% of the peptides. Conclusions: The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/.
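    The thermodynamic average at the core of RTA is a Boltzmann-weighted sum over binding registers; a schematic illustration (our sketch, omitting RTA's regularization and scoring details):

```python
import math

RT = 0.592  # kcal/mol at ~298 K

def thermodynamic_binding_energy(register_energies):
    """Effective binding free energy over all registers:
    G = -RT * ln(sum_i exp(-G_i / RT)); suboptimal registers contribute too."""
    return -RT * math.log(sum(math.exp(-g / RT) for g in register_energies))

def register_fractions(register_energies):
    """Boltzmann occupancy of each binding register."""
    weights = [math.exp(-g / RT) for g in register_energies]
    z = sum(weights)
    return [w / z for w in weights]
```

    With a single register the average reduces to that register's energy; adding near-optimal registers lowers the effective free energy, which is why ignoring them can misestimate affinity.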

  15. A New Form of Nondestructive Strength-Estimating Statistical Models Accounting for Uncertainty of Model and Aging Effect of Concrete

    International Nuclear Information System (INIS)

    Hong, Kee Jeung; Kim, Jee Sang

    2009-01-01

    As concrete ages, the surrounding environment is expected to have a growing influence on the concrete. As not all impacts of the environment can be considered in the strength-estimating model of a nondestructive concrete test, increasing concrete age leads to growing uncertainty in the strength-estimating model, and therefore the variance of the model error increases. It is necessary to include these impacts in the probability model of concrete strength obtained from nondestructive tests so as to build a more accurate reliability model for structural performance evaluation. This paper reviews and categorizes the existing strength-estimating statistical models for nondestructive concrete tests, and suggests a new form of strength-estimating statistical model to properly reflect the model uncertainty due to aging of the concrete. This new form of statistical model will lay the foundation for more accurate structural performance evaluation.

  16. The effects of drugs on human models of emotional processing: an account of antidepressant drug treatment.

    Science.gov (United States)

    Pringle, Abbie; Harmer, Catherine J

    2015-12-01

    Human models of emotional processing suggest that the direct effect of successful antidepressant drug treatment may be to modify biases in the processing of emotional information. Negative biases in emotional processing are documented in depression, and single or short-term dosing with conventional antidepressant drugs reverses these biases in depressed patients prior to any subjective change in mood. Antidepressant drug treatments also modulate emotional processing in healthy volunteers, which allows the consideration of the psychological effects of these drugs without the confound of changes in mood. As such, human models of emotional processing may prove to be useful for testing the efficacy of novel treatments and for matching treatments to individual patients or subgroups of patients.

  17. A communication model of shared decision making: accounting for cancer treatment decisions.

    Science.gov (United States)

    Siminoff, Laura A; Step, Mary M

    2005-07-01

    The authors present a communication model of shared decision making (CMSDM) that explicitly identifies the communication process as the vehicle for decision making in cancer treatment. In this view, decision making is necessarily a sociocommunicative process whereby people enter into a relationship, exchange information, establish preferences, and choose a course of action. The model derives from contemporary notions of behavioral decision making and ethical conceptions of the doctor-patient relationship. This article briefly reviews the theoretical approaches to decision making, notes deficiencies, and embeds a more socially based process into the dynamics of the physician-patient relationship, focusing on cancer treatment decisions. In the CMSDM, decisions depend on (a) antecedent factors that have potential to influence communication, (b) jointly constructed communication climate, and (c) treatment preferences established by the physician and the patient.

  18. Why does placing the question before an arithmetic word problem improve performance? A situation model account.

    Science.gov (United States)

    Thevenot, Catherine; Devidal, Michel; Barrouillet, Pierre; Fayol, Michel

    2007-01-01

    The aim of this paper is to investigate the controversial issue of the nature of the representation constructed by individuals to solve arithmetic word problems. More precisely, we consider the relevance of two different theories: the situation or mental model theory (Johnson-Laird, 1983; Reusser, 1989) and the schema theory (Kintsch & Greeno, 1985; Riley, Greeno, & Heller, 1983). Fourth-graders who differed in their mathematical skills were presented with problems that varied in difficulty and with the question either before or after the text. We obtained the classic effect of the position of the question, with better performance when the question was presented prior to the text. In addition, this effect was more marked in the case of children who had poorer mathematical skills and in the case of more difficult problems. We argue that this pattern of results is compatible only with the situation or mental model theory, and not with the schema theory.

  19. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

    Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from the temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect these data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
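    Refraction enters the standard sunrise/set condition through the Sun's altitude at rising, conventionally -0.833° (about 34' of refraction plus 16' of solar semidiameter); a sketch of the geometry (ours, not the authors' model):

```python
import math

def sunrise_hour_angle(lat_deg, decl_deg, refr_deg=0.567, semidiam_deg=0.267):
    """Hour angle (deg) at which the Sun's upper limb touches the horizon.
    refr_deg is the refraction term, the part that varies with temperature,
    pressure, humidity and aerosols; decl_deg is the solar declination."""
    h0 = math.radians(-(refr_deg + semidiam_deg))  # standard altitude
    lat, decl = math.radians(lat_deg), math.radians(decl_deg)
    cos_h = (math.sin(h0) - math.sin(lat) * math.sin(decl)) / (
        math.cos(lat) * math.cos(decl))
    if not -1.0 <= cos_h <= 1.0:
        return None  # polar day or polar night: no rise/set at this altitude
    return math.degrees(math.acos(cos_h))
```

    One degree of hour angle corresponds to about four minutes of clock time, so at high latitudes, where cos_h sits near ±1, small refraction changes translate into large timing shifts.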

  20. First results of GERDA Phase II and consistency with background models

    Science.gov (United States)

    Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Di Marco, N.; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gooch, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hakenmüller, J.; Hegai, A.; Heisel, M.; Hemmer, S.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Kish, A.; Klimenko, A.; Kneißl, R.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Miloradovic, M.; Mingazheva, R.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salamida, F.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schulz, O.; Schütz, A.-K.; Schwingenheuer, B.; Selivanenko, O.; Shevzik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Wegmann, A.; Wester, T.; Wiesinger, C.; Wojcik, M.; Yanovich, E.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.

    2017-01-01

    GERDA (GERmanium Detector Array) is an experiment searching for neutrinoless double beta decay (0νββ) of 76Ge, located at the Laboratori Nazionali del Gran Sasso of INFN (Italy). GERDA operates bare high purity germanium detectors submersed in liquid argon (LAr). Phase II of data-taking started in Dec 2015 and is currently ongoing. In Phase II, 35 kg of germanium detectors enriched in 76Ge, including thirty newly produced Broad Energy Germanium (BEGe) detectors, are operated to reach an exposure of 100 kg·yr within about 3 years of data taking. The design goal of Phase II is to reduce the background by one order of magnitude, reaching a sensitivity for T1/2(0ν) of O(10^26) yr. To achieve the necessary background reduction, the setup was complemented with a LAr veto. Analysis of the Phase II background spectrum demonstrates consistency with the background models; furthermore, the 226Ra and 232Th contamination levels are consistent with screening results. In the first Phase II data release we found no hint of a 0νββ decay signal and place a limit on this process of T1/2(0ν) > 5.3·10^25 yr (90% C.L., sensitivity 4.0·10^25 yr). First results of GERDA Phase II are presented.

  1. Model application of Murabahah financing acknowledgement statement of Sharia accounting standard No 59 Year 2002

    Science.gov (United States)

    Muda, Iskandar; Panjaitan, Rohdearni; Erlina; Ginting, Syafruddin; Maksum, Azhar; Abubakar

    2018-03-01

    The purpose of this research is to observe a murabahah financing implementation model. Observations were made at one of the sharia banks going public in Indonesia. Implementation takes the form of financing granted with appropriate facilities and a maximum financing amount; the financing provided should therefore be adjusted to the type, business conditions and business plan of the prospective mudharib. If the financing provided is too low, the mudharib's requirements are not met, the target is not reached and the financing may not be repaid.

  2. An extended heterogeneous car-following model accounting for anticipation driving behavior and mixed maximum speeds

    Science.gov (United States)

    Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia

    2018-02-01

    The optimal driving speeds of different vehicles may differ for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed vmax is an important parameter determining the optimal driving speed: a vehicle with a higher maximum speed is more willing to drive faster than one with a lower maximum speed in a similar situation. By incorporating the anticipation driving behavior of relative velocity and mixed maximum speeds of different percentages into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the interplay between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results both demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, and that increasing the lowest value among the mixed maximum speeds results in more instability, whereas increasing the value or proportion of the part that already has a higher maximum speed affects stability differently at high and low traffic densities.
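    The model family described can be sketched with a Bando-type optimal velocity function in which vmax varies across vehicle classes, plus a relative-velocity anticipation term (the tanh form and all coefficients are illustrative assumptions, not the paper's calibrated values):

```python
import math

def optimal_velocity(headway, vmax, hc=4.0):
    """Bando-type OV function; vmax differs between vehicle classes, so two
    vehicles at the same headway can have different optimal speeds."""
    return (vmax / 2.0) * (math.tanh(headway - hc) + math.tanh(hc))

def acceleration(v, headway, dv, vmax, a=1.0, lam=0.3):
    """Extended OV update: relax toward the optimal velocity with sensitivity
    a, plus an anticipation term on the relative velocity dv (leader minus
    follower), which is the stabilizing effect the abstract discusses."""
    return a * (optimal_velocity(headway, vmax) - v) + lam * dv
```

    At equilibrium (v equal to the optimal velocity, dv = 0) the acceleration vanishes; increasing lam strengthens the anticipation effect that the stability analysis concerns.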

  3. Modeling the distribution of Mg II absorbers around galaxies using background galaxies and quasars

    Energy Technology Data Exchange (ETDEWEB)

    Bordoloi, R.; Lilly, S. J. [Institute for Astronomy, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich (Switzerland); Kacprzak, G. G. [Swinburne University of Technology, Victoria 3122 (Australia); Churchill, C. W., E-mail: rongmonb@phys.ethz.ch [New Mexico State University, Las Cruces, NM 88003 (United States)

    2014-04-01

    We present joint constraints on the distribution of Mg II absorption around high redshift galaxies obtained by combining two orthogonal probes, the integrated Mg II absorption seen in stacked background galaxy spectra and the distribution of parent galaxies of individual strong Mg II systems as seen in the spectra of background quasars. We present a suite of models that can be used to predict, for different two- and three-dimensional distributions, how the projected Mg II absorption will depend on a galaxy's apparent inclination, the impact parameter b and the azimuthal angle between the projected vector to the line of sight and the projected minor axis. In general, we find that variations in the absorption strength with azimuthal angles provide much stronger constraints on the intrinsic geometry of the Mg II absorption than the dependence on the inclination of the galaxies. In addition to the clear azimuthal dependence in the integrated Mg II absorption that we reported earlier in Bordoloi et al., we show that strong equivalent width Mg II absorbers (Wr(2796) ≥ 0.3 Å) are also asymmetrically distributed in azimuth around their host galaxies: 72% of the absorbers in Kacprzak et al., and 100% of the close-in absorbers within 35 kpc of the center of their host galaxies, are located within 50° of the host galaxy's projected semi-minor axis. It is shown that either composite models consisting of a simple bipolar component plus a spherical or disk component, or a single highly softened bipolar distribution, can well represent the azimuthal dependencies observed in both the stacked spectrum and quasar absorption-line data sets within 40 kpc. Simultaneously fitting both data sets, we find that in the composite model the bipolar cone has an opening angle of ∼100° (i.e., confined to within 50° of the disk axis) and contains about two-thirds of the total Mg II absorption in the system. The single softened cone model has an exponential fall off with

  4. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    NARCIS (Netherlands)

    Sumarga, E.

    2015-01-01

    Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts,

  5. Tree biomass in the Swiss landscape: nationwide modelling for improved accounting for forest and non-forest trees.

    Science.gov (United States)

    Price, B; Gomez, A; Mathys, L; Gardi, O; Schellenberger, A; Ginzler, C; Thürig, E

    2017-03-01

    Trees outside forest (TOF) can perform a variety of social, economic and ecological functions including carbon sequestration. However, detailed quantification of tree biomass is usually limited to forest areas. Taking advantage of structural information available from stereo aerial imagery and airborne laser scanning (ALS), this research models tree biomass using national forest inventory data and linear least-square regression and applies the model both inside and outside of forest to create a nationwide model for tree biomass (above ground and below ground). Validation of the tree biomass model against TOF data within settlement areas shows relatively low model performance (R2 of 0.44) but still a considerable improvement on current biomass estimates used for greenhouse gas inventory and carbon accounting. We demonstrate an efficient and easily implementable approach to modelling tree biomass across a large heterogeneous nationwide area. The model offers significant opportunity for improved estimates on land use combination categories (CC) where tree biomass has either not been included or only roughly estimated until now. The ALS biomass model also offers the advantage of providing greater spatial resolution and greater within-CC spatial variability compared to the current nationwide estimates.

  6. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    Science.gov (United States)

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6 μm in diameter) mounted in a sample with RI variation. In the investigated study, improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used over restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with a 42% improved accuracy over the current PSF model prediction.

  7. Covariance-based synaptic plasticity in an attractor network model accounts for fast adaptation in free operant learning.

    Science.gov (United States)

    Neiman, Tal; Loewenstein, Yonatan

    2013-01-23

    In free operant experiments, subjects alternate at will between targets that yield rewards stochastically. Behavior in these experiments is typically characterized by (1) an exponential distribution of stay durations, (2) matching of the relative time spent at a target to its relative share of the total number of rewards, and (3) adaptation after a change in the reward rates that can be very fast. The neural mechanism underlying these regularities is largely unknown. Moreover, current decision-making neural network models typically aim at explaining behavior in discrete-time experiments in which a single decision is made once in every trial, making these models hard to extend to the more natural case of free operant decisions. Here we show that a model based on attractor dynamics, in which transitions are induced by noise and preference is formed via covariance-based synaptic plasticity, can account for the characteristics of behavior in free operant experiments. We compare a specific instance of such a model, in which two recurrently excited populations of neurons compete for higher activity, to the behavior of rats responding on two levers for rewarding brain stimulation on a concurrent variable interval reward schedule (Gallistel et al., 2001). We show that the model is consistent with the rats' behavior, and in particular, with the observed fast adaptation to matching behavior. Further, we show that the neural model can be reduced to a behavioral model, and we use this model to deduce a novel "conservation law," which is consistent with the behavior of the rats.
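
    Two of the behavioral regularities listed in the record above (exponential stay durations and matching of relative time to relative reward share) can be illustrated with a minimal behavioral-level simulation. The leaving rates and reward shares below are assumed for illustration and are not taken from the rat data or the authors' attractor network:

    ```python
    import random

    random.seed(1)

    # Behavioral-level sketch (not the authors' network model): an agent
    # leaves each target at a constant hazard, so stay durations are
    # exponential (regularity 1); mean stays are set proportional to the
    # assumed reward shares, so time allocation matches them (regularity 2).
    reward_share = {"A": 0.7, "B": 0.3}                         # assumed
    mean_stay = {k: 10.0 * v for k, v in reward_share.items()}  # seconds

    time_at = {"A": 0.0, "B": 0.0}
    target = "A"
    for _ in range(20000):                  # alternate between targets at will
        stay = random.expovariate(1.0 / mean_stay[target])
        time_at[target] += stay
        target = "B" if target == "A" else "A"

    frac_A = time_at["A"] / sum(time_at.values())
    print(f"fraction of time at A: {frac_A:.3f} (reward share 0.7)")
    ```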

  8. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards; Accountability e regime de nao proliferacao nuclear: uma avaliacao do modelo de vigilancia mutua brasileiro-argentina de salvaguardas nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Xavier, Roberto Salles

    2014-08-01

    This research examines accountability regimes, global governance organizations, and the institutional arrangements of global governance for nuclear non-proliferation and for the Brazilian-Argentine system of mutual vigilance over nuclear safeguards. Its starting point is the importance of the institutional model of global governance for the effective control of nuclear weapons proliferation. In this context, the research investigates how the current international non-proliferation arrangements are structured and how the Brazilian-Argentine mutual vigilance model of nuclear safeguards performs with respect to the accountability regimes of global governance. To this end, the current literature was surveyed along three theoretical dimensions: accountability, global governance, and global governance organizations. The research method was a case study, with content analysis as the data treatment technique. The results made it possible to establish an evaluation model based on accountability mechanisms; to assess how the Brazilian-Argentine mutual vigilance model of nuclear safeguards behaves with respect to the proposed accountability regime; and to measure the degree to which regional arrangements that work with systems of global governance can strengthen these international systems. (author)

  9. Biosphere modelling for safety assessment of geological disposal taking account of denudation of contaminated soils. Research document

    International Nuclear Information System (INIS)

    Kato, Tomoko

    2003-03-01

    Biosphere models for the safety assessment of geological disposal have been developed on the assumption that repository-derived radionuclides reach the surface environment via groundwater. In this modelling, rivers, deep wells and the marine environment have been considered as geosphere-biosphere interfaces (GBIs), and some Japanese-specific ''reference biospheres'' have been developed using an approach consistent with the BIOMOVS II/BIOMASS Reference Biosphere Methodology. In this study, it is assumed that repository-derived radionuclides would reach the surface environment in solid phase through uplift and erosion of contaminated soil and sediment. The radionuclides entering the surface environment by these processes could be distributed between solid and liquid phases and could spread within the biosphere via both phases. Based on these concepts, a biosphere model that considers the variably saturated zone under surface soil (VSZ) as a GBI was developed for calculating the flux-to-dose conversion factors of three exposure groups (farming, freshwater fishing, marine fishing) based on the Reference Biosphere Methodology. The flux-to-dose conversion factors for the farming exposure group were the highest, and ''inhalation of dust'', ''external irradiation from soil'' and ''ingestion of soil'' were the dominant exposure pathways for most of the radionuclides considered in this model. The flux-to-dose conversion factors calculated by the biosphere model in this study cannot be compared with those calculated by the biosphere models developed in previous studies, because the migration processes considered when the radionuclides enter the surface environment through the aquifer differ among the models; i.e., the previous biosphere models assumed that repository-derived radionuclides entered GBIs such as rivers, deep wells and the sea via groundwater without dilution or retardation in the aquifer. Consequently, the migration of

  10. Modelling and experimental validation for off-design performance of the helical heat exchanger with LMTD correction taken into account

    Energy Technology Data Exchange (ETDEWEB)

    Phu, Nguyen Minh; Trinh, Nguyen Thi Minh [Vietnam National University, Ho Chi Minh City (Viet Nam)

    2016-07-15

    Today the helical coil heat exchanger is widely employed due to its dominant advantages. In this study, a mathematical model was established to predict the off-design performance of the helical heat exchanger. The model was based on the LMTD and ε-NTU methods, where an LMTD correction factor was taken into account to increase accuracy. An experimental apparatus was set up to validate the model. Results showed that the errors of thermal duty, outlet hot fluid temperature, outlet cold fluid temperature, shell-side pressure drop, and tube-side pressure drop were respectively ±5%, ±1%, ±1%, ±5% and ±2%. Diagrams of dimensionless operating parameters and a regression function were also presented as design maps, a fast calculator for use in the design and operation of the exchanger. The study is expected to be a good tool for estimating off-design conditions of single-phase helical heat exchangers.
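
    The corrected-LMTD rating calculation at the core of such a model can be sketched as Q = U·A·F·ΔT_lm. The temperatures, U, A and the correction factor F below are illustrative assumptions, not values from the paper:

    ```python
    import math

    def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
        """Log-mean temperature difference for a counter-flow arrangement."""
        dt1 = t_hot_in - t_cold_out
        dt2 = t_hot_out - t_cold_in
        if math.isclose(dt1, dt2):
            return dt1
        return (dt1 - dt2) / math.log(dt1 / dt2)

    # Illustrative numbers (not from the paper): hot 90->60 C, cold 20->45 C.
    dt_lm = lmtd(90, 60, 20, 45)

    U = 850.0   # overall heat-transfer coefficient, W/(m^2 K) -- assumed
    A = 2.0     # heat-transfer area, m^2 -- assumed
    F = 0.93    # LMTD correction factor for the geometry -- assumed

    Q = U * A * F * dt_lm  # thermal duty in W
    print(f"LMTD = {dt_lm:.1f} K, duty = {Q/1000:.1f} kW")
    ```

    Without the correction factor (F = 1), the duty would be overestimated; the paper's contribution is precisely to account for F when predicting off-design operating points.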

  11. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Science.gov (United States)

    Liang, Peixin; Chai, Feng; Bi, Yunlong; Pei, Yulong; Cheng, Shukang

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization.

  12. What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman

    2018-01-01

    Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken from Mount Wilson Observatory in California, the University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data using their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.
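
    The "standard prediction" that such calculators vary can be sketched from the usual rise/set convention: the Sun's center is taken to be at altitude −50′ (34′ of standard refraction plus 16′ of solar semidiameter). This is a textbook baseline, not the paper's interchangeable refraction models:

    ```python
    import math

    def sunset_hour_angle(lat_deg, decl_deg, h0_deg=-0.8333):
        """Hour angle of sunset for a given latitude and solar declination.

        h0 is the altitude of the Sun's center at rise/set: about -50 arcmin
        (34' standard refraction + 16' semidiameter) in the standard model.
        """
        lat, dec, h0 = map(math.radians, (lat_deg, decl_deg, h0_deg))
        cos_h = (math.sin(h0) - math.sin(lat) * math.sin(dec)) / (
            math.cos(lat) * math.cos(dec))
        return math.degrees(math.acos(cos_h))  # degrees; 15 deg = 1 hour

    # Equinox (declination ~0) at 45 N: geometric sunset would be H = 90 deg.
    H = sunset_hour_angle(45.0, 0.0)
    extra_minutes = (H - 90.0) / 15.0 * 60.0  # refraction+semidiameter delay
    print(f"H = {H:.2f} deg, sunset delayed ~{extra_minutes:.1f} min")
    ```

    Swapping the fixed 34′ refraction term for a model driven by temperature profile, pressure, humidity and aerosols is exactly the kind of substitution the calculator described above makes.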

  13. Optimization model of energy mix taking into account the environmental impact

    International Nuclear Information System (INIS)

    Gruenwald, O.; Oprea, D.

    2012-01-01

    At present, the energy system of the Czech Republic faces important decisions regarding limited fossil resources, greater efficiency in the production of electrical energy, and the reduction of pollutant emission levels. These problems can be addressed only by formulating and implementing an energy mix that meets these conditions: rational, reliable, sustainable and competitive. The aim of this article is to find a new way of determining an optimal mix for the energy system of the Czech Republic. To achieve this aim, a linear optimization model comprising several economic, environmental and technical aspects is applied. (Authors)
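
    A linear optimization of an energy mix with an environmental constraint can be sketched as a small linear program: minimize generation cost subject to meeting demand, capacity limits, and an emission ceiling. All figures below are invented for illustration and are not the article's data:

    ```python
    from scipy.optimize import linprog

    # Illustrative data (not from the article): three sources with
    # cost (EUR/MWh) and CO2 intensity (t/MWh); capacities in TWh.
    cost      = [30.0, 55.0, 70.0]      # coal, nuclear, renewables -- assumed
    emissions = [0.9, 0.0, 0.0]         # t CO2 per MWh -- assumed
    capacity  = [50.0, 30.0, 25.0]      # TWh upper bounds -- assumed
    demand    = 75.0                    # TWh that must be supplied
    co2_cap   = 20.0                    # Mt CO2 allowed

    res = linprog(
        c=cost,
        A_ub=[emissions], b_ub=[co2_cap],           # emission ceiling
        A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],      # meet demand exactly
        bounds=list(zip([0.0] * 3, capacity)),
    )
    print("optimal mix (TWh):", [round(x, 2) for x in res.x])
    ```

    With these numbers the cheapest source is used up to the point where the CO2 cap binds, and the remainder of demand is filled by the next-cheapest sources, which is the qualitative behavior such a mix model is built to expose.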

  14. Making Collaborative Innovation Accountable

    DEFF Research Database (Denmark)

    Sørensen, Eva

    The public sector is increasingly expected to be innovative, but the price of a more innovative public sector might be that it becomes difficult to hold public authorities to account for their actions. The article explores the tensions between innovative and accountable governance, describes... the foundation for these tensions in different accountability models, and suggests directions to take in analyzing the accountability of collaborative innovation processes...

  15. Modelling the Galactic bar using OGLE-II red clump giant stars

    NARCIS (Netherlands)

    Rattenbury, Nicholas J.; Mao, Shude; Sumi, Takahiro; Smith, Martin C.

    2007-01-01

    Red clump giant (RCG) stars can be used as distance indicators to trace the mass distribution of the Galactic bar. We use RCG stars from 44 bulge fields from the OGLE-II microlensing collaboration data base to constrain analytic triaxial models for the Galactic bar. We find the bar major-axis is

  16. Mathematical models and illustrative results for the RINGBEARER II monopole/dipole beam-propagation code

    International Nuclear Information System (INIS)

    Chambers, F.W.; Masamitsu, J.A.; Lee, E.P.

    1982-01-01

    RINGBEARER II is a linearized monopole/dipole particle simulation code for studying intense relativistic electron beam propagation in gas. In this report the mathematical models utilized for beam particle dynamics and pinch field computation are delineated. Difficulties encountered in code operations and some remedies are discussed. Sample output is presented detailing the diagnostics and the methods of display and analysis utilized.

  17. A Parameter Study for Modeling Mg ii h and k Emission during Solar Flares

    Energy Technology Data Exchange (ETDEWEB)

    Rubio da Costa, Fatima [Department of Physics, Stanford University, Stanford, CA 94305 (United States); Kleint, Lucia, E-mail: frubio@stanford.edu [University of Applied Sciences and Arts Northwestern Switzerland, 5210, Windisch (Switzerland)

    2017-06-20

    Solar flares show highly unusual spectra in which the thermodynamic conditions of the solar atmosphere are encoded. Current models are unable to fully reproduce the spectroscopic flare observations, especially the single-peaked spectral profiles of the Mg ii h and k lines. We aim to understand the formation of the chromospheric and optically thick Mg ii h and k lines in flares through radiative transfer calculations. We take a flare atmosphere obtained from a simulation with the radiative hydrodynamic code RADYN as input for a radiative transfer modeling with the RH code. By iteratively changing this model atmosphere and varying thermodynamic parameters such as temperature, electron density, and velocity, we study their effects on the emergent intensity spectra. We reproduce the typical single-peaked Mg ii h and k flare spectral shape and approximate the intensity ratios to the subordinate Mg ii lines by increasing either densities, temperatures, or velocities at the line core formation height range. Additionally, by combining unresolved upflows and downflows up to ∼250 km s⁻¹ within one resolution element, we reproduce the widely broadened line wings. While we cannot unambiguously determine which mechanism dominates in flares, future modeling efforts should investigate unresolved components, additional heat dissipation, larger velocities, and higher densities and combine the analysis of multiple spectral lines.

  18. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    Science.gov (United States)

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
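
    The detection issue the record above describes can be illustrated with a small simulation: even when a species occupies a site, a finite number of surveys with imperfect detection misses some occupied sites, and congeneric presence lowering detection probability worsens this. The parameter values below are assumed for illustration and are not estimates from the study:

    ```python
    import random

    random.seed(42)

    # Simulation sketch (assumed parameters, not the study's estimates):
    # barred owl presence lowers the per-survey detection probability
    # of spotted owls at jointly occupied sites.
    psi_spotted, psi_barred = 0.6, 0.4      # occupancy probabilities
    p_alone, p_with_other = 0.8, 0.5        # per-survey detection probs
    n_sites, n_surveys = 5000, 3

    detected_spotted = spotted_sites = 0
    for _ in range(n_sites):
        spotted = random.random() < psi_spotted
        barred = random.random() < psi_barred
        if not spotted:
            continue
        spotted_sites += 1
        p = p_with_other if barred else p_alone
        if any(random.random() < p for _ in range(n_surveys)):
            detected_spotted += 1

    print(f"spotted owls detected at {detected_spotted/spotted_sites:.2%} "
          f"of occupied sites over {n_surveys} surveys")
    ```

    Two-species occupancy models estimate psi and the detection probabilities jointly from such detection histories, rather than treating non-detection as absence.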

  19. Singing with yourself: evidence for an inverse modeling account of poor-pitch singing.

    Science.gov (United States)

    Pfordresher, Peter Q; Mantell, James T

    2014-05-01

    Singing is a ubiquitous and culturally significant activity that humans engage in from an early age. Nevertheless, some individuals - termed poor-pitch singers - are unable to match target pitches within a musical semitone while singing. In the experiments reported here, we tested whether poor-pitch singing deficits would be reduced when individuals imitate recordings of themselves as opposed to recordings of other individuals. This prediction was based on the hypothesis that poor-pitch singers have not developed an abstract "inverse model" of the auditory-vocal system and instead must rely on sensorimotor associations that they have experienced directly, which is true for sequences an individual has already produced. In three experiments, participants, both accurate and poor-pitch singers, were better able to imitate sung recordings of themselves than sung recordings of other singers. However, this self-advantage was enhanced for poor-pitch singers. These effects were not a byproduct of self-recognition (Experiment 1), vocal timbre (Experiment 2), or the absolute pitch of target recordings (i.e., the advantage remains when recordings are transposed, Experiment 3). Results support the conceptualization of poor-pitch singing as an imitative deficit resulting from a deficient inverse model of the auditory-vocal system with respect to pitch. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Accounting emergy flows to determine the best production model of a coffee plantation

    International Nuclear Information System (INIS)

    Giannetti, B.F.; Ogura, Y.; Bonilla, S.H.; Almeida, C.M.V.B.

    2011-01-01

    Cerrado, a savannah region, is Brazil's second largest ecosystem after the Amazon rainforest and is also threatened with imminent destruction. In the present study emergy synthesis was applied to assess the environmental performance of a coffee farm located in Coromandel, Minas Gerais, in the Brazilian Cerrado. The effects of land use on sustainability were evaluated by comparing the emergy indices along ten years in order to assess the energy flows driving the production process, and to determine the best production model combining productivity and environmental performance. The emergy indices are presented as a function of the annual crop. Results show that Santo Inacio farm should produce approximately 20 bags of green coffee per hectare to accomplish its best performance regarding both the production efficiency and the environment. The evaluation of coffee trade complements those obtained by contrasting productivity and environmental performance, and despite variations in market prices, the optimum interval for Santo Inacio's farm is between 10 and 25 coffee bags/ha. - Highlights: → Emergy synthesis is used to assess the environmental performance of a coffee farm in Brazil. → The effects of land use on sustainability were evaluated along ten years. → The energy flows driving the production process were assessed. → The best production model combining productivity and environmental performance was determined.
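
    The emergy indices compared in such a study are standard ratios of the renewable (R), non-renewable (N) and purchased (F) emergy flows; the flow values below are illustrative only, not the farm's accounts:

    ```python
    # Standard emergy indices from renewable (R), non-renewable (N) and
    # purchased (F) emergy inflows, in sej/yr. Numbers are illustrative,
    # not Santo Inacio farm's actual accounts.
    R, N, F = 3.0e15, 1.0e15, 2.0e15

    Y = R + N + F              # total emergy yield
    EYR = Y / F                # emergy yield ratio
    ELR = (N + F) / R          # environmental loading ratio
    ESI = EYR / ELR            # emergy sustainability index

    print(f"EYR = {EYR:.2f}, ELR = {ELR:.2f}, ESI = {ESI:.2f}")
    ```

    Tracking these ratios year by year, as the study does over a decade, shows how shifts in purchased inputs versus renewable inputs move the farm toward or away from its best-performing production level.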

  1. Accounting emergy flows to determine the best production model of a coffee plantation

    Energy Technology Data Exchange (ETDEWEB)

    Giannetti, B.F.; Ogura, Y.; Bonilla, S.H. [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil); Almeida, C.M.V.B., E-mail: cmvbag@terra.com.br [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil)

    2011-11-15

    Cerrado, a savannah region, is Brazil's second largest ecosystem after the Amazon rainforest and is also threatened with imminent destruction. In the present study emergy synthesis was applied to assess the environmental performance of a coffee farm located in Coromandel, Minas Gerais, in the Brazilian Cerrado. The effects of land use on sustainability were evaluated by comparing the emergy indices along ten years in order to assess the energy flows driving the production process, and to determine the best production model combining productivity and environmental performance. The emergy indices are presented as a function of the annual crop. Results show that Santo Inacio farm should produce approximately 20 bags of green coffee per hectare to accomplish its best performance regarding both the production efficiency and the environment. The evaluation of coffee trade complements those obtained by contrasting productivity and environmental performance, and despite variations in market prices, the optimum interval for Santo Inacio's farm is between 10 and 25 coffee bags/ha. - Highlights: > Emergy synthesis is used to assess the environmental performance of a coffee farm in Brazil. > The effects of land use on sustainability were evaluated along ten years. > The energy flows driving the production process were assessed. > The best production model combining productivity and environmental performance was determined.

  2. The Influence of Feedback on Task-Switching Performance: A Drift Diffusion Modeling Account

    Directory of Open Access Journals (Sweden)

    Russell Cohen Hoffing

    2018-02-01

    Task-switching is an important cognitive skill that facilitates our ability to choose appropriate behavior in a varied and changing environment. Task-switching training studies have sought to improve this ability by practicing switching between multiple tasks. However, an efficacious training paradigm has been difficult to develop in part due to findings that small differences in task parameters influence switching behavior in a non-trivial manner. Here, for the first time we employ the Drift Diffusion Model (DDM) to understand the influence of feedback on task-switching and investigate how drift diffusion parameters change over the course of task switch training. We trained 316 participants on a simple task where they alternated sorting stimuli by color or by shape. Feedback differed in six different ways between subjects groups, ranging from No Feedback (NFB) to a variety of manipulations addressing trial-wise vs. Block Feedback (BFB), rewards vs. punishments, payment bonuses and different payouts depending upon the trial type (switch/non-switch). While overall performance was found to be affected by feedback, no effect of feedback was found on task-switching learning. Drift Diffusion Modeling revealed that the reductions in reaction time (RT) switch cost over the course of training were driven by a continually decreasing decision boundary. Furthermore, feedback effects on RT switch cost were also driven by differences in decision boundary, but not in drift rate. These results reveal that participants systematically modified their task-switching performance without yielding an overall gain in performance.

  3. The Influence of Feedback on Task-Switching Performance: A Drift Diffusion Modeling Account.

    Science.gov (United States)

    Cohen Hoffing, Russell; Karvelis, Povilas; Rupprechter, Samuel; Seriès, Peggy; Seitz, Aaron R

    2018-01-01

    Task-switching is an important cognitive skill that facilitates our ability to choose appropriate behavior in a varied and changing environment. Task-switching training studies have sought to improve this ability by practicing switching between multiple tasks. However, an efficacious training paradigm has been difficult to develop in part due to findings that small differences in task parameters influence switching behavior in a non-trivial manner. Here, for the first time we employ the Drift Diffusion Model (DDM) to understand the influence of feedback on task-switching and investigate how drift diffusion parameters change over the course of task switch training. We trained 316 participants on a simple task where they alternated sorting stimuli by color or by shape. Feedback differed in six different ways between subjects groups, ranging from No Feedback (NFB) to a variety of manipulations addressing trial-wise vs. Block Feedback (BFB), rewards vs. punishments, payment bonuses and different payouts depending upon the trial type (switch/non-switch). While overall performance was found to be affected by feedback, no effect of feedback was found on task-switching learning. Drift Diffusion Modeling revealed that the reductions in reaction time (RT) switch cost over the course of training were driven by a continually decreasing decision boundary. Furthermore, feedback effects on RT switch cost were also driven by differences in decision boundary, but not in drift rate. These results reveal that participants systematically modified their task-switching performance without yielding an overall gain in performance.
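
    The key DDM mechanism invoked in the record above, a lower decision boundary producing shorter decision times at a fixed drift rate, can be demonstrated with a minimal first-passage simulation. The drift and boundary values are illustrative assumptions, not fitted parameters from the study:

    ```python
    import random

    random.seed(7)

    def ddm_rt(drift, boundary, dt=0.001, noise=1.0):
        """First-passage time of a drift-diffusion process to +/- boundary."""
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
            t += dt
        return t

    # Illustrative parameters (assumed): same drift rate, lower boundary
    # "late" in training -- the mechanism the study invokes for shrinking
    # RT switch costs over the course of training.
    early = sum(ddm_rt(1.5, 1.2) for _ in range(300)) / 300
    late = sum(ddm_rt(1.5, 0.8) for _ in range(300)) / 300
    print(f"mean RT: boundary 1.2 -> {early:.3f}s, boundary 0.8 -> {late:.3f}s")
    ```

    Fitting the DDM to data inverts this logic: observed RT distributions and accuracies are used to recover boundary and drift separately, which is how the study attributes the training effect to the boundary rather than the drift rate.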

  4. One-dimensional model of oxygen transport impedance accounting for convection perpendicular to the electrode

    Energy Technology Data Exchange (ETDEWEB)

    Mainka, J. [Laboratorio Nacional de Computacao Cientifica (LNCC), CMC 6097, Av. Getulio Vargas 333, 25651-075 Petropolis, RJ, Caixa Postal 95113 (Brazil); Maranzana, G.; Thomas, A.; Dillet, J.; Didierjean, S.; Lottin, O. [Laboratoire d' Energetique et de Mecanique Theorique et Appliquee (LEMTA), Universite de Lorraine, 2, avenue de la Foret de Haye, 54504 Vandoeuvre-les-Nancy (France); LEMTA, CNRS, 2, avenue de la Foret de Haye, 54504 Vandoeuvre-les-Nancy (France)

    2012-10-15

    A one-dimensional (1D) model of oxygen transport in the diffusion media of proton exchange membrane fuel cells (PEMFC) is presented, which considers convection perpendicular to the electrode in addition to diffusion. The resulting analytical expression of the convecto-diffusive impedance is obtained using a convection-diffusion equation instead of a diffusion equation in the case of classical Warburg impedance. The main hypothesis of the model is that the convective flux is generated by the evacuation of water produced at the cathode which flows through the porous media in vapor phase. This allows the expression of the convective flux velocity as a function of the current density and of the water transport coefficient α (the fraction of water being evacuated at the cathode outlet). The resulting 1D oxygen transport impedance neglects processes occurring in the direction parallel to the electrode that could have a significant impact on the cell impedance, like gas consumption or concentration oscillations induced by the measuring signal. However, it enables us to estimate the impact of convection perpendicular to the electrode on PEMFC impedance spectra and to determine in which conditions the approximation of a purely diffusive oxygen transport is valid. Experimental observations confirm the numerical results. (Copyright © 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
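
    The purely diffusive baseline mentioned in the record above is the classical finite-length (tanh) Warburg impedance, Z(ω) = R_d·tanh(√(jωτ))/√(jωτ), which the convecto-diffusive expression should recover when the convective velocity vanishes. The parameter values below are assumed for illustration:

    ```python
    import cmath

    def warburg_finite(omega, r_d, tau):
        """Classical finite-length (tanh) Warburg impedance -- the purely
        diffusive limit that a convecto-diffusive model reduces to when
        the convective velocity goes to zero."""
        s = cmath.sqrt(1j * omega * tau)
        return r_d * cmath.tanh(s) / s

    r_d = 0.2    # diffusion resistance, ohm -- assumed
    tau = 0.05   # diffusion time constant, s -- assumed
    for f in (0.1, 1.0, 10.0, 100.0):
        z = warburg_finite(2 * cmath.pi * f, r_d, tau)
        print(f"{f:7.1f} Hz: Z = {z.real*1000:7.2f} + {z.imag*1000:7.2f}j mOhm")
    ```

    At low frequency Z approaches the real diffusion resistance R_d, while at high frequency it shows the characteristic −45° Warburg phase; convection perpendicular to the electrode distorts this spectrum, which is what the paper's expression quantifies.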

  5. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    Energy Technology Data Exchange (ETDEWEB)

    Beckon, William N., E-mail: William_Beckon@fws.gov

    2016-07-15

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
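
    The lag-selection procedure the abstract describes, testing an array of candidate lags and keeping the one that best regresses exposure onto tissue concentration, can be sketched on synthetic data. The series, the true 12-step delay and the noise level below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic illustration of the lag-selection method (data are made up):
    # tissue concentration follows ambient water concentration with a true
    # delay of 12 time steps; we scan candidate lags and keep the best fit.
    n, true_lag, max_lag = 400, 12, 40
    water = np.cumsum(rng.normal(size=n + max_lag))   # ambient time series
    tissue = (water[max_lag - true_lag:max_lag - true_lag + n]
              + rng.normal(scale=0.5, size=n))

    def r2_for_lag(lag):
        x = water[max_lag - lag:max_lag - lag + n]    # exposure `lag` steps ago
        r = np.corrcoef(x, tissue)[0, 1]
        return r * r

    best_lag = max(range(max_lag + 1), key=r2_for_lag)
    print(f"best lag = {best_lag} steps, R^2 = {r2_for_lag(best_lag):.3f}")
    ```

    Choosing the lag that maximizes the regression fit is exactly the objective criterion the study applies before predicting tissue concentrations from ambient water concentrations.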

  6. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    International Nuclear Information System (INIS)

    Beckon, William N.

    2016-01-01

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).

  7. Where's the problem? Considering Laing and Esterson's account of schizophrenia, social models of disability, and extended mental disorder.

    Science.gov (United States)

    Cooper, Rachel

    2017-08-01

    In this article, I compare and evaluate R. D. Laing and A. Esterson's account of schizophrenia as developed in Sanity, Madness and the Family (1964), social models of disability, and accounts of extended mental disorder. These accounts claim that some putative disorders (schizophrenia, disability, certain mental disorders) should not be thought of as reflecting biological or psychological dysfunction within the afflicted individual, but instead as external problems (to be located in the family, or in the material and social environment). In this article, I consider the grounds on which such claims might be supported. I argue that problems should not be located within an individual putative patient in cases where there is some acceptable test environment in which there is no problem. A number of cases where such an argument can show that there is no internal disorder are discussed. I argue, however, that Laing and Esterson's argument, that schizophrenia is not located within the diagnosed patients, does not work. The problem with their argument is that they fail to show that the diagnosed women in their study function adequately in any environment.

  8. Modeling liquid-vapor equilibria with an equation of state taking into account dipolar interactions and association by hydrogen bonding

    International Nuclear Information System (INIS)

    Perfetti, E.

    2006-11-01

    Modelling fluid-rock interactions as well as mixing and unmixing phenomena in geological processes requires robust equations of state (EOS) applicable to systems containing water and gases over a broad range of temperatures and pressures. Cubic equations of state based on the Van der Waals theory (e.g. Soave-Redlich-Kwong or Peng-Robinson) allow simple modelling from the critical parameters of the studied fluid components. However, the accuracy of such equations becomes poor when water is a major component of the fluid, since neither association through hydrogen bonding nor dipolar interactions are accounted for. The Helmholtz energy of a fluid may be written as the sum of different energetic contributions by factorization of the partition function. The model developed in this thesis for pure H₂O and H₂S considers three contributions. The first represents the reference Van der Waals fluid, modelled by the SRK cubic EOS. The second accounts for association through hydrogen bonding and is modelled by a term derived from the Cubic Plus Association (CPA) theory. The third corresponds to dipolar interactions and is modelled by the Mean Spherical Approximation (MSA) theory. The resulting CPAMSA equation has six adjustable parameters, of which three represent physical terms whose values are close to their experimental counterparts. This equation reproduces the thermodynamic properties of pure water along the vapour-liquid equilibrium better than the classical CPA equation, and extrapolation to higher temperatures and pressures is satisfactory. Similarly, taking dipolar interactions into account together with the SRK cubic equation of state when calculating the molar volume of H₂S as a function of pressure and temperature results in a significant improvement compared to the SRK equation alone. Simple mixing rules between dipolar molecules are proposed to model the H₂O-H₂S
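The cubic reference term used as the baseline here has the standard Soave-Redlich-Kwong form, which needs only the critical temperature, critical pressure and acentric factor of the component. A minimal sketch of that textbook baseline (the thesis's CPAMSA association and dipolar terms are not reproduced):

```python
import math

# Standard Soave-Redlich-Kwong EOS: P = RT/(V - b) - a*alpha(T) / [V(V + b)]
R = 8.314462618  # gas constant, J/(mol K)

def srk_pressure(T, Vm, Tc, Pc, omega):
    """Pressure (Pa) at temperature T (K) and molar volume Vm (m^3/mol)."""
    a = 0.42748 * R**2 * Tc**2 / Pc          # attraction parameter
    b = 0.08664 * R * Tc / Pc                # covolume
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc))) ** 2
    return R * T / (Vm - b) - a * alpha / (Vm * (Vm + b))
```

For water one would use roughly Tc = 647.1 K, Pc = 22.064 MPa and ω = 0.344; at large molar volume the expression recovers the ideal-gas limit P ≈ RT/V, as any cubic EOS must.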

  9. Understanding variability of the Southern Ocean overturning circulation in CORE-II models

    Science.gov (United States)

    Downes, S. M.; Spence, P.; Hogg, A. M.

    2018-03-01

    The current generation of climate models exhibit a large spread in the steady-state and projected Southern Ocean upper and lower overturning circulation, with mechanisms for deep ocean variability remaining less well understood. Here, common Southern Ocean metrics in twelve models from the Coordinated Ocean-ice Reference Experiment Phase II (CORE-II) are assessed over a 60 year period. Specifically, stratification, surface buoyancy fluxes, and eddies are linked to the magnitude of the strengthening trend in the upper overturning circulation, and a decreasing trend in the lower overturning circulation across the CORE-II models. The models evolve similarly in the upper 1 km and the deep ocean, with an almost equivalent poleward intensification trend in the Southern Hemisphere westerly winds. However, the models differ substantially in their eddy parameterisation and surface buoyancy fluxes. In general, models with a larger heat-driven water mass transformation where deep waters upwell at the surface (∼55°S) transport warmer waters into intermediate depths, thus weakening the stratification in the upper 2 km. Models with a weak eddy induced overturning and a warm bias in the intermediate waters are more likely to exhibit larger increases in the upper overturning circulation, and more significant weakening of the lower overturning circulation. We find the opposite holds for a cool model bias in intermediate depths, combined with a more complex 3D eddy parameterisation that acts to reduce isopycnal slope. In summary, the Southern Ocean overturning circulation decadal trends in the coarse resolution CORE-II models are governed by biases in surface buoyancy fluxes and the ocean density field, and the configuration of the eddy parameterisation.

  10. Modelling the behaviour of uranium-series radionuclides in soils and plants taking into account seasonal variations in soil hydrology

    International Nuclear Information System (INIS)

    Pérez-Sánchez, D.; Thorne, M.C.

    2014-01-01

    In a previous paper, a mathematical model for the behaviour of ⁷⁹Se in soils and plants was described. Subsequently, a review has been published relating to the behaviour of ²³⁸U-series radionuclides in soils and plants. Here, we bring together those two strands of work to describe a new mathematical model of the behaviour of ²³⁸U-series radionuclides entering soils in solution and their uptake by plants. Initial studies with the model that are reported here demonstrate that it is a powerful tool for exploring the behaviour of this decay chain or subcomponents of it in soil-plant systems under different hydrological regimes. In particular, it permits studies of the degree to which secular equilibrium assumptions are appropriate when modelling this decay chain. Further studies will be undertaken and reported separately examining sensitivities of model results to input parameter values and also applying the model to sites contaminated with ²³⁸U-series radionuclides. - Highlights: • Kinetic model of radionuclide transport in soils and uptake by plants. • Takes soil hydrology and redox conditions into account. • Applicable to the whole U-238 chain, including Rn-222, Pb-210 and Po-210. • Demonstrates intra-season and inter-season variability on timescales up to thousands of years.
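The secular-equilibrium question raised above can be illustrated with the textbook Bateman solution for a two-member decay chain (an illustrative sketch, not the paper's soil-plant model): when the parent decays far more slowly than the daughter, the daughter's activity approaches the parent's.

```python
import math

# Two-member decay chain (parent -> daughter), starting from a pure parent
# source. With lam1 << lam2 (e.g. U-238 feeding Rn-222), the activity ratio
# tends to 1: secular equilibrium.

def daughter_atoms(N1_0, lam1, lam2, t):
    """Daughter atoms at time t (Bateman solution, pure parent at t = 0)."""
    return N1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

def activity_ratio(lam1, lam2, t, N1_0=1.0):
    """Daughter activity divided by parent activity at time t."""
    N1 = N1_0 * math.exp(-lam1 * t)
    N2 = daughter_atoms(N1_0, lam1, lam2, t)
    return (lam2 * N2) / (lam1 * N1)
```

With a Rn-222-like daughter (half-life about 3.8 days, λ ≈ 0.18 per day) fed by an effectively undecaying parent, the ratio is far below 1 after one day but within 1% of equilibrium after 40 days, roughly ten daughter half-lives.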

  11. Photoproduction of pions on nucleons in the chiral bag model taking into account recoil-nucleon motion effects

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Kanokov, Z.; Musakhanov, M.M.; Rakhimov, A.M.

    1989-01-01

    Pion production on a nucleon is studied in the chiral bag model (CBM). A CBM version is investigated in which the pions penetrate into the bag and interact with the quarks in a pseudovector way throughout the entire volume. Charged pion photoproduction amplitudes are found taking into account recoil-nucleon motion effects. Angular and energy distributions of charged pions, polarization of the recoil nucleon, and multipoles are calculated. The recoil effects are shown to give an additional contribution of the order of 10-20% to the static approximation. At a bag radius value R = 1 fm, the calculations are consistent with the experimental data.

  12. Collaborative Russian-US work in nuclear material protection, control and accounting at the Institute of Physics and Power Engineering. II. Extension to additional facilities

    International Nuclear Information System (INIS)

    Kuzin, V.V.; Pshakin, G.M.; Belov, A.P.

    1996-01-01

    During 1995, collaborative Russian-US nuclear material protection, control and accounting (MPC&A) tasks at the Institute of Physics and Power Engineering (IPPE) in Obninsk, Russia focused on improving the protection of nuclear materials at the BFS Fast Critical Facility. BFS has thousands of fuel disks containing highly enriched uranium and weapons-grade plutonium that are used to simulate the core configurations of experimental reactors in two critical assemblies. Completed tasks culminated in demonstrations of newly implemented equipment and methods that enhanced the MPC&A at BFS through computerized accounting, nondestructive inventory verification measurements, personnel identification and access control, physical inventory taking, physical protection, and video surveillance. The collaborative work is now being extended. The additional tasks encompass communications and tamper-indicating devices; new storage alternatives; and systemization of the MPC&A elements that are being implemented.

  13. Fluorescent chemosensor based on urea/thiourea moiety for sensing of Hg(II) ions in an aqueous medium with high sensitivity and selectivity: A comparative account on effect of molecular architecture on chemosensing

    Science.gov (United States)

    Mishra, Jayanti; Kaur, Harpreet; Ganguli, Ashok K.; Kaur, Navneet

    2018-06-01

    Mercury is a well-known heavy metal that is extremely poisonous to health but is still employed, in the form of mercury salts and organomercury compounds, in various industrial, anthropogenic and agricultural activities. Hence, its sensing in aqueous medium is of great interest in order to avoid its hazardous effects. In the present manuscript, four organic ligands bearing urea/thiourea linkages (1a, 1b, 2a and 2b) are synthesized by a three-step synthetic approach. The organic ligands were then used to develop organic nanoparticles (ONPs) by the reprecipitation method, which were further probed for their selective recognition behavior in an aqueous medium using fluorescence spectroscopy. The fluorescence emission profile of the ONPs is used as a tool for tracking the sensing behavior. The ONPs of 1b showed selective recognition of Hg(II) in aqueous medium, evidenced by the enhancement of fluorescence emission intensity upon complexation of the 1b ONPs with Hg(II), among several alkali, alkaline earth and transition metal ions, with a detection limit of the order of 0.84 μM. The ability of the proposed sensor to sense Hg(II) ions with high selectivity and sensitivity can be attributed to a photo-induced electron transfer (PET) "OFF" mechanism at λem = 390 nm. This study reveals the application of the proposed thiourea-based sensor for the selective recognition of Hg(II) ions in an aqueous medium.
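A detection limit of this kind is conventionally estimated as LOD = 3σ_blank / slope from a linear fluorescence calibration. The sketch below shows that standard convention only; the manuscript does not state its exact procedure, and all numbers in the example are made up.

```python
# Conventional 3-sigma detection limit from a linear calibration:
# LOD = 3 * (standard deviation of blank signal) / (calibration slope).

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def detection_limit(conc, intensity, blank_sd):
    """LOD in the same units as `conc` (e.g. uM), given calibration data."""
    slope, _ = linfit(conc, intensity)
    return 3.0 * blank_sd / slope
```

With a hypothetical calibration of slope 10 intensity units per μM and a blank standard deviation of 2 units, the formula gives an LOD of 0.6 μM.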

  14. A two-phase moisture transport model accounting for sorption hysteresis in layered porous building constructions

    DEFF Research Database (Denmark)

    Johannesson, Björn; Janz, Mårten

    2009-01-01

    Building constructions most commonly consist of layered porous materials, such as masonry on brick. The moisture distribution and its variations due to changes in the surrounding environment are of special interest in such layered constructions, since different materials adsorb different amounts of water. The model is developed by carefully examining the mass balance postulates for the two considered constituents together with appropriate and suitable constitutive assumptions. A test example is solved by using an implemented implicit finite element code which uses a modified Newton-Raphson scheme to tackle…
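The modified Newton-Raphson scheme mentioned above assembles the tangent once and reuses it in every iteration, trading quadratic convergence for cheaper steps. A one-dimensional sketch of that idea (illustrative only; the actual code applies it to the global finite element system):

```python
# Modified Newton-Raphson in one dimension: the tangent dfdu0 is computed
# once at the starting point and reused, instead of re-evaluating f'(u)
# every iteration as full Newton-Raphson would.

def modified_newton(f, dfdu0, u0, tol=1e-10, max_iter=200):
    u = u0
    for _ in range(max_iter):
        r = f(u)                 # residual
        if abs(r) < tol:
            return u
        u -= r / dfdu0           # correction with the frozen initial tangent
    raise RuntimeError("no convergence")
```

Convergence is linear rather than quadratic, but each step avoids reassembling (and refactorizing) the tangent, which is the point of the scheme in a finite element setting.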

  15. Simple model for taking into account the effects of plasma screening in thermonuclear reactions

    International Nuclear Information System (INIS)

    Shalybkov, D.A.; Yakovlev, D.G.

    1988-01-01

    In the Thomas-Fermi model of high-density matter, an analytic calculation is made of the factor by which the rate of thermonuclear reactions is enhanced by the effects of plasma screening in a degenerate, weakly non-ideal electron gas and a strongly non-ideal two-component ion liquid with large ion charge. The regions of densities and temperatures in which screening due to the compressibility of the electron gas plays an important part are found. It is noted that the screening due to this compressibility may be influenced by strong magnetic fields B ≈ 10¹²-10¹³ G, which quantize the motion of the electrons and change the electron charge screening length in the plasma. The results can be used for the degenerate cores of white dwarfs and the shells of neutron stars.

  16. Model of investment appraisal of high-rise construction with account of cost of land resources

    Science.gov (United States)

    Okolelova, Ella; Shibaeva, Marina; Trukhina, Natalya

    2018-03-01

    The article considers the problems and potential of high-rise construction as a global urbanization trend. The results of theoretical and practical studies on the appraisal of investments in high-rise construction are provided. High-rise construction has a number of apparent upsides under modern conditions of megapolis development, and primarily it is economically efficient. Amid a serious lack of construction sites, skyscrapers successfully address the need for manufacturing, office and living premises. Nevertheless, there are plenty of issues related to high-rise construction, and only thorough scrutiny of them allows the real economic efficiency of this branch to be estimated. The article focuses on the question of the economic efficiency of high-rise construction. The suggested model allows adjusting the parameters of a facility under construction, such as the market value, as well as the coefficient of appreciation of the construction net cost, which depends on the number of storeys, in the form of a function or discrete values.

  17. Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

    KAUST Repository

    De La Garza Martinez, Pablo

    2016-05-01

    Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility of fluids remaining trapped inside the pore space. In this work I propose a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not compromise the accuracy of the results. Additionally, I include spatial correlation in the generation of pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values of irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implement the algorithm of Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and the relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
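The matrix decomposition method for spatially correlated pore sizes can be sketched as a Cholesky "colouring" of white noise. The covariance shape (exponential decay with distance), the log-normal mapping, and all parameter values below are assumptions for illustration; the thesis's actual covariance model is not reproduced here.

```python
import math
import random

# Correlated pore radii via matrix decomposition: build a covariance matrix
# that decays with pore-to-pore distance, Cholesky-factor it, and multiply
# the factor into i.i.d. standard normals to obtain a correlated field.

def cholesky(C):
    """Lower-triangular L with L L^T = C (C symmetric positive definite)."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return L

def correlated_radii(positions, mean_r, sigma, corr_len, rng):
    """Log-normally distributed, spatially correlated radii for pores at
    the given 1-D positions (assumed exponential covariance)."""
    n = len(positions)
    C = [[sigma**2 * math.exp(-abs(positions[i] - positions[j]) / corr_len)
          for j in range(n)] for i in range(n)]
    L = cholesky(C)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # colour the white noise: field = L @ z, then exponentiate around mean_r
    return [mean_r * math.exp(sum(L[i][k] * z[k] for k in range(n)))
            for i in range(n)]
```

Two pores much closer together than the correlation length receive nearly identical radii, which is exactly the structure an uncorrelated generator would miss.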

  19. On the influence of debris in glacier melt modelling: a new temperature-index model accounting for the debris thickness feedback

    Science.gov (United States)

    Carenzo, Marco; Mabillard, Johan; Pellicciotti, Francesca; Reid, Tim; Brock, Ben; Burlando, Paolo

    2013-04-01

    The increase of rockfalls from the surrounding slopes and of englacial melt-out material has led to an increase in the debris cover extent on Alpine glaciers. In recent years, distributed debris energy-balance models have been developed to account for the melt enhancement or reduction caused by a thin or thick debris layer, respectively. However, such models require a large amount of input data that are often unavailable, especially in remote mountain areas such as the Himalaya. Some of the input data, such as wind or temperature, are also difficult to extrapolate from station measurements. Owing to their lower data requirements, empirical models have been used in glacier melt modelling. However, they generally simplify the debris effect by using a single melt-reduction factor, which does not account for the influence of debris thickness on melt. In this paper, we present a new temperature-index model accounting for the debris thickness feedback in the computation of melt rates at the debris-ice interface. The empirical parameters (temperature factor, shortwave radiation factor, and lag factor accounting for the energy transfer through the debris layer) are optimized at the point scale for several debris thicknesses against melt rates simulated by a physically based debris energy-balance model. The latter has been validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. The new model is developed on Miage Glacier, Italy, a debris-covered glacier whose ablation area is mantled in a near-continuous layer of rock. Subsequently, its transferability is tested on Haut Glacier d'Arolla, Switzerland, where debris is thinner and its extent has been seen to expand in recent decades. The results show that the performance of the new debris temperature-index model (DETI) in simulating the glacier melt rate at the point scale
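An enhanced temperature-index model of the kind extended here combines a temperature term and a shortwave radiation term. The sketch below uses that generic form with placeholder parameter values and a purely hypothetical debris-damping shape; the paper's fitted, thickness-dependent parameterization and lag factor are not reproduced.

```python
# Generic enhanced temperature-index melt model:
#   M = TF * T + SRF * (1 - albedo) * I   when T exceeds a threshold,
# with the empirical factors TF and SRF damped by debris thickness.

def melt_rate(T, I, albedo, TF, SRF, T_threshold=1.0):
    """Melt (mm w.e. per time step) from air temperature T (degC) and
    incoming shortwave radiation I (W/m2)."""
    if T <= T_threshold:
        return 0.0
    return TF * T + SRF * (1.0 - albedo) * I

def debris_factors(h_debris, TF_bare=0.05, SRF_bare=0.0094):
    """Illustrative monotonic decay of the factors with debris depth h (m).
    The 1/(1 + 20 h) shape is hypothetical, not the paper's fitted one."""
    damping = 1.0 / (1.0 + 20.0 * h_debris)
    return TF_bare * damping, SRF_bare * damping
```

Bare ice (zero thickness) recovers the undamped factors; a thick debris layer progressively suppresses the computed melt, which is the feedback the single-factor empirical models ignore.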

  20. A primer for biomedical scientists on how to execute model II linear regression analysis.

    Science.gov (United States)

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
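The OLP computation described in point 4 is straightforward to reproduce: the ordinary least products (geometric mean) slope is sign(r) × SD(y)/SD(x), and percentile-bootstrap confidence intervals follow the same resampling idea the smatr program uses. A sketch of both steps (not smatr itself):

```python
import random

# Ordinary least products (Model II) regression:
#   slope = sign(covariance) * sqrt(var(y) / var(x)),
#   intercept = mean(y) - slope * mean(x).
# The percentile bootstrap gives an approximate 95% CI for the slope.

def olp(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = (1 if sxy >= 0 else -1) * (syy / sxx) ** 0.5
    return slope, my - slope * mx

def bootstrap_slope_ci(x, y, n_boot=2000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    n = len(x)
    slopes = []
    for _ in range(n_boot):
        ii = [rng.randrange(n) for _ in range(n)]   # resample pairs
        slopes.append(olp([x[i] for i in ii], [y[i] for i in ii])[0])
    slopes.sort()
    lo = slopes[int(alpha / 2 * n_boot)]
    hi = slopes[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Note that the resampling is of (x, y) pairs, not residuals, because in Model II regression both variables carry error.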