WorldWideScience

Sample records for model ii accounts

  1. A kinetic model for type I and II IP3R accounting for mode changes.

    Science.gov (United States)

    Siekmann, Ivo; Wagner, Larry E; Yule, David; Crampin, Edmund J; Sneyd, James

    2012-08-22

    Based upon an extensive single-channel data set, a Markov model for types I and II inositol trisphosphate receptors (IP(3)R) is developed. The model aims to represent accurately the kinetics of both receptor types of IP(3)R depending on the concentrations of inositol trisphosphate (IP(3)), adenosine trisphosphate (ATP), and intracellular calcium (Ca(2+)). In particular, the model takes into account that for some combinations of ligands the IP(3)R switches between extended periods of inactivity alternating with intervals of bursting activity (mode changes). In a first step, the inactive and active modes are modeled separately. It is found that, within modes, both receptor types are ligand-independent. In a second step, the submodels are connected by transition rates. Ligand-dependent regulation of the channel activity is achieved by modulating these transitions between active and inactive modes. As a result, a compact representation of the IP(3)R is obtained that accurately captures stochastic single-channel dynamics including mode changes in a model with six states and 10 rate constants, only two of which are ligand-dependent.
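The mode-switching mechanism described above can be sketched with a minimal Gillespie simulation. The two-mode scheme below, all rate values, and the rule that the inactive mode is closed are illustrative assumptions, not the published six-state model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-mode gating scheme (NOT the published six-state model):
# an inactive mode and an active mode, with mode-switching rates q_act/q_inact
# (the ligand-dependent part) and fixed within-mode open/close rates.
q_act, q_inact = 2.0, 1.0      # 1/s, mode transitions (assumed ligand-dependent)
k_open, k_close = 50.0, 25.0   # 1/s, gating within the active mode (assumed)

def gillespie(T=100.0):
    """Simulate the mode/state trajectory; return the open probability."""
    t, mode, is_open, t_open = 0.0, 0, 0, 0.0   # mode 0 = inactive, 1 = active
    while t < T:
        if mode == 0:
            rates = [q_act]                      # inactive mode: can only activate
        else:
            rates = [q_inact, k_close if is_open else k_open]
        total = sum(rates)
        dt = min(rng.exponential(1.0 / total), T - t)
        if is_open:
            t_open += dt
        t += dt
        if t >= T:
            break
        if rng.uniform(0, total) < rates[0]:     # mode switch
            mode = 1 - mode
            is_open = 0                          # inactive mode is closed
        else:                                    # open/close within active mode
            is_open = 1 - is_open
    return t_open / T

p_open = gillespie()
```

With these assumed rates the channel bursts while in the active mode and is silent otherwise, which is the qualitative signature of the mode changes the record describes.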

  2. On the importance of accounting for competing risks in pediatric brain cancer: II. Regression modeling and sample size.

    Science.gov (United States)

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-03-15

    To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest. Copyright © 2011 Elsevier Inc. All rights reserved.
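The difference between the competing-risks quantities can be illustrated with a small simulation. The hazard values below are hypothetical; with constant cause-specific hazards, treating the competing events as censoring overstates the cumulative incidence of the main event:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical constant cause-specific hazards for the three competing events
# (A: radiotherapy after progression; B: no radiotherapy; C: elective radiotherapy).
lam = {"A": 0.05, "B": 0.02, "C": 0.03}   # per month, assumed values
n = 100_000
times = {k: rng.exponential(1 / l, n) for k, l in lam.items()}
T = np.minimum.reduce(list(times.values()))                    # observed event time
cause = np.array(list(lam))[np.argmin(np.stack(list(times.values())), axis=0)]

t0 = 12.0  # months
# Empirical cumulative incidence of A by t0: the correct competing-risks quantity
cif_A = np.mean((T <= t0) & (cause == "A"))
# Naive 1 - KM treating B and C as censoring: overstates the incidence of A
naive_A = 1 - np.exp(-lam["A"] * t0)
```

Here the naive estimate (~0.45) exceeds the true cumulative incidence (~0.35) because it ignores that many children experience a competing event first.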

  3. Accounting Issues: An Essay Series. Part II--Accounts Receivable

    Science.gov (United States)

    Laux, Judith A.

    2007-01-01

    This is the second in a series of articles designed to help academics refocus the introductory accounting course on the theoretical underpinnings of accounting. Intended as a supplement for the principles course, this article connects the asset Accounts Receivable to the essential theoretical constructs, discusses the inherent tradeoffs and…

  4. Combining a polarizable force-field and a coarse-grained polarizable solvent model. II. Accounting for hydrophobic effects.

    Science.gov (United States)

    Masella, Michel; Borgis, Daniel; Cuniasse, Philippe

    2011-09-01

A revised and improved version of our efficient polarizable force-field/coarse-grained solvent combined approach (Masella, Borgis, and Cuniasse, J. Comput. Chem. 2008, 29, 1707) is described. The polarizable pseudo-particle solvent model represents the macroscopic solvent polarization by induced dipoles placed on mobile pseudo-particles. In this study, we propose a new formulation of the energy term handling the nonelectrostatic interactions among the pseudo-particles. This term is now able to reproduce the energetic and structural response of liquid water to the presence of a hydrophobic spherical cavity. Accordingly, the parameters of the energy term handling the nonpolar solute/solvent interactions have been refined to reproduce the solvation free energy of small solutes, based on a standard thermodynamic integration scheme. The reliability of this new approach has been checked against the properties of solvated methane and of the solvated methane dimer, as well as by performing 10 × 20 ns molecular dynamics (MD) trajectories for three solvated proteins. Long-time stability of the protein structures is observed along the trajectories. Moreover, our method still provides a measure of the protein solvation thermodynamics at the same accuracy as standard Poisson-Boltzmann continuum methods. These results show the relevance of our approach and its applicability to massively coupled MD schemes to accurately and intensively explore solvated macromolecule potential energy surfaces.
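The thermodynamic integration scheme mentioned above can be sketched numerically: the solvation free energy is the integral of the ensemble average ⟨dU/dλ⟩ over the coupling parameter λ. The window averages below are hypothetical placeholders standing in for MD output:

```python
import numpy as np

# Thermodynamic-integration sketch: ΔF = ∫_0^1 ⟨dU/dλ⟩_λ dλ.
# The ⟨dU/dλ⟩ values (kJ/mol) are hypothetical stand-ins for per-window MD averages.
lam = np.linspace(0.0, 1.0, 11)
dU_dlam = np.array([-1.0, -3.2, -5.8, -8.1, -9.9, -11.0,
                    -11.4, -11.1, -10.2, -8.8, -7.0])

# Trapezoidal quadrature over the lambda grid
delta_F = float(np.sum((dU_dlam[1:] + dU_dlam[:-1]) / 2 * np.diff(lam)))
```

In practice the λ grid spacing and window sampling lengths are convergence parameters; the quadrature itself is this simple.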

  5. Modeling of the structure-specific kinetics of abiotic, dark reduction of Hg(II) complexed by O/N and S functional groups in humic acids while accounting for time-dependent structural rearrangement

    Science.gov (United States)

    Jiang, Tao; Skyllberg, Ulf; Wei, Shiqiang; Wang, Dingyong; Lu, Song; Jiang, Zhenmao; Flanagan, Dennis C.

    2015-04-01

Dark reduction of Hg(II) to Hg(0) in deep waters, soils and sediments accounts for a large part of legacy Hg recycling back to the atmosphere. Natural organic matter (NOM) plays a dual role in the process, acting as an electron donor and complexation agent of Hg(II). Experimental determination of rates of dark Hg(II) reduction is complicated by the simultaneously ongoing kinetics of Hg(II) rearrangement from the abundant, relatively weakly bonding RO/N (carboxyl, amino) groups in NOM to the much more strongly bonding RSH (thiol) group. In this study, kinetics of the rearrangement are accounted for and we report rates of dark Hg(II) reduction for two molecular structures in the presence of humic acids (HA) extracted from three different sources. Values of the pseudo first-order rate constant for the proposed structure Hg(OR)2 (kredHg(OR)2) were 0.18, 0.22 and 0.35 h⁻¹ for Peat, Coal and Soil HA, respectively, and values of the constant for the proposed structure RSHgOR (kred RSHgOR) were 0.003 and 0.006 h⁻¹ for Peat and Soil HA, respectively. The Hg(SR)2 structure is thermodynamically the most stable, but the limited time of the experiment (53 h) did not allow for a determination of the rate of the very slow reduction of Hg(II) in this structure. For two out of three HA samples the concentration of RSH groups optimized by the kinetic model (0.6 × 10⁻³ RSH groups per C atom) was in good agreement with independent estimates provided by sulfur X-ray absorption near-edge spectroscopy (S XANES). Experiments conducted at varying concentrations of Hg(II) and HA demonstrated a positive relationship between Hg(II) reduction and concentrations of specific Hg(II) structures and electron donor groups, suggesting first order in each of these two components. The limitation of the Hg(II) reduction by electron-donating groups of HA, as represented by the native reducing capacity (NRC), was demonstrated for the Coal HA sample. Normalization to NRC resulted in pseudo second-order rate…
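A minimal sketch of the competing rearrangement/reduction kinetics, using the Peat HA rate constants quoted above; the rearrangement rate `k_rearr` is a hypothetical value, not from the paper:

```python
# Two-pool first-order sketch of the scheme in the abstract: Hg complexed by
# O/N groups (Hg(OR)2) is reduced quickly while simultaneously rearranging to
# the stronger thiol complex (RSHgOR), which is reduced much more slowly.
k_red_OR = 0.18    # 1/h, Peat HA value from the abstract
k_red_SR = 0.003   # 1/h, Peat HA value from the abstract
k_rearr  = 0.10    # 1/h, hypothetical rearrangement rate (NOT from the paper)

dt, T = 0.01, 53.0                 # h; 53 h is the experiment duration above
hg_OR, hg_SR, hg0 = 1.0, 0.0, 0.0  # normalised concentrations
for _ in range(int(T / dt)):       # simple forward-Euler integration
    d_OR = -(k_red_OR + k_rearr) * hg_OR
    d_SR = k_rearr * hg_OR - k_red_SR * hg_SR
    d_0  = k_red_OR * hg_OR + k_red_SR * hg_SR
    hg_OR += d_OR * dt
    hg_SR += d_SR * dt
    hg0   += d_0 * dt
```

By the end of the 53 h window the fast O/N pool is exhausted while a substantial RSHgOR pool remains, mirroring why the very slow Hg(SR)2 reduction could not be resolved experimentally.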

  6. Modelling in Accounting. Theoretical and Practical Dimensions

    Directory of Open Access Journals (Sweden)

    Teresa Szot-Gabryś

    2010-10-01

Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and its ability to adapt to the information needs of its recipients. One of the main currents in the development of accounting theory and practice is to extend economic measurement to areas not hitherto covered by any accounting system (it applies, for example, to small businesses, agricultural farms and human capital), which requires the development of an appropriate theoretical and practical model. The article illustrates the issue of modelling in accounting based on the example of an accounting model developed for small businesses, i.e. economic entities which are not obliged by law to keep accounting records.

  7. Implementing a trustworthy cost-accounting model.

    Science.gov (United States)

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  8. Modeling in Accounting, an Imperative Process?

    Directory of Open Access Journals (Sweden)

    Robu Sorin-Adrian

    2014-08-01

The approach to this topic was suggested to us by the fact that a controversy currently persists regarding the elements that decisively influence the qualitative characteristics of useful financial information. Among these elements, we note accounting models and concepts of capital maintenance in terms of the accounting result, which can be influenced by factors such as subjectivity or even a lack of neutrality. Therefore, in formulating a response to the question in the title of the paper, we start from the premise that the financial statements prepared by accounting systems must be the result of processing based on appropriate models, which can ultimately respond as well as possible to the requirements of external and internal users, in particular knowledge of the financial position and performance of economic entities.

  9. Modeling habitat dynamics accounting for possible misclassification

    Science.gov (United States)

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

Land cover data are widely used in ecology, as land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging still, because classification error can be confused with real change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities, using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel), and when true and observed states are obtained at one level of resolution but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling-up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
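The bias described above is easy to reproduce in a simulation; the true transition matrix P and confusion matrix E below are assumed values:

```python
import numpy as np

rng = np.random.default_rng(2)

P = np.array([[0.95, 0.05],      # assumed true habitat transition matrix
              [0.10, 0.90]])
E = np.array([[0.90, 0.10],      # assumed confusion matrix: P(observed | true)
              [0.10, 0.90]])

n = 200_000
u = rng.random((4, n))
s1 = (u[0] < 0.5).astype(int)            # true state at time 1 (uniform start)
s2 = (u[1] < P[s1, 1]).astype(int)       # true state at time 2
o1 = (u[2] < E[s1, 1]).astype(int)       # observed (misclassified) states
o2 = (u[3] < E[s2, 1]).astype(int)

# Naive estimate of P[0, 1] from observed states only
naive_p01 = np.mean(o2[o1 == 0])
```

With only 10% misclassification per occasion, the naive 0→1 transition estimate comes out around 0.21 versus the true 0.05, i.e. apparent habitat turnover is inflated roughly fourfold, exactly the distortion the record reports.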

  10. Media Accountability Systems: Models, proposals and outlooks

    Directory of Open Access Journals (Sweden)

    Luiz Martins da Silva

    2007-06-01

This paper analyzes one of the basic actions of SOS-Imprensa, the mechanism to assure Media Accountability, with the goal of proposing a synthesis of models for the Brazilian reality. The article aims to address the possibilities of creating and improving mechanisms to stimulate the democratic press process and to mark out and assure freedom of speech and personal rights with respect to the media. Based on the Press Social Responsibility Theory, the hypothesis is that the experiences analyzed (Communication Council, Press Council, Ombudsman and Readers Council) are alternatives for accountability, mediation and arbitration, seeking visibility, trust and public support in favor of fairer media.

  11. Guidelines for School Property Accounting in Colorado, Part II--General Fixed Asset Accounts.

    Science.gov (United States)

    Stiverson, Clare L.

    The second publication of a series of three issued by the Colorado Department of Education is designed as a guide for local school districts in the development of a property accounting system. It defines and classifies groups of accounts whereby financial information, taken from inventory records, may be transcribed into debit and credit entries…

  12. Driving Strategic Risk Planning With Predictive Modelling For Managerial Accounting

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

Currently, risk management in management/managerial accounting is treated as deterministic. Although it is well known that risk estimates are necessarily uncertain or stochastic, until recently the methodology required to handle stochastic risk-based elements appeared to be impractical and too mathematical. The ultimate purpose of this paper is to “make the risk concept procedural and analytical” and to argue that accountants should now include stochastic risk management as a standard tool. Drawing on mathematical modelling and statistics, this paper methodically develops a risk analysis approach for managerial accounting and shows how it can be used to determine the impact of different types of risk assessment input parameters on the variability of important outcome measures. The purpose is to: (i) point out the theoretical necessity of a stochastic risk framework; (ii) present a stochastic framework…
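A stochastic treatment of a managerial accounting figure can be sketched with a simple Monte Carlo run; all distributions and parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stochastic (rather than deterministic) treatment of a simple profit figure:
# price, volume and unit cost are random inputs. All parameters are hypothetical.
n = 100_000
price  = rng.normal(100.0, 5.0, n)                              # selling price
volume = rng.lognormal(mean=np.log(1000.0), sigma=0.10, size=n) # units sold
cost   = rng.normal(70.0, 4.0, n)                               # unit cost
profit = (price - cost) * volume

p_loss = np.mean(profit < 0)        # probability of a loss
var_95 = np.percentile(profit, 5)   # 5th percentile, a value-at-risk-style figure
```

Instead of a single deterministic profit number, the accountant reports a distribution: a loss probability and a downside percentile that show how input risk propagates to the outcome measure.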

13. Fusion of expertise among accounting faculty. Towards an expertise model for academia in accounting.

    NARCIS (Netherlands)

    Njoku, Jonathan C.; van der Heijden, Beatrice; Inanga, Eno L.

    2010-01-01

    This paper aims to portray an accounting faculty expert. It is argued that neither the academic nor the professional orientation alone appears adequate in developing accounting faculty expertise. The accounting faculty expert is supposed to develop into a so-called ‘flexpert’ (Van der Heijden, 2003)

  14. Accommodating environmental variation in population models: metaphysiological biomass loss accounting.

    Science.gov (United States)

    Owen-Smith, Norman

    2011-07-01

    1. There is a pressing need for population models that can reliably predict responses to changing environmental conditions and diagnose the causes of variation in abundance in space as well as through time. In this 'how to' article, it is outlined how standard population models can be modified to accommodate environmental variation in a heuristically conducive way. This approach is based on metaphysiological modelling concepts linking populations within food web contexts and underlying behaviour governing resource selection. Using population biomass as the currency, population changes can be considered at fine temporal scales taking into account seasonal variation. Density feedbacks are generated through the seasonal depression of resources even in the absence of interference competition. 2. Examples described include (i) metaphysiological modifications of Lotka-Volterra equations for coupled consumer-resource dynamics, accommodating seasonal variation in resource quality as well as availability, resource-dependent mortality and additive predation, (ii) spatial variation in habitat suitability evident from the population abundance attained, taking into account resource heterogeneity and consumer choice using empirical data, (iii) accommodating population structure through the variable sensitivity of life-history stages to resource deficiencies, affecting susceptibility to oscillatory dynamics and (iv) expansion of density-dependent equations to accommodate various biomass losses reducing population growth rate below its potential, including reductions in reproductive outputs. Supporting computational code and parameter values are provided. 3. The essential features of metaphysiological population models include (i) the biomass currency enabling within-year dynamics to be represented appropriately, (ii) distinguishing various processes reducing population growth below its potential, (iii) structural consistency in the representation of interacting populations and
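The within-year consumer-resource dynamics described in point 2(i) can be sketched as a seasonally forced model in biomass currency; all parameter values below are hypothetical:

```python
import numpy as np

# Metaphysiological-style consumer-resource sketch: the resource grows
# logistically with a seasonal modulation; consumer biomass gain saturates with
# resource availability and losses are constant. All parameters are hypothetical.
dt, years = 0.01, 20
t = np.arange(0, years, dt)
R, C = 1.0, 0.1                 # resource and consumer biomass (arbitrary units)
R_hist, C_hist = [], []
for ti in t:
    season = 1.0 + 0.5 * np.sin(2 * np.pi * ti)   # within-year variation
    growth_R = 1.0 * season * R * (1 - R / 2.0)   # seasonal logistic growth
    intake   = 0.8 * C * R / (R + 0.5)            # saturating intake response
    dR = growth_R - intake
    dC = 0.5 * intake - 0.3 * C                   # conversion minus losses
    R += dR * dt
    C += dC * dt
    R_hist.append(R)
    C_hist.append(C)
```

Density feedback arises here exactly as the abstract describes: the consumer depresses the resource seasonally, which in turn limits consumer biomass growth, without any explicit interference-competition term.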

15. MODEL OF MANAGEMENT ACCOUNTING FOR MERCHANDISE SECTOR COMPANIES

    Directory of Open Access Journals (Sweden)

    Glăvan Elena Mariana

    2013-04-01

Changes in the Romanian accounting system have been articulated with priority to financial accounting, without leaving aside management accounting. Changes in management accounting covered a few general indicative directions, giving managers enhanced skills regarding the organisation and functioning of this branch of the accounting system. Romanian regulations offered solutions [16] regarding the accounts and records in managerial accounting, but all these elements are recommendations. In the Romanian chart of accounts there is a specific class of accounts, class 9 (accounts for management accounting), but this class of accounts is an optional one. In fact there are many models applied by companies, but all are based on the accounts mentioned in class 9. After an analysis of the Romanian accounting literature, we realized that an important part of the studies focuses on management accounting models applied to manufacturing companies. As a result, we conceived a management accounting model specific to merchandise (trading) companies. In our model there are two groups of accounts: one from the chart of accounts (class 9) and another composed of accounts proposed by the authors. We think that our model could provide more information about each cost object, regarding sales, acquisition cost of sales, distribution cost, administration cost, total cost of sales, contribution margin and profitability. Our model is exemplified by a case study applied to a wholesale company.
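The per-cost-object outputs listed above can be sketched as a small computation; all figures, group names, and the simplified margin definition are hypothetical:

```python
# Hypothetical per-cost-object figures for a merchandise (trading) company:
# cost of goods sold plus distribution and administration costs allocated to
# each merchandise group. The margin here is simplified to sales minus the
# acquisition cost of sales.
objects = {
    "group_A": {"sales": 500.0, "cogs": 300.0, "distribution": 60.0, "admin": 40.0},
    "group_B": {"sales": 200.0, "cogs": 150.0, "distribution": 30.0, "admin": 25.0},
}

report = {}
for name, f in objects.items():
    total_cost = f["cogs"] + f["distribution"] + f["admin"]
    margin = f["sales"] - f["cogs"]          # simplified contribution margin
    profit = f["sales"] - total_cost
    report[name] = {"contribution_margin": margin, "total_cost": total_cost,
                    "profit": profit, "profit_rate": profit / f["sales"]}
```

The point of such per-object reporting is visible even in this toy case: group_B contributes a positive margin on acquisition cost yet is unprofitable once distribution and administration costs are allocated.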

  16. Discrete Model of Ideological Struggle Accounting for Migration

    CERN Document Server

    Vitanov, Nikolay K; Rotundo, Giulia

    2012-01-01

A discrete-time model of ideological competition is formulated taking into account population migration. The model is based on interactions between global populations of non-believers and followers of different ideologies. The complex dynamics of the attracting manifolds is investigated. Conversion from one ideology to another by means of (i) mass media influence and (ii) interpersonal relations is considered. Moreover, a different birth rate is assumed for each ideology, the rate being positive for the reference population, made of initial non-believers. Ideological competition can happen in one or several regions in space. In the latter case, migration of non-believers and adepts is allowed; this leads to an enrichment of the ideological dynamics. Finally, the current ideological situation in the Arab countries and China is commented upon from the point of view of the presently developed mathematical model. The massive forced conversion by Ottoman Turks in the Balkans is briefly dis...
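The conversion-plus-migration dynamics can be sketched as a discrete map; all rates and initial populations below are hypothetical:

```python
import numpy as np

# Discrete-time sketch: non-believers (N) and followers of one ideology (F)
# in two regions, with mass-media and interpersonal conversion plus symmetric
# migration between the regions. All rates are hypothetical.
alpha = 0.01            # mass-media conversion rate per step
beta  = 0.05            # interpersonal conversion coefficient
m     = 0.02            # migration fraction per step
b_N, b_F = 0.02, 0.01   # birth rates, higher for the non-believer reference group

N = np.array([1000.0, 500.0])   # non-believers in regions 1 and 2
F = np.array([10.0, 200.0])     # followers in regions 1 and 2
for _ in range(200):
    conv = alpha * N + beta * N * F / (N + F)   # conversions this step
    N = N + b_N * N - conv
    F = F + b_F * F + conv
    # symmetric migration of both groups between the two regions
    N = N + m * (N[::-1] - N)
    F = F + m * (F[::-1] - F)
```

Migration couples the two regions, so the ideology seeded mainly in region 2 spreads into region 1 faster than mass-media conversion alone would allow.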

  17. DMFCA Model as a Possible Way to Detect Creative Accounting and Accounting Fraud in an Enterprise

    Directory of Open Access Journals (Sweden)

    Jindřiška Kouřilová

    2013-05-01

The quality of reported accounting data, as well as the quality and behaviour of their users, influences the efficiency of an enterprise’s management, and its assessment could therefore change as well. Several methods and tools have been used to identify creative accounting and fraud. In this paper we present our proposal of the DMFCA (Detection Material Flow Cost Accounting) balance model, based on environmental accounting and on MFCA (Material Flow Cost Accounting) as its method. The following balance areas are included: material, financial and legislative. Using an analysis of the strengths and weaknesses of the model, its possible use within a production and business company was assessed. Its possible use for detecting some creative accounting techniques was also assessed. The model is developed in detail for practical use, and its theoretical aspects are described.
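The material balance area of such a model can be sketched as a simple MFCA-style check; the figures and the tolerance are hypothetical:

```python
# Material Flow Cost Accounting style balance sketch: inputs must equal product
# output plus losses plus inventory change; a residual beyond tolerance is a
# red flag worth investigating. All figures are hypothetical.
inputs = {"raw_material": 1000.0, "auxiliary": 120.0}            # kg
outputs = {"product": 860.0, "scrap": 180.0, "emissions": 40.0}  # kg
inventory_change = 30.0                                          # kg added to stock

residual = sum(inputs.values()) - sum(outputs.values()) - inventory_change
# flag residuals above an assumed 0.5% materiality threshold
suspicious = abs(residual) > 0.005 * sum(inputs.values())
```

The detection idea is exactly this: unexplained residuals in the material balance point at flows that the financial records may be concealing.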

  18. Display of the information model accounting system

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2011-12-01

This paper presents the accounting information system in public companies, the business technology matrix and the data flow diagram. The paper describes the purpose and goals of the accounting process, the matrix of sub-processes and data classes. The data flow in the accounting process and the so-called general ledger module are described in detail. Activities concerning the financial statements and their determination in companies are mentioned as well. It is stated how the general ledger module should function and what characteristics it must have. Line graphs depict indicators of the company’s business success, indebtedness and efficiency coefficients based on financial balance reports and profit and loss reports.
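How a general ledger module can enforce double entry may be sketched as follows; the account names and journal entries are illustrative:

```python
from collections import defaultdict

# Minimal general-ledger sketch: each journal entry must post equal debits and
# credits; unbalanced entries are rejected. Account names are illustrative.
ledger = defaultdict(float)   # positive = debit balance, negative = credit balance

def post(entry):
    debits = sum(amount for _, amount in entry["debit"])
    credits = sum(amount for _, amount in entry["credit"])
    if abs(debits - credits) > 1e-9:
        raise ValueError("unbalanced journal entry")
    for account, amount in entry["debit"]:
        ledger[account] += amount
    for account, amount in entry["credit"]:
        ledger[account] -= amount

post({"debit": [("cash", 5000.0)], "credit": [("share_capital", 5000.0)]})
post({"debit": [("inventory", 1200.0)], "credit": [("cash", 1200.0)]})

# A well-formed ledger always sums to zero (the trial balance balances).
trial_balance_ok = abs(sum(ledger.values())) < 1e-9
```

Reporting then reduces to aggregating these balances into the financial statements the record mentions.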

  19. COMPARATIVE STUDY ON ACCOUNTING MODELS "CASH" AND "ACCRUAL"

    OpenAIRE

    Tatiana Danescu; Luminita Rus

    2013-01-01

Accounting, as a source of information, can recognize economic transactions taking into account the time of payment or receipt thereof, or as soon as they occur. There are two basic models of accounting: accrual basis and cash basis. In the cash accounting method, transactions are recorded only when cash is received or paid, and no difference is made between the purchase of an asset and the payment of an expense – both are considered "payments". Accrual accounting achieves this d...
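The timing difference between the two bases can be sketched by recording the same transactions both ways; the amounts and the one-month depreciation charge are illustrative:

```python
# Same transactions under the two bases: goods are delivered in January but the
# cash is collected in February; a machine is bought for cash in January and
# depreciated monthly under accrual. All figures are hypothetical.
sale_amount, machine_cost, monthly_depreciation = 900.0, 600.0, 10.0

# Cash basis: record only when cash moves; the machine purchase is a "payment".
cash_jan = -machine_cost            # machine paid, no receipt yet
cash_feb = sale_amount              # cash collected for the January sale

# Accrual basis: revenue when earned, asset cost spread over its useful life.
accrual_jan = sale_amount - monthly_depreciation
accrual_feb = -monthly_depreciation
```

January looks like a loss on the cash basis and a profit on the accrual basis, even though the underlying transactions are identical; only the recognition timing differs.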

  20. 25 CFR 547.9 - What are the minimum technical standards for Class II gaming system accounting functions?

    Science.gov (United States)

    2010-04-01

... gaming system accounting functions? 547.9 Section 547.9 Indians NATIONAL INDIAN GAMING COMMISSION... systems. (a) Required accounting data. The following minimum accounting data, however named, shall be...) Accounting data storage. If the Class II gaming system electronically maintains accounting data:...

  1. Anisotropic models to account for large borehole washouts to estimate gas hydrate saturations in the Gulf of Mexico Gas Hydrate Joint Industry Project Leg II Alaminos 21 B well

    Science.gov (United States)

    Lee, M.W.; Collett, T.S.; Lewis, K.A.

    2012-01-01

Through the use of 3-D seismic amplitude mapping, several gas hydrate prospects were identified in the Alaminos Canyon (AC) area of the Gulf of Mexico. Two locations were drilled as part of the Gulf of Mexico Gas Hydrate Joint Industry Project Leg II (JIP Leg II) in May of 2009, and a comprehensive set of logging-while-drilling (LWD) logs was acquired at each well site. LWD logs indicated that resistivity in the range of ~2 ohm-m and P-wave velocity in the range of ~1.9 km/s were measured in the target sand interval between 515 and 645 feet below sea floor. These values were slightly elevated relative to those measured in the sediment above and below the target sand. However, the initial well log analysis was inconclusive regarding the presence of gas hydrate in the logged sand interval, mainly because large washouts caused by drilling in the target interval degraded confidence in the well log measurements. To assess gas hydrate saturations in the sedimentary section drilled in the Alaminos Canyon 21B (AC21-B) well, a method of compensating for the effect of washouts on the resistivity and acoustic velocities was developed. The proposed method models the washed-out portion of the borehole as a vertical layer filled with sea water (drilling fluid), and the apparent anisotropic resistivity and velocities caused by a vertical layer are used to correct the measured log values. By incorporating the conventional marine seismic data into the well log analysis, the average gas hydrate saturation in the target sand section in the AC21-B well can be constrained to the range of 8–28%, with 20% being our best estimate.
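For orientation, the standard isotropic Archie relation that underlies such resistivity-based saturation estimates can be sketched as follows. This is not the paper's anisotropic washout correction; apart from the ~2 ohm-m resistivity quoted above, all parameter values are assumed:

```python
# Generic (isotropic) Archie-equation estimate of gas hydrate saturation from
# resistivity. NOT the anisotropic washout correction of the paper, just the
# standard relation such estimates build on. Parameters below are assumed.
a, m, n = 1.0, 2.0, 2.0      # Archie constants (assumed)
phi = 0.35                   # porosity (assumed)
R_w = 0.2                    # formation-water resistivity, ohm-m (assumed)
R_t = 2.0                    # measured formation resistivity from the abstract

S_w = ((a * R_w) / (phi**m * R_t)) ** (1.0 / n)   # water saturation
S_h = 1.0 - S_w                                   # gas hydrate saturation
```

With these assumed parameters the estimate lands near 10%, inside the 8–28% range the record reports; in a washed-out hole, however, R_t itself is biased low, which is exactly why the paper's correction is needed.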

  2. A holistic model for Islamic accountants and its value added

    OpenAIRE

    El-Halaby, Sherif; Hussainey, Khaled

    2015-01-01

Purpose – The core objective of this study is to introduce a holistic model for Islamic accountants by exploring the perspectives of Muslim scholars, Islamic sharia and AAOIFI ethical standards. The study also contributes to the existing literature by exploring the main added value of the Muslim accountant for stakeholders, through investigating the main roles of Islamic accountants. Design/methodology/approach – The paper critically reviews historical debates about Islamic accounting and t...

  3. River water quality modelling: II

    DEFF Research Database (Denmark)

    Shanahan, P.; Henze, Mogens; Koncsos, L.

    1998-01-01

The U.S. EPA QUAL2E model is currently the standard for river water quality modelling. While QUAL2E is adequate for the regulatory situation for which it was developed (the U.S. wasteload allocation process), there is a need for a more comprehensive framework for research and teaching. Moreover, … and to achieve robust model calibration. Mass balance problems arise from failure to account for mass in the sediment as well as in the water column, and from the fundamental imprecision of BOD as a state variable. (C) 1998 IAWQ. Published by Elsevier Science Ltd. All rights reserved.
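The kind of BOD/DO mass balance that QUAL2E-type river models build on can be sketched with the classic Streeter-Phelps oxygen-sag solution; the parameter values below are illustrative:

```python
import numpy as np

# Classic Streeter-Phelps oxygen-sag model: the oxygen deficit D(t) downstream
# of a BOD load, with first-order deoxygenation and reaeration.
# All parameter values are illustrative.
k_d = 0.3    # 1/day, BOD decay (deoxygenation) rate
k_a = 0.7    # 1/day, reaeration rate
L0  = 10.0   # mg/L, initial ultimate BOD
D0  = 1.0    # mg/L, initial oxygen deficit

t = np.linspace(0, 20, 401)   # days of travel time downstream
D = (k_d * L0 / (k_a - k_d)) * (np.exp(-k_d * t) - np.exp(-k_a * t)) \
    + D0 * np.exp(-k_a * t)

# Critical (maximum-deficit) travel time from the standard closed form
t_crit = np.log((k_a / k_d) * (1 - D0 * (k_a - k_d) / (k_d * L0))) / (k_a - k_d)
```

The sag minimum in dissolved oxygen occurs at `t_crit`; richer frameworks of the kind the record calls for add sediment oxygen demand and further state variables to this same mass-balance core.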

  4. Accountancy Modeling on Intangible Fixed Assets in Terms of the Main Provisions of International Accounting Standards

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2014-12-01

Intangible fixed assets are of great importance for the progress of economic units. In recent years new approaches have been developed, in addition to the old standards, so that intangible assets have gained a reputation both in the economic environment and in academia. We intend to develop a practical study on the main accounting approaches to the modeling of intangibles that impact on a company's brand development, at the research company PRORESEARCH SRL.

  5. The financial accounting model from a system dynamics' perspective

    NARCIS (Netherlands)

    Melse, E.

    2006-01-01

This paper explores the foundation of the financial accounting model. We examine the properties of the accounting equation as the principal algorithm for the design and the development of a System Dynamics model. Key to the perspective is the foundational requirement that resolves the temporal…
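The accounting equation at the heart of such a System Dynamics view can be sketched as stocks updated by flows; the flow values below are hypothetical:

```python
# System-dynamics sketch of the accounting equation: assets, liabilities and
# equity as stocks; each period's flows preserve Assets = Liabilities + Equity.
# All flow values are hypothetical.
assets, liabilities, equity = 1000.0, 400.0, 600.0

periods = [
    {"borrow": 200.0, "profit": 50.0, "repay": 0.0},
    {"borrow": 0.0,   "profit": 80.0, "repay": 100.0},
]
for p in periods:
    assets += p["borrow"] - p["repay"] + p["profit"]
    liabilities += p["borrow"] - p["repay"]
    equity += p["profit"]            # retained profit accumulates in equity

balanced = abs(assets - (liabilities + equity)) < 1e-9
```

The equation holds as an invariant of the stock-flow structure rather than as a constraint checked after the fact, which is the System Dynamics reading of double entry.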

  8. Multilayer piezoelectric transducer models combined with Field II

    DEFF Research Database (Denmark)

    Bæk, David; Willatzen, Morten; Jensen, Jørgen Arendt

    2012-01-01

… with a polymer ring, and submerged into water. The transducer models are developed to account for any external electrical loading impedance in the driving circuit. The models are adapted to calculate the surface acceleration needed by the Field II software in predicting pressure pulses at any location in front…

  9. Modeling interactions of Hg(II) and bauxitic soils.

    Science.gov (United States)

    Weerasooriya, Rohan; Tobschall, Heinz J; Bandara, Atula

    2007-11-01

The adsorptive interactions of Hg(II) with gibbsite-rich soils (hereafter SOIL-g) were modeled by 1-pK surface complexation theory, using the charge distribution multi-site ion competition (CD MUSIC) model incorporating a basic Stern layer model (BSM) to account for electrostatic effects. The model calibrations were performed against experimental data for synthetic gibbsite-Hg(II) adsorption. When [NaNO(3)] > or = 0.01 M, the Hg(II) adsorption density values of gibbsite, Γ(Hg(II)), showed a negligible variation with ionic strength. However, the Γ(Hg(II)) values show a marked variation with [Cl(-)]. When [Cl(-)] > or = 0.01 M, the Γ(Hg(II)) values showed a significant reduction with pH. The Hg(II) adsorption behavior in NaNO(3) was modeled assuming a homogeneous solid surface. The introduction of high-affinity sites, i.e., >Al(s)OH at a low concentration (typically about 0.045 sites nm(-2)), is required to model Hg(II) adsorption in NaCl. According to IR spectroscopic data, the bauxitic soil (SOIL-g) is characterized by gibbsite and bayerite. These mineral phases were not treated discretely in modeling the Hg(II)-soil interactions. The CD MUSIC/BSM model combination can be used to model Hg(II) adsorption on bauxitic soil. Organic matter seems to play a role in Hg(II) binding when pH > 8. Hg(II) adsorption in the presence of excess Cl(-) ions required the selection of high-affinity sites in modeling.
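The two-site picture (a sparse high-affinity >Al(s)OH pool plus abundant weaker sites) can be sketched with a two-term Langmuir isotherm, setting aside the electrostatic (Stern layer) corrections of the full CD MUSIC/BSM treatment. Both affinity constants and the weak-site density are hypothetical; only the 0.045 sites nm⁻² strong-site density follows the abstract:

```python
import numpy as np

# Two-site Langmuir sketch (no electrostatics): a small pool of high-affinity
# >Al(s)OH sites plus abundant weaker sites. Affinity constants and the
# weak-site density are hypothetical; the strong-site density is from the abstract.
K_strong, K_weak = 1e8, 1e4          # L/mol, assumed affinity constants
N_strong, N_weak = 0.045, 8.0        # sites/nm^2 (strong-site value from abstract)

c = np.logspace(-10, -5, 6)          # dissolved Hg(II), mol/L
theta = (N_strong * K_strong * c / (1 + K_strong * c)
         + N_weak * K_weak * c / (1 + K_weak * c))   # adsorbed density, sites/nm^2
```

At trace Hg(II) levels the sparse strong sites dominate the uptake, which is why the model needs them even at only ~0.045 sites nm⁻².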

  10. Testing of a one dimensional model for Field II calibration

    DEFF Research Database (Denmark)

    Bæk, David; Jensen, Jørgen Arendt; Willatzen, Morten

    2008-01-01

Field II is a program for simulating ultrasound transducer fields. It is capable of calculating the emitted and pulse-echoed fields for both pulsed and continuous wave transducers. To make it fully calibrated, a model of the transducer’s electro-mechanical impulse response must be included. We examine an adapted one-dimensional transducer model originally proposed by Willatzen [9] to calibrate Field II. This model is modified to calculate the required impulse responses needed by Field II for a calibrated field pressure and external circuit current calculation. The testing has been performed … to the calibrated Field II program for 1, 4, and 10 cycle excitations. Two parameter sets were applied for modeling: one real-valued Pz27 parameter set, manufacturer supplied, and one complex-valued parameter set found in the literature, Algueró et al. [11]. The latter implicitly accounts for attenuation. Results show…

  11. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Liliana Feleaga

    2006-06-01

    Accounting procedures cannot be analyzed without a prior evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be confused with cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: historical cost, current cost, realisable (settlement) value, and present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The many accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information; accountants (the preparers of financial statements) must therefore try to balance these two main qualitative characteristics of financial information.

  12. Models and Rules of Evaluation in International Accounting

    Directory of Open Access Journals (Sweden)

    Niculae Feleaga

    2006-04-01

    Accounting procedures cannot be analyzed without a prior evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has its own deep history. In accounting, the concept of value had a late and fragile start. The term value must not be confused with cost, even though value is frequently measured through costs. At the origin of the international accounting standards lies the framework for preparing, presenting and disclosing the financial statements. The framework stands as a reference matrix, as a standard of standards, as a constitution of financial accounting. According to the international framework, the financial statements use different evaluation bases: historical cost, current cost, realisable (settlement) value, and present value (the present value of cash flows). Choosing the evaluation basis and the capital maintenance concept will eventually determine the accounting evaluation model used in preparing the financial statements of a company. The many accounting evaluation models differ from one another in the degrees of relevance and reliability of the accounting information; accountants (the preparers of financial statements) must therefore try to balance these two main qualitative characteristics of financial information.

  13. Stochastic models in risk theory and management accounting

    NARCIS (Netherlands)

    Brekelmans, R.C.M.

    2000-01-01

    This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds on the probability of ruin for the classical risk process extended with a constant interest…
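
    The extension with constant interest rarely admits closed-form ruin probabilities, which is why bounds are valuable. Purely as an illustrative sketch (parameter values invented, premiums credited without interest for simplicity; this is not the thesis's bounding method), a Monte Carlo estimate could look like:

```python
import random

def ruin_probability(u0, premium=1.2, lam=1.0, mean_claim=1.0,
                     interest=0.05, horizon=200.0, n_paths=2000, seed=1):
    """Monte Carlo estimate of the finite-horizon ruin probability for a
    compound-Poisson surplus process whose surplus earns constant interest
    between claims. Illustrative sketch only."""
    rng = random.Random(seed)
    ruins = 0
    for _ in range(n_paths):
        u, t = u0, 0.0
        while t < horizon:
            w = rng.expovariate(lam)                 # time to next claim
            # surplus compounds at the interest rate, plus premium income
            u = u * pow(1 + interest, w) + premium * w
            u -= rng.expovariate(1.0 / mean_claim)   # pay the claim
            t += w
            if u < 0:
                ruins += 1
                break
    return ruins / n_paths
```

    With a positive premium loading and interest on the surplus, a larger initial capital drives the estimated ruin probability down sharply.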

  15. Accountability Analysis of Electronic Commerce Protocols by Finite Automaton Model

    Institute of Scientific and Technical Information of China (English)

    Xie Xiao-yao; Zhang Huan-guo

    2004-01-01

    The accountability of electronic commerce protocols is an important aspect of ensuring the security of electronic transactions. This paper proposes the Finite Automaton (FA) model as a new kind of framework for analyzing transaction protocols in electronic commerce applications.
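
    The paper's FA framework is not detailed in the abstract. Purely as an illustration of the idea, a transaction trace can be run through a deterministic finite automaton whose accepting state is reached only when every party holds its evidence; the states and message names below are invented:

```python
# A toy DFA over protocol messages: a trace is "accountable" only if it
# drives the automaton from "start" to the accepting state "settled".
TRANSITIONS = {
    ("start", "order"): "ordered",
    ("ordered", "payment"): "paid",
    ("paid", "receipt"): "settled",
}

def accountable(trace):
    """Run a message trace through the DFA; undefined moves reject."""
    state = "start"
    for msg in trace:
        state = TRANSITIONS.get((state, msg))
        if state is None:
            return False
    return state == "settled"
```

    A trace that stops before the receipt is delivered leaves one party without evidence, and the automaton rejects it.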

  17. The Two-Step Student Teaching Model: Training for Accountability.

    Science.gov (United States)

    Corlett, Donna

    This model of student teaching preparation was developed in collaboration with public schools to focus on systematic experience in teaching and training for accountability in the classroom. In the two-semester plan, students begin with teacher orientation and planning days, serve as teacher aides, attend various methods courses, teach several…

  18. Quantum-like models cannot account for the conjunction fallacy

    CERN Document Server

    Boyer-Kassem, Thomas; Guerci, Eric

    2016-01-01

    Human agents happen to judge that a conjunction of two terms is more probable than one of the terms alone, in contradiction with the rules of classical probability: this is the conjunction fallacy. One of the most discussed accounts of this fallacy is currently the quantum-like explanation, which relies on models exploiting the mathematics of quantum mechanics. The aim of this paper is to investigate the empirical adequacy of the major such quantum-like models. We first argue that they can be tested in three different ways, in a question order effect configuration which differs from the traditional conjunction fallacy experiment. We then carry out our proposed experiment, with varied methodologies from experimental economics. The experimental results we obtain are at odds with the predictions of the quantum-like models. This strongly suggests that the quantum-like account of the conjunction fallacy fails. Possible future research paths are discussed.

  19. Application of a predictive Bayesian model to environmental accounting.

    Science.gov (United States)

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than for parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB-containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model to incorporate environmental risk management into a company's overall risk management strategy is discussed.
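
    To convey the flavor of such a predictive model (the paper's spreadsheet model is more elaborate, and every parameter below is hypothetical), uncertainty about the accident rate can be carried by a gamma distribution, making the predictive annual count a gamma-mixed Poisson, with lognormal severities:

```python
import random

def annual_cost_samples(n=5000, rate_shape=2.0, rate_scale=0.05,
                        sev_mu=11.0, sev_sigma=1.2, seed=3):
    """Predictive simulation of annual accident cost. The accident rate is
    itself uncertain (gamma-distributed), so the count distribution is the
    predictive gamma-mixed Poisson; severities are lognormal. All numbers
    are invented for illustration."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n):
        lam = rng.gammavariate(rate_shape, rate_scale)   # uncertain frequency
        # draw a Poisson(lam) count via exponential waiting times in one year
        k, t = 0, rng.expovariate(lam)
        while t < 1.0:
            k += 1
            t += rng.expovariate(lam)
        costs.append(sum(rng.lognormvariate(sev_mu, sev_sigma)
                         for _ in range(k)))
    return costs

samples = annual_cost_samples()
expected_cost = sum(samples) / len(samples)
```

    Risk measures other than the mean (tail quantiles, exceedance probabilities) can then be read off the same sample of annual costs.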

  20. Accounting for Business Models: Increasing the Visibility of Stakeholders

    Directory of Open Access Journals (Sweden)

    Colin Haslam

    2015-01-01

    Purpose: This paper conceptualises a firm's business model, employing stakeholder theory as a central organising element to help inform the purpose and objective(s) of business model financial reporting and disclosure. Framework: Firms interact with a complex network of primary and secondary stakeholders to secure the value proposition of the firm's business model. This value proposition is itself a complex amalgam of value creating, value capturing and value manipulating arrangements with stakeholders. From a financial accounting perspective, the purpose of the value proposition of a firm's business model is to sustain liquidity and solvency as a going concern. Findings: This article argues that stakeholder relations affect the financial viability of the value proposition of a firm's business model. However, current financial reporting by function of expenses, and the central organising objectives of the accounting conceptual framework, conceal firm-stakeholder relations and their impact on reported financials. Practical implications: 'Business model' financial reporting would require a reorientation of the accounting conceptual framework that defines the objectives and purpose of financial reporting. This reorientation would involve reporting about stakeholder relations and their impact on a firm's financials, not simply reporting financial information to 'investors'. Social implications: Business model financial reporting has the potential to be stakeholder-inclusive, because the numbers and narratives reported by firms in their annual financial statements will increase the visibility of stakeholder relations and how these are being managed. Originality/value: This paper's original perspective is that a firm's business model is structured out of stakeholder relations. It presents the firm's value proposition as the product of value creating, capturing and…

  1. Optimal control design that accounts for model mismatch errors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.J. [Sandia National Labs., Albuquerque, NM (United States); Hull, D.G. [Texas Univ., Austin, TX (United States). Dept. of Aerospace Engineering and Engineering Mechanics

    1995-02-01

    A new technique is presented in this paper that reduces the complexity of state differential equations while accounting for modeling assumptions. The mismatch controls are defined as the differences between the model equations and the true state equations. The performance index of the optimal control problem is formulated with a set of tuning parameters that are user-selected to tune the control solution in order to achieve the best results. Computer simulations demonstrate that the tuned control law outperforms the untuned controller and produces results that are comparable to a numerically-determined, piecewise-linear optimal controller.

  2. Accounting for microbial habitats in modeling soil organic matter dynamics

    Science.gov (United States)

    Chenu, Claire; Garnier, Patricia; Nunan, Naoise; Pot, Valérie; Raynaud, Xavier; Vieublé, Laure; Otten, Wilfred; Falconer, Ruth; Monga, Olivier

    2017-04-01

    The extreme heterogeneity of soil constituents, architecture and inhabitants at the microscopic scale is increasingly recognized. Microbial communities exist and are active in a complex 3-D physical framework of mineral and organic particles defining pores of various sizes that are more or less inter-connected. This results in a frequent spatial disconnection between soil carbon, energy sources and the decomposer organisms, and in a variety of microhabitats that are more or less suitable for microbial growth and activity. However, current biogeochemical models account for C dynamics at the macroscale (cm, m) and consider time- and spatially averaged relationships between microbial activity and soil characteristics. Different modelling approaches have attempted to account for this microscale heterogeneity, based on considering either aggregates or pores as surrogates for microbial habitats. Innovative modelling approaches are based on an explicit representation of soil structure at the fine scale, i.e. at µm to mm scales: pore architecture and its saturation with water, and the localization of organic resources and of microorganisms. Three recent models are presented here that describe the heterotrophic activity of either bacteria or fungi and are based upon different strategies for representing the complex soil pore system (Mosaic, LBios and µFun). These models make it possible to rank the factors governing microbial activity in soil's heterogeneous architecture. The present limits of these approaches, and the challenges ahead, are discussed regarding the extensive information required about soils at the microscale and the need to upscale microbial functioning from the pore to the core scale.

  3. EXODUS II: A finite element data model

    Energy Technology Data Exchange (ETDEWEB)

    Schoof, L.A.; Yarberry, V.R.

    1994-09-01

    EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition) and postprocessing (results visualization), as well as for code-to-code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).

  4. Accounting models and devolution in the Italian public sector

    Directory of Open Access Journals (Sweden)

    Aldo Pavan

    2006-06-01

    In the 1990s Italy started a public sector administrative reform process consistent, in general terms, with the New Public Management movement. In particular, changes were introduced in the budgeting and accounting systems of the State, municipalities, health care bodies, etc. In the same years an institutional reform also started, and a strong devolution of powers began to be realized; a shift to a federal form of the State seems to be the goal. Starting from the challenges arising from the devolution process, the article asks (1) whether it is possible to find shared features in the reformed accounting systems of the different categories of public sector organisations, and in this way to shape one or more Italian accounting models, and (2) whether these models have an information capacity adequate to sustain the information needs (in terms of accountability, government co-ordination and decision making) emerging from the devolution process. The information needs in a devolved environment are identified; eleven budgeting and accounting systems are analyzed and compared. The level of consistency between the accounting and institutional reforms is also discussed.

  5. Accounting for Trust: A Conceptual Model for the Determinants of Trust in the Australian Public Accountant – SME Client Relationship

    Directory of Open Access Journals (Sweden)

    Michael Cherry

    2016-06-01

    This paper investigates trust as it relates to the relationship between Australia's public accountants and their SME clients. It describes the contribution of the accountancy profession to the SME market, as well as the key challenges faced by accountants and their SME clients. Following a review of prior scholarly studies, a working definition of trust as it relates to this important relationship is developed and presented. A further outcome of the prior academic work is a comprehensive conceptual model of the determinants of trust in the Australian public accountant – SME client relationship, which requires testing via empirical studies.

  6. Relations between Balance Sheet Policy and Accounting Policy in the Context of Different Accounting Models

    Directory of Open Access Journals (Sweden)

    Rafał Grabowski

    2010-12-01

    In the Polish professional literature the terms balance sheet policy (in German: Bilanzpolitik) and accounting policy are commonly used. The problem raised by the author is that there exist at least a few perspectives on their meaning and on their relations with each other. In some views balance sheet policy and accounting policy denote the same thing; in others there are differences between the two, although there is no consensus as to the nature of those differences. This lack of clarity about how balance sheet policy and accounting policy are explained, and how they relate, represents a research problem for both theory and practice. Theory is needed to codify the academic debate and systematize the terminology, while in practice it is the management board that holds responsibility for the financial statements, which are shaped by the accounting policy adopted by the entity. In this working paper the author seeks to point out the substance of balance sheet policy and accounting policy and to explain the differences between them. Although the topic has already been discussed in the professional literature, there have been no attempts to explain the substance of the two policies and their relations by reference to their origin, i.e. the accounting approaches from which they evolved.

  7. Supo Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Wass, Alexander Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-14

    This report describes the continuation of the Computational Fluid Dynamics (CFD) model of the Supo cooling system described in the report Supo Thermal Model Development [1] by Cynthia Buechler. The goal of this report is to estimate the natural convection heat transfer coefficient (HTC) of the system using the CFD results and to compare those results to the remaining past operational data. In addition, the correlation for determining radiolytic gas bubble size is reevaluated using the larger simulation sample size. The background, solution vessel geometry, mesh, material properties, and boundary conditions are developed in the same manner as in the previous report, except that the material properties and boundary conditions are determined using the appropriate experimental results for each individual power level.

  8. Principles of Public School Accounting. State Educational Records and Reports Series: Handbook II-B.

    Science.gov (United States)

    Adams, Bert K.; And Others

    This handbook discusses the following primary aspects of school accounting: Definitions and principles; opening the general ledger; recording the approved budget; a sample month of transactions; the balance sheet, monthly, and annual reports; subsidiary journals; payroll procedures; cafeteria fund accounting; debt service accounting; construction…

  9. Modeling and Analysis of NGC System using Ptolemy II

    Directory of Open Access Journals (Sweden)

    Archana Sreekumar

    2015-09-01

    Model-based system design has been used in real-time embedded systems for validation and testing during the development lifecycle. Two models of computation, the synchronous dataflow (SDF) model and the Discrete Event (DE) model, have been used, and finite state machines (FSM) have been integrated with the SDF and DE modeling domains to simulate the functionality of the system. Here, a case study of the resource-augmented Navigation, Guidance and Control (NGC) unit of the onboard computers in a satellite launch vehicle has been selected as a framework, and a fault-tolerant algorithm has been modeled and simulated with Ptolemy II. The feasibility of scheduling the fault-tolerant algorithm has been analyzed, and the dependencies existing between different components and processes in the system have been investigated. Future work consists of modeling the original functionality of the NGC units inside each state of the FSM, which can then be validated for correct performance. Non-deterministic communication and clock drifts can also be accounted for in the model.

  10. Meander migration modeling accounting for the effect of riparian vegetation

    Science.gov (United States)

    Eke, E.; Parker, G.

    2010-12-01

    A numerical model is proposed to study the development of meandering rivers so as to reproduce both the migration and the spatial/temporal width variation patterns observed in nature. The model comprises: a) a depth-averaged channel hydrodynamic/morphodynamic model developed using a two-parameter perturbation expansion technique that considers perturbations induced by curvature and spatial channel width variation, and b) a bank migration model that treats bank erosional and depositional processes separately. Unlike most previous meandering river models, where channel migration is characterized only in terms of bank erosion, channel dynamics are here defined at the channel banks, which are allowed to migrate independently via deposition/erosion based on the local flow field and bank characteristics. A bank erodes (deposits) if the near-bank Shields stress computed from the flow field is greater (less) than a specified threshold. This threshold Shields number is equivalent to the formative Shields stress characterizing bankfull flow. Excessive bank erosion is controlled by the natural armoring provided by cohesive/rooted slump blocks produced when a stream erodes into the lower non-cohesive part of a composite bank. Bank deposition is largely due to sediment trapping by vegetation; the resulting channel narrowing is related to both a natural rate of vegetal encroachment and the flow characteristics. This new model gives the channel freedom to vary in width both spatially and in time as it migrates, so accounting for the bi-directional coupling between vegetation and flow dynamics and reproducing more realistic planform geometries. Preliminary results based on the model are presented.
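
    The threshold rule for independent bank migration can be sketched as a simple explicit update. The coefficients and the linear response below are invented for illustration; in the paper the near-bank stresses come from the computed flow field:

```python
def migrate_banks(left, right, tau_left, tau_right, tau_form,
                  k_erode=0.1, k_dep=0.05):
    """One time step of independent bank migration (illustrative sketch).
    A bank erodes outward when its near-bank Shields stress exceeds the
    formative (bankfull) value tau_form, and accretes inward when below it."""
    def shift(tau):
        excess = tau - tau_form
        # erosion and deposition respond with different (invented) rates
        return k_erode * excess if excess > 0 else k_dep * excess
    new_left = left - shift(tau_left)     # smaller x = outward for left bank
    new_right = right + shift(tau_right)  # larger x = outward for right bank
    return new_left, new_right
```

    Because each bank responds to its own near-bank stress, the channel can widen, narrow, or translate sideways within a single step, which is what lets width vary along the reach and in time.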

  11. Generalized Stefan models accounting for a discontinuous temperature field

    Science.gov (United States)

    Danescu, A.

    We construct a class of generalized Stefan models able to account for a discontinuous temperature field across a nonmaterial interface. The resulting theory introduces a constitutive scalar interfacial field, denoted θ̄ and called the equivalent temperature of the interface. A classical procedure, based on the interfacial dissipation inequality, relates the interfacial energy release to the interfacial mass flux and restricts the equivalent temperature of the interface. We show that previously proposed theories are obtained as particular cases when θ̄ = ⟨θ⟩ or θ̄ = ⟨1/θ⟩^(-1) or, more generally, when θ̄ = ⟨θ^r⟩⟨1/θ^(1-r)⟩^(-1) for 0 ≤ r ≤ 1. We study, in a particular constitutive framework, the solidification of an undercooled liquid, and we give a sufficient condition for the existence of travelling wave solutions.

  12. Natural Resource Accounting Systems and Environmental Policy Modeling

    OpenAIRE

    Cabe, Richard; Johnson, Stanley R

    1990-01-01

    Natural resource accounting combines national income and product accounting concepts with the analysis of natural resource and environmental issues. This paper considers this approach for the RCA Appraisal required by the Soil and Water Resources Conservation Act (RCA). The recent natural resource accounting literature is examined in light of the requirements of the RCA Appraisal. The paper provides a critique of the economic content of the Second RCA Appraisal and develops a natural resource accounting …

  13. Accounting for Water Insecurity in Modeling Domestic Water Demand

    Science.gov (United States)

    Galaitsis, S. E.; Huber-lee, A. T.; Vogel, R. M.; Naumova, E.

    2013-12-01

    Water demand management uses price elasticity estimates to predict consumer demand in relation to water pricing changes, but studies have shown that many additional factors affect water consumption. Development scholars document the need for water security; however, much of the water security literature focuses on broad policies that can influence water demand. Previous domestic water demand studies have not considered how water security can affect a population's consumption behavior. This study is the first to model the influence of water insecurity on water demand. A subjective indicator scale measuring water insecurity among consumers in the Palestinian West Bank is developed and included as a variable to explore how perceptions of control, or lack thereof, impact consumption behavior and the resulting estimates of price elasticity. A multivariate regression model demonstrates the significance of a water insecurity variable for data sets encompassing disparate water access. When accounting for insecurity, the R-squared value improves and the marginal price a household is willing to pay becomes a significant predictor of the household's consumption quantity. The model indicates that, with all other variables held equal, a household will buy more water when its users are more water insecure. Though the reasons behind this trend require further study, the findings suggest broad policy implications by demonstrating that water distribution practices in scarcity conditions can promote consumer welfare and efficient water use.

  14. Capture-recapture survival models taking account of transients

    Science.gov (United States)

    Pradel, R.; Hines, J.E.; Lebreton, J.D.; Nichols, J.D.

    1997-01-01

    The presence of transient animals, common enough in natural populations, invalidates the estimation of survival by traditional capture-recapture (CR) models designed for the study of residents only. The study of transience is also interesting in itself. We thus develop here a class of CR models to describe the presence of transients. In order to assess the merits of this approach, we examine the bias of the traditional survival estimators in the presence of transients in relation to the power of different tests for detecting transients. We also compare the relative efficiency of an ad hoc approach to dealing with transients that leaves out the first observation of each animal. We then study a real example using lazuli bunting (Passerina amoena) and, in conclusion, discuss the design of an experiment aiming at the estimation of transience. In practice, the presence of transients is easily detected whenever the risk of bias is high. The ad hoc approach, which yields unbiased estimates for residents only, is satisfactory in a time-dependent context but poorly efficient when parameters are constant. The example shows that intermediate situations between strict 'residence' and strict 'transience' may exist in certain studies. Yet, most of the time, if the study design takes into account the expected length of stay of a transient, it should be possible to efficiently separate the two categories of animals.
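
    A toy simulation (assuming perfect detection, which real CR models do not require, and invented parameter values) illustrates both the bias and the ad hoc fix of dropping each animal's first observation:

```python
import random

def simulate(n=20000, phi=0.8, p_transient=0.3, occasions=4, seed=7):
    """Capture histories with perfect detection: all animals are marked on
    occasion 1; transients leave immediately after marking, residents
    survive each interval with probability phi."""
    rng = random.Random(seed)
    histories = []
    for _ in range(n):
        alive = rng.random() >= p_transient   # transients are never seen again
        h = [1]
        for _ in range(occasions - 1):
            alive = alive and rng.random() < phi
            h.append(1 if alive else 0)
        histories.append(h)
    return histories

def apparent_survival(histories, t):
    """Fraction of animals seen at occasion t that are seen at t + 1."""
    seen = [h for h in histories if h[t]]
    return sum(h[t + 1] for h in seen) / len(seen)

histories = simulate()
naive = apparent_survival(histories, 0)   # includes transients: biased low
ad_hoc = apparent_survival(histories, 1)  # conditions on a prior recapture
```

    The naive estimate converges to (1 - p_transient) * phi = 0.56, while conditioning on animals already recaptured once removes transients and recovers phi = 0.8.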

  15. Islamic Theoretical Intertemporal Model of the Current Account

    Directory of Open Access Journals (Sweden)

    Hassan Belkacem Ghassan

    2016-06-01

    This paper aims to develop an Islamic intertemporal model of the current account based on the prevailing theoretical and empirical literature on the present value model of the current account (PVMCA) (Obstfeld and Rogoff, 1996; Cerrato et al., 2014). The proposed model is based on a budget constraint over present and future consumption, which depends on the obligatory Zakat on income and assets, the rate of return on owned assets, and the inheritance linking each generation to the next. Using a logarithmic utility function, featuring a unitary elasticity of intertemporal substitution and a unitary coefficient of relative risk aversion, we show through the Euler equation of consumption that there is an inverse relationship between consumption growth from the last age to the first and the Zakat rate on assets. The consequences of this result are that Zakat on assets disciplines the consumer toward greater rationality in consumption and leaves additional marginal assets for future generations. Assuming a unitary subjective discount rate, we show that the higher the rate of return on assets, the faster consumption grows between today and tomorrow. Through the budget constraint, if the Zakat rate on Zakatable assets is greater than the Zakat rate on income, this leads to a relative expansion in the private consumption of the wealthy group. We also point out that an increase in the rate of return on assets can drive current consumption up or down, because the substitution and income effects work in opposite directions.
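
    As a stylized reading of the Euler-equation result (an illustrative toy, not the paper's actual derivation), log utility makes the consumption growth factor proportional to the net return after Zakat on assets, so a higher Zakat rate on assets slows consumption growth:

```python
def consumption_growth(beta=1.0, r=0.06, zakat_assets=0.025):
    """Stylized log-utility Euler condition: the consumption growth factor
    c_{t+1}/c_t equals beta times the return factor net of the Zakat rate
    levied on the asset stock. All parameter values are hypothetical."""
    return beta * (1 + r - zakat_assets)
```

    Comparing the growth factor with and without the asset Zakat rate reproduces the abstract's inverse relationship in this stylized form.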

  16. Expert System Models in the Companies' Financial and Accounting Domain

    CERN Document Server

    Mates, D; Bostan, I; Grosu, V

    2010-01-01

    The present paper is based on studying, analyzing and implementing expert systems in the financial and accounting domain of companies, describing how such informational systems can be used in multinational companies, public interest institutions, and small and medium-sized economic entities in order to optimize managerial decisions and make the financial-accounting function more efficient. The purpose of this paper is to identify the economic requirements of the entities, based on the accounting instruments already in use and the management software that could allow control of economic processes and patrimonial assets.

  17. Fitting the Two-Higgs-Doublet model of type II

    CERN Document Server

    Eberhardt, Otto

    2014-01-01

    We present the current status of the Two-Higgs-Doublet model of type II. Taking into account all available relevant information, we exclude at $95$% CL sizeable deviations from the so-called alignment limit, in which all couplings of the light CP-even Higgs boson $h$ are Standard-Model-like. While we can set a lower limit of $240$ GeV on the mass of the pseudoscalar Higgs boson at $95$% CL, the mass of the heavy CP-even Higgs boson $H$ can be even lighter than $200$ GeV. The strong constraints on the model parameters also set limits on the triple Higgs couplings: the $hhh$ coupling in the Two-Higgs-Doublet model of type II cannot be larger than in the Standard Model, while the $hhH$ coupling can be at most $2.5$ times the size of the Standard Model $hhh$ coupling, assuming an $H$ mass below $1$ TeV. The selection of benchmark scenarios which maximize specific effects within the allowed regions for further collider studies is illustrated for the $H$ branching fraction to fermions and gauge bosons. As an example…

  18. APPLICATIONS OF MATHEMATICAL CONTROL THEORY TO ACCOUNTING AND BUDGETING: II. THE CONTINUOUS JOINT TRADING MODEL.

    Science.gov (United States)

    The paper applies mathematical control theory to accounting network flows, where the flow rates are constrained by linear inequalities. The cross-section phase of the problem is characterized by linear programming, and the dynamic phase by control theory. (Author)

  19. Accounting for Epistemic Uncertainty in PSHA: Logic Tree and Ensemble Model

    Science.gov (United States)

    Taroni, M.; Marzocchi, W.; Selva, J.

    2014-12-01

    The logic tree scheme is the probabilistic framework that has been widely used in recent decades to take into account epistemic uncertainties in probabilistic seismic hazard analysis (PSHA). Notwithstanding the vital importance for PSHA of properly incorporating epistemic uncertainties, we argue that the use of the logic tree in a PSHA context has conceptual and practical drawbacks. Although some of these drawbacks have been reported in the past, a careful evaluation of their impact on PSHA is still lacking. This is the goal of the present work. In brief, we show that i) PSHA practice does not meet the assumptions that stand behind the logic tree scheme; ii) the output of a logic tree is often misinterpreted and/or misleading, e.g., the use of percentiles (median included) in a logic tree scheme raises theoretical difficulties from a probabilistic point of view; and iii) even when the assumptions that stand behind a logic tree are actually met, several problems arise in testing any PSHA model. We suggest a different strategy - based on ensemble modeling - to account for epistemic uncertainties in a more proper probabilistic framework. Finally, we show that in many practical PSHA applications, the logic tree is de facto loosely applied to build sound ensemble models.
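
    A minimal sketch of the ensemble idea the abstract advocates: instead of reading off a logic-tree branch or percentile, treat alternative hazard models as a weighted mixture. The hazard curves and weights below are invented for illustration; they are not from the paper.

```python
# Combine hazard curves from alternative models as a weighted ensemble.
# Curves and weights are invented; each curve lists annual exceedance
# probabilities at four ground-motion intensity levels.

def ensemble_mean(curves, weights):
    """Weighted mean exceedance probability at each intensity level."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    n = len(curves[0])
    return [sum(w * c[i] for w, c in zip(weights, curves)) for i in range(n)]

model_a = [0.10, 0.050, 0.010, 0.0010]
model_b = [0.12, 0.060, 0.015, 0.0020]
model_c = [0.08, 0.040, 0.008, 0.0005]
weights = [0.5, 0.3, 0.2]

ensemble = ensemble_mean([model_a, model_b, model_c], weights)
print(ensemble)
```

The ensemble curve is itself a probability distribution over hazard, which is what makes it testable against observed exceedances, unlike a single logic-tree percentile.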

  20. A Unifying Modeling of Plant Shoot Gravitropism With an Explicit Account of the Effects of Growth

    Directory of Open Access Journals (Sweden)

    Renaud eBastien

    2014-04-01

    Full Text Available Gravitropism, the slow reorientation of plant growth in response to gravity, is a major determinant of the form and posture of land plants. Recently a universal model of shoot gravitropism, the AC model, was presented, in which the dynamics of the tropic movement is determined only by the two opposing controls of (i) graviception, which tends to curve the plant towards the vertical, and (ii) proprioception, which tends to keep the stem straight. This model was found valid over a large range of species and over two orders of magnitude in organ size. However, the motor of the movement, elongation, was neglected in the AC model. Taking explicit growth effects into account, however, requires consideration of the material derivative, i.e. the rate of change of the curvature of expanding and convected organ elements. Here we show that it is possible to rewrite the material equation of the curvature in a compact simplified form that expresses the curvature variation directly as a function of the median elongation and of the distribution of the differential growth. Through this extended model, called the ACE model, two main destabilizing effects of growth on the tropic movement are identified: (i) the passive orientation drift, which occurs when a curved element elongates without differential growth, and (ii) the fixed curvature, which occurs when an element leaves the elongation zone and is no longer able to change its curvature actively. By comparing the AC and ACE models to experiments, these two effects were however found negligible, revealing a probable selection for rapid convergence to the steady-state shape during the tropic movement so as to escape the destabilizing effects of growth, involving in particular a selection on proprioceptive sensitivity. The simplified AC model can then be used to analyze gravitropism and posture control in actively elongating plant organs without significant loss of information.
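
    For reference, the AC model invoked in this abstract is commonly quoted (following Bastien et al.'s published formulation) in the small-deflection form below; the symbols are those of that literature, not defined in the abstract itself:

```latex
\frac{\partial C(s,t)}{\partial t} = -\beta\, A(s,t) - \gamma\, C(s,t)
```

    Here C(s,t) is the local curvature at arc length s, A(s,t) the local inclination from the vertical, β the graviceptive sensitivity and γ the proprioceptive sensitivity. The ACE extension described above replaces the partial time derivative with the material derivative DC/Dt, so that convection and dilution of curvature by elongation appear explicitly.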

  1. Line emission from H II blister models

    Science.gov (United States)

    Rubin, R. H.

    1984-01-01

    Numerical techniques to calculate the thermal and geometric properties of line emission from H II 'blister' regions are presented. It is assumed that the density distributions of the H II regions are a function of two dimensions, with rotational symmetry specifying the shape in three dimensions. The thermal and ionization equilibrium equations of the problem are solved by spherical modeling, and a spherical sector approximation is used to simplify the three-dimensional treatment of diffuse ionizing radiation. The global properties of H II 'blister' regions near the edges of a molecular cloud are simulated by means of the geometry/density distribution, and the results are compared with observational data. It is shown that there is a monotonic increase of peak surface brightness from the i = 0 deg (pole-on) observational position to the i = 90 deg (edge-on) position. The enhancement of the line peak intensity from the edge-on to the pole-on positions is found to depend on the density, stratification, ionization, and electron temperature weighting. It is found that as i increases, the position of peak line brightness of the lower excitation species is displaced to the high-density side of the high excitation species.

  2. Models and Rules of Evaluation in International Accounting

    OpenAIRE

    Liliana Feleaga; Niculae Feleaga

    2006-01-01

    Accounting procedures cannot be analyzed without a prior evaluation. Value is in general a very subjective issue, usually the result of a monetary evaluation made of a specific asset, group of assets or entities, or of some rendered services. Within the economic sciences, value has a long history of its own. In accounting, the concept of value had a late and fragile start. The term value must not be misinterpreted as being the same thing as cost, even though value is freque...

  3. Material control in nuclear fuel fabrication facilities. Part II. Accountability, instrumentation and measurement techniques in fuel fabrication facilities

    Energy Technology Data Exchange (ETDEWEB)

    Borgonovi, G.M.; McCartin, T.J.; McDaniel, T.; Miller, C.L.; Nguyen, T.

    1978-01-01

    This report describes the measurement techniques, the instrumentation, and the procedures used in accountability and control of nuclear materials, as they apply to fuel fabrication facilities. A general discussion is given of instrumentation and measurement techniques which are presently used or being considered for fuel fabrication facilities. Those aspects which are most significant from the point of view of satisfying regulatory constraints have been emphasized. Sensors and measurement devices have been discussed, together with their interfacing into a computerized system designed to permit real-time data collection and analysis. Estimates of accuracy and precision of measurement techniques have been given, and, where applicable, estimates of associated costs have been presented. A general description of material control and accounting is also included. In this section, the general principles of nuclear material accounting have been reviewed first (closure of material balance). After a discussion of the most current techniques used to calculate the limit of error on inventory difference, a number of advanced statistical techniques are reviewed. The rest of the section deals with some regulatory aspects of data collection and analysis, for accountability purposes, and with the overall effectiveness of accountability in detecting diversion attempts in fuel fabrication facilities. A specific example of application of the accountability methods to a model fuel fabrication facility is given. The effect of random and systematic errors on the total material uncertainty has been discussed, together with the effect on uncertainty of the length of the accounting period.

  4. MODELING OF A STRUCTURED PLAN OF ACCOUNTS IN PROCEDURES OF INSOLVENCY AND BANKRUPTCY

    Directory of Open Access Journals (Sweden)

    Chalenko R. V.

    2014-02-01

    The article details the problems of constructing a structured plan of accounts in bankruptcy and insolvency proceedings. The proposed model is based on two principal positions: first, the structured chart of accounts has its own dimension; second, it is built on the principles of architectonics. Constructing the structured chart of accounts architectonically makes it possible to integrate managerial, strategic and transactional accounting, and to make accounting transparent and efficient.

  5. Accounting for heterogeneity of public lands in hedonic property models

    Science.gov (United States)

    Charlotte Ham; Patricia A. Champ; John B. Loomis; Robin M. Reich

    2012-01-01

    Open space lands, national forests in particular, are usually treated as homogeneous entities in hedonic price studies. Failure to account for the heterogeneous nature of public open spaces may result in inappropriate inferences about the benefits of proximate location to such lands. In this study the hedonic price method is used to estimate the marginal values for...

  6. Accounting for Recoil Effects in Geochronometers: A New Model Approach

    Science.gov (United States)

    Lee, V. E.; Huber, C.

    2012-12-01

    dated grain is a major control on the magnitude of recoil loss, the first feature is the ability to calculate recoil effects on isotopic compositions for realistic, complex grain shapes and surface roughnesses. This is useful because natural grains may have irregular shapes that do not conform to simple geometric descriptions. Perhaps more importantly, the surface area over which recoiled nuclides are lost can be significantly underestimated when grain surface roughness is not accounted for, since the recoil distances can be of similar characteristic lengthscales to surface roughness features. The second key feature is the ability to incorporate dynamical geologic processes affecting grain surfaces in natural settings, such as dissolution and crystallization. We describe the model and its main components, and point out implications for the geologically-relevant chronometers mentioned above.

  7. Kriging interpolating cosmic velocity field. II. Taking anisotropies and multistreaming into account

    Science.gov (United States)

    Yu, Yu; Zhang, Jun; Jing, Yipeng; Zhang, Pengjie

    2017-02-01

    Measuring the volume-weighted peculiar velocity statistics from inhomogeneously and sparsely distributed galaxies/halos, by existing velocity assignment methods, suffers from a significant sampling artifact. As an alternative, the Kriging interpolation based on Gaussian processes was introduced and evaluated [Y. Yu, J. Zhang, Y. Jing, and P. Zhang, Phys. Rev. D 92, 083527 (2015), 10.1103/PhysRevD.92.083527]. Unfortunately, the most straightforward application of Kriging does not perform better than the existing methods in the literature. In this work, we investigate two physically motivated extensions. The first takes into account the anisotropic velocity correlations. The second introduces the nugget effect, on account of multistreaming of the velocity field. We find that the performance is indeed improved. For sparsely sampled data [nP≲6 ×10-3(h-1 Mpc)-3], where the sampling artifact is the most severe, the improvement is significant and is two-fold: 1) the scale of reliable measurement of the velocity power spectrum is extended by a factor ~1.6, and 2) the dependence on the velocity correlation prior is weakened by a factor of ~2. We conclude that such extensions are desirable for accurate velocity assignment by Kriging.
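
    The role of the nugget term can be seen in a toy Gaussian-process ("simple Kriging") interpolator: adding a nugget variance to the covariance diagonal makes the interpolator distrust individual (multistream-scattered) samples and shrink toward the prior mean. This is a generic sketch with an assumed squared-exponential correlation and invented sample values, not the paper's pipeline.

```python
import math

def cov(r, sigma2=1.0, ell=5.0):
    """Assumed squared-exponential velocity correlation function."""
    return sigma2 * math.exp(-0.5 * (r / ell) ** 2)

def krige(x_data, v_data, x0, nugget=0.0):
    """Simple-Kriging prediction at x0 from two samples (2x2 solve by hand).

    The nugget variance is added only to the diagonal of the data-data
    covariance matrix, mimicking uncorrelated small-scale (multistream) noise.
    """
    a = cov(0.0) + nugget                      # C(x1,x1) + nugget
    b = cov(abs(x_data[0] - x_data[1]))        # C(x1,x2)
    d = cov(0.0) + nugget                      # C(x2,x2) + nugget
    det = a * d - b * b
    k1 = cov(abs(x0 - x_data[0]))
    k2 = cov(abs(x0 - x_data[1]))
    # Kriging weights w = C^{-1} k, via the closed-form 2x2 inverse
    w1 = (d * k1 - b * k2) / det
    w2 = (-b * k1 + a * k2) / det
    return w1 * v_data[0] + w2 * v_data[1]

v_exact = krige([0.0, 10.0], [1.0, -1.0], 0.0, nugget=0.0)
v_nugget = krige([0.0, 10.0], [1.0, -1.0], 0.0, nugget=0.5)
```

With no nugget the predictor reproduces the sample at x = 0 exactly; with a nugget it treats that sample as noisy and shrinks toward the (zero) prior mean.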

  8. Accounting for choice of measurement scale in extreme value modeling

    OpenAIRE

    Wadsworth, J. L.; Tawn, J. A.; Jonathan, P.

    2010-01-01

    We investigate the effect that the choice of measurement scale has upon inference and extrapolation in extreme value analysis. Separate analyses of variables from a single process on scales which are linked by a nonlinear transformation may lead to discrepant conclusions concerning the tail behavior of the process. We propose the use of a Box--Cox power transformation incorporated as part of the inference procedure to account parametrically for the uncertainty surrounding the scale of extrapo...

  9. SEBAL-A: A Remote Sensing ET Algorithm that Accounts for Advection with Limited Data. Part II: Test for Transferability

    Directory of Open Access Journals (Sweden)

    Mcebisi Mkhwanazi

    2015-11-01

    Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET when there is advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected energy, which required the development of a wind function. In Part I, the modified SEBAL model (SEBAL-A) was developed and validated on well-watered alfalfa of a standard height of 40–60 cm. In this Part II, SEBAL-A was tested on different crops and irrigation treatments in order to determine its performance under varying conditions. The crops used for the transferability test were beans (Phaseolus vulgaris L.), wheat (Triticum aestivum L.) and corn (Zea mays L.). The estimated ET using SEBAL-A was compared to actual ET measured using a Bowen Ratio Energy Balance (BREB) system. Results indicated that SEBAL-A estimated ET fairly well for beans and wheat, showing only a slight underestimation, with a Mean Bias Error (MBE) of −0.7 mm·d−1 (−11.3%), a Root Mean Square Error (RMSE) of 0.82 mm·d−1 (13.9%) and a Nash-Sutcliffe Coefficient of Efficiency (NSCE) of 0.64. On corn, SEBAL-A resulted in an ET estimation error MBE of −0.7 mm·d−1 (−9.9%), a RMSE of 1.59 mm·d−1 (23.1%) and NSCE = 0.24. This result shows an improvement on the original SEBAL model, which for the same data resulted in an ET MBE of −1.4 mm·d−1 (−20.4%), a RMSE of 1.97 mm·d−1 (28.8%) and a NSCE of −0.18. When SEBAL-A was tested on only fully irrigated corn, it performed well, resulting in no bias, i.e., MBE of 0.0 mm·d−1, RMSE of 0.78 mm·d−1 (10.7%) and NSCE of 0.82. The SEBAL-A model showed less or no improvement on corn that was either water-stressed or at early stages of growth. The errors incurred under these conditions were not due to unaccounted-for advection but rather to the nature of SEBAL and SEBAL-A being single-source energy balance models and
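
    The error statistics quoted above (MBE, RMSE and the Nash-Sutcliffe coefficient of efficiency) follow their standard definitions; a small sketch with invented daily ET values (not the paper's data):

```python
import math

def mbe(est, obs):
    """Mean Bias Error: mean of (estimated - observed)."""
    return sum(e - o for e, o in zip(est, obs)) / len(obs)

def rmse(est, obs):
    """Root Mean Square Error."""
    return math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / len(obs))

def nsce(est, obs):
    """Nash-Sutcliffe Coefficient of Efficiency: 1 = perfect, <0 = worse than mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - e) ** 2 for e, o in zip(est, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [5.0, 6.0, 7.0, 6.5]   # measured ET, mm/day (invented)
est = [4.5, 5.8, 6.9, 6.0]   # modelled ET, mm/day (invented)
print(mbe(est, obs), rmse(est, obs), nsce(est, obs))
```

A negative MBE, as for the model here, signals systematic underestimation even when RMSE looks acceptable.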

  10. Spectral modeling of Type II SNe

    Science.gov (United States)

    Dessart, Luc

    2015-08-01

    The red supergiant phase represents the final stage of evolution in the life of moderate-mass (8-25 Msun) massive stars. Hidden from view, the core changes its structure considerably, progressing through the advanced stages of nuclear burning, and eventually becomes degenerate. Upon reaching the Chandrasekhar mass, this Fe or ONeMg core collapses, leading to the formation of a proto neutron star. A type II supernova results if the shock that forms at core bounce eventually wins over the envelope accretion and reaches the progenitor surface. The electromagnetic display of such core-collapse SNe starts with this shock breakout, and persists for months as the ejecta releases the energy deposited initially by the shock or continuously through radioactive decay. Over a timescale of weeks to months, the originally optically-thick ejecta thins out and turns nebular. SN radiation contains a wealth of information about the explosion physics (energy, explosive nucleosynthesis) and the progenitor properties (structure and composition). Polarised radiation also offers signatures that can help constrain the morphology of the ejecta. In this talk, I will review the current status of type II SN spectral modelling, and emphasise that a proper solution requires a time-dependent treatment of the radiative transfer problem. I will discuss the wealth of information that can be gleaned from spectra as well as light curves, from both early times (photospheric phase) and late times (nebular phase). I will discuss the diversity of Type II SNe properties and how they are related to the diversity of the red supergiant stars from which they originate. SN radiation offers an alternate means of constraining the properties of red supergiant stars. To wrap up, I will illustrate how SNe II-P can also be used as probes, for example to constrain the metallicity of their environment.

  11. Multidimensional chemical modelling, II. Irradiated outflow walls

    CERN Document Server

    Bruderer, Simon; Doty, Steven D; van Dishoeck, Ewine F; Bourke, Tyler L

    2009-01-01

    Observations of the high-mass star forming region AFGL 2591 reveal a large abundance of CO+, a molecule known to be enhanced by far UV (FUV) and X-ray irradiation. In chemical models assuming a spherically symmetric envelope, the volume of gas irradiated by protostellar FUV radiation is very small due to the high extinction by dust. The abundance of CO+ is thus underpredicted by orders of magnitude. In a more realistic model, FUV photons can escape through an outflow region and irradiate gas at the border to the envelope. Thus, we introduce the first 2D axi-symmetric chemical model of the envelope of a high-mass star forming region to explain the CO+ observations as a prototypical FUV tracer. The model assumes an axi-symmetric power-law density structure with a cavity due to the outflow. The local FUV flux is calculated by a Monte Carlo radiative transfer code taking scattering on dust into account. A grid of precalculated chemical abundances, introduced in the first part of this series of papers, is used to ...

  12. 76 FR 29249 - Medicare Program; Pioneer Accountable Care Organization Model: Request for Applications

    Science.gov (United States)

    2011-05-20

    ... participate in the Pioneer Accountable Care Organization Model for a period beginning in 2011 and ending...://innovations.cms.gov/areas-of-focus/seamless-and-coordinated-care-models/pioneer-aco . Application Submission... Accountable Care Organization Model or the application process. SUPPLEMENTARY INFORMATION: I. Background...

  13. 76 FR 34712 - Medicare Program; Pioneer Accountable Care Organization Model; Extension of the Submission...

    Science.gov (United States)

    2011-06-14

    ...: This notice extends the deadlines for the submission of the Pioneer Accountable Care Organization Model...-coordinated-care-models/pioneer-aco . Application Submission Deadline: Applications must be postmarked on or before August 19, 2011. The Pioneer Accountable Care Organization Model ] Application is available...

  14. MODELLING THE LESOTHO ECONOMY: A SOCIAL ACCOUNTING MATRIX APPROACH

    Directory of Open Access Journals (Sweden)

    Yonas Tesfamariam Bahta

    2013-07-01

    Using a 2000 Social Accounting Matrix (SAM) for Lesotho, this paper investigates the key features of the Lesotho economy and the role played by the agricultural sector. A novel feature of the SAM is the elaborate disaggregation of the agricultural sector into finer subcategories. The fundamental importance of agricultural development emerges clearly from a descriptive review and from SAM multiplier analysis. Agriculture is dominant with respect to income generation and value of production: it contributes 23 percent of gross domestic product and 12 percent of the total value of production, and employs 26 percent of labour and 24 percent of capital. The construction sector has the highest open SAM output multiplier (1.588) and SAM output multiplier (1.767). The household multipliers indicate that in the rural and urban areas, agriculture and mining respectively generate most household income. Agriculture has the highest employment coefficient. The agriculture and mining sectors also have the largest employment multipliers in Lesotho.
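
    The output multipliers quoted above come from standard SAM/input-output multiplier analysis, where a sector's multiplier is a column sum of the Leontief inverse (I - A)^{-1}. A minimal illustration with an invented two-sector coefficient matrix (not Lesotho's actual SAM):

```python
# Output multipliers for a toy two-sector economy. A holds expenditure
# (technical) coefficients derived from a SAM; numbers are invented.

def leontief_inverse_2x2(A):
    """(I - A)^{-1} for a 2x2 coefficient matrix, via the closed-form inverse."""
    a, b = 1.0 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1.0 - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

A = [[0.2, 0.3],   # hypothetical expenditure shares
     [0.1, 0.4]]
L = leontief_inverse_2x2(A)

# Sector j's output multiplier is the j-th column sum of the Leontief inverse
multipliers = [L[0][j] + L[1][j] for j in range(2)]
print(multipliers)
```

Each multiplier says how much total economy-wide output rises per unit of exogenous final demand injected into that sector.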

  15. Resource Allocation Models and Accountability: A Jamaican Case Study

    Science.gov (United States)

    Nkrumah-Young, Kofi K.; Powell, Philip

    2008-01-01

    Higher education institutions (HEIs) may be funded privately, by the state or by a mixture of the two. Nevertheless, any state financing of HE necessitates a mechanism to determine the level of support and the channels through which it is to be directed; that is, a resource allocation model. Public funding, through resource allocation models,…

  16. Urea/thiourea derivatives and Zn(II)-DPA complex as receptors for anionic recognition—A brief account

    Indian Academy of Sciences (India)

    Priyadip Das; Prasenjit Mahato; Amrita Ghosh; Amal K Mandal; Tanmay Banerjee; Sukdeb Saha; Amitava Das

    2011-03-01

    This review covers a few examples of anion complexation chemistry, with a special focus on urea/thiourea-based receptors and Zn(II)-dipicolylamine-based receptors. The article focuses in particular on structural aspects of the receptors and the anions for obtaining the desired specificity along with an efficient receptor-anion interaction. Two types of receptors are described in this brief account: the first are the strong hydrogen-bond-donor urea/thiourea derivatives, which bind anionic analytes through hydrogen-bonded interactions, while the second are coordination complexes, in which the anion coordinates to the metal centre. In both cases anion binding modulates the energy gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), and thereby the spectroscopic response. An appropriate choice of the signalling unit may allow probing of the anion binding phenomena through visual detection.

  17. Accounting for ecosystem services in Life Cycle Assessment, Part II: toward an ecologically based LCA.

    Science.gov (United States)

    Zhang, Yi; Baral, Anil; Bakshi, Bhavik R

    2010-04-01

    Despite the essential role of ecosystem goods and services in sustaining all human activities, they are often ignored in engineering decision making, even in methods that are meant to encourage sustainability. For example, conventional Life Cycle Assessment focuses on the impact of emissions and consumption of some resources. While aggregation and interpretation methods are quite advanced for emissions, similar methods for resources have been lagging, and most ignore the role of nature. Such oversight may even result in perverse decisions that encourage reliance on deteriorating ecosystem services. This article presents a step toward including the direct and indirect role of ecosystems in LCA, and a hierarchical scheme to interpret their contribution. The resulting Ecologically Based LCA (Eco-LCA) includes a large number of provisioning, regulating, and supporting ecosystem services as inputs to a life cycle model at the process or economy scale. These resources are represented in diverse physical units and may be compared via their mass, fuel value, industrial cumulative exergy consumption, or ecological cumulative exergy consumption or by normalization with total consumption of each resource or their availability. Such results at a fine scale provide insight about relative resource use and the risk and vulnerability to the loss of specific resources. Aggregate indicators are also defined to obtain indices such as renewability, efficiency, and return on investment. An Eco-LCA model of the 1997 economy is developed and made available via the web (www.resilience.osu.edu/ecolca). An illustrative example comparing paper and plastic cups provides insight into the features of the proposed approach. The need for further work in bridging the gap between knowledge about ecosystem services and their direct and indirect role in supporting human activities is discussed as an important area for future work.

  18. Accountability: a missing construct in models of adherence behavior and in clinical practice.

    Science.gov (United States)

    Oussedik, Elias; Foy, Capri G; Masicampo, E J; Kammrath, Lara K; Anderson, Robert E; Feldman, Steven R

    2017-01-01

    Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients' motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8-12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on the Self-Determination Theory, accountability can be considered in a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability) to patients' autonomous internal desire to please a respected health care provider (autonomous accountability), the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura's Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well as the testing and refinement of adherence interventions that make use of this critical determinant of human behavior.

  19. Nonlinear model accounting for minor hysteresis of embedded SMA actuators

    Institute of Scientific and Technical Information of China (English)

    YANG Kai; GU Chenglin

    2007-01-01

    A quantitative index, the martensite fraction, was used to describe the degree of phase transformation of a shape memory alloy (SMA). On the basis of the martensite fraction, a nonlinear analysis model for major and minor hysteresis loops was developed. The model adopts two exponential equations to calculate the martensite fractions for cooling and heating, respectively. The martensite fractions were derived as the relevant parameters were adjusted in time according to continuity, common-initial and common-limit constraints. By use of the linear relationship between the curvature of the embedded SMA actuator and the SMA's martensite fraction, the curvature was determined. The results of simulations and experiments prove the validity and accuracy of the model.
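
    The abstract mentions two exponential equations for the martensite fraction but does not give them. As a hedged illustration only, the sketch below uses Tanaka-type exponential transformation kinetics, a common exponential form for SMA phase fractions; it is not the authors' model, and the transformation temperatures are invented.

```python
import math

# Invented transformation temperatures (degC), for illustration only
M_s, M_f = 50.0, 30.0   # martensite start / finish (cooling)
A_s, A_f = 60.0, 80.0   # austenite start / finish (heating)

# Fit constants so each transformation is 99% complete at M_f / A_f
a_M = math.log(0.01) / (M_s - M_f)   # negative, since M_s > M_f
a_A = math.log(0.01) / (A_s - A_f)   # positive, since A_s < A_f

def xi_cooling(T):
    """Martensite fraction while cooling (austenite -> martensite)."""
    if T >= M_s:
        return 0.0
    return 1.0 - math.exp(a_M * (M_s - T))

def xi_heating(T):
    """Martensite fraction while heating (martensite -> austenite)."""
    if T <= A_s:
        return 1.0
    return math.exp(a_A * (A_s - T))
```

With the martensite fraction in hand, the paper's linear curvature-fraction relationship would map this index directly to actuator curvature.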

  20. Modeling tools to Account for Ethanol Impacts on BTEX Plumes

    Science.gov (United States)

    Widespread usage of ethanol in gasoline leads to impacts at leak sites which differ from those of non-ethanol gasolines. The presentation reviews current research results on the distribution of gasoline and ethanol, biodegradation, phase separation and cosolvency. Model results f...

  1. A Historical Account of the Hypodermic Model in Mass Communication.

    Science.gov (United States)

    Bineham, Jeffery L.

    1988-01-01

    Critiques different historical conceptions of mass communication research. Argues that the different conceptions of the history of mass communication research, and of the hypodermic model (viewing the media as an all-powerful and direct influence on society), influence the theoretical and methodological choices made by mass media scholars. (MM)

  2. Applying the International Medical Graduate Program Model to Alleviate the Supply Shortage of Accounting Doctoral Faculty

    Science.gov (United States)

    HassabElnaby, Hassan R.; Dobrzykowski, David D.; Tran, Oanh Thikie

    2012-01-01

    Accounting has been faced with a severe shortage in the supply of qualified doctoral faculty. Drawing upon the international mobility of foreign scholars and the spirit of the international medical graduate program, this article suggests a model to fill the demand in accounting doctoral faculty. The underlying assumption of the suggested model is…

  3. 76 FR 33306 - Medicare Program; Pioneer Accountable Care Organization Model, Request for Applications; Correction

    Science.gov (United States)

    2011-06-08

    ... Care Organization Model: Request for Applications.'' FOR FURTHER INFORMATION CONTACT: Maria Alexander... http://innovations.cms.gov/areas-of-focus/seamless-and-coordinated-care-models/pioneer-aco... HUMAN SERVICES Centers for Medicare & Medicaid Services Medicare Program; Pioneer Accountable...

  4. Comparison with CLPX II airborne data using DMRT model

    Science.gov (United States)

    Xu, X.; Liang, D.; Andreadis, K.M.; Tsang, L.; Josberger, E.G.

    2009-01-01

    In this paper, we consider a physically based model that uses numerical solutions of Maxwell's equations in three-dimensional simulations and applies them within dense media radiative transfer (DMRT) theory. The model is validated against two datasets from the second Cold Land Processes Experiment (CLPX II), in Alaska and Colorado. The data were all obtained from Ku-band (13.95 GHz) observations using the airborne imaging polarimetric scatterometer (POLSCAT). Snow is a densely packed medium. To take into account collective scattering and incoherent scattering, the analytical quasi-crystalline approximation (QCA) and the numerical Maxwell equation method of 3-D simulation (NMM3D) are used to calculate the extinction coefficient and phase matrix. The DMRT equations were solved by an iterative solution up to second order in the case of small optical thickness, and a full multiple-scattering solution, obtained by decomposing the diffuse intensities into Fourier series, was used when the optical thickness exceeded unity. It is shown that the model predictions agree with the field experiment in both co-polarization and cross-polarization. For the Alaska region, the input snow structure data were obtained by in situ ground observations, while for the Colorado region we combined the VIC model to obtain the snow profile. © 2009 IEEE.

  5. Photoionization models for giant H II regions

    Directory of Open Access Journals (Sweden)

    Gra´zyna Stasi´nska

    2000-01-01

    We review the sources of uncertainty in photoionization models of giant H II regions. We also discuss the electron temperature problem in the light of model fits to three giant H II regions.

  6. Towards accounting for dissolved iron speciation in global ocean models

    Directory of Open Access Journals (Sweden)

    A. Tagliabue

    2011-10-01

    The trace metal iron (Fe) is now routinely included in state-of-the-art ocean general circulation and biogeochemistry models (OGCBMs) because of its key role as a limiting nutrient in regions of the world ocean important for carbon cycling and air-sea CO2 exchange. However, the complexities of the seawater Fe cycle, which impact its speciation and bioavailability, are simplified in such OGCBMs due to gaps in understanding and to avoid high computational costs. In a similar fashion to inorganic carbon speciation, we outline a means by which the complex speciation of Fe can be included in global OGCBMs in a reasonably cost-effective manner. We construct an Fe speciation model based on hypothesised relationships between rate constants and environmental variables (temperature, light, oxygen, pH, salinity) and assumptions regarding the binding strengths of Fe-complexing organic ligands, and test hypotheses regarding their distributions. As a result, we find that the global distribution of different Fe species is tightly controlled by spatio-temporal environmental variability and the distribution of Fe-binding ligands. Impacts on bioavailable Fe are highly sensitive to assumptions regarding which Fe species are bioavailable and how those species vary in space and time. When forced by representations of future ocean circulation and climate we find large changes to the speciation of Fe governed by pH-mediated changes to redox kinetics. We speculate that these changes may exert selective pressure on phytoplankton Fe uptake strategies in the future ocean. In future work, more information on the sources and sinks of ocean Fe ligands, their bioavailability, the cycling of colloidal Fe species and the kinetics of Fe-surface coordination reactions would be invaluable. We hope our modeling approach can provide a means by which new observations of Fe speciation can be tested against hypotheses of the processes present in governing the ocean Fe cycle in an

  7. Towards accounting for dissolved iron speciation in global ocean models

    Directory of Open Access Journals (Sweden)

    A. Tagliabue

    2011-03-01

    Full Text Available The trace metal iron (Fe) is now routinely included in state-of-the-art ocean general circulation and biogeochemistry models (OGCBMs) because of its key role as a limiting nutrient in regions of the world ocean important for carbon cycling and air-sea CO2 exchange. However, the complexities of the seawater Fe cycle, which impact its speciation and bioavailability, are highly simplified in such OGCBMs to avoid high computational costs. In a similar fashion to inorganic carbon speciation, we outline a means by which the complex speciation of Fe can be included in global OGCBMs in a reasonably cost-effective manner. We use our Fe speciation model to suggest that the global distribution of different Fe species is tightly controlled by environmental variability (temperature, light, oxygen and pH) and the assumptions regarding Fe-binding ligands. Impacts on bioavailable Fe are highly sensitive to assumptions regarding which Fe species are bioavailable. When forced by representations of future ocean circulation and climate, we find large changes to the speciation of Fe governed by pH-mediated changes to redox kinetics. We speculate that these changes may exert selective pressure on phytoplankton Fe uptake strategies in the future ocean. We hope our modeling approach can also be used as a "test bed" for exploring our understanding of Fe speciation at the global scale.
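    The equilibrium partitioning such a speciation scheme has to resolve can be illustrated with a minimal sketch (not the authors' model; the single-ligand assumption and all values are illustrative): dissolved Fe is split between free inorganic Fe' and an organic complex FeL governed by a conditional stability constant K.

```python
import math

def fe_speciation(fe_total, lig_total, log_k):
    """Partition dissolved Fe between free inorganic Fe' and an
    organic complex FeL at equilibrium, assuming a single ligand:
        [FeL] = K * [Fe'] * [L']
    with mass balances fe_total = Fe' + FeL, lig_total = L' + FeL.
    Concentrations in mol/L; log_k is log10 of the conditional
    stability constant K (L/mol). Solves the resulting quadratic,
    taking the physically meaningful (smaller) root.
    """
    k = 10.0 ** log_k
    b = fe_total + lig_total + 1.0 / k
    fel = 0.5 * (b - math.sqrt(b * b - 4.0 * fe_total * lig_total))
    fe_prime = fe_total - fel
    return fe_prime, fel

# Illustrative open-ocean values: 0.6 nM Fe, 1.2 nM ligand, log K' = 11
fe_prime, fel = fe_speciation(0.6e-9, 1.2e-9, 11.0)
print(f"Fe' = {fe_prime:.3e} M, FeL = {fel:.3e} M")
```

    With an excess of strong ligand, nearly all of the dissolved Fe ends up organically complexed, which is why assumptions about ligand distributions and which species count as bioavailable dominate the model's behaviour.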

  8. A mathematical model of sentimental dynamics accounting for marital dissolution.

    Directory of Open Access Journals (Sweden)

    José-Manuel Rey

    Full Text Available BACKGROUND: Marital dissolution is ubiquitous in western societies. It poses major scientific and sociological problems both in theoretical and therapeutic terms. Scholars and therapists agree on the existence of a sort of second law of thermodynamics for sentimental relationships. Effort is required to sustain them. Love is not enough. METHODOLOGY/PRINCIPAL FINDINGS: Building on a simple version of the second law we use optimal control theory as a novel approach to model sentimental dynamics. Our analysis is consistent with sociological data. We show that, when both partners have similar emotional attributes, there is an optimal effort policy yielding a durable happy union. This policy is prey to structural destabilization resulting from a combination of two factors: there is an effort gap because the optimal policy always entails discomfort and there is a tendency to lower effort to non-sustaining levels due to the instability of the dynamics. CONCLUSIONS/SIGNIFICANCE: These mathematical facts implied by the model unveil an underlying mechanism that may explain couple disruption in real scenarios. Within this framework the apparent paradox that a union consistently planned to last forever will probably break up is explained as a mechanistic consequence of the second law.
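    The "second law" premise, that feelings decay unless sustained by effort, can be caricatured with a one-equation sketch (the functional form, rates, and names are hypothetical illustrations, not the paper's optimal control model):

```python
def simulate_feeling(x0, effort, r=0.1, a=0.05, dt=0.1, steps=1000):
    """Euler-integrate a minimal 'second law' dynamic: the feeling
    level x decays at rate r unless sustained by effort c,
        dx/dt = -r*x + a*c.
    Without effort the union decays toward zero; with constant
    effort it settles near the equilibrium x* = a*c/r."""
    x = x0
    for _ in range(steps):
        x += dt * (-r * x + a * effort)
    return x

no_effort = simulate_feeling(x0=1.0, effort=0.0)
sustained = simulate_feeling(x0=1.0, effort=4.0)
print(no_effort, sustained)  # decay vs approach to a*c/r = 2.0
```

    The paper's point sits on top of such a dynamic: the effort needed to hold x at a happy equilibrium is itself costly, so the optimal policy is uncomfortable and prone to erosion.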

  9. MODEL OF ACCOUNTING ENGINEERING IN VIEW OF EARNINGS MANAGEMENT IN POLAND

    Directory of Open Access Journals (Sweden)

    Leszek Michalczyk

    2012-10-01

    Full Text Available The article introduces the theoretical foundations of the author’s original concept of accounting engineering. We assume a theoretical premise whereby accounting engineering is understood as a system of accounting practice utilising differences in the recorded outcomes of economic events that result from the use of divergent accounting methods. Unlike, for instance, creative or praxeological accounting, accounting engineering is composed only, and under all circumstances, of lawful activities and adheres to the current regulations of the balance sheet law. The aim of the article is to construct a model of accounting engineering that exploits the differences inherently present in variant accounting. These differences result in disparate financial results for identical economic events. Given that, regardless of which variant is used in accounting, all settlements are eventually equal to one another, a new class of differences emerges: the accounting engineering potential. It is transferred to subsequent reporting (balance sheet) periods. In the end, the profit “made” in a given period reduces the financial result of future periods. This effect is due to the “transfer” of costs from one period to another. Such actions may have sundry consequences and are especially dangerous whenever many individuals are concerned with the profit of a given company, e.g. on a stock exchange. The reverse may be observed when a company is privatised and its value is intentionally reduced by a controlled recording of accounting provisions, depending on the degree to which they are justified. The reduction of a company’s goodwill in Balcerowicz’s model of no-tender privatisation makes it possible to justify the low value of the purchased company. These are only some of the many manifestations of variant accounting which accounting engineering employs. A theoretical model of the latter is presented in this article.
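    The mechanism the model formalizes, identical economic events producing different per-period results under different lawful accounting variants while cumulative totals converge, can be illustrated with a deliberately simple inventory-costing sketch (figures invented; FIFO versus weighted-average costing stands in for "variant accounting" generally):

```python
# Two periods: buy 100 units @ 10 and 100 @ 12 in period 1,
# then sell 100 units @ 20 in each period. Per-period profit
# differs by costing method; total profit is identical once
# the inventory is fully sold -- the difference merely shifts
# between reporting periods.
purchases = [(100, 10.0), (100, 12.0)]   # (units, unit cost)
sales = [(100, 20.0), (100, 20.0)]       # (units, unit price), one per period

def fifo_profits(purchases, sales):
    """Profit per period under first-in, first-out costing."""
    layers = [list(p) for p in purchases]
    profits = []
    for units, price in sales:
        cogs, need = 0.0, units
        while need > 0:
            layer = layers[0]
            take = min(need, layer[0])
            cogs += take * layer[1]
            layer[0] -= take
            need -= take
            if layer[0] == 0:
                layers.pop(0)
        profits.append(units * price - cogs)
    return profits

def avg_cost_profits(purchases, sales):
    """Profit per period under weighted-average costing."""
    total_units = sum(u for u, _ in purchases)
    avg = sum(u * c for u, c in purchases) / total_units
    return [u * p - u * avg for u, p in sales]

fifo = fifo_profits(purchases, sales)     # [1000.0, 800.0]
avg = avg_cost_profits(purchases, sales)  # [900.0, 900.0]
print(fifo, avg, sum(fifo) == sum(avg))
```

    The 100 of extra profit FIFO "makes" in period 1 is exactly the amount by which it reduces period 2, which is the accounting engineering potential transferred between reporting periods.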

  10. HYDRODYNAMICAL MODELS OF TYPE II-P SUPERNOVA LIGHT CURVES

    Directory of Open Access Journals (Sweden)

    M. C. Bersten

    2009-01-01

    Full Text Available We present progress in light curve models of type II-P supernovae (SNe II-P) obtained using a newly developed, one-dimensional hydrodynamic code. Using simple initial models (polytropes), we reproduced the global behavior of the observed light curves and we analyzed the sensitivity of the light curves to the variation of free parameters.

  11. A Case Study of the Accounting Models for the Participants in an Emissions Trading Scheme

    Directory of Open Access Journals (Sweden)

    Marius Deac

    2013-10-01

    Full Text Available As emissions trading schemes become more popular across the world, accounting has to keep up with these new economic developments. The absence of guidance on accounting for greenhouse gas (GHG) emissions following the withdrawal of IFRIC 3 - Emission Rights - is the main reason why there is a diversity of accounting practices. This diversity of accounting methods makes the financial statements of companies taking part in emissions trading schemes such as the EU ETS difficult to compare. The present paper uses a case study that assumes the existence of three entities that have chosen three different accounting methods: the IFRIC 3 cost model, the IFRIC 3 revaluation model and the “off balance sheet” approach. This illustrates how the choice of an accounting method for GHG emissions influences their interim and annual reports through the changes in the companies’ balance sheets and financial results.
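    A stylised numerical sketch of how the three methods put different numbers on the same economic facts (this is not the paper's actual case study; all figures are invented and the treatments are simplified summaries of the three approaches):

```python
# A firm is granted 100 free allowances with a fair value of
# EUR 10 each, emits 80 tonnes, and the market price is EUR 15
# at the reporting date.
granted, fv_grant, emitted, fv_report = 100, 10.0, 80, 15.0

# IFRIC 3 cost model: allowances carried at initial fair value,
# the emission liability measured at the current market price.
cost_asset = granted * fv_grant          # 1000.0
cost_liability = emitted * fv_report     # 1200.0

# IFRIC 3 revaluation model: allowances restated to market,
# the uplift going to a revaluation surplus in equity.
reval_asset = granted * fv_report        # 1500.0
reval_surplus = reval_asset - cost_asset # 500.0

# "Off balance sheet" approach: free allowances recognised at
# nil; a liability arises only for emissions exceeding the
# allowances held.
net_shortfall = max(0, emitted - granted)
off_bs_liability = net_shortfall * fv_report  # 0.0

print(cost_asset, cost_liability, reval_asset, reval_surplus, off_bs_liability)
```

    Even in this toy setting the three balance sheets differ materially for identical emissions, which is the comparability problem the paper examines.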

  12. Dynamic model of production enterprises based on accounting registers and its identification

    Science.gov (United States)

    Sirazetdinov, R. T.; Samodurov, A. V.; Yenikeev, I. A.; Markov, D. S.

    2016-06-01

    The report focuses on the mathematical modeling of economic entities based on accounting registers. We develop a dynamic model of the financial and economic activity of an enterprise as a system of differential equations, create algorithms for identifying the parameters of the dynamic model, and construct and identify the model for Russian machine-building enterprises.

  13. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death… By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our… product types. This enables comparison of participating life insurance products and unit-linked insurance products, thus building a bridge between the two different ways of formalizing life insurance products. Finally, our model distinguishes itself from the existing literature by taking into account…

  14. An Integrative Model of the Strategic Management Accounting at the Enterprises of Chemical Industry

    Directory of Open Access Journals (Sweden)

    Aleksandra Vasilyevna Glushchenko

    2016-06-01

    Full Text Available Currently, the issues of information and analytical support for strategic management, enabling timely and high-quality management decisions, are extremely relevant. Conflicting and poor-quality information, haphazardly collected in the practice of large companies from unreliable sources, undermines the effective implementation of their development strategies and carries the threat of risk, amplified by the increasing instability of the external environment. The chemical industry occupies one of the central places in Russian industry and naturally has its own specificity in the formation of an information support system. An information system suitable for the development and implementation of strategic directions draws on the recognized competitive advantages of strategic management accounting. Resolving the lack of requirements for strategic accounting information, and its inconsistency resulting from simultaneous accumulation in different units using different methods of calculation and assessment of indicators, is impossible without a well-constructed model of the organization of strategic management accounting. The purpose of this study is to develop such a model, the implementation of which will make it possible to achieve strategic goals by harmonizing information from the individual objects of strategic accounting, increasing the functional effectiveness of management decisions with a focus on strategy. The case study was based on dialectical logic and methods of system analysis, identifying causal relationships in building a model of strategic management accounting that contributes to forecasts of its development. The study proposes an integrative model of the organization of strategic management accounting. The purpose of a phased implementation of this model defines the objects and tools of strategic management accounting. Moreover, it is determined that from the point of view of increasing the usefulness of management

  15. Modeling the Hellenic karst catchments with the Sacramento Soil Moisture Accounting model

    Science.gov (United States)

    Katsanou, K.; Lambrakis, N.

    2017-01-01

    Karst aquifers are very complex due to the presence of dual porosity. Rain-runoff hydrological models are frequently used to characterize these aquifers and assist in their management. The calibration of such models requires knowledge of many parameters, whose quality can be directly related to the quality of the simulation results. The Sacramento Soil Moisture Accounting (SAC-SMA) model includes a number of physically based parameters that permit accurate simulations and predictions of the rain-runoff relationships. Due to common physical characteristics of mature karst structures, expressed by sharp recession limbs of the runoff hydrographs, the calibration of the model becomes relatively simple, and the values of the parameters range within narrow bands. The most sensitive parameters are those related to groundwater storage regulated by the zone of the epikarst. The SAC-SMA model was calibrated for data from the mountainous part of the Louros basin, north-western Greece, which is considered to be representative of such geological formations. Visual assessment of the hydrographs, together with the statistical outcomes, revealed that the SAC-SMA model simulated the timing and magnitude of the peak flow and the shape of the recession curves well.

  17. Extension of the hard-sphere particle-wall collision model to account for particle deposition.

    Science.gov (United States)

    Kosinski, Pawel; Hoffmann, Alex C

    2009-06-01

    Numerical simulations of flows of fluids with granular materials using the Eulerian-Lagrangian approach involve the problem of modeling collisions, both between particles and between particles and walls. One of the most popular techniques is the hard-sphere model. This model, however, has a major drawback in that it does not take into account cohesive or adhesive forces. In this paper we develop an extension to a well-known hard-sphere model for particle-wall interactions, making it possible to account for adhesion. The model is able to account for virtually any physical interaction, such as van der Waals forces or liquid bridging. In this paper we focus on the derivation of the new model and we show some computational results.
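    The idea of augmenting a hard-sphere rebound with an adhesive contribution can be sketched as a simple energy balance (a hypothetical illustration, not the authors' exact formulation): the particle deposits when the kinetic energy left after an ordinary restitution-based rebound cannot overcome the adhesion work.

```python
import math

def wall_collision(v_in, m, e, w_adh):
    """Normal particle-wall collision with adhesion, as an energy
    balance: the rebound kinetic energy 0.5*m*(e*v_in)**2 is
    reduced by the adhesion work w_adh; if it cannot overcome
    w_adh, the particle deposits on the wall.

    v_in : impact speed (m/s), m : particle mass (kg),
    e : restitution coefficient, w_adh : adhesion work (J).
    Returns (deposited, rebound_speed).
    """
    ke_rebound = 0.5 * m * (e * v_in) ** 2
    if ke_rebound <= w_adh:
        return True, 0.0
    return False, math.sqrt(2.0 * (ke_rebound - w_adh) / m)

m = 1.0e-12   # assumed particle mass (roughly micron scale)
w = 1.0e-14   # assumed adhesion work (e.g. van der Waals)
print(wall_collision(0.01, m, 0.8, w))  # slow impact: deposits
print(wall_collision(10.0, m, 0.8, w))  # fast impact: rebounds
```

    The velocity-dependent outcome is the key qualitative feature: slow particles stick and fast particles bounce, with the crossover set by the chosen adhesion model (van der Waals, liquid bridging, etc.).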

  18. Mutual Calculations in Creating Accounting Models: A Demonstration of the Power of Matrix Mathematics in Accounting Education

    Science.gov (United States)

    Vysotskaya, Anna; Kolvakh, Oleg; Stoner, Greg

    2016-01-01

    The aim of this paper is to describe the innovative teaching approach used in the Southern Federal University, Russia, to teach accounting via a form of matrix mathematics. It thereby contributes to disseminating the technique of teaching to solve accounting cases using mutual calculations to a worldwide audience. The approach taken in this course…

  20. A Teacher Accountability Model for Overcoming Self-Exclusion of Pupils

    Science.gov (United States)

    Jamal, Abu-Hussain; Tilchin, Oleg; Essawi, Mohammad

    2015-01-01

    Self-exclusion of pupils is one of the prominent challenges of education. In this paper we propose the TERA model, which shapes the process of creating formative accountability of teachers to overcome the self-exclusion of pupils. Development of the model includes elaboration and integration of interconnected model components. The TERA model…

  1. PARALLEL MEASUREMENT AND MODELING OF TRANSPORT IN THE DARHT II BEAMLINE ON ETA II

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, F W; Raymond, B A; Falabella, S; Lee, B S; Richardson, R A; Weir, J T; Davis, H A; Schultze, M E

    2005-05-31

    To successfully tune the DARHT II transport beamline requires the close coupling of a model of the beam transport and the measurement of the beam observables as the beam conditions and magnet settings are varied. For the ETA II experiment using the DARHT II beamline components this was achieved using the SUICIDE (Simple User Interface Connecting to an Integrated Data Environment) data analysis environment and the FITS (Fully Integrated Transport Simulation) model. The SUICIDE environment has direct access to the experimental beam transport data at acquisition and the FITS predictions of the transport for immediate comparison. The FITS model is coupled into the control system where it can read magnet current settings for real time modeling. We find this integrated coupling is essential for model verification and the successful development of a tuning aid for the efficient convergence on a useable tune. We show the real time comparisons of simulation and experiment and explore the successes and limitations of this close coupled approach.

  2. Accountability: a missing construct in models of adherence behavior and in clinical practice

    Directory of Open Access Journals (Sweden)

    Oussedik E

    2017-07-01

    Full Text Available Elias Oussedik,1 Capri G Foy,2 E J Masicampo,3 Lara K Kammrath,3 Robert E Anderson,1 Steven R Feldman1,4,5 1Center for Dermatology Research, Department of Dermatology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 2Department of Social Sciences and Health Policy, Wake Forest School of Medicine, Winston-Salem, NC, USA; 3Department of Psychology, Wake Forest University, Winston-Salem, NC, USA; 4Department of Pathology, Wake Forest School of Medicine, Winston-Salem, NC, USA; 5Department of Public Health Sciences, Wake Forest School of Medicine, Winston-Salem, NC, USA Abstract: Piano lessons, weekly laboratory meetings, and visits to health care providers have in common an accountability that encourages people to follow a specified course of action. The accountability inherent in the social interaction between a patient and a health care provider affects patients’ motivation to adhere to treatment. Nevertheless, accountability is a concept not found in adherence models, and is rarely employed in typical medical practice, where patients may be prescribed a treatment and not seen again until a return appointment 8–12 weeks later. The purpose of this paper is to describe the concept of accountability and to incorporate accountability into an existing adherence model framework. Based on the Self-Determination Theory, accountability can be considered in a spectrum from a paternalistic use of duress to comply with instructions (controlled accountability to patients’ autonomous internal desire to please a respected health care provider (autonomous accountability, the latter expected to best enhance long-term adherence behavior. Existing adherence models were reviewed with a panel of experts, and an accountability construct was incorporated into a modified version of Bandura’s Social Cognitive Theory. Defining accountability and incorporating it into an adherence model will facilitate the development of measures of accountability as well

  3. Design of Studies for Development of BPA Fish and Wildlife Mitigation Accounting Policy Phase II, Volume II, 1985-1988 Technical Report.

    Energy Technology Data Exchange (ETDEWEB)

    Kneese, Allen V.

    1988-08-01

    The incremental costs of corrective measures to lessen the environmental impacts of the hydroelectric system are expected to increase, and difficult questions are expected to arise about the costs, effectiveness, and justification of alternative measures and their systemwide implications. BPA anticipated this situation by launching a forward-looking research program aimed at providing methodological tools and data suitable for estimating the productivity and cost implications of mitigation alternatives in a timely manner with state-of-the-art accuracy. Resources for the Future (RFF) agreed at the request of BPA to develop a research program which would provide an analytical system designed to assist the BPA Administrator and other interested and responsible parties in evaluating the ecological and economic aspects of alternative protection, enhancement, and mitigation measures. While this progression from an ecological understanding to cost-effectiveness analyses is straightforward in concept, the complexities of the Columbia River system make the development of analytical methods far from simple in practice. The Phase 2 final report outlines the technical issues involved in developing an analytical system and proposes a program of research to address these issues. The report is presented in the Summary Report (Volume 1) and the present volume, which consists of three technical reports: Part I, Modeling the Salmon and Steelhead Fisheries of the Columbia River Basin; Part II, Models for Cost-Effectiveness Analysis; and Part III, Ocean Fisheries Harvest Management.

  4. Integrating Seasonal Oscillations into Basel II Behavioural Scoring Models

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-09-01

    Full Text Available The article introduces a new methodology for measuring temporal influences (seasonal oscillations, temporal patterns) for behavioural scoring development purposes. The paper shows how significant temporal variables can be recognised and then integrated into behavioural scoring models in order to improve model performance. Behavioural scoring models are integral parts of the Basel II standard on Internal Ratings-Based Approaches (IRB). The IRB approach reflects the individual risk profile of a bank much more precisely. A solution to the problem of how to analyze and integrate macroeconomic and microeconomic factors represented as time series into behavioural scorecard models is shown in the paper by using the REF II model.

  5. Asymmetric Gepner Models II. Heterotic Weight Lifting

    CERN Document Server

    Gato-Rivera, B

    2010-01-01

    A systematic study of "lifted" Gepner models is presented. Lifted Gepner models are obtained from standard Gepner models by replacing one of the N=2 building blocks and the $E_8$ factor by a modular isomorphic $N=0$ model on the bosonic side of the heterotic string. The main result is that after this change three family models occur abundantly, in sharp contrast to ordinary Gepner models. In particular, more than 250 new and unrelated moduli spaces of three family models are identified. We discuss the occurrence of fractionally charged particles in these spectra.

  6. Asymmetric Gepner models II. Heterotic weight lifting

    Energy Technology Data Exchange (ETDEWEB)

    Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); Schellekens, A.N., E-mail: t58@nikhef.n [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands); Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); IMAPP, Radboud Universiteit, Nijmegen (Netherlands)

    2011-05-21

    A systematic study of 'lifted' Gepner models is presented. Lifted Gepner models are obtained from standard Gepner models by replacing one of the N=2 building blocks and the E_8 factor by a modular isomorphic N=0 model on the bosonic side of the heterotic string. The main result is that after this change three family models occur abundantly, in sharp contrast to ordinary Gepner models. In particular, more than 250 new and unrelated moduli spaces of three family models are identified. We discuss the occurrence of fractionally charged particles in these spectra.

  7. Situated sentence processing: the coordinated interplay account and a neurobehavioral model.

    Science.gov (United States)

    Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R

    2010-03-01

    Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). 
Finally, we argue that the mechanisms

  8. Accounting for the influence of vegetation and landscape improves model transferability in a tropical savannah region

    Science.gov (United States)

    Gao, Hongkai; Hrachowitz, Markus; Sriwongsitanon, Nutchanart; Fenicia, Fabrizio; Gharari, Shervan; Savenije, Hubert H. G.

    2016-10-01

    Understanding which catchment characteristics dominate hydrologic response and how to take them into account remains a challenge in hydrological modeling, particularly in ungauged basins. This is even more so in nontemperate and nonhumid catchments, where—due to the combination of seasonality and the occurrence of dry spells—threshold processes are more prominent in rainfall runoff behavior. An example is the tropical savannah, the second largest climatic zone, characterized by pronounced dry and wet seasons and high evaporative demand. In this study, we investigated the importance of landscape variability on the spatial variability of stream flow in tropical savannah basins. We applied a stepwise modeling approach to 23 subcatchments of the Upper Ping River in Thailand, where gradually more information on landscape was incorporated. The benchmark is represented by a classical lumped model (FLEXL), which does not account for spatial variability. We then tested the effect of accounting for vegetation information within the lumped model (FLEXLM), and subsequently two semidistributed models: one accounting for the spatial variability of topography-based landscape features alone (FLEXT), and another accounting for both topographic features and vegetation (FLEXTM). In cross validation, each model was calibrated on one catchment, and then transferred with its fitted parameters to the remaining catchments. We found that when transferring model parameters in space, the semidistributed models accounting for vegetation and topographic heterogeneity clearly outperformed the lumped model. This suggests that landscape controls a considerable part of the hydrological function and explicit consideration of its heterogeneity can be highly beneficial for prediction in ungauged basins in tropical savannah.

  9. Matrix Representation of the Kaliningrad Regional Accounts System: Experimental Development and Modelling Prospects

    Directory of Open Access Journals (Sweden)

    Soldatova S.

    2015-12-01

    Full Text Available This article addresses the task of creating a regional Social Accounting Matrix (SAM) for the Kaliningrad region. Analyzing the behavior of economic systems of national and sub-national levels in the changing environment is one of the main objectives of macroeconomic research. Matrices are used in examining the flow of financial resources, which makes it possible to conduct a comprehensive analysis of commodity and cash flows at the regional level. The study identifies key data sources for matrix development and presents its main results: the data sources for the accounts development and filling the social accounting matrix are identified, regional accounts consolidated, the structure of regional matrix devised, and the multiplier of the regional social accounting matrix calculated. An important aspect of this approach is the set target, which determines the composition of matrix accounts representing different aspects of regional performance. The calculated multiplier suggests the possibility of modelling of a socioeconomic system for the region using a social accounting matrix. The regional modelling approach ensures the matrix compliance with the methodological requirements of the national system.
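    The SAM multiplier mentioned above is conventionally obtained by normalising the endogenous part of the matrix into average expenditure propensities A and inverting (I - A); a minimal sketch with an invented 3-account matrix (not the Kaliningrad data):

```python
import numpy as np

# Stylised 3-account SAM (all figures invented): rows receive,
# columns pay. "leak" holds each column's payments to exogenous
# accounts (e.g. savings, taxes), so column propensities sum to
# less than one and (I - A) is invertible.
sam = np.array([
    [200.0, 150.0,  50.0],   # production
    [120.0,  30.0,  60.0],   # households
    [ 80.0,  70.0,  20.0],   # government
])
leak = np.array([100.0, 50.0, 70.0])

totals = sam.sum(axis=0) + leak      # total payments per account
A = sam / totals                     # average expenditure propensities
M = np.linalg.inv(np.eye(3) - A)     # accounting multiplier matrix

# Total (direct plus induced) effect of a unit exogenous
# injection into the production account:
injection = np.array([1.0, 0.0, 0.0])
print(M @ injection)
```

    The multiplier matrix captures the circular flow the article exploits: an injection into one account propagates through all endogenous accounts, so each diagonal entry of M exceeds one.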

  12. Development and application of a large scale river system model for National Water Accounting in Australia

    Science.gov (United States)

    Dutta, Dushmanta; Vaze, Jai; Kim, Shaun; Hughes, Justin; Yang, Ang; Teng, Jin; Lerat, Julien

    2017-04-01

    Existing global and continental scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing water accounts, which have become increasingly important for water resources planning and management at regional and national scales. A continental scale river system model called the Australian Water Resource Assessment River System model (AWRA-R) has been developed and implemented for national water accounting in Australia using a node-link architecture. The model includes the major hydrological processes, anthropogenic water utilisation, and storage routing that influence streamflow in both regulated and unregulated river systems. Two key components of the model are an irrigation model that computes water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model that computes overbank flow from river to floodplain and the associated floodplain fluxes and stores. The results in the Murray-Darling Basin show highly satisfactory model performance, with a median daily Nash-Sutcliffe Efficiency (NSE) of 0.64 and a median annual bias of less than 1% for the calibration period (1970-1991), and a median daily NSE of 0.69 and a median annual bias of 12% for the validation period (1992-2014). The results demonstrate that model performance is less satisfactory when key processes such as overbank flow, groundwater seepage and irrigation diversion are switched off. The AWRA-R model, which has been operationalised by the Australian Bureau of Meteorology for continental scale water accounting, has contributed to improvements in the national water account by substantially reducing the unaccounted volume difference (gain/loss).
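The skill scores quoted in this abstract, Nash-Sutcliffe Efficiency and annual volume bias, can be computed directly from paired observed and simulated flow series. A minimal sketch with made-up flow values (the function names and numbers are illustrative, not from AWRA-R):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations.
    NSE = 1 is a perfect fit; NSE <= 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def annual_bias(obs, sim):
    """Relative volume bias over the period (positive = overestimation)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return (sim.sum() - obs.sum()) / obs.sum()

# Hypothetical daily flows (arbitrary units).
obs = [10.0, 12.0, 8.0, 15.0, 11.0]
sim = [9.5, 12.5, 8.2, 14.0, 11.3]
print(nash_sutcliffe(obs, sim))
print(annual_bias(obs, sim))
```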

  13. A simulation model of hospital management based on cost accounting analysis according to disease.

    Science.gov (United States)

    Tanaka, Koji; Sato, Junzo; Guo, Jinqiu; Takada, Akira; Yoshihara, Hiroyuki

    2004-12-01

    Since shortly before 2000, hospital cost accounting has been increasingly performed at Japanese national university hospitals. At Kumamoto University Hospital, for instance, departmental costs have been analyzed since 2000, and since 2003 the cost balance has been calculated for certain diseases in preparation for Diagnosis-Related Groups and the Prospective Payment System. On the basis of these experiences, we have constructed a simulation model of hospital management. The program has run correctly in repeated trials and at satisfactory speed. Although there is room for improvement in the detailed accounts and the cost accounting engine, the basic model has proved satisfactory. We have constructed a hospital management model based on the financial data of an existing hospital, and we will later improve the program using a wider variety of hospital management data. A prospective outlook may then be obtained for the practical application of this hospital management model.

  14. Towards ecosystem accounting: a comprehensive approach to modelling multiple hydrological ecosystem services

    Science.gov (United States)

    Duku, C.; Rathjens, H.; Zwart, S. J.; Hein, L.

    2015-10-01

    Ecosystem accounting is an emerging field that aims to provide a consistent approach to analysing environment-economy interactions. One of the specific features of ecosystem accounting is the distinction between the capacity and the flow of ecosystem services. Ecohydrological modelling to support ecosystem accounting requires considering, among other things, the physical and mathematical representation of ecohydrological processes, the spatial heterogeneity of the ecosystem, the temporal resolution, and the required model accuracy. This study examines how a spatially explicit ecohydrological model can be used to analyse multiple hydrological ecosystem services in line with the ecosystem accounting framework. We use the Upper Ouémé watershed in Benin as a test case to demonstrate our approach. The Soil and Water Assessment Tool (SWAT), which has been configured with a grid-based landscape discretization and further enhanced to simulate water flow across the discretized landscape units, is used to simulate the ecohydrology of the Upper Ouémé watershed. Indicators consistent with the ecosystem accounting framework are used to map and quantify the capacities and flows of multiple hydrological ecosystem services based on the model outputs. Biophysical ecosystem accounts are subsequently set up from the spatial estimates of hydrological ecosystem services. In addition, we conduct statistical trend tests on the biophysical ecosystem accounts to identify trends in the capacity of the watershed ecosystems to provide service flows. We show that the integration of hydrological ecosystem services into an ecosystem accounting framework provides relevant information on ecosystems and hydrological ecosystem services at scales suitable for decision-making.

  15. School Board Improvement Plans in Relation to the AIP Model of Educational Accountability: A Content Analysis

    Science.gov (United States)

    van Barneveld, Christina; Stienstra, Wendy; Stewart, Sandra

    2006-01-01

    For this study we analyzed the content of school board improvement plans in relation to the Achievement-Indicators-Policy (AIP) model of educational accountability (Nagy, Demeris, & van Barneveld, 2000). We identified areas of congruence and incongruence between the plans and the model. Results suggested that the content of the improvement…

  16. Standard solar model. II - g-modes

    Science.gov (United States)

    Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.

    1992-01-01

    The paper presents the g-mode oscillations for a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is estimated, and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).

  17. Aqueous Solution Vessel Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-28

    The work presented in this report is a continuation of the work described in the May 2015 report, “Aqueous Solution Vessel Thermal Model Development”. This computational fluid dynamics (CFD) model aims to predict the temperature and bubble volume fraction in an aqueous solution of uranium. These values affect the reactivity of the fissile solution, so it is important to be able to calculate them and determine their effects on the reaction. Part A of this report describes some of the parameter comparisons performed on the CFD model using Fluent. Part B describes the coupling of the Fluent model with a Monte-Carlo N-Particle (MCNP) neutron transport model. The fuel tank geometry is the same as it was in the May 2015 report, annular with a thickness-to-height ratio of 0.16. An accelerator-driven neutron source provides the excitation for the reaction, and internal and external water cooling channels remove the heat. The model used in this work incorporates the Eulerian multiphase model with lift, wall lubrication, turbulent dispersion and turbulence interaction. The buoyancy-driven flow is modeled using the Boussinesq approximation, and the flow turbulence is determined using the k-ω Shear-Stress-Transport (SST) model. The dispersed turbulence multiphase model is employed to capture the multiphase turbulence effects.

  18. Accounting for imperfect forward modeling in geophysical inverse problems — Exemplified for crosshole tomography

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Holm Jacobsen, Bo

    2014-01-01

    forward models, can be more than an order of magnitude larger than the measurement uncertainty. We also found that the modeling error is strongly linked to the spatial variability of the assumed velocity field, i.e., the a priori velocity model.We discovered some general tools by which the modeling error...... synthetic ground-penetrating radar crosshole tomographic inverse problems. Ignoring the modeling error can lead to severe artifacts, which erroneously appear to be well resolved in the solution of the inverse problem. Accounting for the modeling error leads to a solution of the inverse problem consistent...

  19. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
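The layered structure described here (a data model for measurement error, a process model for ecological variation, and a parameter model for between-unit variation) can be sketched as a small simulation. All numbers below are hypothetical; the point is only the separation of uncertainty sources:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter model: site-level means drawn around a regional mean (hypothetical values).
regional_mean, between_site_sd = 50.0, 5.0
n_sites, n_obs = 8, 30
site_means = rng.normal(regional_mean, between_site_sd, n_sites)

# Process model: the true state varies around each site mean.
process_sd = 2.0
true_state = rng.normal(site_means[:, None], process_sd, (n_sites, n_obs))

# Data model: observations add measurement error on top of the process.
measurement_sd = 3.0
observed = rng.normal(true_state, measurement_sd)

# Naive pooling collapses the hierarchy; site-level means retain the structure.
print("pooled mean:", observed.mean())
print("site means: ", observed.mean(axis=1).round(1))
```

A full hierarchical analysis would invert this generative story (e.g., with Bayesian inference) rather than just simulate it, but the decomposition of the uncertainty is the same.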

  20. Optimization of Actuarial Model for Individual Account of Rural Social Pension Insurance

    Institute of Scientific and Technical Information of China (English)

    Wenxian; CAO

    2013-01-01

    This paper first analyzes different payment methods for the individual account and the pension replacement rate under each payment method. Results show that it would be more scientific and reasonable for the individual account of the new rural social pension insurance to adopt an actuarial model with payment in proportion to income and periodic benefits at a variable amount. The Guiding Opinions on New Rural Social Pension Insurance set forth that the individual account should be paid at a fixed amount, with the insured voluntarily selecting a payment level according to criteria set by the State, and the monthly pension is calculated as the total amount of the individual account divided by 139. Therefore, reform should start from the continuation of existing policies and adjust the payment level in accordance with the growth of per capita net income of rural residents. When conditions permit, a transition to payment in proportion to income and periodic benefits at a variable amount is expected to be realized.
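The fixed-amount rule cited from the Guiding Opinions (monthly pension = individual-account balance divided by 139) can be illustrated with a small calculation. The contribution level and bookkeeping interest rate below are hypothetical, chosen only to show the mechanics:

```python
def monthly_pension(account_balance: float, divisor: int = 139) -> float:
    """Monthly fixed-amount pension per the Guiding Opinions rule:
    total individual-account balance divided by 139."""
    return account_balance / divisor

# Hypothetical account: 15 years of contributions of 600 yuan/year,
# compounded at an assumed 3% annual bookkeeping rate.
balance = 0.0
for _ in range(15):
    balance = (balance + 600.0) * 1.03

print(round(balance, 2))                  # accumulated balance at retirement
print(round(monthly_pension(balance), 2)) # resulting monthly pension
```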

  1. Modelling characteristics of photovoltaic panels with thermal phenomena taken into account

    Science.gov (United States)

    Krac, Ewa; Górecki, Krzysztof

    2016-01-01

    In the paper a new form of the electrothermal model of photovoltaic panels is proposed. This model takes into account the optical, electrical and thermal properties of the considered panels, as well as electrical and thermal properties of the protecting circuit and thermal inertia of the considered panels. The form of this model is described and some results of measurements and calculations of mono-crystalline and poly-crystalline panels are presented.
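The abstract does not give the model equations. As one illustration of the electrical part of such a panel model, a common single-diode characteristic can be swept to find the maximum power point; this is a generic textbook formulation, not necessarily the authors' electrothermal model, and all parameter values are hypothetical:

```python
import math

def pv_current(v, i_ph=8.0, i_0=1e-9, n=1.3, cells=60, temp_k=298.15):
    """Illustrative single-diode PV characteristic with series resistance
    neglected: I = I_ph - I_0 * (exp(V / (n * cells * Vt)) - 1).
    All parameter values are hypothetical, not taken from the paper."""
    vt = 1.380649e-23 * temp_k / 1.602176634e-19  # thermal voltage kT/q
    return i_ph - i_0 * (math.exp(v / (n * cells * vt)) - 1.0)

# Sweep the voltage to locate the maximum power point of this toy panel.
points = [(v / 10.0, (v / 10.0) * pv_current(v / 10.0)) for v in range(0, 450)]
v_mpp, p_mpp = max(points, key=lambda vp: vp[1])
print(f"V_mpp ~ {v_mpp:.1f} V, P_mpp ~ {p_mpp:.1f} W")
```

An electrothermal model of the kind described would couple these electrical parameters to the panel temperature rather than holding temp_k fixed.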

  2. Shadow Segmentation and Augmentation Using α-overlay Models that Account for Penumbra

    DEFF Research Database (Denmark)

    Nielsen, Michael; Madsen, Claus B.

    2006-01-01

    that an augmented virtual object can cast an exact shadow. The penumbras (half-shadows) must be taken into account so that we can model the soft shadows. We hope to achieve this by modelling the shadow regions (umbra and penumbra alike) with a transparent overlay. This paper reviews the state-of-the-art shadow...... theories and presents two overlay models. These are analyzed analytically in relation to color theory and tangibility....

  3. A Neuronal Model of Predictive Coding Accounting for the Mismatch Negativity

    OpenAIRE

    Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas

    2012-01-01

    The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spi...

  4. A Carbon Monitoring System Approach to US Coastal Wetland Carbon Fluxes: Progress Towards a Tier II Accounting Method with Uncertainty Quantification

    Science.gov (United States)

    Windham-Myers, L.; Holmquist, J. R.; Bergamaschi, B. A.; Byrd, K. B.; Callaway, J.; Crooks, S.; Drexler, J. Z.; Feagin, R. A.; Ferner, M. C.; Gonneea, M. E.; Kroeger, K. D.; Megonigal, P.; Morris, J. T.; Schile, L. M.; Simard, M.; Sutton-Grier, A.; Takekawa, J.; Troxler, T.; Weller, D.; Woo, I.

    2015-12-01

    Despite their high rates of long-term carbon (C) sequestration when compared to upland ecosystems, coastal C accounting has only recently received the attention of policy makers and carbon markets. Assessing accuracy and uncertainty in net C flux estimates requires both direct and derived measurements based on both short- and long-term dynamics in key drivers, particularly soil accretion rates and soil organic content. We are testing the ability of remote sensing products and national scale datasets to estimate biomass and soil stocks and fluxes over a wide range of spatial and temporal scales. For example, the 2013 Wetlands Supplement to the 2006 IPCC GHG national inventory reporting guidelines requests information on development of Tier I-III reporting, which express increasing levels of detail. We report progress toward development of a Carbon Monitoring System for "blue carbon" that may be useful for IPCC reporting guidelines at Tier II levels. Our project uses a current dataset of publicly available and contributed field-based measurements to validate models of changing soil C stocks across a broad range of U.S. tidal wetland types and land-use conversions. Additionally, development of biomass algorithms for both radar and spectral datasets will be tested and used to determine the "price of precision" of different satellite products. We discuss progress in calculating Tier II estimates, focusing on variation introduced by the different input datasets. These include the USFWS National Wetlands Inventory, NOAA Coastal Change Analysis Program, and combinations to calculate tidal wetland area. We also assess the use of different attributes and depths from the USDA-SSURGO database to map soil C density. Finally, we examine the relative benefit of radar, spectral and hybrid approaches to biomass mapping in tidal marshes and mangroves.
While the US currently plans to report GHG emissions at a Tier I level, we argue that a Tier II analysis is possible due to national

  5. Nyala and Bushbuck II: A Harvesting Model.

    Science.gov (United States)

    Fay, Temple H.; Greeff, Johanna C.

    1999-01-01

    Adds a cropping or harvesting term to the animal overpopulation model developed in Part I of this article. Investigates various harvesting strategies that might suggest a solution to the overpopulation problem without actually culling any animals. (ASK)

  7. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    DEFF Research Database (Denmark)

    Jensen, Ninna Reitzel; Schomacker, Kristian Juul

    2015-01-01

    Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death......, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance...... model by conducting scenario analysis based on Monte Carlo simulation, but the model applies to scenarios in general and to worst-case and best-estimate scenarios in particular. In addition to easy computations, our model offers a common framework for the valuation of life insurance payments across...

  8. The Anachronism of the Local Public Accountancy Determinate by the Accrual European Model

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2009-01-01

    Full Text Available Setting the European accrual model against the cash accounting model presently used in Romania at the level of local communities allows the anachronism of the latter to manifest itself, concentrating the discussion on the accrual model's inclusion in everyday public practice. The foundations of the accrual model were first defined in the law regarding commercial companies adopted in Great Britain in 1985, which determined that all income and charges relating to the financial year "will be taken into consideration without any boundary to the reception or payment date." The accrual model in accounting requires recording the non-cash effects of transactions or financial events in the periods in which they occur, not when any cash is received or paid. The development of business was the basis for the increasing sophistication of the recording of transactions and financial events, a prerequisite for recording debtors' and creditors' amounts.

  9. Model Application of Accounting Information Systems of Spare Parts Sales and Purchase on Car Service Company

    Directory of Open Access Journals (Sweden)

    Lianawati Christian

    2015-12-01

    Full Text Available The purpose of this research is to analyze the accounting information systems for sales and purchases of spare parts in general car service companies and to identify the problems encountered and the information needed. This research used a literature study to collect data, a field study with observation, and design using UML (Unified Modeling Language), covering activity diagrams, class diagrams, use case diagrams, database design, form design, display design, and draft reports. The result achieved is an application model of an accounting information system for sales and purchases of spare parts in general car service companies. In conclusion, the accounting information system for sales and purchases makes it easy for management to obtain information quickly and supports the presentation of reports rapidly and accurately.

  10. Extension of the gurson model accounting for the void size effect

    Institute of Scientific and Technical Information of China (English)

    Jie Wen; Keh-Chih Hwang; Yonggang Huang

    2005-01-01

    A continuum model of solids with cylindrical microvoids is proposed based on the Taylor dislocation model. The model is an extension of the Gurson model in the sense that the void size effect is accounted for. Besides the void volume fraction f, the intrinsic material length l becomes a parameter representing the voids, since the void size now enters the model. Approximate yield functions in analytic form are suggested both for solids with cylindrical microvoids and for solids with spherical microvoids. Application to uniaxial tension curves shows precise agreement between the approximate analytic yield function and the "exact" parametric form of the integrals.
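For orientation, the classical Gurson yield function for spherical voids, which this extension modifies, has the standard form (quoted from the general literature, not from this paper):

```latex
\Phi(\sigma_e,\sigma_m,f) \;=\; \frac{\sigma_e^{2}}{\sigma_Y^{2}}
\;+\; 2f\cosh\!\left(\frac{3\sigma_m}{2\sigma_Y}\right) \;-\; 1 \;-\; f^{2} \;=\; 0
```

where \sigma_e is the von Mises effective stress, \sigma_m the mean stress, \sigma_Y the matrix yield stress, and f the void volume fraction. The classical form depends on f only; the extension additionally introduces the intrinsic material length l so that the absolute void size enters the yield condition.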

  11. Mineral vein dynamics modelling (FRACS II)

    Energy Technology Data Exchange (ETDEWEB)

    Urai, J.; Virgo, S.; Arndt, M. [RWTH Aachen (Germany); and others

    2016-08-15

    The Mineral Vein Dynamics Modeling group "FRACS" started out as a team of 7 research groups in its first phase and continued with a team of 5 research groups at the Universities of Aachen, Tuebingen, Karlsruhe, Mainz and Glasgow during its second phase "FRACS II". The aim of the group was to develop an advanced understanding of the interplay between fracturing, fluid flow and fracture healing, with a special emphasis on the comparison of field data and numerical models. Field areas comprised the Oman Mountains in Oman (which were already studied in detail in the first phase), a siliciclastic sequence in the Internal Ligurian Units in Italy (close to Sestri Levante), and cores of Zechstein carbonates from a lean gas reservoir in Northern Germany. Numerical models of fracturing, sealing and interaction with fluids that were developed in phase I were expanded in phase II. They were used to model small-scale fracture healing by crystal growth and the resulting influence on flow, medium-scale fracture healing and its influence on successive fracturing and healing, as well as large-scale dynamic fluid flow through opening and closing fractures and channels as a function of fluid overpressure. The numerical models were compared with structures in the field, and we were able to identify first proxies for mechanical vein-host rock properties and fluid overpressures versus tectonic stresses. Finally, we propose a new classification of stylolites based on numerical models and observations in the Zechstein cores, and we continued to develop a new stress inversion tool that uses stylolites to estimate the depth of their formation.

  12. Horns Rev II, 2-D Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report present the results of 2D physical model tests carried out in the shallow wave flume at Dept. of Civil Engineering, Aalborg University (AAU), on behalf of Energy E2 A/S part of DONG Energy A/S, Denmark. The objective of the tests was: to investigate the combined influence of the pile...

  13. Interaction of a supersonic NO beam with static and resonant RF fields: Simple theoretical model to account for molecular interferences

    Science.gov (United States)

    Ureña, A. González; Caceres, J. O.; Morato, M.

    2006-09-01

    In previous experimental work from this laboratory, two unexpected phenomena were reported: (i) a depletion of ca. 40% in the total intensity of a pulsed He-seeded NO beam when the molecules passed through a homogeneous static field and a resonant oscillating RF electric field, and (ii) a beam splitting of ca. 0.5° when the transverse beam profile was measured under the same experimental conditions. In this work, a model based on molecular beam interference is introduced which satisfactorily accounts for these two observations. It is shown how the experimental set-up, a simple device used as the C-field in early molecular beam electric resonance experiments, can be employed as a molecular interferometer to investigate matter-wave interference in beams of polar molecules.

  14. A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk

    Directory of Open Access Journals (Sweden)

    Ninna Reitzel Jensen

    2015-06-01

    Full Text Available Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our model by conducting scenario analysis based on Monte Carlo simulation, but the model applies to scenarios in general and to worst-case and best-estimate scenarios in particular. In addition to easy computations, our model offers a common framework for the valuation of life insurance payments across product types. This enables comparison of participating life insurance products and unit-linked insurance products, thus building a bridge between the two different ways of formalizing life insurance products. Finally, our model distinguishes itself from the existing literature by taking into account the Markov model for the state of the policyholder and, hereby, facilitating event risk.
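The scenario-based Monte Carlo valuation mentioned above can be sketched in miniature: simulate fund scenarios, apply a payoff to each, and discount the average. The geometric Brownian motion fund, the 2% discount rate, and the maturity guarantee below are illustrative assumptions, not the paper's two-account model:

```python
import math
import random
import statistics

def gbm_scenarios(s0=100.0, mu=0.03, sigma=0.15, years=10, n=2000, seed=1):
    """Terminal fund values under annual-step geometric Brownian motion --
    an illustrative economic scenario generator, not the paper's model."""
    rng = random.Random(seed)
    terminal = []
    for _ in range(n):
        s = s0
        for _ in range(years):
            s *= math.exp((mu - 0.5 * sigma ** 2) + sigma * rng.gauss(0.0, 1.0))
        terminal.append(s)
    return terminal

# Value a maturity guarantee of 100: payoff max(guarantee - fund, 0),
# discounted at a flat 2% (all figures hypothetical).
scenarios = gbm_scenarios()
guarantee, r, years = 100.0, 0.02, 10
payoffs = [max(guarantee - s, 0.0) for s in scenarios]
value = math.exp(-r * years) * statistics.mean(payoffs)
print(round(value, 2))
```

A worst-case or best-estimate analysis of the kind the abstract mentions would replace the simulated set with a single handpicked scenario.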

  15. Accounting for environmental variability, modeling errors, and parameter estimation uncertainties in structural identification

    Science.gov (United States)

    Behmanesh, Iman; Moaveni, Babak

    2016-07-01

    This paper presents a Hierarchical Bayesian model updating framework that accounts for the effects of ambient temperature and excitation amplitude. The proposed approach is applied to model calibration, response prediction and damage identification of a footbridge under changing environmental/ambient conditions. The concrete Young's modulus of the footbridge deck is the updating structural parameter considered, with its mean and variance modeled as functions of temperature and excitation amplitude. The modal parameters identified over 27 months of continuous monitoring of the footbridge are used to calibrate the updating parameters. One objective of this study is to show that by increasing the level of information in the updating process, the posterior variation of the updating structural parameter (concrete Young's modulus) is reduced. To this end, the calibration is performed at three information levels using (1) the identified modal parameters, (2) modal parameters and ambient temperatures, and (3) modal parameters, ambient temperatures, and excitation amplitudes. The calibrated model is then validated by comparing the model-predicted natural frequencies with those identified from measured data after a deliberate change to the structural mass. It is shown that accounting for modeling error uncertainties is crucial for reliable response prediction, and that accounting for only the estimated variability of the updating structural parameter is not sufficient for accurate response prediction. Finally, the calibrated model is used for damage identification of the footbridge.

  16. Business Models, Accounting and Billing Concepts in Grid-Aware Networks

    Science.gov (United States)

    Kotrotsos, Serafim; Racz, Peter; Morariu, Cristian; Iskioupi, Katerina; Hausheer, David; Stiller, Burkhard

    The emerging Grid economy will set new challenges for the network. More and more literature underlines the significance of network-awareness for efficient and effective Grid services. Following this path of Grid evolution, this paper identifies some key challenges in the areas of business modeling, accounting, and billing, and proposes an architecture that addresses them.

  17. Measurement and modeling of advanced coal conversion processes, Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G. [and others

    1993-06-01

    A two-dimensional, steady-state model for describing a variety of reactive and nonreactive flows, including pulverized coal combustion and gasification, is presented. The model, referred to as 93-PCGC-2, is applicable to cylindrical, axi-symmetric systems. Turbulence is accounted for in both the fluid mechanics equations and the combustion scheme. Radiation from gases, walls, and particles is taken into account using a discrete ordinates method. The particle phase is modeled in a Lagrangian framework, such that mean paths of particle groups are followed. A new coal-general devolatilization submodel (FG-DVC) with coal swelling and char reactivity submodels has been added.

  18. An analytical model for particulate reinforced composites (PRCs) taking account of particle debonding and matrix cracking

    Science.gov (United States)

    Jiang, Yunpeng

    2016-10-01

    In this work, a simple micromechanics-based model was developed to describe the overall stress-strain relations of particulate reinforced composites (PRCs), taking into account both particle debonding and matrix cracking damage. Within the secant homogenization framework, the effective compliance tensor is first given for the perfect composite without any damage. Progressive interface debonding damage is controlled by a Weibull probability function, and the volume fraction of detached particles then enters the equivalent compliance tensor to account for the impact of particle debonding. Matrix cracking is introduced in the present model to capture the stress-softening stage in the deformation of PRCs. The analytical model was first verified by comparison with the corresponding experiment, and parameter analyses were then conducted. This modeling sheds light on optimizing microstructures to effectively improve the mechanical behavior of PRCs.
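The abstract states that progressive interface debonding is controlled by a Weibull probability function. A common two-parameter Weibull damage law of this kind gives the cumulative fraction of debonded particles as a function of interfacial stress; the scale and shape parameters below are hypothetical, not values from the paper:

```python
import math

def debonded_fraction(stress, s0=200.0, m=4.0):
    """Two-parameter Weibull law for the cumulative fraction of debonded
    particles at a given interfacial stress (MPa). s0 (scale) and m (shape)
    are hypothetical parameters, not taken from the paper."""
    if stress <= 0.0:
        return 0.0
    return 1.0 - math.exp(-((stress / s0) ** m))

# The debonded fraction rises steeply around the scale stress s0.
for s in (0, 100, 200, 300):
    print(s, round(debonded_fraction(s), 3))
```

In a model of the kind described, this fraction would then be fed into the equivalent compliance tensor to soften the homogenized response.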

  19. Argonne Bubble Experiment Thermal Model Development II

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.

  20. A WEAKLY NONLINEAR WATER WAVE MODEL TAKING INTO ACCOUNT DISPERSION OF WAVE PHASE VELOCITY

    Institute of Scientific and Technical Information of China (English)

    李瑞杰; 李东永

    2002-01-01

    This paper presents a weakly nonlinear water wave model using a mild slope equation and a new explicit formulation that takes into account the dispersion of wave phase velocity, approximates Hedges' (1987) nonlinear dispersion relationship, and accords well with the original empirical formula. Comparison of the calculated results with experimental data and with linear wave theory showed that the present model, which considers the dispersion of phase velocity, is rational and in good agreement with the experimental data.
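    Hedges' (1987) amplitude-dependent dispersion relation, which the new formulation approximates, replaces the water depth h in the linear relation by h + a, with a the wave amplitude. A minimal sketch, with all parameter values hypothetical:

```python
import math

def phase_speed(k, h, a=0.0, g=9.81):
    """Phase speed c = omega/k from Hedges' (1987) empirical relation
    omega**2 = g*k*tanh(k*(h + a)); a = 0 recovers linear (Airy) theory.
    k: wavenumber (1/m), h: water depth (m), a: wave amplitude (m)."""
    omega = math.sqrt(g * k * math.tanh(k * (h + a)))
    return omega / k
```

    For a finite amplitude the phase speed exceeds the linear-theory value, which is the amplitude dispersion the model is built around.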

  1. Strictly isospectral Bianchi type II cosmological models

    CERN Document Server

    Rosu, H C; Obregón, O

    1996-01-01

    We show that, in the Q=0 factor ordering, the Wheeler-DeWitt equation for the Bianchi type II model with the Ansatz \rm \Psi=A\, e^{\pm \Phi(q^{\mu})}, due to its one-dimensional character, may be approached by the strictly isospectral Darboux-Witten technique in standard supersymmetric quantum mechanics. One-parameter families of cosmological potentials and normalizable `wavefunctions of the universe' are exhibited. The isospectral method can be used to introduce normalizable wavefunctions in quantum cosmology.

  2. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    CERN Document Server

    Mitchell, Lewis

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described: a time-constant treatment, where the model error statistical description is fixed in time, and a time-varying treatment, where the assumed model error statistics are randomly sampled at each analysis step. We compare both methods with the standard method of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low-order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through th...

  3. The Mg II index for upper atmosphere modelling

    Directory of Open Access Journals (Sweden)

    G. Thuillier

    The solar radio flux at 10.7 cm has been used in upper atmosphere density modelling because of its correlation with EUV radiation and its long and complete observational record. A proxy for solar chromospheric activity, the Mg II index, was derived by Heath and Schlesinger (1986) from Nimbus-7 data. This index describes the solar-activity changes in the solar UV spectral irradiance. The use of this new proxy in upper atmosphere density modelling is considered here. First, it is supported by the 99.9% correlation between the solar radio flux (F10.7) and the Mg II index over a period of 19 years, albeit with large differences on time scales of days to months. Secondly, a correlation between EUV emissions and the Mg II index has been shown recently, suggesting that this index may also be used to describe EUV variations. Using the same density dataset, a model was run first with the F10.7 index as the solar forcing function and then with the Mg II index. Comparison of their respective predictions with partial density data showed a 3–8% higher precision when the modelling uses the Mg II index rather than F10.7. An external validation, by means of orbit computation, resulted in a 20–40% smaller RMS of the tracking residuals. A density dataset spanning an entire solar cycle, together with Mg II data, is required to construct a density model that is accurate and as unbiased as possible.

    Key words. Atmospheric composition and structure (middle atmosphere – composition and chemistry; thermosphere – composition and chemistry) – History of geophysics (atmospheric sciences)

  4. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
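    The regression-calibration idea used above can be sketched as follows: replace each subject's error-prone covariate by its expected true value given the replicate mean, shrinking toward the overall mean by an estimated reliability ratio. The data layout and variance decomposition here are a generic illustration, not the REM-activity analysis itself:

```python
from statistics import mean, variance

def regression_calibration(replicates):
    """Given per-subject replicate measurements of an error-prone covariate
    (a list of lists), return calibrated values E[X | W-bar].  The
    measurement-error variance is estimated from within-subject spread;
    all quantities here are illustrative."""
    wbar = [mean(r) for r in replicates]
    n_rep = len(replicates[0])
    var_u = mean(variance(r) for r in replicates)   # within-subject (error) variance
    var_wbar = variance(wbar)                       # variance of replicate means
    var_x = max(var_wbar - var_u / n_rep, 0.0)      # estimated true-score variance
    lam = var_x / var_wbar if var_wbar > 0 else 0.0 # reliability ratio in [0, 1]
    mu = mean(wbar)
    return [mu + lam * (w - mu) for w in wbar]      # shrink toward the grand mean
```

    Because the reliability ratio is at most one, the calibrated values always lie at least as close to the grand mean as the raw replicate means, which is the bias-reduction mechanism the abstract refers to.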

  5. Cost accounting models used for price-setting of health services: an international review.

    Science.gov (United States)

    Raulinajtys-Grzybek, Monika

    2014-12-01

    The aim of the article was to present and compare cost accounting models which are used in the area of healthcare for pricing purposes in different countries. Cost information generated by hospitals is further used by regulatory bodies for setting or updating prices of public health services. The article presents a set of examples from different countries of the European Union, Australia and the United States and concentrates on DRG-based payment systems as they primarily use cost information for pricing. Differences between countries concern the methodology used, as well as the data collection process and the scope of the regulations on cost accounting. The article indicates that the accuracy of the calculation is only one of the factors that determine the choice of the cost accounting methodology. Important aspects are also the selection of the reference hospitals, precise and detailed regulations and the existence of complex healthcare information systems in hospitals.

  6. THE ROLE OF TECHNICAL CONSUMPTION CALCULATION MODELS ON ACCOUNTING INFORMATION SYSTEMS OF PUBLIC UTILITIES SERVICES OPERATORS

    Directory of Open Access Journals (Sweden)

    GHEORGHE CLAUDIU FEIES

    2012-05-01

    After studying how the operators' management works, one can notice an influence of the specific activities of public utilities on their financial accounting systems. The asymmetry of these systems is also present, resulting from the organization and specifics of the services, which implies a close link between the financial accounting system and the specialized technical department. The research methodology consists of observing the specific activities of public utility operators and their influence on the information system, and of analyzing the views presented in the published literature. The article analyses the impact of the technical computing models used by community public utility services on the financial statements and, therefore, on the information the accounting information system provides to stakeholders.

  7. Modeling of Accounting Doctoral Thesis with Emphasis on Solution for Financial Problems

    Directory of Open Access Journals (Sweden)

    F. Mansoori

    2015-02-01

    With the growth of graduate education and research budgets, accounting knowledge in Iran has entered the field of applied research, and a number of accounting projects have been implemented in the real world, yielding varied experience in implementing accounting standards. These experiences were expected to help solve the country's financial problems; yet, despite considerable research effort, many financial and accounting problems remain. PhD theses, which are produced as team efforts and validated by university supervisory committees, could be one of the important means of advancing applied fields such as accounting. Applied theses should solve part of the problems in the accounting field, but in practice they often do not. This raises the question of why the output of applied, knowledge-based projects has not cleared up these problems, and why policy makers in difficult situations prefer to rely on their own prior experience in important decisions rather than on research-based recommendations. This study examines the reasons that prevent applied PhD theses from succeeding in the real world, including the view that the policy recommendations produced by such projects are not of sufficient quality for implementation. For this purpose, the indicators of an applied PhD thesis were compiled and ranked by 110 experts, and applied PhD accounting theses were then compared against one another in a comprehensive study. As a result, the shortcomings of the studied theses were identified, and a practical, applicable model for producing applied research was developed.

  8. A selection model for accounting for publication bias in a full network meta-analysis.

    Science.gov (United States)

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency.
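    The Copas-style selection mechanism underlying the model can be sketched as a probit function of a study's precision: the larger the standard error, the lower the chance of publication. The parameter values below are illustrative sensitivity-analysis inputs, not fitted quantities:

```python
import math

def selection_probability(se, gamma0=-0.5, gamma1=0.3):
    """Copas-style selection function: the probability that a study is
    published is Phi(gamma0 + gamma1/se), so imprecise studies (large
    standard error se) are less likely to appear in the meta-analysis.
    gamma0 and gamma1 are illustrative sensitivity parameters."""
    z = gamma0 + gamma1 / se
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
```

    Varying gamma0 and gamma1 over a plausible range is exactly the kind of sensitivity analysis the selection-model framework supports.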

  9. Modeling of vapor intrusion from hydrocarbon-contaminated sources accounting for aerobic and anaerobic biodegradation

    Science.gov (United States)

    Verginelli, Iason; Baciocchi, Renato

    2011-11-01

    A one-dimensional steady state vapor intrusion model including both anaerobic and oxygen-limited aerobic biodegradation was developed. The aerobic and anaerobic layer thickness are calculated by stoichiometrically coupling the reactive transport of vapors with oxygen transport and consumption. The model accounts for the different oxygen demand in the subsurface required to sustain the aerobic biodegradation of the compound(s) of concern and for the baseline soil oxygen respiration. In the case of anaerobic reaction under methanogenic conditions, the model accounts for the generation of methane which leads to a further oxygen demand, due to methane oxidation, in the aerobic zone. The model was solved analytically and applied, using representative parameter ranges and values, to identify under which site conditions the attenuation of hydrocarbons migrating into indoor environments is likely to be significant. Simulations were performed assuming a soil contaminated by toluene only, by a BTEX mixture, by Fresh Gasoline and by Weathered Gasoline. The obtained results have shown that for several site conditions oxygen concentration below the building is sufficient to sustain aerobic biodegradation. For these scenarios the aerobic biodegradation is the primary mechanism of attenuation, i.e. anaerobic contribution is negligible and a model accounting just for aerobic biodegradation can be used. On the contrary, in all cases where oxygen is not sufficient to sustain aerobic biodegradation alone (e.g. highly contaminated sources), anaerobic biodegradation can significantly contribute to the overall attenuation depending on the site specific conditions.
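    The flavor of biodegradation-limited attenuation can be conveyed by a generic steady-state reaction-diffusion result: for a single soil layer with first-order decay, the vapor flux reaching the surface is reduced by a factor beta/sinh(beta). This is a textbook sketch under simplified boundary conditions, not the authors' oxygen-coupled analytical solution:

```python
import math

def biodegradation_attenuation(L, k, D):
    """Ratio of diffusive vapor flux through a bioactive soil layer
    (first-order decay rate k, effective diffusivity D, thickness L)
    to the flux without biodegradation, for a steady 1-D layer with a
    fixed source below and zero concentration above:
    beta / sinh(beta), with beta = L * sqrt(k / D)."""
    beta = L * math.sqrt(k / D)
    return beta / math.sinh(beta) if beta > 0 else 1.0
```

    The factor tends to 1 for slow reaction or thin layers and decays rapidly as the layer thickens, mirroring the site-condition dependence discussed in the abstract.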

  11. The self-consistent field model for Fermi systems with account of three-body interactions

    Directory of Open Access Journals (Sweden)

    Yu.M. Poluektov

    2015-12-01

    On the basis of a microscopic self-consistent field model, the thermodynamics of the many-particle Fermi system at finite temperatures with account of three-body interactions is constructed and the quasiparticle equations of motion are obtained. It is shown that a delta-like three-body interaction gives no contribution to the self-consistent field, so the description of three-body forces requires their nonlocality to be taken into account. The spatially uniform system is considered in detail, and on the basis of the developed microscopic approach general formulas are derived for the fermion effective mass and the system's equation of state with account of the contribution from three-body forces. The effective mass and pressure are numerically calculated for a potential of the "semi-transparent sphere" type at zero temperature, and expansions of the effective mass and pressure in powers of density are obtained. It is shown that, with account of only pair forces, a repulsive interaction reduces the quasiparticle effective mass relative to the mass of a free particle, while an attractive interaction raises the effective mass. The question of the thermodynamic stability of the Fermi system is considered, and the three-body repulsive interaction is shown to extend the region of stability of a system with interparticle pair attraction. The quasiparticle energy spectrum is calculated with account of three-body forces.

  12. Modelling of asymmetric nebulae. II. Line profiles

    CERN Document Server

    Morisset, C

    2006-01-01

    We present a tool, VELNEB_3D, which can be applied to the results of 3D photoionization codes to generate emission line profiles, position-velocity maps and 3D maps in any emission line by assuming an arbitrary velocity field. We give a few examples, based on our pseudo-3D photoionization code NEBU_3D (Morisset, Stasinska and Pena, 2005) which show the potentiality and usefulness of our tool. One example shows how complex line profiles can be obtained even with a simple expansion law if the nebula is bipolar and the slit slightly off-center. Another example shows different ways to produce line profiles that could be attributed to a turbulent velocity field while there is no turbulence in the model. A third example shows how, in certain circumstances, it is possible to discriminate between two very different geometrical structures -- here a face-on blister and its ``spherical impostor'' -- when using appropriate high resolution spectra. Finally, we show how our tool is able to generate 3D maps, similar to the ...

  13. Modeling the interaction of electric current and tissue: importance of accounting for time varying electric properties.

    Science.gov (United States)

    Evans, Daniel J; Manwaring, Mark L

    2007-01-01

    Time varying computer models of the interaction of electric current and tissue are very valuable in helping to understand the complexity of the human body and biological tissue. The electrical properties of tissue, permittivity and conductivity, are vital to accurately modeling the interaction of the human tissue with electric current. Past models have represented the electric properties of the tissue as constant or temperature dependent. This paper presents time dependent electric properties that change as a result of tissue damage, temperature, blood flow, blood vessels, and tissue property. Six models are compared to emphasize the importance of accounting for these different tissue properties in the computer model. In particular, incorporating the time varying nature of the electric properties of human tissue into the model leads to a significant increase in tissue damage. An important feature of the model is the feedback loop created between the electric properties, tissue damage, and temperature.
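    The feedback loop described above (electric properties drive Joule heating, temperature and accumulated damage alter the electric properties) can be caricatured in a few lines. Every constant here is an illustrative placeholder, not a physiological value from the paper:

```python
def simulate_heating(E=5000.0, steps=100, dt=0.1):
    """Toy feedback loop: Joule heating (sigma * E**2) raises tissue
    temperature, temperature raises conductivity (~2 %/degC), and a
    crude damage integral permanently lowers it (up to 50 %).
    E: field strength (V/m); all constants are illustrative."""
    T, damage = 37.0, 0.0
    sigma0, rho_c = 0.5, 4.0e6              # S/m and J/(m^3 K), placeholders
    for _ in range(steps):
        sigma = sigma0 * (1 + 0.02 * (T - 37.0)) * (1 - 0.5 * min(damage, 1.0))
        T += sigma * E ** 2 / rho_c * dt    # Joule heating, no perfusion loss
        if T > 43.0:                        # damage accrues above ~43 degC
            damage += 0.01 * (T - 43.0) * dt
    return T, damage
```

    Even this toy loop shows the qualitative point of the paper: letting the electric properties respond to temperature and damage changes the predicted heating compared with holding them constant.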

  14. Causality in 1+1-dimensional Yukawa model-II

    Indian Academy of Sciences (India)

    Asrarul Haque; Satish D Joglekar

    2013-10-01

    The limits $g \to$ large, $M \to$ large with $(g^{3}/M)$ = const. of the 1+1-dimensional Yukawa model are discussed. The conclusions of the results on bound states of the Yukawa model in this limit (obtained in arXiv:0908.4510v3 [hep-th]) are taken into account. It is found that the model reduces to an effective non-local $\phi^{3}$ theory in this limit. Causality violation is also observed in this limit.

  15. Beyond standard model report of working group II

    CERN Document Server

    Joshipura, A S; Joshipura, Anjan S; Roy, Probir

    1995-01-01

    Working Group II at WHEPP3 concentrated on issues related to the supersymmetric standard model as well as SUSY GUTs and neutrino properties. The projects identified by the various subgroups, as well as the progress made on them since WHEPP3, are briefly reviewed.

  16. Meta-analysis of diagnostic tests accounting for disease prevalence: a new model using trivariate copulas.

    Science.gov (United States)

    Hoyer, A; Kuss, O

    2015-05-20

    In real life, and somewhat contrary to biostatistical textbook knowledge, the sensitivity and specificity (and not only the predictive values) of diagnostic tests can vary with the underlying prevalence of disease. In meta-analysis of diagnostic studies, accounting for this fact naturally leads to a trivariate expansion of the traditional bivariate logistic regression model with random study effects. In this paper, a new model is proposed using trivariate copulas and beta-binomial marginal distributions for sensitivity, specificity, and prevalence as an expansion of the bivariate model. Two different copulas are used: the trivariate Gaussian copula and a trivariate vine copula based on the bivariate Plackett copula. This model has a closed-form likelihood, so standard software (e.g., SAS PROC NLMIXED) can be used. The results of a simulation study show that the copula models perform at least as well as, and frequently better than, the standard model. The methods are illustrated by two examples.
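    The Gaussian-copula layer of such a model can be sketched by pushing equicorrelated normal draws through the normal CDF, yielding dependent uniforms for sensitivity, specificity, and prevalence; the paper's model would then apply beta quantile functions to these uniforms. The correlation value and shared-factor construction below are illustrative:

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_gaussian_copula(n, rho=0.6, seed=1):
    """Draw n triples of dependent uniforms from a trivariate Gaussian
    copula with common correlation rho, built from a shared latent
    factor: x_i = sqrt(rho)*shared + sqrt(1-rho)*z_i.  In the full
    model these uniforms would be mapped through beta quantiles;
    here we stop at the uniform margins.  rho is illustrative."""
    random.seed(seed)
    samples = []
    for _ in range(n):
        shared = random.gauss(0.0, 1.0)
        x = [math.sqrt(rho) * shared + math.sqrt(1.0 - rho) * random.gauss(0.0, 1.0)
             for _ in range(3)]
        samples.append(tuple(norm_cdf(xi) for xi in x))
    return samples
```

    The shared-factor construction gives each pair of components correlation rho while keeping unit variances, which is the equicorrelated special case of the Gaussian copula.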

  17. Efficient modeling of sun/shade canopy radiation dynamics explicitly accounting for scattering

    Directory of Open Access Journals (Sweden)

    P. Bodin

    2011-08-01

    The separation of global radiation (Rg) into its direct (Rb) and diffuse (Rd) constituents is important when modeling plant photosynthesis because a high Rd:Rg ratio has been shown to enhance Gross Primary Production (GPP). To include this effect in vegetation models, the plant canopy must be separated into sunlit and shaded leaves, for example using an explicit 3-dimensional ray tracing model. However, because such models are often too intractable and computationally expensive for theoretical or large-scale studies, simpler sun-shade approaches are often preferred. A widely used and computationally efficient sun-shade model is the one originally developed by Goudriaan (1977) (GOU), which, however, does not explicitly account for radiation scattering.

    Here we present a new model based on the GOU model, but which in contrast explicitly simulates radiation scattering by sunlit leaves and the absorption of this radiation by the canopy layers above and below (2-stream approach. Compared to the GOU model our model predicts significantly different profiles of scattered radiation that are in better agreement with measured profiles of downwelling diffuse radiation. With respect to these data our model's performance is equal to a more complex and much slower iterative radiation model while maintaining the simplicity and computational efficiency of the GOU model.
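    The sunlit/shaded split at the heart of Goudriaan-type schemes follows Beer's-law extinction of the direct beam through the canopy. A minimal sketch with an assumed extinction coefficient:

```python
import math

def sunlit_fraction(lai_above, k_b=0.5):
    """Fraction of leaves sunlit at cumulative leaf-area index
    `lai_above`, from Beer's-law extinction of the direct beam as in
    Goudriaan-type sun/shade schemes.  k_b is an illustrative
    extinction coefficient for direct ('black-leaf') radiation."""
    return math.exp(-k_b * lai_above)
```

    Deeper canopy layers intercept less of the direct beam, so the shaded fraction grows with depth; the new 2-stream extension then tracks where the radiation scattered by those sunlit leaves is reabsorbed.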

  18. Material control in nuclear fuel fabrication facilities. Part II. Accountability, instrumentation, and measurement techniques in fuel fabrication facilities, P. O. 1236909. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Borgonovi, G.M.; McCartin, T.J.; McDaniel, T.; Miller, C.L.; Nguyen, T.

    1978-12-01

    This report describes the measurement techniques, instrumentation, and procedures used in the accountability and control of nuclear materials as they apply to fuel fabrication facilities. Some of the material included has appeared elsewhere and is summarized here; an extensive bibliography is included. A specific example is given of the application of the accountability methods to a model fuel fabrication facility based on the Westinghouse Anderson design.

  19. The hydrodynamical models of the cometary compact H II region

    CERN Document Server

    Zhu, Feng-Yao; Li, Juan; Zhang, Jiang-Shui; Wang, Jun-Zhi

    2015-01-01

    We have developed a full numerical method to study the gas dynamics of cometary ultra-compact (UC) H II regions and their associated photodissociation regions (PDRs). Bow-shock and champagne-flow models with a $40.9/21.9 M_\odot$ star are simulated. In the bow-shock models, the massive star is assumed to move through dense ($n=8000~cm^{-3}$) molecular material with a stellar velocity of $15~km~s^{-1}$. In the champagne-flow models, an exponential density distribution with a scale height of 0.2 pc is assumed. The profiles of the [Ne II] 12.81 $\mu$m and $H_2~S(2)$ lines from the ionized regions and PDRs are compared for the two sets of models. In champagne-flow models, emission lines from the ionized gas clearly show the effect of acceleration toward the tail due to the density gradient. The kinematics of the molecular gas inside the dense shell is mainly due to the expansion of the H II region. However, in bow-shock models the ionized gas mainly moves in the same direction as the stellar motion...

  20. Evaluating the predictive abilities of community occupancy models using AUC while accounting for imperfect detection

    Science.gov (United States)

    Zipkin, Elise F.; Grant, Evan H. Campbell; Fagan, William F.

    2012-01-01

    The ability to accurately predict patterns of species' occurrences is fundamental to the successful management of animal communities. To determine optimal management strategies, it is essential to understand species-habitat relationships and how species habitat use is related to natural or human-induced environmental changes. Using five years of monitoring data in the Chesapeake and Ohio Canal National Historical Park, Maryland, USA, we developed four multi-species hierarchical models for estimating amphibian wetland use that account for imperfect detection during sampling. The models were designed to determine which factors (wetland habitat characteristics, annual trend effects, spring/summer precipitation, and previous wetland occupancy) were most important for predicting future habitat use. We used the models to make predictions of species occurrences in sampled and unsampled wetlands and evaluated model projections using additional data. Using a Bayesian approach, we calculated a posterior distribution of receiver operating characteristic area under the curve (ROC AUC) values, which allowed us to explicitly quantify the uncertainty in the quality of our predictions and to account for false negatives in the evaluation dataset. We found that wetland hydroperiod (the length of time that a wetland holds water) as well as the occurrence state in the prior year were generally the most important factors in determining occupancy. The model with only habitat covariates predicted species occurrences well; however, knowledge of wetland use in the previous year significantly improved predictive ability at the community level and for two of 12 species/species complexes. Our results demonstrate the utility of multi-species models for understanding which factors affect species habitat use of an entire community (of species) and provide an improved methodology using AUC that is helpful for quantifying the uncertainty in model predictions while explicitly accounting for
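    The AUC evaluated over the posterior can be computed per draw as the Mann-Whitney probability that an occupied site outranks an unoccupied one. A minimal sketch of that single-draw computation (the posterior sampling that yields the AUC distribution is not shown):

```python
def auc(scores_pos, scores_neg):
    """ROC AUC as the Mann-Whitney statistic: the probability that a
    randomly chosen occupied site scores higher than a randomly chosen
    unoccupied site, with ties counted as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))
```

    Recomputing this statistic for each posterior draw of the occupancy states, as the authors do, yields a full posterior distribution of AUC rather than a single point estimate.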

  1. Accounting Models of the Human Factor and its Architecture in Scheduling and Acceptance of Administrative Solutions

    Science.gov (United States)

    2010-10-01

    …terrorism or fighting, as for example in Bhopal, Goiânia, Chernobyl, Novosibirsk. A general global trend is an extension of the tasks from military… endemic infections, dangerous insects and animals. Vector equipment and protective equipment (Eq) describes the physiological and hygienic…

  2. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

    Energy Technology Data Exchange (ETDEWEB)

    Dojcsak, L.; Marriner, J.; /Fermilab

    2010-08-01

    In this study we look at the SALT-II model for Type Ia supernova analysis, which determines the distance moduli based on the known absolute standard-candle magnitude of Type Ia supernovae. We examine the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters as a tool for analyzing trends in the model under certain assumptions about the intrinsic error. To find the best standard-candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1, and we can use the simulation to estimate the amount of color smearing indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that the RMS distribution in the variables provides one method of estimating the intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resulting intrinsic error matrix to the real data and show our results.
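    SALT-II standardization enters the Hubble diagram through the Tripp relation, in which the coefficients α and β scale the shape and color corrections. A sketch with typical, purely illustrative values for the coefficients and the absolute magnitude:

```python
def distance_modulus(mB, x1, c, alpha=0.11, beta=3.2, M=-19.3):
    """SALT-II standardization (Tripp relation):
    mu = mB - M + alpha*x1 - beta*c, where mB is the fitted peak
    B-band magnitude, x1 the light-curve shape (stretch), and c the
    color.  alpha, beta, M here are typical magnitudes of the fitted
    values, for illustration only."""
    return mB - M + alpha * x1 - beta * c
```

    Fitting alpha and beta amounts to choosing the corrections that minimize the Hubble-diagram residuals, which is exactly the exercise whose sensitivity to the assumed intrinsic error the study probes.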

  3. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect.

    Science.gov (United States)

    Carenzo, M; Pellicciotti, F; Mabillard, J; Reid, T; Brock, B W

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt and prescribe a constant reduction for the entire melt across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to the one of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the
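    The idea of a thickness-dependent temperature-index factor can be sketched with an Ostrem-shaped curve: melt enhanced under very thin debris and suppressed under thick debris. The functional form and constants below are illustrative stand-ins, not the calibrated DETI parameterization:

```python
import math

def deti_melt(temp_c, debris_m, tf_clean=6.0, c1=25.0, c2=12.0):
    """Toy debris temperature-index melt (mm w.e. per day):
    melt = TF(d) * max(T, 0), with TF(d) = tf_clean*(1 + c1*d)*exp(-c2*d)
    shaped like an Ostrem curve (peak near d = (c1-c2)/(c1*c2), here
    ~4 cm).  All constants are illustrative, not fitted DETI values."""
    tf = tf_clean * (1.0 + c1 * debris_m) * math.exp(-c2 * debris_m)
    return tf * max(temp_c, 0.0)
```

    Making the melt factor a smooth function of debris thickness is what lets a temperature-index scheme reproduce the nonlinear melt-thickness relationship without a full energy balance at every point.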

  4. An enhanced temperature index model for debris-covered glaciers accounting for thickness effect

    Science.gov (United States)

    Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.

    2016-08-01

    Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more frequent rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt-rate enhancement or reduction due to a thin or thick debris layer, respectively. However, such models require a large amount of input data that are often unavailable, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean-glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor, which does not account for the influence of varying debris thickness and prescribes a constant reduction in melt across the entire glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model's empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically based debris energy-balance model, which is in turn validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to that of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the
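
As a sketch of the enhanced temperature-index idea, the following Python fragment computes melt at the debris-ice interface with empirical factors that decay with debris thickness. All numerical values (factors, threshold, decay form) are hypothetical placeholders, not the parameters calibrated by the authors:

```python
def thickness_params(h):
    """Illustrative thickness-dependent empirical factors.
    h: debris thickness in metres. The hyperbolic decay only mimics the
    qualitative melt reduction under thick debris; values are hypothetical."""
    tf = 0.05 / (1.0 + 10.0 * h)     # temperature factor, mm w.e. h^-1 K^-1
    srf = 0.0094 / (1.0 + 10.0 * h)  # shortwave radiation factor
    return tf, srf

def deti_melt(temp_c, sw_in, h, t_thresh=1.0):
    """Hourly melt (mm w.e.) at the debris-ice interface."""
    if temp_c <= t_thresh:
        return 0.0
    tf, srf = thickness_params(h)
    return tf * temp_c + srf * sw_in

melt_thin = deti_melt(5.0, 400.0, h=0.02)   # 2 cm of debris
melt_thick = deti_melt(5.0, 400.0, h=0.50)  # 50 cm of debris melts far less
```

With identical forcing, the thin-debris point melts about five times faster than the thick-debris point, reproducing the thickness dependence qualitatively.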

  5. Structured assessment approach version 1. License submittal document content and format for material control and accounting assessment. Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Parziale, A.A.; Sacks, I.J.

    1979-10-01

    A methodology, the Structured Assessment Approach (SAA), has been developed for assessing the effectiveness of material control and accounting (MC and A) safeguards systems at nuclear fuel cycle facilities. This methodology has been refined into a computational tool, the SAA Version 1 computational package, which was first used to analyze a hypothetical fuel cycle facility and has more recently been used to assess operational nuclear plants. The Version 1 analysis package is designed to analyze safeguards systems that prevent the diversion of special nuclear material (SNM) from nuclear fuel cycle facilities and to provide assurance that diversion has not occurred. This report is the second volume, License Submittal Document Content and Format for Material Control and Accounting Assessment, of a four-volume document. It presents the content and format of the License Submittal Document (LSD) necessary for MC and A assessment with the SAA Version 1. The LSD is designed to provide the data input needed to perform all four stages of analysis associated with the SAA. A full-scale but hypothetical fuel cycle facility (HFCF) is used as an example to illustrate the required input data content and format and the procedure for generating the LSD. Generation of the LSD is the responsibility of the nuclear facility license applicant.

  6. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE)

    Directory of Open Access Journals (Sweden)

    Brentani Helena

    2004-08-01

    Full Text Available Abstract Background An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern", or Massively Parallel Signature Sequencing (MPSS) is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results We introduce a Bayesian model that accounts for the within-class variability by means of a mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression into a more reliable one. Our method is freely available, under the GPL/GNU copyleft, through a user-friendly web-based online tool or as R language scripts at the supplemental website.
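
A minimal Python illustration of why within-class variability matters: if the true tag abundance varies between replicate libraries, the count variance follows a beta-binomial rather than a binomial law, so tests based on the pooled binomial variance overstate significance. All numbers are illustrative:

```python
def binom_var(n, p):
    """Count variance if every library truly had the same tag abundance."""
    return n * p * (1.0 - p)

def beta_binom_var(n, p, rho):
    """Count variance when the true abundance itself varies between
    sampled individuals (rho: within-class overdispersion, 0 <= rho < 1)."""
    return n * p * (1.0 - p) * (1.0 + (n - 1.0) * rho)

n, p = 50000, 2e-4        # library size and mean tag abundance (illustrative)
v_technical = binom_var(n, p)
v_with_biology = beta_binom_var(n, p, rho=0.001)
```

Even this tiny overdispersion inflates the variance roughly 50-fold, which is why tags can look highly significant under the pooled binomial model yet unremarkable under the mixture model.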

  7. The importance of accounting for the uncertainty of published prognostic model estimates.

    Science.gov (United States)

    Young, Tracey A; Thompson, Simon

    2004-01-01

    We report the importance of properly reflecting the uncertainty associated with prognostic model estimates when calculating the survival benefit of a treatment or technology, using liver transplantation as an example. Monte Carlo simulation techniques were used to account for the uncertainty of prognostic model estimates, using the standard errors of the regression coefficients and their correlations. These methods were applied to patients with primary biliary cirrhosis undergoing liver transplantation, using a prognostic model from a historic cohort who did not undergo transplantation. The survival gain over 4 years from transplantation was estimated. Ignoring the uncertainty in the prognostic model, the estimated survival benefit of liver transplantation was 16.7 months (95 percent confidence interval [CI], 13.5 to 20.1) and was statistically significant. It is therefore important that the precision of regression coefficients be made available to users of published prognostic models. Ignoring this additional information substantially underestimates uncertainty, which can in turn mislead policy decisions.
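
The Monte Carlo idea can be sketched in a few lines of Python: draw coefficient vectors from a multivariate normal defined by published standard errors and correlations, then propagate each draw through the model. The coefficients, covariates and correlation below are invented for illustration, not taken from the liver transplantation model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical log-hazard coefficients with standard errors and correlation
beta_hat = np.array([0.85, 0.30])
se = np.array([0.10, 0.12])
corr = np.array([[1.0, -0.3],
                 [-0.3, 1.0]])
cov = np.outer(se, se) * corr

# Monte Carlo: sample coefficient vectors, propagate to one patient's risk score
draws = rng.multivariate_normal(beta_hat, cov, size=5000)
x = np.array([1.2, 0.5])            # the patient's covariates
scores = draws @ x

point = float(beta_hat @ x)         # point estimate ignores coefficient noise
lo, hi = np.percentile(scores, [2.5, 97.5])
```

The interval (lo, hi) reflects the coefficient uncertainty that the point estimate alone conceals.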

  8. Accounting for crop rotations in acreage choice modeling: a tractable modeling framework

    OpenAIRE

    Carpentier, Alain; Gohin, Alexandre

    2014-01-01

    Crop rotation effects and constraints are major determinants of farmers’ crop choices. Crop rotations are also keystone elements of most environmentally friendly cropping systems. The aim of this paper is twofold. First, it proposes simple tools for investigating optimal dynamic crop acreage choices accounting for crop rotation effects and constraints in an uncertain context. Second, it illustrates the impacts of crop rotation effects and constraints on farmers’ acreage choices through simple...

  9. Synthesis, Spectral Characterization, Molecular Modeling, and Antimicrobial Studies of Cu(II), Ni(II), Co(II), Mn(II), and Zn(II) Complexes of ONO Schiff Base

    Directory of Open Access Journals (Sweden)

    Padmaja Mendu

    2012-01-01

    Full Text Available A series of Cu(II), Ni(II), Co(II), Mn(II), and Zn(II) complexes has been synthesized from the Schiff base ligand L. The Schiff base ligand [(4-oxo-4H-chromen-3-yl)methylene]benzohydrazide (L) was synthesized by the reaction between chromone-3-carbaldehyde and benzoyl hydrazine. The nature of bonding and the geometry of the transition metal complexes, as well as of the Schiff base ligand L, have been deduced from elemental analysis, FT-IR, UV-Vis, ¹H NMR, and ESR spectral studies, mass spectrometry, thermal (TGA and DTA) analysis, magnetic susceptibility, and molar conductance measurements. The Cu(II), Ni(II), Co(II), and Mn(II) metal ions form 1:2 (M:L) complexes, while Zn(II) forms a 1:1 (M:L) complex. Based on the elemental, conductance, and spectral studies, a six-coordinate geometry was assigned to the Cu(II), Ni(II), Co(II), Mn(II), and Zn(II) complexes. The complexes are 1:2 electrolytes in DMSO except for the zinc complex, which is neutral in DMSO. The ligand L acts as a tridentate ligand and coordinates through the nitrogen atom of the azomethine group, the oxygen atom of the keto group of the γ-pyrone ring, and the oxygen atom of the hydrazide group of benzoyl hydrazine. The 3D molecular modeling and energies of all the compounds are furnished. The biological activity of the ligand and its complexes has been studied on four bacteria (E. coli, Edwardsiella, Pseudomonas, and B. subtilis) and two fungi (Penicillium and Trichoderma) by the well and disc diffusion methods; the metal chelates were found to be more active than the free Schiff base ligand.

  10. Current Account Imbalances and Economic Growth: a two-country model with real-financial linkages

    OpenAIRE

    Laura Barbosa de Carvalho

    2012-01-01

    This paper builds a two-country stock-flow consistent model by combining a debt-led economy that emits the international reserve currency with an export-led economy. The model has two major implications. First, an initial trade deficit in the debt-led country leads to a permanent imbalance in the current account, even when the exchange rate is at parity. Second, different re-balancing mechanisms, namely a currency depreciation or the reduction of the propensity to import in the debt-led c...

  11. Model of inventory replenishment in periodic review accounting for the occurrence of shortages

    Directory of Open Access Journals (Sweden)

    Stanisław Krzyżaniak

    2014-03-01

    Full Text Available Background: Despite the development of alternative concepts of goods flow management, inventory management under conditions of random variations in demand remains an important issue, both from the point of view of inventory keeping and replenishment costs and of the service level measured as the level of inventory availability. There are a number of inventory replenishment systems used under these conditions, but they are mostly developments of two basic systems: reorder point-based and periodic review-based. The paper deals with the latter system. Numerous studies indicate the need to improve the classical models describing that system, mainly in order to adapt the model better to actual conditions. This allows a correct selection of the parameters that control the inventory replenishment system in use and, as a result, the achievement of the expected economic effects. Methods: This research aimed at building a model of the periodic review system that reflects the relations (observed during simulation tests) between the volume of inventory shortages, the degree to which so-called deferred demand is accounted for, and the service level expressed as the probability of satisfying demand in the review and inventory replenishment cycle. The following model building and testing method has been applied: numerical simulation of inventory replenishment - detailed analysis of simulation results - construction of the model taking into account the regularities observed during the simulations - determination of principles for solving the system of relations creating the model - verification of the results obtained from the model using the results from simulation. Results: Selected results are presented of calculations based on classical formulas and on the developed model, describing the relations between the service level and the parameters controlling the discussed inventory replenishment system. 
The results are compared to the simulation
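
A toy Python simulation of the periodic review (order-up-to) system, with unmet demand deferred and served at the next delivery. The demand process and parameters are arbitrary illustrations, not the regularities modeled in the paper:

```python
import random

random.seed(7)

def service_levels(S, T, periods=30000, lam=10.0):
    """Order-up-to-S policy reviewed every T periods, zero lead time.
    Unmet demand is deferred (backordered) and served at the next delivery.
    Returns (immediate fill rate, fraction of shortage-free cycles)."""
    on_hand, served_now = float(S), 0.0
    demanded, short_cycles, cycles, cycle_short = 0.0, 0, 0, False
    for t in range(1, periods + 1):
        d = random.expovariate(1.0 / lam)   # stochastic period demand
        demanded += d
        met = min(on_hand, d)
        served_now += met
        on_hand -= met
        if d > met:
            cycle_short = True              # some demand deferred this cycle
        if t % T == 0:                      # review: replenish up to S,
            on_hand = float(S)              # deferred demand is then served
            cycles += 1
            short_cycles += cycle_short
            cycle_short = False
    return served_now / demanded, 1.0 - short_cycles / cycles

fill_30, p_ok_30 = service_levels(S=30, T=3)
fill_60, p_ok_60 = service_levels(S=60, T=3)
```

Raising the order-up-to level S improves both service measures, but even a generous S leaves a residual shortage probability, which is the regime where accounting for deferred demand matters.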

  12. Generation of SEEAW asset accounts based on water resources management models

    Science.gov (United States)

    Pedro-Monzonís, María; Solera, Abel; Andreu, Joaquín

    2015-04-01

    One of the main challenges of the 21st century is the sustainable use of water, since water is an essential element for the life of all who inhabit our planet. In many cases, the lack of economic valuation of water resources leads to inefficient water use. In this regard, society expects policymakers and stakeholders to maximise the benefit produced per unit of natural resources. Water planning and Integrated Water Resources Management (IWRM) represent the best way to achieve this goal. The System of Environmental-Economic Accounting for Water (SEEAW) serves as a tool for water allocation that enables the building of water balances in a river basin. The main concern of the SEEAW is to provide a standard approach that allows policymakers to compare results between different territories. But building water accounts is a complex task owing to the difficulty of collecting the required data. Because the components of the hydrological cycle are difficult to gauge, simulation models have become an essential tool, extensively employed in recent decades. The aim of this paper is to present the construction of a database that enables the combined use of hydrological models and water resources models developed with AQUATOOL DSSS to fill in the SEEAW tables. This research is framed within the Water Accounting in a Multi-Catchment District (WAMCD) project, financed by the European Union. Its main goal is the development of water accounts in the Mediterranean Andalusian River Basin District, in Spain. The research thus contributes to the objectives of the "Blueprint to safeguard Europe's water resources". It is noteworthy that, in Spain, a large part of these methodological decisions are included in the Spanish Guideline of Water Planning, with normative status guaranteeing the consistency and comparability of the results.

  13. Optimization Model for Refinery Hydrogen Networks Part II

    Directory of Open Access Journals (Sweden)

    Enrique E. Tarifa

    2016-10-01

    Full Text Available In the first part of this work, an optimization model was presented that minimizes the hydrogen consumption of a refinery. In this second part, the model is augmented to take into account the length of the pipelines, the addition of purification units, and the installation of new compressors, all features of real industrial networks. The model was implemented in the LINGO software environment. For data input and results output, an Excel spreadsheet was developed that interfaces with LINGO. The model is currently in use at the YPF Luján de Cuyo refinery (Mendoza, Argentina).

  14. Accounting for anatomical noise in search-capable model observers for planar nuclear imaging.

    Science.gov (United States)

    Sen, Anando; Gifford, Howard C

    2016-01-01

    Model observers intended to predict the diagnostic performance of human observers should account for the effects of both quantum and anatomical noise. We compared the abilities of several visual-search (VS) and scanning Hotelling-type models to account for anatomical noise in a localization receiver operating characteristic (LROC) study involving simulated nuclear medicine images. Our VS observer invoked a two-stage process of search and analysis. The images featured lesions in the prostate and pelvic lymph nodes. Lesion contrast and the geometric resolution and sensitivity of the imaging collimator were the study variables. A set of anthropomorphic mathematical phantoms was imaged with an analytic projector based on eight parallel-hole collimators with different sensitivity and resolution properties. The LROC study was conducted with human observers and the channelized nonprewhitening, channelized Hotelling (CH) and VS model observers. The CH observer was applied in a "background-known-statistically" protocol while the VS observer performed a quasi-background-known-exactly task. Both of these models were applied with and without internal noise in the decision variables. A perceptual search threshold was also tested with the VS observer. The model observers without inefficiencies failed to mimic the average performance trend for the humans. The CH and VS observers with internal noise matched the humans primarily at low collimator sensitivities. With both internal noise and the search threshold, the VS observer attained quantitative agreement with the human observers. Computational efficiency is an important advantage of the VS observer.

  15. A Buffer Model Account of Behavioral and ERP Patterns in the Von Restorff Paradigm

    Directory of Open Access Journals (Sweden)

    Siri-Maria Kamp

    2016-06-01

    Full Text Available We combined a mechanistic model of episodic encoding with theories on the functional significance of two event-related potential (ERP) components to develop an integrated account of the Von Restorff effect, which refers to the enhanced recall probability for an item that deviates in some feature from the other items in its study list. The buffer model of Lehman and Malmberg (2009, 2013) can account for this effect: items encountered during encoding enter an episodic buffer where they are actively rehearsed. When a deviant item is encountered, the buffer is emptied of its prior content in order to re-allocate encoding resources towards this item, a process labeled "compartmentalization". Based on theories of their functional significance, the P300 component of the ERP may co-occur with this hypothesized compartmentalization process, while the frontal slow wave may index rehearsal. We derived predictions from this integrated model for output patterns in free recall, systematic variance in ERP components, and associations between the two types of measures in a dataset of 45 participants who studied and freely recalled lists of the Von Restorff type. Our major predictions were confirmed, and the behavioral and physiological results were consistent with the predictions derived from the model. These findings demonstrate that constraining mechanistic models of episodic memory with brain activity patterns, and generating predictions for relationships between brain activity and behavior, can lead to novel insights into the relationship between the brain, the mind, and behavior.
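
The compartmentalization mechanism can be caricatured with a toy rehearsal buffer in Python: flushing the buffer when the deviant item arrives concentrates rehearsal on it, which maps onto its recall advantage. This is a deliberately simplified sketch, not the Lehman and Malmberg model itself:

```python
import random

random.seed(11)

def rehearsal_counts(n_items, deviant_idx, buffer_size=4):
    """Toy buffer account: items enter a small rehearsal buffer; when the
    deviant item arrives the buffer is flushed ('compartmentalization'),
    so the deviant is briefly rehearsed with little competition."""
    counts = [0] * n_items
    buf = []
    for i in range(n_items):
        if i == deviant_idx:
            buf = []                              # flush prior content
        if len(buf) >= buffer_size:
            buf.pop(random.randrange(len(buf)))   # displace a random item
        buf.append(i)
        for j in buf:                             # everything currently in
            counts[j] += 1                        # the buffer is rehearsed
    return counts

counts = rehearsal_counts(15, deviant_idx=7)
```

The item studied just before the deviant is rehearsed once and then flushed, while the deviant accumulates several uncontested rehearsals, the qualitative signature of the Von Restorff advantage.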

  16. Constructing kinetic models to elucidate structural dynamics of a complete RNA polymerase II elongation cycle

    Science.gov (United States)

    Yu, Jin; Da, Lin-Tai; Huang, Xuhui

    2015-02-01

    The RNA polymerase II elongation cycle is central to eukaryotic transcription. Although multiple intermediates of the elongation complex have been identified, the dynamical mechanisms remain elusive or controversial. Here we build a structure-based kinetic model of a full elongation cycle of polymerase II, taking into account transition rates and conformational changes characterized both in single-molecule experimental studies and in computational simulations at the atomistic scale. Our model suggests that a force-dependent slow transition detected in the single-molecule experiments corresponds to an essential conformational change, the opening of the trigger loop (TL), prior to polymerase translocation. Analyses of the E1103G mutant study and of potential sequence effects on translocation substantiate this proposal. Our model also investigates another slow transition detected in the transcription elongation cycle that is independent of mechanical force. If this force-independent slow transition happens as the TL gradually closes upon NTP binding, the analyses indicate that the binding affinity of NTP to the polymerase has to be sufficiently high. Otherwise, one infers that the slow transition happens pre-catalytically but after TL closing. Accordingly, accurate determination of the intrinsic properties of NTP binding is needed for an improved characterization of polymerase elongation. Overall, the study provides a working model of polymerase II elongation under a generic Brownian ratchet mechanism, with the most essential structural transitions and functional kinetics elucidated.
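
The kinetic-model style of reasoning can be illustrated with a toy linear reaction scheme for one nucleotide-addition cycle, simulated by drawing exponential waiting times (a Gillespie-type sample). The three rates below are invented placeholders, not the fitted transition rates of the paper:

```python
import random

random.seed(1)

# Toy irreversible scheme for one nucleotide-addition cycle (rates in 1/s):
# pre-translocated --TL opening--> translocated --NTP binding--> TL closed
#   --catalysis--> next pre-translocated state
rates = {"tl_open": 5.0, "ntp_bind": 50.0, "catalysis": 20.0}

def cycle_time():
    """One stochastic cycle: sum of exponential dwell times per step."""
    return sum(random.expovariate(k) for k in rates.values())

n = 20000
mean_t = sum(cycle_time() for _ in range(n)) / n
expected = sum(1.0 / k for k in rates.values())  # analytic mean cycle time
```

For an irreversible chain the mean cycle time is just the sum of the inverse rates, so the slowest step (here the TL opening) dominates the elongation velocity, which is the logic behind identifying slow transitions.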

  17. Regional Balance Model of Financial Flows through Sectoral Approaches System of National Accounts

    Directory of Open Access Journals (Sweden)

    Ekaterina Aleksandrovna Zaharchuk

    2017-03-01

    Full Text Available The main purpose of the study whose results are reflected in this article is the theoretical and methodological substantiation of the possibility of building a regional balance model of financial flows consistent with the principles of construction of the System of National Accounts (SNA). The paper summarizes the international experience of building regional accounts in the SNA and discusses the advantages and disadvantages of existing techniques for constructing a Social Accounting Matrix. The authors propose an approach to building a regional balance model of financial flows based on disaggregated tables of the formation, distribution and use of the added value of a territory within the institutional sectors of the SNA (corporations, public administration, households). To resolve the problem of transferring value added from industries to sectors, the authors offer an approach to accounting for the development, distribution and use of value added within the institutional sectors of the territories. The methods of calculation are based on the publicly available information base of statistics agencies and federal services. The authors provide a scheme of the interrelations of the indicators of the regional balance model of financial flows. It allows the movement of regional resources to be mutually coordinated across the «corporation», «public administration» and «households» sectors, and the region's cash flows to be coordinated by sector and direction of use. As a result, a single account of the formation and distribution of territorial financial resources is formed, which constitutes the regional balance model of financial flows. This matrix shows the distribution of financial resources by income source and sector, where the components of the formation (compensation, taxes and gross profit), distribution (transfers and payments) and use (final consumption, accumulation) of value added are
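
The balance principle behind such a matrix can be shown in a few lines of Python: in a consistent sectoral flow table, every sector's total receipts (row sum) must equal its total spending (column sum). The three-sector numbers are invented for illustration:

```python
sectors = ["corporations", "public administration", "households"]

# flows[i][j]: payment received by sector i from sector j
# (illustrative figures; the paper builds such tables from official statistics)
flows = [
    [0.0, 20.0, 60.0],   # corporations receive from government and households
    [30.0, 0.0, 25.0],   # taxes and payments to public administration
    [50.0, 35.0, 0.0],   # wages and transfers to households
]

receipts = [sum(row) for row in flows]
spending = [sum(flows[i][j] for i in range(3)) for j in range(3)]

# A consistent balance model requires receipts == spending for every sector
balanced = all(abs(r - s) < 1e-9 for r, s in zip(receipts, spending))
```

If any cell is misestimated, the corresponding row/column pair stops balancing, which is exactly how such a matrix exposes inconsistencies in the underlying statistics.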

  18. A comparison of Graham and Piotroski investment models using accounting information and efficacy measurement

    Directory of Open Access Journals (Sweden)

    Nusrat Jahan

    2016-03-01

    Full Text Available We examine the investment models of Benjamin Graham and Joseph Piotroski and compare their efficacy by running backtests, using screening rules and ranking systems built in Portfolio 123. Using different combinations of screening rules and ranking systems, we also examine the performance of the Piotroski and Graham investment models. We find that the combination of the Piotroski and Graham investment models outperforms the S&P 500. We also find that Piotroski screening with Graham ranking generates the highest average annualized return among the combinations of screening rules and ranking systems analyzed in this paper. Overall, our results show a profound impact of accounting information on investors' decision making.
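
As a sketch of the accounting-signal side, the Piotroski F-score (0-9) can be computed from two years of simplified statements. The dictionary fields and figures below are hypothetical, not a real data-feed schema:

```python
def piotroski_f(cur, prev):
    """Piotroski F-score: one point per satisfied accounting signal."""
    roa = cur["net_income"] / cur["assets"]
    roa_prev = prev["net_income"] / prev["assets"]
    cfo = cur["cfo"] / cur["assets"]
    signals = [
        roa > 0,                                      # profitable
        cfo > 0,                                      # positive cash flow
        roa > roa_prev,                               # improving ROA
        cfo > roa,                                    # accruals check
        cur["leverage"] < prev["leverage"],           # deleveraging
        cur["current_ratio"] > prev["current_ratio"], # improving liquidity
        cur["shares"] <= prev["shares"],              # no dilution
        cur["gross_margin"] > prev["gross_margin"],   # improving margin
        cur["asset_turnover"] > prev["asset_turnover"],
    ]
    return sum(signals)

cur = dict(net_income=120, assets=1000, cfo=150, leverage=0.30,
           current_ratio=1.8, shares=100, gross_margin=0.42, asset_turnover=0.9)
prev = dict(net_income=80, assets=950, cfo=90, leverage=0.35,
            current_ratio=1.6, shares=100, gross_margin=0.40, asset_turnover=0.8)
score = piotroski_f(cur, prev)
```

A screen would then keep only firms with a high score (e.g. 8 or 9) before applying a Graham-style ranking.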

  19. Improved Mathematical Model of PMSM Taking Into Account Cogging Torque Oscillations

    Directory of Open Access Journals (Sweden)

    TUDORACHE, T.

    2012-08-01

    Full Text Available This paper presents an improved mathematical model of the Permanent Magnet Synchronous Machine (PMSM) that takes into account the Cogging Torque (CT) oscillations that appear due to the mutual attraction between the Permanent Magnets (PMs) and the anisotropic stator armature. The electromagnetic torque formula in the proposed model contains an analytical expression of the CT calibrated by Finite Element (FE) analysis. The numerical calibration is carried out using a data-fitting procedure based on the downhill simplex optimization algorithm. The proposed model is characterized by good accuracy and reduced computational effort, its performance being verified by comparison with the classical d-q model of the machine in the Matlab/Simulink environment.
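
The structure of the augmented torque expression can be sketched in Python: a smooth electromagnetic torque plus a sum of cogging harmonics, whose orders and amplitudes would in practice be calibrated against FE results (the values below are invented):

```python
import math

def torque(theta, t_em=10.0, cogging=((6, 0.4), (12, 0.15))):
    """Electromagnetic torque plus cogging ripple:
    T = T_em + sum_k A_k * sin(k * theta).
    Harmonic orders and amplitudes are illustrative placeholders."""
    return t_em + sum(a * math.sin(k * theta) for k, a in cogging)

n = 3600
samples = [torque(2.0 * math.pi * i / n) for i in range(n)]
mean_t = sum(samples) / n   # cogging harmonics average out over a period
peak = max(samples)         # but they add visible torque ripple
```

Because the cogging terms are zero-mean over one period, the classical d-q torque is recovered on average, while the added harmonics reproduce the ripple the classical model misses.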

  20. A statistical RCL interconnect delay model taking account of process variations

    Institute of Scientific and Technical Information of China (English)

    Zhu Zhang-Ming; Wan Da-Jing; Yang Yin-Tang; En Yun-Fei

    2011-01-01

    As the feature size of CMOS integrated circuits continues to shrink, process variations have become a key factor affecting interconnect performance. Based on the equivalent Elmore model, and using polynomial chaos theory and the Galerkin method, we propose a linear statistical RCL interconnect delay model that takes process variations into account by successive application of the linear approximation method. Based on a variety of nano-CMOS process parameters, HSPICE simulation results show that the maximum error of the proposed model is less than 3.5%. The proposed model is simple and of high precision, and can be used in the analysis and design of nanometer integrated circuit interconnect systems.
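
The deterministic core of such a model, the Elmore delay of an RC ladder, and the effect of Gaussian process variation on it can be sketched in Python (the paper's polynomial-chaos machinery is replaced here by plain Monte Carlo, and all element values are illustrative):

```python
import random

random.seed(3)

def elmore_delay(rs, cs):
    """Elmore delay of an RC ladder: each segment resistance sees all
    downstream capacitance."""
    total, downstream = 0.0, sum(cs)
    for r, c in zip(rs, cs):
        total += r * downstream
        downstream -= c
    return total

n = 8
r0, c0 = 10.0, 2e-3   # nominal per-segment R (ohm) and C (F); toy values
nominal = elmore_delay([r0] * n, [c0] * n)   # = r0*c0*(8+7+...+1) = 0.72 s

# Plain Monte Carlo over Gaussian process variation (10% at 3 sigma)
sigma = 0.1 / 3.0
samples = []
for _ in range(5000):
    rs = [r0 * (1.0 + random.gauss(0.0, sigma)) for _ in range(n)]
    cs = [c0 * (1.0 + random.gauss(0.0, sigma)) for _ in range(n)]
    samples.append(elmore_delay(rs, cs))
mc_mean = sum(samples) / len(samples)
```

With independent zero-mean perturbations the mean delay stays at the nominal value; the spread of `samples` is what a statistical delay model summarizes analytically instead of by sampling.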

  1. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    Science.gov (United States)

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption depends on the modeled individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  2. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    Science.gov (United States)

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption depends on the modeled individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  3. Statistical model of rough surface contact accounting for size-dependent plasticity and asperity interaction

    Science.gov (United States)

    Song, H.; Vakis, A. I.; Liu, X.; Van der Giessen, E.

    2017-09-01

    The work by Greenwood and Williamson (GW) has initiated a simple but effective method of contact mechanics: statistical modeling based on the mechanical response of a single asperity. Two main assumptions of the original GW model are that the asperity response is purely elastic and that there is no interaction between asperities. However, as asperities lie on a continuous substrate, the deformation of one asperity will change the height of all other asperities through deformation of the substrate and will thus influence subsequent contact evolution. Moreover, a high asperity contact pressure will result in plasticity, which below tens of microns is size dependent, with smaller being harder. In this paper, the asperity interaction effect is taken into account through substrate deformation, while a size-dependent plasticity model is adopted for individual asperities. The intrinsic length in the strain gradient plasticity (SGP) theory is obtained by fitting to two-dimensional discrete dislocation plasticity simulations of the flattening of a single asperity. By utilizing the single asperity response in three dimensions and taking asperity interaction into account, a statistical calculation of rough surface contact is performed. The effectiveness of the statistical model is addressed by comparison with full-detail finite element simulations of rough surface contact using SGP. Throughout the paper, our focus is on the difference of contact predictions based on size-dependent plasticity as compared to conventional size-independent plasticity.
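
The original GW recipe (elastic asperities, no interaction) is easy to state in code: integrate the Hertzian load of each summit over the Gaussian height distribution. This sketch omits both refinements the paper adds (asperity interaction and size-dependent plasticity), and all material and roughness numbers are illustrative:

```python
import math

def gw_load(separation, eta=200.0, radius=1e-5, e_star=1e9,
            sigma=1e-6, n_grid=2000):
    """Greenwood-Williamson load per unit nominal area.
    Summit heights z ~ N(0, sigma^2); each summit with z > separation makes
    a Hertzian contact carrying P = (4/3) * E* * sqrt(R) * w^1.5,
    where w = z - separation is the interference. eta: summit density."""
    total, zmax = 0.0, 6.0 * sigma
    dz = (zmax - separation) / n_grid
    if dz <= 0.0:
        return 0.0
    for i in range(n_grid):
        z = separation + (i + 0.5) * dz
        phi = math.exp(-z * z / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
        w = z - separation
        total += (4.0 / 3.0) * e_star * math.sqrt(radius) * w ** 1.5 * phi * dz
    return eta * total

p_close = gw_load(1e-6)   # separation of one sigma
p_far = gw_load(2e-6)     # larger separation: exponentially less load
```

Because only the Gaussian tail above the separation contributes, the carried load falls off steeply with separation; the paper's extensions change the single-asperity response inside the integrand, not this overall statistical structure.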

  4. Theoretical models for Type I and Type II supernova

    Energy Technology Data Exchange (ETDEWEB)

    Woosley, S.E.; Weaver, T.A.

    1985-01-01

    Recent theoretical progress in understanding the origin and nature of Type I and Type II supernovae is discussed. New Type II presupernova models, characterized by a variety of iron core masses at the time of collapse, are presented, and the sensitivity to the ¹²C(α,γ)¹⁶O reaction rate is explained. Stars heavier than about 20 M☉ must explode by a "delayed" mechanism not directly related to the hydrodynamical core bounce, and a subset is likely to leave black hole remnants. The isotopic nucleosynthesis expected from these massive stellar explosions is in striking agreement with the sun. Type I supernovae result when an accreting white dwarf undergoes a thermonuclear explosion. The critical role of the velocity of the deflagration front in determining the light curve, spectrum, and, especially, isotopic nucleosynthesis in these models is explored. 76 refs., 8 figs.

  5. Modelling representative and coherent Danish farm types based on farm accountancy data for use in environmental assessments

    DEFF Research Database (Denmark)

    Dalgaard, Randi; Halberg, Niels; Kristensen, Ib Sillebak

    2006-01-01

    -oriented environmental assessment (e.g. greenhouse gas emissions per kg pork). The objective of this study was to establish a national agricultural model for estimating data on resource use, production and environmentally important emissions for a set of representative farm types. Every year a sample of farm accounts … is established in order to report Danish agro-economical data to the 'Farm Accountancy Data Network' (FADN), and to produce 'The annual Danish account statistics for agriculture'. The farm accounts are selected and weighted to be representative for the Danish agricultural sector, and similar samples of farm … accounts are collected in most of the European countries. Based on a sample of 2138 farm accounts from year 1999 a national agricultural model, consisting of 31 farm types, was constructed. The farm accounts were grouped according to the major soil types, the number of working hours, the most important

  6. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice

    Energy Technology Data Exchange (ETDEWEB)

    Ahlroth, S.

    2001-01-01

    This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, I develop an optimal control theory model for adjusting NDP for the effects of SO₂ and NOₓ emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health, and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.

  7. A simple model for predicting sprint-race times accounting for energy loss on the curve

    Science.gov (United States)

    Mureika, J. R.

    1997-11-01

    The mathematical model of J. Keller for predicting world-record race times, based on a simple differential equation of motion, predicted the records of its day quite well. One of its shortcomings is that it neglects a sprinter's energy loss around a curve, a most important consideration particularly in the 200m-400m events. An extension to Keller's work is considered, modeling the aforementioned energy loss as a simple function of the centrifugal force acting on the runner around the curve. Theoretical world-record performances for the indoor and outdoor 200m are discussed, and the use of the model at 300m is investigated. Some predictions are made for possible outdoor and indoor 200m times as run by Canadian 100m world-record holder Donovan Bailey, based on his 100m final performance at the 1996 Olympic Games in Atlanta.

  8. A Simple Model for Predicting Sprint Race Times Accounting for Energy Loss on the Curve

    CERN Document Server

    Mureika, J R

    1997-01-01

The mathematical model of J. Keller for predicting World Record race times, based on a simple differential equation of motion, predicted quite well the records of the day. One of its shortcomings is that it neglects to account for a sprinter's energy loss around a curve, a most important consideration particularly in the 200m--400m. An extension to Keller's work is considered, modeling the aforementioned energy loss as a simple function of the centrifugal force acting on the runner around the curve. Theoretical World Record performances for indoor and outdoor 200m are discussed, and the use of the model at 300m is investigated. Some predictions are made for possible 200m outdoor and indoor times as run by Canadian 100m WR holder Donovan Bailey, based on his 100m final performance at the 1996 Olympic Games in Atlanta.

  9. Climate projections of future extreme events accounting for modelling uncertainties and historical simulation biases

    Science.gov (United States)

    Brown, Simon J.; Murphy, James M.; Sexton, David M. H.; Harris, Glen R.

    2014-11-01

    A methodology is presented for providing projections of absolute future values of extreme weather events that takes into account key uncertainties in predicting future climate. This is achieved by characterising both observed and modelled extremes with a single form of non-stationary extreme value (EV) distribution that depends on global mean temperature and which includes terms that account for model bias. Such a distribution allows the prediction of future "observed" extremes for any period in the twenty-first century. Uncertainty in modelling future climate, arising from a wide range of atmospheric, oceanic, sulphur cycle and carbon cycle processes, is accounted for by using probabilistic distributions of future global temperature and EV parameters. These distributions are generated by Bayesian sampling of emulators with samples weighted by their likelihood with respect to a set of observational constraints. The emulators are trained on a large perturbed parameter ensemble of global simulations of the recent past, and the equilibrium response to doubled CO2. Emulated global EV parameters are converted to the relevant regional scale through downscaling relationships derived from a smaller perturbed parameter regional climate model ensemble. The simultaneous fitting of the EV model to regional model data and observations allows the characterisation of how observed extremes may change in the future irrespective of biases that may be present in the regional models simulation of the recent past climate. The clearest impact of a parameter perturbation in this ensemble was found to be the depth to which plants can access water. Members with shallow soils tend to be biased hot and dry in summer for the observational period. These biases also appear to have an impact on the potential future response for summer temperatures with some members with shallow soils having increases for extremes that reduce with extreme severity. 
We apply this methodology for London, using the

  10. FORMATION OF CONSUMER ACCOMMODATION MODELS WITH DUE ACCOUNT OF POPULATION INVESTMENT POTENTIAL

    Directory of Open Access Journals (Sweden)

    I. Shaniukevich

    2013-01-01

Full Text Available The paper considers the theme of typological urban housing diversity, which is relevant for a modern residential real estate market. Quantitative and qualitative characteristics of the existing urban residential accommodation and of new house building have been analyzed in the paper. The paper presents the author's calculations of the extent of differentiation and of changes in the economic opportunities of the population. Potential consumer accommodation models with specific standardized characteristics have been differentiated in terms of the economic prosperity of the population. The paper substantiates proposals on accounting for and implementing typological differences in government housing policy.

  11. A simple bioclogging model that accounts for spatial spreading of bacteria

    Directory of Open Access Journals (Sweden)

    Laurent Demaret

    2009-04-01

Full Text Available An extension of biobarrier formation and bioclogging models is presented that accounts for the spatial expansion of the bacterial population in the soil. The bacteria move into neighboring sites if locally almost all of the available pore space is occupied and the environmental conditions are such that further growth of the bacterial population is sustained. This is described by a density-dependent, doubly degenerate diffusion equation that is coupled with the Darcy equations and a transport-reaction equation for growth-limiting substrates. We conduct computational simulations of the governing differential equation system.
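The degenerate-diffusion mechanism described here can be sketched with a one-dimensional explicit finite-difference scheme. The diffusivity form, threshold, growth law, and all coefficients below are illustrative choices, and the Darcy-flow and substrate coupling of the full model are omitted:

```python
import numpy as np

def step(u, dx, dt, d0=1.0, u_thresh=0.8, growth=0.5):
    """One explicit step of u_t = (D(u) u_x)_x + growth * u * (1 - u),
    with D(u) = d0 * max(u - u_thresh, 0)^2: the population only
    spreads where the local density nearly fills the pore space."""
    D = d0 * np.maximum(u - u_thresh, 0.0) ** 2
    # diffusivity at cell interfaces (arithmetic mean); zero-flux boundaries
    Di = 0.5 * (D[:-1] + D[1:])
    flux = Di * np.diff(u) / dx
    div = np.zeros_like(u)
    div[1:-1] = np.diff(flux) / dx
    return np.clip(u + dt * (div + growth * u * (1.0 - u)), 0.0, 1.0)

u = np.zeros(100)
u[45:55] = 0.95               # a nearly clogged patch in the middle
for _ in range(5000):
    u = step(u, dx=0.1, dt=1e-3)
```

The patch saturates logistically while a sharp front advances into neighboring cells only once the density behind it exceeds the threshold, which is the qualitative behavior the abstract describes.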

  12. Photon Number Conserving Models of H II Bubbles during Reionization

    CERN Document Server

    Paranjape, Aseem; Padmanabhan, Hamsa

    2015-01-01

    Traditional excursion set based models of H II bubble growth during the epoch of reionization are known to violate photon number conservation, in the sense that the mass fraction in ionized bubbles in these models does not equal the ratio of the number of ionizing photons produced by sources and the number of hydrogen atoms in the intergalactic medium. We demonstrate that this problem arises from a fundamental conceptual shortcoming of the excursion set approach (already recognised in the literature on this formalism) which only tracks average mass fractions instead of the exact, stochastic source counts. With this insight, we build an approximately photon number conserving Monte Carlo model of bubble growth based on partitioning regions of dark matter into halos. Our model, which is formally valid for white noise initial conditions (ICs), shows dramatic improvements in photon number conservation, as well as substantial differences in the bubble size distribution, as compared to traditional models. We explore...

  13. A parametric ribcage geometry model accounting for variations among the adult population.

    Science.gov (United States)

    Wang, Yulong; Cao, Libo; Bai, Zhonghao; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2016-09-06

The objective of this study is to develop a parametric ribcage model that can account for morphological variations among the adult population. Ribcage geometries, including 12 pairs of ribs, the sternum, and the thoracic spine, were collected from CT scans of 101 adult subjects through image segmentation, landmark identification (1016 landmarks for each subject), symmetry adjustment, and template mesh mapping (26,180 elements for each subject). Generalized procrustes analysis (GPA), principal component analysis (PCA), and regression analysis were used to develop a parametric ribcage model, which can predict nodal locations of the template mesh according to age, sex, height, and body mass index (BMI). Two regression models, a quadratic model for estimating the ribcage size and a linear model for estimating the ribcage shape, were developed. The results showed that the ribcage size was dominated by height (p=0.000) and the age-sex interaction (p=0.007), and that the ribcage shape was significantly affected by age (p=0.0005), sex (p=0.0002), height (p=0.0064) and BMI (p=0.0000). Along with proper assignment of cortical bone thickness, material properties and failure properties, this parametric ribcage model can directly serve as the mesh of finite element ribcage models for quantifying the effects of human characteristics on thoracic injury risks.
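The PCA-plus-regression pipeline can be sketched with synthetic data. The numbers of coordinates, the linear ground truth, and the covariate ranges below are invented for illustration (the real study uses 1016 landmarks, a 26,180-element mesh, GPA alignment, and symmetry adjustment, none of which are reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: n subjects, each a flattened vector of p coordinates.
n, p = 101, 50
covariates = np.column_stack([
    rng.uniform(20, 80, n),      # age (years)
    rng.integers(0, 2, n),       # sex (0/1)
    rng.uniform(1.5, 1.9, n),    # height (m)
    rng.uniform(18, 35, n),      # BMI
])
true_modes = rng.normal(size=(4, p))
shapes = covariates @ true_modes + 0.01 * rng.normal(size=(n, p))

# PCA via SVD on mean-centred shapes (GPA alignment assumed done).
mean_shape = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
k = 4                                        # retained principal components
scores = U[:, :k] * S[:k]

# Linear regression of PC scores on [1, age, sex, height, BMI].
X = np.column_stack([np.ones(n), covariates])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)

def predict_shape(age, sex, height, bmi):
    """Predict a coordinate vector for new covariate values."""
    pred_scores = np.array([1.0, age, sex, height, bmi]) @ beta
    return mean_shape + pred_scores @ Vt[:k]

pred = predict_shape(40, 1, 1.75, 25)
```

Because the synthetic shapes are linear in the covariates, the regression on PC scores recovers the geometry almost exactly; with real anatomies the retained components and residuals would of course be larger.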

  14. Improvement of the integration of Soil Moisture Accounting into the NRCS-CN model

    Science.gov (United States)

    Durán-Barroso, Pablo; González, Javier; Valdés, Juan B.

    2016-11-01

Rainfall-runoff quantification is one of the most important tasks in both engineering and watershed management, as it allows the identification, forecasting and explanation of the watershed response. This non-linear process depends on the watershed antecedent conditions, which are commonly related to the initial soil moisture content. Although several studies have highlighted the relevance of soil moisture measurements for improving flood modelling, the discussion about the approach to use in lumped models is still open in the literature. The integration of these antecedent conditions into the widely used NRCS-CN rainfall-runoff model (Natural Resources Conservation Service - Curve Number model) can be handled in two ways: using the Antecedent Precipitation Index (API) concept to modify the model parameter, or, alternatively, integrating a Soil Moisture Accounting (SMA) procedure into the NRCS-CN, with the soil moisture as a state variable. For this second option, the state variable does not have a direct physical representation, which makes the estimation of the initial soil moisture store level difficult. This paper presents a new formulation, a rainfall-runoff model called RSSa, that overcomes this issue. Its suitability is evaluated by comparing the RSSa model with the original NRCS-CN model and alternative SMA procedures in 12 watersheds located in six different countries, with different climatic conditions, from Mediterranean to semi-arid regions. The analysis shows that the new model, RSSa, performs better than previously proposed CN-based models. Finally, the influence of the soil moisture parameter for each watershed and the relative weight of scale effects on model parameterization are assessed.
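For reference, the classical NRCS-CN event equations that both the API- and SMA-based variants build on can be written out directly (a sketch of the standard formulas, not of the paper's RSSa model):

```python
def scs_runoff(p_mm, cn, lam=0.2):
    """Direct runoff Q (mm) from event rainfall P (mm) using the
    classical NRCS-CN equations (SI form):
        S  = 25400 / CN - 254        potential maximum retention (mm)
        Ia = lam * S                 initial abstraction (mm)
        Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

API-type approaches shift CN (and hence S) according to antecedent wetness before applying these equations, whereas SMA-type approaches, such as the RSSa model proposed here, carry a moisture store between events as a state variable.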

  15. Advances in stream shade modelling. Accounting for canopy overhang and off-centre view

    Science.gov (United States)

    Davies-Colley, R.; Meleason, M. A.; Rutherford, K.

    2005-05-01

    Riparian shade controls the stream thermal regime and light for photosynthesis of stream plants. The quantity difn (diffuse non-interceptance), defined as the proportion of incident lighting received under a sky of uniform brightness, is useful for general specification of stream light exposure, having the virtue that it can be measured directly with common light sensors of appropriate spatial and spectral character. A simple model (implemented in EXCEL-VBA) (Davies-Colley & Rutherford Ecol. Engrg in press) successfully reproduces the broad empirical trend of decreasing difn at the channel centre with increasing ratio of canopy height to stream width. We have now refined this model to account for (a) foliage overhanging the channel (for trees of different canopy form), and (b) off-centre view of the shade (rather than just the channel centre view). We use two extreme geometries bounding real (meandering) streams: the `canyon' model simulates an infinite straight canal, whereas the `cylinder' model simulates a stream meandering so tightly that its geometry collapses into an isolated pool in the forest. The model has been validated using a physical `rooftop' model of the cylinder case, with which it is possible to measure shade with different geometries.

  16. Accountability and pediatric physician-researchers: are theoretical models compatible with Canadian lived experience?

    Directory of Open Access Journals (Sweden)

    Czoli Christine

    2011-10-01

Full Text Available Abstract Physician-researchers are bound by professional obligations stemming from both the role of the physician and the role of the researcher. Currently, the dominant models for understanding the relationship between physician-researchers' clinical duties and research duties fit into three categories: the similarity position, the difference position and the middle ground. The law may be said to offer a fourth "model" that is independent from these three categories. These models frame the expectations placed upon physician-researchers by colleagues, regulators, patients and research participants. This paper examines the extent to which the data from semi-structured interviews with 30 physician-researchers at three major pediatric hospitals in Canada reflect these traditional models. It seeks to determine the extent to which existing models align with the described lived experience of the pediatric physician-researchers interviewed. Ultimately, we find that although some physician-researchers make references to something like the weak version of the similarity position, the physician-researchers interviewed in this study did not describe their dual roles in a way that tightly mirrors any of the existing theoretical frameworks. We thus conclude that either physician-researchers are in need of better training regarding the nature of the accountability relationships that flow from their dual roles or that models setting out these roles and relationships must be altered to better reflect what we can reasonably expect of physician-researchers in a real-world environment.

  17. A hybrid phenomenological model for ferroelectroelastic ceramics. Part II: Morphotropic PZT ceramics

    Science.gov (United States)

    Stark, S.; Neumeister, P.; Balke, H.

    2016-10-01

    In this part II of a two part series, the rate-independent hybrid phenomenological constitutive model introduced in part I is modified to account for the material behavior of morphotropic lead zirconate titanate ceramics (PZT ceramics). The modifications are based on a discussion of the available literature results regarding the micro-structure of these materials. In particular, a monoclinic phase and a highly simplified representation of the hierarchical structure of micro-domains and nano-domains observed experimentally are incorporated into the model. It is shown that experimental data for the commercially available morphotropic PZT material PIC151 (PI Ceramic GmbH, Lederhose, Germany) can be reproduced and predicted based on the modified hybrid model.

  18. SEBAL-A: A remote sensing ET algorithm that accounts for advection with limited data. Part II: Test for transferability

    Science.gov (United States)

    Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...

  19. Radiative transfer modeling through terrestrial atmosphere and ocean accounting for inelastic processes: Software package SCIATRAN

    Science.gov (United States)

    Rozanov, V. V.; Dinter, T.; Rozanov, A. V.; Wolanin, A.; Bracher, A.; Burrows, J. P.

    2017-06-01

SCIATRAN is a comprehensive software package designed to model radiative transfer processes in the terrestrial atmosphere and ocean in the spectral range from the ultraviolet to the thermal infrared (0.18-40 μm). It accounts for multiple scattering processes, polarization, thermal emission and ocean-atmosphere coupling. The main goal of this paper is to present a recently developed version of SCIATRAN which accurately takes into account inelastic radiative processes in both the atmosphere and the ocean. In the scalar version of the coupled ocean-atmosphere radiative transfer solver presented by Rozanov et al. [61] we have implemented the simulation of rotational Raman scattering, vibrational Raman scattering, chlorophyll and colored dissolved organic matter fluorescence. In this paper we discuss and explain the numerical methods used in SCIATRAN to solve the scalar radiative transfer equation including trans-spectral processes, and demonstrate how selected radiative transfer problems are solved using the SCIATRAN package. In addition we present selected comparisons of SCIATRAN simulations with published benchmark results, independent radiative transfer models, and various measurements from satellite, ground-based, and ship-borne instruments. The extended SCIATRAN software package, along with a detailed User's Guide, is made available for scientists and students, who are undertaking their own research typically at universities, via the web page of the Institute of Environmental Physics (IUP), University of Bremen: http://www.iup.physik.uni-bremen.de.

  20. Implementation of a cost-accounting model in a biobank: practical implications.

    Science.gov (United States)

    Gonzalez-Sanchez, Maria Beatriz; Lopez-Valeiras, Ernesto; García-Montero, Andres C

    2014-01-01

Given the state of the global economy, cost measurement and control have become increasingly relevant over the past years. The scarcity of resources and the need to use these resources more efficiently are making cost information essential in management, even in non-profit public institutions. Biobanks are no exception. However, no empirical experiences on the implementation of cost accounting in biobanks have been published to date. The aim of this paper is to present a step-by-step implementation of a cost-accounting tool for the main production and distribution activities of a real/active biobank, including a comprehensive explanation of how to perform the calculations carried out in this model. Two mathematical models for the analysis of (1) production costs and (2) request costs (order management and sample distribution) have stemmed from the analysis of the results of this implementation, and different theoretical scenarios have been prepared. The global analysis and discussion provide valuable information for internal biobank management and even for strategic decisions at the level of research and development governmental policies.

  1. An extended car-following model accounting for the average headway effect in intelligent transportation system

    Science.gov (United States)

    Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming

    2017-04-01

In this paper, an extended car-following model is proposed to simulate traffic flow by considering the average headway of the preceding vehicle group in an intelligent transportation systems environment. The stability condition of this model is obtained by using linear stability analysis. The phase diagram can be divided into three regions, classified as stable, metastable and unstable. The theoretical result shows that the average headway plays an important role in improving the stabilization of the traffic system. The mKdV equation near the critical point is derived to describe the evolution properties of traffic density waves by applying the reductive perturbation method. Furthermore, the simulation of the space-time evolution of the vehicle headway shows that traffic jams can be suppressed efficiently by taking the average headway effect into account, and the analytical result is consistent with the simulation.
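The extension can be illustrated with a small ring-road simulation in which each vehicle relaxes toward the optimal velocity of the average headway over its m preceding vehicles. The tanh optimal-velocity function and all parameter values below are illustrative choices common in this literature, not the paper's calibration:

```python
import numpy as np

def ov(h, vmax=2.0, hc=4.0):
    """Optimal-velocity function (tanh form common in OV-type models)."""
    return 0.5 * vmax * (np.tanh(h - hc) + np.tanh(hc))

def simulate(n=20, road=80.0, a=3.0, m=3, steps=4000, dt=0.05):
    """Cars on a ring road; each relaxes at sensitivity a toward the
    optimal velocity of the AVERAGE headway of its m preceding gaps."""
    x = np.arange(n) * road / n
    x[0] += 0.5                              # perturb the uniform flow
    v = np.full(n, ov(road / n))
    for _ in range(steps):
        h = (np.roll(x, -1) - x) % road      # headway to the car ahead
        h_avg = np.mean([np.roll(h, -j) for j in range(m)], axis=0)
        v += dt * a * (ov(h_avg) - v)
        x = (x + dt * v) % road
    return (np.roll(x, -1) - x) % road       # final headways

h_final = simulate()
```

With a sensitivity above the linear stability threshold, the perturbation decays and the headways return to the uniform value, which mirrors the stabilizing role the abstract attributes to the average-headway term.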

  2. Accounting for the kinetics in order parameter analysis: lessons from theoretical models and a disordered peptide

    CERN Document Server

    Berezovska, Ganna; Mostarda, Stefano; Rao, Francesco

    2012-01-01

Molecular simulations as well as single molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such a description is not accurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, the clusterization of which into states provides accurate Markov-State-Models of the system under study. Application of the methodology to theoretical models with a noisy orde...

  3. Water accounting for stressed river basins based on water resources management models.

    Science.gov (United States)

    Pedro-Monzonís, María; Solera, Abel; Ferrer, Javier; Andreu, Joaquín; Estrela, Teodoro

    2016-09-15

Water planning and Integrated Water Resources Management (IWRM) represent the best way to help decision makers identify and choose the most adequate alternatives among the possible ones. The System of Environmental-Economic Accounting for Water (SEEA-W) is presented as a tool for building water balances in a river basin, providing a standard approach to achieve comparability of results between different territories. The aim of this paper is to present the development of a tool that enables the combined use of hydrological models and water resources models to fill in the SEEA-W tables. At every step of the modelling chain, we are able to build the asset accounts and the physical water supply and use tables according to the SEEA-W approach, along with an estimation of the water service costs. The case study is the Jucar River Basin District (RBD), located in the eastern part of the Iberian Peninsula in Spain, which, like many other Mediterranean basins, is currently water-stressed. To guide this work we have used the PATRICAL model in combination with the AQUATOOL Decision Support System (DSS). The results indicate that for the average year the total use of water in the district amounts to 15,143 hm(3)/year, while the total renewable water resources are 3909 hm(3)/year. On the other hand, the water service costs in the Jucar RBD amount to 1634 million € per year at constant 2012 prices. It is noteworthy that 9% of these costs correspond to non-conventional resources, such as desalinated water, reused water and water transferred from other regions.
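The arithmetic behind a SEEA-W style asset account is a simple stock balance, which can be sketched as follows (a minimal illustration of the table logic with toy numbers, not the full SEEA-W standard or the Jucar figures):

```python
def asset_account(opening, precipitation, inflows, evapotranspiration,
                  outflows, abstraction, returns):
    """Closing stock of a SEEA-W style water asset account (e.g. hm3/year):
    closing = opening + precipitation + inflows + returns
              - evapotranspiration - outflows - abstraction."""
    return (opening + precipitation + inflows + returns
            - evapotranspiration - outflows - abstraction)

# Toy basin: 1000 opening stock, 500 precipitation, 200 inflows,
# 40 returns, 300 evapotranspiration, 150 outflows, 120 abstraction.
closing = asset_account(1000, 500, 200, 300, 150, 120, 40)
```

In the paper's workflow, the hydrological model (PATRICAL) supplies the natural flows and the water resources model (AQUATOOL) supplies use, abstraction and return terms for each table entry.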

  4. A Model for Urban Environment and Resource Planning Based on Green GDP Accounting System

    Directory of Open Access Journals (Sweden)

    Linyu Xu

    2013-01-01

Full Text Available The urban environment and resources are currently on a course that is unsustainable in the long run due to excessive human pursuit of economic goals. Thus, it is very important to develop a model to analyse the relationship between urban economic development and environmental resource protection during the process of rapid urbanisation. This paper proposes a model to identify the key factors in urban environment and resource regulation based on a green GDP accounting system, which consists of four parts: economy, society, resource, and environment. In this model, the analytic hierarchy process (AHP) method and a modified Pearl curve model were combined to allow for dynamic evaluation, with a higher green GDP value as the planning target. The model was applied to the environmental and resource planning problem of Wuyishan City, and the results showed that energy use was a key factor influencing urban environment and resource development. Biodiversity and air quality were the most sensitive factors influencing the value of green GDP in the city. According to the analysis, urban environment and resource planning could be improved to promote sustainable development in Wuyishan City.

  5. A model proposal concerning balance scorecard application integrated with resource consumption accounting in enterprise performance management

    Directory of Open Access Journals (Sweden)

    ORHAN ELMACI

    2014-06-01

Full Text Available The present study investigates the Balance Scorecard (BSC) model integrated with Resource Consumption Accounting (RCA), which helps to evaluate the enterprise as a matrix structure in all its parts. It aims to measure how much the tangible and intangible values (assets) of enterprises contribute to the enterprises; in other words, it measures how effectively, actively, and efficiently these values (assets) are used. In short, it aims to measure the sustainable competency of enterprises. As expressing the effect of the enterprise's tangible and intangible values (assets) on performance with mathematical and statistical methods alone is insufficient, the RCA method integrated with the BSC model is based on matrix structure and control models. The effects of all complex factors in the enterprise on performance (productivity and efficiency) are estimated algorithmically with a cause-and-effect diagram. The contribution of matrix structures to reaching the management's functional targets in enterprises that operate in an increasingly competitive market environment is discussed. Thus, in the context of modern management theories, and as a contribution to the BSC approach which is in the foreground of today's administrative science for enterprises with matrix organizational structures, a multidimensional performance evaluation model, an RCA-integrated BSC model proposal, is presented as a strategic planning and strategic evaluation instrument.

  6. Pointing, looking at, and pressing keys: A diffusion model account of response modality.

    Science.gov (United States)

    Gomez, Pablo; Ratcliff, Roger; Childers, Russ

    2015-12-01

Accumulation of evidence models of perceptual decision making have been able to account for data from a wide range of domains at an impressive level of precision. In particular, Ratcliff's (1978) diffusion model has been used across many different 2-choice tasks in which the response is executed via a key-press. In this article, we present 2 experiments in which we used a letter-discrimination task exploring 3 central aspects of a 2-choice task: the discriminability of the stimulus, the modality of the response execution (eye movement, key pressing, and pointing on a touchscreen), and the mapping of the response areas for the eye movement and the touchscreen conditions (consistent vs. inconsistent). We fitted the diffusion model to the data from these experiments and examined the behavior of the model's parameters. Fits of the model were consistent with the hypothesis that the same decision mechanism is used in the task with 3 different response methods. Drift rates are affected by the duration of the presentation of the stimulus, while the response execution time changes as a function of the response modality.
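The diffusion mechanism itself can be sketched by direct simulation. The parameter values below are generic illustrations, and real Ratcliff-model fits include across-trial variability parameters not reproduced here:

```python
import numpy as np

def diffusion_trials(v=1.0, a=2.0, z=1.0, s=1.0, ter=0.3,
                     n=4000, dt=1e-3, seed=1):
    """Simulate n trials of a Wiener diffusion: evidence starts at z and
    drifts at rate v with noise scale s until it reaches 0 or a; ter is
    the non-decision time (encoding plus response execution, the
    component expected to vary with response modality)."""
    rng = np.random.default_rng(seed)
    x = np.full(n, z)
    t = np.zeros(n)
    active = np.ones(n, dtype=bool)
    while active.any():
        k = int(active.sum())
        x[active] += v * dt + s * np.sqrt(dt) * rng.standard_normal(k)
        t[active] += dt
        active &= (x > 0.0) & (x < a)
    return t + ter, x >= a

rts, upper = diffusion_trials()
accuracy = float(upper.mean())
```

As a sanity check, the closed-form hit probability (1 - exp(-2vz/s²)) / (1 - exp(-2va/s²)) is about 0.88 for these values, and the simulated accuracy should land close to it.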

  7. Models of class II methanol masers based on improved molecular data

    CERN Document Server

    Cragg, D M; Godfrey, P D

    2005-01-01

    The class II masers of methanol are associated with the early stages of formation of high-mass stars. Modelling of these dense, dusty environments has demonstrated that pumping by infrared radiation can account for the observed masers. Collisions with other molecules in the ambient gas also play a significant role, but have not been well modelled in the past. Here we examine the effects on the maser models of newly available collision rate coefficients for methanol. The new collision data does not alter which transitions become masers in the models, but does influence their brightness and the conditions under which they switch on and off. At gas temperatures above 100 K the effects are broadly consistent with a reduction in the overall collision cross-section. This means, for example, that a slightly higher gas density than identified previously can account for most of the observed masers in W3(OH). We have also examined the effects of including more excited state energy levels in the models, and find that th...

  8. An agent-based simulation model to study accountable care organizations.

    Science.gov (United States)

    Liu, Pai; Wu, Shinyi

    2016-03-01

    Creating accountable care organizations (ACOs) has been widely discussed as a strategy to control rapidly rising healthcare costs and improve quality of care; however, building an effective ACO is a complex process involving multiple stakeholders (payers, providers, patients) with their own interests. Also, implementation of an ACO is costly in terms of time and money. Immature design could cause safety hazards. Therefore, there is a need for analytical model-based decision-support tools that can predict the outcomes of different strategies to facilitate ACO design and implementation. In this study, an agent-based simulation model was developed to study ACOs that considers payers, healthcare providers, and patients as agents under the shared saving payment model of care for congestive heart failure (CHF), one of the most expensive causes of sometimes preventable hospitalizations. The agent-based simulation model has identified the critical determinants for the payment model design that can motivate provider behavior changes to achieve maximum financial and quality outcomes of an ACO. The results show nonlinear provider behavior change patterns corresponding to changes in payment model designs. The outcomes vary by providers with different quality or financial priorities, and are most sensitive to the cost-effectiveness of CHF interventions that an ACO implements. This study demonstrates an increasingly important method to construct a healthcare system analytics model that can help inform health policy and healthcare management decisions. The study also points out that the likely success of an ACO is interdependent with payment model design, provider characteristics, and cost and effectiveness of healthcare interventions.

  9. Air quality modeling for accountability research: Operational, dynamic, and diagnostic evaluation

    Science.gov (United States)

    Henneman, Lucas R. F.; Liu, Cong; Hu, Yongtao; Mulholland, James A.; Russell, Armistead G.

    2017-10-01

Photochemical grid models play a central role in air quality regulatory frameworks, including in air pollution accountability research, which seeks to demonstrate the extent to which regulations causally impacted emissions, air quality, and public health. There is a need, however, to develop and demonstrate appropriate practices for model application and evaluation in an accountability framework. We employ a combination of traditional and novel evaluation techniques to assess four years (2001-02, 2011-12) of simulated pollutant concentrations across a decade of major emissions reductions using the Community Multiscale Air Quality (CMAQ) model. We have grouped our assessments into three categories: operational evaluation investigates how well CMAQ captures absolute concentrations; dynamic evaluation investigates how well CMAQ captures changes in concentrations across the decade of changing emissions; diagnostic evaluation investigates how CMAQ attributes variability in concentrations and sensitivities to emissions between meteorology and emissions, and how well this attribution compares to empirical statistical models. In this application, CMAQ captures O3 and PM2.5 concentrations and their change over the decade in the Eastern United States similarly to past CMAQ applications and in line with model evaluation guidance; however, some PM2.5 species (EC, OC, and sulfate in particular) exhibit high biases in various months. CMAQ-simulated PM2.5 has a high bias in winter months and a low bias in the summer, mainly due to a high bias in OC during the cold months and low biases in OC and sulfate during the summer. Simulated O3 and PM2.5 changes across the decade have normalized mean biases of less than 2.5% and 17%, respectively. Detailed comparisons suggest biased EC emissions, negative wintertime SO4(2-) sensitivities to mobile source emissions, and incomplete capture of OC chemistry in the summer and winter. Photochemical grid model-simulated O3 and PM2.5 responses to emissions and
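Two of the standard operational-evaluation statistics used in model evaluation of this kind, as commonly defined (a sketch; the paper reports additional metrics):

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(M - O) / sum(O): signed, sum-normalized bias of modeled
    concentrations M against paired observations O."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float((model - obs).sum() / obs.sum())

def normalized_mean_error(model, obs):
    """NME = sum(|M - O|) / sum(O): unsigned counterpart of NMB."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(np.abs(model - obs).sum() / obs.sum())
```

A paired over- and under-prediction cancels in NMB but not in NME, which is why both are usually reported together.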

  10. Enterprise marketing potential modeling taking into account optimizing and dynamic essence of the potential

    Directory of Open Access Journals (Sweden)

    Potrashkova Lyudmyla Vladimirovna

    2014-12-01

consequent enumeration. At the same time, a constrained optimization block is a constituent part of the models system. Conclusions and directions of further research. The suggested system of simulation and optimization models for result-based estimation of the marketing potential of a b2b enterprise has the following advantages: it corresponds to the optimizing essence of potential, takes the dynamics of marketing resources into account, and allows estimation in view of the hierarchic levels of enterprise potential. The suggested models system is an instrument for estimating and analyzing the enterprise's future sales and marketing abilities; comparing these with its production and financial abilities will make it possible to identify bottlenecks in the analyzed enterprise's activity and to increase its general potential. The given models system is part of the mathematical support for managing the future abilities of the enterprise. Further investigations in this research area should be oriented toward building models of enterprise marketing potential estimation within an integral system for estimating the enterprise's overall potential.

  11. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    Energy Technology Data Exchange (ETDEWEB)

    Oboh, I., E-mail: innocentoboh@uniuyo.edu.ng [Department of Chemical and Petroleum Engineering, University of Uyo, Uyo (Nigeria); Aluyor, E.; Audu, T. [Department of Chemical Engineering, University of Uyo, Benin City (Nigeria)

    2015-03-30

    The biosorption of zinc (II) ions onto a biomaterial, Luffa cylindrica, has been studied. The biomaterial was characterized by elemental analysis, surface area, pore size distribution and scanning electron microscopy, and, before and after sorption, by Fourier transform infrared (FTIR) spectrometry. The nonlinear kinetic models fitted were pseudo-first order, pseudo-second order and intra-particle diffusion. Non-linear regression methods were compared for selecting the kinetic model. Four error functions, namely the coefficient of determination (R{sup 2}), hybrid fractional error function (HYBRID), average relative error (ARE) and sum of squared errors (ERRSQ), were used to estimate the parameters of the kinetic models. The strength of this study is that a biomaterial with a wide distribution, particularly in the tropical world, and which occurs as a waste material, could be put to effective use as a biosorbent to address a crucial environmental problem.
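
The pseudo-first-order and pseudo-second-order forms named in the abstract can be fitted by non-linear regression under the ERRSQ criterion. A minimal sketch with hypothetical uptake data (the study's own data and fitted constants are not reproduced here); a coarse grid search stands in for a proper least-squares solver:

```python
import math

t = [5, 10, 20, 40, 60, 90]            # contact time, min (hypothetical)
qt = [3.1, 4.8, 6.5, 7.6, 7.9, 8.0]    # Zn(II) uptake q(t), mg/g (hypothetical)

def pfo(ti, qe, k1):
    """Pseudo-first order: q(t) = qe * (1 - exp(-k1 * t))."""
    return qe * (1 - math.exp(-k1 * ti))

def pso(ti, qe, k2):
    """Pseudo-second order: q(t) = qe^2 * k2 * t / (1 + qe * k2 * t)."""
    return qe * qe * k2 * ti / (1 + qe * k2 * ti)

def errsq(model, qe, k):
    """Sum of squared errors between model and data (the ERRSQ criterion)."""
    return sum((model(ti, qe, k) - qi) ** 2 for ti, qi in zip(t, qt))

def grid_fit(model, qe_grid, k_grid):
    """Coarse grid search standing in for a non-linear least-squares solver."""
    return min((errsq(model, qe, k), qe, k) for qe in qe_grid for k in k_grid)

qe_grid = [q / 10 for q in range(60, 121)]   # 6.0 .. 12.0 mg/g
k_grid = [k / 1000 for k in range(1, 200)]   # 0.001 .. 0.199
best_pfo = grid_fit(pfo, qe_grid, k_grid)
best_pso = grid_fit(pso, qe_grid, k_grid)
print(best_pfo[:2], best_pso[:2])   # (ERRSQ, qe) for each model
```

Whichever model attains the smaller error-function value is preferred; in practice all four error functions from the abstract would be compared, not ERRSQ alone.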

  12. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Science.gov (United States)

    Mastin, Larry G.; Van Eaton, Alexa; Durant, A.J.

    2016-01-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16–17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m−3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ∼2.3 and 2.7φ (0.20–0.15 mm), despite large variations in erupted mass (0.25–50 Tg), plume height (8.5–25 km), mass fraction of fine ( operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.

  13. Adjusting particle-size distributions to account for aggregation in tephra-deposit model forecasts

    Science.gov (United States)

    Mastin, Larry G.; Van Eaton, Alexa R.; Durant, Adam J.

    2016-07-01

    Volcanic ash transport and dispersion (VATD) models are used to forecast tephra deposition during volcanic eruptions. Model accuracy is limited by the fact that fine ash aggregates (clumps into clusters), thus altering patterns of deposition. In most models this is accounted for by ad hoc changes to model input, representing fine ash as aggregates with density ρagg, and a log-normal size distribution with median μagg and standard deviation σagg. Optimal values may vary between eruptions. To test the variance, we used the Ash3d tephra model to simulate four deposits: 18 May 1980 Mount St. Helens; 16-17 September 1992 Crater Peak (Mount Spurr); 17 June 1996 Ruapehu; and 23 March 2009 Mount Redoubt. In 192 simulations, we systematically varied μagg and σagg, holding ρagg constant at 600 kg m-3. We evaluated the fit using three indices that compare modeled versus measured (1) mass load at sample locations; (2) mass load versus distance along the dispersal axis; and (3) isomass area. For all deposits, under these inputs, the best-fit value of μagg ranged narrowly between ˜2.3 and 2.7φ (0.20-0.15 mm), despite large variations in erupted mass (0.25-50 Tg), plume height (8.5-25 km), mass fraction of fine ( water content between these eruptions. This close agreement suggests that aggregation may be treated as a discrete process that is insensitive to eruptive style or magnitude. This result offers the potential for a simple, computationally efficient parameterization scheme for use in operational model forecasts. Further research may indicate whether this narrow range also reflects physical constraints on processes in the evolving cloud.
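
The aggregate input described in both records, a log-normal size distribution specified by μagg and σagg in φ units, can be sketched as follows; σagg here is an illustrative assumption, not a fitted value from the paper:

```python
import math

def phi_to_mm(phi):
    """phi = -log2(d / 1 mm), so d = 2**(-phi) mm."""
    return 2.0 ** (-phi)

def pdf_phi(phi, mu, sigma):
    """Gaussian in phi space, i.e. log-normal in particle diameter."""
    z = (phi - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

mu_agg = 2.4      # within the best-fit 2.3-2.7 phi range from the abstract
sigma_agg = 0.5   # illustrative assumption, not the paper's value

# Mass fractions on a coarse phi grid, normalized over the grid
grid = [mu_agg + 0.25 * i for i in range(-8, 9)]
w = [pdf_phi(p, mu_agg, sigma_agg) for p in grid]
frac = [wi / sum(w) for wi in w]

print(round(phi_to_mm(mu_agg), 3))   # 2.4 phi is about 0.189 mm
```

A VATD run would then assign each fraction to a settling-velocity bin with the constant aggregate density ρagg.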

  14. Material Protection, Accounting, and Control Technologies (MPACT): Modeling and Simulation Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Cipiti, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dunn, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Durbin, Samual [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Durkee, Joe W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); England, Jeff [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jones, Robert [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Ketusky, Edward [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Li, Shelly [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lindgren, Eric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Meier, David [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Miller, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Osburn, Laura Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pereira, Candido [Argonne National Lab. (ANL), Argonne, IL (United States); Rauch, Eric Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Scaglione, John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Scherer, Carolynn P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sprinkle, James K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yoo, Tae-Sic [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-05

    The development of sustainable advanced nuclear fuel cycles is a long-term goal of the Office of Nuclear Energy’s (DOE-NE) Fuel Cycle Technologies program. The Material Protection, Accounting, and Control Technologies (MPACT) campaign is supporting research and development (R&D) of advanced instrumentation, analysis tools, and integration methodologies to meet this goal. This advanced R&D is intended to facilitate safeguards and security by design of fuel cycle facilities. The lab-scale demonstration of a virtual facility, distributed test bed, that connects the individual tools being developed at National Laboratories and university research establishments, is a key program milestone for 2020. These tools will consist of instrumentation and devices as well as computer software for modeling. To aid in framing its long-term goal, during FY16, a modeling and simulation roadmap is being developed for three major areas of investigation: (1) radiation transport and sensors, (2) process and chemical models, and (3) shock physics and assessments. For each area, current modeling approaches are described, and gaps and needs are identified.

  15. Accounting for exhaust gas transport dynamics in instantaneous emission models via smooth transition regression.

    Science.gov (United States)

    Kamarianakis, Yiannis; Gao, H Oliver

    2010-02-15

    Collecting and analyzing high-frequency emission measurements has become common during the past decade, as significantly more information about formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application using data collected from a diesel bus under real-world driving conditions.
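
The core idea, a transport delay that varies smoothly with exhaust flow rate through a logistic transition function, can be sketched as follows. All coefficient values are hypothetical placeholders, not the paper's estimates:

```python
import math

def transport_delay(flow, d_min=1.0, d_max=6.0, gamma=0.15, c=40.0):
    """Delay (s) falls smoothly from d_max at low flow to d_min at high flow."""
    g = 1.0 / (1.0 + math.exp(-gamma * (flow - c)))   # logistic transition
    return d_max + (d_min - d_max) * g

def align(tailpipe, engine, flow, hz=1):
    """Pair each tailpipe sample with the engine sample shifted back by the
    flow-dependent delay, rounded to the sampling rate."""
    out = []
    for i, y in enumerate(tailpipe):
        lag = round(transport_delay(flow[i]) * hz)
        out.append((y, engine[max(0, i - lag)]))
    return out

# Delay shrinks as exhaust flow rate grows
print(round(transport_delay(10.0), 2), round(transport_delay(80.0), 2))
```

In the smooth transition regression setting, the transition coefficients (here `gamma` and `c`) would themselves be estimated by least squares jointly with the regression.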

  16. Can the forward-shock model account for the multiwavelength emission of GRB afterglow 090510 ?

    CERN Document Server

    Neamus, Ano

    2010-01-01

    GRB 090510 is the first burst whose afterglow emission above 100 MeV was measured by Fermi over two decades in time. Owing to its power-law temporal decay and power-law spectrum, it seems likely that the high-energy emission is from the forward-shock energizing the ambient medium (the standard blast-wave model for GRB afterglows), the GeV flux and its decay rate being consistent with that model's expectations. However, the synchrotron emission from a collimated outflow (the standard jet model) has difficulties in accounting for the lower-energy afterglow emission, where a simultaneous break occurs in the optical and X-ray light-curves at 2 ks, but with the optical flux decay (before and after the break) being much slower than in the X-rays (at same time). The measured X-ray and GeV fluxes are incompatible with the higher-energy afterglow emission being from same spectral component as the lower-energy afterglow emission, which suggests a synchrotron self-Compton model for this afterglow. Cessation of energy in...

  17. Integrated Approach Model of Risk, Control and Auditing of Accounting Information Systems

    Directory of Open Access Journals (Sweden)

    Claudiu BRANDAS

    2013-01-01

    Full Text Available The use of IT in the financial and accounting processes is growing fast, and this leads to an increase in research and professional concern about the risks, control and audit of Accounting Information Systems (AIS). In this context, the risk and control of AIS approach is a central component of processes for IT audit, financial audit and IT governance. Recent studies in the literature on the concepts of risk, control and auditing of AIS outline two approaches: (1) a professional approach comprising ISA, COBIT, IT Risk, COSO and SOX, and (2) a research-oriented approach emphasizing research on continuous auditing and fraud using information technology. Starting from the limits of existing approaches, our study aims to develop and test an Integrated Approach Model of Risk, Control and Auditing of AIS across three business process cycles: the purchases cycle, sales cycle and cash cycle, in order to improve the efficiency of IT governance, as well as to ensure the integrity, reality, accuracy and availability of financial statements.

  18. Statistical approaches to account for missing values in accelerometer data: Applications to modeling physical activity.

    Science.gov (United States)

    Xu, Selene Yue; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki

    2016-07-10

    Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance-weighting algorithms that account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve analysis of accelerometer data.
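
The two simple tools the abstract recommends, subject-level imputation and variance weighting, can be sketched on hypothetical minute-level accelerometer counts (with None marking non-wear):

```python
# Sketch of subject-level mean imputation and wear-fraction weighting.
# Data are hypothetical activity counts; None marks device non-wear.

def impute_subject_mean(days):
    """Replace missing minutes with the subject's overall observed mean."""
    observed = [x for day in days for x in day if x is not None]
    mean = sum(observed) / len(observed)
    return [[mean if x is None else x for x in day] for day in days]

def wear_weights(days):
    """Weight each day by the fraction of minutes actually worn, so that
    sparsely observed days contribute less to a weighted regression."""
    return [sum(x is not None for x in day) / len(day) for day in days]

days = [[10, 12, None, 14], [None, None, 9, 11], [13, 12, 11, None]]
print(wear_weights(days))   # [0.75, 0.5, 0.75]
```

In the paper's setting these weights would feed a weighted regression of activity on covariates; the sketch only shows the two building blocks.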

  19. Fe K alpha and hydrodynamic loop model diagnostics for a large flare on II Peg

    CERN Document Server

    Ercolano, Barbara; Reale, Fabio; Testa, Paola; Miller, Jon M

    2008-01-01

    The observation by the Swift X-ray Telescope of the Fe K alpha_1, alpha_2 doublet during a large flare on the RS CVn binary system II Peg represents one of only two firm detections to date of photospheric Fe K alpha from a star other than our Sun. We present models of the Fe K alpha equivalent widths reported in the literature for the II Peg observations and show that they are most probably due to fluorescence following inner-shell photoionisation of quasi-neutral Fe by the flare X-rays. Our models constrain the maximum height of the flare to 0.15 R_* assuming solar abundances for the photospheric material, and to 0.1 R_* and 0.06 R_* assuming depleted photospheric abundances ([M/H]=-0.2 and [M/H]=-0.4, respectively). Accounting for an extended loop geometry has the effect of increasing the estimated flare heights by a factor of ~3. These predictions are consistent with those derived using results of flaring loop models, which are also used to estimate the flaring loop properties and energetics. From loop models...

  20. [Application of multilevel models in the evaluation of bioequivalence (II).].

    Science.gov (United States)

    Liu, Qiao-lan; Shen, Zhuo-zhi; Li, Xiao-song; Chen, Feng; Yang, Min

    2010-03-01

    The main purpose of this paper is to explore the applicability of multivariate multilevel models for bioequivalence evaluation. Using an example of a 4 x 4 cross-over test design in evaluating bioequivalence of homemade and imported rosiglitazone maleate tablets, this paper illustrated the multivariate-model-based method for partitioning total variances of ln(AUC) and ln(C(max)) in the framework of multilevel models. It examined the feasibility of multivariate multilevel models in directly evaluating average bioequivalence (ABE), population bioequivalence (PBE) and individual bioequivalence (IBE). Taking into account the correlation between ln(AUC) and ln(C(max)) of rosiglitazone maleate tablets, the proposed models suggested no statistical difference between the two effect measures in their ABE bioequivalence via joint tests, whilst a contradictory conclusion was derived based on univariate multilevel models. Furthermore, the PBE and IBE for both ln(AUC) and ln(C(max)) of the two types of tablets were assessed with no statistical difference based on estimates of variance components from the proposed models. Multivariate multilevel models can be used to analyze bioequivalence of multiple effect measures simultaneously, and they provide a new way of statistical analysis to evaluate bioequivalence.

  1. Counterparty risk analysis using Merton's structural model under Solvency II

    Directory of Open Access Journals (Sweden)

    Luis Otero González

    2014-12-01

    Full Text Available The new solvency regulation in the European insurance sector, known as Solvency II, will completely transform the system for estimating capital requirements. The latest quantitative impact study (QIS5) provides the calculation method in the internal model for determining capital requirements. The aim of this paper is to analyze the adequacy of the calibration of counterparty credit risk by the models proposed in the recent quantitative impact reports (fourth and fifth). To do this, we compare the capital requirements obtained by the two alternatives against those resulting from a simulation model based on the structural approach. The results show that using default probabilities based on Merton's methodology, which can be used in an internal model, rather than those based on ratings (standard model), yields substantially higher capital requirements. In addition, the model proposed in QIS4, based on the Vasicek distribution, is not appropriate when the number of counterparties is small, a common situation in the European insurance sector. The new proposal (QIS5, or the Ter Berg model) is more versatile and suitable than its predecessor, but requires further research to improve the calibration hypotheses and thus better approximate the risk actually assumed.
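
The structural approach referenced above derives a counterparty's default probability from its asset-value dynamics. A minimal sketch of Merton's model with illustrative inputs (not QIS calibrations):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_pd(V, D, mu, sigma, T):
    """Merton default probability P(V_T < D): asset value V follows geometric
    Brownian motion with drift mu and volatility sigma; D is debt face value."""
    d2 = (math.log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(-d2)

# Illustrative counterparty: assets 100, debt 70, 1-year horizon
pd = merton_pd(V=100.0, D=70.0, mu=0.05, sigma=0.25, T=1.0)
print(0.0 < pd < 0.1)   # True: a moderately capitalized counterparty
```

In an internal model, such probabilities would feed the loss simulation in place of rating-based defaults; the abstract's point is that the two routes can give materially different capital figures.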

  2. Identifying Opportunities to Reduce Uncertainty in a National-Scale Forest Carbon Accounting Model

    Science.gov (United States)

    Shaw, C. H.; Metsaranta, J. M.; Kurz, W.; Hilger, A.

    2013-12-01

    Assessing the quality of forest carbon budget models used for national and international reporting of greenhouse gas emissions is essential, but model evaluations are rarely conducted mainly because of lack of appropriate, independent ground plot data sets. Ecosystem carbon stocks for all major pools estimated from data collected for 696 ground plots from Canada's new National Forest Inventory (NFI) were used to assess plot-level carbon stocks predicted by the Carbon Budget Model of the Canadian Forest Sector 3 (CBM-CFS3) -- a model compliant with the most complex (Tier-3) approach in the reporting guidelines of the Intergovernmental Panel on Climate Change. The model is the core of Canada's National Forest Carbon Monitoring, Accounting, and Reporting System. At the landscape scale, a major portion of total uncertainty in both C stock and flux estimation is associated with biomass productivity, turnover, and soil and dead organic matter modelling parameters, which can best be further evaluated using plot-level data. Because the data collected for the ground plots were comprehensive we were able to compare carbon stock estimates for 13 pools also estimated by the CBM-CFS3 (all modelled pools excepting coarse and fine root biomass) using the classical comparison statistics of mean difference and correlation. Using a Monte Carlo approach we were able to determine the contribution of aboveground biomass, deadwood and soil pool error to modeled ecosystem total error, as well as the contribution of pools that are summed to estimate aboveground biomass, deadwood and soil, to the error of these three subtotal pools. We were also able to assess potential sources of error propagation in the computational sequence of the CBM-CFS3. Analysis of the data grouped by the 16 dominant tree species allowed us to isolate the leading species where further research would lead to the greatest reductions in uncertainty for modeling of carbon stocks using the CBM-CFS3. This analysis
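
The Monte Carlo error-contribution idea can be illustrated in miniature: perturb each pool within an assumed error distribution and examine the variance of the ecosystem total. Pool means and errors below are hypothetical, not NFI or CBM-CFS3 values:

```python
import random

random.seed(1)

# (mean, standard error) per pool, t C/ha -- hypothetical illustrative values
pools = {"aboveground biomass": (60.0, 8.0),
         "deadwood": (12.0, 4.0),
         "soil": (90.0, 15.0)}

def total_variance(pools, n=20000):
    """Monte Carlo variance of the ecosystem total under independent pool errors."""
    draws = [sum(random.gauss(mu, sd) for mu, sd in pools.values())
             for _ in range(n)]
    m = sum(draws) / n
    return sum((d - m) ** 2 for d in draws) / (n - 1)

# Analytic contribution of each pool when errors are independent: sd^2
contrib = {name: sd ** 2 for name, (mu, sd) in pools.items()}

v_total = total_variance(pools)
# With independent pools the variances add: 8^2 + 4^2 + 15^2 = 305
print(round(v_total, 1))
```

In the actual analysis the pools are correlated through the model's computational sequence, which is exactly why the Monte Carlo decomposition is needed rather than the additive shortcut.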

  3. ACCOUNTING HARMONIZATION AND HISTORICAL COST ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Valentin Gabriel CRISTEA

    2017-05-01

    Full Text Available There is great interest in accounting harmonization and historical cost accounting, and in what they offer us. In this article, different valuation models are discussed. Although one notices the movement from historical cost accounting to fair value accounting, each one has its advantages.

  4. REGRESSION MODEL FOR RISK REPORTING IN FINANCIAL STATEMENTS OF ACCOUNTING SERVICES ENTITIES

    Directory of Open Access Journals (Sweden)

    Mirela NICHITA

    2015-06-01

    Full Text Available The purpose of financial reports is to provide useful information to users; the utility of information is defined through the qualitative characteristics (fundamental and enhancing). The financial crisis emphasized the limits of financial reporting, which was unable to warn investors about the risks they were facing. Due to current changes in the business environment, managers have been highly motivated to rethink and improve the risk governance philosophy, processes and methodologies. The lack of quality, timely data and adequate systems to capture, report and measure the right information across the organization is a fundamental challenge for implementing and sustaining all aspects of effective risk management. Since the 1980s, investors have been more interested in narratives (Notes to financial statements) than in primary reports (financial position and performance). The research applies a regression model to assess risk reporting by professional (accounting and taxation) services providers for major companies from Romania during the period 2009 – 2013.

  5. A Thermodamage Strength Theoretical Model of Ceramic Materials Taking into Account the Effect of Residual Stress

    Directory of Open Access Journals (Sweden)

    Weiguo Li

    2012-01-01

    Full Text Available A thermodamage strength theoretical model taking into account the effect of residual stress was established and applied to each temperature phase, based on a study of the effects of various physical mechanisms on the fracture strength of ultrahigh-temperature ceramics. The effects of SiC particle size, crack size, and SiC particle volume fraction on strength at different temperatures were studied in detail. The study showed that when the flaw size is not large, a bigger SiC particle size results in a greater strength reduction from tensile residual stress in the matrix grains, a prediction that coincides with experimental results; the residual stress and the combined effect of particle size and crack size play important roles in controlling material strength.

  6. Taking individual scaling differences into account by analyzing profile data with the Mixed Assessor Model

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Schlich, Pascal; Skovgaard, Ib

    2015-01-01

    are deduced that include the scaling difference in the error term to the proper extent. A meta-study of 8619 sensory attributes from 369 sensory profile data sets from SensoBase (www.sensobase.fr) is conducted. In 45.3% of all attributes scaling heterogeneity is present (P-value .... A mixed model (the Mixed Assessor Model) is proposed that properly takes this into account by a simple inclusion of the product averages as a covariate in the modeling, allowing the covariate regression coefficients to depend on the assessor. This gives a more powerful analysis by removing the scaling difference from the error term, and proper confidence limits. For the ...9% of the attributes having a product difference P-value in an intermediate range by the traditional approach, the new approach resulted in a clearly more significant result for 42.3% of these cases. Overall, the new approach claimed significant product difference (P-value ...

  7. REGRESSION MODEL FOR RISK REPORTING IN FINANCIAL STATEMENTS OF ACCOUNTING SERVICES ENTITIES

    Directory of Open Access Journals (Sweden)

    Mirela NICHITA

    2015-06-01

    Full Text Available The purpose of financial reports is to provide useful information to users; the utility of information is defined through the qualitative characteristics (fundamental and enhancing). The financial crisis emphasized the limits of financial reporting, which was unable to warn investors about the risks they were facing. Due to current changes in the business environment, managers have been highly motivated to rethink and improve the risk governance philosophy, processes and methodologies. The lack of quality, timely data and adequate systems to capture, report and measure the right information across the organization is a fundamental challenge for implementing and sustaining all aspects of effective risk management. Since the 1980s, investors have been more interested in narratives (Notes to financial statements) than in primary reports (financial position and performance). The research applies a regression model to assess risk reporting by professional (accounting and taxation) services providers for major companies from Romania during the period 2009 – 2013.

  8. Accounting for detectability in fish distribution models: an approach based on time-to-first-detection

    Directory of Open Access Journals (Sweden)

    Mário Ferreira

    2015-12-01

    Full Text Available Imperfect detection (i.e., failure to detect a species when it is present) is increasingly recognized as an important source of uncertainty and bias in species distribution modeling. Although methods have been developed to solve this problem by explicitly incorporating variation in detectability in the modeling procedure, their use in freshwater systems remains limited. This is probably because most methods imply repeated sampling (≥ 2 visits) of each location within a short time frame, which may be impractical or too expensive in most studies. Here we explore a novel approach to control for detectability based on time-to-first-detection, which requires only a single sampling occasion and so may find more general applicability in freshwaters. The approach uses a Bayesian framework to combine conventional occupancy modeling with techniques borrowed from parametric survival analysis, jointly modeling factors affecting the probability of occupancy and the time required to detect a species. To illustrate the method, we modeled large-scale factors (elevation, stream order and precipitation) affecting the distribution of six fish species in a catchment located in north-eastern Portugal, while accounting for factors potentially affecting detectability at sampling points (stream depth and width). Species detectability was most influenced by depth and to a lesser extent by stream width, and tended to increase over time for most species. Occupancy was consistently affected by stream order, elevation and annual precipitation. The species presented a widespread distribution with higher uncertainty in tributaries and upper stream reaches. This approach can be used to estimate sampling efficiency and provides a practical framework to incorporate variation in detection rates in fish distribution models.
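
The time-to-first-detection idea can be sketched with an exponential waiting-time model: at an occupied site with detection rate lam, the species is detected within a visit of length T with probability 1 - exp(-lam*T), and visits with no detection remain informative. All names and values below are illustrative, not the paper's Bayesian formulation:

```python
import math

def detect_prob(lam, T):
    """P(first detection <= T | site occupied), exponential waiting time."""
    return 1.0 - math.exp(-lam * T)

def log_lik(psi, lam, T, detection_times):
    """Joint occupancy (psi) + time-to-detection log-likelihood, one visit per
    site; None means the species was never detected during the visit."""
    ll = 0.0
    for t in detection_times:
        if t is None:   # never detected: site unoccupied, or occupied but missed
            ll += math.log((1 - psi) + psi * math.exp(-lam * T))
        else:           # detected at time t: occupied, exponential density at t
            ll += math.log(psi * lam * math.exp(-lam * t))
    return ll

print(round(detect_prob(0.1, 30.0), 3))   # 0.95
```

In the paper both psi and lam would be regressed on covariates (elevation, stream order, depth, width) within a Bayesian sampler; the sketch shows only the likelihood kernel.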

  9. Physics Of Eclipsing Binaries. II. The Increased Model Precision

    CERN Document Server

    Prsa, Andrej; Horvat, Martin; Pablo, Herbert; Kochoska, Angela; Bloemen, Steven; Nemravova, Jana; Giammarco, Joseph; Hambleton, Kelly M; Degroote, Pieter

    2016-01-01

    The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars has strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures and luminosities), yet the models are not capable of reproducing observed data well, either because of missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, hitherto buried in noise, started showing up routinely in the data, but were not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. ...

  10. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    Science.gov (United States)

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology.

  11. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based valuation.

  12. Accounting comparability and the accuracy of peer-based valuation models

    NARCIS (Netherlands)

    Young, S.; Zeng, Y.

    2015-01-01

    We examine the link between enhanced accounting comparability and the valuation performance of pricing multiples. Using the warranted multiple method proposed by Bhojraj and Lee (2002, Journal of Accounting Research), we demonstrate how enhanced accounting comparability leads to better peer-based valuation.

  13. Modeling Lung Carcinogenesis in Radon-Exposed Miner Cohorts: Accounting for Missing Information on Smoking.

    Science.gov (United States)

    van Dillen, Teun; Dekkers, Fieke; Bijwaard, Harmen; Brüske, Irene; Wichmann, H-Erich; Kreuzer, Michaela; Grosche, Bernd

    2016-05-01

    Epidemiological miner cohort data used to estimate lung cancer risks related to occupational radon exposure often lack cohort-wide information on exposure to tobacco smoke, a potential confounder and important effect modifier. We have developed a method to project data on smoking habits from a case-control study onto an entire cohort by means of a Monte Carlo resampling technique. As a proof of principle, this method is tested on a subcohort of 35,084 former uranium miners employed at the WISMUT company (Germany), with 461 lung cancer deaths in the follow-up period 1955-1998. After applying the proposed imputation technique, a biologically-based carcinogenesis model is employed to analyze the cohort's lung cancer mortality data. A sensitivity analysis based on a set of 200 independent projections with subsequent model analyses yields narrow distributions of the free model parameters, indicating that parameter values are relatively stable and independent of individual projections. This technique thus offers a possibility to account for unknown smoking habits, enabling us to unravel risks related to radon, to smoking, and to the combination of both.
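
The projection step can be sketched as follows: smoking status is resampled for cohort members from case-control frequencies, stratified by a covariate. The strata and probabilities below are hypothetical, not WISMUT estimates:

```python
import random

random.seed(42)

# P(smoker) by age band, as if estimated from a case-control study (hypothetical)
p_smoker = {"<55": 0.55, "55-70": 0.62, ">70": 0.48}

# A toy cohort with known age bands but unknown smoking status
cohort = [{"age_band": random.choice(list(p_smoker))} for _ in range(1000)]

def project_smoking(cohort):
    """One Monte Carlo projection of smoking habits onto the cohort; the paper
    repeats this ~200 times and refits the carcinogenesis model each time."""
    for member in cohort:
        member["smoker"] = random.random() < p_smoker[member["age_band"]]
    return cohort

projected = project_smoking(cohort)
frac = sum(m["smoker"] for m in projected) / len(projected)
print(0.4 < frac < 0.7)   # True: overall fraction lands between the strata rates
```

The narrow parameter distributions reported in the abstract correspond to refitting the model across many such independent projections.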

  14. The Iquique earthquake sequence of April 2014: Bayesian modeling accounting for prediction uncertainty

    Science.gov (United States)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Riel, Bryan; Owen, Susan E; Moore, Angelyn W; Samsonov, Sergey V; Ortega Culaciati, Francisco; Minson, Sarah E.

    2016-01-01

The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench, and most of the seismic gap was left unbroken, leaving open the possibility of a future large earthquake in the region.
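The essence of folding Green's-function uncertainty into a Bayesian inversion can be sketched for a toy linear problem: a "prediction error" covariance is added to the data covariance before forming the Gaussian posterior. The sizes, scalings, and priors below are illustrative assumptions, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward problem d = G m + noise (2 slip parameters, 20 data).
m_true = np.array([1.0, -0.5])
G = rng.normal(size=(20, 2))
d = G @ m_true + rng.normal(scale=0.1, size=20)

# Data covariance plus a prediction-error covariance C_p encoding
# uncertainty in the Green's functions themselves (illustrative scaling).
C_d = 0.1**2 * np.eye(20)
C_p = 0.05**2 * np.eye(20)
C_chi = C_d + C_p

# Gaussian prior on the model parameters.
m0 = np.zeros(2)
C_m = np.eye(2)

# Conjugate Gaussian posterior for the linear problem.
Ci = np.linalg.inv(C_chi)
post_cov = np.linalg.inv(G.T @ Ci @ G + np.linalg.inv(C_m))
post_mean = post_cov @ (G.T @ Ci @ d + np.linalg.inv(C_m) @ m0)
```

Inflating C_chi widens the posterior rather than forcing the fit, which is what lets such approaches avoid ad hoc regularization.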

  15. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    Science.gov (United States)

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-03-21

    Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range, color, filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance, gaussian, signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased 10-15 times the estimate of the internal noise level for a 3-fold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model accounts for both the appearance and discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, the results have implications for comparing perceptual and neural response functions.
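MLDS rests on an equal-variance Gaussian signal detection model; its basic quantity can be illustrated with the classical d′ statistic, the separation of signal and noise distributions in units of their common standard deviation. The counts below are invented for illustration.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' under the equal-variance Gaussian signal detection
    model: z(hit rate) - z(false alarm rate)."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Illustrative counts from a hypothetical yes/no detection task.
d = dprime(hits=80, misses=20, false_alarms=20, correct_rejections=80)
```

A perceptual scale expressed in these units is what allows the same model to predict paired-comparison discrimination performance.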

  16. Atomic Data and Spectral Models for FeII

    CERN Document Server

    Bautista, Manuel A; Ballance, Connor; Quinet, Pascal; Ferland, Gary; Mendoza, Claudio; Kallman, Timothy R

    2015-01-01

    We present extensive calculations of radiative transition rates and electron impact collision strengths for Fe II. The data sets involve 52 levels from the $3d\\,^7$, $3d\\,^64s$, and $3d\\,^54s^2$ configurations. Computations of $A$-values are carried out with a combination of state-of-the-art multiconfiguration approaches, namely the relativistic Hartree--Fock, Thomas--Fermi--Dirac potential, and Dirac--Fock methods; while the $R$-matrix plus intermediate coupling frame transformation, Breit--Pauli $R$-matrix and Dirac $R$-matrix packages are used to obtain collision strengths. We examine the advantages and shortcomings of each of these methods, and estimate rate uncertainties from the resulting data dispersion. We proceed to construct excitation balance spectral models, and compare the predictions from each data set with observed spectra from various astronomical objects. We are thus able to establish benchmarks in the spectral modeling of [Fe II] emission in the IR and optical regions as well as in the UV Fe...

  17. Hydrodynamical models of Type II-Plateau Supernovae

    CERN Document Server

    Bersten, Melina C; Hamuy, Mario

    2011-01-01

We present bolometric light curves of Type II-plateau supernovae (SNe II-P) obtained using a newly developed, one-dimensional Lagrangian hydrodynamic code with flux-limited radiation diffusion. Using our code we calculate the bolometric light curve and photospheric velocities of SN1999em, obtaining remarkably good agreement with observations despite the simplifications used in our calculation. The physical parameters used in our calculation are E=1.25 foe, M= 19 M_\odot, R= 800 R_\odot and M_{Ni}=0.056 M_\odot. We find that extensive mixing of 56Ni is needed in order to reproduce a plateau as flat as that shown by the observations. We also study the possibility of fitting the observations with lower values of the initial mass, consistent with upper limits that have been inferred from pre-supernova imaging of SN1999em in connection with stellar evolution models. We cannot find a set of physical parameters that reproduces the observations well for models with a pre-supernova mass of \leq 12 M_\odot, although mode...

  18. Eagle II: A prototype for multi-resolution combat modeling

    Energy Technology Data Exchange (ETDEWEB)

    Powell, D.R.; Hutchinson, J.L.

    1993-02-01

Eagle II is a prototype analytic model derived from the integration of the low resolution Eagle model with the high resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high and low fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to ``calibrate`` the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the world-wide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness and lower overall cost.

  19. A Modified Model of Ecological Footprint Accounting and Its Application to Cropland in Jiangsu,China

    Institute of Scientific and Technical Information of China (English)

    LIU Qin-Pu; LIN Zhen-Shan; FENG Nian-Hua; LIU Yong-Mei

    2008-01-01

Based on the theory of emergy analysis, a modified model of ecological footprint accounting, termed the emergetic ecological footprint (EMEF) in contrast to the conventional ecological footprint (EF) model, is formulated and applied to a case study of Jiangsu cropland, China. Comparisons between the EF and the EMEF with respect to grain, cotton, and food oil were outlined. Per capita EF and EMEF of cropland were also presented to depict the level of resource consumption by comparison with the biocapacity (BC) or emergetic biocapacity (EMBC, a new BC calculation by emergy analysis) of the same area. Meanwhile, the ecological sustainability index (ESI), a new concept initiated by the authors, was established in the modified model to indicate and compare the sustainability of cropland use at different levels and between different regions. The results from the conventional EF showed that the per capita EF of the cropland has exceeded its per capita BC in Jiangsu since 1986. In contrast, based on the EMBC, the per capita EMEF exceeded the per capita EMBC five years earlier. The ESIs of Jiangsu cropland use were between 0.7 and 0.4 by the conventional method, while the numbers were between 0.7 and 0.3 by the modified one. The similarity of the results from the two methods showed that the modified model was reasonable and feasible, although some principles of the EF and EMEF are quite different. Moreover, given the realities of Jiangsu's cropland use, the results from the modified model were more acceptable.

  20. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates land use choices and species responses to bioclimatic variables simultaneously. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forested trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and direction of these differences depended on the correlations between the errors from both equations and were highest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong misestimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  1. Modeling the quantum interference signatures of the Ba II D2 4554 A line in the second solar spectrum

    CERN Document Server

    Smitha, H N; Stenflo, J O; Sampoorna, M

    2013-01-01

    Quantum interference effects play a vital role in shaping the linear polarization profiles of solar spectral lines. The Ba II D2 line at 4554 A is a prominent example, where the F-state interference effects due to the odd isotopes produce polarization profiles, which are very different from those of the even isotopes that have no F-state interference. It is therefore necessary to account for the contributions from the different isotopes to understand the observed linear polarization profiles of this line. Here we do radiative transfer modeling with partial frequency redistribution (PRD) of such observations while accounting for the interference effects and isotope composition. The Ba II D2 polarization profile is found to be strongly governed by the PRD mechanism. We show how a full PRD treatment succeeds in reproducing the observations, while complete frequency redistribution (CRD) alone fails to produce polarization profiles that have any resemblance with the observed ones. However, we also find that the li...

  2. A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE

    Energy Technology Data Exchange (ETDEWEB)

    Pejcha, Ondřej [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08540 (United States); Prieto, Jose L., E-mail: pejcha@astro.princeton.edu [Núcleo de Astronomía de la Facultad de Ingeniería, Universidad Diego Portales, Av. Ejército 441 Santiago (Chile)

    2015-02-01

We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with the standard R_V ∼ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.

  3. Simple inflationary quintessential model. II. Power law potentials

    Science.gov (United States)

    de Haro, Jaume; Amorós, Jaume; Pan, Supriya

    2016-09-01

The present work is a sequel to our previous work [Phys. Rev. D 93, 084018 (2016)], which depicted a simple version of an inflationary quintessential model whose inflationary stage was described by a Higgs-type potential and whose quintessential phase was driven by an exponential potential. Additionally, the model predicted a nonsingular universe in the past which was geodesically past incomplete. Further, it was also found that the model is in agreement with the Planck 2013 data when running is allowed. However, this model provides a theoretical value of the running which is far smaller than the central value of the best fit in the ns, r, αs≡d ns/d ln k parameter space, where ns, r, αs respectively denote the spectral index, tensor-to-scalar ratio and the running of the spectral index associated with any inflationary model; consequently, to analyze the viability of the model one has to focus on the two-dimensional marginalized confidence level in the allowed domain of the plane (ns, r) without taking the running into account. Unfortunately, such analysis shows that this model does not pass this test. In this sequel, however, we propose another "inflationary quintessential model": a family of models governed by a single parameter α ∈ [0,1], where the inflation and quintessence regimes are respectively described by a power law potential and a cosmological constant. The model is also nonsingular, although geodesically past incomplete as in the cited model. Moreover, the present one is simpler than the previous model and is in excellent agreement with the observational data. In fact, we note that, unlike the previous model, a large number of the models of this family with α ∈ [0, 1/2) match both Planck 2013 and Planck 2015 data without allowing the running. Thus, the properties of the current family of models compared to its past companion justify its need for a better cosmological model with the successive

  4. Carbon accounting of forest bioenergy: from model calibrations to policy options (Invited)

    Science.gov (United States)

    Lamers, P.

    2013-12-01

knowledge in the field by comparing different state-of-the-art temporal forest carbon modeling efforts, and discusses whether or to what extent a deterministic 'carbon debt' accounting is possible and appropriate. It concludes on the possible scientific and, eventually, political choices in temporal carbon accounting for regulatory frameworks, including alternative options to address unintentional carbon losses within forest ecosystems/bioenergy systems.

  5. Modelling the range expansion of the Tiger mosquito in a Mediterranean Island accounting for imperfect detection.

    Science.gov (United States)

    Tavecchia, Giacomo; Miranda, Miguel-Angel; Borrás, David; Bengoa, Mikel; Barceló, Carlos; Paredes-Esquivel, Claudia; Schwarz, Carl

    2017-01-01

Aedes albopictus (Diptera; Culicidae) is a highly invasive mosquito species and a competent vector of several arboviral diseases that have spread rapidly throughout the world. Prevalence and patterns of dispersal of the mosquito are of central importance for an effective control of the species. We used site-occupancy models accounting for false negative detections to estimate the prevalence, the turnover, the movement pattern and the growth rate in the number of sites occupied by the mosquito in 17 localities throughout Mallorca Island. Site-occupancy probability increased from 0.35 in 2012, the year of the first reported observation of the species, to 0.89 in 2015. Despite a steady increase in mosquito presence, the extinction probability was generally high, indicating a high turnover in the occupied sites. We considered two site-dependent covariates, namely the distance from the point of first observation and the estimated yearly occupancy rate in the neighborhood, as predicted by diffusion models. Results suggested that the mosquito distribution during the first year was consistent with the predictions of simple diffusion models, but not in subsequent years, when it was closer to the pattern expected from leapfrog dispersal events. Assuming a single initial colonization event, the spread of Ae. albopictus in Mallorca followed two distinct phases, an early one consistent with diffusion movements and a second consistent with long-distance, 'leapfrog', movements. The colonization of the island was fast, with ~90% of the sites estimated to be occupied 3 years after the colonization. The fast spread was likely to have occurred through vectors related to human mobility such as cars or other vehicles. Surveillance and management actions near the introduction point would only be effective during the early steps of the colonization.
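The idea behind site-occupancy modeling with imperfect detection can be sketched with a single-season model: a site is occupied with probability ψ and, if occupied, the species is detected on each visit with probability p, so an all-zero history can mean either absence or repeated misses. The detection histories and the crude grid search below are illustrative, not the study's data or estimator.

```python
import math

# Number of detections out of K visits at each site (hypothetical data).
K = 4
detections = [0, 0, 2, 3, 0, 1, 4, 0, 2, 0, 3, 1, 0, 2, 0]

def log_lik(psi, p):
    """Single-season occupancy log-likelihood (binomial coefficients,
    being constant in the parameters, are dropped)."""
    ll = 0.0
    for d in detections:
        if d > 0:
            # Detected at least once: the site is certainly occupied.
            ll += math.log(psi) + d * math.log(p) + (K - d) * math.log(1 - p)
        else:
            # Never detected: occupied-but-missed, or truly unoccupied.
            ll += math.log(psi * (1 - p) ** K + (1 - psi))
    return ll

# Coarse grid-search MLE (a sketch; real analyses use dedicated software).
psi_hat, p_hat = max(
    ((i / 100, j / 100) for i in range(1, 100) for j in range(1, 100)),
    key=lambda t: log_lik(*t),
)
```

The estimated ψ exceeds the naive fraction of sites with detections, which is exactly the correction for false negatives that motivates these models.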

  6. Long-term fiscal implications of funding assisted reproduction: a generational accounting model for Spain

    Directory of Open Access Journals (Sweden)

    R. Matorras

    2015-12-01

The aim of this study was to assess the lifetime economic benefits of assisted reproduction in Spain by calculating the return on this investment. We developed a generational accounting model that simulates the flow of taxes paid by the individual, minus direct government transfers received over the individual's lifetime. The difference between discounted taxes and transfers, minus the cost of either IVF or artificial insemination (AI), equals the net fiscal contribution (NFC) of a child conceived through assisted reproduction. We conducted sensitivity analysis to test the robustness of our results under various macroeconomic scenarios. A child conceived through assisted reproduction would contribute €370,482 in net taxes to the Spanish Treasury and would receive €275,972 in transfers over their lifetime. Taking into account that only 75% of assisted reproduction pregnancies are successful, the NFC was estimated at €66,709 for IVF-conceived children and €67,253 for AI-conceived children. The return on investment for each euro invested was €15.98 for IVF and €18.53 for AI. The long-term NFC of a child conceived through assisted reproduction could range from €466,379 to €-9,529 (IVF) and from €466,923 to €-8,985 (AI). The return on investment would vary between €-2.28 and €111.75 (IVF) and between €-2.48 and €128.66 (AI) for each euro invested. The break-even point at which the financial position would begin to favour the Spanish Treasury ranges between 29 and 41 years of age. Investment in assisted reproductive techniques may lead to positive discounted future fiscal revenue, notwithstanding its beneficial psychological effect for infertile couples in Spain.
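The accounting identity described above, discounted lifetime taxes minus discounted transfers, scaled by the success rate and net of the treatment cost, can be sketched as follows. The lifetime profiles, discount rate, and treatment cost are invented for illustration; only the 75% success rate comes from the abstract.

```python
def present_value(flows, rate):
    """Discount a stream of yearly cash flows to the present."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Hypothetical lifetime profile (ages 0-79): transfers received in childhood
# and retirement, taxes paid during working age. Illustrative numbers only.
years = range(80)
taxes = [12000 if 25 <= a < 65 else 0 for a in years]
transfers = [6000 if a < 18 or a >= 65 else 0 for a in years]

rate = 0.03           # assumed discount rate
treatment_cost = 5000 # assumed cost of one assisted-reproduction treatment
success_rate = 0.75   # share of successful pregnancies (from the abstract)

# Net fiscal contribution of one funded treatment, and return per euro.
nfc = success_rate * (present_value(taxes, rate)
                      - present_value(transfers, rate)) - treatment_cost
roi_per_euro = nfc / treatment_cost
```

The break-even age reported in the abstract corresponds to the year in which the cumulative discounted balance first turns positive under such a profile.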

  7. A performance weighting procedure for GCMs based on explicit probabilistic models and accounting for observation uncertainty

    Science.gov (United States)

    Renard, Benjamin; Vidal, Jean-Philippe

    2016-04-01

    In recent years, the climate modeling community has put a lot of effort into releasing the outputs of multimodel experiments for use by the wider scientific community. In such experiments, several structurally distinct GCMs are run using the same observed forcings (for the historical period) or the same projected forcings (for the future period). In addition, several members are produced for a single given model structure, by running each GCM with slightly different initial conditions. This multiplicity of GCM outputs offers many opportunities in terms of uncertainty quantification or GCM comparisons. In this presentation, we propose a new procedure to weight GCMs according to their ability to reproduce the observed climate. Such weights can be used to combine the outputs of several models in a way that rewards good-performing models and discards poorly-performing ones. The proposed procedure has the following main properties: 1. It is based on explicit probabilistic models describing the time series produced by the GCMs and the corresponding historical observations, 2. It can use several members whenever available, 3. It accounts for the uncertainty in observations, 4. It assigns a weight to each GCM (all weights summing up to one), 5. It can also assign a weight to the "H0 hypothesis" that all GCMs in the multimodel ensemble are not compatible with observations. The application of the weighting procedure is illustrated with several case studies including synthetic experiments, simple cases where the target GCM output is a simple univariate variable and more realistic cases where the target GCM output is a multivariate and/or a spatial variable. These case studies illustrate the generality of the procedure which can be applied in a wide range of situations, as long as the analyst is prepared to make an explicit probabilistic assumption on the target variable. Moreover, these case studies highlight several interesting properties of the weighting procedure. In
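A minimal version of likelihood-based GCM weighting can be sketched as follows: each GCM's output is summarised by an explicit probabilistic model, the observation error inflates the variance, and the weights are the normalised likelihoods. The GCM summaries, observation series, and Gaussian form are illustrative assumptions, not the procedure's actual specification.

```python
import math

def gaussian_loglik(obs, sim_mean, sim_sd, obs_sd):
    """Log-likelihood of the observations under a Gaussian model for one
    GCM, inflating the variance by the observation error (a simplification)."""
    var = sim_sd ** 2 + obs_sd ** 2
    return sum(-0.5 * math.log(2 * math.pi * var) - (o - sim_mean) ** 2 / (2 * var)
               for o in obs)

# Hypothetical observed annual means, their uncertainty, and three GCMs
# summarised by the mean and spread of their members (invented numbers).
obs = [14.1, 14.3, 14.0, 14.2, 14.4]
obs_sd = 0.1
gcms = {"gcm_a": (14.2, 0.2), "gcm_b": (13.5, 0.2), "gcm_c": (15.0, 0.4)}

logliks = {name: gaussian_loglik(obs, mu, sd, obs_sd)
           for name, (mu, sd) in gcms.items()}
# Normalise in log space for numerical stability; the weights sum to one.
m = max(logliks.values())
raw = {name: math.exp(ll - m) for name, ll in logliks.items()}
total = sum(raw.values())
weights = {name: r / total for name, r in raw.items()}
```

An "H0" weight, as in the proposed procedure, would simply add one more term to the normalisation representing the hypothesis that no GCM is compatible with the observations.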

  8. Historical Account to the State of the Art in Debris Flow Modeling

    Science.gov (United States)

    Pudasaini, Shiva P.

    2013-04-01

    In this contribution, I present a historical account of debris flow modelling leading to the state of the art in simulations and applications. A generalized two-phase model is presented that unifies existing avalanche and debris flow theories. The new model (Pudasaini, 2012) covers both the single-phase and two-phase scenarios and includes many essential and observable physical phenomena. In this model, the solid-phase stress is closed by Mohr-Coulomb plasticity, while the fluid stress is modeled as a non-Newtonian viscous stress that is enhanced by the solid-volume-fraction gradient. A generalized interfacial momentum transfer includes viscous drag, buoyancy and virtual mass forces, and a new generalized drag force is introduced to cover both solid-like and fluid-like drags. Strong couplings between solid and fluid momentum transfer are observed. The two-phase model is further extended to describe the dynamics of rock-ice avalanches with new mechanical models. This model explains dynamic strength weakening and includes internal fluidization, basal lubrication, and exchanges of mass and momentum. The advantages of the two-phase model over classical (effectively single-phase) models are discussed. Advection and diffusion of the fluid through the solid are associated with non-linear fluxes. Several exact solutions are constructed, including the non-linear advection-diffusion of fluid, kinematic waves of debris flow front and deposition, phase-wave speeds, and velocity distribution through the flow depth and through the channel length. The new model is employed to study two-phase subaerial and submarine debris flows, the tsunami generated by the debris impact at lakes/oceans, and rock-ice avalanches. Simulation results show that buoyancy enhances flow mobility. The virtual mass force alters flow dynamics by increasing the kinetic energy of the fluid. Newtonian viscous stress substantially reduces flow deformation, whereas non-Newtonian viscous stress may change the

  9. Scale invariant cosmology II: model equations and properties

    CERN Document Server

    Maeder, Andre

    2016-01-01

We want to establish the basic properties of a scale invariant cosmology that also accounts for the hypothesis of scale invariance of the empty space at large scales. We write down the basic analytical properties of the scale invariant cosmological models. The hypothesis of scale invariance of the empty space at large scales brings interesting simplifications in the scale invariant equations for cosmology. There is one new term, depending on the scale factor of the scale invariant cosmology, that opposes gravity and favours an accelerated expansion. We first consider a zero-density model and find an accelerated expansion, growing as the square of time. In models with matter present, the displacements due to the new term make a significant contribution Omega_l to the energy density of the Universe, satisfying an equation of the form Omega_m + Omega_k + Omega_l = 1. Unlike the Friedmann models, there is a whole family of flat models (k=0) with different density parameters Omega_m smaller than 1. We examine the basic relat...

  10. Modeling fluid dynamics on type II quantum computers

    Science.gov (United States)

    Scoville, James; Weeks, David; Yepez, Jeffrey

    2006-03-01

A quantum algorithm is presented for modeling the time evolution of density and flow fields governed by classical equations, such as the diffusion equation, the nonlinear Burgers equation, and the damped wave equation. The algorithm is intended to run on a type-II quantum computer, a parallel quantum computer consisting of a lattice of small type-I quantum computers undergoing unitary evolution and interacting via information interchanges represented by orthogonal matrices. Information is effectively transferred between adjacent quantum computers over classical communications channels because of controlled state demolition following local quantum mechanical qubit-qubit interactions within each quantum computer. The type-II quantum algorithm presented in this paper describes a methodology for generating quantum logic operations as a generalization of classical operations associated with finite-point group symmetries. The quantum mechanical evolution of multiple qubits within each node is described. A proof is presented that the parallel quantum system obeys a finite-difference quantum Boltzmann equation at the mesoscopic scale, leading in turn to various classical linear and nonlinear effective field theories at the macroscopic scale depending on the details of the local qubit-qubit interactions.
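The mesoscopic picture, local collisions at each node followed by streaming of populations between lattice sites, has a simple classical analogue. The sketch below is a classical two-population (D1Q2-style) lattice-Boltzmann scheme whose total density obeys a diffusion equation at the macroscopic scale; the lattice size, relaxation time, and initial condition are arbitrary choices for illustration, not the quantum algorithm itself.

```python
# Classical D1Q2 lattice-Boltzmann sketch: right-moving (fp) and left-moving
# (fm) populations relax toward local equilibrium and then stream; their sum
# diffuses at the macroscopic scale.
N = 64
tau = 1.0  # relaxation time (with tau = 1 the scheme is a simple averaging)

# Initial density bump in the middle of a periodic lattice.
rho = [1.0 if N // 4 <= i < 3 * N // 4 else 0.0 for i in range(N)]
fp = [0.5 * r for r in rho]  # right-movers
fm = [0.5 * r for r in rho]  # left-movers

for _ in range(200):
    # Collision: relax each population toward the equilibrium rho / 2.
    rho = [a + b for a, b in zip(fp, fm)]
    fp = [f + (0.5 * r - f) / tau for f, r in zip(fp, rho)]
    fm = [f + (0.5 * r - f) / tau for f, r in zip(fm, rho)]
    # Streaming: shift the two populations in opposite directions (periodic).
    fp = fp[-1:] + fp[:-1]
    fm = fm[1:] + fm[:1]

rho = [a + b for a, b in zip(fp, fm)]
```

Mass is conserved exactly by the collision step, and the initial sharp bump relaxes toward a smooth profile, the discrete counterpart of the diffusion equation the paper derives at the macroscopic scale.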

  11. The black hole challenge in Randall-Sundrum II model

    CERN Document Server

    Pappas, Nikolaos D

    2014-01-01

    Models postulating the existence of additional spacelike dimensions of macroscopic or even infinite size, while viewing our observable universe as merely a 3-brane living in a higher-dimensional bulk were a major breakthrough when proposed some 15 years ago. The most interesting among them both in terms of elegance of the setup and of the richness of the emerging phenomenology is the Randall-Sundrum II model where one infinite extra spacelike dimension is considered with an AdS topology, characterized by the warping effect caused by the presence of a negative cosmological constant in the bulk. A major drawback of this model is that despite numerous efforts no line element has ever been found that could describe a stable, regular, realistic black hole. Finding a smoothly behaved such solution supported by the presence of some more or less conventional fields either in the bulk and/or on the brane is the core of the black hole challenge. After a comprehensive presentation of the details of the model and the ana...

  12. Type II Supernovae: Model Light Curves and Standard Candle Relationships

    Science.gov (United States)

    Kasen, Daniel; Woosley, S. E.

    2009-10-01

A survey of Type II supernova explosion models has been carried out to determine how their light curves and spectra vary with their mass, metallicity, and explosion energy. The presupernova models are taken from a recent survey of massive stellar evolution at solar metallicity supplemented by new calculations at subsolar metallicity. Explosions are simulated by the motion of a piston near the edge of the iron core and the resulting light curves and spectra are calculated using full multi-wavelength radiation transport. Formulae are developed that describe approximately how the model observables (light curve luminosity and duration) scale with the progenitor mass, explosion energy, and radioactive nucleosynthesis. Comparison with observational data shows that the explosion energy of typical supernovae (as measured by kinetic energy at infinity) varies by nearly an order of magnitude, from 0.5 to 4.0 × 10^51 ergs, with a typical value of ~0.9 × 10^51 ergs. Despite the large variation, the models exhibit a tight relationship between luminosity and expansion velocity, similar to that previously employed empirically to make SNe IIP standardized candles. This relation is explained by the simple behavior of hydrogen recombination in the supernova envelope, but we find a sensitivity to progenitor metallicity and mass that could lead to systematic errors. Additional correlations between light curve luminosity, duration, and color might enable the use of SNe IIP to obtain distances accurate to ~20% using only photometric data.
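The standardized-candle idea, a tight linear relation between plateau magnitude and the logarithm of the expansion velocity, can be sketched with a least-squares fit. The coefficients, scatter, and data below are synthetic stand-ins, not the paper's calibration.

```python
import math
import random

rng = random.Random(1)
# Assumed relation M = a + b * log10(v / 5000 km/s), with synthetic scatter.
a_true, b_true = -17.5, -3.0
data = []
for _ in range(30):
    v = rng.uniform(2000, 8000)  # photospheric velocity in km/s
    M = a_true + b_true * math.log10(v / 5000) + rng.gauss(0, 0.1)
    data.append((math.log10(v / 5000), M))

# Ordinary least squares for slope and intercept.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
b_hat = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a_hat = (sy - b_hat * sx) / n
```

Once calibrated, measuring a supernova's velocity yields its absolute magnitude, and hence a distance, from photometry alone.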

  13. Modelling overbank flow on farmed catchments taking into account spatial hydrological discontinuities

    Science.gov (United States)

    Moussa, R.; Tilma, M.; Chahinian, N.; Huttel, O.

    2003-04-01

    In agricultural catchments, hydrological processes are largely variable in space due to human impact causing hydrological discontinuities such as ditch network, field limits and terraces. The ditch network accelerates runoff by concentrating flows, drains the water table or replenishes it by reinfiltration of the runoff water. During extreme flood events, overbank flow occurs and surface pathflows are modified. The purpose of this study is to assess the influence of overbank flow on hydrograph shape during flood events. For that, MHYDAS, a physically based distributed hydrological model, was especially developed to take into account these hydrological discontinuities. The model considers the catchment as a series of interconnected hydrological unit. Runoff from each unit is estimated using a deterministic model based on the pounding-time algorithm and then routed through the ditch network using the diffusive wave equation. Overbank flow is modelled by modifying links between the hydrological units and the ditch network. The model was applied to simulate the main hydrological processes on a small headwater farmed Mediterranean catchment located in Southern France. The basic hydrometeorological equipment consists of a meteorological station, rain gauges, a tensio-neutronic and a piezometric measurement network, and eight water flow measurements. A multi-criteria and multi-scale approach was used. Three independent error criteria (Nash, error on volume and error on peak flow) were calculated and combined using the Pareto technique. Then, a multi-scale approach was used to calibrate and validate the model for the eight water flow measurements. The application of MHYDAS on the extreme ten flood events of the last decade enables to identify the ditches where overbank flows occur and to calculate discharge at various points of the ditch network. Results show that for the extreme flood event, more than 45% of surface runoff occur due to overbank flow. 
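The multi-criteria step described above (Nash efficiency, volume error and peak-flow error combined via Pareto non-domination) can be sketched as follows; the function and variable names are illustrative, not taken from MHYDAS:

```python
import numpy as np

def error_criteria(obs, sim):
    """Three error criteria of the kind used in multi-criteria calibration:
    Nash-Sutcliffe efficiency (reported as 1 - NSE so that lower is better),
    relative volume error, and relative peak-flow error."""
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    vol_err = abs(sim.sum() - obs.sum()) / obs.sum()
    peak_err = abs(sim.max() - obs.max()) / obs.max()
    return np.array([1.0 - nse, vol_err, peak_err])

def pareto_front(errors):
    """Indices of non-dominated parameter sets (all criteria minimized).
    A set i is dominated if some j is at least as good on every criterion
    and strictly better on at least one."""
    keep = []
    for i in range(len(errors)):
        dominated = any(
            np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i])
            for j in range(len(errors)) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```

Each candidate parameter set gets one row of criteria values; the Pareto front then contains the calibration candidates among which no single "best" compromise is imposed a priori.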

  14. Use of the Sacramento Soil Moisture Accounting Model in Areas with Insufficient Forcing Data

    Science.gov (United States)

    Kuzmin, V.

    2009-04-01

The Sacramento Soil Moisture Accounting model (SAC-SMA) is known as a very reliable and effective hydrological model. It is widely used by the U.S. National Weather Service (NWS) and by many organizations in other countries for operational forecasting of flash floods. As a purely conceptual model, the SAC-SMA requires periodic re-calibration. However, this procedure is not trivial in watersheds with little or no historical data, in areas with changing watershed properties, in a changing climate, or in regions with low-quality, low-spatial-resolution forcing data. In such cases, so-called physically based models with measurable parameters may not be an alternative either, because they usually require high-quality forcing data and are therefore quite expensive; consequently, such models cannot be implemented in countries with scarce surface observation data. To resolve this problem, we propose a very fast and efficient automatic calibration algorithm, the Stepwise Line Search (SLS), which has been in use at the NWS since 2005, together with modifications developed especially for automated operational forecasting of flash floods in regions where high-resolution, high-quality forcing data are not available. The SLS family includes several simple yet efficient calibration algorithms: 1) SLS-F, which performs simultaneous natural smoothing of the response surface by quasi-local estimation of F-indices, allowing the most stable and reliable parameters to be found even when they differ from "global" optima in the usual sense (thus, this method slightly transforms the original objective function); 2) SLS-2L (Two-Loop SLS), which is suitable for basins where the hydraulic properties of the soil are unknown; 3) SLS-2LF, a conjunction of the SLS-F and SLS-2L algorithms, which yields SAC-SMA parameters that can be transferred to ungauged catchments; 4) SLS-E, which additionally applies stochastic filtering of the model input through
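The abstract names the SLS family but does not spell out the algorithm. A generic stepwise (coordinate-wise) line search of the kind it describes might look like the sketch below; all parameter names, step sizes and the shrink schedule are assumptions, and the operational SLS variants add the response-surface smoothing and stochastic filtering mentioned above:

```python
import numpy as np

def stepwise_line_search(objective, x0, bounds, step=0.1, shrink=0.5,
                         tol=1e-6, max_iter=200):
    """Improve one parameter at a time; when no coordinate move helps,
    halve the step sizes. A simplified sketch of a stepwise line search,
    not the published SLS implementation."""
    x = np.array(x0, dtype=float)
    f = objective(x)
    steps = np.array([step * (hi - lo) for lo, hi in bounds])
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for direction in (+1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + direction * steps[i], *bounds[i])
                ft = objective(trial)
                if ft < f:
                    x, f, improved = trial, ft, True
                    break  # next coordinate
        if not improved:
            steps *= shrink          # refine the search scale
            if steps.max() < tol:
                break
    return x, f
```

In a SAC-SMA setting, `objective` would wrap a model run and return a calibration error (e.g. RMSE against observed discharge).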

  15. Do current connectionist learning models account for reading development in different languages?

    Science.gov (United States)

    Hutzler, Florian; Ziegler, Johannes C; Perry, Conrad; Wimmer, Heinz; Zorzi, Marco

    2004-04-01

Learning to read a relatively irregular orthography, such as English, is harder and takes longer than learning to read a relatively regular orthography, such as German. At the end of grade 1, the difference in reading performance on a simple set of words and nonwords is quite dramatic. Whereas children using regular orthographies are already close to ceiling, English children read only about 40% of the words and nonwords correctly. It takes almost 4 years for English children to come close to the reading level of their German peers. In the present study, we investigated to what extent recent connectionist learning models are capable of simulating this cross-language learning rate effect as measured by nonword decoding accuracy. We implemented German and English versions of two major connectionist reading models, Plaut et al.'s (Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. (1996). Understanding normal and impaired word reading: computational principles in quasi-regular domains. Psychological Review, 103, 56-115) parallel distributed model and Zorzi et al.'s (Zorzi, M., Houghton, G., & Butterworth, B. (1998a). Two routes or one in reading aloud? A connectionist dual-process model. Journal of Experimental Psychology: Human Perception and Performance, 24, 1131-1161) two-layer associative network. While both models predicted an overall advantage for the more regular orthography (i.e. German over English), they failed to predict that the difference between children learning to read regular versus irregular orthographies is larger earlier on. Further investigations showed that the two-layer network could be brought to simulate the cross-language learning rate effect when cross-language differences in teaching methods (phonics versus whole-word approach) were taken into account. The present work thus shows that in order to adequately capture the pattern of reading acquisition displayed by children, current connectionist models must not only be

  16. The Charitable Trust Model: An Alternative Approach For Department Of Defense Accounting

    Science.gov (United States)

    2016-12-01

Constitution declares, “No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law; and a regular Statement and Account...accounting to supplant the current corporate-style financial management and reporting practices mandated by federal law. First, the researcher identifies...administration. The researcher then analyzes how the misapplied logic of private sector accounting creates weakness and inconsistencies in federal

  17. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    Science.gov (United States)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied at different spatial resolutions, including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to fewer errors at all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) across all cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS
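Local analysis typically works by tapering spatially correlated error covariances with distance. A common choice for the taper (assumed here for illustration; the abstract does not name the function used) is the Gaspari-Cohn fifth-order piecewise polynomial applied as a Schur (element-wise) product:

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn 5th-order compactly supported taper;
    r = distance / localization radius. Equals 1 at r=0, 0 for r>=2."""
    r = np.abs(np.asarray(r, dtype=float))
    taper = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    x = r[m1]
    taper[m1] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - (5.0/3.0)*x**2 + 1.0
    x = r[m2]
    taper[m2] = ((1.0/12.0)*x**5 - 0.5*x**4 + 0.625*x**3
                 + (5.0/3.0)*x**2 - 5.0*x + 4.0 - 2.0/(3.0*x))
    return taper

def localize(cov, dist, radius):
    """Schur product of an error covariance with the distance-based taper,
    suppressing spurious long-range correlations."""
    return cov * gaspari_cohn(dist / radius)
```

In an ensemble filter such as SQRA, the taper damps sampling noise in long-range covariance entries while leaving local structure intact.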

  18. Short-run analysis of fiscal policy and the current account in a finite horizon model

    OpenAIRE

    Heng-fu Zou

    1995-01-01

    This paper utilizes a technique developed by Judd to quantify the short-run effects of fiscal policies and income shocks on the current account in a small open economy. It is found that: (1) a future increase in government spending improves the short-run current account; (2) a future tax increase worsens the short-run current account; (3) a present increase in the government spending worsens the short-run current account dollar by dollar, while a present increase in the income improves the cu...

  19. Efficient three-dimensional global models for climate studies - Models I and II

    Science.gov (United States)

    Russel, G.; Rind, D.; Lacis, A.; Travis, L.; Stone, P.; Lebedeff, S.; Ruedy, R.; Hansen, J.

    1983-01-01

    Climate modeling based on numerical solution of the fundamental equations for atmospheric structure and motion permits the explicit modeling of physical processes in the climate system and the natural treatment of interactions and feedbacks among parts of the system. The main difficulty concerning this approach is related to the computational requirements. The present investigation is concerned with the development of a grid-point model which is programmed so that both horizontal and vertical resolutions can easily be changed. Attention is given to a description of Model I, the performance of sensitivity experiments by varying parameters, the definition of an improved Model II, and a study of the dependence of climate simulation on resolution with Model II. It is shown that the major features of global climate can be simulated reasonably well with a horizontal resolution as coarse as 1000 km. Such a resolution allows the possibility of long-range climate studies with moderate computer resources.

  20. Process Accounting

    OpenAIRE

    Gilbertson, Keith

    2002-01-01

    Standard utilities can help you collect and interpret your Linux system's process accounting data. Describes the uses of process accounting, standard process accounting commands, and example code that makes use of process accounting utilities.

  1. Photoionization models of the CALIFA H II regions. I. Hybrid models

    Science.gov (United States)

    Morisset, C.; Delgado-Inglada, G.; Sánchez, S. F.; Galbany, L.; García-Benito, R.; Husemann, B.; Marino, R. A.; Mast, D.; Roth, M. M.

    2016-10-01

Photoionization models of H ii regions require as input a description of the ionizing spectral energy distribution (SED) and of the gas distribution, in terms of ionization parameter U and chemical abundances (e.g., O/H and N/O). A strong degeneracy exists between the hardness of the SED and U, which in turn leads to high uncertainties in the determination of the other parameters, including abundances. One way to resolve the degeneracy is to fix one of the parameters using additional information. For each of the ~20 000 sources of the CALIFA H ii regions catalog, a grid of photoionization models is computed assuming the ionizing SED to be described by the underlying stellar population obtained from spectral synthesis modeling. The ionizing SED is then defined as the sum of various stellar bursts of different ages and metallicities. This solves the degeneracy between the shape of the ionizing SED and U. The nebular metallicity (associated with O/H) is defined using the classical strong line method O3N2 (which gives our models the status of "hybrids"). The remaining free parameters are the abundance ratio N/O and the ionization parameter U, which are determined by looking for the model fitting [N ii]/Hα and [O iii]/Hβ. The models are also selected to fit [O ii]/Hβ. This process leads to a set of ~3200 models that reproduce the three observations simultaneously. We find that the regions associated with young stellar bursts (i.e., ionized by OB stars) are affected by leaking of ionizing photons, the proportion of escaping photons having a median of 80%. The set of photoionization models satisfactorily reproduces the electron temperature derived from the [O iii]λ4363/5007 line ratio. We determine new relations between the nebular parameters, like the ionization parameter U and the [O ii]/[O iii] or [S ii]/[S iii] line ratios. A new relation between N/O and O/H is obtained, mostly compatible with previous empirical determinations (and not with previous results obtained
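As an illustration of a "classical strong line method O3N2", the widely used Pettini & Pagel (2004) calibration is sketched below; whether the CALIFA models use this exact calibration (rather than a later O3N2 recalibration) is an assumption:

```python
import math

def o3n2_metallicity(oiii_hb, nii_ha):
    """Gas-phase oxygen abundance 12 + log(O/H) from the O3N2 index,
    O3N2 = log10( ([O III]5007/Hbeta) / ([N II]6583/Halpha) ),
    using the Pettini & Pagel (2004) linear calibration as an example."""
    o3n2 = math.log10(oiii_hb / nii_ha)
    return 8.73 - 0.32 * o3n2
```

With both line ratios equal (O3N2 = 0) the calibration returns 12 + log(O/H) = 8.73, close to the solar value; harder, lower-metallicity regions have larger O3N2 and hence lower inferred O/H.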

  2. Equilibrium modeling of mono- and binary sorption of Cu(II) and Zn(II) onto chitosan gel beads

    Directory of Open Access Journals (Sweden)

    Nastaj Józef

    2016-12-01

The objectives of this work are in-depth experimental studies of Cu(II) and Zn(II) ion removal on chitosan gel beads from both one- and two-component water solutions at a temperature of 303 K. The optimal process conditions, such as pH value, sorbent dose and contact time, were determined. Based on the optimal process conditions, equilibrium and kinetic studies were carried out. The maximum sorption capacities were 191.25 mg/g and 142.88 mg/g for Cu(II) and Zn(II) ions respectively, with a sorbent dose of 10 g/L and a solution pH of 5.0 for both heavy metal ions. One-component sorption equilibrium data were successfully described by six of the most useful three-parameter equilibrium models: Langmuir-Freundlich, Redlich-Peterson, Sips, Koble-Corrigan, Hill and Toth. Extended forms of the Langmuir-Freundlich, Koble-Corrigan and Sips models were also well fitted to the two-component equilibrium data obtained for different concentration ratios of Cu(II) and Zn(II) ions (1:1, 1:2, 2:1). Experimental sorption data were described by pseudo-first-order and pseudo-second-order kinetic models. Furthermore, an attempt was made to explain the mechanisms of divalent metal ion sorption on chitosan gel beads.
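Two of the models named above can be sketched compactly: the Sips (Langmuir-Freundlich) isotherm and the linearized pseudo-second-order kinetic fit. The data below are synthetic, generated for illustration, not the paper's measurements:

```python
import numpy as np

def sips_isotherm(c, qm, k, n):
    """Sips (Langmuir-Freundlich) isotherm: q = qm*(K*c)**n / (1 + (K*c)**n).
    Reduces to Langmuir for n = 1 and to Freundlich at low coverage."""
    kc = (k * c) ** n
    return qm * kc / (1.0 + kc)

def fit_pseudo_second_order(t, q):
    """Linearized pseudo-second-order fit: t/q = 1/(k2*qe**2) + t/qe.
    A straight-line regression of t/q against t gives (qe, k2)."""
    slope, intercept = np.polyfit(t, t / q, 1)
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept  # since intercept = 1/(k2*qe**2)
    return qe, k2
```

On exact pseudo-second-order data the linearized regression recovers the equilibrium capacity qe and rate constant k2; on real data the quality of the straight line is itself a check of the kinetic model.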

  3. Physics Of Eclipsing Binaries. II. Toward the Increased Model Fidelity

    Science.gov (United States)

    Prša, A.; Conroy, K. E.; Horvat, M.; Pablo, H.; Kochoska, A.; Bloemen, S.; Giammarco, J.; Hambleton, K. M.; Degroote, P.

    2016-12-01

    The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures, and luminosities), yet the models are not capable of reproducing observed data well, either because of the missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, insofar buried in noise, started showing up routinely in the data, but were not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. In particular, we discuss triangulation as a superior surface discretization algorithm, meshing of rotating single stars, light travel time effects, advanced phase computation, volume conservation in eccentric orbits, and improved computation of local intensity across the stellar surfaces that includes the photon-weighted mode, the enhanced limb darkening treatment, the better reflection treatment, and Doppler boosting. Here we present the concepts on which PHOEBE is built and proofs of concept that demonstrate the increased model fidelity.

  4. Modeling Degradation in Solid Oxide Electrolysis Cells - Volume II

    Energy Technology Data Exchange (ETDEWEB)

    Manohar Motwani

    2011-09-01

Idaho National Laboratory has an ongoing project to generate hydrogen from steam using solid oxide electrolysis cells (SOECs). To accomplish this, technical and degradation issues associated with the SOECs will need to be addressed. This report covers various approaches being pursued to model degradation issues in SOECs. An electrochemical model for degradation of SOECs is presented. The model is based on concepts of local thermodynamic equilibrium in systems otherwise in global thermodynamic non-equilibrium. It is shown that electronic conduction through the electrolyte, however small, must be taken into account for determining the local oxygen chemical potential, μ(O2), within the electrolyte. In the electrolyzer mode, μ(O2) within the electrolyte may lie outside the bounds set by its values at the electrodes. Under certain conditions, high pressures can develop in the electrolyte just near the oxygen electrode/electrolyte interface, leading to oxygen electrode delamination. These predictions are in accordance with the literature reported on the subject. The development of high pressures may be avoided by introducing some electronic conduction into the electrolyte. Equilibrium thermodynamics, non-equilibrium (diffusion) modeling, and first-principles atomic-scale calculations were combined to understand the degradation mechanisms and provide practical recommendations on how to inhibit and/or completely mitigate them.

  5. Underwriting information-theoretic accounts of quantum mechanics with a realist, psi-epistemic model

    Science.gov (United States)

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    2016-05-01

    We propose an adynamical interpretation of quantum theory called Relational Blockworld (RBW) where the fundamental ontological element is a 4D graphical amalgam of space, time and sources called a “spacetimesource element.” These are fundamental elements of space, time and sources, not source elements in space and time. The transition amplitude for a spacetimesource element is computed using a path integral with discrete graphical action. The action for a spacetimesource element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint between sources, the spacetime metric and the energy-momentum content of the spacetimesource element, rather than a dynamical law for time-evolved entities. To illustrate this interpretation, we explain the simple EPR-Bell and twin-slit experiments. This interpretation of quantum mechanics constitutes a realist, psi-epistemic model that might underwrite certain information-theoretic accounts of the quantum.

  6. Design of a Competency-Based Assessment Model in the Field of Accounting

    Science.gov (United States)

    Ciudad-Gómez, Adelaida; Valverde-Berrocoso, Jesús

    2012-01-01

    This paper presents the phases involved in the design of a methodology to contribute both to the acquisition of competencies and to their assessment in the field of Financial Accounting, within the European Higher Education Area (EHEA) framework, which we call MANagement of COMpetence in the areas of Accounting (MANCOMA). Having selected and…

  7. A pluralistic account of homology: adapting the models to the data.

    Science.gov (United States)

    Haggerty, Leanne S; Jachiet, Pierre-Alain; Hanage, William P; Fitzpatrick, David A; Lopez, Philippe; O'Connell, Mary J; Pisani, Davide; Wilkinson, Mark; Bapteste, Eric; McInerney, James O

    2014-03-01

    Defining homologous genes is important in many evolutionary studies but raises obvious issues. Some of these issues are conceptual and stem from our assumptions of how a gene evolves, others are practical, and depend on the algorithmic decisions implemented in existing software. Therefore, to make progress in the study of homology, both ontological and epistemological questions must be considered. In particular, defining homologous genes cannot be solely addressed under the classic assumptions of strong tree thinking, according to which genes evolve in a strictly tree-like fashion of vertical descent and divergence and the problems of homology detection are primarily methodological. Gene homology could also be considered under a different perspective where genes evolve as "public goods," subjected to various introgressive processes. In this latter case, defining homologous genes becomes a matter of designing models suited to the actual complexity of the data and how such complexity arises, rather than trying to fit genetic data to some a priori tree-like evolutionary model, a practice that inevitably results in the loss of much information. Here we show how important aspects of the problems raised by homology detection methods can be overcome when even more fundamental roots of these problems are addressed by analyzing public goods thinking evolutionary processes through which genes have frequently originated. This kind of thinking acknowledges distinct types of homologs, characterized by distinct patterns, in phylogenetic and nonphylogenetic unrooted or multirooted networks. In addition, we define "family resemblances" to include genes that are related through intermediate relatives, thereby placing notions of homology in the broader context of evolutionary relationships. We conclude by presenting some payoffs of adopting such a pluralistic account of homology and family relationship, which expands the scope of evolutionary analyses beyond the traditional, yet

  8. Toward an Human Resource Accounting (HRA)-Based Model for Designing an Organizational Effectiveness Audit in Education.

    Science.gov (United States)

    Myroon, John L.

    The major purpose of this paper was to develop a Human Resource Accounting (HRA) macro-model that could be used for designing a school organizational effectiveness audit. Initially, the paper reviewed the advent and definition of HRA. In order to develop the proposed model, the different approaches to measuring effectiveness were reviewed,…

  9. An Interactive Activation Model of Context Effects in Letter Perception: Part 1. An Account of Basic Findings.

    Science.gov (United States)

    McClelland, James L.; Rumelhart, David E.

    1981-01-01

    A model of context effects in perception is applied to perception of letters. Perception results from excitatory and inhibitory interactions of detectors for visual features, letters, and words. The model produces facilitation for letters in pronounceable pseudowords as well as words and accounts for rule-governed performance without any rules.…
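The excitatory/inhibitory dynamics at the heart of interactive activation models can be illustrated schematically. In the sketch below, positive net input drives a unit's activation toward a maximum, negative input toward a minimum, and decay pulls it back to rest; the parameter values are placeholders, not those of the published model:

```python
def ia_update(a, net, a_max=1.0, a_min=-0.2, rest=-0.1, decay=0.1):
    """One interactive-activation-style update step for a single unit.
    `net` is the summed excitatory (positive) and inhibitory (negative)
    input from feature, letter, and word detectors. Illustrative
    parameter values only."""
    if net > 0:
        effect = (a_max - a) * net   # drive toward the ceiling
    else:
        effect = (a - a_min) * net   # drive toward the floor
    return a + effect - decay * (a - rest)
```

Because the effect of input scales with the distance to the ceiling or floor, activations saturate gracefully; word-level units feeding activation back to letter units is what produces the facilitation for letters in words and pseudowords described above.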

  10. Materials measurement and accounting in an operating plutonium conversion and purification process. Phase I. Process modeling and simulation. [PUCSF code

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, C.C. Jr.; Ostenak, C.A.; Gutmacher, R.G.; Dayem, H.A.; Kern, E.A.

    1981-04-01

    A model of an operating conversion and purification process for the production of reactor-grade plutonium dioxide was developed as the first component in the design and evaluation of a nuclear materials measurement and accountability system. The model accurately simulates process operation and can be used to identify process problems and to predict the effect of process modifications.

  12. Modelling of trace metal uptake by roots taking into account complexation by exogenous organic ligands

    Science.gov (United States)

    Jean-Marc, Custos; Christian, Moyne; Sterckeman, Thibault

    2010-05-01

The context of this study is phytoextraction of soil trace metals such as Cd, Pb or Zn. Trace metal transfer from soil to plant depends on physical and chemical processes such as mineral alteration, transport, adsorption/desorption, reactions in solution, and biological processes including the action of plant roots and of the associated micro-flora. Complexation of metal ions by organic ligands is considered to play a role in the availability of trace metals to roots, in particular when synthetic ligands (EDTA, NTA, etc.) are added to the soil to increase the solubility of the contaminants. As this role is not clearly understood, we wanted to simulate it in order to quantify the effect of organic ligands on root uptake of trace metals and to produce a tool which could help in optimizing the conditions of phytoextraction. We studied the effect of an aminocarboxylate ligand on the absorption of the metal ion by roots, both in hydroponic solution and in soil solution, for which we had to formalize the buffer power for the metal. We assumed that the hydrated metal ion is the only form which can be absorbed by the plants. Transport and reaction processes were modelled for a system made up of the metal M, a ligand L and the metal complex ML. The Tinker-Nye-Barber model was adapted to describe the transport of the solutes M, L and ML in the soil and the absorption of M by the roots. This made it possible to represent the interactions between transport, chelating reactions, absorption of the solutes at the root surface, and root growth with time, in order to simulate metal uptake by a whole root system. Several assumptions were tested, such as i) absorption of the metal by an infinite sink and according to Michaelis-Menten kinetics, solute transport by diffusion with and without ii) mass flow and iii) soil buffer power for the ligand L.
In hydroponic solution (without soil buffer power), ligands decreased the trace metal flux towards roots, as they reduced the concentration of hydrated

  13. Dimensional and hierarchical models of depression using the Beck Depression Inventory-II in an Arab college student sample

    Directory of Open Access Journals (Sweden)

    Ohaeri Jude U

    2010-07-01

Background: An understanding of depressive symptomatology from the perspective of confirmatory factor analysis (CFA) could facilitate valid and interpretable comparisons across cultures. The objectives of the study were: (i) using the responses of a sample of Arab college students to the Beck Depression Inventory (BDI-II) in CFA, to compare the "goodness of fit" indices of the original dimensional three- and two-factor first-order models, and their modifications, with the corresponding hierarchical models (i.e., higher-order and bifactor models); and (ii) to assess the psychometric characteristics of the BDI-II, including convergent/discriminant validity with the Hopkins Symptom Checklist (HSCL-25). Method: Participants (N = 624) were Kuwaiti national college students, who completed the questionnaires in class. CFA was done with AMOS, version 16. Eleven models were compared using eight fit indices. Results: In CFA, all the models met most fit criteria. While the higher-order model did not provide improved fit over the dimensional first-order factor models, the bifactor model (BFM) had the best fit indices (CMIN/DF = 1.73; GFI = 0.96; RMSEA = 0.034). All regression weights of the dimensional models were significantly different from zero (P ...). Conclusion: The broadly adequate fit of the various models indicates that they have some merit and implies that the relationship between the domains of depression probably contains hierarchical and dimensional elements. The bifactor model is emerging as the best way to account for the clinical heterogeneity of depression. The psychometric characteristics of the BDI-II lend support to our CFA results.
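The reported fit indices are related by standard formulas: CMIN/DF is the relative chi-square, and RMSEA follows from the chi-square non-centrality estimate. A quick consistency check (with an assumed df of 100, since the abstract does not report one; for a fixed CMIN/DF the RMSEA here does not depend on df):

```python
import math

def fit_indices(chi_sq, df, n):
    """Two common CFA fit indices:
    relative chi-square CMIN/DF = chi^2 / df, and
    RMSEA = sqrt( max(0, (chi^2 - df) / (df * (n - 1))) )."""
    cmin_df = chi_sq / df
    rmsea = math.sqrt(max(0.0, (chi_sq - df) / (df * (n - 1))))
    return cmin_df, rmsea
```

Note that RMSEA can be rewritten as sqrt((CMIN/DF - 1)/(N - 1)); with CMIN/DF = 1.73 and N = 624 this gives RMSEA ≈ 0.034, consistent with the values reported for the bifactor model.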

  14. Radiation-hydrodynamical modelling of underluminous type II plateau Supernovae

    CERN Document Server

    Pumo, M L; Spiro, S; Pastorello, A; Benetti, S; Cappellaro, E; Manicò, G; Turatto, M

    2016-01-01

With the aim of improving our knowledge about the nature of the progenitors of low-luminosity Type II plateau supernovae (LL SNe IIP), we made radiation-hydrodynamical models of the well-sampled LL SNe IIP 2003Z, 2008bk and 2009md. For these three SNe we infer explosion energies of 0.16-0.18 foe, radii at explosion of 1.8-3.5 × 10^13 cm, and ejected masses of 10-11.3 M☉. The estimated progenitor mass on the main sequence is in the range ~13.2-15.1 M☉ for SN 2003Z and ~11.4-12.9 M☉ for SNe 2008bk and 2009md, in agreement with estimates from observations of the progenitors. These results, together with those for other LL SNe IIP modelled in the same way, enable us also to conduct a comparative study of this SN sub-group. The results suggest that: a) the progenitors of faint SNe IIP are slightly less massive and have less energetic explosions than those of intermediate-luminosity SNe IIP, b) both faint and intermediate-luminosity SNe IIP originate from low-energy explo...

  15. Renewable energy for passive house heating - Part II. Model

    Energy Technology Data Exchange (ETDEWEB)

    Badescu, V. [Candida Oancea Institute of Solar Energy, Faculty of Mechanical Engineering, Polytechnic University of Bucharest, Bucharest (Romania); Sicre, B. [Computational Physics, Technical University of Chemnitz, Institute of Physics, Chemnitz (Germany)

    2003-07-01

The evaluation of renewable energy used to increase the environmental friendliness of passive houses (PH) is the topic of this paper. A time-dependent model of passive-house thermal behavior is developed. Heat transfer through the high-thermal-inertia elements is analyzed using a one-dimensional time-dependent conduction heat-transfer equation that is solved numerically with a standard Netlib solver (PDECHEB). Appropriate models for conduction through the low-thermal-inertia elements are used, as well as a simple treatment of solar radiation transmission through the windows. The model takes the internal heat sources into account in a detailed fashion. The operation of the ventilation/heating system is also described, and common-practice control strategies are implemented. Three renewable energy sources are considered. First, there is passive solar heating due to the large window on the south-facing facade. Second, the active solar collector system provides thermal energy for space heating or domestic hot water preparation. Third, a ground heat exchanger (GHE) increases the fresh-air temperature during the cold season. The model was applied to the Pirmasens Passive House (Rhineland Palatinate, Germany). The passive solar heating system provides most of the heating energy during November, December, February and March, while in January the ground heat exchanger is the most important renewable energy source. January and February require the use of additional conventional energy sources. A clever use of the active solar heating system could avoid consuming classical fuels during November, December and March. The ground heat exchanger is a reliable renewable source of energy. It provides heat throughout the day, and its (rather small) heat flux increases as the weather becomes colder. The air temperature at the heater exit is normally lower than 46 °C. This is a good reason for the use of renewable energy to replace the classical fuel or the
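The qualitative GHE behavior described above (steady heat delivery, outlet temperature capped near the soil temperature, larger heat flux in colder weather) can be illustrated with a constant-wall-temperature pipe model. This NTU-style steady-state sketch and its parameter values are assumptions for illustration, not the paper's time-dependent model:

```python
import math

def ghe_outlet_temp(t_in, t_soil, ua, m_dot, cp=1005.0):
    """Steady-state outlet air temperature of a ground heat exchanger,
    modeled as a single pipe at constant wall (soil) temperature:
        T_out = T_soil + (T_in - T_soil) * exp(-UA / (m_dot * cp))
    with UA in W/K, m_dot in kg/s, cp in J/(kg K). Illustrative only."""
    return t_soil + (t_in - t_soil) * math.exp(-ua / (m_dot * cp))
```

For example, with soil at 8 °C the outlet temperature always lies between the inlet air temperature and 8 °C, and the heat delivered, m_dot·cp·(T_out − T_in), grows as the inlet air gets colder, matching the behavior reported in the abstract.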

  16. Crash Simulation of Roll Formed Parts by Damage Modelling Taking Into Account Preforming Effects

    Science.gov (United States)

    Till, Edwin T.; Hackl, Benjamin; Schauer, Hermann

    2011-08-01

Complex-phase steels with strength levels up to 1200 MPa are suitable for roll forming. These may be applied in automotive structures to enhance crashworthiness, e.g. as stiffeners in doors. Even though the strain hardening of the material is low, there is considerable bending formability; however, ductility decreases with the strength level. Higher strength requires more attention to the structural integrity of the part during the process-planning stage and with respect to the crash behavior. Nowadays, numerical simulation is used as a process design tool for roll forming in a production environment. The assessment of the stability of a roll-forming process is quite challenging for AHSS grades. The present work has two objectives: first, to provide the roll-forming analyst with a reliable assessment tool for failure prediction; second, to establish simulation procedures to predict the part's behavior in crash applications, taking damage and failure into account. Today, adequate ductile fracture models are available which can be used in forming and crash applications. These continuum models are based on failure strain curves or surfaces which depend on the stress triaxiality (e.g. Crach or GISSMO) and may additionally include the Lode angle (extended Mohr-Coulomb or extended GISSMO model). A challenging task is to obtain the respective failure strain curves. The paper describes in detail how these failure strain curves are obtained using small-scale tests within voestalpine Stahl: notch tensile, bulge and shear tests. It is shown that capturing the surface strains is not sufficient for obtaining reliable material failure parameters. The simulation tool for roll forming at the voestalpine Krems site is Copra® FEA RF, a 3D continuum finite-element solver based on MSC.Marc. The simulation environment for crash applications is LS-DYNA. Shell elements are used for this type of analysis. A major task is to provide results of

  17. The scattering polarization of the Ly-alpha lines of H I and He II taking into account PRD and J-state interference effects

    CERN Document Server

    Belluzzi, Luca; Stepan, Jiri

    2012-01-01

    Recent theoretical investigations have pointed out that the cores of the Ly-alpha lines of H I and He II should show measurable scattering polarization signals when observing the solar disk, and that the magnetic sensitivity, through the Hanle effect, of such linear polarization signals is suitable for exploring the magnetism of the solar transition region. Such investigations were carried out in the limit of complete frequency redistribution (CRD) and neglecting quantum interference between the two upper J-levels of each line. Here we relax both approximations and show that the joint action of partial frequency redistribution (PRD) and J-state interference produces much more complex fractional linear polarization (Q/I) profiles, with large amplitudes in their wings. Such wing polarization signals turn out to be very sensitive to the temperature structure of the atmospheric model, so that they can be exploited for constraining the thermal properties of the solar chromosphere. Finally, we show that the approxi...

  18. Do prevailing societal models influence reports of near-death experiences?: a comparison of accounts reported before and after 1975.

    Science.gov (United States)

    Athappilly, Geena K; Greyson, Bruce; Stevenson, Ian

    2006-03-01

    Transcendental near-death experiences show some cross-cultural variation that suggests they may be influenced by societal beliefs. The prevailing Western model of near-death experiences was defined by Moody's description of the phenomenon in 1975. To explore the influence of this cultural model, we compared near-death experience accounts collected before and after 1975. We compared the frequency of 15 phenomenological features Moody defined as characteristic of near-death experiences in 24 accounts collected before 1975 and in 24 more recent accounts matched on relevant demographic and situational variables. Near-death experience accounts collected after 1975 differed from those collected earlier only in increased frequency of tunnel phenomena, which other research has suggested may not be integral to the experience, and not in any of the remaining 14 features defined by Moody as characteristic of near-death experiences. These data challenge the hypothesis that near-death experience accounts are substantially influenced by prevailing cultural models.

  19. Radiation-hydrodynamical modelling of underluminous Type II plateau supernovae

    Science.gov (United States)

    Pumo, M. L.; Zampieri, L.; Spiro, S.; Pastorello, A.; Benetti, S.; Cappellaro, E.; Manicò, G.; Turatto, M.

    2017-01-01

    With the aim of improving our knowledge about the nature of the progenitors of low-luminosity Type II plateau supernovae (LL SNe IIP), we made radiation-hydrodynamical models of the well-sampled LL SNe IIP 2003Z, 2008bk and 2009md. For these three SNe, we infer explosion energies of 0.16-0.18 foe, radii at explosion of 1.8-3.5 × 10^13 cm and ejected masses of 10-11.3 M⊙. The estimated progenitor mass on the main sequence is in the range ˜13.2-15.1 M⊙ for SN 2003Z and ˜11.4-12.9 M⊙ for SNe 2008bk and 2009md, in agreement with estimates from observations of the progenitors. These results together with those for other LL SNe IIP modelled in the same way enable us also to conduct a comparative study on this SN sub-group. The results suggest that (a) the progenitors of faint SNe IIP are slightly less massive and have less energetic explosions than those of intermediate-luminosity SNe IIP; (b) both faint and intermediate-luminosity SNe IIP originate from low-energy explosions of red (or yellow) supergiant stars of low to intermediate mass; (c) some faint objects may also be explained as electron-capture SNe from massive super-asymptotic giant branch stars; and (d) LL SNe IIP form the underluminous tail of the SNe IIP family, where the main parameter `guiding' the distribution seems to be the ratio of the total explosion energy to the ejected mass. Further hydrodynamical studies should be performed and compared to a more extended sample of LL SNe IIP before drawing any conclusion on the relevance of fall-back to this class of events.

  20. 工业企业会计的集中核算模式%Centralized Accounting Model on Industrial Enterprises

    Institute of Scientific and Technical Information of China (English)

    李宇

    2012-01-01

    工业企业会计管理工作关系到企业的发展,当前的会计核算模式已经无法适应工业企业发展的需要。我们需要建立会计集中核算制度,提高工业企业会计管理水平,促使降低生产经营成本。涉及资金利用问题企业的可持续发展,资金利用率是会计工作的重要内容。工业企业需要实现从资金会计核算向会计监督管理转变,从根本上提升会计管理水平。%Accounting management in industrial enterprises is closely related to business development, and the current accounting model can no longer meet the needs of industrial enterprise development. We need to establish a centralized accounting system to improve the accounting management level of industrial enterprises and help lower production and operating costs. The utilization of funds bears on the sustainable development of industrial enterprises, and the capital utilization rate is an important part of accounting work. Industrial enterprises need to achieve a transformation from the accounting of funds to accounting supervision and management, fundamentally enhancing the accounting management level.

  1. THE MODEL OF MATERIALS AND STRUCTURES ENDURANCE, WITH TAKING INTO ACCOUNT THE EVOLUTION OF THEIR MECHANICAL CHARACTERISTICS

    Directory of Open Access Journals (Sweden)

    V. L. Gorobets

    2008-03-01

    Full Text Available The article presents a mathematical model describing how the endurance limit of railway rolling-stock materials and structures changes, taking into account the evolution of the durability-curve parameters during loading.

  2. Accounting for non-linear chemistry of ship plumes in the GEOS-Chem global chemistry transport model

    NARCIS (Netherlands)

    Vinken, G.C.M.; Boersma, K.F.; Jacob, D.J.; Meijer, E.W.

    2011-01-01

    We present a computationally efficient approach to account for the non-linear chemistry occurring during the dispersion of ship exhaust plumes in a global 3-D model of atmospheric chemistry (GEOS-Chem). We use a plume-in-grid formulation where ship emissions age chemically for 5 h before being relea

  3. Accounting for non-linear chemistry of ship plumes in the GEOS-Chem global chemistry transport model

    NARCIS (Netherlands)

    Meijer, E.W.; Vinken, G.C.M.; Boersma, K.F.; Jacob, D.J.

    2011-01-01

    Abstract. We present a computationally efficient approach to account for the non-linear chemistry occurring during the dispersion of ship exhaust plumes in a global 3-D model of atmospheric chemistry (GEOS-Chem). We use a plume-in-grid formulation where ship emissions age chemically for 5 h before be

  4. Closing the Gaps: Taking into Account the Effects of Heat Stress and Fatigue Modeling in an Operational Analysis

    NARCIS (Netherlands)

    Woodill, G.; Barbier, R.R.; Fiamingo, C.

    2010-01-01

    Traditional, combat model based analysis of Dismounted Combatant Operations (DCO) has focused on the ‘lethal’ aspects in an engagement, and to a limited extent the environment in which the engagement takes place. These are however only two of the factors that should be taken into account when conduc

  5. Structural equation models using partial least squares: an example of the application of SmartPLS® in accounting research

    Directory of Open Access Journals (Sweden)

    João Carlos Hipólito Bernardes do Nascimento

    2016-08-01

    Full Text Available In view of the Accounting academy's increasing interest in investigating latent phenomena, researchers have turned to robust multivariate techniques. Although Structural Equation Models are frequently used in the international literature, the Accounting academy has made little use of the variant based on Partial Least Squares (PLS-SEM), mostly due to lack of knowledge about the applicability and benefits of its use in Accounting research. While the PLS-SEM approach is regularly used in surveys, the method is also appropriate for modeling complex relations with multiple relationships of dependence and independence between latent variables, which makes it very useful for experiments and archival data. A literature review is presented of Accounting studies that used the PLS-SEM technique. Next, as no publications were found that exemplify the application of the technique in Accounting, a PLS-SEM application is developed using the software SmartPLS® to encourage exploratory research, which is particularly useful to graduate students. The main contribution of this article is therefore methodological, given its objective to clearly identify guidelines for the appropriate use of PLS. By presenting an example of how to conduct exploratory research using PLS-SEM, the article aims to enhance researchers' understanding of how to use and report the technique in their research.

  6. Current-account effects of a devaluation in an optimizing model with capital accumulation

    DEFF Research Database (Denmark)

    Nielsen, Søren Bo

    1991-01-01

    short, the devaluation is bound to improve the current account on impact, whereas this will deteriorate in the case of a long contract period, and the more so the smaller are adjustment costs in investment. In addition, we study the consequences for the terms of trade and for the stocks of foreign...

  7. A two-phase moisture transport model accounting for sorption hysteresis in layered porous building constructions

    DEFF Research Database (Denmark)

    Johannesson, Björn; Janz, Mårten

    2009-01-01

    , with account also to sorption hysteresis. The different materials in the considered layered construction are assigned different properties, i.e. vapor and liquid water diffusivities and boundary (wetting and drying) sorption curves. Further, the scanning behavior between wetting and drying boundary curves...

  8. Accounting Department Chairpersons' Perceptions of Business School Performance Using a Market Orientation Model

    Science.gov (United States)

    Webster, Robert L.; Hammond, Kevin L.; Rothwell, James C.

    2013-01-01

    This manuscript is part of a stream of continuing research examining market orientation within higher education and its potential impact on organizational performance. The organizations researched are business schools and the data collected came from chairpersons of accounting departments of AACSB member business schools. We use a reworded Narver…

  10. Longitudinal Stability of the Beck Depression Inventory II: A Latent Trait-State-Occasion Model

    Science.gov (United States)

    Wu, Pei-Chen

    2016-01-01

    In a six-wave longitudinal study with two cohorts (660 adolescents and 630 young adults), this study investigated the longitudinal stability of the Beck Depression Inventory II (BDI-II) using the Trait-State-Occasion (TSO) model. The results revealed that the full TSO model was the best fitting representation of the depression measured by the…

  11. Internet Accounting

    NARCIS (Netherlands)

    Pras, Aiko; Beijnum, van Bert-Jan; Sprenkels, Ron; Párhonyi, Robert

    2001-01-01

    This article provides an introduction to Internet accounting and discusses the status of related work within the IETF and IRTF, as well as certain research projects. Internet accounting is different from accounting in POTS. To understand Internet accounting, it is important to answer questions like

  12. Polarized light scanning cryomacroscopy, part II: Thermal modeling and analysis of experimental observations.

    Science.gov (United States)

    Feig, Justin S G; Solanki, Prem K; Eisenberg, David P; Rabin, Yoed

    2016-10-01

    This study aims at developing thermal analysis tools and explaining experimental observations made by means of polarized-light cryomacroscopy (Part I). Thermal modeling is based on finite elements analysis (FEA), where two model parameters are extracted from thermal measurements: (i) the overall heat transfer coefficient between the cuvette and the cooling chamber, and (ii) the effective thermal conductivity within the cryoprotective agent (CPA) at the upper part of the cryogenic temperature range. The effective thermal conductivity takes into account enhanced heat transfer due to convection currents within the CPA, creating the so-called Bénard cells. Comparison of experimental results with simulation data indicates that the uncertainty in simulations due to the propagation of uncertainty in measured physical properties exceeds the uncertainty in experimental measurements, which validates the modeling approach. It is shown in this study that while a cavity may form in the upper-center portion of the vitrified CPA, it has very little effect on estimating the temperature distribution within the domain. This cavity is driven by thermal contraction of the CPA, with the upper-center of the domain transitioning to glass last. Finally, it is demonstrated in this study that additional stresses may develop within the glass transition temperature range due to nonlinear behavior of the thermal expansion coefficient. This effect is reported here for the first time in the context of cryobiology, using the capabilities of polarized-light cryomacroscopy.

  13. A regional-scale, high resolution dynamical malaria model that accounts for population density, climate and surface hydrology.

    Science.gov (United States)

    Tompkins, Adrian M; Ermert, Volker

    2013-02-18

    The relative roles of climate variability and population related effects in malaria transmission could be better understood if regional-scale dynamical malaria models could account for these factors. A new dynamical community malaria model is introduced that accounts for the temperature and rainfall influences on the parasite and vector life cycles which are finely resolved in order to correctly represent the delay between the rains and the malaria season. The rainfall drives a simple but physically based representation of the surface hydrology. The model accounts for the population density in the calculation of daily biting rates. Model simulations of entomological inoculation rate and circumsporozoite protein rate compare well to data from field studies from a wide range of locations in West Africa that encompass both seasonal endemic and epidemic fringe areas. A focus on Bobo-Dioulasso shows the ability of the model to represent the differences in transmission rates between rural and peri-urban areas in addition to the seasonality of malaria. Fine spatial resolution regional integrations for Eastern Africa reproduce the malaria atlas project (MAP) spatial distribution of the parasite ratio, and integrations for West and Eastern Africa show that the model grossly reproduces the reduction in parasite ratio as a function of population density observed in a large number of field surveys, although it underestimates malaria prevalence at high densities probably due to the neglect of population migration. A new dynamical community malaria model is publicly available that accounts for climate and population density to simulate malaria transmission on a regional scale. The model structure facilitates future development to incorporate migration, immunity and interventions.
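The temperature dependence of the parasite life cycle that such models resolve is classically expressed as a degree-day relation. A sketch using the standard Detinova-type formulation (the 16 °C threshold and 111 degree-days for P. falciparum sporogony are commonly cited literature values, not necessarily the exact constants of this model):

```python
def sporogony_days(temp_c, threshold=16.0, degree_days=111.0):
    """Days for the malaria parasite to complete sporogony in the mosquito
    at a constant temperature, using a degree-day model:
    duration = DD / (T - T_min). Below the threshold, development halts."""
    if temp_c <= threshold:
        return float("inf")
    return degree_days / (temp_c - threshold)
```

Because warmer conditions shorten sporogony relative to the mosquito's lifespan, small temperature differences can shift the lag between the rains and the malaria season that the model is designed to capture.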

  14. Adaptation of an Electrochemistry-based Li-Ion Battery Model to Account for Deterioration Observed Under Randomized Use

    Science.gov (United States)

    2014-10-02

    Adaptation of an Electrochemistry-based Li-Ion Battery Model to Account for Deterioration Observed Under Randomized Use. Brian Bole, Chetan S... application's accuracy requirements and available resources (Daigle et al., 2011). In this paper, we use an electrochemistry-based lithium-ion (Li-ion)... the use of UKF not only to estimate the states in an electrochemistry model that vary over a charge-discharge cycle, but also to adapt certain

  15. Accounting Automation

    OpenAIRE

    Laynebaril1

    2017-01-01

    Accounting Automation. "Accounting Automation": Please respond to the following: Imagine you are a consultant hired to convert a manual accounting system to an automated system. Suggest the key advantages and disadvantages of automating a manual accounting system. Identify the most important step in the conversion process. Provide a rationale for your response. ...

  16. Physical and Theoretical Models of Heat Pollution Applied to Cramped Conditions Welding Taking into Account the Different Types of Heat

    Science.gov (United States)

    Bulygin, Y. I.; Koronchik, D. A.; Legkonogikh, A. N.; Zharkova, M. G.; Azimova, N. N.

    2017-05-01

    The standard k-epsilon turbulence model, adapted for welding workshops equipped with fixed workstations and sources of pollution, took into account only the convective component of heat transfer, which is quite reasonable for large-volume rooms with a low spatial density of pollution sources; indeed, the results of model calculations considering only the convective component correlated well with experimental data. For the purposes of this study, however, where we deal with a small confined space in which bodies heated to high temperature (during welding) and located next to each other act as additional heat sources, radiative heat exchange can no longer be neglected. The task is to investigate experimentally the various types of heat transfer in a limited closed welding space and the behavior of a mathematical model describing the contribution of the individual heat-exchange components, including radiation, which influence the formation of the fields of concentration, temperature, air movement and thermal stress in the test environment. Field experiments conducted on a model cubic body made it possible to configure and debug the heat and mass transfer model using the developed approaches; comparison of the measured air-flow velocities and temperatures with the calculated data showed qualitative and quantitative agreement between the process parameters, which indicates the adequacy of the heat and mass transfer model.

  17. Investigation of a new model accounting for rotors of finite tip-speed ratio in yaw or tilt

    DEFF Research Database (Denmark)

    Branlard, Emmanuel; Gaunaa, Mac; Machefaux, Ewan

    2014-01-01

    The main results from a recently developed vortex model are implemented into a Blade Element Momentum (BEM) code. This implementation accounts for the effect of finite tip-speed ratio, an effect which was not considered in standard BEM yaw-models. The model and its implementation are presented. Data from the MEXICO experiment are used as a basis for validation. Three tools using the same 2D airfoil coefficient data are compared: a BEM code, an Actuator-Line and a vortex code. The vortex code is further used to validate the results from the newly implemented BEM yaw-model. Significant improvements...

  18. Analytic Models for Radiation Induced Loss in Optical Fibers II. A Physical Model,

    Science.gov (United States)

    1984-06-01

    Keywords: optical fibers, analytical models, radiation effects. ...conditions specified in the derivation of the equations existed during the irradiations. This is because the functional form of the equations is not... ...tion is not necessarily incorrect. If one assumes a relatively simple form of recovery as a function of time, such as an exponential recovery, it can...

  19. On the treatment of evapotranspiration, soil moisture accounting, and aquifer recharge in monthly water balance models.

    Science.gov (United States)

    Alley, W.M.

    1984-01-01

    Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation.-from Author
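The Thomas abcd model mentioned above is usually written as a monthly update of soil moisture and groundwater storage. A minimal sketch of one time step, following the commonly presented form of the model (the parameter values are illustrative, not fitted to the New Jersey sites):

```python
import math


def abcd_step(P, PET, S_prev, G_prev, a=0.98, b=250.0, c=0.3, d=0.1):
    """One monthly step of the Thomas abcd water balance model.

    P, PET:         precipitation and potential evapotranspiration (mm/month)
    S_prev, G_prev: soil moisture and groundwater storages (mm)
    a, b, c, d:     the model's four parameters (illustrative values)
    Returns (streamflow, soil moisture, groundwater storage)."""
    W = P + S_prev                        # available water
    u = (W + b) / (2.0 * a)
    Y = u - math.sqrt(u * u - W * b / a)  # "evapotranspiration opportunity"
    S = Y * math.exp(-PET / b)            # end-of-month soil moisture
    avail = W - Y                         # water available for runoff/recharge
    G = (G_prev + c * avail) / (1.0 + d)  # groundwater storage after recharge
    Q = (1.0 - c) * avail + d * G         # direct runoff + baseflow
    return Q, S, G
```

The smooth partition Y(W) is what distinguishes the abcd model from threshold-bucket formulations such as the Thornthwaite-Mather variants compared in the abstract, and it is one reason the simulated state variables can differ so much between models even when streamflow errors are similar.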

  20. 会计监督的数学模型%The Math Model for the Accounting Supervision

    Institute of Scientific and Technical Information of China (English)

    韩英

    2001-01-01

    应用概率论和优化理论,对会计行为进行了系统的分析,建立了会计监督的数学模型,确定了会计单位和监督者的反应函数及最优行动选择,同时给出监管者对会计单位违规处罚的最低值。%By analyzing the profit distribution between managers and accounting units with probability and optimization theories, the reaction functions of managers and accounting units are presented, and a mathematical model of accounting supervision is established. The minimum value of the penalty imposed on accounting units for violations is determined.
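The abstract gives no equations, but the "minimum penalty" result it mentions can be read as a simple inspection game; the following is a purely illustrative sketch of that idea (the payoff structure and detection assumptions below are mine, not the paper's): if an accounting unit gains g from a violation, is audited with probability p, and detection upon audit is certain, the expected payoff of violating is g − p·F, so the smallest deterrent fine is F* = g/p.

```python
def minimum_penalty(gain, audit_prob):
    """Smallest fine F such that violating is not profitable in expectation:
    gain - audit_prob * F <= 0  =>  F >= gain / audit_prob."""
    if not 0.0 < audit_prob <= 1.0:
        raise ValueError("audit probability must lie in (0, 1]")
    return gain / audit_prob


def best_response(gain, audit_prob, fine):
    """Accounting unit's optimal action given the supervisor's policy."""
    return "comply" if gain - audit_prob * fine <= 0.0 else "violate"
```

For example, with a violation gain of 100 and an audit probability of 0.2, any fine below 500 leaves violation profitable in expectation.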

  1. A simple model to quantitatively account for periodic outbreaks of the measles in the Dutch Bible Belt

    Science.gov (United States)

    Bier, Martin; Brak, Bastiaan

    2015-04-01

    In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae to connect the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large scale underreporting of the disease must occur.
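A Kermack-McKendrick-style model of the kind this abstract describes can be sketched as a discrete-time SIR simulation with a small constant inflow of unvaccinated susceptibles (the birth cohort of the refusing community). All parameter values below are illustrative assumptions, not the authors' fitted values:

```python
def simulate_sir(S0, I0, beta, gamma, births_per_day, days):
    """Discrete-time (daily Euler) SIR with a constant inflow of new
    susceptibles, mimicking the unvaccinated community's birth cohort.

    beta:  transmission rate (per day)
    gamma: recovery rate (per day, i.e. 1 / infectious period)
    Returns the daily time series of infectives I(t)."""
    S, I, R = float(S0), float(I0), 0.0
    history = []
    for _ in range(days):
        N = S + I + R
        new_inf = beta * S * I / N   # mass-action incidence
        new_rec = gamma * I
        S += births_per_day - new_inf
        I += new_inf - new_rec
        R += new_rec
        history.append(I)
    return history
```

With beta/gamma > 1 the susceptible pool is depleted in a sharp outbreak, after which the birth inflow slowly rebuilds it, which is the mechanism behind the roughly twelve-year inter-outbreak period discussed above.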

  2. Accounting for Slipping and Other False Negatives in Logistic Models of Student Learning

    Science.gov (United States)

    MacLellan, Christopher J.; Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    Additive Factors Model (AFM) and Performance Factors Analysis (PFA) are two popular models of student learning that employ logistic regression to estimate parameters and predict performance. This is in contrast to Bayesian Knowledge Tracing (BKT) which uses a Hidden Markov Model formalism. While all three models tend to make similar predictions,…
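AFM's core prediction is a logistic function of student proficiency, per-skill easiness, and practice opportunities; the paper's contribution concerns slipping (false negatives), which the plain AFM below does not model. A minimal sketch of the standard AFM prediction step (parameter values hypothetical):

```python
import math


def afm_prob_correct(theta, kc_params, opportunities):
    """Additive Factors Model:
    P(correct) = sigmoid(theta + sum over KCs of (beta_k + gamma_k * T_k)).

    theta:         student proficiency
    kc_params:     {kc_name: (beta, gamma)} easiness and learning rate
                   per knowledge component (KC)
    opportunities: {kc_name: T_k} prior practice counts for the KCs
                   exercised by this item
    """
    logit = theta
    for kc, t in opportunities.items():
        beta, gamma = kc_params[kc]
        logit += beta + gamma * t
    return 1.0 / (1.0 + math.exp(-logit))
```

With a positive learning rate gamma, predicted success probability rises monotonically with practice, which is the learning-curve behavior these logistic models share.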

  3. Equity Valuation and Accounting Numbers: Applying Zhang (2000) and Zhang and Chen (2007) Models to the Brazilian Market

    Directory of Open Access Journals (Sweden)

    Fernando Caio Galdi

    2011-03-01

    Full Text Available This paper investigates how accounting variables explain cross-sectional stock returns in Brazilian capital markets. The analysis is based on the Zhang (2000) and Zhang and Chen (2007) models. These models predict that stock returns are a function of net income, change in profitability, invested capital, changes in growth opportunities and the discount rate. Generally, the empirical results for the Brazilian capital market are consistent with the theoretical relations the models describe, similarly to the results found in the US. Using different empirical tests (pooled regressions, Fama-MacBeth and panel data), the results and coefficients remain similar, which supports the robustness of our findings.
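Specifications of this kind are linear regressions of returns on accounting variables. A generic closed-form OLS sketch with a single regressor (the variable name is illustrative; the paper's actual specification uses several accounting variables and pooled/Fama-MacBeth estimators):

```python
def fit_returns_model(returns, accounting_var):
    """Simple OLS of stock returns on one accounting variable plus intercept:
    R_i = alpha + beta * x_i + e_i, solved in closed form from the
    sample covariance and variance."""
    n = len(returns)
    mx = sum(accounting_var) / n
    my = sum(returns) / n
    sxx = sum((x - mx) ** 2 for x in accounting_var)
    sxy = sum((x - mx) * (y - my) for x, y in zip(accounting_var, returns))
    beta = sxy / sxx                # slope on the accounting variable
    alpha = my - beta * mx          # intercept
    return alpha, beta
```

On exactly linear synthetic data the estimator recovers the generating coefficients, which is a convenient sanity check before running it on real firm-year panels.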

  4. Educational Accountability

    Science.gov (United States)

    Pincoffs, Edmund L.

    1973-01-01

    Discusses educational accountability as the paradigm of performance contracting, presents some arguments for and against accountability, and discusses the goals of education and the responsibility of the teacher. (Author/PG)

  5. Efficient modeling of sun/shade canopy radiation dynamics explicitly accounting for scattering

    Directory of Open Access Journals (Sweden)

    P. Bodin

    2012-04-01

    Full Text Available The separation of global radiation (Rg) into its direct (Rb) and diffuse (Rd) constituents is important when modeling plant photosynthesis because a high Rd:Rg ratio has been shown to enhance Gross Primary Production (GPP). To include this effect in vegetation models, the plant canopy must be separated into sunlit and shaded leaves. However, because such models are often too intractable and computationally expensive for theoretical or large-scale studies, simpler sun-shade approaches are often preferred. A widely used and computationally efficient sun-shade model was developed by Goudriaan (1977) (GOU). However, compared to more complex models, this model's realism is limited by its lack of explicit treatment of radiation scattering.

    Here we present a new model based on the GOU model, but which in contrast explicitly simulates radiation scattering by sunlit leaves and the absorption of this radiation by the canopy layers above and below (2-stream approach. Compared to the GOU model our model predicts significantly different profiles of scattered radiation that are in better agreement with measured profiles of downwelling diffuse radiation. With respect to these data our model's performance is equal to a more complex and much slower iterative radiation model while maintaining the simplicity and computational efficiency of the GOU model.
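The sunlit/shaded separation underlying both the GOU model and the 2-stream variant starts from Beer-law attenuation of the direct beam through cumulative leaf area. A minimal sketch of that first step, for the black-leaf case with scattering ignored (precisely the simplification the new model improves on; the extinction coefficient is an illustrative value):

```python
import math


def sunlit_fraction(lai_above, k_beam=0.5):
    """Fraction of leaves at cumulative leaf area index `lai_above` that
    still receive the direct beam (Beer's law, black leaves)."""
    return math.exp(-k_beam * lai_above)


def canopy_sunlit_lai(lai_total, k_beam=0.5, layers=100):
    """Total sunlit leaf area of the canopy, by midpoint integration of the
    sunlit fraction over canopy depth. Converges to the analytic result
    (1 - exp(-k * LAI)) / k as `layers` grows."""
    dl = lai_total / layers
    return sum(sunlit_fraction((i + 0.5) * dl, k_beam) * dl
               for i in range(layers))
```

Splitting the canopy this way is what keeps sun-shade schemes cheap: photosynthesis is evaluated once for the sunlit and once for the shaded pool instead of layer by layer.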

  6. Accounting outsourcing

    OpenAIRE

    Richtáriková, Paulína

    2012-01-01

    The thesis deals with accounting outsourcing and provides a comprehensive explanation of the topic. At first the thesis defines basic concepts (outsourcing, insourcing, offshoring and outplacement) and describes differences between the accounting outsourcing and outsourcing of other business activities. The emphasis is put on a decision whether or not to implement the accounting outsourcing. Thus the thesis describes main reasons why to implement the accounting outsourcing and risks that are ...

  7. An individual-based model of zebrafish population dynamics accounting for energy dynamics.

    Directory of Open Access Journals (Sweden)

    Rémy Beaudouin

    Full Text Available Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget (DEB) model for individual zebrafish was coupled to an individual-based model (IBM) of zebrafish population dynamics. Next, we fitted the DEB model to new experimental data on zebrafish growth and reproduction, thus improving existing models. We further analysed the DEB model and DEB-IBM using a sensitivity analysis. Finally, the predictions of the DEB-IBM were compared to existing observations on natural zebrafish populations, and the predicted population dynamics are realistic. While our zebrafish DEB-IBM can still be improved by acquiring new experimental data on the most uncertain processes (e.g. survival or feeding), it can already serve to predict the impact of compounds at the population level.

  8. A comparison of Graham and Piotroski investment models using accounting information and efficacy measurement

    OpenAIRE

    2016-01-01

    We examine the investment models of Benjamin Graham and Joseph Piotroski and compare their efficacy by running backtests, using screening rules and ranking systems built in Portfolio 123. Using different combinations of screening rules and ranking systems, we also examine the performance of the Piotroski and Graham investment models. We find that the combination of the Piotroski and Graham investment models performs better than the S&P 500. We also find that the Piotroski screening with ...

  9. Accounting outsourcing

    OpenAIRE

    Klečacká, Tereza

    2009-01-01

    This thesis gives a comprehensive view of accounting outsourcing, covering the outsourcing process from its beginning (conditions of collaboration, drafting of the contract), through the collaboration itself, to its possible ending. The work defines outsourcing, indicates its main advantages and disadvantages, and gives arguments for its use. The main focus of the thesis is the practical side of accounting outsourcing and the provision of high-quality accounting services.

  10. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards have witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed companie

  11. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...

  12. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures reasonably the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into in the Eurasian continent, that are more prominent in cold and warm seasons and account for much of Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. 
This suggests that, in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the model.

  13. THE CURRENT ACCOUNT DEFICIT AND THE FIXED EXCHANGE RATE. ADJUSTING MECHANISMS AND MODELS.

    Directory of Open Access Journals (Sweden)

    HATEGAN D.B. Anca

    2010-07-01

    The main purpose of this paper is to explain what measures can be taken in order to correct a trade deficit, and the pressure such measures place upon a country. International and national supply and demand conditions change rapidly, and if a country does not keep tight control over its deficit, many factors will affect its wellbeing. In order to reduce the external trade deficit, the government needs to resort to several techniques. The desired result is a balanced current account, and to that end the government is free to use measures such as fixing its exchange rate, reducing government spending, etc. We have shown that all these measures will have a certain impact upon an economy, by allowing its exports to thrive and eliminating the danger of excessive imports, or vice versa. The main conclusion of our paper is that government intervention is warranted in order to maintain the balance of the current account.

  14. Future Performance Trend Indicators: A Current Value Approach to Human Resources Accounting. Report II: Internal Consistencies and Relationships to Performance in Organization VI. Technical Report.

    Science.gov (United States)

    Pecorella, Patricia A.; Bowers, David G.

    Conventional accounting systems provide no indication as to what conditions and events lead to reported outcomes, since they traditionally do not include measurements of the human organization and its relationship to events at the outcome stage. Human resources accounting is used to measure these additional types of data. This research is…

  15. Nonlinear analysis of a new car-following model accounting for the global average optimal velocity difference

    Science.gov (United States)

    Peng, Guanghan; Lu, Weizhen; He, Hongdi

    2016-09-01

    In this paper, a new car-following model is proposed by considering the global average optimal velocity difference effect on the basis of the full velocity difference (FVD) model. We investigate the influence of the global average optimal velocity difference on the stability of traffic flow by making use of linear stability analysis. It indicates that the stable region will be enlarged by taking the global average optimal velocity difference effect into account. Subsequently, the mKdV equation near the critical point and its kink-antikink soliton solution, which can describe the traffic jam transition, are derived from nonlinear analysis. Furthermore, numerical simulations confirm that the effect of the global average optimal velocity difference can efficiently improve the stability of traffic flow, which shows that our new consideration should be taken into account to suppress traffic congestion in car-following theory.
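
    The stability claim can be illustrated with a minimal numerical sketch. The following is a generic optimal-velocity car-following simulation on a ring road, not the authors' FVD variant with the global average optimal velocity difference term; the OV function and all parameter values are illustrative.

```python
import numpy as np

def ov(h):
    # Optimal velocity function commonly used in car-following studies
    return np.tanh(h - 2.0) + np.tanh(2.0)

def simulate(n_cars=20, road_len=40.0, k=1.0, dt=0.05, steps=4000):
    """Euler integration of a basic optimal-velocity model on a ring road.
    Each car relaxes toward the optimal speed for its current headway."""
    x = np.linspace(0.0, road_len, n_cars, endpoint=False)  # positions
    v = np.zeros(n_cars)                                    # speeds
    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % road_len   # distance to the leader
        a = k * (ov(headway) - v)                   # relax toward optimal speed
        v += a * dt
        x = (x + v * dt) % road_len
    return v

v = simulate()
```

    With uniform initial spacing the fleet relaxes to the optimal speed V(2) = tanh(2); perturbing the initial positions and tracking the spread of speeds is the usual numerical test of the stable region.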

  16. Research on the Accounting Model of Circular Economy

    Institute of Scientific and Technical Information of China (English)

    王傲舒媞

    2015-01-01

    Starting from the characteristics of the circular economy and the significance of developing it, the author points out the limitations of the current accounting model for the development of the circular economy, and proposes an accounting model suited to circular-economy development, as a reference for the further construction of China's circular-economy accounting system.

  17. Studying Impact of Organizational Factors in Information Technology Acceptance in Accounting Occupation by Use of TAM Model (Iranian Case Study)

    OpenAIRE

    Akbar Allahyari; Morteza Ramazani

    2012-01-01

    Nowadays, information technology stands as a beneficial part of industry, the economy, and culture. Accounting is a profession that provides information for users' decision-making, and in a complex world organizations must use information technology to present information to users in time. This research studies the impact of organizational factors on information technology acceptance using the TAM model, with a descriptive survey method that the researcher has used to...

  18. Accounting for spatial effects in land use regression for urban air pollution modeling.

    Science.gov (United States)

    Bertazzon, Stefania; Johnson, Markey; Eccles, Kristin; Kaplan, Gilaad G

    2015-01-01

    In order to accurately assess air pollution risks, health studies require spatially resolved pollution concentrations. Land-use regression (LUR) models estimate ambient concentrations at a fine spatial scale. However, spatial effects such as spatial non-stationarity and spatial autocorrelation can reduce the accuracy of LUR estimates by increasing regression errors and uncertainty; and statistical methods for resolving these effects, such as spatially autoregressive (SAR) and geographically weighted regression (GWR) models, may be difficult to apply simultaneously. We used an alternate approach to address spatial non-stationarity and spatial autocorrelation in LUR models for nitrogen dioxide. Traditional models were re-specified to include a variable capturing wind speed and direction, and re-fit as GWR models. Mean R² values for the resulting GWR-wind models (summer: 0.86, winter: 0.73) showed a 10-20% improvement over traditional LUR models. GWR-wind models effectively addressed both spatial effects and produced meaningful predictive models. These results suggest a useful method for improving spatially explicit models.
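
    A minimal sketch of the regression step: the code below fits a toy LUR with and without a wind covariate on synthetic data (all variables and coefficients are invented, not the study's), showing how adding the wind term raises R².

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
traffic = rng.uniform(0, 1, n)   # e.g. road density near each monitor
wind = rng.uniform(0, 1, n)      # wind-exposure covariate (direction-weighted speed)
no2 = 10 + 8 * traffic - 5 * wind + rng.normal(0, 1, n)   # synthetic NO2

def fit_ols(X, y):
    """Ordinary least squares with intercept; returns coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def r_squared(X, y, beta):
    X1 = np.column_stack([np.ones(len(y)), X])
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

base = fit_ols(traffic[:, None], no2)                       # traffic only
wind_model = fit_ols(np.column_stack([traffic, wind]), no2)  # traffic + wind
r2_base = r_squared(traffic[:, None], no2, base)
r2_wind = r_squared(np.column_stack([traffic, wind]), no2, wind_model)
```

    The nested comparison mirrors the paper's finding in miniature: the wind-augmented model explains more variance than the traffic-only baseline.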

  19. JOMAR - A model for accounting the environmental loads from building constructions

    Energy Technology Data Exchange (ETDEWEB)

    Roenning, Anne; Nereng, Guro; Vold, Mie; Bjoerberg, Svein; Lassen, Niels

    2008-07-01

    The objective of this project was to develop a model as a basis for calculating the environmental profile of whole building constructions, based upon data from databases and general LCA software, in addition to the model structure from the Nordic project on LCC assessment of buildings. The model has been tested on three building constructions: timber-based; flexible and heavy; and heavy. Total energy consumption and emissions contributing to climate change are calculated in a total life cycle perspective. The developed model and the exemplifying case assessments have shown that a holistic model including the operation phase is both important and possible to implement. The project has shown that the operation phase causes the highest environmental loads for the exemplified impact categories. A suggestion for further development of the model along two different axes, in collaboration with broader representation from the building sector, is given in the report.

  20. Modelling reverse characteristics of power LEDs with thermal phenomena taken into account

    Science.gov (United States)

    Ptak, Przemysław; Górecki, Krzysztof

    2016-01-01

    This paper deals with modelling the characteristics of power LEDs, with particular reference to thermal phenomena. Special attention is paid to modelling the characteristics of the circuit protecting the considered device against excessive reverse voltage, and to describing the influence of temperature on optical power. The network form of the developed model is presented, and results of experimental verification of this model for selected diodes operating under different cooling conditions are described. Very good agreement between the calculated and measured characteristics is obtained.

  1. Mathematical modeling taking into account of intrinsic kinetic properties of cylinder-type vanadium catalyst

    Institute of Scientific and Technical Information of China (English)

    陈振兴; 李洪桂; 王零森

    2004-01-01

    A method to calculate the internal surface effectiveness factor of the cylinder-type vanadium catalyst Ls-9 is given. Based on the hypothesis of one-dimensional diffusion, and combining a shape adjustment factor with a three-step catalytic mechanism model, the macroscopic kinetic model equation for SO2 oxidation on Ls-9 was deduced. With a fixed-bed integral reactor, under conditions of temperature 350-410 °C, space velocity 1800-5000 h⁻¹, and SO2 inlet content 7%-12%, the macroscopic kinetic data were measured. Through model parameter estimation, the macroscopic kinetic model equation was obtained.
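
    The "internal surface effectiveness factor" idea can be illustrated with the classic Thiele-modulus relation for a first-order reaction. The sketch below uses the textbook slab formula η = tanh(φ)/φ rather than the paper's cylinder-specific derivation, so it is an analogy, not the authors' method.

```python
import numpy as np

def effectiveness_slab(phi):
    """Internal effectiveness factor for a first-order reaction in a slab,
    eta = tanh(phi)/phi, with phi the Thiele modulus. Small phi: reaction-
    limited, eta -> 1; large phi: diffusion-limited, eta -> 1/phi."""
    phi = np.asarray(phi, dtype=float)
    return np.where(phi < 1e-8, 1.0, np.tanh(phi) / np.maximum(phi, 1e-8))

# Effectiveness across the reaction/diffusion-limited regimes
etas = effectiveness_slab(np.array([0.1, 1.0, 10.0]))
```

    For a finite cylinder the same limits hold, but the profile involves Bessel functions and, as in the paper, a shape adjustment factor.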

  2. A hybrid mode choice model to account for the dynamic effect of inertia over time

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Börjesson, Maria; Bierlaire, Michel

    The influence of habits, giving rise to an inertia effect in the choice process, has been intensely debated in the literature. Typically inertia is accounted for by letting the indirect utility functions of the alternatives of the choice situation at time t depend on the outcome of the choice made at a previous point in time. However, according to the psychological literature, inertia is the result of a habit, which is formed in a longer process where many past decisions (not only the immediately previous one) remain in the memory of the consumer and influence behavior. In this study we use panel data gathered over a continuous period of time, six weeks, to study both inertia and the influence of habits. Tendency to stick with the same alternative is measured through lagged variables that link the current choice with the previous trip made with the same purpose, mode and time of day. However, the lagged...

  3. Modeling the pulse shape of Q-switched lasers to account for terminal-level relaxation

    Institute of Scientific and Technical Information of China (English)

    Zeng Qin-Yong; Wan Yong; Xiong Ji-Chuan; Zhu Da-Yong

    2011-01-01

    To account for the effect of lower-level relaxation, we have derived a characteristic equation describing the laser pulse from the modified rate equations for Q-switched lasers. The pulse temporal profile is related to the ratio of the lower-level lifetime to the cavity lifetime and to the number of times the population inversion density is above the threshold. By solving the coupled rate equations numerically, the effect of the terminal-level lifetime on the pulse temporal behaviour is analysed. The model is applied to the case of a diode-pumped Nd:YAG laser that is passively Q-switched by a Cr4+:YAG absorber. Theoretical results show good agreement with the experiments.
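
    The unmodified rate equations the paper starts from can be integrated directly. The sketch below uses the standard dimensionless point model in the fast-relaxation limit (no terminal-level term), with illustrative initial conditions; the paper's modification adds a lower-level population equation on top of this.

```python
def q_switched_pulse(n0=3.0, phi0=1e-6, dt=1e-4, t_max=20.0):
    """Dimensionless Q-switched pulse (time in units of the cavity lifetime,
    inversion n normalized to threshold):
        dphi/dt = phi * (n - 1)      # gain minus cavity loss
        dn/dt   = -phi * n           # inversion depleted by stimulated emission
    Returns the peak photon density and the residual inversion."""
    n, phi = n0, phi0
    peak = phi0
    for _ in range(int(t_max / dt)):
        dphi = phi * (n - 1.0)
        dn = -phi * n
        phi += dphi * dt
        n += dn * dt
        peak = max(peak, phi)
    return peak, n

peak, n_final = q_switched_pulse()
```

    For n0 = 3 the peak agrees with the analytic value phi_peak = (n0 - 1) - ln(n0) ≈ 0.90, and the residual inversion satisfies n0 - n_final = ln(n0/n_final), the usual energy-extraction relation.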

  4. Microsimulation Model Estimating Czech Farm Income from Farm Accountancy Data Network Database

    Directory of Open Access Journals (Sweden)

    Z. Hloušková

    2014-09-01

    Agricultural income is one of the most important measures of the economic status of agricultural farms and of the whole agricultural sector. This work is focused on finding the optimal method of estimating national agricultural income from the micro-economic database managed by the Farm Accountancy Data Network (FADN). Use of the FADN database is relevant due to the representativeness of the results for the whole country and the opportunity to carry out micro-level analysis. The main motivation for this study was a first forecast of national agricultural income from FADN data undertaken 9 months before the final official FADN results were published. Our own method of income estimation and the simulation procedure were established and successfully tested on the whole database using data from the two preceding years. The present paper also describes the method used for agricultural income prediction and tests of its suitability.
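
    A sketch of the estimation idea, assuming the usual FADN design in which each sample farm carries a weight equal to the number of population farms it represents; the data, weights, and interval procedure below are synthetic and illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical FADN-style sample: each surveyed farm represents several
# population farms, encoded as a sampling weight.
n_sample = 500
weights = rng.integers(5, 50, n_sample).astype(float)
income = rng.normal(30_000, 10_000, n_sample)   # farm net income, EUR

def national_income_estimate(income, weights):
    """Weighted (Horvitz-Thompson style) total: sum of w_i * y_i."""
    return float(np.sum(weights * income))

def bootstrap_ci(income, weights, n_boot=1000, alpha=0.1):
    """Simple bootstrap interval for the weighted total, resampling farms."""
    idx = rng.integers(0, len(income), size=(n_boot, len(income)))
    totals = np.sum(weights[idx] * income[idx], axis=1)
    lo, hi = np.quantile(totals, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

est = national_income_estimate(income, weights)
lo, hi = bootstrap_ci(income, weights)
```

    The bootstrap interval gives a rough sense of the forecast uncertainty one would attach to an early estimate made before official results appear.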

  5. QGSjet II and EPOS hadronic interaction models: comparison with the Yakutsk EAS array data

    Energy Technology Data Exchange (ETDEWEB)

    Knurenko, S.P.; Sabourov, A.V. [Yu. G. Shafer Institute for Cosmophysical Research and Aeronomy (Russian Federation)

    2009-12-15

    Various hadronic interaction models have been used in extensive air shower simulations, resulting in ambiguous estimates of primary energy, cosmic ray flux intensity, mass composition, etc. Several revisions of the models have been made recently; for example, the third major version of the QGSjet II model (QGSjet II-03) was released, and new models based on actual accelerator data appeared (EPOS). Employment of newer models always leads to a new understanding of experimental results. Nevertheless, in this case there is still some ambiguity: it is a matter of how correctly the model extrapolates the characteristics of primary-particle interactions with air nuclei from high to ultra-high energies.

  6. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility, leading to a higher amount of traffic-related activity on a global scale. ...

  7. Accounting for correlated observations in an age-based state-space stock assessment model

    DEFF Research Database (Denmark)

    Berg, Casper Willestofte; Nielsen, Anders

    2016-01-01

    Fish stock assessment models often rely on size- or age-specific observations that are assumed to be statistically independent of each other. In reality, these observations are not raw observations, but rather they are estimates from a catch-standardization model or similar summary statistics based...

  9. Value-Added Models of Assessment: Implications for Motivation and Accountability

    Science.gov (United States)

    Anderman, Eric M.; Anderman, Lynley H.; Yough, Michael S.; Gimbert, Belinda G.

    2010-01-01

    In this article, we examine the relations of value-added models of measuring academic achievement to student motivation. Using an achievement goal orientation theory perspective, we argue that value-added models, which focus on the progress of individual students over time, are more closely aligned with research on student motivation than are more…

  10. Demand model for production of an enterprise taking into account factor of consumer expectations

    Directory of Open Access Journals (Sweden)

    L.V. Potrashkova

    2012-12-01

    This article presents a dynamic mathematical model of demand for an enterprise's innovative and non-innovative products. The model makes it possible to estimate future demand as a function of consumer expectations of product quality. Consumer expectations are considered as the resource component of the enterprise's marketing potential.
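
    One common way to formalize the expectations mechanism is an adaptive-expectations recursion; the sketch below is illustrative and is not the paper's model (all parameter names and values are invented).

```python
def demand_path(quality, q_exp0=0.5, adapt=0.3, base=100.0, sens=50.0):
    """Adaptive-expectations demand sketch: consumers update their expected
    product quality toward observed quality, and demand follows expectations.
        q_exp[t+1] = q_exp[t] + adapt * (q[t] - q_exp[t])
        demand[t]  = base + sens * q_exp[t+1]
    """
    q_exp = q_exp0
    path = []
    for q in quality:
        q_exp += adapt * (q - q_exp)       # expectation update
        path.append(base + sens * q_exp)   # demand responds to expectations
    return path

# Quality jumps from 0.5 to 0.9: demand rises gradually, not instantly,
# because expectations lag the observed improvement.
path = demand_path([0.5] * 5 + [0.9] * 20)
```

    The lag between a quality improvement and the demand response is exactly the kind of dynamics the abstract attributes to consumer expectations.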

  11. Modelling of L-valine Repeated Fed-batch Fermentation Process Taking into Account the Dissolved Oxygen Tension

    Directory of Open Access Journals (Sweden)

    Tzanko Georgiev

    2009-03-01

    This article deals with the synthesis of a dynamic unstructured model of a variable-volume fed-batch fermentation process with intensive droppings for L-valine production. The presented approach includes the following main procedures: description of the process by generalized stoichiometric equations; preliminary data processing and calculation of the specific rates of the main kinetic variables; identification of the specific rates taking into account the dissolved oxygen tension; establishment and optimisation of a dynamic model of the process; and simulation studies. MATLAB is used as the research environment.

  12. Computational evaluation of unsaturated carbonitriles as neutral receptor model for beryllium(II) recognition.

    Science.gov (United States)

    Rosli, Ahmad Nazmi; Ahmad, Mohd Rais; Alias, Yatimah; Zain, Sharifuddin Md; Lee, Vannajan Sanghiran; Woi, Pei Meng

    2014-12-01

    Design of neutral receptor molecules (ionophores) for beryllium(II) using unsaturated carbonitrile models has been carried out via density functional theory, G3, and G4 calculations. The first part of this work focuses on gas-phase binding energies between beryllium(II) and 2-cyano butadiene (2-CN BD), 3-cyano propene (3-CN P), and simpler models with two separate fragments, acrylonitrile and ethylene. Interactions between beryllium(II) and the cyano nitrogen and terminal olefin in the models have been examined in terms of geometrical changes, distribution of charge over the entire π-system, and rehybridization of vinyl carbon orbitals. NMR shieldings and vibrational frequencies probed charge centers and strength of interactions. The six-membered cyclic complexes have planar structures with the rehybridized carbon slightly out of plane (16° in 2-CN BD). G3 results show that in the 2-CN BD complex participation of the vinyl carbon further stabilizes the cyclic adduct by 16.3 kcal mol⁻¹, whereas, in the simpler models, interaction between beryllium(II) and acrylonitrile is favorable by 46.4 kcal mol⁻¹ compared with that of ethylene. The terminal vinyl carbon in 2-CN BD rehybridizes to sp³ with a 7% increase in s character to allow interaction with beryllium(II). G4 calculations show that the Be(II)-2-CN BD complex is more strongly bound than those with Mg(II) and Ca(II) by 98.5 and 139.2 kcal mol⁻¹, respectively. The QST2 method shows that the cyclic and acyclic forms of the Be(II)-2-CN BD complex are separated by a 12.3 kcal mol⁻¹ barrier. Overlap population analysis reveals that Ca(II) can be discriminated based on its tendency to form ionic interactions with the receptor models.

  13. Modeling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    Directory of Open Access Journals (Sweden)

    C. Evenhuis

    2014-01-01

    Coral reefs are diverse ecosystems threatened by rising CO2 levels that are driving the observed increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking the rates of growth, recovery and calcification to the rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, resource allocation trade-offs, and adaptation to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The different characteristics of this model are also assessed against independent data to show that the model captures the observed response of corals. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles for understanding the observed global distribution of calcification rates. Our results suggest that, despite the inherent complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can reproduce much of the observed response of corals to changes in temperature and ocean acidification.
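
    The Arrhenius-plus-thermal-specialisation idea can be sketched with a Sharpe-Schoolfield-style rate curve: an Arrhenius rise cut off by high-temperature deactivation. The functional form is a common choice in thermal biology; the parameter values below are illustrative, not fitted coral values.

```python
import math

K = 8.617e-5  # Boltzmann constant, eV/K

def thermal_performance(t_celsius, e_a=0.6, e_h=3.0, t_peak=301.0):
    """Arrhenius rise (activation energy e_a, normalized to 1 at 20 C)
    multiplied by a high-temperature deactivation term (e_h, with the
    deactivation midpoint t_peak in kelvin)."""
    t = t_celsius + 273.15
    rise = math.exp(-e_a / K * (1.0 / t - 1.0 / 293.15))
    fall = 1.0 / (1.0 + math.exp(e_h / K * (1.0 / t_peak - 1.0 / t)))
    return rise * fall

# Performance rises toward an optimum, then collapses under heat stress
rates = {c: thermal_performance(c) for c in (20, 26, 30, 34)}
```

    The hump-shaped response (faster growth up to an optimum, then sharp decline) is the qualitative behaviour the model uses to connect warming to bleaching-driven mortality.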

  14. Modeling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    Science.gov (United States)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2014-01-01

    Coral reefs are diverse ecosystems threatened by rising CO2 levels that are driving the observed increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking the rates of growth, recovery and calcification to the rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, resource allocation trade-offs, and adaptation to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The different characteristics of this model are also assessed against independent data to show that the model captures the observed response of corals. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles for understanding the observed global distribution of calcification rates. Our results suggest that, despite the inherent complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can reproduce much of the observed response of corals to changes in temperature and ocean acidification.

  15. Reduced models accounting for parallel magnetic perturbations: gyrofluid and finite Larmor radius-Landau fluid approaches

    Science.gov (United States)

    Tassi, E.; Sulem, P. L.; Passot, T.

    2016-12-01

    Reduced models are derived for a strongly magnetized collisionless plasma at scales which are large relative to the electron thermal gyroradius, in two asymptotic regimes. One corresponds to cold ions and the other to far sub-ion scales. By including the electron pressure dynamics, these models improve on Hall reduced magnetohydrodynamics (MHD) and on the kinetic Alfvén wave model of Boldyrev et al. (Astrophys. J., vol. 777, 2013, p. 41), respectively. We show that the two models can be obtained either within the gyrofluid formalism of Brizard (Phys. Fluids, vol. 4, 1992, pp. 1213-1228) or as suitable weakly nonlinear limits of the finite Larmor radius (FLR)-Landau fluid model of Sulem and Passot (J. Plasma Phys., vol. 81, 2015, 325810103), which extends anisotropic Hall MHD by retaining low-frequency kinetic effects. It is noticeable that, at the far sub-ion scales, the simplifications originating from the gyroaveraging operators in the gyrofluid formalism, which lead to subdominant ion velocity and temperature fluctuations, correspond, at the level of the FLR-Landau fluid, to cancellations between hydrodynamic contributions and ion finite Larmor radius corrections. Energy conservation properties of the models are discussed and an explicit example of a closure relation leading to a model with a Hamiltonian structure is provided.

  16. NSLS-II: Nonlinear Model Calibration for Synchrotrons

    Energy Technology Data Exchange (ETDEWEB)

    Bengtsson, J.

    2010-10-08

    This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods in the field of particle accelerators has been naive and misleading, i.e., it ignores the impact of noise, we will elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ≈1 × 10⁻⁵ for 1024 turns (to calibrate the linear optics) and ≈1 × 10⁻⁴ for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam at NSLS-II is ≈0.1, and the transverse damping time is ≈20 ms, i.e., ≈4,000 turns; there is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy of these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement. For e.g. N = 256 and 5% noise we obtain δν ≈ 1 × 10⁻⁵. A comparison with the state of the art in e.g. telecom and electrical engineering since the 60s is quite revealing: for example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s, or Claude Shannon et al
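
    The turn-by-turn frequency-estimation step can be sketched as follows: window the BPM signal, take an FFT, and refine the peak by parabolic interpolation. This is a basic stand-in for the note's signal-processing machinery; the tune value and noise level are illustrative.

```python
import numpy as np

def estimate_tune(x):
    """Estimate a betatron tune from turn-by-turn data: Hann window, FFT
    magnitude, then parabolic interpolation around the spectral peak."""
    n = len(x)
    w = np.hanning(n)
    mag = np.abs(np.fft.rfft((x - x.mean()) * w))
    k = int(np.argmax(mag[1:-1])) + 1          # skip DC and Nyquist bins
    a, b, c = mag[k - 1], mag[k], mag[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)    # sub-bin peak correction
    return (k + delta) / n

rng = np.random.default_rng(1)
nu_true = 0.2345
turns = np.arange(1024)
x = np.cos(2 * np.pi * nu_true * turns) + rng.normal(0, 0.05, 1024)
nu_est = estimate_tune(x)
```

    On this synthetic signal the estimate lands well inside one FFT bin (1/1024 ≈ 10⁻³); reaching the quoted ≈10⁻⁵ accuracy requires more refined estimators and explicit treatment of the noise, which is the note's point.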

  17. An individual-based model of Zebrafish population dynamics accounting for energy dynamics

    DEFF Research Database (Denmark)

    Beaudouin, Remy; Goussen, Benoit; Piccini, Benjamin

    2015-01-01

    Developing population dynamics models for zebrafish is crucial in order to extrapolate from toxicity data measured at the organism level to biological levels relevant to support and enhance ecological risk assessment. To achieve this, a dynamic energy budget for individual zebrafish (DEB model......, the predictions of the DEB-IBM were compared to existing observations on natural zebrafish populations and the predicted population dynamics are realistic. While our zebrafish DEB-IBM model can still be improved by acquiring new experimental data on the most uncertain processes (e.g. survival or feeding), it can...

  18. Mathematical modelling of complex equilibria taking into account experimental data on activities of components

    Energy Technology Data Exchange (ETDEWEB)

    Nikolaeva, L.S.; Evseev, A.M.; Rozen, A.M.; Bobytev, A.P.; Kir' yanov, Yu.A. (Moskovskij Gosudarstvennyj Univ. (USSR))

    1981-09-01

    Extraction systems whose application is possible in the reprocessing of irradiated nuclear fuels are considered. It is shown that selecting the component activities as the observed properties (responses) of the system makes it possible to model the equilibria using regression analysis methods. A mathematical model of nitric acid extraction with tributyl phosphate is presented. The data on the composition of the complexes in the system studied, obtained by the method of mathematical modelling, are confirmed by a study of the IR spectra of the extracts.

  19. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    Science.gov (United States)

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of a dual-system theory for the decision-making process. This theory holds that the process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences, while goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects several kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
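
    The model-free (habitual) half of the dual-system account can be sketched with a basic Q-learner on a two-choice probabilistic task. The task probabilities, learning rule, and softmax parameters below are illustrative, not the paper's implementation.

```python
import math
import random

def run_task(p_reward=(0.8, 0.2), alpha=0.3, beta=3.0, n_trials=2000, seed=7):
    """Model-free learner: Q-values updated by trial-and-error prediction
    errors, actions chosen by a softmax over the current Q-values."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    picks = [0, 0]
    total_reward = 0.0
    for _ in range(n_trials):
        z0 = math.exp(beta * q[0])
        z1 = math.exp(beta * q[1])
        a = 0 if rng.random() < z0 / (z0 + z1) else 1   # softmax choice
        r = 1.0 if rng.random() < p_reward[a] else 0.0  # probabilistic reward
        q[a] += alpha * (r - q[a])                      # prediction-error update
        picks[a] += 1
        total_reward += r
    return q, picks, total_reward / n_trials

q, picks, reward_rate = run_task()
```

    A model-based controller would instead plan over an explicit task model; in the paper the two controllers coexist and a monitoring system arbitrates between them.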

  20. An agent-based simulation model of patient choice of health care providers in accountable care organizations.

    Science.gov (United States)

    Alibrahim, Abdullah; Wu, Shinyi

    2016-10-04

    Accountable care organizations (ACO) in the United States show promise in controlling health care costs while preserving patients' choice of providers. Understanding the effects of patient choice is critical in novel payment and delivery models like ACO that depend on continuity of care and accountability. The financial, utilization, and behavioral implications associated with a patient's decision to forego local health care providers for more distant ones to access higher quality care remain unknown. To study this question, we used an agent-based simulation model of a health care market composed of providers able to form ACO serving patients and embedded it in a conditional logit decision model to examine patients capable of choosing their care providers. This simulation focuses on Medicare beneficiaries and their congestive heart failure (CHF) outcomes. We place the patient agents in an ACO delivery system model in which provider agents decide if they remain in an ACO and perform a quality improving CHF disease management intervention. Illustrative results show that allowing patients to choose their providers reduces the yearly payment per CHF patient by $320, reduces mortality rates by 0.12 percentage points and hospitalization rates by 0.44 percentage points, and marginally increases provider participation in ACO. This study demonstrates a model capable of quantifying the effects of patient choice in a theoretical ACO system and provides a potential tool for policymakers to understand implications of patient choice and assess potential policy controls.
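
    The patient-choice step of such a simulation can be sketched with a standard conditional logit. The utility specification and coefficients below are illustrative, not the paper's estimates; they only show how distance and quality trade off in the choice probabilities.

```python
import math

def choice_probabilities(utilities):
    """Conditional logit: P(choose j) = exp(V_j) / sum_k exp(V_k)."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]   # subtract max for stability
    s = sum(exps)
    return [e / s for e in exps]

def utility(distance_km, quality, beta_dist=-0.05, beta_qual=2.0):
    # Hypothetical linear utility: patients dislike distance, value quality
    return beta_dist * distance_km + beta_qual * quality

# A nearby average provider vs. a distant high-quality one
providers = [utility(5, 0.6), utility(60, 0.9)]
probs = choice_probabilities(providers)
```

    With these toy coefficients the nearby provider still wins most of the time; shifting beta_qual upward is how one would represent patients foregoing local care for distant, higher-quality providers.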

  1. Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.

    Science.gov (United States)

    Technology Management Corp., Alexandria, VA.

    A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…

  2. Making Collaborative Innovation Accountable

    DEFF Research Database (Denmark)

    Sørensen, Eva

    The public sector is increasingly expected to be innovative, but the price of a more innovative public sector might be that it becomes difficult to hold public authorities to account for their actions. The article explores the tensions between innovative and accountable governance, describes the foundation for these tensions in different accountability models, and suggests directions to take in analyzing the accountability of collaborative innovation processes.

  3. Accounting for scattering in the Landauer-Datta-Lundstrom transport model

    Directory of Open Access Journals (Sweden)

    Юрій Олексійович Кругляк

    2015-03-01

    Scattering of carriers in the LDL transport model, as the scattering times in the collision processes change, is considered qualitatively. The basic relationship between the transmission coefficient T and the average mean free path λ is derived for a 1D conductor. As an example, experimental data for a Si MOSFET are analyzed with the use of various models of reliability.
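
    The relationship the abstract refers to is the standard Landauer-Datta-Lundstrom result T = λ/(λ + L); a few lines make the ballistic and diffusive limits explicit.

```python
def transmission(length_nm, mfp_nm):
    """LDL transmission coefficient for a 1D conductor:
    T = lambda / (lambda + L), with lambda the mean free path
    and L the conductor length."""
    return mfp_nm / (mfp_nm + length_nm)

# Ballistic limit (L << lambda): T -> 1
t_ballistic = transmission(1.0, 100.0)
# Diffusive limit (L >> lambda): T -> lambda / L
t_diffusive = transmission(1000.0, 100.0)
```

    The same expression underlies the interpolation between ballistic and diffusive conductance in the LDL picture.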

  4. A retinal circuit model accounting for wide-field amacrine cells

    OpenAIRE

    SAĞLAM, Murat; Hayashida, Yuki; Murayama, Nobuki

    2008-01-01

    In previous experimental studies on visual processing in vertebrates, higher-order visual functions such as object segregation from background were found even at the retinal stage. Previously, “linear–nonlinear” (LN) cascade models had been applied to the retinal circuit and succeeded in describing the input-output dynamics for certain parts of the circuit, e.g., the receptive field of the outer retinal neurons. Recently, some abstract models composed of LN cascades as the cir...

  5. Loading Processes Dynamics Modelling Taking into Account the Bucket-Soil Interaction

    Directory of Open Access Journals (Sweden)

    Carmen Debeleac

    2007-10-01

    Full Text Available The author proposes three dynamic models for analyzing the vibrations and resistive forces that arise during loading with construction equipment such as front loaders and excavators. The models put into evidence the components of digging: penetration, cutting, and loading. The study concludes by identifying the dynamic overloads that appear in the working state and induce self-oscillations in the equipment structure.

  6. An extended continuum model accounting for the driver's timid and aggressive attributions

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Rongjun; Ge, Hongxia [Faculty of Maritime and Transportation, Ningbo University, Ningbo 315211 (China); Jiangsu Province Collaborative Innovation Center for Modern Urban Traffic Technologies, Nanjing 210096 (China); National Traffic Management Engineering and Technology Research Centre Ningbo University Sub-centre, Ningbo 315211 (China); Wang, Jufeng, E-mail: wjf@nit.zju.edu.cn [Ningbo Institute of Technology, Zhejiang University, Ningbo 315100 (China)

    2017-04-18

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. Applying linear stability theory, we present the linear stability analysis of the new model. Through nonlinear analysis, the KdV–Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving is better than timid driving because the aggressive driver adjusts his speed in time according to the leading car's speed. The key improvement of this new model is that it captures how timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship of energy consumption between aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption. - Highlights: • A new continuum model is developed with the consideration of the driver's timid and aggressive behaviors simultaneously. • Applying linear stability theory, the new model's linear stability is obtained. • Through nonlinear analysis, the KdV–Burgers equation is derived. • The energy consumption for this model is studied.
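The density-wave dynamics near the neutral stability line can be illustrated numerically. The sketch below integrates a viscous Burgers equation, i.e., the KdV–Burgers equation of the abstract with its third-order dispersive term dropped, using a simple explicit finite-difference scheme on a periodic domain; grid, viscosity, and initial perturbation are arbitrary assumptions, not the paper's calibrated traffic parameters.

```python
import math

def burgers_step(u, dx, dt, nu):
    """One explicit FTCS step of u_t + u u_x = nu u_xx with periodic
    boundaries (central differences). The KdV-Burgers equation adds a
    third-order dispersive term, omitted here for brevity."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]
        adv = u[i] * (up - um) / (2 * dx)           # nonlinear advection
        diff = nu * (up - 2 * u[i] + um) / dx ** 2  # diffusion (viscosity)
        new[i] = u[i] + dt * (diff - adv)
    return new

nx, dx, dt, nu = 100, 0.1, 0.005, 0.05
# small sinusoidal perturbation on a uniform state, like a density wave
u = [1.0 + 0.1 * math.sin(2 * math.pi * i / nx) for i in range(nx)]
for _ in range(400):
    u = burgers_step(u, dx, dt, nu)
```

The viscous term damps the perturbation while the nonlinear term steepens it, which is the qualitative balance the KdV–Burgers description captures near the neutral stability line.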

  7. A retinal circuit model accounting for wide-field amacrine cells.

    Science.gov (United States)

    Sağlam, Murat; Hayashida, Yuki; Murayama, Nobuki

    2009-03-01

    In previous experimental studies on visual processing in vertebrates, higher-order visual functions such as object segregation from background were found even at the retinal stage. Previously, "linear-nonlinear" (LN) cascade models had been applied to the retinal circuit and succeeded in describing the input-output dynamics for certain parts of the circuit, e.g., the receptive field of the outer retinal neurons. Recently, some abstract models composed of LN cascades as the circuit elements could explain the higher-order retinal functions. However, in such models most classes of retinal neurons are omitted, and thus how those neurons contribute to the visual computations cannot be explored. Here, we present a spatio-temporal computational model of the vertebrate retina, based on the response function for each class of retinal neurons and on the anatomical inter-cellular connections. This model was capable not only of reproducing the spatio-temporal filtering properties of the outer retinal neurons, but also of realizing the object segregation mechanism in the inner retinal circuit involving the "wide-field" amacrine cells. Moreover, the first-order Wiener kernels calculated for the neurons in our model showed a reasonable fit to the kernels previously measured in real retinal neurons in situ.
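The "linear-nonlinear" cascade that the abstract contrasts with the full circuit model can be sketched minimally: a causal linear temporal filter followed by a static nonlinearity. The biphasic kernel and rectifying nonlinearity below are generic illustrative choices, not the model's fitted components.

```python
def ln_cascade(stimulus, kernel, threshold=0.0):
    """Minimal LN cascade: convolve the stimulus with a linear temporal
    kernel, then apply a static half-wave-rectifying nonlinearity."""
    out = []
    for t in range(len(stimulus)):
        # linear stage: causal convolution with the kernel
        g = sum(kernel[k] * stimulus[t - k]
                for k in range(min(len(kernel), t + 1)))
        # nonlinear stage: rectification above a threshold
        out.append(max(0.0, g - threshold))
    return out

# biphasic temporal kernel (illustrative, not a fitted receptive field)
kernel = [0.5, 1.0, 0.2, -0.4, -0.2]
# a brief step of light between t = 5 and t = 10
stimulus = [1.0 if 5 <= t < 10 else 0.0 for t in range(20)]
response = ln_cascade(stimulus, kernel)
```

A cascade like this reproduces transient responses to stimulus onset, but, as the abstract notes, it says nothing about which cell classes implement the computation.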

  8. Analysis of homogeneous/non-homogeneous nanofluid models accounting for nanofluid-surface interactions

    Science.gov (United States)

    Ahmad, R.

    2016-07-01

    This article reports an unbiased analysis of water-based, rod-shaped alumina nanoparticles, considering both the homogeneous and non-homogeneous nanofluid models over the coupled nanofluid-surface interface. The mechanics of the surface are found for both the homogeneous and non-homogeneous models, which were ignored in previous studies. The viscosity and thermal conductivity data are taken from the international nanofluid property benchmark exercise, and all simulations use experimentally verified results. By considering the homogeneous and non-homogeneous models, the precise movement of the alumina nanoparticles over the surface has been observed by solving the corresponding system of differential equations. For the non-homogeneous model, a uniform temperature and nanofluid volume fraction are assumed at the surface, and the flux of the alumina nanoparticles is taken as zero. The assumption of zero nanoparticle flux at the surface makes the non-homogeneous model physically more realistic. The differences between all profiles for the homogeneous and non-homogeneous models are insignificant, owing to small deviations in the values of the Brownian motion and thermophoresis parameters.

  9. Modelling coral calcification accounting for the impacts of coral bleaching and ocean acidification

    Science.gov (United States)

    Evenhuis, C.; Lenton, A.; Cantin, N. E.; Lough, J. M.

    2015-05-01

    Coral reefs are diverse ecosystems that are threatened by rising CO2 levels through increases in sea surface temperature and ocean acidification. Here we present a new unified model that links changes in temperature and carbonate chemistry to coral health. Changes in coral health and population are explicitly modelled by linking rates of growth, recovery and calcification to rates of bleaching and temperature-stress-induced mortality. The model is underpinned by four key principles: the Arrhenius equation, thermal specialisation, correlated up- and down-regulation of traits that are consistent with resource allocation trade-offs, and adaptation to local environments. These general relationships allow this model to be constructed from a range of experimental and observational data. The performance of the model is assessed against independent data to demonstrate how it can capture the observed response of corals to stress. We also provide new insights into the factors that determine calcification rates and provide a framework based on well-known biological principles to help understand the observed global distribution of calcification rates. Our results suggest that, despite the implicit complexity of the coral reef environment, a simple model based on temperature, carbonate chemistry and different species can give insights into how corals respond to changes in temperature and ocean acidification.

  10. Carbon accounting and economic model uncertainty of emissions from biofuels-induced land use change.

    Science.gov (United States)

    Plevin, Richard J; Beckman, Jayson; Golub, Alla A; Witcover, Julie; O'Hare, Michael

    2015-03-03

    Few of the numerous published studies of the emissions from biofuels-induced "indirect" land use change (ILUC) attempt to propagate and quantify uncertainty, and those that have done so have restricted their analysis to a portion of the modeling systems used. In this study, we pair a global, computable general equilibrium model with a model of greenhouse gas emissions from land-use change to quantify the parametric uncertainty in the paired modeling system's estimates of greenhouse gas emissions from ILUC induced by expanded production of three biofuels. We find that for the three fuel systems examined (US corn ethanol, Brazilian sugar cane ethanol, and US soybean biodiesel), 95% of the results occurred within ±20 g CO2e MJ⁻¹ of the mean (coefficient of variation of 20-45%), with economic model parameters related to crop yield and the productivity of newly converted cropland (from forestry and pasture) contributing most of the variance in estimated ILUC emissions intensity. Although the experiments performed here allow us to characterize parametric uncertainty, changes to the model structure have the potential to shift the mean by tens of grams of CO2e per megajoule and further broaden distributions for ILUC emission intensities.
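Parametric uncertainty propagation of the kind described can be sketched with a plain Monte Carlo loop: sample the uncertain parameters, run the model, and summarize the spread of the outputs. The emissions function and parameter distributions below are toy stand-ins, not the paired equilibrium/emissions models used in the study.

```python
import random
import statistics

def iluc_emissions(yield_elasticity, conversion_factor):
    """Toy stand-in for the paired economic/emissions models (g CO2e per MJ):
    a weaker crop-yield response means more land conversion per unit of fuel."""
    land_converted = 1.0 / yield_elasticity
    return land_converted * conversion_factor

def sample_params(rng):
    # hypothetical parameter distributions, truncated to stay positive
    y = max(rng.gauss(0.25, 0.05), 0.05)
    f = max(rng.gauss(8.0, 2.0), 0.0)
    return y, f

rng = random.Random(0)
samples = sorted(iluc_emissions(*sample_params(rng)) for _ in range(10_000))
mean = statistics.mean(samples)
cv = statistics.pstdev(samples) / mean  # coefficient of variation
lo, hi = samples[250], samples[9750]    # roughly the central 95% interval
```

The width of `(lo, hi)` and the coefficient of variation are the kinds of summaries the study reports; here they only characterize the toy distributions chosen above.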

  11. Magnetization of type-II superconductors in the form of short cylinders and the screening supercurrent distribution in the Bean model

    Science.gov (United States)

    Kuz'michev, N. D.; Fedchenko, A. A.

    2012-05-01

    The simplest analytical form of the screening supercurrent distribution is found, and the magnetization of type-II (hard) superconductors in the form of finite-length cylinders and disks (pellets) is calculated. Calculations are carried out in terms of the Bean model taking into account the curvature of magnetic field lines. From this distribution, the total magnetic field intensity and the magnetization hysteresis loop are calculated for such samples in different cases.

  12. Modelling Solar Oscillation Power Spectra: II. Parametric Model of Spectral Lines Observed in Doppler Velocity Measurements

    CERN Document Server

    Vorontsov, Sergei V

    2013-01-01

    We describe a global parametric model for the observed power spectra of solar oscillations of intermediate and low degree. A physically motivated parameterization is used as a substitute for a direct description of mode excitation and damping as these mechanisms remain poorly understood. The model is targeted at the accurate fitting of power spectra coming from Doppler velocity measurements and uses an adaptive response function that accounts for both the vertical and horizontal components of the velocity field on the solar surface and for possible instrumental and observational distortions. The model is continuous in frequency, can easily be adapted to intensity measurements and extends naturally to the analysis of high-frequency pseudo modes (interference peaks at frequencies above the atmospheric acoustic cutoff).

  13. Propeller aircraft interior noise model. II - Scale-model and flight-test comparisons

    Science.gov (United States)

    Willis, C. M.; Mayes, W. H.

    1987-01-01

    A program for predicting the sound levels inside propeller driven aircraft arising from sidewall transmission of airborne exterior noise is validated through comparisons of predictions with both scale-model test results and measurements obtained in flight tests on a turboprop aircraft. The program produced unbiased predictions for the case of the scale-model tests, with a standard deviation of errors of about 4 dB. For the case of the flight tests, the predictions revealed a bias of 2.62-4.28 dB (depending upon whether or not the data for the fourth harmonic were included) and the standard deviation of the errors ranged between 2.43 and 4.12 dB. The analytical model is shown to be capable of taking changes in the flight environment into account.

  14. Accounting for anthropogenic actions in modeling of stream flow at the regional scale

    Science.gov (United States)

    David, C. H.; Famiglietti, J. S.

    2013-12-01

    The modeling of the horizontal movement of water from land to coasts at scales ranging from 10^5 km^2 to 10^6 km^2 has benefited from extensive research within the past two decades. In parallel, community technology for gathering/sharing surface water observations and datasets for describing the geography of terrestrial water bodies have recently had groundbreaking advancements. Yet, the fields of computational hydrology and hydroinformatics have barely started to work hand-in-hand, and much research remains to be performed before we can better understand the anthropogenic impact on surface water through combined observations and models. Here, we build on our existing river modeling approach that leverages community state-of-the-art tools such as atmospheric data from the second phase of the North American Land Data Assimilation System (NLDAS2), river networks from the enhanced National Hydrography Dataset (NHDPlus), and observations from the U.S. Geological Survey National Water Information System (NWIS) obtained through CUAHSI webservices. Modifications are made to our integrated observational/modeling system to include treatment for anthropogenic actions such as dams, pumping and divergences in river networks. Initial results of a study focusing on the entire State of California suggest that availability of data describing human alterations on natural river networks associated with proper representation of such actions in our models could help advance hydrology further. Snapshot from an animation of flow in California river networks. The full animation is available at: http://www.ucchm.org/david/rapid.htm.

  15. An extended continuum model accounting for the driver's timid and aggressive attributions

    Science.gov (United States)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-04-01

    Considering the driver's timid and aggressive behaviors simultaneously, a new continuum model is put forward in this paper. Applying linear stability theory, we present the linear stability analysis of the new model. Through nonlinear analysis, the KdV-Burgers equation is derived to describe the density wave near the neutral stability line. Numerical results verify that aggressive driving is better than timid driving because the aggressive driver adjusts his speed in time according to the leading car's speed. The key improvement of this new model is that it captures how timid driving deteriorates traffic stability while aggressive driving enhances it. The relationship of energy consumption between aggressive and timid driving is also studied. Numerical results show that aggressive driver behavior can not only suppress traffic congestion but also reduce energy consumption.

  16. Model of Environmental Development of the Urbanized Areas: Accounting of Ecological and other Factors

    Science.gov (United States)

    Abanina, E. N.; Pandakov, K. G.; Agapov, D. A.; Sorokina, Yu V.; Vasiliev, E. H.

    2017-05-01

    Modern cities and towns are often characterized by poor administration, which can lead to environmental degradation, growing poverty, declining economic growth and social isolation. In these circumstances it is important to conduct fresh research into new ways of achieving sustainable development of administrative districts. Such development of urban areas depends on many interdependent factors: ecological, economic and social. In this article we present some theoretical aspects of forming a model of environmental progress of urbanized areas. We propose a model containing four levels, covering the natural resource capacities of the territory, its social features, economic growth and human impact, and we describe the interrelations of the model's elements. A program of environmental development of a city is offered that could be used in any urban area.

  17. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    Energy Technology Data Exchange (ETDEWEB)

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments, including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axi-symmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  18. Does Don Fisher's high-pressure manifold model account for phloem transport and resource partitioning?

    Science.gov (United States)

    Patrick, John W

    2013-01-01

    The pressure flow model of phloem transport envisaged by Münch (1930) has gained wide acceptance. Recently, however, the model has been questioned on structural and physiological grounds. For instance, sub-structures of sieve elements may reduce their hydraulic conductances to levels that impede flow rates of phloem sap and observed magnitudes of pressure gradients to drive flow along sieve tubes could be inadequate in tall trees. A variant of the Münch pressure flow model, the high-pressure manifold model of phloem transport introduced by Donald Fisher may serve to reconcile at least some of these questions. To this end, key predicted features of the high-pressure manifold model of phloem transport are evaluated against current knowledge of the physiology of phloem transport. These features include: (1) An absence of significant gradients in axial hydrostatic pressure in sieve elements from collection to release phloem accompanied by transport properties of sieve elements that underpin this outcome; (2) Symplasmic pathways of phloem unloading into sink organs impose a major constraint over bulk flow rates of resources translocated through the source-path-sink system; (3) Hydraulic conductances of plasmodesmata, linking sieve elements with surrounding phloem parenchyma cells, are sufficient to support and also regulate bulk flow rates exiting from sieve elements of release phloem. The review identifies strong circumstantial evidence that resource transport through the source-path-sink system is consistent with the high-pressure manifold model of phloem transport. The analysis then moves to exploring mechanisms that may link demand for resources, by cells of meristematic and expansion/storage sinks, with plasmodesmal conductances of release phloem. The review concludes with a brief discussion of how these mechanisms may offer novel opportunities to enhance crop biomass yields.

  19. Taking dietary habits into account: A computational method for modeling food choices that goes beyond price.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jones-Smith, Jessica C; Igusa, Takeru

    2017-01-01

    Computational models have gained popularity as a predictive tool for assessing proposed policy changes affecting dietary choice. Specifically, they have been used for modeling dietary changes in response to economic interventions, such as price and income changes. Herein, we present a novel addition to this type of model by incorporating habitual behaviors that drive individuals to maintain or conform to prior eating patterns. We examine our method in a simulated case study of food choice behaviors of low-income adults in the US. We use data from several national datasets, including the National Health and Nutrition Examination Survey (NHANES), the US Bureau of Labor Statistics and the USDA, to parameterize our model and develop predictive capabilities in (1) quantifying the influence of prior diet preferences when food budgets are increased and (2) simulating the income elasticities of demand for four food categories. Food budgets can increase because of greater affordability (due to food aid and other nutritional assistance programs), or because of higher income. Our model predictions indicate that low-income adults consume unhealthy diets when they have highly constrained budgets, but that even after budget constraints are relaxed, these unhealthy eating behaviors are maintained. Specifically, diets in this population, before and after changes in food budgets, are characterized by relatively low consumption of fruits and vegetables and high consumption of fat. The model results for income elasticities also show almost no change in consumption of fruit and fat in response to changes in income, which is in agreement with data from the World Bank's International Comparison Program (ICP). Hence, the proposed method can be used in assessing the influences of habitual dietary patterns on the effectiveness of food policies.
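The income elasticities of demand simulated in the study can be illustrated with the standard arc (midpoint) formula: the percentage change in quantity demanded divided by the percentage change in income, each measured against the midpoint of the two values. The consumption and income figures below are made-up numbers chosen only to show near-zero elasticities of the kind the abstract reports.

```python
def income_elasticity(q0, q1, m0, m1):
    """Arc (midpoint) income elasticity of demand:
    e = (dQ / Q_mid) / (dM / M_mid)."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dm = (m1 - m0) / ((m0 + m1) / 2)
    return dq / dm

# hypothetical weekly servings before and after a 10% income increase
e_fruit = income_elasticity(q0=5.0, q1=5.1, m0=300.0, m1=330.0)
e_fat = income_elasticity(q0=20.0, q1=20.2, m0=300.0, m1=330.0)
```

Both elasticities come out well below 1 for these inputs, i.e., income-inelastic demand: consumption barely responds to the income change, which is the habitual-behavior signature the model is built to capture.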

  20. Mathematical modeling of pigment dispersion taking into account the full agglomerate particle size distribution

    DEFF Research Database (Denmark)

    Kiil, Søren

    2017-01-01

    particle size distribution was simulated. Data from two previous experimental investigations were used for model validation. The first concerns two different yellow organic pigments dispersed in nitrocellulose/ethanol vehicles in a ball mill and the second a red organic pigment dispersed in a solvent-based… The only adjustable parameter used was an apparent rate constant for the linear agglomerate erosion rate. Model simulations, at selected values of time, for the full agglomerate particle size distribution were in good qualitative agreement with the measured values. A quantitative match of the experimental…

  1. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia

    Directory of Open Access Journals (Sweden)

    Supriyati

    2015-12-01

    Full Text Available Bank Indonesia demands that the national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks’ financial statements have become the basis for Bank Indonesia to determine the status of their soundness. In fact, banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. For the users of financial statements, however, the impact may differ, for example with respect to the value of the company, the length of the financial audit, and aspects of tax evasion by the banks. This study tries to find out (1) the effect of GCG on earnings management, (2) the effect of earnings management on company value, the audit report lag, and taxation, and (3) the effect of audit report lag on corporate value and taxation. This is a quantitative study with data collected from the banks' financial statements, GCG implementation reports, and annual reports of 2003-2013. There were 41 banks, listed on the Indonesia Stock Exchange, taken using purposive sampling. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the financial statements presented to the public. This research is expected to provide managerial implications in order to consider the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks' competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  2. Creative Accounting Model for Increasing Banking Industries’ Competitive Advantage in Indonesia (P.197-207

    Directory of Open Access Journals (Sweden)

    Supriyati Supriyati

    2017-01-01

    Full Text Available Bank Indonesia demands that the national banks improve the transparency of their financial condition and performance for the public, in line with the development of their products and activities. Furthermore, the banks’ financial statements have become the basis for Bank Indonesia to determine the status of their soundness. In fact, banks tend to practice earnings management in order to meet the criteria required by Bank Indonesia. For internal purposes, the initiative of earnings management has a positive impact on the performance of management. For the users of financial statements, however, the impact may differ, for example with respect to the value of the company, the length of the financial audit, and aspects of tax evasion by the banks. This study tries to find out (1) the effect of GCG on earnings management, (2) the effect of earnings management on company value, the audit report lag, and taxation, and (3) the effect of audit report lag on corporate value and taxation. This is a quantitative study with data collected from the banks' financial statements, GCG implementation reports, and annual reports of 2003-2013. There were 41 banks, listed on the Indonesia Stock Exchange, taken using purposive sampling. The results showed that the implementation of GCG affects the occurrence of earnings management. Accounting policy flexibility through earnings management is expected to affect the length of the audit process and the accuracy of the financial statements presented to the public. This research is expected to provide managerial implications in order to consider the possibility of earnings management practices in the banking industry. In the long term, earnings management is expected to improve the banks' competitiveness through an increase in the value of the company. Explicitly, earnings management also affects tax avoidance; the banks intend to pay lower taxes without breaking the existing taxation legislation.

  3. Accountability and non-proliferation nuclear regime: a review of the mutual surveillance Brazilian-Argentine model for nuclear safeguards; Accountability e regime de nao proliferacao nuclear: uma avaliacao do modelo de vigilancia mutua brasileiro-argentina de salvaguardas nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Xavier, Roberto Salles

    2014-08-01

    The subject of this research is the regimes of accountability, the organizations of global governance, the institutional arrangements of the global governance of nuclear non-proliferation, and the Brazilian-Argentine model of Mutual Vigilance of Nuclear Safeguards. The starting point is the importance of the institutional model of global governance for the effective control of the non-proliferation of nuclear weapons. In this context, the research investigates how the current arrangements of international nuclear non-proliferation are structured and how the Brazilian-Argentine model of Mutual Vigilance of Nuclear Safeguards performs in relation to the accountability regimes of global governance. To that end, the current literature was surveyed along three theoretical dimensions: accountability, global governance, and global governance organizations. The research method was the case study, with content analysis as the data treatment technique. The results made it possible to establish an evaluation model based on accountability mechanisms; to assess how the Brazilian-Argentine model of Mutual Vigilance of Nuclear Safeguards behaves against the proposed accountability regime; and to measure the degree to which regional arrangements that work with systems of global governance can strengthen these international systems. (author)

  4. An Exemplar-Model Account of Feature Inference from Uncertain Categorizations

    Science.gov (United States)

    Nosofsky, Robert M.

    2015-01-01

    In a highly systematic literature, researchers have investigated the manner in which people make feature inferences in paradigms involving uncertain categorizations (e.g., Griffiths, Hayes, & Newell, 2012; Murphy & Ross, 1994, 2007, 2010a). Although researchers have discussed the implications of the results for models of categorization and…

  5. Teachers' Conceptions of Assessment in Chinese Contexts: A Tripartite Model of Accountability, Improvement, and Irrelevance

    Science.gov (United States)

    Brown, Gavin T. L.; Hui, Sammy K. F.; Yu, Flora W. M.; Kennedy, Kerry J.

    2011-01-01

    The beliefs teachers have about assessment influence classroom practices and reflect cultural and societal differences. This paper reports the development of a new self-report inventory to examine beliefs teachers in Hong Kong and southern China contexts have about the nature and purpose of assessment. A statistically equivalent model for Hong…

  6. Accounting for false-positive acoustic detections of bats using occupancy models

    Science.gov (United States)

    Clement, Matthew J.; Rodhouse, Thomas J.; Ormsbee, Patricia C.; Szewczak, Joseph M.; Nichols, James D.

    2014-01-01

    1. Acoustic surveys have become a common survey method for bats and other vocal taxa. Previous work shows that bat echolocation may be misidentified, but common analytic methods, such as occupancy models, assume that misidentifications do not occur. Unless rare, such misidentifications could lead to incorrect inferences with significant management implications.
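The core issue, that standard occupancy models assume detections are never misidentifications, can be seen in a minimal single-site likelihood sketch in the style of false-positive occupancy models. The occupancy, detection, and false-positive probabilities below are illustrative values, not estimates from the study.

```python
def detection_history_likelihood(history, psi, p11, p10):
    """Likelihood of one site's detection history under a false-positive
    occupancy model:
      psi - probability the site is occupied
      p11 - per-survey detection probability given occupancy
      p10 - per-survey false-positive probability given non-occupancy
    history is a list of 0/1 survey outcomes."""
    l_occ = 1.0    # likelihood of the history if the site is occupied
    l_unocc = 1.0  # likelihood if it is unoccupied (detections are errors)
    for y in history:
        l_occ *= p11 if y else (1 - p11)
        l_unocc *= p10 if y else (1 - p10)
    return psi * l_occ + (1 - psi) * l_unocc

# a site with acoustic detections on 2 of 4 surveys
lik_fp = detection_history_likelihood([1, 0, 1, 0], psi=0.4, p11=0.5, p10=0.05)
# a conventional model ignoring false positives is the special case p10 = 0
lik_naive = detection_history_likelihood([1, 0, 1, 0], psi=0.4, p11=0.5, p10=0.0)
```

Setting `p10 = 0` forces every detection to imply occupancy, which is how misidentified echolocation calls bias conventional occupancy estimates upward.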

  7. A Mathematical Model Accounting for the Organisation in Multiplets of the Genetic Code

    OpenAIRE

    Sciarrino, A.

    2001-01-01

    By requiring stability of the genetic code against translation errors, modeled by suitable mathematical operators in the crystal basis model of the genetic code, the main features of the organisation in multiplets of the mitochondrial and of the standard genetic code are explained.

  8. Practical Model for First Hyperpolarizability Dispersion Accounting for Both Homogeneous and Inhomogeneous Broadening Effects.

    Science.gov (United States)

    Campo, Jochen; Wenseleers, Wim; Hales, Joel M; Makarov, Nikolay S; Perry, Joseph W

    2012-08-16

    A practical yet accurate dispersion model for the molecular first hyperpolarizability β is presented, incorporating both homogeneous and inhomogeneous line broadening because these affect the β dispersion differently, even if they are indistinguishable in linear absorption. Consequently, combining the absorption spectrum with one free shape-determining parameter Γinhom, the inhomogeneous line width, turns out to be necessary and sufficient to obtain a reliable description of the β dispersion, requiring no information on the homogeneous (including vibronic) and inhomogeneous line broadening mechanisms involved, providing an ideal model for practical use in extrapolating experimental nonlinear optical (NLO) data. The model is applied to the efficient NLO chromophore picolinium quinodimethane, yielding an excellent fit of the two-photon resonant wavelength-dependent data and a dependable static value β0 = 316 × 10⁻³⁰ esu. Furthermore, we show that including a second electronic excited state in the model does yield an improved description of the NLO data at shorter wavelengths but has only limited influence on β0.

  9. Fluid Simulations with Atomistic Resolution: Multiscale Model with Account of Nonlocal Momentum Transfer

    NARCIS (Netherlands)

    Svitenkov, A.I.; Chivilikhin, S.A.; Hoekstra, A.G.; Boukhanovsky, A.V.

    2015-01-01

Nano- and microscale flow phenomena turn out to be highly non-trivial for simulation and require the use of heterogeneous modeling approaches. While the continuum Navier-Stokes equations and related boundary conditions quickly break down at those scales, various direct simulation methods and hybrid …

  10. Accountability in Training Transfer: Adapting Schlenker's Model of Responsibility to a Persistent but Solvable Problem

    Science.gov (United States)

    Burke, Lisa A.; Saks, Alan M.

    2009-01-01

    Decades have been spent studying training transfer in organizational environments in recognition of a transfer problem in organizations. Theoretical models of various antecedents, empirical studies of transfer interventions, and studies of best practices have all been advanced to address this continued problem. Yet a solution may not be so…

  11. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    Science.gov (United States)

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  12. A coupled surface/subsurface flow model accounting for air entrapment and air pressure counterflow

    DEFF Research Database (Denmark)

    Delfs, Jens Olaf; Wang, Wenqing; Kalbacher, Thomas

    2013-01-01

    This work introduces the soil air system into integrated hydrology by simulating the flow processes and interactions of surface runoff, soil moisture and air in the shallow subsurface. The numerical model is formulated as a coupled system of partial differential equations for hydrostatic (diffusive...

  13. Taking the Error Term of the Factor Model into Account: The Factor Score Predictor Interval

    Science.gov (United States)

    Beauducel, Andre

    2013-01-01

    The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…

  14. Spatial modelling and ecosystem accounting for land use planning: addressing deforestation and oil palm expansion in Central Kalimantan, Indonesia

    NARCIS (Netherlands)

    Sumarga, E.

    2015-01-01

Ecosystem accounting is a new area of environmental economic accounting that aims to measure ecosystem services in a way that is in line with national accounts. The key characteristics of ecosystem accounting include the extension of the valuation boundary of the System of National Accounts, allowing …


  16. Thermodynamic Modeling of Developed Structural Turbulence Taking into Account Fluctuations of Energy Dissipation

    Science.gov (United States)

    Kolesnichenko, A. V.

    2004-03-01

A thermodynamic approach to the construction of a phenomenological macroscopic model of developed turbulence in a compressible fluid is considered with regard for the formation of space-time dissipative structures. A set of random variables was introduced into the model as internal parameters of the turbulent-chaos subsystem. This allowed us to obtain, by methods of nonequilibrium thermodynamics, the kinetic Fokker-Planck equation in the configuration space. This equation serves to determine the temporal evolution of the conditional probability distribution function of structural parameters pertaining to the cascade process of fragmentation of large-scale eddies and temperature inhomogeneities and to analyze Markovian stochastic processes of transition from one nonequilibrium stationary turbulent-motion state to another as a result of successive loss of stability caused by a change in the governing parameters. An alternative method for investigating the mechanisms of such transitions, based on the stochastic Langevin-type equation intimately related to the derived kinetic equation, is also considered. Some postulates and physical and mathematical assumptions used in the thermodynamic model of structurized turbulence are discussed in detail. In particular, using the deterministic transport equation for conditional means, we considered the cardinal problem of the developed approach: the possibility of the existence of asymptotically stable stationary states of the turbulent-chaos subsystem. Also proposed is the nonequilibrium thermodynamic potential for internal coordinates, which extends the well-known Boltzmann-Planck relationship for equilibrium states to the nonequilibrium stationary states of the representing ensemble. This potential is shown to be the Lyapunov function for such states. The relation is also explored between the internal intermittence in the inertial interval of scales and the fluctuations of the energy of dissipation. This study is aimed at …

  17. MODIS Inundation Estimate Assimilation into Soil Moisture Accounting Hydrologic Model: A Case Study in Southeast Asia

    Directory of Open Access Journals (Sweden)

    Ari Posner

    2014-11-01

Full Text Available Flash Flood Guidance consists of indices that estimate the amount of rain of a certain duration that is needed over a given small basin in order to cause minor flooding. Backwater catchment inundation from swollen rivers or regional groundwater inputs is not significant over the spatial and temporal scales of the majority of upland flash-flood-prone basins, so these effects are not considered. However, some lowland areas and flat terrain near large rivers experience standing water long after local precipitation has ceased. NASA is producing an experimental product from MODIS that detects standing water. These observations were assimilated into the hydrologic model in order to more accurately represent soil moisture conditions within basins arising from sources of water outside the basin. Based on the upper soil water content, relations are used to derive an error estimate for the modeled soil saturation fraction, whereby the soil saturation fraction model state can be updated given the availability of satellite-observed inundation. Model error estimates were used in a Monte Carlo ensemble forecast of soil water and flash flood potential. Numerical experiments with six months of data (July 2011–December 2011) showed that MODIS inundation data, when assimilated to correct soil moisture estimates, increased the likelihood that bankfull flow would occur, relative to non-assimilated modeling, at catchment outlets for approximately 44% of basin-days during the study time period. While this is a much more realistic representation of conditions, no actual events occurred during the time period that would allow validation.

  18. A Semi-Empirical Model for Tilted-Gun Planar Magnetron Sputtering Accounting for Chimney Shadowing

    Science.gov (United States)

    Bunn, J. K.; Metting, C. J.; Hattrick-Simpers, J.

    2015-01-01

Integrated computational materials engineering (ICME) approaches to composition and thickness profiles of sputtered thin-film samples are the key to expediting materials exploration for these materials. Here, an ICME-based semi-empirical approach to modeling the thickness of thin-film samples deposited via magnetron sputtering is developed. Using Yamamura's dimensionless differential angular sputtering yield and a measured deposition rate at a point in space for a single experimental condition, the model predicts the deposition profile from planar DC sputtering sources. The model includes corrections for off-center, tilted gun geometries as well as shadowing effects from gun chimneys used in most state-of-the-art sputtering systems. The modeling algorithm was validated by comparing its results with experimental deposition rates obtained from a sputtering system utilizing sources with a multi-piece chimney assembly that consists of a lower ground shield and a removable gas chimney. Simulations were performed for gun-tilts ranging from 0° to 31.3° from the vertical with and without the gas chimney installed. The results for the predicted and experimental angular dependence of the sputtering deposition rate were found to have an average magnitude of relative error of … for a 0°-31.3° gun-tilt range without the gas chimney, and … for a 17.7°-31.3° gun-tilt range with the gas chimney. The continuum nature of the model renders this approach reverse-optimizable, providing a rapid tool for assisting in the understanding of the synthesis-composition-property space of novel materials.
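The angular falloff of deposition rate with gun-tilt described above can be illustrated with a toy cosine-power emission model; the function name and the exponent n are assumptions for illustration only, and the paper's actual approach uses Yamamura's dimensionless differential yield rather than this simple cosᶰ term:

```python
import math

def deposition_rate(r0, theta_deg, n=2.0):
    """Toy cosine-power model: deposition rate at angle theta_deg from
    the gun axis, normalized to the on-axis rate r0. Illustrative only;
    the paper uses Yamamura's dimensionless differential yield instead."""
    return r0 * math.cos(math.radians(theta_deg)) ** n

# Normalized rates at 0, 15 and 31.3 degree gun-tilts
rates = [deposition_rate(1.0, t) for t in (0.0, 15.0, 31.3)]
```

With the on-axis rate measured at a single experimental condition, integrating such a profile over the substrate yields a thickness map, which is the spirit of the semi-empirical approach above.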

  19. Using state-and-transition modeling to account for imperfect detection in invasive species management

    Science.gov (United States)

    Frid, Leonardo; Holcombe, Tracy; Morisette, Jeffrey T.; Olsson, Aaryn D.; Brigham, Lindy; Bean, Travis M.; Betancourt, Julio L.; Bryan, Katherine

    2013-01-01

    Buffelgrass, a highly competitive and flammable African bunchgrass, is spreading rapidly across both urban and natural areas in the Sonoran Desert of southern and central Arizona. Damages include increased fire risk, losses in biodiversity, and diminished revenues and quality of life. Feasibility of sustained and successful mitigation will depend heavily on rates of spread, treatment capacity, and cost–benefit analysis. We created a decision support model for the wildland–urban interface north of Tucson, AZ, using a spatial state-and-transition simulation modeling framework, the Tool for Exploratory Landscape Scenario Analyses. We addressed the issues of undetected invasions, identifying potentially suitable habitat and calibrating spread rates, while answering questions about how to allocate resources among inventory, treatment, and maintenance. Inputs to the model include a state-and-transition simulation model to describe the succession and control of buffelgrass, a habitat suitability model, management planning zones, spread vectors, estimated dispersal kernels for buffelgrass, and maps of current distribution. Our spatial simulations showed that without treatment, buffelgrass infestations that started with as little as 80 ha (198 ac) could grow to more than 6,000 ha by the year 2060. In contrast, applying unlimited management resources could limit 2060 infestation levels to approximately 50 ha. The application of sufficient resources toward inventory is important because undetected patches of buffelgrass will tend to grow exponentially. In our simulations, areas affected by buffelgrass may increase substantially over the next 50 yr, but a large, upfront investment in buffelgrass control could reduce the infested area and overall management costs.

  20. Available for the Apple II: FIRM: Florida InteRactive Modeler.

    Science.gov (United States)

    Levy, C. Michael; And Others

    1983-01-01

The Apple II microcomputer program described allows instructors with minimal programming experience to construct computer models of psychological phenomena for students to investigate. Use of these models eliminates the need to maintain/house/breed animals or purchase sophisticated laboratory equipment. Several content models are also described…

  1. Codon-substitution models to detect adaptive evolution that account for heterogeneous selective pressures among site classes.

    Science.gov (United States)

    Yang, Ziheng; Swanson, Willie J

    2002-01-01

The nonsynonymous to synonymous substitution rate ratio (omega = dN/dS) provides a sensitive measure of selective pressure at the protein level, with omega values < 1, = 1, and > 1 indicating purifying selection, neutral evolution, and diversifying selection, respectively. Maximum likelihood models of codon substitution developed recently account for variable selective pressures among amino acid sites by employing a statistical distribution for the omega ratio among sites. Those models, called random-sites models, are suitable when we do not know a priori which sites are under what kind of selective pressure. Sometimes prior information (such as the tertiary structure of the protein) might be available to partition sites in the protein into different classes, which are expected to be under different selective pressures. It is then sensible to use such information in the model. In this paper, we implement maximum likelihood models for prepartitioned data sets, which account for the heterogeneity among site partitions by using different omega parameters for the partitions. The models, referred to as fixed-sites models, are also useful for combined analysis of multiple genes from the same set of species. We apply the models to data sets of the major histocompatibility complex (MHC) class I alleles from human populations and of the abalone sperm lysin genes. Structural information is used to partition sites in MHC into two classes: those in the antigen recognition site (ARS) and those outside. Positive selection is detected in the ARS by the fixed-sites models. Similarly, sites in lysin are classified into the buried and solvent-exposed classes according to the tertiary structure, and positive selection was detected at the solvent-exposed sites. The random-sites models identified a number of sites under positive selection in each data set, confirming and elaborating the results of the fixed-sites models. 
The analysis demonstrates the utility of the fixed-sites models, as well as …
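The ω = dN/dS decision rule described above can be sketched as a small helper (a hypothetical function for illustration, not part of the authors' software):

```python
def selection_regime(dn, ds, tol=1e-6):
    """Classify selective pressure from nonsynonymous (dN) and
    synonymous (dS) substitution rates via omega = dN/dS."""
    if ds <= 0:
        raise ValueError("dS must be positive to form the ratio")
    omega = dn / ds
    if omega < 1.0 - tol:
        return "purifying selection"      # omega < 1
    if omega > 1.0 + tol:
        return "diversifying selection"   # omega > 1
    return "neutral evolution"            # omega ~ 1
```

In the fixed-sites models above, a separate ω parameter is estimated per site partition (e.g. ARS vs. non-ARS), and each estimate is interpreted with exactly this rule.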

  2. Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction

    Science.gov (United States)

    Wilson, Teresa; Bartlett, Jennifer L.

    2016-01-01

Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0° - 55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect these data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
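As a rough back-of-envelope sketch (an assumption for illustration, not the authors' model): near the horizon at the equinoxes the Sun's altitude changes at about 0.25°/min × cos(latitude), so a given refraction change maps to a sunrise-time shift of roughly:

```python
import math

def sunrise_shift_minutes(refraction_arcmin, latitude_deg):
    """Rough equinox estimate: near the horizon the Sun's altitude
    changes at about 0.25 deg/min * cos(latitude), so a refraction
    change of refraction_arcmin arcminutes shifts sunrise by roughly
    this many minutes."""
    rate_deg_per_min = 0.25 * math.cos(math.radians(latitude_deg))
    return (refraction_arcmin / 60.0) / rate_deg_per_min

# Standard 34-arcminute horizon refraction at 45 degrees latitude
shift = sunrise_shift_minutes(34.0, 45.0)
```

This simple estimate (about 3 minutes at 45°) is consistent with the one-to-four-minute error scale quoted above, and shows why the same refraction uncertainty grows rapidly as cos(latitude) shrinks toward the poles.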

  3. Tree biomass in the Swiss landscape: nationwide modelling for improved accounting for forest and non-forest trees.

    Science.gov (United States)

    Price, B; Gomez, A; Mathys, L; Gardi, O; Schellenberger, A; Ginzler, C; Thürig, E

    2017-03-01

Trees outside forest (TOF) can perform a variety of social, economic and ecological functions including carbon sequestration. However, detailed quantification of tree biomass is usually limited to forest areas. Taking advantage of structural information available from stereo aerial imagery and airborne laser scanning (ALS), this research models tree biomass using national forest inventory data and linear least-squares regression and applies the model both inside and outside of forest to create a nationwide model for tree biomass (above ground and below ground). Validation of the tree biomass model against TOF data within settlement areas shows relatively low model performance (R² of 0.44) but still a considerable improvement on current biomass estimates used for greenhouse gas inventory and carbon accounting. We demonstrate an efficient and easily implementable approach to modelling tree biomass across a large heterogeneous nationwide area. The model offers significant opportunity for improved estimates in land use combination categories (CC) where tree biomass has either not been included or only roughly estimated until now. The ALS biomass model also offers the advantage of providing greater spatial resolution and greater within-CC spatial variability compared to the current nationwide estimates.
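A minimal sketch of the kind of linear least-squares fit described above, using synthetic stand-ins for the structural predictors (all variable names and coefficients are invented for illustration):

```python
import numpy as np

# Hypothetical structural predictors (stand-ins for ALS/stereo metrics)
rng = np.random.default_rng(42)
height = rng.uniform(5.0, 40.0, 100)    # canopy height, m
crown = rng.uniform(1.0, 50.0, 100)     # crown area, m^2
# Synthetic "true" biomass with known coefficients plus noise
biomass = 2.0 * height + 0.5 * crown + rng.normal(0.0, 2.0, 100)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(100), height, crown])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
```

In the study itself the fit is calibrated on national forest inventory plots and then applied wall-to-wall, inside and outside forest; the R² of 0.44 quoted above is for the harder TOF validation case.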

  4. (I) A Declarative Framework for ERP Systems(II) Reactors: A Data-Driven Programming Model for Distributed Applications

    DEFF Research Database (Denmark)

    Stefansen, Christian Oskar Erik

The thesis comprises two parts. (I) A Declarative Framework for ERP Systems: the framework is developed around two core modules, namely the general ledger and accounts receivable; the result is an event-based approach to designing ERP systems and an abstract-level sketch of the architecture. Based on the idea of soft constraints, one paper explains the design, semantics, and use of a language for allocating work in business processes; the language lets process designers express both hard constraints and soft constraints. • Compositional Specification of Commercial Contracts: describes the design, multiple semantics, and use of a domain-specific language (DSL) for modeling commercial contracts. • SMAWL: A SMAll Workflow Language Based on CCS: shows how workflow patterns can be encoded in CCS and proceeds to design a macro language, SMAWL, for workflows based on those patterns; the semantics of SMAWL is defined via translation to CCS. (II) The Reactors programming model: • Reactors: A Data-Oriented Synchronous…

  5. Modelling and experimental validation for off-design performance of the helical heat exchanger with LMTD correction taken into account

    Energy Technology Data Exchange (ETDEWEB)

    Phu, Nguyen Minh; Trinh, Nguyen Thi Minh [Vietnam National University, Ho Chi Minh City (Viet Nam)

    2016-07-15

Today the helical coil heat exchanger is widely employed due to its dominant advantages. In this study, a mathematical model was established to predict off-design performance of the helical heat exchanger. The model was based on the LMTD and ε-NTU methods, where an LMTD correction factor was taken into account to increase accuracy. An experimental apparatus was set up to validate the model. Results showed that the errors of thermal duty, outlet hot fluid temperature, outlet cold fluid temperature, shell-side pressure drop, and tube-side pressure drop were respectively ±5%, ±1%, ±1%, ±5% and ±2%. Diagrams of dimensionless operating parameters and a regression function were also presented as design maps: a fast calculator for use in the design and operation of the exchanger. The study is expected to be a good tool for estimating off-design conditions of single-phase helical heat exchangers.
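The corrected-LMTD rating equation underlying such models, Q = U·A·F·ΔT_lm, can be sketched as follows (the numbers are illustrative, not taken from the paper):

```python
import math

def lmtd(dt1, dt2):
    """Log-mean temperature difference for terminal differences dt1, dt2 (K)."""
    if abs(dt1 - dt2) < 1e-9:
        return dt1  # limit case: equal terminal differences
    return (dt1 - dt2) / math.log(dt1 / dt2)

def duty(u, area, f, dt1, dt2):
    """Heat duty Q = U * A * F * LMTD, with F the LMTD correction factor."""
    return u * area * f * lmtd(dt1, dt2)

# Illustrative counter-flow case: hot 90->60 degC against cold 20->40 degC
dt_hot_end = 90.0 - 40.0   # 50 K terminal difference
dt_cold_end = 60.0 - 20.0  # 40 K terminal difference
q = duty(u=500.0, area=2.0, f=0.95, dt1=dt_hot_end, dt2=dt_cold_end)  # W
```

The correction factor F < 1 accounts for the departure of the helical-coil flow arrangement from pure counter-flow, which is exactly what the paper's LMTD correction contributes to accuracy.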

  6. Where's the problem? Considering Laing and Esterson's account of schizophrenia, social models of disability, and extended mental disorder.

    Science.gov (United States)

    Cooper, Rachel

    2017-08-01

In this article, I compare and evaluate R. D. Laing and A. Esterson's account of schizophrenia as developed in Sanity, Madness and the Family (1964), social models of disability, and accounts of extended mental disorder. These accounts claim that some putative disorders (schizophrenia, disability, certain mental disorders) should not be thought of as reflecting biological or psychological dysfunction within the afflicted individual, but instead as external problems (to be located in the family, or in the material and social environment). In this article, I consider the grounds on which such claims might be supported. I argue that problems should not be located within an individual putative patient in cases where there is some acceptable test environment in which there is no problem. A number of cases where such an argument can show that there is no internal disorder are discussed. I argue, however, that Laing and Esterson's argument, that schizophrenia is not located within the diagnosed patients, does not work. The problem with their argument is that they fail to show that the diagnosed women in their study function adequately in any environment.

  7. Method for determining the duration of construction based on evolutionary modeling taking into account random organizational expectations

    Directory of Open Access Journals (Sweden)

    Alekseytsev Anatoliy Viktorovich

    2016-10-01

Full Text Available One of the persistent problems of construction planning is failure to meet time constraints and the resulting increase in workflow duration. In recent years, information technologies have been used efficiently to estimate construction periods. The article considers the problem of optimally estimating the duration of construction, taking into account possible organizational expectations. To solve this problem, an iterative scheme of evolutionary modeling is developed in which random values of organizational expectations are used as variable parameters. Adjustable genetic operators are used to improve the efficiency of the search for solutions. The reliability of the proposed approach is illustrated by an example of forming construction schedules for monolithic building foundations, taking into account possible disruptions in the supply of concrete and reinforcement cages. The presented methodology enables automated generation of several alternative construction schedules in accordance with standard or directive durations. This computational procedure can also take into account construction downtime due to weather, accidents caused by construction machinery breakdowns, or local emergency collapses of the structures being erected.

  8. Accounting for non-linear chemistry of ship plumes in the GEOS-Chem global chemistry transport model

    Directory of Open Access Journals (Sweden)

    G. C. M. Vinken

    2011-11-01

Full Text Available We present a computationally efficient approach to account for the non-linear chemistry occurring during the dispersion of ship exhaust plumes in a global 3-D model of atmospheric chemistry (GEOS-Chem). We use a plume-in-grid formulation where ship emissions age chemically for 5 h before being released in the global model grid. Besides reducing the original ship NOx emissions in GEOS-Chem, our approach also releases the secondary compounds ozone and HNO3, produced during the 5 h after the original emissions, into the model. We applied our improved method and also the widely used "instant dilution" approach to a 1-yr GEOS-Chem simulation of global tropospheric ozone-NOx-VOC-aerosol chemistry. We also ran simulations with the standard model (emitting 10 molecules O3 and 1 molecule HNO3 per ship NOx molecule), and a model without any ship emissions at all. The model without any ship emissions simulates up to 0.1 ppbv (or 50%) lower NOx concentrations over the North Atlantic in July than our improved GEOS-Chem model. "Instant dilution" overestimates NOx concentrations by 0.1 ppbv (50%) and ozone by 3–5 ppbv (10–25%), compared to our improved model over this region. These conclusions are supported by comparing simulated and observed NOx and ozone concentrations in the lower troposphere over the Pacific Ocean. The comparisons show that the improved GEOS-Chem model simulates NOx concentrations in between the instant dilution model and the model without ship emissions, which results in lower O3 concentrations than the instant dilution model. The relative differences in simulated NOx and ozone between our improved approach and instant dilution are smallest over strongly polluted seas (e.g. the North Sea), suggesting that accounting for in-plume chemistry is most relevant for pristine marine areas.

  9. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    Energy Technology Data Exchange (ETDEWEB)

    Beckon, William N., E-mail: William_Beckon@fws.gov

    2016-07-15

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).
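The lag-selection idea above (test an array of candidate lags and keep the one that gives the best regression fit between exposure and tissue concentration) can be sketched on synthetic data; all names and series here are invented for illustration:

```python
import numpy as np

def best_lag(exposure, tissue, max_lag):
    """Return the lag (in samples) maximizing the squared correlation
    between lagged exposure and tissue concentration, plus that r^2."""
    best, best_r2 = 0, -1.0
    for lag in range(max_lag + 1):
        x = exposure[: len(exposure) - lag]
        y = tissue[lag:]
        r = np.corrcoef(x, y)[0, 1]
        if r * r > best_r2:
            best, best_r2 = lag, r * r
    return best, best_r2

# Synthetic series in which tissue responds to exposure 3 steps later
rng = np.random.default_rng(0)
exposure = rng.random(200)
tissue = np.concatenate([np.zeros(3), exposure[:-3]]) + 0.01 * rng.random(200)
lag, r2 = best_lag(exposure, tissue, max_lag=10)
```

The scan correctly recovers the built-in three-step delay; in the selenium study the same idea is applied with lags on the scale of months to years and a regression rather than a plain correlation.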

  10. Accounting for Long Term Sediment Storage in a Watershed Scale Numerical Model for Suspended Sediment Routing

    Science.gov (United States)

    Keeler, J. J.; Pizzuto, J. E.; Skalak, K.; Karwan, D. L.; Benthem, A.; Ackerman, T. R.

    2015-12-01

    Quantifying the delivery of suspended sediment from upland sources to downstream receiving waters is important for watershed management, but current routing models fail to accurately represent lag times in delivery resulting from sediment storage. In this study, we route suspended sediment tagged by a characteristic tracer using a 1-dimensional model that implicitly includes storage and remobilization processes and timescales. From an input location where tagged sediment is added, the model advects suspended sediment downstream at the velocity of the stream (adjusted for the intermittency of transport events). Deposition rates are specified by the fraction of the suspended load stored per kilometer of downstream transport (presumably available from a sediment budget). Tagged sediment leaving storage is evaluated from a convolution equation based on the probability distribution function (pdf) of sediment storage waiting times; this approach avoids the difficulty of accurately representing complex processes of sediment remobilization from floodplain and other deposits. To illustrate the role of storage on sediment delivery, we compare exponential and bounded power-law waiting time pdfs with identical means of 94 years. In both cases, the median travel time for sediment to reach the depocenter in fluvial systems less than 40km long is governed by in-channel transport and is unaffected by sediment storage. As the channel length increases, however, the median sediment travel time reflects storage rather than in-channel transport; travel times do not vary significantly between the two different waiting time functions. At distances of 50, 100, and 200 km, the median travel time for suspended sediment is 36, 136, and 325 years, orders of magnitude slower than travel times associated with in-channel transport. These computations demonstrate that storage can be neglected for short rivers, but for longer systems, storage controls the delivery of suspended sediment.
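The convolution step described above (tagged sediment leaving storage computed as the deposition series convolved with a waiting-time pdf) can be sketched for the exponential case with the 94-year mean used in the comparison:

```python
import numpy as np

def release_from_storage(deposition, mean_wait, dt=1.0):
    """Tagged sediment leaving storage: convolve the deposition series
    with an exponential waiting-time pdf (mean mean_wait, same time units)."""
    t = np.arange(0.0, 10.0 * mean_wait, dt)
    pdf = np.exp(-t / mean_wait) / mean_wait   # exponential waiting-time pdf
    return np.convolve(deposition, pdf)[: len(deposition)] * dt

# A unit pulse of tagged sediment deposited in year 0, 94-yr mean storage
deposition = np.zeros(300)
deposition[0] = 1.0
release = release_from_storage(deposition, mean_wait=94.0)
```

For a unit pulse the release series simply traces the waiting-time pdf, and most of the tagged mass is still in storage after decades, which is why storage, not in-channel transport, controls travel times in long rivers. Swapping in a bounded power-law pdf with the same mean changes only the `pdf` line.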

  11. An extended macro traffic flow model accounting for multiple optimal velocity functions with different probabilities

    Science.gov (United States)

    Cheng, Rongjun; Ge, Hongxia; Wang, Jufeng

    2017-08-01

Because the maximum velocity and safe headway distance of different vehicles are not exactly the same, an extended macro model of traffic flow that considers multiple optimal velocity functions with probabilities is proposed in this paper. By means of linear stability theory, the linear stability condition of the new model is obtained. The KdV-Burgers equation is derived through nonlinear analysis to describe the propagating behavior of the traffic density wave near the neutral stability line. Numerical simulations of the influences of multiple maximum velocities and multiple safety distances on the model's stability and traffic capacity are carried out, covering three cases: two kinds of maximum speed with the same safe headway distance, two types of safe headway distance with the same maximum speed, and two maximum velocities combined with two time-gaps. The first case demonstrates that as the proportion of vehicles with a larger vmax increases, the traffic tends to become unstable, meaning that abrupt acceleration and braking are not conducive to traffic stability and more easily result in stop-and-go phenomena. The second case shows that as the proportion of vehicles with greater safety spacing increases, the traffic tends to become unstable, meaning that overly cautious assumptions or weak driving skill are not conducive to traffic stability. The last case indicates that an increase in maximum speed is not conducive to traffic stability, while a reduction of the safe headway distance is conducive to it. The simulations show that mixed driving and traffic diversion have no effect on traffic capacity when the traffic density is low or heavy. 
Numerical results also show that mixed driving should be chosen to increase the traffic capacity when the traffic density is lower, while traffic diversion should be chosen to increase the traffic capacity when …

  12. Regional input-output models and the treatment of imports in the European System of Accounts

    OpenAIRE

    Kronenberg, Tobias

    2011-01-01

Input-output models are often used in regional science due to their versatility and their ability to capture many of the distinguishing features of a regional economy. Input-output tables are available for all EU member countries, but they are hard to find at the regional level, since many regional governments lack the resources or the will to produce reliable, survey-based regional input-output tables. Therefore, in many cases researchers adopt nonsurvey techniques to derive regional input-output …

  13. Accounting for heaping in retrospectively reported event data - a mixture-model approach.

    Science.gov (United States)

    Bar, Haim Y; Lillard, Dean R

    2012-11-30

    When event data are retrospectively reported, more temporally distal events tend to get 'heaped' on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand side variables. We develop a model-based approach to estimate the extent of heaping in the data and how it affects regression parameter estimates. We use smoking cessation data as a motivating example, but our method is general. It facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that collect and potentially could collect event data.
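A toy simulation of the heaping mechanism itself (rounding a retrospectively reported value to the nearest multiple of 5 with some probability; all names, rates, and the age range are invented for illustration):

```python
import numpy as np

def heap(values, prob_heap, base=5, rng=None):
    """Round each value to the nearest multiple of `base` with
    probability `prob_heap`, mimicking retrospective heaping."""
    rng = rng or np.random.default_rng(0)
    heaped = values.copy()
    mask = rng.random(len(values)) < prob_heap
    heaped[mask] = np.round(values[mask] / base) * base
    return heaped

rng = np.random.default_rng(1)
true_ages = rng.integers(18, 60, size=10_000).astype(float)
reported = heap(true_ages, prob_heap=0.4, base=5, rng=rng)
share_on_multiples = float(np.mean(reported % 5 == 0))
```

The excess mass on multiples of 5 relative to the roughly one-fifth expected by chance is the signature the mixture model above estimates; the attenuation bias arises because the heaped dates are then matched against time-varying covariates at the wrong time points.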

  14. Research on ice destruction under dynamic loading. Part 1. Modeling explosive loading of an ice cover taking temperature into account

    Directory of Open Access Journals (Sweden)

    Bogomolov Gennady N.

    2017-01-01

    Full Text Available In this research, the behavior of ice under shock and explosive loads is analyzed. Full-scale experiments were carried out. It is established that the results of 2013 practically coincide with those of 2017, which is explained by the temperature at which the river ice formed. Two research objects are considered: freshwater ice and a river ice cover. The Taylor test was simulated numerically and its results are presented. Ice is described by an elastoplastic model of continuum mechanics. The explosive loading of ice by emulsion explosives is numerically simulated, and the destruction of the ice cover under the detonation products is analyzed in detail.

  15. Study on Logistics Activity Cost Accounting Model

    Institute of Scientific and Technical Information of China (English)

    李淼; 程国全

    2011-01-01

    Through designing a standardized logistics center activity process and applying the ideas of activity-based costing, the paper uses tables to set the cost parameters of the various logistics activities, which are then used to formulate a cost accounting model for logistics activities.
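The table-driven parameter setting described above can be sketched as a minimal activity-based costing calculation. The activities, cost figures, and driver volumes are invented for illustration only:

```python
# Minimal activity-based costing sketch (illustrative activities and figures).
activities = {
    # activity: (total monthly cost, cost driver volume)
    "receiving": (12000.0, 400),    # driver: inbound pallets
    "picking":   (30000.0, 15000),  # driver: order lines
    "shipping":  (18000.0, 600),    # driver: outbound trucks
}

# Cost driver rate = activity cost pool / driver volume.
rates = {name: cost / volume for name, (cost, volume) in activities.items()}

def order_cost(pallets, lines, trucks):
    """Cost of one logistics job, assigned through the activity driver rates."""
    return (rates["receiving"] * pallets
            + rates["picking"] * lines
            + rates["shipping"] * trucks)

c = order_cost(pallets=2, lines=50, trucks=1)
```

Each job's cost is traced through the drivers it actually consumes, which is the core idea of activity-based costing over volume-based overhead allocation.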

  16. Neurobiology of parkinsonism: II. experimental models

    Directory of Open Access Journals (Sweden)

    Silvia Ponzoni

    1995-09-01

    Full Text Available The study of experimental models of parkinsonism has contributed to the knowledge of basal ganglia functions, as well as to the establishment of several hypotheses explaining the cause and expression of central neurodegenerative disorders. In this review we present and discuss several models, such as 6-hydroxydopamine, MPTP and manganese, all of them widely used to characterize the behavioral, cellular and molecular mechanisms of parkinsonism.

  17. Account of near-cathode sheath in numerical models of high-pressure arc discharges

    Science.gov (United States)

    Benilov, M. S.; Almeida, N. A.; Baeva, M.; Cunha, M. D.; Benilova, L. G.; Uhrlandt, D.

    2016-06-01

    Three approaches to describing the separation of charges in near-cathode regions of high-pressure arc discharges are compared. The first approach employs a single set of equations, including the Poisson equation, in the whole interelectrode gap. The second approach employs a fully non-equilibrium description of the quasi-neutral bulk plasma, complemented with a newly developed description of the space-charge sheaths. The third, and the simplest, approach exploits the fact that significant power is deposited by the arc power supply into the near-cathode plasma layer, which allows one to simulate the plasma-cathode interaction to the first approximation independently of processes in the bulk plasma. It is found that results given by the different models are generally in good agreement, and in some cases the agreement is even surprisingly good. It follows that the predicted integral characteristics of the plasma-cathode interaction are not strongly affected by details of the model provided that the basic physics is right.

  18. Singing with yourself: evidence for an inverse modeling account of poor-pitch singing.

    Science.gov (United States)

    Pfordresher, Peter Q; Mantell, James T

    2014-05-01

    Singing is a ubiquitous and culturally significant activity that humans engage in from an early age. Nevertheless, some individuals - termed poor-pitch singers - are unable to match target pitches within a musical semitone while singing. In the experiments reported here, we tested whether poor-pitch singing deficits would be reduced when individuals imitate recordings of themselves as opposed to recordings of other individuals. This prediction was based on the hypothesis that poor-pitch singers have not developed an abstract "inverse model" of the auditory-vocal system and instead must rely on sensorimotor associations that they have experienced directly, which is true for sequences an individual has already produced. In three experiments, participants, both accurate and poor-pitch singers, were better able to imitate sung recordings of themselves than sung recordings of other singers. However, this self-advantage was enhanced for poor-pitch singers. These effects were not a byproduct of self-recognition (Experiment 1), vocal timbre (Experiment 2), or the absolute pitch of target recordings (i.e., the advantage remains when recordings are transposed, Experiment 3). Results support the conceptualization of poor-pitch singing as an imitative deficit resulting from a deficient inverse model of the auditory-vocal system with respect to pitch. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Taking into account topography rapid variations in forward modelling and inverse problems in seismology

    Science.gov (United States)

    Capdeville, Y.; Jean-Jacques, M.

    2011-12-01

    Full-waveform modeling of seismic elastic waves in a limited frequency band is now well established, with a set of efficient numerical methods such as the spectral element, discontinuous Galerkin, and finite difference methods. The constant increase of computing power has also made it possible to use full waveforms in a limited frequency band to image the elastic properties of the Earth. Nevertheless, inhomogeneities at scales much smaller than the minimum wavelength of the wavefield (associated with the maximum frequency of the band) remain a challenge for both forward and inverse problems. In this work, we tackle the problem of a topography varying much faster than the minimum wavelength. Using a non-periodic homogenization theory and a matched asymptotic technique, we show how to remove the fast variation of the topography and replace it with a smooth Dirichlet-to-Neumann operator at the surface. After showing some 2D forward modeling numerical examples, we discuss the implications of this development for both forward and inverse problems.

  20. One-dimensional model of oxygen transport impedance accounting for convection perpendicular to the electrode

    Energy Technology Data Exchange (ETDEWEB)

    Mainka, J. [Laboratorio Nacional de Computacao Cientifica (LNCC), CMC 6097, Av. Getulio Vargas 333, 25651-075 Petropolis, RJ, Caixa Postal 95113 (Brazil); Maranzana, G.; Thomas, A.; Dillet, J.; Didierjean, S.; Lottin, O. [Laboratoire d' Energetique et de Mecanique Theorique et Appliquee (LEMTA), Universite de Lorraine, 2, avenue de la Foret de Haye, 54504 Vandoeuvre-les-Nancy (France); LEMTA, CNRS, 2, avenue de la Foret de Haye, 54504 Vandoeuvre-les-Nancy (France)

    2012-10-15

    A one-dimensional (1D) model of oxygen transport in the diffusion media of proton exchange membrane fuel cells (PEMFC) is presented, which considers convection perpendicular to the electrode in addition to diffusion. The resulting analytical expression of the convecto-diffusive impedance is obtained using a convection-diffusion equation instead of a diffusion equation in the case of classical Warburg impedance. The main hypothesis of the model is that the convective flux is generated by the evacuation of water produced at the cathode which flows through the porous media in the vapor phase. This allows the expression of the convective flux velocity as a function of the current density and of the water transport coefficient α (the fraction of water being evacuated at the cathode outlet). The resulting 1D oxygen transport impedance neglects processes occurring in the direction parallel to the electrode that could have a significant impact on the cell impedance, like gas consumption or concentration oscillations induced by the measuring signal. However, it enables us to estimate the impact of convection perpendicular to the electrode on PEMFC impedance spectra and to determine in which conditions the approximation of a purely diffusive oxygen transport is valid. Experimental observations confirm the numerical results. (Copyright 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
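As a point of reference for the convecto-diffusive impedance discussed above, the classical finite-length Warburg element (the purely diffusive case, which the paper's model should recover when the convective velocity vanishes) can be evaluated numerically. The R_d and τ values below are illustrative assumptions:

```python
import cmath

def warburg_finite(omega, R_d, tau):
    """Classical finite-length Warburg impedance for purely diffusive O2
    transport: Z(omega) = R_d * tanh(sqrt(j*omega*tau)) / sqrt(j*omega*tau),
    with R_d the diffusion resistance and tau the diffusion time constant."""
    if omega == 0.0:
        return complex(R_d, 0.0)  # DC limit: tanh(s)/s -> 1
    s = cmath.sqrt(1j * omega * tau)
    return R_d * cmath.tanh(s) / s

Z_lo = warburg_finite(1e-4, R_d=0.2, tau=0.1)  # low frequency: |Z| -> R_d
Z_hi = warburg_finite(1e3, R_d=0.2, tau=0.1)   # high frequency: |Z| shrinks
```

The convective term in the paper modifies this expression through the water transport coefficient α; comparing the two forms is how one decides when the purely diffusive approximation is valid.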

  1. Modeling co-occurrence of northern spotted and barred owls: accounting for detection probability differences

    Science.gov (United States)

    Bailey, Larissa L.; Reid, Janice A.; Forsman, Eric D.; Nichols, James D.

    2009-01-01

    Barred owls (Strix varia) have recently expanded their range and now encompass the entire range of the northern spotted owl (Strix occidentalis caurina). This expansion has led to two important issues of concern for management of northern spotted owls: (1) possible competitive interactions between the two species that could contribute to population declines of northern spotted owls, and (2) possible changes in vocalization behavior and detection probabilities of northern spotted owls induced by presence of barred owls. We used a two-species occupancy model to investigate whether there was evidence of competitive exclusion between the two species at study locations in Oregon, USA. We simultaneously estimated detection probabilities for both species and determined if the presence of one species influenced the detection of the other species. Model selection results and associated parameter estimates provided no evidence that barred owls excluded spotted owls from territories. We found strong evidence that detection probabilities differed for the two species, with higher probabilities for northern spotted owls that are the object of current surveys. Non-detection of barred owls is very common in surveys for northern spotted owls, and detection of both owl species was negatively influenced by the presence of the congeneric species. Our results suggest that analyses directed at hypotheses of barred owl effects on demographic or occupancy vital rates of northern spotted owls need to deal adequately with imperfect and variable detection probabilities for both species.
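The detection-probability side of such a two-species occupancy model can be sketched with the likelihood of a single site's detection history. The parameter values are illustrative only, not the paper's estimates; they encode the qualitative finding that detection drops when the congeneric species is present:

```python
def detection_history_prob(history, p):
    """Probability of a 0/1 detection history at an occupied site, given a
    constant per-survey detection probability p."""
    prob = 1.0
    for d in history:
        prob *= p if d else (1.0 - p)
    return prob

# Illustrative (not estimated) parameters: spotted owl detection is higher
# overall, and drops when the congeneric barred owl occupies the same site.
p_alone, p_with_congener = 0.6, 0.45

history = [1, 0, 1]  # detected on surveys 1 and 3 of a 3-survey season
prob_alone = detection_history_prob(history, p_alone)
prob_with = detection_history_prob(history, p_with_congener)
```

In the full model these history probabilities are mixed over the joint occupancy states of both species, which is what lets occupancy and detection be estimated simultaneously despite imperfect detection.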

  2. Accounting emergy flows to determine the best production model of a coffee plantation

    Energy Technology Data Exchange (ETDEWEB)

    Giannetti, B.F.; Ogura, Y.; Bonilla, S.H. [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil); Almeida, C.M.V.B., E-mail: cmvbag@terra.com.br [Universidade Paulista, Programa de Pos Graduacao em Engenharia de Producao, R. Dr. Bacelar, 1212 Sao Paulo SP (Brazil)

    2011-11-15

    Cerrado, a savannah region, is Brazil's second largest ecosystem after the Amazon rainforest and is also threatened with imminent destruction. In the present study emergy synthesis was applied to assess the environmental performance of a coffee farm located in Coromandel, Minas Gerais, in the Brazilian Cerrado. The effects of land use on sustainability were evaluated by comparing the emergy indices over ten years in order to assess the energy flows driving the production process and to determine the best production model combining productivity and environmental performance. The emergy indices are presented as a function of the annual crop. Results show that Santo Inacio farm should produce approximately 20 bags of green coffee per hectare to achieve its best performance regarding both production efficiency and the environment. The evaluation of coffee trade complements the results obtained by contrasting productivity and environmental performance, and despite variations in market prices, the optimum interval for Santo Inacio's farm is between 10 and 25 coffee bags/ha. - Highlights: > Emergy synthesis is used to assess the environmental performance of a coffee farm in Brazil. > The effects of land use on sustainability were evaluated along ten years. > The energy flows driving the production process were assessed. > The best production model combining productivity and environmental performance was determined.
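The emergy indices compared across crop years in studies like this one are simple ratios of aggregated emergy flows. The flow values below are invented for illustration, not the farm's data:

```python
def emergy_indices(R, N, F):
    """Standard emergy indices (flows in seJ/yr):
    R = locally renewable inputs, N = local non-renewable inputs,
    F = purchased (feedback) inputs from the economy."""
    Y = R + N + F        # total emergy yield
    EYR = Y / F          # emergy yield ratio
    ELR = (N + F) / R    # environmental loading ratio
    ESI = EYR / ELR      # emergy sustainability index
    return EYR, ELR, ESI

# Illustrative flows for one crop year (hypothetical figures).
EYR, ELR, ESI = emergy_indices(R=4.0e15, N=1.0e15, F=5.0e15)
```

Tracking how these ratios move from year to year as yields change is what allows an optimum production range (here, the 10-25 bags/ha interval) to be identified.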

  3. Models of cuspy triaxial stellar systems. II. Regular orbits

    CERN Document Server

    Muzzio, J C; Zorzi, A F

    2012-01-01

    In the first paper of this series we used the N-body method to build a dozen cuspy (gamma ~ 1) triaxial models of stellar systems, and we showed that they were highly stable over time intervals of the order of a Hubble time, even though they had very large fractions of chaotic orbits (more than 85 per cent in some cases). The models were grouped in four sets, each one comprising models morphologically resembling E2, E3, E4 and E5 galaxies, respectively. The three models within each set, although different, had the same global properties and were statistically equivalent. In the present paper we use frequency analysis to classify the regular orbits of those models. The bulk of those orbits are short axis tubes (SATs), with a significant fraction of long axis tubes (LATs) in the E2 models that decreases in the E3 and E4 models to become negligibly small in the E5 models. Most of the LATs in the E2 and E3 models are outer LATs, but the situation reverses in the E4 and E5 models where the few LATs are mainly inn...

  4. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections.

    Science.gov (United States)

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-06-18

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications that use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Unlike the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, stochastic information, i.e., precision indexes, is given together with the correction values in the improved model, whereas the traditional model provides correction values only. With the improved model, users may have a better understanding of their corrections, especially the uncertainty of the corrections, which helps in refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical: the actual precision of the corrected code observations is reflected more objectively when the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations are largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias correction with either the traditional or the improved model.
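An elevation-dependent correction of this kind, with its published precision index propagated into the observation weight, can be sketched as follows. The node values, sigmas, and the a priori code noise are hypothetical numbers for illustration, not the paper's correction table:

```python
import numpy as np

# Hypothetical correction table: elevation nodes (deg), correction (m), and
# the precision index (sigma, m) that the improved model publishes with it.
elev_nodes = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
corr_nodes = np.array([-0.6, -0.4, -0.1, 0.1, 0.3, 0.4, 0.4])
sigma_nodes = np.array([0.20, 0.12, 0.08, 0.06, 0.05, 0.05, 0.05])

def correct_code(P, elev):
    """Apply the elevation-dependent bias correction and fold its sigma into
    the observation weight (1/variance), as the abstract advocates."""
    corr = np.interp(elev, elev_nodes, corr_nodes)
    sigma_corr = np.interp(elev, elev_nodes, sigma_nodes)
    sigma_code = 0.3  # assumed a priori code noise (m)
    var_total = sigma_code**2 + sigma_corr**2
    return P - corr, 1.0 / var_total

P_corr, weight = correct_code(P=22123456.789, elev=37.5)
```

Without the sigma of the correction, the weight would be based on the code noise alone, overstating the precision of corrected observations at low elevations, which is exactly the refinement of the stochastic model the abstract argues for.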

  6. Two-Higgs-doublet model of type II confronted with the LHC run I and run II data

    Science.gov (United States)

    Wang, Lei; Zhang, Feng; Han, Xiao-Fang

    2017-06-01

    We examine the parameter space of the two-Higgs-doublet model of type II after imposing the relevant theoretical and experimental constraints from the precision electroweak data, B-meson decays, and the LHC run I and run II data. We find that the searches for Higgs bosons via the τ+τ-, WW, ZZ, γγ, hh, hZ, HZ, and AZ channels can give strong constraints on the CP-odd Higgs A and heavy CP-even Higgs H, and the parameter space excluded by each channel is respectively carved out in detail assuming that either mA or mH is fixed to 600 or 700 GeV in the scans. The surviving samples are discussed in two different regions. (i) In the standard-model-like coupling region of the 125 GeV Higgs, mA is allowed to be as low as 350 GeV, and a strong upper limit is imposed on tan β. mH is allowed to be as low as 200 GeV for appropriate values of tan β, sin(β-α), and mA, but is required to be larger than 300 GeV for mA = 700 GeV. (ii) In the wrong-sign Yukawa coupling region of the 125 GeV Higgs, the bb̄ → A/H → τ+τ- channel can impose upper limits on tan β and sin(β-α), and the A → hZ channel can give lower limits on tan β and sin(β-α). mA and mH are allowed to be as low as 60 and 200 GeV, respectively, but 320 GeV

  7. The application of multilevel modelling to account for the influence of walking speed in gait analysis.

    Science.gov (United States)

    Keene, David J; Moe-Nilssen, Rolf; Lamb, Sarah E

    2016-01-01

    Differences in gait performance can be explained by variations in walking speed, which is a major analytical problem. Some investigators have standardised speed during testing, but this can result in an unnatural control of gait characteristics. Other investigators have developed test procedures in which participants walk at their self-selected slow, preferred, and fast speeds, with computation of gait characteristics at a standardised speed. However, this analysis depends on an overlap in the ranges of gait speed observed within and between participants, which is difficult to achieve under self-selected conditions. In this report a statistical analysis procedure is introduced that utilises multilevel modelling to analyse data from walking tests at self-selected speeds, without requiring an overlap in the range of speeds observed or the routine use of data transformations.
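The core idea, separating within-subject speed effects from between-subject differences, can be sketched without a full mixed-model package by centering speed within each subject. The simulated data and parameter values below are invented for illustration; a real analysis would use the multilevel model the report describes:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subj, n_walks = 30, 6
true_slope = 0.8  # assumed within-subject effect of speed on the gait outcome

rows = []
for _ in range(n_subj):
    intercept = rng.normal(0.6, 0.15)        # subject-level (random) intercept
    speeds = rng.uniform(0.6, 1.8, n_walks)  # self-selected slow..fast (m/s)
    outcome = intercept + true_slope * speeds + rng.normal(0, 0.02, n_walks)
    rows.append((speeds, outcome))

# Within-subject centering removes the subject intercepts, so the pooled
# regression recovers the speed effect without requiring overlapping speed
# ranges between subjects.
xs = np.concatenate([s - s.mean() for s, _ in rows])
ys = np.concatenate([y - y.mean() for _, y in rows])
slope_hat = float(xs @ ys / (xs @ xs))
```

This is the fixed-effects analogue of the random-intercept multilevel model: each subject serves as their own control across their slow, preferred, and fast walks.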

  8. Accounting for crustal magnetization in models of the core magnetic field

    Science.gov (United States)

    Jackson, Andrew

    1990-01-01

    The problem of determining the magnetic field originating in the earth's core in the presence of remanent and induced magnetization is considered. The effect of remanent magnetization in the crust on satellite measurements of the core magnetic field is investigated. The crust is modelled as a zero-mean stationary Gaussian random process using an idea proposed by Parker (1988). It is shown that the matrix of second-order statistics is proportional to the Gram matrix, which depends only on the inner products of the appropriate Green's functions, and that at a typical satellite altitude of 400 km the data are correlated out to an angular separation of approximately 15 deg. Accurate and efficient means of calculating the matrix elements are given. It is shown that the variance of measurements of the radial component of a magnetic field due to the crust is expected to be approximately twice that in horizontal components.

  9. A coupled surface/subsurface flow model accounting for air entrapment and air pressure counterflow

    DEFF Research Database (Denmark)

    Delfs, Jens Olaf; Wang, Wenqing; Kalbacher, Thomas

    2013-01-01

    This work introduces the soil air system into integrated hydrology by simulating the flow processes and interactions of surface runoff, soil moisture and air in the shallow subsurface. The numerical model is formulated as a coupled system of partial differential equations for hydrostatic (diffusive...... algorithm, leakances operate as a valve for gas pressure in a liquid-covered porous medium facilitating the simulation of air out-break events through the land surface. General criteria are stated to guarantee stability in a sequential iterative coupling algorithm and, in addition, for leakances to control...... the mass exchange between compartments. A benchmark test, which is based on a classic experimental data set on infiltration excess (Horton) overland flow, identified a feedback mechanism between surface runoff and soil air pressures. Our study suggests that air compression in soils amplifies surface runoff...

  10. Accounting for sex differences in PTSD: A multi-variable mediation model

    DEFF Research Database (Denmark)

    Christiansen, Dorte M.; Hansen, Maj

    2015-01-01

    ABSTRACT Background: Approximately twice as many females as males are diagnosed with posttraumatic stress disorder (PTSD). However, little is known about why females report more PTSD symptoms than males. Prior studies have generally focused on few potential mediators at a time and have often used...... specifically to test a multiple mediator model. Results: Females reported more PTSD symptoms than males and higher levels of neuroticism, depression, physical anxiety sensitivity, peritraumatic fear, horror, and helplessness (the A2 criterion), tonic immobility, panic, dissociation, negative posttraumatic...... that females report more PTSD symptoms because they experience higher levels of associated risk factors. The results are relevant to other trauma populations and to other trauma- related psychiatric disorders more prevalent in females, such as depression and anxiety. Keywords: Posttraumatic stress disorder...

  11. A hybrid mode choice model to account for the dynamic effect of inertia over time

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Börjesson, Maria; Bierlaire, Michel

    gathered over a continuous period of time, six weeks, to study both inertia and the influence of habits. Tendency to stick with the same alternative is measured through lagged variables that link the current choice with the previous trip made with the same purpose, mode and time of day. However, the lagged...... effect of the previous trips is not constant but it depends on the individual propensity to undertake habitual trips which is captured by the individual specific latent variable. And the frequency of the trips in the previous week is used as an indicator of the habitual behavior. The model estimation...... confirms that the tendency to stick with the same alternative varies not only among modes but also across individuals as a function of the individual propensity to undertake habitual behavior....

  12. Accounting for subordinate perceptions of supervisor power: an identity-dependence model.

    Science.gov (United States)

    Farmer, Steven M; Aguinis, Herman

    2005-11-01

    The authors present a model that explains how subordinates perceive the power of their supervisors and the causal mechanisms by which these perceptions translate into subordinate outcomes. Drawing on identity and resource-dependence theories, the authors propose that supervisors have power over their subordinates when they control resources needed for the subordinates' enactment and maintenance of current and desired identities. The joint effect of perceptions of supervisor power and supervisor intentions to provide such resources leads to 4 conditions ranging from highly functional to highly dysfunctional: confirmation, hope, apathy, and progressive withdrawal. Each of these conditions is associated with specific outcomes such as the quality of the supervisor-subordinate relationship, turnover, and changes in the type and centrality of various subordinate identities. ((c) 2005 APA, all rights reserved).

  13. Does Reading Cause Later Intelligence? Accounting for Stability in Models of Change.

    Science.gov (United States)

    Bailey, Drew H; Littlefield, Andrew K

    2016-11-08

    This study reanalyzes data presented by Ritchie, Bates, and Plomin (2015) who used a cross-lagged monozygotic twin differences design to test whether reading ability caused changes in intelligence. The authors used data from a sample of 1,890 monozygotic twin pairs tested on reading ability and intelligence at five occasions between the ages of 7 and 16, regressing twin differences in intelligence on twin differences in prior intelligence and twin differences in prior reading ability. Results from a state-trait model suggest that reported effects of reading ability on later intelligence may be artifacts of previously uncontrolled factors, both environmental in origin and stable during this developmental period, influencing both constructs throughout development. Implications for cognitive developmental theory and methods are discussed. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  14. Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation

    KAUST Repository

    De La Garza Martinez, Pablo

    2016-05-01

    Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to include the possibility of fluids to remain trapped inside the pore space. In this work I proposed a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly and does not perturb the accuracy of results. Additionally, I included spatial correlation to generate the pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values for irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm from Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
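The neighbor-based trapping rule proposed above can be sketched on a toy network. The dictionary representation and the tiny chain below are hypothetical; the point is that the rule inspects only adjacent pores rather than running a global cluster search for an escape path:

```python
# Local trapping rule sketch: instead of a global search for a connected
# escape path of the defending phase, a defender-filled pore is flagged as
# trapped as soon as every neighboring pore has been invaded.

def update_trapped(invaded, neighbors):
    """invaded: dict pore -> bool (True if invaded by the invading phase);
    neighbors: dict pore -> list of adjacent pores.
    Returns the set of defender pores flagged as trapped by the local rule."""
    trapped = set()
    for pore, is_invaded in invaded.items():
        if not is_invaded and all(invaded[n] for n in neighbors[pore]):
            trapped.add(pore)
    return trapped

# Tiny 1D chain of pores 0-4; pore 2 still holds the defending phase and all
# of its neighbors have been invaded, so the local rule traps it.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
invaded = {0: True, 1: True, 2: False, 3: True, 4: True}
trapped = update_trapped(invaded, neighbors)
```

The approximation can miss multi-pore trapped clusters that a full connectivity search would find, which is the accuracy trade-off against the large reduction in simulation time that the abstract reports.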

  15. Accounting for non-linear chemistry of ship plumes in the GEOS-Chem global chemistry transport model

    Directory of Open Access Journals (Sweden)

    G. C. M. Vinken

    2011-06-01

    Full Text Available We present a computationally efficient approach to account for the non-linear chemistry occurring during the dispersion of ship exhaust plumes in a global 3-D model of atmospheric chemistry (GEOS-Chem). We use a plume-in-grid formulation where ship emissions age chemically for 5 h before being released in the global model grid. Besides reducing the original ship NOx emissions in GEOS-Chem, our approach also releases into the model the secondary compounds ozone and HNO3 produced in the 5 h after the original emissions. We applied our improved method, and also the widely used "instant dilution" approach, to a 1-yr GEOS-Chem simulation of global tropospheric ozone-NOx-VOC-aerosol chemistry. We also ran simulations with the standard model, and a model without any ship emissions at all. Our improved GEOS-Chem model simulates up to 0.1 ppbv (or 90 %) more NOx over the North Atlantic in July than GEOS-Chem versions without any ship NOx emissions at all. "Instant dilution" overestimates NOx concentrations by 50 % (0.1 ppbv) and ozone by 10-25 % (3-5 ppbv) over this region. These conclusions are supported by comparing simulated and observed NOx and ozone concentrations in the lower troposphere over the Pacific Ocean. The comparisons show that the improved GEOS-Chem model simulates NOx concentrations in between those of the instant-dilution model and the model with no ship emissions, and results in lower O3 concentrations than the instant-dilution model. The relative differences in simulated NOx and ozone between our improved approach and instant dilution are smallest over strongly polluted seas (e.g. the North Sea), suggesting that accounting for in-plume chemistry is most relevant for pristine marine areas.

  16. Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite

    Science.gov (United States)

    Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-12-01

    Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption on minerals (e.g. diatomite) is an important means of controlling aqueous pollution in the environment, so it is essential to understand the surface adsorption behavior and mechanism. In this work, the apparent surface complexation reaction equilibrium constants of Pb(II) on calcined diatomite and the distributions of Pb(II) surface species were investigated through modeling calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and of ionic strength (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) is well described by Freundlich isotherm models. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using the PEST 13.0 and PHREEQC 3.1.2 codes, with good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite calculated with PHREEQC 3.1.2 indicates that the impurity cations (e.g. Al3+, Fe3+, etc.) in the diatomite play a leading role in Pb(II) adsorption, and that the formation of complexes, together with additional electrostatic interaction, is the main adsorption mechanism of Pb(II) on the diatomite under weakly acidic conditions.
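The Freundlich isotherm used above, q = K_F * C^(1/n), is linear in log-log space, so its parameters can be recovered by a simple least-squares fit. The synthetic data below are generated with assumed parameters purely to illustrate the procedure, not the paper's measurements:

```python
import math

def fit_freundlich(C, q):
    """Fit the Freundlich isotherm q = K_F * C**(1/n) by ordinary least
    squares in log-log space: ln q = ln K_F + (1/n) * ln C."""
    x = [math.log(c) for c in C]
    y = [math.log(v) for v in q]
    n_pts = len(x)
    xbar, ybar = sum(x) / n_pts, sum(y) / n_pts
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    K_F = math.exp(ybar - slope * xbar)
    return K_F, 1.0 / slope  # (K_F, n)

# Synthetic equilibrium data generated with K_F = 2.5, n = 2 (illustrative).
C = [0.5, 1.0, 2.0, 5.0, 10.0]
q = [2.5 * c ** 0.5 for c in C]
K_F, n = fit_freundlich(C, q)
```

With real batch data the fit would be repeated per pH and ionic strength, since the adsorption edge shifts with both conditions.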

  17. Modelling of a Mecanum wheel taking into account the geometry of road rollers

    Science.gov (United States)

    Hryniewicz, P.; Gwiazda, A.; Banaś, W.; Sękala, A.; Foit, K.

    2017-08-01

    During process planning in a company, one of the basic factors associated with production costs is the operation time for particular technological jobs. The operation time consists of time units associated with the machining tasks of a workpiece as well as the time associated with loading, unloading, and transport of the workpiece between machining stands. Full automation of manufacturing tends toward a maximal reduction in machine downtimes, thereby simultaneously decreasing fixed costs. A new construction of wheeled vehicles, using Mecanum wheels, reduces the transport time of materials and workpieces between machining stands. These vehicles are able to move simultaneously along two axes and can therefore position themselves more rapidly relative to a machining stand. The Mecanum wheel construction places free rollers around the wheel, mounted at an angle of 45°, which allow the vehicle to move not only along its axis but also perpendicular to it. Improper selection of the rollers can cause unwanted vertical movement of the vehicle, which may make it difficult to position the vehicle relative to the machining stand and create a need for stabilisation. Hence the proper design of the free rollers is essential to the design of the whole Mecanum wheel, as it avoids disadvantageous vertical vibrations of a vehicle with these wheels. The article presents the process of modelling the free rollers in order to obtain an unchanging, horizontal trajectory of the vehicle. This shape depends on the desired diameter of the whole Mecanum wheel, together with the road rollers, and on the width of the drive wheel. Another factor related to the curvature of the trajectory is the length of the road roller and the decrease of its diameter depending on the position with respect to its centre. The additional factor, limiting construction of…
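
The two-axis mobility that motivates the 45° roller angle can be illustrated with the standard inverse kinematics of a four-Mecanum-wheel platform. The sketch below uses one common sign convention (conventions differ between sources) and illustrative geometry values; it is not taken from the article.

```python
import numpy as np

r = 0.05              # wheel radius (m), illustrative
lx, ly = 0.20, 0.15   # half wheelbase and half track width (m), illustrative

# Inverse-kinematics matrix for rollers at 45 deg:
# wheel order: front-left, front-right, rear-left, rear-right.
J = (1.0 / r) * np.array([
    [1, -1, -(lx + ly)],
    [1,  1,  (lx + ly)],
    [1,  1, -(lx + ly)],
    [1, -1,  (lx + ly)],
])

def wheel_speeds(vx, vy, wz):
    """Wheel angular velocities (rad/s) for a body twist (vx, vy, wz)."""
    return J @ np.array([vx, vy, wz])

print(wheel_speeds(0.5, 0.0, 0.0))  # pure forward motion: all wheels equal
print(wheel_speeds(0.0, 0.5, 0.0))  # pure sideways motion: alternating signs
```

The sideways case, impossible for conventional wheels, is exactly the capability that proper roller geometry must preserve without introducing vertical motion.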

  18. Accounting assessment

    Directory of Open Access Journals (Sweden)

    Kafka S.М.

    2017-03-01

    Full Text Available The proper valuation of accounting objects essentially influences the reliability of assessing a company's financial situation; the problem of accounting estimates is therefore quite relevant. The works of domestic and foreign scholars on the valuation of accounting objects, together with the regulatory and legal acts of Ukraine governing accounting and the compilation of financial reporting, form the methodological basis for the research. The author uses theoretical methods of cognition (abstraction and generalization, analysis and synthesis, induction and deduction, and other methods producing conceptual knowledge) for the synthesis of theoretical and methodological principles in the valuation of assets, liabilities, and equity in accounting. Tabular presentation and information comparison methods are used for the analytical research. The article considers modern approaches to the valuation of accounting objects and financial statement items. The expedience of keeping records at historical value is demonstrated, while items of the financial statements should be presented according to their valuation at the reporting date. In connection with valuation, the depreciation of fixed assets is considered as a process of systematically returning into circulation the funds previously advanced for the purchase (production, improvement) of fixed assets and intangible assets, by including the amount of wear in production costs. It is therefore proposed to amortize only the actual costs incurred, i.e. not to depreciate fixed assets received free of charge or revaluation surpluses of various kinds.

  19. An extended macro traffic flow model accounting for the driver's bounded rationality and numerical tests

    Science.gov (United States)

    Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan

    2017-02-01

    In this paper, we propose a macro traffic flow model to explore the effects of the driver's bounded rationality on the evolution of traffic waves (shock and rarefaction waves) and of a small perturbation, and on the fuel consumption and emissions (CO, HC, and NOx) during the evolution process. The numerical results illustrate that considering the driver's bounded rationality prominently smooths the wavefront of the traffic waves and improves the stability of traffic flow, i.e., the driver's bounded rationality has positive impacts on traffic flow. However, considering the driver's bounded rationality reduces the fuel consumption and emissions only upstream of the rarefaction wave and enhances them in all other situations; that is, it has positive impacts on fuel consumption and emissions only upstream of the rarefaction wave, and negative effects otherwise. In addition, the numerical results show that the driver's bounded rationality has little impact on the total fuel consumption and emissions during the whole evolution of a small perturbation.
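
For context on how such macroscopic models are evolved numerically, the following is a generic first-order scheme for an LWR-type density equation with the Greenshields flux. It is not the paper's bounded-rationality model, merely an illustration of how a front between light and heavy traffic is propagated on a grid.

```python
import numpy as np

rho_max, v_free = 1.0, 1.0
flux = lambda rho: rho * v_free * (1 - rho / rho_max)  # Greenshields q(rho)

dx, dt = 0.01, 0.004                 # satisfies the CFL condition v_free*dt/dx < 1
x = np.arange(0, 1, dx)
rho = np.where(x < 0.5, 0.2, 0.8)    # light traffic running into heavy traffic

# Lax-Friedrichs time stepping (interior points; boundary values held fixed).
for _ in range(50):
    f = flux(rho)
    rho[1:-1] = 0.5 * (rho[2:] + rho[:-2]) - dt / (2 * dx) * (f[2:] - f[:-2])

print(round(rho.mean(), 3))  # → 0.5: total density (mass) is conserved
```

For this particular jump, f(0.2) = f(0.8), so the shock is stationary and the scheme merely smears its front, which is the kind of wavefront behavior the paper's bounded-rationality term is reported to modify.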

  20. Rapid prediction of damage on a struck ship accounting for side impact scenario models

    Science.gov (United States)

    Prabowo, Aditya Rio; Bae, Dong Myung; Sohn, Jung Min; Zakki, Ahmad Fauzan; Cao, Bo

    2017-04-01

    Impact phenomena are an inseparable part of physical systems at every scale, from subatomic particles to macrostructures such as ships. In a ship collision, a short-period load is transferred from the striking ship to the struck ship during the impact process. The kinetic energy of the striking ship is absorbed by the struck ship, whose structure undergoes plastic deformation and failure. This paper presents a study focused on predicting the damage to the side hull of the struck ship for various impact scenario models. The scenarios are calculated with a finite element approach to obtain the damage, energy, and load characteristics during and after the impact process. The results indicate that the damage from impacts on longitudinal components, such as the main and car decks, is smaller than that from impacts on transverse structural components. The damage and deformation are widely distributed over almost all side structures, including the inner structure. The width between the outer and inner shells strongly affects the damage mode: a width below two metres causes the inner shell to experience damage beyond plastic deformation. The contribution of structural components is shown to have a significant effect on the damage mode, and the material strengths clearly affect the energy and load characteristics.

  1. Rhode Island Model Evaluation & Support System: Support Professional. Edition II

    Science.gov (United States)

    Rhode Island Department of Education, 2015

    2015-01-01

    Rhode Island educators believe that implementing a fair, accurate, and meaningful evaluation and support system for support professionals will help improve student outcomes. The primary purpose of the Rhode Island Model Support Professional Evaluation and Support System (Rhode Island Model) is to help all support professionals do their best work…

  2. Multiscale geometric modeling of macromolecules II: Lagrangian representation.

    Science.gov (United States)

    Feng, Xin; Xia, Kelin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2013-09-15

    Geometric modeling of biomolecules plays an essential role in the conceptualization of biomolecular structure, function, dynamics, and transport. Qualitatively, geometric modeling offers a basis for molecular visualization, which is crucial for the understanding of molecular structure and interactions. Quantitatively, geometric modeling bridges the gap between molecular information, such as that from X-ray, NMR, and cryo-electron microscopy, and theoretical/mathematical models, such as molecular dynamics, the Poisson-Boltzmann equation, and the Nernst-Planck equation. In this work, we present a family of variational multiscale geometric models for macromolecular systems. Our models are able to combine multiresolution geometric modeling with multiscale electrostatic modeling in a unified variational framework. We discuss a suite of techniques for molecular surface generation, molecular surface meshing, molecular volumetric meshing, and the estimation of Hadwiger's functionals. Emphasis is given to the multiresolution representations of biomolecules and the associated multiscale electrostatic analyses as well as multiresolution curvature characterizations. The resulting fine resolution representations of a biomolecular system enable the detailed analysis of solvent-solute interaction and ion channel dynamics, whereas our coarse resolution representations highlight the compatibility of protein-ligand bindings and possibility of protein-protein interactions.

  3. A Primary Prevention Program: Teaching Models I and II.

    Science.gov (United States)

    Harlan, Nancy T; Tschiderer, Patricia A.

    Two teaching models of a service delivery program designed to prevent speech-language problems in lower socioeconomic children were compared. Specific goals included increasing mothers' awareness of the sensory input to which infants are responsive and increasing mothers' abilities to read infant nonverbal signals. In Model 1, two speech-language…

  4. Shunted-Josephson-junction model. II. The nonautonomous case

    DEFF Research Database (Denmark)

    Belykh, V. N.; Pedersen, Niels Falsig; Sørensen, O. H.

    1977-01-01

    The shunted-Josephson-junction model with a monochromatic ac current drive is discussed employing the qualitative methods of the theory of nonlinear oscillations. As in the preceding paper dealing with the autonomous junction, the model includes a phase-dependent conductance and a shunt capacitance...

  5. Filament winding cylinders. II - Validation of the process model

    Science.gov (United States)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.

  6. A statistical model-based technique for accounting for prostate gland deformation in endorectal coil-based MR imaging.

    Science.gov (United States)

    Tahmasebi, Amir M; Sharifi, Reza; Agarwal, Harsh K; Turkbey, Baris; Bernardo, Marcelino; Choyke, Peter; Pinto, Peter; Wood, Bradford; Kruecker, Jochen

    2012-01-01

    In prostate brachytherapy procedures, combining high-resolution endorectal coil (ERC) MRI with computed tomography (CT) images has been shown to improve the diagnostic specificity for malignant tumors. Despite this advantage, a major complication in fusing the two imaging modalities is the deformation of the prostate shape in ERC-MRI. Conventionally, nonlinear deformable registration techniques have been utilized to account for such deformation. In this work, we present a model-based technique for accounting for the deformation of the prostate gland in ERC-MR imaging, in which a unique deformation vector is estimated for every point within the prostate gland. Modes of deformation for every point in the prostate are statistically identified using an MR-based training set (with and without ERC-MRI). Deformation of the prostate from a deformed (ERC-MRI) to a non-deformed state in a different modality (CT) is then realized by first calculating partial deformation information for a limited number of points (such as surface points or anatomical landmarks) and then using the calculated deformation for this subset of points to determine the coefficient values of the deformation modes provided by the statistical deformation model. Using leave-one-out cross-validation, our results demonstrated a mean estimation error of 1 mm for MR-to-MR registration.
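
The core numerical step described here, estimating mode coefficients from a subset of points and then predicting the full deformation field, amounts to a least-squares problem. A minimal sketch with synthetic stand-in modes (not the paper's statistically trained modes):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, n_modes = 200, 3
U = rng.normal(size=(n_pts, n_modes))  # stand-in deformation modes (e.g. from PCA)
c_true = np.array([1.5, -0.7, 0.3])    # "true" mode coefficients
d_full = U @ c_true                    # full deformation field over all points

# Only a small subset of points (e.g. surface points or landmarks) is observed.
known = rng.choice(n_pts, size=20, replace=False)
c_hat, *_ = np.linalg.lstsq(U[known], d_full[known], rcond=None)

# The fitted coefficients predict the deformation everywhere.
d_pred = U @ c_hat
print(np.max(np.abs(d_pred - d_full)))  # ~0: full field recovered exactly here
```

With noise-free observations and more known points than modes, the recovery is exact; in practice the low-rank mode basis is what regularizes the extrapolation from sparse landmarks to the whole gland.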

  7. Rooting opinions in the minds: a cognitive model and a formal account of opinions and their dynamics

    CERN Document Server

    Giardini, Francesca; Conte, Rosaria

    2011-01-01

    The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, like computer science and complexity, have tried to deal with this issue. Despite the flourishing of different models and theories in both fields, several key questions still remain unanswered. The understanding of how opinions change and the way they are affected by social influence are challenging issues requiring a thorough analysis of opinion per se but also of the way in which they travel between agents' minds and are modulated by these exchanges. To account for the two-faceted nature of opinions, which are mental entities undergoing complex social processes, we outline a preliminary model in which a cognitive theory of opinions is put forward and it is paired with a formal description of them and of their spreading among minds. Furthermore, investigating social influence also implies the necessity to account for the way in which people change their minds...

  8. Model Fitting Versus Curve Fitting: A Model of Renormalization Provides a Better Account of Age Aftereffects Than a Model of Local Repulsion.

    Science.gov (United States)

    O'Neil, Sean F; Mac, Amy; Rhodes, Gillian; Webster, Michael A

    2015-12-01

    Recently, we proposed that the aftereffects of adapting to facial age are consistent with a renormalization of the perceived age (e.g., so that after adapting to a younger or older age, all ages appear slightly older or younger, respectively). This conclusion has been challenged by arguing that the aftereffects can also be accounted for by an alternative model based on repulsion (in which facial ages above or below the adapting age are biased away from the adaptor). However, we show here that this challenge was based on allowing the fitted functions to take on values which are implausible and incompatible across the different adapting conditions. When the fits are constrained or interpreted in terms of standard assumptions about normalization and repulsion, then the two analyses both agree in pointing to a pattern of renormalization in age aftereffects.

  9. Conceptual Modeling in the Time of the Revolution: Part II

    Science.gov (United States)

    Mylopoulos, John

    Conceptual Modeling was a marginal research topic at the very fringes of Computer Science in the 60s and 70s, when the discipline was dominated by topics focusing on programs, systems and hardware architectures. Over the years, however, the field has moved to centre stage and has come to claim a central role both in Computer Science research and practice in diverse areas, such as Software Engineering, Databases, Information Systems, the Semantic Web, Business Process Management, Service-Oriented Computing, Multi-Agent Systems, Knowledge Management, and more. The transformation was greatly aided by the adoption of standards in modeling languages (e.g., UML), and model-based methodologies (e.g., Model-Driven Architectures) by the Object Management Group (OMG) and other standards organizations. We briefly review the history of the field over the past 40 years, focusing on the evolution of key ideas. We then note some open challenges and report on-going research, covering topics such as the representation of variability in conceptual models, capturing model intentions, and models of laws.

  10. Synthetic Model of the Oxygen-Evolving Center: Photosystem II under the Spotlight.

    Science.gov (United States)

    Yu, Yang; Hu, Cheng; Liu, Xiaohong; Wang, Jiangyun

    2015-09-21

    The oxygen-evolving center (OEC) in photosystem II catalyzes a water splitting reaction. Great efforts have already been made to artificially synthesize the OEC, in order to elucidate the structure-function relationship and the mechanism of the reaction. Now, a new synthetic model makes the best mimic yet of the OEC. This recent study opens up the possibility to study the mechanism of photosystem II and photosynthesis in general for applications in renewable energy and synthetic biology.

  11. Models of the SL9 Impacts II. Radiative-hydrodynamic Modeling of the Plume Splashback

    CERN Document Server

    Deming, D; Deming, Drake; Harrington, Joseph

    2001-01-01

    We model the plume "splashback" phase of the SL9 collisions with Jupiter using the ZEUS-3D hydrodynamic code. We modified the ZEUS code to include gray radiative transport, and we present validation tests. We couple the infalling mass and momentum fluxes of SL9 plume material (from paper I) to a jovian atmospheric model. A strong and complex shock structure results. The modeled shock temperatures agree well with observations, and the structure and evolution of the modeled shocks account for the appearance of high excitation molecular line emission after the peak of the continuum light curve. The splashback region cools by radial expansion as well as by radiation. The morphology of our synthetic continuum light curves agrees with observations over a broad wavelength range (0.9 to 12 microns). A feature of our ballistic plume is a shell of mass at the highest velocities, which we term the "vanguard". Portions of the vanguard ejected on shallow trajectories produce a lateral shock front, whose initial expansion a...

  12. Accounting for uncertainty due to 'last observation carried forward' outcome imputation in a meta-analysis model.

    Science.gov (United States)

    Dimitrakopoulou, Vasiliki; Efthimiou, Orestis; Leucht, Stefan; Salanti, Georgia

    2015-02-28

    Missing outcome data are a problem commonly observed in randomized control trials that occurs as a result of participants leaving the study before its end. Missing such important information can bias the study estimates of the relative treatment effect and consequently affect the meta-analytic results. Therefore, methods on manipulating data sets with missing participants, with regard to incorporating the missing information in the analysis so as to avoid the loss of power and minimize the bias, are of interest. We propose a meta-analytic model that accounts for possible error in the effect sizes estimated in studies with last observation carried forward (LOCF) imputed patients. Assuming a dichotomous outcome, we decompose the probability of a successful unobserved outcome taking into account the sensitivity and specificity of the LOCF imputation process for the missing participants. We fit the proposed model within a Bayesian framework, exploring different prior formulations for sensitivity and specificity. We illustrate our methods by performing a meta-analysis of five studies comparing the efficacy of amisulpride versus conventional drugs (flupenthixol and haloperidol) on patients diagnosed with schizophrenia. Our meta-analytic models yield estimates similar to meta-analysis with LOCF-imputed patients. Allowing for uncertainty in the imputation process, precision is decreased depending on the priors used for sensitivity and specificity. Results on the significance of amisulpride versus conventional drugs differ between the standard LOCF approach and our model depending on prior beliefs on the imputation process. Our method can be regarded as a useful sensitivity analysis that can be used in the presence of concerns about the LOCF process.
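
The decomposition the authors describe can be stated directly: for LOCF-imputed participants, the probability of an observed success mixes the true success probability p with the imputation's sensitivity (Se) and specificity (Sp), P(observed success) = Se·p + (1 − Sp)·(1 − p). A minimal sketch of the inversion, with illustrative numbers rather than the paper's data:

```python
def corrected_success_prob(p_obs, se, sp):
    """Invert p_obs = se*p + (1 - sp)*(1 - p) for the true probability p."""
    return (p_obs - (1.0 - sp)) / (se - (1.0 - sp))

# Illustrative values: observed LOCF success rate 0.55, with an imputation
# process assumed to have sensitivity 0.9 and specificity 0.8.
p_obs, se, sp = 0.55, 0.9, 0.8
p = corrected_success_prob(p_obs, se, sp)
print(round(p, 3))  # → 0.5
```

In the paper this correction is embedded in a Bayesian meta-analytic model with priors on Se and Sp, so the uncertainty about the imputation process propagates into the pooled estimate rather than being fixed as above.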

  13. Temperature dependence of the epidermal growth factor receptor signaling network can be accounted for by a kinetic model.

    Science.gov (United States)

    Moehren, Gisela; Markevich, Nick; Demin, Oleg; Kiyatkin, Anatoly; Goryanin, Igor; Hoek, Jan B; Kholodenko, Boris N

    2002-01-08

    Stimulation of isolated hepatocytes with epidermal growth factor (EGF) causes rapid tyrosine phosphorylation of the EGF receptor (EGFR) and adapter/target proteins, which was monitored with 1 and 2 s resolution at 37, 20, and 4 degrees C. The temporal responses detected for multiple signaling proteins involve both transient and sustained phosphorylation patterns, which change dramatically at low temperatures. To account quantitatively for complex responses, we employed a mechanistic kinetic model of the EGFR pathway, formulated in molecular terms as cascades of protein interactions and phosphorylation and dephosphorylation reactions. Assuming differential temperature dependencies for different reaction groups, such as SH2 and PTB domain-mediated interactions, the EGFR kinase, and the phosphatases, good quantitative agreement was obtained between computer-simulated and measured responses. The kinetic model demonstrates that, for each protein-protein interaction, the dissociation rate constant, k(off), strongly decreases at low temperatures, whereas this decline may or may not be accompanied by a large decrease in the k(on) value. Temperature-induced changes in the maximal activities of the reactions catalyzed by the EGFR kinase were moderate, compared to such changes in the V(max) of the phosphatases. However, strong changes in both the V(max) and K(m) for phosphatases resulted in moderate changes in the V(max)/K(m) ratio, comparable to the corresponding changes in EGFR kinase activity, with a single exception for the receptor phosphatase at 4 degrees C. The model suggests a significant decrease in the rates of the EGF receptor dimerization and its dephosphorylation at 4 degrees C, which can be related to the phase transition in the membrane lipids. A combination of high-resolution experimental monitoring and molecular level kinetic modeling made it possible to quantitatively account for the temperature dependence of the integrative signaling responses.
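
One ingredient of the temperature dependence described above, the strong decline of dissociation rate constants k(off) at low temperature, can be sketched with an Arrhenius-type scaling. The activation energy and reference rate below are hypothetical placeholders, not the fitted EGFR-model values.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius(k_ref, Ea, T, T_ref=310.0):
    """Rate constant at temperature T, given its value at T_ref (310 K = 37 C)."""
    return k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

k_off_37 = 1.0   # hypothetical k_off at 37 C, 1/s
Ea_off = 80e3    # hypothetical activation energy, J/mol

for T in (310.0, 293.0, 277.0):  # 37, 20, and 4 degrees C
    print(f"T = {T:.0f} K: k_off ~ {arrhenius(k_off_37, Ea_off, T):.3f} 1/s")
```

With these placeholder numbers, k_off falls by roughly a factor of 6 from 37 to 20 °C and a factor of 40 by 4 °C, illustrating why complexes persist longer and responses become more sustained at low temperature, while k_on may change much less.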

  14. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with…
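
A broken-line linear (BLL) ascending fit of the kind described can be sketched by profiling the breakpoint on a grid (echoing the abstract's grid-search advice) and solving an ordinary least-squares problem at each candidate. The data below are synthetic, loosely mimicking a Trp:Lys dose-response; this is a bare-bones fixed-effects sketch, not the paper's heteroskedastic mixed model.

```python
import numpy as np

# Synthetic dose-response: y rises linearly up to a breakpoint at 16.5,
# then plateaus at 0.70, with small noise.
rng = np.random.default_rng(2)
x = np.array([14.0, 15.0, 16.0, 16.5, 17.0, 18.0, 19.0])
y_true = np.where(x < 16.5, 0.70 - 0.02 * (16.5 - x), 0.70)
y = y_true + rng.normal(0, 0.002, x.size)

# BLL model: y = plateau + slope * min(x - bp, 0); profile bp over a grid.
best = None
for bp in np.arange(14.5, 19.0, 0.01):
    X = np.column_stack([np.ones_like(x), np.minimum(x - bp, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = np.sum((X @ beta - y) ** 2)
    if best is None or sse < best[0]:
        best = (sse, bp, beta)

sse, bp, (plateau, slope) = best
n, k = x.size, 3  # parameters: plateau, slope, breakpoint
bic = n * np.log(sse / n) + k * np.log(n)
print(f"breakpoint ~ {bp:.2f}, plateau ~ {plateau:.3f}, BIC = {bic:.1f}")
```

The same BIC formula applied to competing QP and BLQ fits would reproduce the kind of model comparison reported in the abstract.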

  15. Realizability algebras II : new models of ZF + DC

    CERN Document Server

    Krivine, Jean-Louis

    2010-01-01

    Using the proof-program (Curry-Howard) correspondence, we give a new method to obtain models of ZF and relative consistency results. We show the relative consistency of ZF + DC + some unusual properties for the power set of R.

  16. Supersymmetric standard model from the heterotic string (II)

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, W. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Hamaguchi, K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Tokyo Univ. (Japan). Dept. of Physics; Lebedev, O.; Ratz, M. [Bonn Univ. (Germany). Physikalisches Inst.

    2006-06-15

    We describe in detail a Z{sub 6} orbifold compactification of the heterotic E{sub 8} x E{sub 8} string which leads to the (supersymmetric) standard model gauge group and matter content. The quarks and leptons appear as three 16-plets of SO(10), two of which are localized at fixed points with local SO(10) symmetry. The model has supersymmetric vacua without exotics at low energies and is consistent with gauge coupling unification. Supersymmetry can be broken via gaugino condensation in the hidden sector. The model has large vacuum degeneracy. Certain vacua with approximate B-L symmetry have attractive phenomenological features. The top quark Yukawa coupling arises from gauge interactions and is of the order of the gauge couplings. The other Yukawa couplings are suppressed by powers of standard model singlet fields, similarly to the Froggatt-Nielsen mechanism. (Orig.)

  17. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Brorsen, Michael

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at the Dept. of Civil Engineering, Aalborg University (AAU), Denmark. The starting point for the present report is the run-up tests previously carried out and described in Lykke Andersen & Frigaard, 2006 … -shaped access platforms on piles. The model tests include mainly regular waves and a few irregular wave tests. These tests were conducted at Aalborg University from 9 November 2006 to 17 November 2006.

  18. IFRS 9 replacing IAS 39 : A study about how the implementation of the Expected Credit Loss Model in IFRS 9 is believed to impact comparability in accounting

    OpenAIRE

    Klefvenberg, Louise; Nordlander, Viktoria

    2015-01-01

    This thesis examines how the implementation process of the Expected Credit Loss Model in the accounting standard IFRS 9 – Financial Instruments is perceived and interpreted, and how these factors can affect comparability in accounting. One of the main changes in IFRS 9 is that companies need to account for expected credit losses rather than just incurred ones. The data are primarily collected through a web survey of all Nordic banks and credit institutes with a minimum book value of total a...

  19. Probabilistic modelling in urban drainage – two approaches that explicitly account for temporal variation of model errors

    DEFF Research Database (Denmark)

    Löwe, Roland; Del Giudice, Dario; Mikkelsen, Peter Steen

    to observations. After a brief discussion of the assumptions made for likelihood-based parameter inference, we illustrated the basic principles of both approaches on the example of sewer flow modelling with a conceptual rainfallrunoff model. The results from a real-world case study suggested that both approaches...

  20. A field-scale infiltration model accounting for spatial heterogeneity of rainfall and soil saturated hydraulic conductivity

    Science.gov (United States)

    Morbidelli, Renato; Corradini, Corrado; Govindaraju, Rao S.

    2006-04-01

    This study first explores the role of spatial heterogeneity, in both the saturated hydraulic conductivity Ks and rainfall intensity r, on the integrated hydrological response of a natural slope. On this basis, a mathematical model for estimating the expected areal-average infiltration is then formulated. Both Ks and r are considered as random variables with assessed probability density functions. The model relies upon a semi-analytical component, which describes the directly infiltrated rainfall, and an empirical component, which accounts further for the infiltration of surface water running downslope into pervious soils (the run-on effect). Monte Carlo simulations over a clay loam soil and a sandy loam soil were performed for constructing the ensemble averages of field-scale infiltration used for model validation. The model produced very accurate estimates of the expected field-scale infiltration rate, as well as of the outflow generated by significant rainfall events. Furthermore, the two model components were found to interact appropriately for different weights of the two infiltration mechanisms involved.
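
The Monte Carlo construction used for validation can be caricatured as follows. As a deliberately crude stand-in for a real infiltration law (not the paper's semi-analytical model, and ignoring the run-on effect), the local steady infiltration rate is taken as min(r, Ks), Ks is lognormal over the field, and rainfall r is uniform in space.

```python
import numpy as np

rng = np.random.default_rng(3)
r = 10.0  # spatially uniform rainfall rate (mm/h), illustrative

# Spatially variable saturated conductivity: lognormal with median 8 mm/h.
ks = rng.lognormal(mean=np.log(8.0), sigma=0.8, size=100_000)

# Crude point-scale rule: a point infiltrates at min(r, Ks); the areal
# expectation is the Monte Carlo average over the Ks field.
f_areal = np.minimum(r, ks).mean()
runoff = r - f_areal
print(f"areal infiltration ~ {f_areal:.2f} mm/h, runoff ~ {runoff:.2f} mm/h")
```

Even this toy version shows the key point of the study: because min(r, Ks) is nonlinear, the areal-average infiltration differs from what a single "effective" Ks would predict, which is why the expected field-scale response must be built from the joint variability of r and Ks.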

  1. A micromechanics-inspired constitutive model for shape-memory alloys that accounts for initiation and saturation of phase transformation

    Science.gov (United States)

    Kelly, Alex; Stebner, Aaron P.; Bhattacharya, Kaushik

    2016-12-01

    A constitutive model to describe macroscopic elastic and transformation behaviors of polycrystalline shape-memory alloys is formulated using an internal variable thermodynamic framework. In a departure from prior phenomenological models, the proposed model treats initiation, growth kinetics, and saturation of transformation distinctly, consistent with physics revealed by recent multi-scale experiments and theoretical studies. Specifically, the proposed approach captures the macroscopic manifestations of three micromechanical facts, even though microstructures are not explicitly modeled: (1) Individual grains with favorable orientations and stresses for transformation are the first to nucleate martensite, and the local nucleation strain is relatively large. (2) Then, transformation interfaces propagate according to growth kinetics to traverse networks of grains, while previously formed martensite may reorient. (3) Ultimately, transformation saturates prior to 100% completion as some unfavorably-oriented grains do not transform; thus the total transformation strain of a polycrystal is modest relative to the initial, local nucleation strain. The proposed formulation also accounts for tension-compression asymmetry, processing anisotropy, and the distinction between stress-induced and temperature-induced transformations. Consequently, the model describes thermoelastic responses of shape-memory alloys subject to complex, multi-axial thermo-mechanical loadings. These abilities are demonstrated through detailed comparisons of simulations with experiments.

  2. Low Energy Atomic Models Suggesting a Pilus Structure that could Account for Electrical Conductivity of Geobacter sulfurreducens Pili.

    Science.gov (United States)

    Xiao, Ke; Malvankar, Nikhil S; Shu, Chuanjun; Martz, Eric; Lovley, Derek R; Sun, Xiao

    2016-03-22

    The metallic-like electrical conductivity of Geobacter sulfurreducens pili has been documented with multiple lines of experimental evidence, but there is only a rudimentary understanding of the structural features which contribute to this novel mode of biological electron transport. In order to determine if it was feasible for the pilin monomers of G. sulfurreducens to assemble into a conductive filament, theoretical energy-minimized models of Geobacter pili were constructed with a previously described approach, in which pilin monomers are assembled using randomized structural parameters and distance constraints. The lowest energy models from a specific group of predicted structures lacked a central channel, in contrast to previously existing pili models. In half of the no-channel models the three N-terminal aromatic residues of the pilin monomer are arranged in a potentially electrically conductive geometry, sufficiently close to account for the experimentally observed metallic-like conductivity of the pili that has been attributed to overlapping pi-pi orbitals of aromatic amino acids. These atomic resolution models capable of explaining the observed conductive properties of Geobacter pili are a valuable tool to guide further investigation of the metallic-like conductivity of the pili, their role in biogeochemical cycling, and applications in bioenergy and bioelectronics.

  3. Low angular momentum flow model II for Sgr A*

    CERN Document Server

    Okuda, Toru

    2014-01-01

    We examine 1D two-temperature accretion flows around a supermassive black hole, adopting the specific angular momentum λ, the total specific energy ε and the input accretion rate Ṁ_input = 4.0×10^-6 solar mass/yr estimated in the recent analysis of stellar wind of nearby stars around Sgr A*. The two-temperature flow is almost adiabatic even if we take account of the heating of electrons by ions, the bremsstrahlung cooling and the synchrotron cooling, as long as the ratio β of the magnetic energy density to the thermal energy density is taken as β < 1. The different temperatures of ions and electrons are caused by the different adiabatic indices of ions and electrons, which depend on their temperature states under the relativistic regime. The total luminosity increases with increasing β and results in ~10^35-10^36 erg/s for β = 10^-3-1. Furthermore, from 2D time-dependent hydrodynamical calculations of the above flow, we find that the irregularly oscillati...

  4. Goals and Psychological Accounting

    DEFF Research Database (Denmark)

    Koch, Alexander Karl; Nafziger, Julia

    We model how people formulate and evaluate goals to overcome self-control problems. People often attempt to regulate their behavior by evaluating goal-related outcomes separately (in narrow psychological accounts) rather than jointly (in a broad account). To explain this evidence, our theory of endogenous narrow or broad psychological accounts combines insights from the literatures on goals and mental accounting with models of expectations-based reference-dependent preferences. By formulating goals the individual creates expectations that induce reference points for task outcomes. These goal-induced reference points make substandard performance psychologically painful and motivate the individual to stick to his goals. How strong the commitment to goals is depends on the type of psychological account. We provide conditions when it is optimal to evaluate goals in narrow accounts. The key intuition...

  5. Analysis of a model for the dynamics of prions II

    Science.gov (United States)

    Engler, Hans; Pruss, Jan; Webb, Glenn F.

    2006-12-01

    A new mathematical model for the dynamics of prion proliferation involving an ordinary differential equation coupled with a partial integro-differential equation is analyzed, continuing the work in [J. Pruss, L. Pujo-Menjouet, G.F. Webb, R. Zacher, Analysis of a model for the dynamics of prions, Discrete Contin. Dyn. Syst. 6 (2006) 225-235]. We show the well-posedness of this problem in its natural phase space, i.e., there is a unique global semiflow on Z+ associated to the problem. A theorem of threshold type is derived for this model which is typical for mathematical epidemics. If a certain combination of kinetic parameters is below or at the threshold, there is a unique steady state, the disease-free equilibrium, which is globally asymptotically stable in Z+; above the threshold it is unstable, and there is another unique steady state, the disease equilibrium, which inherits that property.

  6. Marginal production in the Gulf of Mexico - II. Model results

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Mark J.; Yu, Yunke [Center for Energy Studies, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2010-08-15

    In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depth less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl oil and 13.3 Tcf gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and limitations of the analysis. (author)

  7. Turbulent convection model in the overshooting region: II. Theoretical analysis

    CERN Document Server

    Zhang, S Q

    2012-01-01

    Turbulent convection models are thought to be good tools to deal with the convective overshooting in the stellar interior. However, they are too complex to be applied in calculations of stellar structure and evolution. In order to understand the physical processes of the convective overshooting and to simplify the application of turbulent convection models, a semi-analytic solution is necessary. We obtain the approximate solution and asymptotic solution of the turbulent convection model in the overshooting region, and find some important properties of the convective overshooting: I. The overshooting region can be partitioned into three parts: a thin region just outside the convective boundary with high efficiency of turbulent heat transfer, a power law dissipation region of turbulent kinetic energy in the middle, and a thermal dissipation area with rapidly decreasing turbulent kinetic energy. The decaying indices of the turbulent correlations $k$, $\overline{u_r'T'}$, and $\overline{T'T'}$ are only determined by the ...

  8. Model cortical association fields account for the time course and dependence on target complexity of human contour perception.

    Directory of Open Access Journals (Sweden)

    Vadas Gintautas

    2011-10-01

    Full Text Available Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas distributed among groups of randomly rotated fragments (clutter. The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms, followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least [Formula: see text] ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
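
    The sigmoidal time course with exponential time constants reported above can be sketched with a saturating-exponential psychometric function for a two-alternative forced-choice task. This is a minimal illustration only, not the authors' exact fitting form; the asymptote and time-constant values below are assumptions chosen to sit within the reported 30-91 ms range.

```python
import math

def psychometric(t_ms, tau_ms, asymptote=0.95, chance=0.5):
    """Saturating-exponential psychometric function for 2AFC detection:
    accuracy rises from chance level toward an asymptote with time constant tau."""
    return chance + (asymptote - chance) * (1.0 - math.exp(-t_ms / tau_ms))

# Illustrative: tau = 60 ms, presentation times spanning the 20-200 ms range used
acc_short = psychometric(20, 60)    # brief presentation: near chance
acc_long = psychometric(200, 60)    # long presentation: near asymptote
```

At 20 ms the predicted accuracy is barely above chance, while at 200 ms it approaches the asymptote, reproducing the qualitative shape of the measured curves.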

  9. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Frigaard, Peter

    This report presents the results of 2D physical model tests carried out in the shallow wave flume at Dept. of Civil Engineering, Aalborg University (AAU). The objective of the tests was to investigate the combined influence of the pile diameter to water depth ratio and the wave height to water...... on the front side of the pile (0 to 90 degrees). These tests have been conducted at Aalborg University from 9 October 2006 to 8 November 2006. Unless otherwise mentioned, all values given in this report are in model scale....

  10. Contact Modelling in Resistance Welding, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

    Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs...... with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...

  11. Autocorrelation and regularization in digital images. II - Simple image models

    Science.gov (United States)

    Jupp, David L. B.; Strahler, Alan H.; Woodcock, Curtis E.

    1989-01-01

    The variogram function used in geostatistical analysis is a useful statistic in the analysis of remotely sensed images. Using the results derived by Jupp et al. (1988), the basic second-order, or covariance, properties of scenes modeled by simple disks of varying size and spacing after imaging into disk-shaped pixels are analyzed to explore the relationship between image variograms and discrete object scene structure. The models provide insight into the nature of real images of the earth's surface and the tools for a complete analysis of the more complex case of three-dimensional illuminated discrete-object images.

  12. Accounting for rigid support at the boundary in a mixed-model finite element method for problems of ice cover destruction

    Directory of Open Access Journals (Sweden)

    V. V. Knyazkov

    2014-01-01

    Full Text Available Evaluating the force required to damage an ice cover is necessary for estimating the icebreaking capability of vessels, the hull strength of icebreakers, and the navigation of ships in ice conditions. On the other hand, the use of the ice cover as a support for construction works carried out from the ice is also of practical interest. A great number of investigations of ice cover deformation have been carried out to date, usually resulting in approximate calculation formulas obtained after making a variety of assumptions. Nevertheless, we believe that further improvement of the calculations is possible. Numerical methods, such as the FEM, make it possible to avoid numerous drawbacks of analytical methods in dealing with complex boundaries, load application areas and other peculiarities of the problem. The article considers an application of mixed FEM models for investigating ice cover deformation. A simple flexible triangular element of mixed type was taken to solve this problem. The vector of generalized coordinates of the element contains the deflections at its apices and the normal bending moments at the midpoints of its sides. Compared with other elements, mixed models easily satisfy compatibility requirements on the boundary of adjacent elements and do not require numerical differentiation of displacements to define bending moments, because the bending moments are included in the vector of generalized coordinates of the element. A method of accounting for rigid support of the plate is proposed. The resulting relation, which takes the "stiffening" into account, reduces the order of the resolving system of equations by the number of elements on the plate contour. To further evaluate the numerical solution of the stress-strain problem for the ice cover, it is necessary to check whether the calculation results correspond to an accurate solution. Using the example of a circular plate, the convergence of numerical solutions to analytical solutions is shown. The article

  13. A nonlinear BOLD model accounting for refractory effect by applying the longitudinal relaxation in NMR to the linear BOLD model.

    Science.gov (United States)

    Jung, Kwan-Jin

    2009-09-01

    A mathematical model to regress the nonlinear blood oxygen level-dependent (BOLD) fMRI signal has been developed by incorporating the refractory effect into the linear BOLD model of the biphasic gamma variate function. The refractory effect was modeled as a relaxation of two separate BOLD capacities corresponding to the biphasic components of the BOLD signal in analogy with longitudinal relaxation of magnetization in NMR. When tested with the published fMRI data of finger tapping, the nonlinear BOLD model with the refractory effect reproduced the nonlinear BOLD effects such as reduced poststimulus undershoot and saddle pattern in a prolonged stimulation as well as the reduced BOLD signal for repetitive stimulation.
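
    The biphasic gamma variate function underlying the linear BOLD model above can be sketched as the difference of two gamma variate lobes, a main positive response minus a delayed undershoot. The parameter values here are illustrative assumptions for a plausible hemodynamic response shape, not the fitted values of the cited model.

```python
import math

def gamma_variate(t, alpha, beta):
    """Single gamma variate lobe: t^alpha * exp(-t/beta) for t > 0, else 0."""
    return (t ** alpha) * math.exp(-t / beta) if t > 0 else 0.0

def biphasic_bold(t, a1=1.0, alpha1=6.0, beta1=0.9, a2=0.35, alpha2=12.0, beta2=0.9):
    """Biphasic BOLD impulse response: main lobe minus delayed undershoot.
    Each lobe is normalized by its peak (at t = alpha*beta) so a1, a2 are
    interpretable amplitudes. All parameter values are illustrative."""
    peak1 = gamma_variate(alpha1 * beta1, alpha1, beta1)
    peak2 = gamma_variate(alpha2 * beta2, alpha2, beta2)
    return a1 * gamma_variate(t, alpha1, beta1) / peak1 \
         - a2 * gamma_variate(t, alpha2, beta2) / peak2
```

With these values the response peaks a few seconds after stimulus onset and dips below baseline later, giving the poststimulus undershoot that the refractory extension then modulates.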

  14. Lanchester-Type Models of Warfare. Volume II

    Science.gov (United States)

    1980-10-01

    verification of combat models are as follows: (1) the principle of uniformitarianism does not hold, (2) systems are only partially observable, (3) ... the principle of uniformitarianism, which holds that physical and biological processes, conditions, and operations do not change over time (i.e. uni

  15. SPARTAN II: An Instructional High Resolution Land Combat Model

    Science.gov (United States)

    1993-03-01

    White Sands Missile Range, New Mexico (14: C1,J4). TRAC uses these two models for doctrinal and force-structure evaluation and for training and education... 'COMMON SHARED soldato, evento, ptgto, tgtreco, bluecount, & redcount, activeblue, activered, timetostop 'obs observer ID 'time = current simulation time

  16. nIFTy galaxy cluster simulations II: radiative models

    CSIR Research Space (South Africa)

    Sembolini, F

    2016-04-01

    Full Text Available We have simulated the formation of a massive galaxy cluster (M^crit_200 = 1.1×10^15 h^-1 M_⊙) in a ΛCDM universe using 10 different codes (RAMSES, 2 incarnations of AREPO and 7 of GADGET), modeling hydrodynamics with full radiative...

  17. Horns Rev II, 2D-Model Tests

    DEFF Research Database (Denmark)

    Andersen, Thomas Lykke; Brorsen, Michael

    This report is an extension of the study presented in Lykke Andersen and Brorsen, 2006 and includes results from the irregular wave tests, where Lykke Andersen & Brorsen, 2006 focused on regular waves. The 2D physical model tests were carried out in the shallow wave flume at Dept. of Civil...

  18. Storm Water Management Model Reference Manual Volume II – Hydraulics

    Science.gov (United States)

    SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...

  19. Accounting for geochemical alterations of caprock fracture permeability in basin-scale models of leakage from geologic CO2 reservoirs

    Science.gov (United States)

    Guo, B.; Fitts, J. P.; Dobossy, M.; Bielicki, J. M.; Peters, C. A.

    2012-12-01

    Climate mitigation, public acceptance, and energy markets demand that the potential CO2 leakage rates from geologic storage reservoirs are predicted to be low and are known to a high level of certainty. Current approaches to predict CO2 leakage rates assume constant permeability of leakage pathways (e.g., wellbores, faults, fractures). A reactive transport model was developed to account for geochemical alterations that result in permeability evolution of leakage pathways. The one-dimensional reactive transport model was coupled with the basin-scale Estimating Leakage Semi-Analytical (ELSA) model to simulate CO2 and brine leakage through vertical caprock pathways for different CO2 storage reservoir sites and injection scenarios within the Mt. Simon and St. Peter sandstone formations of the Michigan basin. Mineral dissolution in the numerical reactive transport model expands leakage pathways and increases permeability as a result of calcite dissolution by reactions driven by CO2-acidified brine. A geochemical model compared kinetic and equilibrium treatments of calcite dissolution within each grid block for each time step. For a single fracture, we investigated the effect of the reactions on leakage by performing sensitivity analyses of fracture geometry, CO2 concentration, calcite abundance, initial permeability, and pressure gradient. Assuming that calcite dissolution reaches equilibrium at each time step produces unrealistic scenarios of buffering and permeability evolution within fractures. Therefore, the reactive transport model with a kinetic treatment of calcite dissolution was coupled to the ELSA model and used to compare brine and CO2 leakage rates at a variety of potential geologic storage sites within the Michigan basin. The results are used to construct maps based on the susceptibility to geochemically driven increases in leakage rates.
These maps should provide useful and easily communicated inputs into decision-making processes for siting geologic CO2
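
    The permeability feedback described above can be sketched with the parallel-plate (cubic-law) relation, widening the fracture aperture by the calcite dissolved from both walls in each kinetic time step. The rate constant, time step, and aperture values below are illustrative assumptions; only the calcite molar volume (~3.69e-5 m^3/mol) is a standard physical constant.

```python
def fracture_permeability(aperture_m):
    """Parallel-plate (cubic-law) fracture permeability: k = b^2 / 12."""
    return aperture_m ** 2 / 12.0

def dissolve_step(aperture_m, rate_mol_m2_s, dt_s, molar_volume_m3_mol=3.69e-5):
    """Widen the fracture by calcite dissolved from both walls in one time step,
    given a surface-normalized kinetic dissolution rate (mol/m^2/s)."""
    return aperture_m + 2.0 * rate_mol_m2_s * molar_volume_m3_mol * dt_s

# Illustrative: 100-micron fracture, hypothetical rate, one hour of dissolution
b0 = 1e-4
b1 = dissolve_step(b0, rate_mol_m2_s=1e-6, dt_s=3600.0)
```

Because permeability scales with the square of aperture, even modest dissolution-driven widening compounds into a strong increase in leakage rate, which is why the kinetic treatment matters.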

  20. Modelling the spatial distribution of snow water equivalent at the catchment scale taking into account changes in snow covered area

    Directory of Open Access Journals (Sweden)

    T. Skaugen

    2011-12-01

    Full Text Available A successful modelling of the snow reservoir is necessary for water resources assessments and the mitigation of spring flood hazards. A good estimate of the spatial probability density function (PDF of snow water equivalent (SWE is important for obtaining estimates of the snow reservoir, but also for modelling the changes in snow covered area (SCA, which is crucial for the runoff dynamics in spring. In a previous paper the PDF of SWE was modelled as a sum of temporally correlated gamma distributed variables. This methodology was constrained to estimate the PDF of SWE for snow covered areas only. In order to model the PDF of SWE for a catchment, we need to take into account the change in snow coverage and provide the spatial moments of SWE for both snow covered areas and for the catchment as a whole. The spatial PDF of accumulated SWE is, also in this study, modelled as a sum of correlated gamma distributed variables. After accumulation and melting events the changes in the spatial moments are weighted by changes in SCA. The spatial variance of accumulated SWE is, after both accumulation- and melting events, evaluated by use of the covariance matrix. For accumulation events there are only positive elements in the covariance matrix, whereas for melting events, there are both positive and negative elements. The negative elements dictate that the correlation between melt and SWE is negative. The negative contributions become dominant only after some time into the melting season so at the onset of the melting season, the spatial variance thus continues to increase, for later to decrease. This behaviour is consistent with observations and called the "hysteretic" effect by some authors. The parameters for the snow distribution model can be estimated from observed historical precipitation data which reduces by one the number of parameters to be calibrated in a hydrological model. 
Results from the model are in good agreement with observed spatial moments
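
    The covariance-matrix bookkeeping described above can be sketched directly: the spatial variance of a sum of correlated increments equals the sum of all elements of their covariance matrix, so accumulation events (all elements positive) inflate the variance while melt events (negative melt-SWE covariances) can deflate it. The matrices below are toy values for illustration, not model output.

```python
def variance_of_sum(cov):
    """Variance of a sum of correlated variables = sum of every element of
    their covariance matrix: Var(sum_i x_i) = sum_ij Cov(x_i, x_j)."""
    return sum(sum(row) for row in cov)

# Accumulation event: positive covariances only -> spatial variance grows
cov_accum = [[1.0, 0.5],
             [0.5, 1.0]]

# Melt event: negative covariance between melt and SWE -> variance can shrink
cov_melt = [[1.0, -0.5],
            [-0.5, 1.0]]
```

The switch from positive to dominant negative off-diagonal terms as the melt season progresses is what produces the "hysteretic" rise-then-fall of spatial variance noted in the abstract.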

  1. Modelling the spatial distribution of snow water equivalent at the catchment scale taking into account changes in snow covered area

    Science.gov (United States)

    Skaugen, T.; Randen, F.

    2011-12-01

    A successful modelling of the snow reservoir is necessary for water resources assessments and the mitigation of spring flood hazards. A good estimate of the spatial probability density function (PDF) of snow water equivalent (SWE) is important for obtaining estimates of the snow reservoir, but also for modelling the changes in snow covered area (SCA), which is crucial for the runoff dynamics in spring. In a previous paper the PDF of SWE was modelled as a sum of temporally correlated gamma distributed variables. This methodology was constrained to estimate the PDF of SWE for snow covered areas only. In order to model the PDF of SWE for a catchment, we need to take into account the change in snow coverage and provide the spatial moments of SWE for both snow covered areas and for the catchment as a whole. The spatial PDF of accumulated SWE is, also in this study, modelled as a sum of correlated gamma distributed variables. After accumulation and melting events the changes in the spatial moments are weighted by changes in SCA. The spatial variance of accumulated SWE is, after both accumulation- and melting events, evaluated by use of the covariance matrix. For accumulation events there are only positive elements in the covariance matrix, whereas for melting events, there are both positive and negative elements. The negative elements dictate that the correlation between melt and SWE is negative. The negative contributions become dominant only after some time into the melting season so at the onset of the melting season, the spatial variance thus continues to increase, for later to decrease. This behaviour is consistent with observations and called the "hysteretic" effect by some authors. The parameters for the snow distribution model can be estimated from observed historical precipitation data which reduces by one the number of parameters to be calibrated in a hydrological model. 
Results from the model are in good agreement with observed spatial moments of SWE and SCA

  2. Serviceability limit state related to excessive lateral deformations to account for infill walls in the structural model

    Directory of Open Access Journals (Sweden)

    G. M. S. ALVA

    Full Text Available Brazilian Codes NBR 6118 and NBR 15575 provide practical values for interstory drift limits applied to conventional modeling in order to prevent negative effects in masonry infill walls caused by excessive lateral deformability; however, these codes do not account for infill walls in the structural model. The inclusion of infill walls in the proposed model allows for a quantitative evaluation of structural stresses in these walls and an assessment of cracking in these elements (sliding shear, diagonal tension and diagonal compression cracking). This paper presents the results of simulations of single-story one-bay infilled R/C frames. The main objective is to show how to check the serviceability limit states under lateral loads when the infill walls are included in the modeling. The results of numerical simulations allowed for an evaluation of stresses and the probable cracking pattern in infill walls. The results also allowed an identification of some advantages and limitations of the NBR 6118 practical procedure based on interstory drift limits.

  3. A statistical human rib cage geometry model accounting for variations by age, sex, stature and body mass index.

    Science.gov (United States)

    Shi, Xiangnan; Cao, Libo; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2014-07-18

    In this study, we developed a statistical rib cage geometry model accounting for variations by age, sex, stature and body mass index (BMI). Thorax CT scans were obtained from 89 subjects approximately evenly distributed among 8 age groups and both sexes. Threshold-based CT image segmentation was performed to extract the rib geometries, and a total of 464 landmarks on the left side of each subject's ribcage were collected to describe the size and shape of the rib cage as well as the cross-sectional geometry of each rib. Principal component analysis and multivariate regression analysis were conducted to predict rib cage geometry as a function of age, sex, stature, and BMI, all of which showed strong effects on rib cage geometry. Except for BMI, all parameters also showed significant effects on rib cross-sectional area using a linear mixed model. This statistical rib cage geometry model can serve as a geometric basis for developing a parametric human thorax finite element model for quantifying effects from different human attributes on thoracic injury risks.
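
    The PCA-plus-regression pipeline described above can be sketched as follows. The synthetic data, reduced feature count, and number of retained components are assumptions for illustration only, not the study's actual landmark data or fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 89 subjects, a reduced geometry vector, 4 covariates
n_subj, n_feat = 89, 30
covariates = rng.normal(size=(n_subj, 4))  # standardized age, sex, stature, BMI
geometry = covariates @ rng.normal(size=(4, n_feat)) + 0.1 * rng.normal(size=(n_subj, n_feat))

# Step 1: PCA via SVD of the centered geometry matrix
mean_shape = geometry.mean(axis=0)
U, S, Vt = np.linalg.svd(geometry - mean_shape, full_matrices=False)
n_pc = 5
scores = U[:, :n_pc] * S[:n_pc]  # per-subject principal-component scores

# Step 2: multivariate linear regression of PC scores on covariates (with intercept)
X = np.hstack([np.ones((n_subj, 1)), covariates])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

def predict_geometry(age_sex_stature_bmi):
    """Predict a geometry vector from standardized covariates by mapping
    regressed PC scores back through the retained components."""
    x = np.concatenate([[1.0], age_sex_stature_bmi])
    return mean_shape + (x @ coef) @ Vt[:n_pc]
```

Predicting in PC-score space rather than raw landmark space is what makes the model "statistical": a handful of regressed scores reconstruct a full, plausible rib cage for any covariate combination.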

  4. Analytical model and design of spoke-type permanent-magnet machines accounting for saturation and nonlinearity of magnetic bridges

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Peixin; Chai, Feng [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Bi, Yunlong [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Pei, Yulong, E-mail: peiyulong1@163.com [Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China); Cheng, Shukang [State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001 (China); Department of Electrical Engineering, Harbin Institute of Technology, Harbin 150001 (China)

    2016-11-01

    Based on a subdomain model, this paper presents an analytical method for predicting the no-load magnetic field distribution, back-EMF and torque in general spoke-type motors with magnetic bridges. Taking into account the saturation and nonlinearity of the magnetic material, the magnetic bridges are treated as equivalent fan-shaped saturation regions. To obtain standard boundary conditions, a lumped-parameter magnetic circuit model and an iterative method are employed to calculate the permeability. The final field domain is divided into five types of simple subdomains. Based on the method of separation of variables, the analytical expression for each subdomain is derived. The analytical results for the magnetic field distribution, back-EMF and torque are verified by the finite element method, which confirms the validity of the proposed model for facilitating motor design and optimization. - Highlights: • The no-load magnetic field of spoke-type motors is first calculated by an analytical method. • The magnetic circuit model and iterative method are employed to calculate the permeability. • The analytical expression for each subdomain is derived. • The proposed method can effectively reduce the duration of the predesign stages.

  5. Accounting for Population Structure in Gene-by-Environment Interactions in Genome-Wide Association Studies Using Mixed Models.

    Science.gov (United States)

    Sul, Jae Hoon; Bilow, Michael; Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar

    2016-03-01

    Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control effects of population structure that may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for population structure. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied and nor have there been methods designed to correct for population structure on GEI statistics. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs and use both simulation and two GWAS datasets to support our finding. We propose a statistical approach based on mixed models to account for population structure on GEI statistics. We find that our approach effectively controls population structure on statistics for GEIs as well as for genetic variants.

  6. A sampling design and model for estimating abundance of Nile crocodiles while accounting for heterogeneity of detectability of multiple observers

    Science.gov (United States)

    Shirley, Matthew H.; Dorazio, Robert M.; Abassery, Ekramy; Elhady, Amr A.; Mekki, Mohammed S.; Asran, Hosni H.

    2012-01-01

    As part of the development of a management program for Nile crocodiles in Lake Nasser, Egypt, we used a dependent double-observer sampling protocol with multiple observers to compute estimates of population size. To analyze the data, we developed a hierarchical model that allowed us to assess variation in detection probabilities among observers and survey dates, as well as account for variation in crocodile abundance among sites and habitats. We conducted surveys from July 2008-June 2009 in 15 areas of Lake Nasser that were representative of 3 main habitat categories. During these surveys, we sampled 1,086 km of lake shore wherein we detected 386 crocodiles. Analysis of the data revealed significant variability in both inter- and intra-observer detection probabilities. Our raw encounter rate was 0.355 crocodiles/km. When we accounted for observer effects and habitat, we estimated a surface population abundance of 2,581 (2,239-2,987, 95% credible intervals) crocodiles in Lake Nasser. Our results underscore the importance of well-trained, experienced monitoring personnel in order to decrease heterogeneity in intra-observer detection probability and to better detect changes in the population based on survey indices. This study will assist the Egyptian government in establishing a monitoring program as an integral part of future crocodile harvest activities in Lake Nasser.
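
    As a much-simplified sketch of the double-observer idea (the study itself uses a dependent protocol analyzed with a hierarchical model), a Lincoln-Petersen-style estimator from two independent observers recovers per-observer detection probabilities and abundance. The counts below are made-up illustrative values, not survey data.

```python
def double_observer_estimates(n1, n2, both):
    """Lincoln-Petersen-style estimates from an independent double-observer
    count: n1, n2 = animals detected by each observer, both = detected by
    both. Observer 1's detection probability is the fraction of observer 2's
    animals that observer 1 also saw, and vice versa."""
    p1 = both / n2
    p2 = both / n1
    n_hat = n1 * n2 / both  # estimated true abundance
    return p1, p2, n_hat

# Hypothetical counts: observer 1 sees 80, observer 2 sees 60, 48 in common
p1, p2, n_hat = double_observer_estimates(80, 60, 48)
```

Here p1 = 0.8, p2 = 0.6 and the estimated abundance is 100, showing why raw encounter rates (analogous to the 0.355 crocodiles/km above) understate the true population when detection is imperfect.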

  7. Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing.

    Directory of Open Access Journals (Sweden)

    Cheston Tan

    Full Text Available Faces are an important and unique class of visual stimuli, and have been of interest to neuroscientists for many years. Faces are known to elicit certain characteristic behavioral markers, collectively labeled "holistic processing", while non-face objects are not processed holistically. However, little is known about the underlying neural mechanisms. The main aim of this computational simulation work is to investigate the neural mechanisms that make face processing holistic. Using a model of primate visual processing, we show that a single key factor, "neural tuning size", is able to account for three important markers of holistic face processing: the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our proof-of-principle specifies the precise neurophysiological property that corresponds to the poorly-understood notion of holism, and shows that this one neural property controls three classic behavioral markers of holism. Our work is consistent with neurophysiological evidence, and makes further testable predictions. Overall, we provide a parsimonious account of holistic face processing, connecting computation, behavior and neurophysiology.

  8. Modeling the distribution of Mg II absorbers around galaxies using Background Galaxies & Quasars

    CERN Document Server

    Bordoloi, R; Kacprzak, G G; Churchill, C W

    2012-01-01

    We present joint constraints on the distribution of MgII absorption around galaxies, by combining the MgII absorption seen in stacked background galaxy spectra and the distribution of host galaxies of strong MgII systems from the spectra of background quasars. We present a suite of models that predict the dependence of MgII absorption on a galaxy's apparent inclination, impact parameter (b), and azimuthal angle. The variations in the absorption strength with azimuthal angle provide much stronger constraints on the intrinsic geometry of the MgII absorption than the dependence on the galaxy's inclination. Strong MgII absorbers (W_r(2796) > 0.3) are asymmetrically distributed in azimuth around their host galaxies: 72% of the absorbers studied, and 100% of the close-in absorbers within b < 38 kpc, are located within 50 deg of the host galaxy's projected minor axis. Composite models consisting either of a simple bipolar component plus a spherical or disk component, or a single highly softened bipolar distribution, can...

  9. Equilibrium and kinetic modelling of cadmium (II) biosorption by Dried Biomass Aphanothece sp. from aqueous phase

    Science.gov (United States)

    Awalina; Harimawan, A.; Haryani, G. S.; Setiadi, T.

    2017-05-01

    The biosorption of cadmium (II) ions on dried biomass of Aphanothece sp., previously grown in a photobioreactor system fed with atmospheric carbon dioxide, was studied in a batch system with respect to initial pH, biomass concentration, contact time, and temperature. The biomass exhibited the highest cadmium (II) uptake capacity at 30ºC, an initial pH of 8.0±0.2, within 60 minutes, and at an initial cadmium (II) ion concentration of 7.76 mg/L. Maximum biosorption capacities were 16.47 mg/g, 54.95 mg/g and 119.05 mg/g for initial cadmium (II) concentration ranges of 0.96-3.63 mg/L, 1.99-8.10 mg/L and 6.48-54.38 mg/L, respectively. Uptake kinetics follows the pseudo-second-order model, while equilibrium is best described by the Langmuir isotherm model. Isotherms were used to determine the thermodynamic parameters of the process (free energy change, enthalpy change and entropy change). FTIR analysis of the microalgal biomass revealed the presence of amino, carboxyl, hydroxyl, sulfhydryl and carbonyl groups, which are responsible for biosorption of metal ions. During repeated sorption/desorption cycles, the ratio of Cd (II) desorption to biosorption decreased from 81% (first cycle) to only 27% (third cycle). Nevertheless, owing to its higher biosorption capability than other adsorbents, Aphanothece sp. appears to be a good biosorbent for removing Cd (II) ions from the aqueous phase.
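
    The pseudo-second-order kinetic model mentioned above is commonly fitted through its linearized form t/q_t = 1/(k2*qe^2) + t/qe. The sketch below recovers parameters from synthetic data; the qe and k2 values are illustrative assumptions, not the paper's measurements.

```python
# Illustrative fit of the pseudo-second-order (PSO) kinetic model via its
# standard linearization: t/q_t = 1/(k2*qe^2) + t/qe, so a straight-line
# fit of t/q_t against t gives slope = 1/qe and intercept = 1/(k2*qe^2).
# The parameters and "data" below are synthetic, not from the paper.

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept, pure Python."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

qe_true, k2_true = 16.47, 0.05            # mg/g and g/(mg*min), assumed
ts = [5, 10, 20, 30, 45, 60]              # contact times, minutes
qts = [qe_true**2 * k2_true * t / (1 + qe_true * k2_true * t) for t in ts]

slope, intercept = linfit(ts, [t / q for t, q in zip(ts, qts)])
qe_fit = 1 / slope                        # from slope = 1/qe
k2_fit = slope**2 / intercept             # from intercept = 1/(k2*qe^2)
print(f"qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.4f}")
```

    The Langmuir isotherm is fitted the same way from its linearization Ce/qe = Ce/qmax + 1/(K*qmax).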

  10. Design Accountability

    DEFF Research Database (Denmark)

    Koskinen, Ilpo; Krogh, Peter

    2015-01-01

    design research is that where classical research is interested in singling out a particular aspect and exploring it in depth, design practice is characterized by balancing numerous concerns in a heterogenous and occasionally paradoxical product. It is on this basis the notion of design accountability...

  12. Differential geometry based solvation model II: Lagrangian formulation.

    Science.gov (United States)

    Chen, Zhan; Baker, Nathan A; Wei, G W

    2011-12-01

    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation models. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The optimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and PB equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of

  13. Study of multiparticle production by gluon dominance model (Part II)

    CERN Document Server

    Ermolov, P F; Kuraev, E A; Kutov, A V; Nikitin, V A; Pankov, A A; Roufanov, I A; Zhidkov, N K

    2005-01-01

    The gluon dominance model presents a description of multiparticle production in proton-proton collisions and proton-antiproton annihilation. The collective behavior of secondary particles in $pp$-interactions at 70 GeV/c and higher is studied in the project {\bf "Thermalization"}. The obtained neutral and charged multiplicity distribution parameters explain some RHIC data. The gluon dominance model is modified by the inclusion of intermediate quark topology for the description of multiplicity distributions in pure $p\bar p$-annihilation at a few tens of GeV/c, and explains the behavior of the second correlative moment. This article proposes a mechanism of soft photon production as a sign of hadronization. The excess of soft photons allows one to estimate the emission region size.

  14. MODELING OF TARGETED DRUG DELIVERY PART II. MULTIPLE DRUG ADMINISTRATION

    Directory of Open Access Journals (Sweden)

    A. V. Zaborovskiy

    2017-01-01

    Full Text Available In oncology practice, despite significant advances in early cancer detection, surgery, radiotherapy, laser therapy, targeted therapy, etc., chemotherapy is unlikely to lose its relevance in the near future. In this context, the development of new antitumor agents is one of the most important problems of cancer research. In spite of the importance of searching for new compounds with antitumor activity, the possibilities of the "old" agents have not been fully exhausted. Targeted delivery of antitumor agents can give them a "second life". When developing new targeted drugs and introducing them into clinical practice, the change in their pharmacodynamics and pharmacokinetics plays a special role. The paper describes a pharmacokinetic model of targeted drug delivery. The conditions under which it is meaningful to search for a delivery vehicle for the active substance are described. Primary screening of antitumor agents was undertaken to identify candidates for modification for targeted delivery, based on the underlying assumptions of the model.

  15. The Friedrichs-Model with fermion-boson couplings II

    CERN Document Server

    Civitarese, O; Pronko, G P

    2007-01-01

    In this work we present a formal solution of the extended version of the Friedrichs Model. The Hamiltonian consists of discrete and continuum bosonic states, which are coupled to fermions. The simultaneous treatment of the couplings of the fermions with the discrete and continuous sectors of the bosonic degrees of freedom leads to a system of coupled equations, whose solutions are found by applying standard methods of representation of bound and resonant states.

  16. Slag Behavior in Gasifiers. Part II: Constitutive Modeling of Slag

    Energy Technology Data Exchange (ETDEWEB)

    Massoudi, Mehrdad [National Energy Technology Laboratory; Wang, Ping

    2013-02-07

    The viscosity of slag and the thermal conductivity of ash deposits are two of the most important constitutive parameters that need to be studied. The accurate formulation or representation of the (transport) properties of coal presents a special challenge for modeling efforts in computational fluid dynamics applications. Studies have indicated that slag viscosity must remain within a certain range for tapping and the membrane wall to be accessible; for example, between 1,300 °C and 1,500 °C, the viscosity is approximately 25 Pa·s. As the operating temperature decreases, the slag cools and solid crystals begin to form. Since slag behaves as a non-linear fluid, we discuss the constitutive modeling of slag and the important parameters that must be studied. We propose a new constitutive model in which the stress tensor not only has a yield-stress part but also a viscous part whose viscosity depends on shear rate, temperature, and concentration, while allowing for the possibility of normal-stress effects. In Part I, we reviewed, identified, and discussed the key coal ash properties and the operating conditions impacting slag behavior.
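
    A scalar sketch of the kind of constitutive law described above (a yield-stress part plus a shear-rate- and temperature-dependent viscous part) can be written as a Herschel-Bulkley-type relation with Arrhenius temperature dependence. The tensor form, concentration dependence, and normal-stress effects are omitted, and all parameter values are illustrative assumptions, not the paper's calibration.

```python
import math

# Scalar sketch of a yield-stress + shear-thinning slag law in the spirit
# of the model described above. Tensor structure, concentration dependence
# and normal-stress effects are omitted; all parameters are assumed.

def slag_shear_stress(gamma_dot, T_kelvin,
                      tau_y=50.0,       # yield stress, Pa (assumed)
                      K0=1.0e-4,        # consistency prefactor (assumed)
                      E_over_R=1.5e4,   # activation temperature, K (assumed)
                      n=0.9):           # shear-thinning exponent (assumed)
    """Herschel-Bulkley-type stress with Arrhenius temperature dependence."""
    K = K0 * math.exp(E_over_R / T_kelvin)   # consistency rises as T falls
    return tau_y + K * gamma_dot**n

gdot = 10.0                                   # shear rate, 1/s
tau_hot = slag_shear_stress(gdot, 1500 + 273.15)
tau_cold = slag_shear_stress(gdot, 1300 + 273.15)
eta_hot, eta_cold = tau_hot / gdot, tau_cold / gdot   # apparent viscosities
print(f"apparent viscosity: {eta_hot:.1f} Pa.s (1500 C), "
      f"{eta_cold:.1f} Pa.s (1300 C)")
```

    The sketch reproduces the qualitative trend in the abstract: as the operating temperature drops, the apparent viscosity rises.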

  17. Slag Behavior in Gasifiers. Part II: Constitutive Modeling of Slag

    Directory of Open Access Journals (Sweden)

    Mehrdad Massoudi

    2013-02-01

    Full Text Available The viscosity of slag and the thermal conductivity of ash deposits are two of the most important constitutive parameters that need to be studied. The accurate formulation or representation of the (transport) properties of coal presents a special challenge for modeling efforts in computational fluid dynamics applications. Studies have indicated that slag viscosity must remain within a certain range for tapping and the membrane wall to be accessible; for example, between 1,300 °C and 1,500 °C, the viscosity is approximately 25 Pa·s. As the operating temperature decreases, the slag cools and solid crystals begin to form. Since slag behaves as a non-linear fluid, we discuss the constitutive modeling of slag and the important parameters that must be studied. We propose a new constitutive model in which the stress tensor not only has a yield-stress part but also a viscous part whose viscosity depends on shear rate, temperature, and concentration, while allowing for the possibility of normal-stress effects. In Part I, we reviewed, identified, and discussed the key coal ash properties and the operating conditions impacting slag behavior.

  18. The use of MAVIS II to integrate the modeling and analysis of explosive valve interactions

    Energy Technology Data Exchange (ETDEWEB)

    Ng, R.; Kwon, D.M.

    1998-12-31

    The MAVIS II computer program provides for the modeling and analysis of explosive valve interactions. This report describes the individual components of the program and how MAVIS II is used with other available tools to integrate the design and understanding of explosive valves. The rationale and model for each valve interaction are described. Comparisons of the calculated results with available data have demonstrated the feasibility and accuracy of using MAVIS II for analytical studies of explosive valve interactions. The model for the explosive or pyrotechnic that provides the driving force in explosive valves is the most critical to understand and model. MAVIS II is an advanced version that incorporates plastic, as well as elastic, modeling of the deformations experienced when plungers are forced into a bore. The inclusion of a plastic model has greatly expanded the use of MAVIS for all categories of valves (opening, closure, or combined), especially for closure valves, in which the sealing operation requires the plastic deformation of either a plunger or a bore over a relatively large area. To increase its effectiveness, the use of MAVIS II should be integrated with results from available experimental hardware. Tests such as the Velocity Interferometer System for Any Reflector (VISAR) and the Velocity Generator test provide experimental data for accurate comparison with actual valve functions. Variable Explosive Chamber (VEC) and Constant Explosive Volume (CEV) tests are used to provide the proper explosive equation of state for the MAVIS calculations of the explosive driving forces. The rationale and logistics of this integration are demonstrated through an example: a recent valve design is used to show how MAVIS II can be integrated with experimental tools to provide an understanding of the interactions in this valve.

  19. Impact of accounting for coloured noise in radar altimetry data on a regional quasi-geoid model

    Science.gov (United States)

    Farahani, H. H.; Slobbe, D. C.; Klees, R.; Seitz, Kurt

    2016-07-01

    We study the impact of an accurate computation and incorporation of coloured noise in radar altimeter data when computing a regional quasi-geoid model using least-squares techniques. Our test area comprises the Southern North Sea including the Netherlands, Belgium, and parts of France, Germany, and the UK. We perform the study by modelling the disturbing potential with spherical radial base functions. To that end, we use the traditional remove-compute-restore procedure with a recent GRACE/GOCE static gravity field model. Apart from radar altimeter data, we use terrestrial, airborne, and shipboard gravity data. Radar altimeter sea surface heights are corrected for the instantaneous dynamic topography and used in the form of along-track quasi-geoid height differences. Noise in these data is estimated using repeat-track and post-fit residual analysis techniques and then modelled as an autoregressive moving-average process. Quasi-geoid models are computed with and without taking the modelled coloured noise into account. The difference between them is used as a measure of the impact of coloured noise in radar altimeter along-track quasi-geoid height differences on the estimated quasi-geoid model. The impact strongly depends on the availability of shipboard gravity data. If no such data are available, the impact may attain values exceeding 10 centimetres in particular areas. Where shipboard gravity data are used, the impact is reduced, though it still attains values of several centimetres. We use geometric quasi-geoid heights from GPS/levelling data at height markers as control data to analyse the quality of the quasi-geoid models. The quasi-geoid model computed using a model of the coloured noise in radar altimeter along-track quasi-geoid height differences shows in some areas a significant improvement over a model that assumes white noise in these data. However, the interpretation in other areas remains a challenge due to the limited quality of the control data.
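
    The simplest case of accounting for coloured noise in least squares, AR(1) rather than the full ARMA model used in the study, can be sketched via quasi-differencing (whitening) before ordinary least squares. The trend model, the data, and the known AR coefficient below are synthetic assumptions.

```python
# Minimal sketch of accounting for coloured (here AR(1)) noise in least
# squares via quasi-differencing: y'_t = y_t - rho*y_{t-1}, applied to the
# observations and to every regressor column, after which ordinary least
# squares is statistically efficient. The paper uses a full ARMA noise
# model; this is only the simplest special case, on synthetic data.

def ar1_gls_line(ts, ys, rho):
    """Fit y = a + b*t assuming AR(1) noise with known coefficient rho."""
    # quasi-difference the observations and both regressor columns
    yw = [ys[i] - rho * ys[i - 1] for i in range(1, len(ys))]
    c1 = [1.0 - rho] * (len(ts) - 1)                   # intercept column
    c2 = [ts[i] - rho * ts[i - 1] for i in range(1, len(ts))]
    # 2x2 normal equations, solved by Cramer's rule
    s11 = sum(u * u for u in c1)
    s12 = sum(u * v for u, v in zip(c1, c2))
    s22 = sum(v * v for v in c2)
    r1 = sum(u * y for u, y in zip(c1, yw))
    r2 = sum(v * y for v, y in zip(c2, yw))
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (s11 * r2 - s12 * r1) / det

ts = list(range(10))
ys = [2.0 + 0.5 * t for t in ts]       # noiseless synthetic "track"
a, b = ar1_gls_line(ts, ys, rho=0.8)
print(f"a = {a:.3f}, b = {b:.3f}")
```

    With noiseless input the trend parameters are recovered exactly; with correlated noise present, the whitening step is what keeps the estimates and their formal errors consistent.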

  1. Accounting for the uncertainty related to building occupants with regards to visual comfort: A literature survey on drivers and models

    DEFF Research Database (Denmark)

    Fabi, Valentina; Andersen, Rune Korsholm; Corgnati, Stefano

    2016-01-01

    energy use during the design stage. Since the reaction to thermal, acoustic, or visual stimuli is not the same for every human being, monitoring the behaviour inside buildings is an essential step to assess differences in energy consumption related to different interactions. Reliable information...... conditions influence occupants' manual control of the system in offices and, by consequence, the energy consumption. The purpose of this study was to investigate the possible drivers for light-switching in order to model occupant behaviour in office buildings. The probability of switching lighting systems on or off...... who take the daylight level into account and switch on lights only if necessary, and people who totally disregard natural lighting. This underlines how individuality is at the base of the definition of the different types of users.

  2. A 3D finite-strain-based constitutive model for shape memory alloys accounting for thermomechanical coupling and martensite reorientation

    Science.gov (United States)

    Wang, Jun; Moumni, Ziad; Zhang, Weihong; Xu, Yingjie; Zaki, Wael

    2017-06-01

    The paper presents a finite-strain constitutive model for shape memory alloys (SMAs) that accounts for thermomechanical coupling and martensite reorientation. The finite-strain formulation is based on a two-tier, multiplicative decomposition of the deformation gradient into thermal, elastic, and inelastic parts, where the inelastic deformation is further split into phase transformation and martensite reorientation components. A time-discrete formulation of the constitutive equations is proposed and a numerical integration algorithm is presented featuring proper symmetrization of the tensor variables and explicit formulation of the material and spatial tangent operators involved. The algorithm is used for finite element analysis of SMA components subjected to various loading conditions, including uniaxial, non-proportional, isothermal and adiabatic loading cases. The analysis is carried out using the FEA software Abaqus by means of a user-defined material subroutine, which is then utilized to simulate a SMA archwire undergoing large strains and rotations.
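
    The two-tier multiplicative decomposition described above can be written compactly; the symbols here are generic and may differ from the paper's own notation:

```latex
% Two-tier multiplicative split of the deformation gradient
% (generic symbols; the paper's notation may differ):
%   F^{\theta}: thermal part, F^{e}: elastic part, F^{in}: inelastic part,
%   further split into phase transformation (tr) and
%   martensite reorientation (reo) components.
F = F^{\theta} F^{e} F^{in},
\qquad
F^{in} = F^{tr} F^{reo}
```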

  3. Towards ecosystem accounting

    NARCIS (Netherlands)

    Duku, C.; Rathjens, H.; Zwart, S.J.; Hein, L.

    2015-01-01

    Ecosystem accounting is an emerging field that aims to provide a consistent approach to analysing environment-economy interactions. One of the specific features of ecosystem accounting is the distinction between the capacity and the flow of ecosystem services. Ecohydrological modelling to support

  4. Meteorological and Back Trajectory Modeling for the Rocky Mountain Atmospheric Nitrogen and Sulfur Study II

    Directory of Open Access Journals (Sweden)

    Kristi A. Gebhart

    2014-01-01

    Full Text Available The Rocky Mountain Atmospheric Nitrogen and Sulfur (RoMANS II study with field operations during November 2008 through November 2009 was designed to evaluate the composition and sources of reactive nitrogen in Rocky Mountain National Park, Colorado, USA. As part of RoMANS II, a mesoscale meteorological model was utilized to provide input for back trajectory and chemical transport models. Evaluation of the model's ability to capture important transport patterns in this region of complex terrain is discussed. Previous source-receptor studies of nitrogen in this region are also reviewed. Finally, results of several back trajectory analyses for RoMANS II are presented. The trajectory mass balance (TrMB model, a receptor-based linear regression technique, was used to estimate mean source attributions of airborne ammonia concentrations during RoMANS II. Though ammonia concentrations are usually higher when there is transport from the east, the TrMB model estimates that, on average, areas to the west contribute a larger mean fraction of the ammonia. Possible reasons for this are discussed and include the greater frequency of westerly versus easterly winds, the possibility that ammonia is transported long distances as ammonium nitrate, and the difficulty of correctly modeling the transport winds in this area.

  5. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
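
    The model-averaging step can be sketched with BIC-based approximate posterior weights, w_i proportional to exp(-BIC_i/2). The model names, BIC values, and effect estimates below are hypothetical, chosen only to show the mechanics.

```python
import math

# Sketch of Bayesian model averaging over candidate pollution-health
# models using BIC-based approximate posterior model weights,
# w_i ∝ exp(-BIC_i / 2). Model names, BICs and effect estimates
# (log relative risk per unit NO2) are hypothetical.

models = {
    "deprivation_index":     {"bic": 1500.0, "beta": 0.020},
    "individual_covariates": {"bic": 1502.0, "beta": 0.028},
}

# subtract the minimum BIC before exponentiating, for numerical stability
bic_min = min(m["bic"] for m in models.values())
raw = {k: math.exp(-(m["bic"] - bic_min) / 2) for k, m in models.items()}
total = sum(raw.values())
weights = {k: v / total for k, v in raw.items()}

# model-averaged effect estimate
beta_avg = sum(weights[k] * models[k]["beta"] for k in models)
print({k: round(w, 3) for k, w in weights.items()}, round(beta_avg, 4))
```

    The averaged effect lies between the single-model estimates, weighted toward the model with the better (lower) BIC.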

  6. Hollow cathode modeling: II. Physical analysis and parametric study

    Science.gov (United States)

    Sary, Gaétan; Garrigues, Laurent; Boeuf, Jean-Pierre

    2017-05-01

    A numerical emissive hollow cathode model which couples plasma and thermal aspects of the NASA NSTAR cathode was presented in a companion paper, where simulation results obtained using the plasma model were compared to experimental data. We now compare simulation results with measurements using the full coupled model. Inside the cathode, the simulated plasma density profile agrees with the experimental data up to the ±50% experimental uncertainty, while the simulated emitter temperature differs from measurements by at most 5 K. We then proceed to an analysis of the cathode discharge, both inside the cathode, where electron emission is dominant, and outside in the near plume, where electron transport instabilities are important. As observed previously in the literature, the total emitted electron current is much larger (34 A) than the set discharge current collected at the anode (13 A), while ionization plays a negligible role. Extracted electrons are emitted from a region much shorter than the full emitter (0.9 cm versus 2.5 cm). The influence of an applied axial magnetic field in the plume is also assessed, and we observe that it leads to a 10-fold increase of the plasma density 1 cm downstream of the orifice entrance, while the simulated discharge potential at the anode increases from 10 V up to 35.5 V. Lastly, we perform a parametric study on both the operating point (discharge current, mass flow rate) and design (inner radius) of the cathode. The simulated useful operating envelope is shown to be limited at low discharge current mostly by probable ion sputtering of the emitter, and at high discharge current by emitter evaporation, plasma oscillations, and sputtering of the keeper electrode. The behavior of the cathode is also analyzed with respect to its internal radius, and simulation results show that the useful emitter length scales linearly with the cathode radius.

  7. Quantum chaos in the nuclear collective model. II. Peres lattices.

    Science.gov (United States)

    Stránský, Pavel; Hruska, Petr; Cejnar, Pavel

    2009-06-01

    This is a continuation of our paper [Phys. Rev. E 79, 046202 (2009)] devoted to signatures of quantum chaos in the geometric collective model of atomic nuclei. We apply the method by Peres to study ordered and disordered patterns in quantum spectra drawn as lattices in the plane of energy vs average of a chosen observable. Good qualitative agreement with standard measures of chaos is manifested. The method provides an efficient tool for studying structural changes in eigenstates across quantum spectra of general systems.

  8. Modeling direct interband tunneling. II. Lower-dimensional structures

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Andrew, E-mail: pandrew@ucla.edu [Department of Electrical Engineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Chui, Chi On [Department of Electrical Engineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2014-08-07

    We investigate the applicability of the two-band Hamiltonian and the widely used Kane analytical formula to interband tunneling along unconfined directions in nanostructures. Through comparisons with k·p and tight-binding calculations and quantum transport simulations, we find that the primary correction is the change in effective band gap. For both constant fields and realistic tunnel field-effect transistors, dimensionally consistent band gap scaling of the Kane formula allows analytical and numerical device simulations to approximate non-equilibrium Green's function current characteristics without arbitrary fitting. This allows efficient first-order calibration of semiclassical models for interband tunneling in nanodevices.
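
    For reference, the textbook Kane-type band-to-band tunneling generation rate has the form below; the correction discussed in the abstract then amounts to replacing E_g by a dimensionally scaled effective gap. The constants A and B lump material parameters, and this generic form is an assumption for illustration, not necessarily the paper's exact expression:

```latex
% Generic Kane-type band-to-band tunneling rate (textbook form;
% A and B lump material parameters, F is the local field):
G_{\mathrm{BTBT}} = A\,\frac{F^{2}}{\sqrt{E_g}}\,
\exp\!\left(-B\,\frac{E_g^{3/2}}{F}\right)
% The band-gap scaling discussed above replaces E_g with an
% effective, confinement-modified gap in prefactor and exponent.
```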

  9. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  10. A Type II Diabetic Model-from Insulin Resistance to Diabetes

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    1 Introduction: With rising living standards, the incidence of diabetes, especially type II diabetes, has increased year by year. Establishment of corresponding animal models has become an important tool for investigating diabetes. Diabetic animal models are classified roughly into experimental and spontaneous models. The spontaneous model has a relatively high application value. However, its extensive application has been restricted by some factors such as costliness and ...

  11. An empirical test of Birkett’s competency model for management accountants : A confirmative study conducted in the Netherlands

    NARCIS (Netherlands)

    Bots, J.M.; Groenland, E.A.G.; Swagerman, D.

    2009-01-01

    In 2002, the Accountants-in-Business section of the International Federation of Accountants (IFAC) issued the Competency Profiles for Management Accounting Practice and Practitioners report. This “Birkett Report” presents a framework for competency development during the careers of management accoun

  13. Modeling downstream fining in sand-bed rivers. II: Application

    Science.gov (United States)

    Wright, S.; Parker, G.

    2005-01-01

    In this paper the model presented in the companion paper (Wright and Parker 2005) is applied to a generic river reach typical of a large, sand-bed river flowing into the ocean in order to investigate the mechanisms controlling longitudinal profile development and downstream fining. Three mechanisms which drive downstream fining are studied: a delta prograding into standing water, sea-level rise, and tectonic subsidence. Various rates of sea-level rise (typical of the late Holocene) and tectonic subsidence are modeled in order to quantify their effects on the degree of profile concavity and downstream fining. Also, several other physical mechanisms which may affect fining are studied, including the relative importance of suspended versus bed load, the effect of the loss of sediment overbank, and the influence of the delta bottom slope. Finally, sensitivity analysis is used to show that the grain-size distribution at the interface between the active layer and substrate has a significant effect on downstream fining. © 2005 International Association of Hydraulic Engineering and Research.

  14. MODELING THE 1958 LITUYA BAY MEGA-TSUNAMI, II

    Directory of Open Access Journals (Sweden)

    Charles L. Mader

    2002-01-01

    Full Text Available Lituya Bay, Alaska is a T-shaped bay, 7 miles long and up to 2 miles wide. The two arms at the head of the bay, Gilbert and Crillon Inlets, are part of a trench along the Fairweather Fault. On July 8, 1958, a magnitude 7.5 earthquake occurred along the Fairweather fault with an epicenter near Lituya Bay. A mega-tsunami wave was generated that washed out trees to a maximum altitude of 520 meters at the entrance of Gilbert Inlet. Much of the rest of the shoreline of the Bay was denuded by the tsunami from 30 to 200 meters altitude. In the previous study it was determined that if the 520 meter high run-up was 50 to 100 meters thick, the observed inundation in the rest of Lituya Bay could be numerically reproduced. It was also concluded that further studies would require full Navier-Stokes modeling similar to that required for asteroid-generated tsunami waves. During the summer of 2000, Hermann Fritz conducted experiments that reproduced the Lituya Bay 1958 event. The laboratory experiments indicated that the 1958 Lituya Bay 524 meter run-up on the spur ridge of Gilbert Inlet could be caused by a landslide impact. The Lituya Bay impact landslide generated tsunami was modeled with the full Navier-Stokes AMR Eulerian compressible hydrodynamic code called SAGE, which includes the effect of gravity.

  15. nIFTy galaxy cluster simulations II: radiative models

    CERN Document Server

    Sembolini, Federico; Pearce, Frazer R; Power, Chris; Knebe, Alexander; Kay, Scott T; Cui, Weiguang; Yepes, Gustavo; Beck, Alexander M; Borgani, Stefano; Cunnama, Daniel; Davé, Romeel; February, Sean; Huang, Shuiyao; Katz, Neal; McCarthy, Ian G; Murante, Giuseppe; Newton, Richard D A; Perret, Valentin; Saro, Alexandro; Schaye, Joop; Teyssier, Romain

    2015-01-01

    We have simulated the formation of a massive galaxy cluster (M$_{200}^{\\rm crit}$ = 1.1$\\times$10$^{15}h^{-1}M_{\\odot}$) in a $\\Lambda$CDM universe using 10 different codes (RAMSES, 2 incarnations of AREPO and 7 of GADGET), modeling hydrodynamics with full radiative subgrid physics. These codes include Smoothed-Particle Hydrodynamics (SPH) codes, spanning traditional and advanced SPH schemes, as well as adaptive-mesh and moving-mesh codes. Our goal is to study the consistency between simulated clusters modeled with different radiative physical implementations, such as cooling, star formation and AGN feedback. We compare images of the cluster at $z=0$, global properties such as mass, and radial profiles of various dynamical and thermodynamical quantities. We find that, with respect to non-radiative simulations, dark matter is more centrally concentrated, to an extent that does not simply depend on the presence or absence of AGN feedback. The scatter in global quantities is substantially higher than for non-radiative runs. Intriguingly, a...

  16. Neurologic abnormalities in mouse models of the lysosomal storage disorders mucolipidosis II and mucolipidosis III γ.

    Directory of Open Access Journals (Sweden)

    Rachel A Idol

    Full Text Available UDP-GlcNAc:lysosomal enzyme N-acetylglucosamine-1-phosphotransferase is an α2β2γ2 hexameric enzyme that catalyzes the synthesis of the mannose 6-phosphate targeting signal on lysosomal hydrolases. Mutations in the α/β subunit precursor gene cause the severe lysosomal storage disorder mucolipidosis II (ML II) or the more moderate mucolipidosis III alpha/beta (ML III α/β), while mutations in the γ subunit gene cause the mildest disorder, mucolipidosis III gamma (ML III γ). Here we report the neurologic consequences of mouse models of ML II and ML III γ. The ML II mice have a total loss of acid hydrolase phosphorylation, which results in depletion of acid hydrolases in mesenchymal-derived cells. The ML III γ mice retain partial phosphorylation. However, in both cases total brain extracts have normal or near-normal activity of many acid hydrolases, reflecting mannose 6-phosphate-independent lysosomal targeting pathways. While behavioral deficits occur in both models, their onset is earlier and their severity greater in the ML II mice. The ML II mice undergo progressive neurodegeneration with neuronal loss, astrocytosis, microgliosis and Purkinje cell depletion, evident at 4 months, whereas ML III γ mice have only mild to moderate astrocytosis and microgliosis at 12 months. Both models accumulate the ganglioside GM2, but only ML II mice accumulate fucosylated glycans. We conclude that, in spite of active mannose 6-phosphate-independent targeting pathways in the brain, there are cell types that require at least partial phosphorylation function to avoid lysosomal dysfunction and the associated neurodegeneration and behavioral impairments.

  17. Prediction of the binding affinities of peptides to class II MHC using a regularized thermodynamic model

    Directory of Open Access Journals (Sweden)

    Mittelmann Hans D

    2010-01-01

    Full Text Available Abstract Background The binding of peptide fragments of extracellular peptides to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can utilize the limited experimental data to predict the binding affinities of peptides to class II MHC. Results We have developed the Regularized Thermodynamic Average, or RTA, method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA was shown to achieve higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined. RTA also gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register of 17 out of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, made significant contributions of at least 50% of the total binding energy for approximately 20% of the peptides. Conclusions The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/.
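
    The core of the abstract above, a thermodynamic (Boltzmann-weighted) average over all candidate peptide binding registers, can be sketched as follows. This is a minimal illustration of the averaging idea only, not the paper's RTA code; the function names, the value of RT, and the example energies are assumptions.

    ```python
    import math

    RT = 0.592  # kcal/mol at ~298 K (assumed value for illustration)

    def thermodynamic_binding_energy(register_energies, rt=RT):
        """Combine per-register binding energies (kcal/mol) into one predicted
        affinity via a Boltzmann-weighted average:
        dG = -RT * ln( sum_i exp(-dG_i / RT) )."""
        return -rt * math.log(sum(math.exp(-dg / rt) for dg in register_energies))

    def register_contributions(register_energies, rt=RT):
        """Fractional contribution of each binding register to the partition sum."""
        weights = [math.exp(-dg / rt) for dg in register_energies]
        total = sum(weights)
        return [w / total for w in weights]
    ```

    Under this averaging, any suboptimal register whose energy lies within roughly RT of the best register contributes appreciably to the partition sum, which is consistent with the paper's observation that suboptimal registers can carry a large share of the total binding energy.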

  18. A Long-Term Memory Competitive Process Model of a Common Procedural Error. Part II: Working Memory Load and Capacity

    Science.gov (United States)

    2013-07-01

    A Long-Term Memory Competitive Process Model of a Common Procedural Error, Part II: Working Memory Load and Capacity. Franklin P. Tamborello, II. 2013. Related publication: Tamborello, F. P., & Trafton, J. G. (2013). A long-term competitive process model of a common procedural error. In Proceedings of the 35th ...

  19. AMERICAN ACCOUNTING

    Directory of Open Access Journals (Sweden)

    Mihaela Onica

    2005-01-01

    Full Text Available The International Accounting Standards already contribute to the generation of better and more easily comparable financial information at the international level, thus supporting a more effective allocation of investment resources in the world. Under these circumstances, a consistent application of the standards at the global level becomes necessary. The financial statements are part of the financial reporting process. A complete set of financial statements usually includes a balance sheet, a profit and loss account, a report of changes in financial position (which can be presented in various ways, for example as a statement of cash flows or of funds flows), as well as the notes and other explanatory statements and material that form part of the financial statements.

  20. Polarized Molecular Orbital Model Chemistry. II. The PMO Method.

    Science.gov (United States)

    Zhang, Peng; Fiedler, Luke; Leverentz, Hannah R; Truhlar, Donald G; Gao, Jiali

    2011-04-12

    We present a new semiempirical molecular orbital method based on neglect of diatomic differential overlap. This method differs from previous NDDO-based methods in that we include p orbitals on hydrogen atoms to provide a more realistic modeling of polarizability. As in AM1-D and PM3-D, we also include damped dispersion. The formalism is based on the original MNDO one, but in the process of parameterization we make some specific changes to some of the functional forms. The present article is a demonstration of the capability of the new approach, and it presents a successful parametrization for compounds composed only of hydrogen and oxygen atoms, including the important case of water clusters.
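
    The damped dispersion mentioned above (as in AM1-D and PM3-D) is typically an atom-pairwise C6/R^6 term multiplied by a short-range damping function. A minimal sketch of that generic form follows; the parameter values, function names, and the Fermi-type damping shape are illustrative assumptions, not the published PMO parameterization.

    ```python
    import math

    def damped_dispersion(pairs, s6=1.0, d=20.0):
        """Pairwise damped dispersion correction:
        E = -s6 * sum_ij (C6_ij / R_ij^6) * f_damp(R_ij),
        with a Fermi-type damping f_damp = 1 / (1 + exp(-d * (R_ij / R0_ij - 1))).
        `pairs` is a list of (r_ij, c6_ij, r0_ij) tuples in consistent units."""
        energy = 0.0
        for r, c6, r0 in pairs:
            f_damp = 1.0 / (1.0 + math.exp(-d * (r / r0 - 1.0)))
            energy -= s6 * c6 / r**6 * f_damp
        return energy
    ```

    The damping function switches the correction off at short range, where the semiempirical Hamiltonian already describes the interaction, while recovering the attractive -C6/R^6 tail at large separations.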