WorldWideScience

Sample records for simple cost model

  1. The Economics of Vaccinating or Dosing Cattle against Disease: A Simple Linear Cost-Benefit Model with Modifications

    OpenAIRE

    Tisdell, Clem; Ramsay, Gavin

    1995-01-01

    Outlines a simple linear cost-benefit model for determining whether it is economic at the farm level to vaccinate or dose a batch of livestock against a disease. This model assumes that total benefits and costs are proportional to the number of animals vaccinated. This model is then modified to allow for the possibility of programmes of vaccination or disease prevention involving start-up costs which increase, but at a decreasing rate, with batch size or with the size of the herd to be vaccina...
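
The two model variants can be sketched as follows; the square-root form of the concave start-up cost and all parameter names are illustrative assumptions, not the authors' specification:

```python
import math

def net_benefit_linear(n, benefit_per_head, cost_per_head):
    """Linear model: total benefit and total cost are both proportional
    to the number of animals n, so the decision is independent of n."""
    return n * (benefit_per_head - cost_per_head)

def net_benefit_with_startup(n, benefit_per_head, cost_per_head, k):
    """Modification: add a start-up cost that increases with batch size
    but at a decreasing rate (modelled here, illustratively, as k*sqrt(n)).
    Vaccination then only pays above a threshold batch size."""
    return n * (benefit_per_head - cost_per_head) - k * math.sqrt(n)
```

Under the linear model one vaccinates iff the per-head benefit exceeds the per-head cost; with the concave start-up term, small batches can be uneconomic even when large ones are not.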

  2. Development of a simple estimation tool for LMFBR construction cost

    International Nuclear Information System (INIS)

    Yoshida, Kazuo; Kinoshita, Izumi

    1999-01-01

    A simple tool for estimating the construction costs of liquid-metal-cooled fast breeder reactors (LMFBRs), 'Simple Cost', was developed in this study. Simple Cost is based on a new estimation formula that can reduce the amount of design data required to estimate construction costs. Consequently, Simple Cost can be used to estimate the construction costs of innovative LMFBR concepts for which detailed design has not been carried out. The results of test calculations show that Simple Cost provides cost estimations equivalent to those obtained with conventional methods within the range of plant power from 325 to 1500 MWe. Sensitivity analyses for typical design parameters were conducted using Simple Cost. The effects of four major parameters - reactor vessel diameter, core outlet temperature, sodium handling area and number of secondary loops - on the construction costs of LMFBRs were evaluated quantitatively. The results show that the reduction of sodium handling area is particularly effective in reducing construction costs. (author)

  3. The Structured Intuitive Model for Product Line Economics (SIMPLE)

    National Research Council Canada - National Science Library

    Clements, Paul C; McGregor, John D; Cohen, Sholom G

    2005-01-01

    .... This report presents the Structured Intuitive Model of Product Line Economics (SIMPLE), a general-purpose business model that supports the estimation of the costs and benefits in a product line development organization...

  4. The cost of leg forces in bipedal locomotion: a simple optimization study.

    Directory of Open Access Journals (Sweden)

    John R Rebula

    Full Text Available Simple optimization models show that bipedal locomotion may largely be governed by the mechanical work performed by the legs, minimization of which can automatically discover walking and running gaits. Work minimization can reproduce broad aspects of human ground reaction forces, such as a double-peaked profile for walking and a single peak for running, but the predicted peaks are unrealistically high and impulsive compared to the much smoother forces produced by humans. The smoothness might be explained better by a cost for the force rather than work produced by the legs, but it is unclear what features of force might be most relevant. We therefore tested a generalized force cost that can penalize force amplitude or its n-th time derivative, raised to the p-th power (or p-norm), across a variety of combinations for n and p. A simple model shows that this generalized force cost only produces smoother, human-like forces if it penalizes the rate rather than amplitude of force production, and only in combination with a work cost. Such a combined objective reproduces the characteristic profiles of human walking (R² = 0.96) and running (R² = 0.92), more so than minimization of either work or force amplitude alone (R² = -0.79 and R² = 0.22, respectively, for walking). Humans might find it preferable to avoid rapid force production, which may be mechanically and physiologically costly.
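
The generalized force cost described above can be evaluated numerically for a sampled force trace; this finite-difference sketch is one plausible discretization, not the paper's implementation:

```python
import numpy as np

def generalized_force_cost(force, dt, n=1, p=2):
    """Cost = integral of |d^n F / dt^n|^p dt, evaluated numerically.
    n = 0 penalizes force amplitude; n >= 1 penalizes its rate (or a
    higher derivative), which the study found necessary, together with a
    work cost, for smooth, human-like ground reaction forces."""
    f = np.asarray(force, dtype=float)
    for _ in range(n):
        f = np.gradient(f, dt)          # successive time derivatives
    return float(np.sum(np.abs(f) ** p) * dt)
```

A narrow impulsive force profile incurs a much higher rate cost (n = 1) than a smooth hump of comparable scale, even though its amplitude cost (n = 0) can be lower, which is the distinction the abstract turns on.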

  5. Low Cost, Simple, Intrauterine Insemination Procedure with ...

    African Journals Online (AJOL)

    During the last 30 years however, intrauterine insemination has evolved with the introduction of ovulation stimulating protocols and sperm preparation methods taken from assisted reproduction techniques. Costs have risen, but the success rate has not risen to the same extent. We have therefore developed a quite simple ...

  6. Locally Simple Models Construction: Methodology and Practice

    Directory of Open Access Journals (Sweden)

    I. A. Kazakov

    2017-12-01

    Full Text Available One of the most notable trends associated with the Fourth Industrial Revolution is a significant strengthening of the role played by semantic methods. They are employed in artificial intelligence, in mining knowledge from huge flows of big data, in robotization, and in the Internet of Things. Smart contracts can also be mentioned here, although the ’intelligence’ of smart contracts still needs serious elaboration. These trends should inevitably lead to an increased role for logical methods working with semantics, and significantly expand the scope of their practical application. However, a number of problems hinder this process. We are developing an approach which makes the application of logical modeling efficient in some important areas. The approach is based on the concept of locally simple models and is primarily focused on solving management tasks in enterprises, organizations, and governing bodies. The most important feature of locally simple models is their ability to replace software systems. Replacing programming with modeling brings huge advantages; for instance, it dramatically reduces development and support costs. Modeling, unlike programming, preserves the explicit semantics of models, allowing integration with artificial intelligence and robots. In addition, models are much more understandable to non-specialists than programs are. In this paper we propose an implementation of the concept of locally simple modeling on the basis of so-called document models, which we developed earlier. It is shown that locally simple modeling is realized through document models with finite submodel coverages. In the second part of the paper, the use of document models to solve a management problem of realistic complexity is demonstrated.

  7. Simple and Low-Cost Wireless Distributed Measurement System

    Directory of Open Access Journals (Sweden)

    Alessandra Flammini

    2007-07-01

    Full Text Available This paper describes the design and realization of a simple and low-cost system for distributed measurements. Traditional handheld digital multimeters have been equipped with a radio-frequency interface in order to implement what the authors call the WDMM, the basic block of a wireless multi-probe data logger. The new functionality requires very few components and results in a cost increase of less than $10. In addition, maintenance has been facilitated, since tracking data such as working state or last calibration time are available to the user. Data inquiry can be performed by a purposely designed module that has the same hardware as the WDMM but a different user interface, or by a PDA (Personal Digital Assistant) or a traditional personal computer thanks to a USB connection. Simple supervisory software has been realized under the LabVIEW graphical programming environment.

  8. Simple Models for the Dynamic Modeling of Rotating Tires

    Directory of Open Access Journals (Sweden)

    J.C. Delamotte

    2008-01-01

    Full Text Available Large Finite Element (FE) models of tires are currently used to predict low frequency behavior and to obtain dynamic model coefficients used in multi-body models for ride and comfort. However, to predict higher frequency behavior, which may explain irregular wear, critical rotating speeds, and noise radiation, FE models are not practical. Detailed FE models are not adequate for optimization and uncertainty predictions either, as in such applications the dynamic solution must be computed a number of times. Therefore, there is a need for simpler models that can capture the physics of the tire and be used to compute the dynamic response with a low computational cost. In this paper, the spectral (or continuous) element approach is used to derive such a model. A circular beam spectral element that takes into account the string effect is derived, and a method to simulate the response to a rotating force is implemented in the frequency domain. The behavior of a circular ring under different internal pressures is investigated using modal and frequency/wavenumber representations. Experimental results obtained with a real untreaded truck tire are presented and qualitatively compared with the simple model predictions, with good agreement. No attempt is made to obtain equivalent parameters for the simple model from the real tire results. On the other hand, the simple model fails to represent the correct variation of the ratio of natural frequency to number of circumferential wavelengths with the mode count. Nevertheless, some important features of the real tire dynamic behavior, such as the generation of standing waves and part of the frequency/wavenumber behavior, can be investigated using the proposed simplified model.

  9. A comprehensive cost model for NASA data archiving

    Science.gov (United States)

    Green, J. L.; Klenk, K. F.; Treinish, L. A.

    1990-01-01

    A simple archive cost model has been developed to help predict NASA's archiving costs. The model covers data management activities from the beginning of the mission through launch, acquisition, and support of retrospective users by the long-term archive; it is capable of determining the life cycle costs for archived data depending on how the data need to be managed to meet user requirements. The model, which currently contains 48 equations with a menu-driven user interface, is available for use on an IBM PC or AT.

  10. Operational strategy and marginal costs in simple trigeneration systems

    International Nuclear Information System (INIS)

    Lozano, M.A.; Carvalho, M.; Serra, L.M.

    2009-01-01

    As a direct result of economic pressures to cut expenses, as well as the legal obligation to reduce emissions, companies and businesses are seeking ways to use energy more efficiently. Trigeneration systems (CHCP: Combined Heating, Cooling and Power generation) allow greater operational flexibility at sites with a variable demand for energy in the form of heating and cooling. This is particularly relevant in buildings where the need for heating is restricted to a few winter months. In summer, the absorption chillers make use of the cogenerated heat to produce chilled water, avoiding waste heat discharge. The operation of a simple trigeneration system is analyzed in this paper. The system is interconnected to the electric utility grid, both to receive electricity and to deliver surplus electricity. For any given demand required by the users, a great number of operating conditions are possible. A linear programming model provides the operational mode with the lowest variable cost. A thermoeconomic analysis, based on marginal production costs, is used to obtain unit costs for internal energy flows and final products as well as to explain the best operational strategy as a function of the demand for energy services and the prices of the resources consumed. (author)

  11. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    Science.gov (United States)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. The change of water content leads to a change of the indicator's fluorescence color under ultraviolet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L∗a∗b∗, u′v′, HSV, and YCbCr have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd-order polynomial regression model along with HSV in a linear domain achieves a minimum mean square error of 1.06% for a 3-fold cross-validation method. Additionally, the resultant water content estimation model is implemented and evaluated on an off-the-shelf Android-based smartphone.
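
The regression step can be sketched as follows; the calibration pairs of colour feature versus water content are invented for illustration, with a single hue-like scalar standing in for the paper's colour-space features:

```python
import numpy as np

# Hypothetical calibration data: one colour feature (e.g. hue extracted
# from a captured sample image) versus known water content in %.
hue = np.array([0.05, 0.10, 0.18, 0.27, 0.38, 0.52, 0.68])
water = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])

# 2nd-order polynomial regression, as in the paper's estimation model.
coeffs = np.polyfit(hue, water, deg=2)
predict = np.poly1d(coeffs)

# Training-set mean square error; the paper instead reports 3-fold
# cross-validation error, which would require held-out splits.
mse = float(np.mean((water - predict(hue)) ** 2))
```

In practice one would extract the feature per pixel region, average it, and apply `predict` to estimate water content of a new sample.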

  12. Bioinformatics tools for development of fast and cost effective simple ...

    African Journals Online (AJOL)

    Bioinformatics tools for development of fast and cost effective simple sequence repeat ... comparative mapping and exploration of functional genetic diversity in the ... Already, a number of computer programs have been implemented that aim at ...

  13. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case ...
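
The group-cyclic distribution family interpolates between the block and cyclic distributions; the indexing convention in this sketch is an assumption based on common BSP formulations, not necessarily the paper's exact definition:

```python
def group_cyclic_proc(j, n, p, c):
    """Processor owning vector index j under a group-cyclic distribution
    with cycle c (assumes c divides p and p divides n).
    c == 1 gives the block distribution, c == p the cyclic distribution;
    intermediate values let a parallel FFT switch between phases without
    a full data redistribution, reducing communication."""
    group_block = n * c // p   # contiguous chunk owned by one group of c processors
    return (j // group_block) * c + (j % c)
```

For n = 8 and p = 4, c = 1 yields the block pattern 0,0,1,1,2,2,3,3 and c = 4 the cyclic pattern 0,1,2,3,0,1,2,3, with c = 2 in between.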

  14. Improved cost models for optimizing CO2 pipeline configuration for point-to-point pipelines and simple networks

    NARCIS (Netherlands)

    Knoope, M. M. J.|info:eu-repo/dai/nl/364248149; Guijt, W.; Ramirez, A.|info:eu-repo/dai/nl/284852414; Faaij, A. P. C.

    In this study, a new cost model is developed for CO2 pipeline transport, which starts with the physical properties of CO2 transport and includes different kinds of steel grades and up-to-date material and construction costs. This pipeline cost model is used for a newly developed tool to determine the

  15. Complexity-aware simple modeling.

    Science.gov (United States)

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Simple Tidal Prism Models Revisited

    Science.gov (United States)

    Luketina, D.

    1998-01-01

    Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most textbooks on estuaries. The appeal of these models is their simplicity. However, there are several flaws in the logic behind them. These flaws are pointed out, and a more theoretically correct simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which cannot.
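
The textbook estimate that such models build on can be written down directly; the form below, with an optional return-flow factor, is the generic textbook version, not the corrected model the paper derives:

```python
def tidal_flushing_time(volume, prism, tidal_period_h, return_flow=0.0):
    """Classical tidal prism estimate of flushing time (hours) for a
    well-mixed estuary: T_f = V * T / (P * (1 - b)), where V is estuary
    volume, P the tidal prism, T the tidal period, and b the fraction of
    ebb water that returns on the next flood. b = 0 recovers the simplest
    textbook model whose logical flaws the paper discusses."""
    return volume * tidal_period_h / (prism * (1.0 - return_flow))
```

A larger return-flow fraction lengthens the flushing time, since returning ebb water re-introduces previously flushed material.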

  17. A simple and low-cost recirculating aquaculture system for the production of arapaima juveniles

    OpenAIRE

    Burton, Andrew Mark; Moncayo Calderero, Edwin; Burgos Moran, Ricardo Ernesto; Anastacio Sánchez, Rogelio Lumbes; Avendaño Villamar, Ulises Tiberio; Ortega Torres, Nelson Guillermo

    2016-01-01

    A simple and low-cost recirculating system (RAS) for production of arapaima  (Arapaima gigas) juveniles is described. Twenty arapaima fry (mean 13.0 cm, 12.0 g) were housed in three production tanks and fed a high HUFA diet resulting in 90% of fry successfully progressing to juveniles (mean 17.4 cm long; 40.2 g). The fish were then reared for a further 72 days fed on commercial extruded pellet feed achieving a mean length of 42.6 cm and 656.6 g. The simple and low-cost RAS holds good potentia...

  18. A simple technique to manipulate foraging costs in seed-eating birds

    NARCIS (Netherlands)

    Koetsier, Egbert; Verhulst, Simon

    Food availability is a key factor in ecology and evolution, but available techniques to manipulate the effort to acquire food in vertebrates are technically challenging and/or labour intensive. We present a simple technique to increase foraging costs in seed-eating birds that can be applied with

  19. A Simple Exoskeleton That Assists Plantarflexion Can Reduce the Metabolic Cost of Human Walking

    Science.gov (United States)

    Malcolm, Philippe; Derave, Wim; Galle, Samuel; De Clercq, Dirk

    2013-01-01

    Background Even though walking can be sustained for great distances, considerable energy is required for plantarflexion around the instant of opposite leg heel contact. Different groups have attempted to reduce metabolic cost with exoskeletons, but none could achieve a reduction below the level of walking without an exoskeleton, possibly because there is no consensus on the optimal actuation timing. The main research question of our study was whether it is possible to obtain a higher reduction in metabolic cost by tuning the actuation timing. Methodology/Principal Findings We measured metabolic cost by means of respiratory gas analysis. Test subjects walked with a simple pneumatic exoskeleton that assists plantarflexion with different actuation timings. We found that the exoskeleton can reduce metabolic cost by 0.18±0.06 W kg−1 or 6±2% (standard error of the mean) (p = 0.019) below the cost of walking without an exoskeleton if actuation starts just before opposite leg heel contact. Conclusions/Significance The optimum timing that we found concurs with the prediction from a mathematical model of walking. While the present exoskeleton was not ambulant, measurements of joint kinetics reveal that the required power could be recycled from knee extension deceleration work that occurs naturally during walking. This demonstrates that it is theoretically possible to build future ambulant exoskeletons that reduce metabolic cost, without power supply restrictions. PMID:23418524

  20. A simple model for indentation creep

    Science.gov (United States)

    Ginder, Ryan S.; Nix, William D.; Pharr, George M.

    2018-03-01

    A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.

  1. A heuristic model for risk and cost impacts of plant outage maintenance schedule

    International Nuclear Information System (INIS)

    Mohammad Hadi Hadavi, S.

    2009-01-01

    Cost and risk are two major competing criteria in maintenance optimization problems. If a plant is forced to shut down because of an accident, or for fear of an accident happening, then besides the loss of revenue it damages the credibility and reputation of the business operation. In this paper a heuristic model incorporating three compelling optimization criteria (i.e., risk, cost, and loss) into a single evaluation function is proposed. Such a model could be used in the evaluation engine of any outage maintenance schedule optimizer. An attempt is made to keep the model realistic and to address the ongoing challenges facing a schedule planner in a simple and commonly understandable fashion. Two simple competing schedules for the NPP feedwater system are examined against the model. The results show that while the model successfully addresses the current challenges of outage maintenance optimization, it properly demonstrates the dynamics of a schedule with regard to the risk, cost, and losses incurred under a maintenance schedule, particularly when a prolonged outage, or lack of maintenance for equipment in need of urgent care, is of concern.

  2. Formation of decontamination cost calculation model for severe accident consequence assessment

    International Nuclear Information System (INIS)

    Silva, Kampanart; Promping, Jiraporn; Okamoto, Koji; Ishiwatari, Yuki

    2014-01-01

    In previous studies, the authors developed an index, “cost per severe accident”, to perform a severe accident consequence assessment that can cover various kinds of accident consequences, namely health effects and economic, social and environmental impacts. Though decontamination cost was identified as a major component, it was taken into account using simple and conservative assumptions, which made further discussion difficult. The decontamination cost calculation model was therefore reconsidered. 99 parameters were selected to take into account all decontamination-related issues, and the decontamination cost calculation model was formed. The distributions of all parameters were determined. A sensitivity analysis using the Morris method was performed in order to identify important parameters that have a large influence on the cost per severe accident and a large extent of interaction with other parameters. We identified 25 important parameters, and fixed the most negligible parameters to the medians of their distributions to form a simplified decontamination cost calculation model. Calculations of the cost per severe accident with the full model (all parameters distributed) and with the simplified model were performed and compared. The differences in the cost per severe accident and its components were not significant, which ensures the validity of the simplified model. The simplified model is used to perform a full-scope calculation of the cost per severe accident, which is compared with the previous study. The decontamination cost increased its importance significantly. (author)

  3. Simple spherical ablative-implosion model

    International Nuclear Information System (INIS)

    Mayer, F.J.; Steele, J.T.; Larsen, J.T.

    1980-01-01

    A simple model of the ablative implosion of a high-aspect-ratio (shell radius to shell thickness ratio) spherical shell is described. The model is similar in spirit to Rosenbluth's snowplow model. The scaling of the implosion time was determined in terms of the ablation pressure and the shell parameters such as diameter, wall thickness, and shell density, and compared these to complete hydrodynamic code calculations. The energy transfer efficiency from ablation pressure to shell implosion kinetic energy was examined and found to be very efficient. It may be possible to attach a simple heat-transport calculation to our implosion model to describe the laser-driven ablation-implosion process. The model may be useful for determining other energy driven (e.g., ion beam) implosion scaling

  4. A simple, scalable and low-cost method to generate thermal diagnostics of a domestic building

    International Nuclear Information System (INIS)

    Papafragkou, Anastasios; Ghosh, Siddhartha; James, Patrick A.B.; Rogers, Alex; Bahaj, AbuBakr S.

    2014-01-01

    Highlights: • Our diagnostic method uses a single field measurement from a temperature logger. • Building technical performance and occupant behaviour are addressed simultaneously. • Our algorithm learns a thermal model of a home and diagnoses the heating system. • We propose a novel clustering approach to decouple user behaviour from technical performance. • Our diagnostic confidence is enhanced using a large-scale deployment. - Abstract: Traditional approaches to understanding the problem of energy performance in the domestic sector include on-site surveys by energy assessors and the installation of complex home energy monitoring systems. The time and money that need to be invested by the occupants, and the form of feedback generated by these approaches, often make them unattractive to householders. This paper demonstrates a simple, low-cost method that generates thermal diagnostics for dwellings by measuring only one field dataset: internal temperature over a period of one week. A thermal model, which is essentially a learning algorithm, generates a set of thermal diagnostics about the primary heating system, the occupants’ preferences, and the impact of certain interventions, such as lowering the thermostat set-point. A simple clustering approach is also proposed to categorise homes according to their building fabric thermal performance and the occupants’ energy efficiency with respect to ventilation. The advantage of this clustering approach is that the occupants receive tailored advice on certain actions that, if taken, will improve the overall thermal performance of the dwelling. Due to the method’s low cost and simplicity, it could facilitate government initiatives such as the ‘Green Deal’ in the UK.
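
The learning step can be illustrated with a first-order (single time constant) cool-down fit; this is a hedged sketch that assumes a known, constant outdoor temperature, which is simpler than the paper's algorithm:

```python
import numpy as np

def estimate_time_constant(temps, dt_hours, t_out):
    """Estimate a building's thermal time constant tau (hours) from a
    heating-off cool-down: T(t) - T_out = (T(0) - T_out) * exp(-t / tau).
    Taking logs makes this linear in t with slope -1/tau, so a least
    squares line fit recovers tau. A long tau suggests well-insulated
    fabric; a short tau, a leaky one."""
    temps = np.asarray(temps, dtype=float)
    t = np.arange(len(temps)) * dt_hours
    y = np.log(temps - t_out)            # exponential decay becomes linear in t
    slope, _ = np.polyfit(t, y, 1)
    return -1.0 / slope
```

Estimates like this, computed per dwelling, are the kind of feature a clustering step could use to group homes by fabric performance.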

  5. A Cost Model for Integrated Logistic Support Activities

    Directory of Open Access Journals (Sweden)

    M. Elena Nenni

    2013-01-01

    Full Text Available An Integrated Logistic Support (ILS) service has the objective of improving a system’s efficiency and availability over its life cycle. The system constructor offers the service to the customer, thereby becoming the Contractor Logistic Support (CLS). The aim of this paper is to propose an approach to support the CLS in budget formulation. Specific goals of the model are the provision of the annual cost of ILS activities through a specific cost model and a comprehensive examination of expected benefits, costs, and savings under alternative ILS strategies. A simple example derived from an industrial application is also provided to illustrate the idea. The scientific literature is lacking on the topic, and documents from the military deal only with the issue of performance measurement; moreover, they are naturally focused on the customer’s perspective. Other scientific papers are general and focused only on maintenance or life cycle management. The model developed in this paper approaches the problem from the perspective of the CLS, and it is specifically tailored to the main issues of an ILS service.

  6. Comparison between the SIMPLE and ENERGY mixing models

    International Nuclear Information System (INIS)

    Burns, K.J.; Todreas, N.E.

    1980-07-01

    The SIMPLE and ENERGY mixing models were compared in order to investigate the limitations of SIMPLE's analytically formulated mixing parameter, relative to the experimentally calibrated ENERGY mixing parameters. For interior subchannels, it was shown that when the SIMPLE and ENERGY parameters are reduced to a common form, there is good agreement between the two models for a typical fuel geometry. However, large discrepancies exist for typical blanket (lower P/D) geometries. Furthermore, the discrepancies between the mixing parameters result in significant differences in terms of the temperature profiles generated by the ENERGY code utilizing these mixing parameters as input. For edge subchannels, the assumptions made in the development of the SIMPLE model were extended to the rectangular edge subchannel geometry used in ENERGY. The resulting effective eddy diffusivities (used by the ENERGY code) associated with the SIMPLE model are again closest to those of the ENERGY model for the fuel assembly geometry. Finally, the SIMPLE model's neglect of a net swirl effect in the edge region is most limiting for assemblies exhibiting relatively large radial power skews

  7. A simple model for binary star evolution

    International Nuclear Information System (INIS)

    Whyte, C.A.; Eggleton, P.P.

    1985-01-01

    A simple model for calculating the evolution of binary stars is presented. Detailed stellar evolution calculations of stars undergoing mass and energy transfer at various rates are reported and used to identify the dominant physical processes which determine the type of evolution. These detailed calculations are used to calibrate the simple model and a comparison of calculations using the detailed stellar evolution equations and the simple model is made. Results of the evolution of a few binary systems are reported and compared with previously published calculations using normal stellar evolution programs. (author)

  8. Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model

    DEFF Research Database (Denmark)

    Møller, Niels Framroze

    This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector-Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its stru.... Furthermore, it is demonstrated how other controversial hypotheses, such as Rational Expectations, can be formulated directly as restrictions on the CVAR parameters. A simple example of a "Neoclassical synthetic" AS-AD model is also formulated. Finally, the partial-general equilibrium distinction is related to the CVAR as well. Further fundamental extensions and advances to more sophisticated theory models, such as those related to dynamics and expectations (in the structural relations), are left for future papers.

  9. A Simple Model for Complex Fabrication of MEMS based Pressure Sensor: A Challenging Approach

    Directory of Open Access Journals (Sweden)

    Himani SHARMA

    2010-08-01

    Full Text Available In this paper we present a simple model for the complex fabrication of a MEMS-based absolute micro pressure sensor. This kind of modeling is extremely useful for determining the complexity of the fabrication steps and provides complete information about the process sequence to be followed during manufacturing. Therefore, the need for test iterations decreases, and cost and time can be reduced significantly. Using the DevEdit tool (part of the SILVACO suite), a behavioral model of the pressure sensor is presented and implemented.

  10. Long-range planning cost model for support of future space missions by the deep space network

    Science.gov (United States)

    Sherif, J. S.; Remer, D. S.; Buchanan, H. R.

    1990-01-01

    A simple model is suggested for long-range planning cost estimates for Deep Space Network (DSN) support of future space missions. The model estimates total DSN preparation costs and the annual distribution of these costs for long-range budgetary planning. The cost model is based on actual DSN preparation costs from four space missions: Galileo, Voyager (Uranus), Voyager (Neptune), and Magellan. The model was tested against the four projects and gave cost estimates ranging from 18 percent above the actual total preparation costs of the projects to 25 percent below. The model was also compared to two other independent projects: Viking and Mariner Jupiter/Saturn (MJS, which later became Voyager). The model gave cost estimates ranging from 2 percent (for Viking) to 10 percent (for MJS) below the actual total preparation costs of these missions.

  11. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    Science.gov (United States)

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

    To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
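
    The allocation logic described above can be sketched as a tiny linear model: with a fixed budget and purely linear costs and benefits, the optimal policy funds programs in decreasing benefit-cost order up to their caps. The program names echo the abstract, but every number below is a hypothetical stand-in, not data from the study.

```python
# Minimal sketch of a linear budget-allocation model: each program has a cost
# and a health benefit (e.g., QALYs) per unit of coverage, plus a coverage cap.
# For a purely linear objective, funding programs greedily in decreasing
# benefit/cost order is optimal. All numbers are illustrative assumptions.

def allocate(budget, programs):
    """programs: list of (name, unit_cost, unit_benefit, max_units)."""
    plan = {}
    # Spend on the best benefit/cost ratio first, up to each program's cap.
    for name, cost, benefit, cap in sorted(
            programs, key=lambda p: p[2] / p[1], reverse=True):
        units = min(cap, budget / cost)
        plan[name] = units
        budget -= units * cost
        if budget <= 0:
            break
    return plan

programs = [
    ("CBE", 1.0, 5.0, 100),   # inexpensive community-based education
    ("ART", 4.0, 10.0, 200),  # antiretroviral therapy scale-up
    ("PrEP", 8.0, 6.0, 150),  # preexposure prophylaxis
]
plan = allocate(1000.0, programs)
```

    With these illustrative numbers the greedy solution mirrors the abstract's qualitative ordering: CBE is fully funded first, then ART, with only residual budget going to PrEP.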

  12. Simple models with ALICE fluxes

    CERN Document Server

    Striet, J

    2000-01-01

    We introduce two simple models which feature an Alice electrodynamics phase. In a well-defined sense, the Alice flux solutions we obtain in these models obey first-order equations similar to those of the Nielsen-Olesen flux tube in the Abelian Higgs model in the Bogomol'nyi limit. Some numerical solutions are presented as well.

  13. A simple and low-cost biofilm quantification method using LED and CMOS image sensor.

    Science.gov (United States)

    Kwak, Yeon Hwa; Lee, Junhee; Lee, Junghoon; Kwak, Soo Hwan; Oh, Sangwoo; Paek, Se-Hwan; Ha, Un-Hwan; Seo, Sungkyu

    2014-12-01

    A novel biofilm detection platform, which consists of a cost-effective red, green, and blue light-emitting diode (RGB LED) as a light source and a lens-free CMOS image sensor as a detector, is designed. This system can measure the diffraction patterns of cells from their shadow images and gather light-absorbance information according to the concentration of biofilms through a simple image-processing procedure. Compared to a bulky and expensive commercial spectrophotometer, this platform can provide accurate and reproducible biofilm concentration detection and is simple, compact, and inexpensive. Biofilms originating from various bacterial strains, including Pseudomonas aeruginosa (P. aeruginosa), were tested to demonstrate the efficacy of this new biofilm detection approach, and the results were compared with those obtained from a commercial spectrophotometer. To utilize a cost-effective light source (i.e., an LED) for biofilm detection, the illumination conditions were optimized. For accurate and reproducible biofilm detection, a simple, custom-coded image-processing algorithm was developed and applied to a five-megapixel CMOS image sensor, which is a cost-effective detector. The concentration of biofilms formed by P. aeruginosa was detected and quantified by varying the indole concentration, and the results were compared with those obtained from a commercial spectrophotometer. The correlation between the results from the two systems was 0.981 (N = 9, P […]), demonstrating the accuracy of the proposed CMOS image-sensor platform. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Design, construction and commissioning of a simple, low cost permanent magnet quadrupole doublet

    International Nuclear Information System (INIS)

    Conard, E.M.; Parcell, S.K.; Arnott, D.W.

    1999-01-01

    In the framework of new beam line developments at the Australian National Medical Cyclotron, a permanent magnet quadrupole doublet was designed and built entirely in house. The design proceeded from the classical work by Halbach et al. but emphasised the 'low cost' aspect by using simple rectangular NdFeB blocks and simple assembly techniques. Numerical simulations using the (2-D) Gemini code were performed to check the field strength and homogeneity predictions of analytical calculations. This paper gives the reasons for the selection of a permanent magnet, the design and construction details of the quadrupole doublet and its field measurement results. (authors)

  15. PV O&M Cost Model and Cost Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Andy

    2017-03-15

    This is a presentation on PV O&M cost model and cost reduction for the annual Photovoltaic Reliability Workshop (2017), covering estimating PV O&M costs, polynomial expansion, and implementation of Net Present Value (NPV) and reserve account in cost models.
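
    As a rough illustration of the NPV and reserve-account ideas named in the presentation, the sketch below discounts an O&M cost series and sizes a level annual reserve payment with the same present value. The discount rate, horizon, and flat cost profile are illustrative assumptions, not figures from the workshop.

```python
# Hedged sketch: Net Present Value of an O&M cost series, and a level annual
# reserve payment (an annuity) whose NPV matches it. All numbers illustrative.

def npv(rate, cashflows):
    # cashflows[t] is assumed to occur at the end of year t+1
    return sum(c / (1 + rate) ** (t + 1) for t, c in enumerate(cashflows))

def level_reserve_payment(rate, cashflows):
    # Annuity payment over the same horizon with the same present value.
    pv = npv(rate, cashflows)
    n = len(cashflows)
    annuity_factor = (1 - (1 + rate) ** -n) / rate
    return pv / annuity_factor

costs = [20.0] * 25                      # $/kW-yr O&M for 25 years (illustrative)
present_value = npv(0.06, costs)
payment = level_reserve_payment(0.06, costs)
```

    For a flat cost series the level payment reproduces the annual cost exactly, a quick sanity check; real PV O&M profiles are lumpy (e.g., periodic inverter replacement), which is where a funded reserve account earns its keep.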

  16. Simple Automatic File Exchange (SAFE) to Support Low-Cost Spacecraft Operation via the Internet

    Science.gov (United States)

    Baker, Paul; Repaci, Max; Sames, David

    1998-01-01

    Various issues associated with Simple Automatic File Exchange (SAFE) are presented in viewgraph form. Specific topics include: 1) Packet telemetry, Internet IP networks and cost reduction; 2) Basic functions and technical features of SAFE; 3) Project goals, including low-cost satellite transmission to data centers to be distributed via an Internet; 4) Operations with a replicated file protocol; 5) File exchange operation; 6) Ground stations as gateways; 7) Lessons learned from demonstrations and tests with SAFE; and 8) Feedback and future initiatives.

  17. KrF laser cost/performance model for ICF commercial applications

    International Nuclear Information System (INIS)

    Harris, D.B.; Pendergrass, J.H.

    1985-01-01

    Simple expressions suitable for use in commercial-applications plant parameter studies for the direct capital cost plus indirect field costs, and for the efficiency as a function of repetition rate, were developed for pure-optical-compression KrF laser fusion drivers. These simple expressions summarize estimates obtained from detailed cost-performance studies incorporating recent results of ongoing physics, design, and cost studies. Contributions of KrF laser capital charges and O&M costs to total levelized constant-dollar (1984) unit ICF power generation cost are estimated as a function of plant size and driver pulse energy, using a published gain for short-wavelength lasers and representative values of plant parameters.

  18. Simple, Safe, and Cost-Effective Technique for Resected Stomach Extraction in Laparoscopic Sleeve Gastrectomy.

    Science.gov (United States)

    Derici, Serhan; Atila, Koray; Bora, Seymen; Yener, Serkan

    2016-01-01

    Background. Laparoscopic sleeve gastrectomy (LSG) has become a popular operation during the recent years. This procedure requires resection of 80-90% of the stomach. Extraction of gastric specimen is known to be a challenging and costly stage of the operation. In this paper, we report results of a simple and cost-effective specimen extraction technique which was applied to 137 consecutive LSG patients. Methods. Between October 2013 and October 2015, 137 laparoscopic sleeve gastrectomy surgeries were performed at Dokuz Eylul University General Surgery Department, Upper Gastrointestinal Surgery Unit. All specimens were extracted through a 15 mm trocar site without using any special device. Results. We noticed one superficial incisional surgical site infection and treated this patient with oral antibiotics. No cases of trocar site hernia were observed. Conclusion. Different techniques have been described for specimen extraction. This simple technique allows extraction of specimen safely in a short time and does not require any special device.

  19. A simple model based magnet sorting algorithm for planar hybrid undulators

    International Nuclear Information System (INIS)

    Rakowsky, G.

    2010-01-01

    Various magnet sorting strategies have been used to optimize undulator performance, ranging from intuitive pairing of high- and low-strength magnets, to full 3D FEM simulation with 3-axis Helmholtz coil magnet data. In the extreme, swapping magnets in a full field model to minimize trajectory wander and rms phase error can be time consuming. This paper presents a simpler approach, extending the field error signature concept to obtain trajectory displacement, kick angle and phase error signatures for each component of magnetization error from a Radia model of a short hybrid-PM undulator. We demonstrate that steering errors and phase errors are essentially decoupled and scalable from measured X, Y and Z components of magnetization. Then, for any given sequence of magnets, rms trajectory and phase errors are obtained from simple cumulative sums of the scaled displacements and phase errors. The cost function (a weighted sum of these errors) is then minimized by swapping magnets, using one's favorite optimization algorithm. This approach was applied recently at NSLS to a short in-vacuum undulator, which required no subsequent trajectory or phase shimming. Trajectory and phase signatures are also obtained for some mechanical errors, to guide 'virtual shimming' and specifying mechanical tolerances. Some simple inhomogeneities are modeled to assess their error contributions.
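
    The swap-based minimization described above can be caricatured in a few lines: treat each magnet's kick signature as a scalar, take the rms of the cumulative sum along the sequence as the cost, and greedily swap pairs whenever the cost drops. Real use would employ the scaled displacement and phase-error signatures from the Radia model; the random kicks here are stand-ins.

```python
# Toy version of signature-based magnet sorting: the trajectory error is the
# cumulative sum of per-magnet kick signatures, the cost is its rms, and the
# sequence is improved by accepting pairwise swaps that lower the cost.
# Signatures are random stand-ins, not Radia output.
import random

def rms_cost(sequence):
    cum, total = 0.0, 0.0
    for kick in sequence:
        cum += kick                 # cumulative kick = trajectory displacement proxy
        total += cum * cum
    return (total / len(sequence)) ** 0.5

def sort_by_swapping(seq, passes=20):
    seq = list(seq)
    best = rms_cost(seq)
    for _ in range(passes):
        improved = False
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                seq[i], seq[j] = seq[j], seq[i]     # trial swap
                c = rms_cost(seq)
                if c < best:
                    best, improved = c, True        # keep the swap
                else:
                    seq[i], seq[j] = seq[j], seq[i] # revert
        if not improved:
            break
    return seq, best

random.seed(0)
kicks = [random.gauss(0.0, 1.0) for _ in range(16)]
sorted_seq, cost = sort_by_swapping(kicks)
```

    Because the cost is a cheap cumulative sum rather than a full field simulation, many thousands of trial swaps are affordable, which is the point of the signature approach.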

  20. Cost Model for Digital Preservation: Cost of Digital Migration

    Directory of Open Access Journals (Sweden)

    Ulla Bøgvad Kejser

    2011-03-01

    Full Text Available The Danish Ministry of Culture has funded a project to set up a model for costing preservation of digital materials held by national cultural heritage institutions. The overall objective of the project was to increase the cost effectiveness of digital preservation activities and to provide a basis for comparing and estimating future cost requirements for digital preservation. In this study we describe an activity-based costing methodology for digital preservation based on the Open Archival Information System (OAIS) Reference Model. Within this framework, which we denote the Cost Model for Digital Preservation (CMDP), the focus is on costing the functional entity Preservation Planning from the OAIS and digital migration activities. In order to estimate these costs we have identified cost-critical activities by analysing the functions in the OAIS model and the flows between them. The analysis has been supplemented with findings from the literature and our own knowledge and experience. The identified cost-critical activities have subsequently been deconstructed into measurable components, cost dependencies have been examined, and the resulting equations expressed in a spreadsheet. Currently the model can calculate the cost of different migration scenarios for a series of preservation formats for text, images, sound, video, geodata, and spreadsheets. In order to verify the model it has been tested on cost data from two different migration projects at the Danish National Archives (DNA). The study found that the OAIS model provides a sound overall framework for the cost breakdown, but that some functions need additional detailing in order to cost activities accurately. Running the two sets of empirical data showed, among other things, that the model underestimates the cost of manpower-intensive migration projects, while it reinstates an often underestimated cost: the cost of developing migration software. The model has proven useful for estimating the […]

  1. Lightning rod: a simple and low cost experiment for eletrostatics

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Laburú

    2008-09-01

    Full Text Available With the objective of contributing to meaningful scientific learning, this work suggests a simple and low-cost experiment to demonstrate the electrostatics studied in high school. The experimental proposal also aims to connect the content to everyday technological devices. To this end, and because of the practical interest it can arouse in students, we present the operation of an idealized lightning rod as an application of school electrostatics, showing that it has important everyday usefulness rather than being an abstraction detached from reality.

  2. User’s Guide for Naval Material Command’s Life Cycle Cost (FLEX) Model.

    Science.gov (United States)

    1982-04-01

    […] (WBS) of both simple and complex programs. The model can use a different cost estimating procedure for each element of the CBS (i.e., algorithm and […]) over the life cycle.

  3. Estimating the uncertainty of damage costs of pollution: A simple transparent method and typical results

    International Nuclear Information System (INIS)

    Spadaro, Joseph V.; Rabl, Ari

    2008-01-01

    Whereas the uncertainty of environmental impacts and damage costs is usually estimated by means of a Monte Carlo calculation, this paper shows that most (and in many cases all) of the uncertainty calculation involves products and/or sums of products and can be accomplished with an analytic solution which is simple and transparent. We present our own assessment of the component uncertainties and calculate the total uncertainty for the impacts and damage costs of the classical air pollutants; results for a Monte Carlo calculation for the dispersion part are also shown. The distribution of the damage costs is approximately lognormal and can be characterized in terms of the geometric mean μ_g and geometric standard deviation σ_g, implying that the confidence interval is multiplicative. We find that for the classical air pollutants σ_g is approximately 3 and the 68% confidence interval is [μ_g/σ_g, μ_g·σ_g]. Because the lognormal distribution is highly skewed for large σ_g, the median is significantly smaller than the mean. We also consider the case where several lognormally distributed damage costs are added, for example to obtain the total damage cost due to all the air pollutants emitted by a power plant, and we find that the relative error of the sum can be significantly smaller than the relative errors of the summands. Even though the distribution for such sums is not exactly lognormal, we present a simple lognormal approximation that is quite adequate for most applications.
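
    The multiplicative interval quoted above is easy to verify numerically. The sketch below takes σ_g = 3, forms the 68% interval [μ_g/σ_g, μ_g·σ_g], checks its coverage by sampling, and recovers the lognormal mean-to-median ratio exp(½·ln²σ_g). The parameter values are illustrative.

```python
# Numerical check of the multiplicative confidence interval for a lognormal
# damage-cost distribution with geometric mean mu_g and geometric standard
# deviation sigma_g. Values are illustrative, not from the paper's data.
import math
import random

mu_g = 1.0       # geometric mean (= median of the lognormal)
sigma_g = 3.0    # geometric standard deviation typical of classical air pollutants

low, high = mu_g / sigma_g, mu_g * sigma_g          # multiplicative 68% interval
median = mu_g
mean = mu_g * math.exp(0.5 * math.log(sigma_g) ** 2)  # skewness: mean > median

# Monte Carlo check that [mu_g/sigma_g, mu_g*sigma_g] covers ~68% of samples:
# ln(cost) is normal with mean ln(mu_g) and standard deviation ln(sigma_g).
random.seed(1)
n = 100_000
inside = sum(
    low <= math.exp(random.gauss(math.log(mu_g), math.log(sigma_g))) <= high
    for _ in range(n)
)
coverage = inside / n
```

    The interval is [1/3, 3] rather than an additive ±band, and the mean exceeds the median by a factor of about 1.83 at σ_g = 3, illustrating why the paper stresses the skewness.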

  4. Complex Coronary Hemodynamics - Simple Analog Modelling as an Educational Tool.

    Science.gov (United States)

    Parikh, Gaurav R; Peter, Elvis; Kakouros, Nikolaos

    2017-01-01

    Invasive coronary angiography remains the cornerstone for evaluation of coronary stenoses despite there being a poor correlation between luminal loss assessment by coronary luminography and myocardial ischemia. This is especially true for coronary lesions deemed moderate by visual assessment. Coronary pressure-derived fractional flow reserve (FFR) has emerged as the gold standard for the evaluation of hemodynamic significance of coronary artery stenosis, which is cost effective and leads to improved patient outcomes. There are, however, several limitations to the use of FFR including the evaluation of serial stenoses. In this article, we discuss the electronic-hydraulic analogy and the utility of simple electrical modelling to mimic the coronary circulation and coronary stenoses. We exemplify the effect of tandem coronary lesions on the FFR by modelling of a patient with sequential disease segments and complex anatomy. We believe that such computational modelling can serve as a powerful educational tool to help clinicians better understand the complexity of coronary hemodynamics and improve patient care.
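
    A hedged sketch of the electronic-hydraulic analogy the article describes: pressure plays the role of voltage and flow the role of current, so stenoses become resistors in series feeding a microvascular resistance, and FFR is the distal-to-aortic pressure ratio. Resistance values below are illustrative, not patient data, and a real stenosis resistance is flow-dependent rather than constant.

```python
# Electronic-hydraulic analogy sketch: serial stenoses as series resistors
# ahead of a microvascular resistance, with venous pressure taken as zero.
# FFR = distal pressure / aortic pressure. Resistances are illustrative.

def serial_ffr(p_aortic, stenosis_resistances, microvascular_resistance):
    total = sum(stenosis_resistances) + microvascular_resistance
    flow = p_aortic / total                        # Ohm's-law analogue: Q = P / R
    p_distal = p_aortic - flow * sum(stenosis_resistances)
    return p_distal / p_aortic

# Two moderate lesions that are individually mild can be significant together.
ffr_single = serial_ffr(100.0, [15.0], 100.0)
ffr_serial = serial_ffr(100.0, [15.0, 15.0], 100.0)
```

    With the commonly used FFR cutoff of 0.80, each modelled lesion alone appears non-significant, but in tandem the FFR drops below the cutoff, mirroring the serial-stenosis problem the authors highlight.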

  5. A Study of Simple Diffraction Models

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    In this paper two simple methods for modelling cabinet edge diffraction are examined. Calculations with both models are compared with more sophisticated theoretical models and with measured data. The parameters involved are studied and their importance for normal loudspeaker box designs is examined.

  6. A Simple Probabilistic Combat Model

    Science.gov (United States)

    2016-06-13

    The Lanchester combat model is a simple way to assess the effects of quantity and quality […]
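
    The deterministic model referred to above is, in its aimed-fire ("square law") form, a pair of coupled ODEs: dR/dt = -b·B, dB/dt = -r·R. The Euler-step sketch below uses illustrative strengths and effectiveness coefficients; the square-law invariant r·R² - b·B² predicts the survivor count.

```python
# Minimal deterministic Lanchester aimed-fire ("square law") model:
#   dR/dt = -b_eff * B,   dB/dt = -r_eff * R
# integrated with small Euler steps. All numbers are illustrative.

def lanchester(r0, b0, r_eff, b_eff, dt=0.001, t_max=100.0):
    R, B, t = float(r0), float(b0), 0.0
    while R > 0.0 and B > 0.0 and t < t_max:
        # simultaneous update of both force levels
        R, B = R - b_eff * B * dt, B - r_eff * R * dt
        t += dt
    return max(R, 0.0), max(B, 0.0)

# With equal effectiveness, the square-law invariant r*R^2 - b*B^2 predicts
# sqrt(1000**2 - 800**2) = 600 red survivors when blue is annihilated.
red, blue = lanchester(r0=1000, b0=800, r_eff=0.05, b_eff=0.05)
```

    The quadratic dependence on initial strength is the "quality versus quantity" trade-off the abstract alludes to: a 25% numerical advantage leaves 60% of the larger force standing.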

  7. Simple models of district heating systems for load and demand side management and operational optimisation; Simple modeller for fjernvarmesystemer med henblik pae belastningsudjaevning og driftsoptimering

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, B. [Technical Univ. of Denmark, Dept. of Mechanical Engineering, Kgs. Lyngby (Denmark); Larsen, H.V. [Risoe National Lab., System Analysis Dept., Roskilde (DK)

    2004-12-01

    The purpose of this research project has been to further develop and test simple (aggregated) models of district heating (DH) systems for simulation and operational optimization, and to investigate the influence of Load Management and Demand Side Management (DSM) on the total operational costs. The work is based on physical-mathematical modelling and simulation of DH systems, and is a continuation of previous EFP-96 work. In the present EFP-2001 project the goals have been to improve the Danish method of aggregation by addressing the problem of aggregation of pressure losses, and to test the methods on a much larger data set than in the EFP-1996 project. In order to verify the models it is crucial to have good data at one's disposal. Full information on the heat loads and temperatures is needed, not only at the DH plant but also at every consumer (building), and therefore only a few DH systems in Denmark can supply such data. (BA)

  8. Improved Analysis of Earth System Models and Observations using Simple Climate Models

    Science.gov (United States)

    Nadiga, B. T.; Urban, N. M.

    2016-12-01

    Earth system models (ESMs) are the most comprehensive tools we have to study climate change and develop climate projections. However, the computational infrastructure required and the cost incurred in running such ESMs preclude their direct use in conjunction with a wide variety of tools that can further our understanding of climate. Here we are referring to tools that range from dynamical systems tools that give insight into underlying flow structure and topology, to tools from applied mathematics and statistics that are central to quantifying stability, sensitivity, uncertainty and predictability, to machine learning tools that are now being rapidly developed or improved. Our approach to facilitating the use of such models is to analyze output of ESM experiments (cf. CMIP) using a range of simpler models that consider integral balances of important quantities such as mass and/or energy in a Bayesian framework. We highlight the use of this approach in the context of the uptake of heat by the world oceans in the ongoing global warming. Indeed, since in excess of 90% of the anomalous radiative forcing due to greenhouse gas emissions is sequestered in the world oceans, the nature of ocean heat uptake crucially determines the surface warming that is realized (cf. climate sensitivity). Nevertheless, ESMs themselves are never run long enough to directly assess climate sensitivity. So, we consider a range of models based on integral balances--balances that have to be realized in all first-principles based models of the climate system, including the most detailed state-of-the-art climate simulations. The models range from simple models of energy balance to those that consider dynamically important ocean processes such as the conveyor-belt circulation (Meridional Overturning Circulation, MOC), North Atlantic Deep Water (NADW) formation, the Antarctic Circumpolar Current (ACC) and eddy mixing. Results from Bayesian analysis of such models using […]

  9. HTGR Cost Model Users' Manual

    International Nuclear Information System (INIS)

    Gandrik, A.M.

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Cost Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Cost Model calculates an estimate of the capital costs, annual operating and maintenance costs, and decommissioning costs for a high-temperature gas-cooled reactor. The user can generate these costs for multiple reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for a single or four-pack configuration; and for a reactor size of 350 or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Cost Model. Instructions, screenshots, and examples are provided to guide the user through the HTGR Cost Model. This model was designed for users who are familiar with the HTGR design and Excel. Modification of the HTGR Cost Model should only be performed by users familiar with Excel and Visual Basic.

  10. A Nuclear Waste Management Cost Model for Policy Analysis

    Science.gov (United States)

    Barron, R. W.; Hill, M. C.

    2017-12-01

    Although integrated assessments of climate change policy have frequently identified nuclear energy as a promising alternative to fossil fuels, these studies have often treated nuclear waste disposal very simply. Simple assumptions about nuclear waste are problematic because they may not be adequate to capture relevant costs and uncertainties, which could result in suboptimal policy choices. Modeling nuclear waste management costs is a cross-disciplinary, multi-scale problem that involves economic, geologic and environmental processes that operate at vastly different temporal scales. Similarly, the climate-related costs and benefits of nuclear energy are dependent on environmental sensitivity to CO2 emissions and radiation, nuclear energy's ability to offset carbon emissions, and the risk of nuclear accidents, factors which are all deeply uncertain. Alternative value systems further complicate the problem by suggesting different approaches to valuing intergenerational impacts. Effective policy assessment of nuclear energy requires an integrated approach to modeling nuclear waste management that (1) bridges disciplinary and temporal gaps, (2) supports an iterative, adaptive process that responds to evolving understandings of uncertainties, and (3) supports a broad range of value systems. This work develops the Nuclear Waste Management Cost Model (NWMCM). NWMCM provides a flexible framework for evaluating the cost of nuclear waste management across a range of technology pathways and value systems. We illustrate how NWMCM can support policy analysis by estimating how different nuclear waste disposal scenarios developed using the NWMCM framework affect the results of a recent integrated assessment study of alternative energy futures and their effects on the cost of achieving carbon abatement targets. 
Results suggest that the optimism reflected in previous works is fragile: Plausible nuclear waste management costs and discount rates appropriate for intergenerational cost

  11. Y-Scaling in a simple quark model

    International Nuclear Information System (INIS)

    Kumano, S.; Moniz, E.J.

    1988-01-01

    A simple quark model is used to define a nuclear pair model, that is, two composite hadrons interacting only through quark interchange and bound in an overall potential. An 'equivalent' hadron model is developed, displaying an effective hadron-hadron interaction which is strongly repulsive. We compare the effective hadron model results with the exact quark model observables in the kinematic region of large momentum transfer and small energy transfer. The nucleon response function in this y-scaling region is, within the traditional framework, sensitive to the nucleon momentum distribution at large momentum. We find a surprisingly small effect of hadron substructure. Furthermore, we find in our model that a simple parametrization of modified hadron size in the bound state, motivated by the bound quark momentum distribution, is not a useful way to correlate different observables.

  12. A Simple Model of Self-Assessments

    NARCIS (Netherlands)

    S. Dominguez Martinez (Silvia); O.H. Swank (Otto)

    2006-01-01

    textabstractWe develop a simple model that describes individuals' self-assessments of their abilities. We assume that individuals learn about their abilities from appraisals of others and experience. Our model predicts that if communication is imperfect, then (i) appraisals of others tend to be too

  13. Cost Model for Digital Preservation: Cost of Digital Migration

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad; Nielsen, Anders Bo; Thirifays, Alex

    2011-01-01

    The Danish Ministry of Culture has funded a project to set up a model for costing preservation of digital materials held by national cultural heritage institutions. The overall objective of the project was to increase cost effectiveness of digital preservation activities and to provide a basis for comparing and estimating future cost requirements for digital preservation. In this study we describe an activity-based costing methodology for digital preservation based on the Open Archival Information System (OAIS) Reference Model. Within this framework, which we denote the Cost Model for Digital Preservation (CMDP), the focus is on costing the functional entity Preservation Planning from the OAIS and digital migration activities. In order to estimate these costs we have identified cost-critical activities by analysing the functions in the OAIS model and the flows between them. The analysis has been […]

  14. The Monash University Interactive Simple Climate Model

    Science.gov (United States)

    Dommenget, D.

    2013-12-01

    The Monash University Interactive Simple Climate Model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the international peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of the CO2 concentration, very realistically (similar to state-of-the-art climate models). The web interface allows you to study the results of more than 2000 different model experiments in an interactive way, work through a number of tutorials on the interactions of physical processes in the climate system, and solve some puzzles. By switching physical processes off and on you can deconstruct the climate and learn how the different processes interact to generate the observed climate, and how they interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities for teaching students with it are.
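
    The energy-balance core of such models can be illustrated with a zero-dimensional sketch: absorbed solar radiation, S0(1-α)/4, balances outgoing longwave radiation εσT⁴, with an effective emissivity ε < 1 standing in for the greenhouse effect. The constants are standard textbook values and ε is tuned here purely for illustration; GREB's actual formulation is far more detailed.

```python
# Zero-dimensional energy-balance sketch in the spirit of simple climate
# models: absorbed solar = emitted longwave at equilibrium. Textbook constants;
# the effective emissivity is an illustrative tuning, not a GREB parameter.
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0         # solar constant, W m^-2
ALBEDO = 0.3        # planetary albedo

def equilibrium_temperature(emissivity):
    # Balance: S0 * (1 - albedo) / 4 = emissivity * sigma * T^4
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

T_bare = equilibrium_temperature(1.0)    # no greenhouse: roughly 255 K
T_grey = equilibrium_temperature(0.61)   # emissivity tuned to give roughly 288 K
greenhouse_warming = T_grey - T_bare
```

    Even this one-equation model reproduces the roughly 33 K greenhouse warming that more elaborate simple models like GREB resolve process by process.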

  15. A Departmental Cost-Effectiveness Model.

    Science.gov (United States)

    Holleman, Thomas, Jr.

    In establishing a departmental cost-effectiveness model, the traditional cost-effectiveness model was discussed and equipped with a distant and deflation equation for both benefits and costs. Next, the economics of costing was examined and program costing procedures developed. Then, the model construct was described as it was structured around the…

  16. A simple model of self-assessment

    NARCIS (Netherlands)

    Dominguez-Martinez, S.; Swank, O.H.

    2009-01-01

    We develop a simple model that describes individuals' self-assessments of their abilities. We assume that individuals learn about their abilities from appraisals of others and experience. Our model predicts that if communication is imperfect, then (i) appraisals of others tend to be too positive and

  17. A modeling paradigm for interdisciplinary water resources modeling: Simple Script Wrappers (SSW)

    Science.gov (United States)

    Steward, David R.; Bulatewicz, Tom; Aistrup, Joseph A.; Andresen, Daniel; Bernard, Eric A.; Kulcsar, Laszlo; Peterson, Jeffrey M.; Staggenborg, Scott A.; Welch, Stephen M.

    2014-05-01

    Holistic understanding of a water resources system requires tools capable of model integration. This team has developed an adaptation of the OpenMI (Open Modelling Interface) that allows easy interactions across the data passed between models. Capabilities have been developed to allow programs written in common languages such as matlab, python and scilab to share their data with other programs and accept other program's data. We call this interface the Simple Script Wrapper (SSW). An implementation of SSW is shown that integrates groundwater, economic, and agricultural models in the High Plains region of Kansas. Output from these models illustrates the interdisciplinary discovery facilitated through use of SSW implemented models. Reference: Bulatewicz, T., A. Allen, J.M. Peterson, S. Staggenborg, S.M. Welch, and D.R. Steward, The Simple Script Wrapper for OpenMI: Enabling interdisciplinary modeling studies, Environmental Modelling & Software, 39, 283-294, 2013. http://dx.doi.org/10.1016/j.envsoft.2012.07.006 http://code.google.com/p/simple-script-wrapper/

  18. A Simple Model of Global Aerosol Indirect Effects

    Science.gov (United States)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance come from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to the preindustrial cloud condensation nuclei concentration, preindustrial accumulation-mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of the present-day AIE as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.

  19. Designer's unified cost model

    Science.gov (United States)

    Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.

    1992-01-01

    A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a data base and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  20. Complex versus simple models: ion-channel cardiac toxicity prediction.

    Science.gov (United States)

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the most recent one. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
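The leave-one-out cross validation used to compare the models is a standard procedure that can be sketched in a few lines. Everything below is an illustrative stand-in: the scores, labels, and the one-feature threshold classifier are hypothetical, not the Bnet model or the CredibleMeds data-sets.

```python
# Minimal leave-one-out cross-validation loop: hold out each sample in
# turn, fit on the remainder, and score the held-out prediction.

def loo_accuracy(samples, labels, fit, predict):
    correct = 0
    for i in range(len(samples)):
        train_x = samples[:i] + samples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        model = fit(train_x, train_y)
        if predict(model, samples[i]) == labels[i]:
            correct += 1
    return correct / len(samples)

# Toy stand-in classifier: threshold a single score midway between the
# training-set class means (hypothetical, not Bnet).
def fit_threshold(xs, ys):
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict_threshold(threshold, x):
    return 1 if x >= threshold else 0

scores = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]   # hypothetical risk scores
risk   = [0,   0,   0,   1,   1,   1]     # hypothetical risk categories
print(loo_accuracy(scores, risk, fit_threshold, predict_threshold))
```

The same loop works unchanged for any `fit`/`predict` pair, which is what makes it a fair way to benchmark a complex biophysical model against a simple linear one.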

  1. Complex versus simple models: ion-channel cardiac toxicity prediction

    Directory of Open Access Journals (Sweden)

    Hitesh B. Mistry

    2018-02-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the most recent one. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  2. Combinatorial structures to modeling simple games and applications

    Science.gov (United States)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.
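As an illustrative sketch (not the authors' construction), a simple game can be represented over an influence graph by letting a coalition win when deterministic threshold spreading from its members activates at least a quorum of nodes. The graph, thresholds, and quorum below are all hypothetical.

```python
# Deterministic linear-threshold spread: a node activates once the number
# of its already-active neighbours reaches its threshold.

def spread(graph, thresholds, seed):
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for node, neighbours in graph.items():
            if node not in active:
                if sum(1 for n in neighbours if n in active) >= thresholds[node]:
                    active.add(node)
                    changed = True
    return active

def wins(graph, thresholds, quorum, coalition):
    """A coalition wins if its influence reaches at least `quorum` nodes."""
    return len(spread(graph, thresholds, coalition)) >= quorum

# Hypothetical 4-player influence game.
graph = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
thresholds = {1: 1, 2: 1, 3: 2, 4: 1}
print(wins(graph, thresholds, quorum=3, coalition={1, 2}))  # {1,2} activates 3, then 4
print(wins(graph, thresholds, quorum=3, coalition={4}))     # {4} spreads to no one
```

Enumerating winning coalitions of such a structure recovers the underlying simple game, which is the sense in which influence graphs serve as a combinatorial representation.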

  3. Simple model of the arms race

    International Nuclear Information System (INIS)

    Zane, L.I.

    1982-01-01

    A simple model of a two-party arms race is developed based on the principle that the race will continue as long as either side can unleash an effective first strike against the other side. The model is used to examine how secrecy, the ABM, MIRV-ing, and an MX system affect the arms race.

  4. Time-driven activity-based costing: A dynamic value assessment model in pediatric appendicitis.

    Science.gov (United States)

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2017-06-01

    Healthcare reform policies are emphasizing value-based healthcare delivery. We hypothesize that time-driven activity-based costing (TDABC) can be used to appraise healthcare interventions in pediatric appendicitis. Triage-based standing delegation orders, surgical advanced practice providers, and a same-day discharge protocol were implemented to target deficiencies identified in our initial TDABC model. Post-intervention process maps for a hospital episode were created using electronic time stamp data for simple appendicitis cases during February to March 2016. Total personnel and consumable costs were determined using TDABC methodology. The post-intervention TDABC model featured 6 phases of care, 33 processes, and 19 personnel types. Our interventions reduced duration and costs in the emergency department (-41min, -$23) and pre-operative floor (-57min, -$18). While post-anesthesia care unit duration and costs increased (+224min, +$41), the same-day discharge protocol eliminated post-operative floor costs (-$306). Our model incorporating all three interventions reduced total direct costs by 11% ($2753.39 to $2447.68) and duration of hospitalization by 51% (1984min to 966min). Time-driven activity-based costing can dynamically model changes in our healthcare delivery as a result of process improvement interventions. It is an effective tool to continuously assess the impact of these interventions on the value of appendicitis care. II, Type of study: Economic Analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Designers' unified cost model

    Science.gov (United States)

    Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.

    1992-01-01

    The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  6. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.
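The structure described, a size-driven base estimate adjusted by multiplicative implementation factors, can be sketched as follows. The functional form and every constant here are hypothetical placeholders in the COCOMO tradition, not SOFTCOST's actual algorithms or factor set.

```python
# COCOMO-style sketch: base effort a * KLOC^b, scaled by the product of
# effort multipliers for implementation factors (hypothetical values).

def effort_person_months(kloc, factors, a=2.8, b=1.05):
    adjustment = 1.0
    for value in factors.values():
        adjustment *= value
    return a * kloc ** b * adjustment

factors = {                      # a sample of the ~50 implementation factors
    "inherited_baseline": 0.9,   # some code inherited from a prior system
    "task_difficulty": 1.2,      # harder-than-average algorithms
    "environment": 1.0,          # nominal organizational/system environment
}
print(round(effort_person_months(32.0, factors), 1))
```

With factors below 1.0 reducing effort and factors above 1.0 inflating it, fifty such multipliers can swing the base estimate substantially, which is why calibrating them to local data matters.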

  7. Parametric cost models for space telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtnay

    2017-11-01

    Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescopes cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
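The CER findings above can be illustrated with a toy single-variable model. Only the 50%-per-17-years halving is taken from the abstract; the coefficient and the diameter exponent are hypothetical placeholders, not the published CERs.

```python
# Toy cost estimating relationship (CER): cost scales as a power of
# aperture diameter and halves with every 17 years of technology
# development. Coefficient a and exponent b are hypothetical.

def telescope_cost(diameter_m, years_of_tech, a=100.0, b=1.3):
    return a * diameter_m ** b * 0.5 ** (years_of_tech / 17.0)

# With b < 2, doubling the aperture raises cost by 2^b, less than the 4x
# growth in collecting area, so cost per square metre falls with size.
small = telescope_cost(1.0, 0.0)
large = telescope_cost(2.0, 0.0)
print(large / small)                        # 2^1.3, i.e. ~2.46
print(telescope_cost(2.0, 17.0) / large)    # halved after 17 years
```

This is the qualitative behaviour the abstract reports: aperture dominates cost, large apertures are cheaper per unit area, and technology development discounts cost over time.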

  8. Parametric Cost Models for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney

    2010-01-01

    Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescopes cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.

  9. Cost Model for Digital Curation: Cost of Digital Migration

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad; Nielsen, Anders Bo; Thirifays, Alex

    2009-01-01

    The Danish Ministry of Culture is currently funding a project to set up a model for costing preservation of digital materials held by national cultural heritage institutions. The overall objective of the project is to provide a basis for comparing and estimating future financial requirements...... for digital preservation and to increase cost effectiveness of digital preservation activities. In this study we describe an activity based costing methodology for digital preservation based on the OAIS Reference Model. In order to estimate the cost of digital migrations we have identified cost critical...

  10. A genetic algorithm solution for a nuclear power plant risk-cost maintenance model

    International Nuclear Information System (INIS)

    Tong Jiejuan; Mao Dingyuan; Xue Dazhi

    2004-01-01

    Reliability Centered Maintenance (RCM) is one of the popular maintenance optimization methods according to certain kinds of priorities. Traditional RCM usually analyzes and optimizes the maintenance strategy from the viewpoint of individual components rather than the impact of the whole maintenance program. The research presented in this paper is a pilot study using PSA techniques in RCM. First, how maintenance activities such as surveillance testing and preventive maintenance affect component unavailability in the PSA model is discussed. Based on this discussion, a maintenance risk-cost model is established for global maintenance optimization in a nuclear power plant, and a genetic algorithm (GA) is applied to solve the model and obtain the globally optimized maintenance strategy. Finally, results from a simple test case based on a risk-cost model consisting of 10 components are presented
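A minimal GA of the kind described can be sketched as follows. The combined risk-cost objective for the 10 components and all GA settings are hypothetical stand-ins for a plant's PSA-based model, chosen only to show the search structure.

```python
import random

# Toy GA searching maintenance intervals for 10 components to minimize a
# combined maintenance-cost + unavailability-risk objective (hypothetical).

random.seed(0)
N = 10  # components

def objective(intervals):
    # Longer intervals cut maintenance cost but raise unavailability risk.
    cost = sum(100.0 / t for t in intervals)   # maintenance cost term
    risk = sum(0.5 * t for t in intervals)     # unavailability (risk) term
    return cost + risk

def mutate(ind):
    return [max(1.0, t + random.uniform(-2, 2)) for t in ind]

def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

pop = [[random.uniform(1, 50) for _ in range(N)] for _ in range(20)]
for _ in range(100):
    pop.sort(key=objective)
    parents = pop[:10]                         # elitist selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = min(pop, key=objective)
print(round(objective(best), 1))
```

For this toy objective the per-component optimum is known analytically (t = sqrt(200), objective 10 * 2 * sqrt(50) ≈ 141.4), which gives a useful sanity check on the GA's convergence.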

  11. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    Science.gov (United States)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and because they include far more effort multipliers than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.

  12. Cost Modeling for Space Telescope

    Science.gov (United States)

    Stahl, H. Philip

    2011-01-01

    Parametric cost models are an important tool for planning missions, compare concepts and justify technology investments. This paper presents on-going efforts to develop single variable and multi-variable cost models for space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive CERs for OTA cost versus aperture diameter and mass. The results are compared with previously published models.

  13. Maintenance cost models in deregulated power systems under opportunity costs

    International Nuclear Information System (INIS)

    Al-Arfaj, K.; Dahal, K.; Azaiez, M.N.

    2007-01-01

    In a centralized power system, the operator is responsible for scheduling maintenance. There are different types of maintenance, including corrective maintenance; predictive maintenance; preventive maintenance; and reliability-centred maintenance. The main cause of power failures is poor maintenance. As such, maintenance costs play a significant role in deregulated power systems. They include direct costs associated with material and labor costs as well as indirect costs associated with spare parts inventory, shipment, test equipment, indirect labor, opportunity costs and cost of failure. In maintenance scheduling and planning, the cost function is the only component of the objective function. This paper presented the results of a study in which different components of maintenance costs were modeled. The maintenance models were formulated as an optimization problem with single and multiple objectives and a set of constraints. The maintenance costs models could be used to schedule the maintenance activities of power generators more accurately and to identify the best maintenance strategies over a period of time as they consider failure and opportunity costs in a deregulated environment. 32 refs., 4 tabs., 4 figs

  14. Low Cost, Simple, Intrauterine Insemination Procedure

    African Journals Online (AJOL)

    AJRH Managing Editor

    quite simple intrauterine insemination technique which may be performed in developing countries, without the need of sophisticated ... Cytoplasmic Sperm Injection (ICSI), are quite ... were administered only once by intramuscular injection ...

  15. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  16. Time-driven activity-based costing to identify opportunities for cost reduction in pediatric appendectomy.

    Science.gov (United States)

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2016-12-01

    As reimbursement programs shift to value-based payment models emphasizing quality and efficient healthcare delivery, there exists a need to better understand process management to unearth true costs of patient care. We sought to identify cost-reduction opportunities in simple appendicitis management by applying a time-driven activity-based costing (TDABC) methodology to this high-volume surgical condition. Process maps were created using medical record time stamps. Labor capacity cost rates were calculated using national median physician salaries, weighted nurse-patient ratios, and hospital cost data. Consumable costs for supplies, pharmacy, laboratory, and food were derived from the hospital general ledger. Time-driven activity-based costing resulted in precise per-minute calculation of personnel costs. Highest costs were in the operating room ($747.07), hospital floor ($388.20), and emergency department ($296.21). Major contributors to length of stay were emergency department evaluation (270min), operating room availability (395min), and post-operative monitoring (1128min). The TDABC model led to $1712.16 in personnel costs and $1041.23 in consumable costs for a total appendicitis cost of $2753.39. Inefficiencies in healthcare delivery can be identified through TDABC. Triage-based standing delegation orders, advanced practice providers, and same day discharge protocols are proposed cost-reducing interventions to optimize value-based care for simple appendicitis. II. Copyright © 2016 Elsevier Inc. All rights reserved.
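The TDABC roll-up described above, per-minute capacity cost rates multiplied by measured phase durations, plus consumables, can be sketched as follows. The phase durations and per-minute rates are hypothetical; only the consumable total and the closing consistency check use figures reported in the abstract.

```python
# Time-driven activity-based costing (TDABC) roll-up: personnel cost per
# phase is (capacity cost rate per minute) x (minutes), plus consumables.

def tdabc_cost(phases, consumables):
    """phases: list of (name, minutes, cost_rate_per_minute) tuples."""
    personnel = sum(minutes * rate for _, minutes, rate in phases)
    return personnel + consumables

phases = [                           # hypothetical timings and rates
    ("emergency department", 270,  1.10),
    ("operating room",        60, 12.45),
    ("post-op monitoring",  1128,  0.30),
]
consumables = 1041.23                # reported consumable total
print(round(tdabc_cost(phases, consumables), 2))

# The reported totals are internally consistent:
assert round(1712.16 + 1041.23, 2) == 2753.39
```

Because each phase carries its own rate and time stamp, re-running the roll-up after a process change (e.g. a same-day discharge protocol that zeroes a phase's minutes) immediately shows the cost impact, which is the "dynamic value assessment" use described in the companion record above.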

  17. A simple spatiotemporal chaotic Lotka-Volterra model

    International Nuclear Information System (INIS)

    Sprott, J.C.; Wildenberg, J.C.; Azizi, Yousef

    2005-01-01

    A mathematically simple example of a high-dimensional (many-species) Lotka-Volterra model that exhibits spatiotemporal chaos in one spatial dimension is described. The model consists of a closed ring of identical agents, each competing for fixed finite resources with two of its four nearest neighbors. The model is prototypical of more complicated models in its quasiperiodic route to chaos (including attracting 3-tori), bifurcations, spontaneous symmetry breaking, and spatial pattern formation

  18. Implementing a trustworthy cost-accounting model.

    Science.gov (United States)

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  19. Towards a Simple Constitutive Model for Bread Dough

    Science.gov (United States)

    Tanner, Roger I.

    2008-07-01

    Wheat flour dough is an example of a soft solid material consisting of a gluten (rubbery) network with starch particles as a filler. The volume fraction of the starch filler is high-typically 60%. A computer-friendly constitutive model has been lacking for this type of material and here we report on progress towards finding such a model. The model must describe the response to small strains, simple shearing starting from rest, simple elongation, biaxial straining, recoil and various other transient flows. A viscoelastic Lodge-type model involving a damage function, which depends on the strain from an initial reference state, fits the given data well, and it is also able to predict the thickness at exit from dough sheeting, which has been a long-standing unsolved puzzle. The model also shows an apparent rate-dependent yield stress, although no explicit yield stress is built into the model. This behaviour agrees with the early (1934) observations of Schofield and Scott Blair on dough recoil after unloading.

  20. A simple model for skewed species-lifetime distributions

    KAUST Repository

    Murase, Yohsuke; Shimada, Takashi; Ito, Nobuyasu

    2010-01-01

    A simple model of a biological community assembly is studied. Communities are assembled by successive migrations and extinctions of species. In the model, species are interacting with each other. The intensity of the interaction between each pair

  1. Attrition Cost Model Instruction Manual

    Science.gov (United States)

    Yanagiura, Takeshi

    2012-01-01

    This instruction manual explains in detail how to use the Attrition Cost Model program, which estimates the cost of student attrition for a state's higher education system. Programmed with SAS, this model allows users to instantly calculate the cost of attrition and the cumulative attrition rate that is based on the most recent retention and…

  2. APT cost scaling: Preliminary indications from a Parametric Costing Model (PCM)

    International Nuclear Information System (INIS)

    Krakowski, R.A.

    1995-01-01

    A Parametric Costing Model has been created and evaluated as a first step in quantitatively understanding important design options for the Accelerator Production of Tritium (APT) concept. This model couples key economic and technical elements of APT in a two-parameter search of beam energy and beam power that minimizes costs within a range of operating constraints. The costing and engineering depth of the Parametric Costing Model is minimal at the present "entry level", and is intended only to demonstrate a potential for a more-detailed, cost-based integrating design tool. After describing the present basis of the Parametric Costing Model and giving an example of a single parametric scaling run derived therefrom, the impacts of choices related to resistive versus superconducting accelerator structures and cost of electricity versus plant availability ("load curve") are reported. Areas of further development and application are suggested

  3. SimpleBox 4.0: Improving the model while keeping it simple….

    Science.gov (United States)

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

    Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions were published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, adjustment of the partitioning behavior for organic acids and bases as well as of the value for enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The undesirably high model complexity caused by vegetation compartments and a local scale was removed to improve the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
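The steady-state mass balance that a multimedia fate model of this kind solves can be sketched with just two compartments (SimpleBox itself uses many more compartments, at several spatial scales). All rate constants and the emission rate below are hypothetical.

```python
# Two-box steady-state mass balance: air and water compartments with
# first-order degradation and intermedia transfer (hypothetical rates).
#
#   0 = E_air - (k_deg_air + k_aw) * m_a + k_wa * m_w
#   0 =          k_aw * m_a - (k_deg_w + k_wa) * m_w

def steady_state_two_box(E_air, k_deg_air, k_deg_w, k_aw, k_wa):
    a11 = k_deg_air + k_aw          # total loss rate from air
    a22 = k_deg_w + k_wa            # total loss rate from water
    # Water balance gives m_w = k_aw * m_a / a22; substitute into air balance.
    m_a = E_air / (a11 - k_wa * k_aw / a22)
    m_w = k_aw * m_a / a22
    return m_a, m_w

m_a, m_w = steady_state_two_box(E_air=1000.0,   # kg/day emitted to air
                                k_deg_air=0.05, k_deg_w=0.01,
                                k_aw=0.02, k_wa=0.001)
print(round(m_a, 1), round(m_w, 1))
```

Scaling this up to SimpleBox's full compartment set just means solving a larger linear system of the same form, which is why steady-state concentrations respond so directly to structural changes like adding layered ocean compartments.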

  4. Preliminary Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Prince, F. Andrew; Smart, Christian; Stephens, Kyle; Henrichs, Todd

    2009-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. However, great care is required. Some space telescope cost models, such as those based only on mass, lack sufficient detail to support such analysis and may lead to inaccurate conclusions. Similarly, using ground based telescope models which include the dome cost will also lead to inaccurate conclusions. This paper reviews current and historical models. Then, based on data from 22 different NASA space telescopes, this paper tests those models and presents preliminary analysis of single and multi-variable space telescope cost models.

  5. Cost Concept Model and Gateway Specification

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad

    2014-01-01

    This document introduces a Framework supporting the implementation of a cost concept model against which current and future cost models for curating digital assets can be benchmarked. The value built into this cost concept model leverages the comprehensive engagement by the 4C project with various...... to promote interoperability; • A Nested Model for Digital Curation—that visualises the core concepts, demonstrates how they interact and places them into context visually by linking them to A Cost and Benefit Model for Curation; This Framework provides guidance for data collection and associated calculations...

  6. Proton facility economics: the importance of "simple" treatments.

    Science.gov (United States)

    Johnstone, Peter A S; Kerstiens, John; Helsper, Richard

    2012-08-01

    Given the cost and debt incurred to build a modern proton facility, impetus exists to minimize treatment of patients with complex setups because of their slower throughput. The aim of this study was to determine how many "simple" cases are necessary given different patient loads simply to recoup construction costs and debt service, without beginning to cover salaries, utilities, beam costs, and so on. Simple cases are ones that can be performed quickly because of an easy setup for the patient or because the patient is to receive treatment to just one or two fields. A "standard" construction cost and debt for 1, 3, and 4 gantry facilities were calculated from public documents of facilities built in the United States, with 100% of the construction funded through standard 15-year financing at 5% interest. Clinical best case (that each room was completely scheduled with patients over a 14-hour workday) was assumed, and a statistical analysis was modeled with debt, case mix, and payer mix moving independently. Treatment times and reimbursement data from the investigators' facility for varying complexities of patients were extrapolated for varying numbers treated daily. Revenue assumptions of $X per treatment were assumed both for pediatric cases (a mix of Medicaid and private payer) and state Medicare simple case rates. Private payer reimbursement averages $1.75X per treatment. The number of simple patients required daily to cover construction and debt service costs was then derived. A single gantry treating only complex or pediatric patients would need to apply 85% of its treatment slots simply to service debt. However, that same room could cover its debt treating 4 hours of simple patients, thus opening more slots for complex and pediatric patients. A 3-gantry facility treating only complex and pediatric cases would not have enough treatment slots to recoup construction and debt service costs at all. For a 4-gantry center, focusing on complex and pediatric cases alone

  7. Cost Model Comparison: A Study of Internally and Commercially Developed Cost Models in Use by NASA

    Science.gov (United States)

    Gupta, Garima

    2011-01-01

    NASA makes use of numerous cost models to accurately estimate the cost of various components of a mission - hardware, software, mission/ground operations - during the different stages of a mission's lifecycle. The purpose of this project was to survey these models and determine in which respects they are similar and in which they are different. The initial survey included a study of the cost drivers for each model, the form of each model (linear/exponential/other CER, range/point output, capable of risk/sensitivity analysis), and for what types of missions and for what phases of a mission lifecycle each model is capable of estimating cost. The models taken into consideration consisted of both those that were developed by NASA and those that were commercially developed: GSECT, NAFCOM, SCAT, QuickCost, PRICE, and SEER. Once the initial survey was completed, the next step in the project was to compare the cost models' capabilities in terms of Work Breakdown Structure (WBS) elements. This final comparison was then portrayed in a visual manner with Venn diagrams. All of the materials produced in the process of this study were then posted on the Ground Segment Team (GST) Wiki.

  8. Video distribution system cost model

    Science.gov (United States)

    Gershkoff, I.; Haspert, J. K.; Morgenstern, B.

    1980-01-01

    A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.

  9. Cost Models for MMC Manufacturing Processes

    Science.gov (United States)

    Elzey, Dana M.; Wadley, Haydn N. G.

    1996-01-01

    Processes for the manufacture of advanced metal matrix composites are rapidly approaching maturity in the research laboratory and there is growing interest in their transition to industrial production. However, research conducted to date has almost exclusively focused on overcoming the technical barriers to producing high-quality material, and little attention has been given to the economic feasibility of these laboratory approaches and process cost issues. A quantitative cost modeling (QCM) approach was developed to address these issues. QCMs are cost analysis tools based on predictive process models relating process conditions to the attributes of the final product. An important attribute of the QCM approach is the ability to predict the sensitivity of material production costs to product quality and to quantitatively explore trade-offs between cost and quality. Applications of the cost models allow more efficient direction of future MMC process technology development and a more accurate assessment of MMC market potential. Cost models were developed for two state-of-the-art metal matrix composite (MMC) manufacturing processes: tape casting and plasma spray deposition. Quality and cost models are presented for both processes, and the resulting predicted quality-cost curves are presented and discussed.

  10. Selected Tether Applications Cost Model

    Science.gov (United States)

    Keeley, Michael G.

    1988-01-01

    Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.

  11. A simple model for determining photoelectron-generated radiation scaling laws

    International Nuclear Information System (INIS)

    Dipp, T.M.

    1993-12-01

    The generation of radiation via photoelectrons induced off a conducting surface was explored using a simple model to determine fundamental scaling laws. The model is one-dimensional (small-spot) and uses monoenergetic, nonrelativistic photoelectrons emitted normal to the illuminated conducting surface. Simple steady-state radiation, frequency, and maximum orbital distance equations were derived using small-spot radiation equations, a sin²-type modulation function, and simple photoelectron dynamics. The result is a system of equations for various scaling laws, which, along with model and user constraints, are simultaneously solved using techniques similar to linear programming. Typical conductors illuminated by low-power sources producing photons with energies less than 5.0 eV are readily modeled by this small-spot, steady-state analysis, which shows that they generally produce low-efficiency (η < 10^-10.5) pure photoelectron-induced radiation. However, the small-spot theory predicts that the total conversion efficiency from incident photon power to photoelectron-induced radiated power can exceed 10^-5.5 for typical real conductors if photons having energies of 15 eV and higher are used, and should go higher still if the small-spot limit of this theory is exceeded as well. Overall, the simple theory equations, model constraint equations, and solution techniques presented provide a foundation for understanding, predicting, and optimizing the generated radiation, and the simple theory equations provide scaling laws to compare with computational and laboratory experimental data.

  12. How Much? Cost Models for Online Education.

    Science.gov (United States)

    Lorenzo, George

    2001-01-01

    Reviews some of the research being done in the area of cost models for online education. Describes a cost analysis handbook; an activity-based costing model that was based on an economic model for traditional instruction at the Indiana University Purdue University Indianapolis; and blending other costing models. (LRW)

  13. Development of a funding, cost, and spending model for satellite projects

    Science.gov (United States)

    Johnson, Jesse P.

    1989-01-01

    The need for a predictive budget/funding model is obvious. The current models used by the Resource Analysis Office (RAO) are used to predict the total costs of satellite projects. An effort was conducted to extend the modeling capabilities from total budget analysis to analysis of total budget and budget outlays over time. A statistically based, data-driven methodology was used to derive and develop the model. The budget data for the last 18 GSFC-sponsored satellite projects were analyzed and used to build a funding model describing the historical spending patterns. The raw data consisted of dollars spent in each specific year and their 1989-dollar equivalents. These data were converted to the standard format used by the RAO group and placed in a database. A simple statistical analysis was performed to calculate the gross statistics associated with project length and project cost and the conditional statistics on project length and project cost. The modeling approach used is derived from the theory of embedded statistics, which states that properly analyzed data will produce the underlying generating function. The process of funding large-scale projects over extended periods of time is described by life cycle cost models (LCCMs). The data were analyzed to find a model in the generic form of an LCCM. The model developed is based on a Weibull function whose parameters are found by both nonlinear optimization and nonlinear regression. In order to use this model it is necessary to transform the problem from a dollar/time space to a percentage-of-total-budget/time space. This transformation is equivalent to moving to a probability space. By using the basic rules of probability, the validity of both the optimization and the regression steps is ensured. This statistically significant model is then integrated and inverted. The resulting output represents a project schedule which relates the amount of money spent to the percentage of project completion.
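
The integrate-and-invert step above can be sketched with a Weibull cumulative distribution: the CDF gives the fraction of the budget spent by time t, and its inverse maps a target completion percentage back onto a schedule date. The shape and scale parameters below are illustrative, not fitted GSFC values.

```python
import math

# Sketch of a Weibull-based spending profile, assuming illustrative
# shape/scale parameters rather than values fitted to project data.

def spent_fraction(t, shape=2.0, scale=4.0):
    """Weibull CDF: fraction of total budget spent by year t."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def schedule_year(p, shape=2.0, scale=4.0):
    """Inverse CDF: year by which fraction p of the budget is spent."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# Round trip: the year at which half the budget is spent maps back to 0.5.
t_half = schedule_year(0.5)
assert abs(spent_fraction(t_half) - 0.5) < 1e-9
```

Working in fraction-of-budget space is exactly the "probability space" transformation the abstract describes: the CDF is monotone from 0 to 1, so it can be inverted unambiguously into a schedule.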

  14. Preliminary Multivariable Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts, and justify technology investments. Previously, the authors published two single-variable cost models based on 19 flight missions. The current paper presents the development of a multivariable space telescope cost model. The validity of the previously published models is tested, cost estimating relationships that are and are not significant cost drivers are identified, and interrelationships between variables are explored.

  15. Solid rocket motor cost model

    Science.gov (United States)

    Harney, A. G.; Raphael, L.; Warren, S.; Yakura, J. K.

    1972-01-01

    A systematic and standardized procedure for estimating the life cycle costs of solid rocket motor booster configurations is presented. The model consists of clearly defined cost categories and appropriate cost equations in which cost is related to program and hardware parameters. Cost estimating relationships are generally based on analogous experience. In this model the experience drawn on is from estimates prepared by the study contractors. Contractors' estimates are derived by means of engineering estimates for some predetermined level of detail of the SRM hardware and program functions of the system life cycle. This method is frequently referred to as bottom-up. A parametric cost analysis is a useful technique when rapid estimates are required. This is particularly true during the planning stages of a system, when hardware designs and program definition are conceptual and constantly changing as the selection process, which includes cost comparisons or trade-offs, is performed. The use of cost estimating relationships also facilitates the performance of cost sensitivity studies in which relative and comparable cost comparisons are significant.
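
A cost estimating relationship of the kind described is often a power law linking cost to a hardware parameter. The coefficients and the choice of inert weight as the driver below are invented for demonstration; they are not taken from the SRM study.

```python
# Illustrative power-law CER: cost = a * driver^b, with hypothetical
# coefficients. Used here to show the kind of sensitivity study the
# abstract mentions, not the study's actual relationships.

def cer_cost(weight_kg, a=1200.0, b=0.7):
    """Estimate cost from a single hardware parameter (e.g. inert weight)."""
    return a * weight_kg ** b

# Sensitivity: relative cost change for 10% weight growth.
base = cer_cost(1000.0)
grown = cer_cost(1100.0)
print(f"cost growth: {100 * (grown / base - 1):.1f}%")  # ~6.9% for b = 0.7
```

Because b < 1, cost grows sub-linearly with the driver, which is why such sensitivity comparisons are quick to produce once the CER is fitted.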

  16. An application of the 'Bayesian cohort model' to nuclear power plant cost analyses

    International Nuclear Information System (INIS)

    Ono, Kenji; Nakamura, Takashi

    2002-01-01

    We have developed a new method for identifying the effects of calendar year, plant age, and commercial operation starting year on the costs and performances of nuclear power plants, and we have also developed an analysis system running on personal computers. The method extends the Bayesian cohort model for time series social survey data proposed by one of the authors. The proposed method was shown to separate the above three effects more properly than traditional methods such as taking simple means by time domain. Analyses of US nuclear plant cost and performance data using the proposed method suggest that many US plants spent relatively long periods and considerable capital costs on modification at ages of about 10 to 20 years, but that, after those ages, they performed fairly well with lower and stabilized O and M and additional capital costs. (author)

  17. A simple oblique dip model for geomagnetic micropulsations

    Directory of Open Access Journals (Sweden)

    J. A. Lawrie

    It is pointed out that simple models adopted so far have tended to neglect the obliquity of the magnetic field lines entering the Earth's surface. A simple alternative model is presented, in which the ambient field lines are straight but enter wedge-shaped boundaries at half a right angle. The model is illustrated by assuming an axially symmetric, compressional, impulse-type disturbance at the outer boundary, all other boundaries being assumed to be perfectly conducting. The numerical method used is checked, from the instant the excitation ceases, by an analytical method. The first harmonic along field lines is found to be of noticeable size, but appears to be mainly due to coupling with the fundamental and with the first harmonic across field lines.

    Key words. Magnetospheric physics (MHD waves and instabilities).

  18. Bayesian models for cost-effectiveness analysis in the presence of structural zero costs.

    Science.gov (United States)

    Baio, Gianluca

    2014-05-20

    Bayesian modelling for cost-effectiveness data has received much attention in both the health economics and the statistical literature in recent years. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not warranted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We present the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
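
The hurdle idea can be sketched generatively: a Bernoulli "hurdle" decides whether an individual incurs any cost at all, and a skewed distribution (a log-normal here, as one common choice, not necessarily the paper's) models the positive costs. All parameter values are illustrative.

```python
import random

# Generative sketch of a hurdle model for cost data with structural zeros.
# p_zero, mu, and sigma are invented parameters, not estimates from data.

def simulate_costs(n, p_zero=0.3, mu=6.0, sigma=0.8, seed=1):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        if rng.random() < p_zero:
            out.append(0.0)                      # structural zero: no cost
        else:
            out.append(rng.lognormvariate(mu, sigma))  # skewed positive cost
    return out

costs = simulate_costs(10_000)
zero_share = sum(c == 0.0 for c in costs) / len(costs)
print(f"observed zero share: {zero_share:.2f}")  # close to p_zero = 0.3
```

Fitting the full Bayesian specification would then estimate the hurdle probability and the positive-cost distribution jointly, with effectiveness modelled conditionally on observed costs.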

  19. Analyzing C2 Structures and Self-Synchronization with Simple Computational Models

    Science.gov (United States)

    2011-06-01

    16th ICCRTS, “Collective C2 in Multinational Civil-Military Operations”: Analyzing C2 Structures and Self-Synchronization with Simple Computational Models. The Kuramoto Model, though with some serious limitations, provides a representation of information flow and self-synchronization in a

  20. Energy economy in the actomyosin interaction: lessons from simple models.

    Science.gov (United States)

    Lehman, Steven L

    2010-01-01

    The energy economy of the actomyosin interaction in skeletal muscle is both scientifically fascinating and practically important. This chapter demonstrates how simple cross-bridge models have guided research regarding the energy economy of skeletal muscle. Parameter variation on a very simple two-state strain-dependent model shows that early events in the actomyosin interaction strongly influence energy efficiency, and late events determine maximum shortening velocity. Addition of a weakly-bound state preceding force production allows weak coupling of cross-bridge mechanics and ATP turnover, so that a simple three-state model can simulate the velocity-dependence of ATP turnover. Consideration of the limitations of this model leads to a review of recent evidence regarding the relationship between ligand binding states, conformational states, and macromolecular structures of myosin cross-bridges. Investigation of the fine structure of the actomyosin interaction during the working stroke continues to inform fundamental research regarding the energy economy of striated muscle.

  1. A simple rainfall-runoff model for the single and long term hydrological performance of green roofs

    DEFF Research Database (Denmark)

    Locatelli, Luca; Mark, Ole; Mikkelsen, Peter Steen

    Green roofs are being widely implemented for storm water control and runoff reduction. There is a need to incorporate green roofs into urban drainage models in order to evaluate their impact, and these models must have low computational costs and fine time resolution. This paper aims to develop a model of green roof hydrological performance. A simple conceptual model for the long term and single event hydrological performance of green roofs is shown to be capable of reproducing observed runoff measurements. The model has surface and subsurface storage components representing the overall retention capacity of the green roof. The runoff from the system is described by the non-linear reservoir method, and the storage capacity of the green roof is continuously re-established by evapotranspiration. Runoff data from a green roof in Denmark were collected and used for parameter calibration.
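
The storage-plus-non-linear-reservoir idea can be sketched as a single-bucket time-stepping loop: rainfall fills the store, evapotranspiration empties it, and runoff is generated only from water above the retention capacity. A single lumped store and all parameter values below are simplifying assumptions, not the paper's calibrated Danish setup.

```python
# Single-bucket sketch of a conceptual green-roof runoff model.
# capacity, et_mm, k, and n are illustrative parameters per time step.

def simulate(rain_mm, capacity=20.0, et_mm=0.1, k=0.5, n=1.5):
    storage, runoff = 0.0, []
    for r in rain_mm:                      # one rainfall value per time step
        storage = max(0.0, storage + r - et_mm)   # fill, minus evapotranspiration
        excess = max(0.0, storage - capacity)     # water above retention capacity
        q = min(k * excess ** n, excess)          # non-linear reservoir outflow
        storage -= q
        runoff.append(q)
    return runoff

q = simulate([0.0] * 5 + [15.0, 15.0, 0.0, 0.0])
print(sum(q))   # only water above the 20 mm retention capacity runs off
```

Small events are retained entirely (the first 15 mm burst produces no runoff); only the portion of the second burst exceeding capacity leaves the roof, which is the retention behaviour the model is built to reproduce.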

  2. Open vs Laparoscopic Simple Prostatectomy: A Comparison of Initial Outcomes and Cost.

    Science.gov (United States)

    Demir, Aslan; Günseren, Kadir Ömür; Kordan, Yakup; Yavaşçaoğlu, İsmet; Vuruşkan, Berna Aytaç; Vuruşkan, Hakan

    2016-08-01

    We compared the cost-effectiveness of laparoscopic simple prostatectomy (LSP) vs open prostatectomy (OP). A total of 73 men treated for benign prostatic hyperplasia were enrolled for OP and LSP in groups 1 and 2, respectively. Perioperative findings were recorded, including operation time (OT), blood loss, transfusion rate, conversion to open surgery, and complications according to the Clavien classification. The postoperative findings, including catheterization and drainage time, the amount of analgesic used, hospitalization time, postoperative complications, International Prostate Symptom Score (IPSS) and International Index of Erectile Function (IIEF) scores, the extracted prostate weight, the uroflowmeter, as well as postvoiding residual (PVR) and quality of life (QoL) score at the postoperative third month, were analyzed. The cost of both techniques was also compared statistically. No statistical differences were found in the preoperative parameters, including age, IPSS and QoL score, maximum flow rate (Qmax), PVR, IIEF score, and prostate volumes as measured by transabdominal ultrasonography. No statistical differences were established in terms of the OT and the weight of the extracted prostate, nor with regard to complications according to the Clavien classification. However, the bleeding rate was significantly lower in group 2, and the drainage, catheterization, and hospitalization times and the amount of analgesics were also significantly lower in group 2. The postoperative third-month findings did not differ statistically, except that Qmax values were significantly greater in group 2. Although there was only a $52 difference between groups in operation cost, this difference was statistically significant. The use of LSP for prostates over 80 g is more effective than OP in terms of OT, bleeding amount, transfusion rates, catheterization time, drain removal time, hospitalization time

  3. Simple classical model for Fano statistics in radiation detectors

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, David V. [Pacific Northwest National Laboratory, National Security Division - Radiological and Chemical Sciences Group PO Box 999, Richland, WA 99352 (United States)], E-mail: David.Jordan@pnl.gov; Renholds, Andrea S.; Jaffe, John E.; Anderson, Kevin K.; Rene Corrales, L.; Peurrung, Anthony J. [Pacific Northwest National Laboratory, National Security Division - Radiological and Chemical Sciences Group PO Box 999, Richland, WA 99352 (United States)

    2008-02-01

    A simple classical model that captures the essential statistics of energy partitioning processes involved in the creation of information carriers (ICs) in radiation detectors is presented. The model pictures IC formation from a fixed amount of deposited energy in terms of the statistically analogous process of successively sampling water from a large, finite-volume container ('bathtub') with a small dipping implement ('shot or whiskey glass'). The model exhibits sub-Poisson variance in the distribution of the number of ICs generated (the 'Fano effect'). Elementary statistical analysis of the model clarifies the role of energy conservation in producing the Fano effect and yields Fano's prescription for computing the relative variance of the IC number distribution in terms of the mean and variance of the underlying, single-IC energy distribution. The partitioning model is applied to the development of the impact ionization cascade in semiconductor radiation detectors. It is shown that, in tandem with simple assumptions regarding the distribution of energies required to create an (electron, hole) pair, the model yields an energy-independent Fano factor of 0.083, in accord with the lower end of the range of literature values reported for silicon and high-purity germanium. The utility of this simple picture as a diagnostic tool for guiding or constraining more detailed, 'microscopic' physical models of detector material response to ionizing radiation is discussed.
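
The "bathtub and shot glass" picture lends itself to a direct Monte Carlo check: partition a fixed deposited energy by repeatedly drawing a random per-carrier energy until the deposit is exhausted, then compare the variance of the carrier count to its mean. Energy conservation truncates the spread, giving a sub-Poisson Fano factor. The uniform per-carrier energy distribution below is an illustrative choice, not the model's semiconductor-specific one.

```python
import random

# Monte Carlo sketch of the energy-partitioning ("bathtub") picture:
# draw per-carrier energies until the fixed deposit E0 is used up.
# The uniform(2, 4) single-carrier distribution is an assumption.

def carriers(e0, e_min, e_max, rng):
    n, remaining = 0, e0
    while remaining > 0:
        remaining -= rng.uniform(e_min, e_max)   # one "dip" per carrier
        n += 1
    return n

rng = random.Random(42)
counts = [carriers(1000.0, 2.0, 4.0, rng) for _ in range(20_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(f"Fano factor = {var / mean:.3f}")   # well below the Poisson value of 1
```

For this uniform choice the theoretical Fano factor is the relative variance of the single-carrier energy, (1/3)/3² ≈ 0.037, matching Fano's prescription quoted in the abstract.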

  4. A piezo-ring-on-chip microfluidic device for simple and low-cost mass spectrometry interfacing.

    Science.gov (United States)

    Tsao, Chia-Wen; Lei, I-Chao; Chen, Pi-Yu; Yang, Yu-Liang

    2018-02-12

    Mass spectrometry (MS) interfacing technology provides the means for incorporating microfluidic processing with post MS analysis. In this study, we propose a simple piezo-ring-on-chip microfluidic device for the controlled spraying of MALDI-MS targets. This device uses a low-cost, commercially-available ring-shaped piezoelectric acoustic atomizer (piezo-ring) directly integrated into a polydimethylsiloxane microfluidic device to spray the sample onto the MS target substrate. The piezo-ring-on-chip microfluidic device's design, fabrication, and actuation, and its pulsatile pumping effects were evaluated. The spraying performance was examined by depositing organic matrix samples onto the MS target substrate by using both an automatic linear motion motor, and manual deposition. Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) was performed to analyze the peptide samples on the MALDI target substrates. Using our technique, model peptides with 10 -6 M concentration can be successfully detected. The results also indicate that the piezo-ring-on-chip approach forms finer matrix crystals and presents better MS signal uniformity with little sample consumption compared to the conventional pipetting method.

  5. The Launch Systems Operations Cost Model

    Science.gov (United States)

    Prince, Frank A.; Hamaker, Joseph W. (Technical Monitor)

    2001-01-01

    One of NASA's primary missions is to reduce the cost of access to space while simultaneously increasing safety. A key component, and one of the least understood, is the recurring operations and support cost for reusable launch systems. In order to predict these costs, NASA, under the leadership of the Independent Program Assessment Office (IPAO), has commissioned the development of a Launch Systems Operations Cost Model (LSOCM). LSOCM is a tool to predict the operations & support (O&S) cost of new and modified reusable (and partially reusable) launch systems. The requirements are to predict the non-recurring cost for the ground infrastructure and the recurring cost of maintaining that infrastructure, performing vehicle logistics, and performing the O&S actions to return the vehicle to flight. In addition, the model must estimate the time required to cycle the vehicle through all of the ground processing activities. The current version of LSOCM is an amalgamation of existing tools, leveraging our understanding of shuttle operations cost with a means of predicting how the maintenance burden will change as the vehicle becomes more aircraft-like. The use of the Conceptual Operations Manpower Estimating Tool/Operations Cost Model (COMET/OCM) provides a solid point of departure based on shuttle and expendable launch vehicle (ELV) experience. The incorporation of the Reliability and Maintainability Analysis Tool (RMAT), as expressed by a set of response surface model equations, gives a method for estimating how changing launch system characteristics affects cost and cycle time as compared to today's shuttle system. Plans are being made to improve the model. The development team will be spending the next few months devising a structured methodology that will enable verified and validated algorithms to give accurate cost estimates. To assist in this endeavor the LSOCM team is part of an Agency-wide effort to combine resources with other cost and operations professionals to

  6. User Delay Cost Model and Facilities Maintenance Cost Model for a Terminal Control Area : Volume 3. User's Manual and Program Documentation for the Facilities Maintenance Cost Model

    Science.gov (United States)

    1978-05-01

    The Facilities Maintenance Cost Model (FMCM) is an analytic model designed to calculate expected annual labor costs of maintenance within a given FAA maintenance sector. The model is programmed in FORTRAN IV and has been demonstrated on the CDC Krono...

  7. The Shuttle Cost and Price model

    Science.gov (United States)

    Leary, Katherine; Stone, Barbara

    1983-01-01

    The Shuttle Cost and Price (SCP) model was developed as a tool to assist in evaluating major aspects of Shuttle operations that have direct and indirect economic consequences. It incorporates the major aspects of NASA Pricing Policy and corresponds to the NASA definition of STS operating costs. An overview of the SCP model is presented and the cost model portion of SCP is described in detail. Selected recent applications of the SCP model to NASA Pricing Policy issues are presented.

  8. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  9. Animated-simulation modeling facilitates clinical-process costing.

    Science.gov (United States)

    Zelman, W N; Glick, N D; Blackmore, C C

    2001-09-01

    Traditionally, the finance department has assumed responsibility for assessing process costs in healthcare organizations. To enhance process-improvement efforts, however, many healthcare providers need to include clinical staff in process cost analysis. Although clinical staff often use electronic spreadsheets to model the cost of specific processes, PC-based animated-simulation tools offer two major advantages over spreadsheets: they allow clinicians to interact more easily with the costing model so that it more closely represents the process being modeled, and they represent cost output as a cost range rather than as a single cost estimate, thereby providing more useful information for decision making.
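
The advantage of reporting a cost range rather than a single estimate can be sketched with a tiny Monte Carlo: sampling uncertain task durations and unit costs yields a distribution of process cost, from which an interval is read off. The clinical-process quantities and distributions below are invented for illustration.

```python
import random
import statistics

# Monte Carlo sketch of why simulation yields a cost *range*: each
# uncertain input (duration, rate, supplies) is sampled per case.
# All distributions and figures are hypothetical.

def process_cost(rng):
    minutes = rng.triangular(20, 60, 35)     # uncertain task duration (min)
    staff_rate = rng.uniform(0.8, 1.2)       # uncertain labor cost ($/min)
    supplies = rng.gauss(15, 3)              # supplies per case ($)
    return minutes * staff_rate + supplies

rng = random.Random(7)
samples = sorted(process_cost(rng) for _ in range(10_000))
lo, hi = samples[250], samples[-251]         # ~95% interval
med = statistics.median(samples)
print(f"cost range ≈ ${lo:.0f}-${hi:.0f}, median ${med:.0f}")
```

A spreadsheet point estimate would report only something like the median; the interval conveys how much the process cost can plausibly vary, which is the decision-making benefit the abstract describes.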

  10. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  11. The Cost and Cost-Effectiveness of Scaling up Screening and Treatment of Syphilis in Pregnancy: A Model

    Science.gov (United States)

    Kahn, James G.; Jiwani, Aliya; Gomez, Gabriela B.; Hawkes, Sarah J.; Chesson, Harrell W.; Broutet, Nathalie; Kamb, Mary L.; Newman, Lori M.

    2014-01-01

    Background Syphilis in pregnancy imposes a significant global health and economic burden. More than half of cases result in serious adverse events, including infant mortality and infection. The annual global burden from mother-to-child transmission (MTCT) of syphilis is estimated at 3.6 million disability-adjusted life years (DALYs) and $309 million in medical costs. Syphilis screening and treatment is simple, effective, and affordable, yet, worldwide, most pregnant women do not receive these services. We assessed cost-effectiveness of scaling-up syphilis screening and treatment in existing antenatal care (ANC) programs in various programmatic, epidemiologic, and economic contexts. Methods and Findings We modeled the cost, health impact, and cost-effectiveness of expanded syphilis screening and treatment in ANC, compared to current services, for 1,000,000 pregnancies per year over four years. We defined eight generic country scenarios by systematically varying three factors: current maternal syphilis testing and treatment coverage, syphilis prevalence in pregnant women, and the cost of healthcare. We calculated program and net costs, DALYs averted, and net costs per DALY averted over four years in each scenario. Program costs are estimated at $4,142,287 – $8,235,796 per million pregnant women (2010 USD). Net costs, adjusted for averted medical care and current services, range from net savings of $12,261,250 to net costs of $1,736,807. The program averts an estimated 5,754 – 93,484 DALYs, yielding net savings in four scenarios, and a cost per DALY averted of $24 – $111 in the four scenarios with net costs. Results were robust in sensitivity analyses. Conclusions Eliminating MTCT of syphilis through expanded screening and treatment in ANC is likely to be highly cost-effective by WHO-defined thresholds in a wide range of settings. Countries with high prevalence, low current service coverage, and high healthcare cost would benefit most. Future analyses can be
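
The headline metric above reduces to simple arithmetic: net cost per DALY averted is net program cost divided by DALYs averted, with negative net cost meaning the program is cost-saving. The scenario figures below are hypothetical illustrations, not the paper's scenario-level inputs.

```python
# Sketch of the cost-effectiveness arithmetic. Inputs are invented;
# the paper's eight scenarios each have their own costs and DALYs.

def cost_per_daly(net_cost_usd, dalys_averted):
    """Net cost per DALY averted; negative means cost-saving."""
    return net_cost_usd / dalys_averted

# Hypothetical net-cost scenario:
print(f"${cost_per_daly(1_500_000, 30_000):.0f} per DALY averted")  # $50

# Hypothetical net-savings scenario (cost-saving, ratio is negative):
print(cost_per_daly(-12_000_000, 60_000) < 0)  # True
```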

  12. Two simple models of classical heat pumps.

    Science.gov (United States)

    Marathe, Rahul; Jayannavar, A M; Dhar, Abhishek

    2007-03-01

    Motivated by recent studies of models of particle and heat quantum pumps, we study similar simple classical models and examine the possibility of heat pumping. Unlike many of the usual ratchet models of molecular engines, the models we study do not have particle transport. We consider a two-spin system and a coupled oscillator system which exchange heat with multiple heat reservoirs and which are acted upon by periodic forces. The simplicity of our models allows accurate numerical and exact solutions and unambiguous interpretation of results. We demonstrate that while both our models seem to be built on similar principles, one is able to function as a heat pump (or engine) while the other is not.

  13. Simple model of inhibition of chain-branching combustion processes

    Science.gov (United States)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical for hydrocarbon systems. The model is based on the generalised model of the combustion process with chain-branching reaction combined with the one-stage reaction describing the thermal mode of flame propagation with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in flame and leads to the change of reaction mode from the chain-branching reaction to a thermal mode of flame propagation. With the increase of inhibitor the transition of chain-branching mode of reaction to the reaction with straight-chains (non-branching chain reaction) is observed. The inhibition part of the model includes a block of three reactions to describe the influence of the inhibitor. The heat losses are incorporated into the model via Newton cooling. The flame extinction is the result of the decreased heat release of inhibited reaction processes and the suppression of radical overshoot with the further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas phase model, detailed kinetic model) with the results obtained using the suggested simple model is presented. The calculations with the detailed kinetic model demonstrate the following modes of combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot, inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of chemical influence of inhibitor, and (3) transition to thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with the addition of the inhibitor observed using detailed kinetic models.

  14. Overall feature of EAST operation space by using simple Core-SOL-Divertor model

    International Nuclear Information System (INIS)

    Hiwatari, R.; Hatayama, A.; Zhu, S.; Takizuka, T.; Tomita, Y.

    2005-01-01

    We have developed a simple Core-SOL-Divertor (C-S-D) model to investigate qualitatively the overall features of the operational space for the integrated core and edge plasma. To construct the simple C-S-D model, a simple core plasma model of ITER physics guidelines and a two-point SOL-divertor model are used. The simple C-S-D model is applied to the study of the EAST operational space with lower hybrid current drive experiments under various kinds of trade-off for the basic plasma parameters. Effective methods for extending the operation space are also presented. As shown by this study for the EAST operation space, it is evident that the C-S-D model is a useful tool to understand qualitatively the overall features of the plasma operation space. (author)

  15. Applying Interpretive Structural Modeling to Cost Overruns in Construction Projects in the Sultanate of Oman

    Directory of Open Access Journals (Sweden)

    K. Alzebdeh

    2015-06-01

    Cost overruns in construction projects are a problem faced by project managers, engineers, and clients throughout the Middle East. Globally, several studies in the literature have focused on identifying the causes of these overruns and have used statistical methods to rank them according to their impacts, but none of these studies have considered the interactions among the factors. This paper examines interpretive structural modelling (ISM) as a viable technique for modelling complex interactions among factors responsible for cost overruns in construction projects in the Sultanate of Oman. In particular, thirteen interrelated factors associated with cost overruns were identified, along with their contextual interrelationships. Application of ISM leads to organizing these factors in a hierarchical structure which effectively demonstrates their interactions in a simple way. Four factors were found to be at the root of cost overruns: instability of the US dollar, changes in governmental regulations, faulty cost estimation, and poor coordination among projects' parties. Taking appropriate actions to minimize the influence of these factors can ultimately lead to better control of future project costs. This study is of value to managers and decision makers because it provides a powerful yet easy-to-apply approach for investigating the problem of cost overruns and other similar issues.
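
The core ISM computation can be sketched in a few lines: from a binary direct-influence matrix, form the reachability matrix by Boolean transitive closure, then peel off hierarchy levels wherever an element's reachability set equals its intersection with its antecedent set. The 4-factor adjacency matrix below is hypothetical, not the paper's 13-factor data.

```python
# Sketch of ISM level partitioning. `adj[i][j] = 1` means factor i
# directly influences factor j; the matrix here is invented.

def closure(m):
    """Reflexive-transitive Boolean closure (Warshall's algorithm)."""
    n = len(m)
    r = [[bool(m[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def levels(m):
    """Peel off ISM levels: top (most-influenced) factors come out first."""
    r = closure(m)
    remaining, out = set(range(len(m))), []
    while remaining:
        reach = {i: {j for j in remaining if r[i][j]} for i in remaining}
        ante = {i: {j for j in remaining if r[j][i]} for i in remaining}
        level = {i for i in remaining if reach[i] == reach[i] & ante[i]}
        out.append(sorted(level))
        remaining -= level
    return out

# Hypothetical influences: factor 0 drives 1, which drives 2 and 3.
adj = [[0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]]
print(levels(adj))  # [[2, 3], [1], [0]] — root driver 0 sits at the bottom
```

Root-cause factors (like the four the study identifies) emerge in the last level peeled off, at the base of the hierarchy.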

  16. A Simple Method for Estimating the Economic Cost of Productivity Loss Due to Blindness and Moderate to Severe Visual Impairment.

    Science.gov (United States)

    Eckert, Kristen A; Carter, Marissa J; Lansingh, Van C; Wilson, David A; Furtado, João M; Frick, Kevin D; Resnikoff, Serge

    2015-01-01

    To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) using simple models (analogous to how a rapid assessment model relates to a comprehensive model) based on minimum wage (MW) and gross national income (GNI) per capita (US$, 2011). Cost of blindness (COB) was calculated for the age group ≥50 years in nine sample countries by assuming the loss of current MW and loss of GNI per capita. It was assumed that all individuals work until 65 years old and that half of visual impairment prevalent in the ≥50 years age group is prevalent in the 50-64 years age group. For cost of MSVI (COMSVI), individual wage and GNI loss of 30% was assumed. Results were compared with the values of the uncorrected refractive error (URE) model of productivity loss. COB (MW method) ranged from $0.1 billion in Honduras to $2.5 billion in the United States, and COMSVI ranged from $0.1 billion in Honduras to $5.3 billion in the US. COB (GNI method) ranged from $0.1 million in Honduras to $7.8 billion in the US, and COMSVI ranged from $0.1 billion in Honduras to $16.5 billion in the US. Most GNI method values were near equivalent to those of the URE model. Although most people with blindness and MSVI live in developing countries, the highest productivity losses are in high income countries. The global economy could improve if eye care were made more accessible and more affordable to all.
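The MW method described above reduces to a one-line calculation: working-age cases times the wage they would otherwise earn, scaled by the assumed loss fraction. The Python sketch below illustrates it; the helper name, the assumed wage, and the case counts are invented for the example and are not figures from the study.

```python
# Illustrative sketch of the minimum-wage (MW) method described above.
# All input figures are hypothetical, not taken from the study.

def productivity_loss(prevalent_cases_50_plus, annual_wage, loss_fraction=1.0):
    """Annual productivity loss in US$.

    Assumes (as in the abstract) that half of the visual impairment
    prevalent in the >=50 group falls in the working-age 50-64 group,
    and that those individuals would otherwise earn `annual_wage`.
    `loss_fraction` is 1.0 for blindness, 0.3 for MSVI.
    """
    working_age_cases = 0.5 * prevalent_cases_50_plus
    return working_age_cases * annual_wage * loss_fraction

# Hypothetical country: 200,000 blind persons aged >=50, MW of $15,000/yr
cob = productivity_loss(200_000, 15_000)            # cost of blindness
comsvi = productivity_loss(800_000, 15_000, 0.3)    # cost of MSVI
print(f"COB: ${cob/1e9:.1f} bn, COMSVI: ${comsvi/1e9:.1f} bn")
```

Swapping the minimum wage for GNI per capita gives the GNI-method variant with the same structure.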

  17. Simple model systems: a challenge for Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Di Carlo Marta

    2012-04-01

    Full Text Available Abstract The success of biomedical research has led to improvements in human health and increased life expectancy. An unexpected consequence has been an increase in age-related diseases and, in particular, neurodegenerative diseases. These disorders are generally late onset and exhibit complex pathologies including memory loss, cognitive defects, movement disorders and death. Here we describe how the use of simple animal models such as worms, fish, flies, ascidians and sea urchins has facilitated the understanding of several biochemical mechanisms underlying Alzheimer's disease (AD), one of the most widespread neurodegenerative pathologies. The discovery of specific genes and proteins associated with AD, and the development of new technologies for the production of transgenic animals, has helped researchers to overcome the lack of natural models. Moreover, simple model systems of AD have been utilized to obtain key information for evaluating potential therapeutic interventions and for testing the efficacy of putative neuroprotective compounds.

  18. Modelling cost-effectiveness of different vasectomy methods in India, Kenya, and Mexico

    Directory of Open Access Journals (Sweden)

    Seamans Yancy

    2007-07-01

    Full Text Available Abstract Background Vasectomy is generally considered a safe and effective method of permanent contraception. The historical effectiveness of vasectomy has been questioned by recent research results indicating that the most commonly used method of vasectomy – simple ligation and excision (L and E) – appears to have a relatively high failure rate, with reported pregnancy rates as high as 4%. Updated methods such as fascial interposition (FI) and thermal cautery can lower the rate of failure but may require additional financial investments and may not be appropriate for low-resource clinics. In order to better compare the cost-effectiveness of these different vasectomy methods, we modelled the costs of different vasectomy methods using cost data collected in India, Kenya, and Mexico and effectiveness data from the latest published research. Methods The costs associated with providing vasectomies were determined in each country through interviews with clinic staff. Costs collected were economic, direct, programme costs of fixed vasectomy services but did not include large capital expenses or general recurrent costs for the health care facility. Estimates of the time required to provide service were gained through interviews, and training costs were based on the total costs of vasectomy training programmes in each country. Effectiveness data were obtained from recent published studies and comparative cost-effectiveness was determined using cost per couple-years of protection (CYP). Results In each country, the labour to provide the vasectomy and follow-up services accounts for the greatest portion of the overall cost. Because each country almost exclusively used one vasectomy method at all of the clinics included in the study, we modelled costs based on the additional material, labour, and training costs required in each country. Using a model of a robust vasectomy programme, more effective methods such as FI and thermal cautery reduce the cost per
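The comparative metric used here, cost per couple-year of protection, can be illustrated with a minimal sketch: divide the procedure cost by the protection actually delivered once failures are discounted. The costs, failure rates, and CYP credited per vasectomy below are placeholder values, and the simple failure-rate discounting is an assumption of this sketch, not the study's method.

```python
# Simplified cost-per-CYP comparison of vasectomy methods.
# Costs, failure rates, and the CYP conversion are hypothetical placeholders.

CYP_PER_VASECTOMY = 10.0  # assumed couple-years of protection if effective

def cost_per_cyp(procedure_cost, failure_rate):
    """Cost per couple-year of protection, discounting failed procedures."""
    effective_cyp = CYP_PER_VASECTOMY * (1.0 - failure_rate)
    return procedure_cost / effective_cyp

# Ligation & excision: cheaper but higher failure rate;
# thermal cautery: slightly dearer materials but more effective.
print(round(cost_per_cyp(20.0, 0.04), 2))  # L and E
print(round(cost_per_cyp(25.0, 0.01), 2))  # thermal cautery
```

With these invented numbers L and E still wins per CYP; the study's point is that real country-level cost and failure data can shift this comparison.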

  19. Ground-Based Telescope Parametric Cost Model

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
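A single-variable model of the kind derived here is typically a power law in aperture diameter, cost = a·D^b, fitted by least squares in log-log space. The sketch below shows that fitting step on made-up telescope data; it is not the paper's data set and the resulting coefficients are not its published values.

```python
import numpy as np

# Sketch of a single-variable parametric model: cost = a * D**b,
# fitted by ordinary least squares in log-log space.
# The telescope data below are invented for illustration.

diameters = np.array([1.0, 2.4, 4.0, 8.0, 10.0])   # aperture D, metres
costs = np.array([2.0, 12.0, 40.0, 210.0, 360.0])  # $M, hypothetical

# polyfit on logs returns [slope, intercept] = [b, log a]
b, log_a = np.polyfit(np.log(diameters), np.log(costs), 1)
a = np.exp(log_a)
print(f"cost ~ {a:.2f} * D^{b:.2f}")

def predict_cost(d):
    """Predicted cost ($M) for aperture d (m) under the fitted power law."""
    return a * d ** b
```

The exponent b is the quantity usually debated in the telescope cost literature; multi-variable models add regressors such as radius of curvature in the same log-linear fashion.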

  20. A 'simple' hybrid model for power derivatives

    International Nuclear Information System (INIS)

    Lyle, Matthew R.; Elliott, Robert J.

    2009-01-01

    This paper presents a method for valuing power derivatives using a supply-demand approach. Our method extends work in the field by incorporating randomness into the base load portion of the supply stack function and equating it with a noisy demand process. We obtain closed form solutions for European option prices written on average spot prices considering two different supply models: a mean-reverting model and a Markov chain model. The results are extensions of the classic Black-Scholes equation. The model provides a relatively simple approach to describe the complicated price behaviour observed in electricity spot markets and also allows for computationally efficient derivatives pricing. (author)

  1. Foreshock and aftershocks in simple earthquake models.

    Science.gov (United States)

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.

  2. SUPPLIES COSTS: AN EXPLORATORY STUDY WITH APPLICATION OF MEASUREMENT MODEL OF LOGISTICS COSTS

    Directory of Open Access Journals (Sweden)

    Ana Paula Ferreira Alves

    2013-12-01

    Full Text Available One of the main reasons for the difficulty in adopting an integrated method of calculating logistics costs is the lack of adequate cost information. Managing the supply chain and identifying its costs can provide managers with information for decision making, generating competitive advantage. Several models for calculating logistics costs have been proposed, by Uelze (1974), Dias (1996), Goldratt (2002), Christopher (2007), Castiglioni (2009) and Borba & Gibbon (2009), with little disclosure of results. In this context, this study aims to evaluate supplies costs by applying a measurement model of logistics costs. Methodologically, the study is characterized as exploratory. In the company's original condition, the model indicated that about R$ 2.5 million was tied up in the supplies management process, with an imbalance between replacement and storage costs. Updating the company's data, a 52% reduction in the costs of replacing and storing supplies is achievable. Thus, the logistics cost model applied to supplies proved feasible to implement, as well as providing information to assist management and decision making in supply logistics.

  3. Atmospheric greenhouse effect - simple model; Atmosfaerens drivhuseffekt - enkel modell

    Energy Technology Data Exchange (ETDEWEB)

    Kanestroem, Ingolf; Henriksen, Thormod

    2011-07-01

    The article presents a simple model for the atmospheric greenhouse effect based on treating both the sun and the earth as 'black bodies', so that the physical laws that apply to such bodies may be used. It also explains why some gases are greenhouse gases while other gases in the atmosphere have no greenhouse effect. But first, some important concepts and physical laws encountered in the article are reviewed. (AG)
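The black-body reasoning described here can be reproduced in a few lines: compute the effective emission temperature from the Stefan-Boltzmann law, then add a single fully absorbing atmospheric layer, which raises the surface temperature by a factor of 2^(1/4). This is the standard textbook one-layer model with standard constants, not necessarily the exact formulation used in the article.

```python
# Minimal black-body greenhouse model in the spirit of the article:
# sun and earth as black bodies, plus one fully absorbing atmospheric
# layer. Textbook values, not taken from the article itself.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

# Effective (no-atmosphere) emission temperature from radiative balance:
# absorbed solar flux per unit surface area = S0 * (1 - A) / 4
absorbed = S0 * (1 - ALBEDO) / 4.0
T_eff = (absorbed / SIGMA) ** 0.25

# One opaque layer radiates both up and down, doubling the flux the
# surface must emit, so T_surface = 2**(1/4) * T_eff.
T_surface = 2 ** 0.25 * T_eff
print(round(T_eff, 1), round(T_surface, 1))
```

The one-layer result overshoots the observed mean surface temperature of about 288 K, which is why fuller treatments use a partially absorbing or multi-layer atmosphere.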

  4. Operating cost model for local service airlines

    Science.gov (United States)

    Anderson, J. L.; Andrastek, D. A.

    1976-01-01

    Several mathematical models now exist which determine the operating economics for a United States trunk airline. These models are valuable in assessing the impact of new aircraft into an airline's fleet. The use of a trunk airline cost model for the local service airline does not result in representative operating costs. A new model is presented which is representative of the operating conditions and resultant costs for the local service airline. The calculated annual direct and indirect operating costs for two multiequipment airlines are compared with their actual operating experience.

  5. NLP model of a LiBr–H2O absorption refrigeration system for the minimization of the annual operating cost

    International Nuclear Information System (INIS)

    Rubio-Maya, Carlos; Pacheco-Ibarra, J. Jesús; Belman-Flores, Juan M.; Galván-González, Sergio R.; Mendoza-Covarrubias, Crisanto

    2012-01-01

    In this paper the optimization of a LiBr–H2O absorption refrigeration system, with the annual operating cost as the objective function to be minimized, is presented. The optimization problem is established as a Non-Linear Programming (NLP) model, allowing a formulation of the problem in a simple and structured way and reducing the typical complexity of thermal systems. The model is composed of three main parts: a thermodynamic model based on the exergy concept, including the proper formulation of the thermodynamic properties of the LiBr–H2O mixture; an economic model; and a set of inequality constraints. The solution of the model is obtained using the CONOPT solver suitable for NLP problems (code is available on request). The results show the values of the decision variables that minimize the annual cost under the set of assumptions considered in the model and agree well with those reported in other works using different optimization approaches. - Highlights: ► The optimization of an ARS is presented using the annual operating cost as the objective function. ► The problem is established as an NLP model, allowing a formulation in a simple and structured way. ► Several formulations for the thermodynamic properties were tested in order to implement the simplest ones. ► The results obtained agree well with those reported in comparable works.

  6. Nonfuel O&M costs for laser and heavy-ion fusion power plants

    International Nuclear Information System (INIS)

    Pendergrass, J.H.

    1986-01-01

    Very simple nonfuel operating and maintenance (O&M) cost models have been used in many inertial confinement fusion (ICF) commercial applications studies. Often, ICF O&M costs have been accounted for by adding a small fraction of plant initial capital cost to other annual power production costs. Lack of definition of ICF technology and/or perceptions that O&M costs would be small relative to capital-related costs are some reasons for such simple treatments. This approach does not permit rational treatment of potentially significant differences in O&M costs for ICF plants with different driver, reactor, target, etc., technologies, or rational comparisons with conventional technologies. Improved understanding of ICF makes more accurate estimates for some O&M costs appear feasible. More detailed O&M cost models, even if of modest accuracy in some areas, are useful for comparisons

  7. A simple mechanical model for the isotropic harmonic oscillator

    International Nuclear Information System (INIS)

    Nita, Gelu M

    2010-01-01

    A constrained elastic pendulum is proposed as a simple mechanical model for the isotropic harmonic oscillator. The conceptual and mathematical simplicity of this model recommends it as an effective pedagogical tool in teaching basic physics concepts at advanced high school and introductory undergraduate course levels.

  8. Design and Use of the Simple Event Model (SEM)

    NARCIS (Netherlands)

    van Hage, W.R.; Malaisé, V.; Segers, R.H.; Hollink, L.

    2011-01-01

    Events have become central elements in the representation of data from domains such as history, cultural heritage, multimedia and geography. The Simple Event Model (SEM) is created to model events in these various domains, without making assumptions about the domain-specific vocabularies used. SEM

  9. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    Science.gov (United States)

    Harwati

    2017-06-01

    Supplier selection is a decision involving many criteria. Supplier selection models usually involve more than five main criteria and more than ten sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes supplier selection models difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytic Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria are sufficient for an easy and simple selection model: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real-case simulation shows that the simple model yields the same decision as a more complex model.
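Once AHP has produced the criterion weights, applying the model is a simple weighted sum per supplier. The sketch below uses the four weights reported above; the supplier names and their 0-10 ratings are invented for illustration.

```python
# Weighted-sum supplier scoring using the four criteria and AHP weights
# reported above (price 0.4, shipment 0.3, quality 0.2, service 0.1).
# Supplier ratings (0-10 per criterion) are invented for the example.

WEIGHTS = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

suppliers = {
    "A": {"price": 8, "shipment": 6, "quality": 7, "service": 5},
    "B": {"price": 6, "shipment": 9, "quality": 8, "service": 7},
}

def score(ratings):
    """Weighted composite score for one supplier."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

best = max(suppliers, key=lambda s: score(suppliers[s]))
print({s: round(score(r), 2) for s, r in suppliers.items()}, "->", best)
```

The simplicity is the point of the model: a spreadsheet with four columns suffices once the weights are fixed.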

  10. Process Cost Modeling for Multi-Disciplinary Design Optimization

    Science.gov (United States)

    Bao, Han P.; Freeman, William (Technical Monitor)

    2002-01-01

    For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high payoff MDO problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. Next, a quick review of cost estimation techniques is made with the intention to

  11. Preliminary Multi-Variable Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.

  12. Modeling Operations Costs for Human Exploration Architectures

    Science.gov (United States)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  13. Modelling the costs of energy crops. A case study of US corn and Brazilian sugar cane

    International Nuclear Information System (INIS)

    Mejean, Aurelie; Hope, Chris

    2010-01-01

    High crude oil prices, uncertainties about the consequences of climate change and the eventual decline of conventional oil production raise the prospects of alternative fuels, such as biofuels. This paper describes a simple probabilistic model of the costs of energy crops, drawing on the user's degree of belief about a series of parameters as an input. This forward-looking analysis quantifies the effects of production constraints and experience on the costs of corn and sugar cane, which can then be converted to bioethanol. Land is a limited and heterogeneous resource: the crop cost model builds on the marginal land suitability, which is assumed to decrease as more land is taken into production, driving down the marginal crop yield. Also, the maximum achievable yield is increased over time by technological change, while the yield gap between the actual yield and the maximum yield decreases through improved management practices. The results show large uncertainties in the future costs of producing corn and sugar cane, with a 90% confidence interval of 2.9-7.2$/GJ in 2030 for marginal corn costs, and 1.5-2.5$/GJ in 2030 for marginal sugar cane costs. The influence of each parameter on these supply costs is examined. (author)

  14. MONITOR: A computer model for estimating the costs of an integral monitored retrievable storage facility

    International Nuclear Information System (INIS)

    Reimus, P.W.; Sevigny, N.L.; Schutz, M.E.; Heller, R.A.

    1986-12-01

    The MONITOR model is a FORTRAN 77 based computer code that provides parametric life-cycle cost estimates for a monitored retrievable storage (MRS) facility. MONITOR is very flexible in that it can estimate the costs of an MRS facility operating under almost any conceivable nuclear waste logistics scenario. The model can also accommodate input data of varying degrees of complexity and detail (ranging from very simple to more complex), which makes it ideal for use in the MRS program, where new designs and new cost data are frequently offered for consideration. MONITOR can be run as an independent program, or it can be interfaced with the Waste System Transportation and Economic Simulation (WASTES) model, a program that simulates the movement of waste through a complete nuclear waste disposal system. The WASTES model drives the MONITOR model by providing it with the annual quantities of waste that are received, stored, and shipped at the MRS facility. Three runs of MONITOR are documented in this report. The first two runs used Version 1 of the MONITOR code to simulate the costs developed by the Ralph M. Parsons Company in the 2A (backup) version of the MRS cost estimate. In one of these runs MONITOR was run as an independent model, and in the other MONITOR was driven by an input file generated by the WASTES model. The two runs correspond to identical cases, and the fact that they gave identical results verified that the code performs the same calculations in both modes of operation. The third run used Version 2 of the MONITOR code to simulate the costs developed by the Ralph M. Parsons Company in the 2B (integral) version of the MRS cost estimate; this run was made with MONITOR as an independent model. The results of several cases have been verified by hand calculations.

  15. Validation of the OpCost logging cost model using contractor surveys

    Science.gov (United States)

    Conor K. Bell; Robert F. Keefe; Jeremy S. Fried

    2017-01-01

    OpCost is a harvest and fuel treatment operations cost model developed to function as both a standalone tool and an integrated component of the Bioregional Inventory Originated Simulation Under Management (BioSum) analytical framework for landscape-level analysis of forest management alternatives. OpCost is an updated implementation of the Fuel Reduction Cost Simulator...

  16. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DoD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
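Analogy-based estimation of the kind described can be sketched as a nearest-neighbour lookup: predict a new project's effort from the k most similar completed projects. The project data below are synthetic, and this is only a schematic of the general approach, not the NASA model itself.

```python
import math

# Schematic analogy-based ("nearest neighbour") effort estimation:
# predict effort as the mean effort of the k most similar past projects.
# Feature vectors and efforts below are synthetic examples.

past = [  # (features, e.g. [KLOC, team size], effort in person-months)
    ([10.0, 3.0], 24.0),
    ([12.0, 4.0], 30.0),
    ([50.0, 10.0], 140.0),
    ([55.0, 12.0], 160.0),
]

def estimate(features, k=2):
    """Mean effort of the k past projects nearest in feature space."""
    ranked = sorted(past, key=lambda p: math.dist(features, p[0]))
    return sum(effort for _, effort in ranked[:k]) / k

print(estimate([11.0, 3.5]))  # resembles the two small projects
```

In practice features are normalized before distance computation and clustering is used to pick analogues; this sketch omits both steps for brevity.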

  17. An intercomparison of mesoscale models at simple sites for wind energy applications

    DEFF Research Database (Denmark)

    Olsen, Bjarke Tobias; Hahmann, Andrea N.; Sempreviva, Anna Maria

    2017-01-01

    An intercomparison of the output from 25 NWP models is presented for three sites in northern Europe characterized by simple terrain. The models are evaluated using a number of statistical properties relevant to wind energy and verified with observations. On average the models have small wind speed biases offshore and aloft (... %) and larger biases closer to the surface over land (> 7 %). A similar pattern is detected for the inter-model spread. Strongly stable and strongly unstable atmospheric stability conditions are associated with larger wind speed errors. Strong indications are found that using a grid spacing larger than 3 km decreases the accuracy of the models, but we found no evidence that using a grid spacing smaller than 3 km is necessary for these simple sites. Applying the models to a simple offshore wind farm highlights the importance of capturing the correct distributions of wind speed and direction.

  18. Simple model of string with colour degrees of freedom

    Science.gov (United States)

    Hadasz, Leszek

    1994-03-01

    We consider a simple model of string with colour charges on its ends. The model is constructed by rewriting the action describing classical spinless as well as spinning particles with colour charge in terms of fields living on the “string worldsheet” bounded by trajectories of the particles.

  19. Simple mathematical models for housing allocation to a homeless ...

    African Journals Online (AJOL)

    We present simple mathematical models for modelling a homeless population and housing allocation. We look at a situation whereby the local authority makes temporary accommodation available for some of the homeless for a while and we examine how this affects the number of families homeless at any given time.

  20. A Simple theoretical model for 63Ni betavoltaic battery

    International Nuclear Information System (INIS)

    ZUO, Guoping; ZHOU, Jianliang; KE, Guotu

    2013-01-01

    A numerical simulation of the energy deposition distribution in semiconductors is performed for 63 Ni beta particles. Results show that the energy deposition distribution exhibits an approximate exponential decay law. A simple theoretical model is developed for the 63 Ni betavoltaic battery based on these distribution characteristics. The correctness of the model is validated against two experiments from the literature. The theoretical short-circuit current agrees well with the experimental results, while the open-circuit voltage deviates from the experimental results owing to the influence of PN junction defects and the simplification of the source. The theoretical model can be applied to 63 Ni and 147 Pm betavoltaic batteries. - Highlights: • The energy deposition distribution is found to follow an approximate exponential decay law when beta particles emitted from 63 Ni pass through a semiconductor. • A simple theoretical model for the 63 Ni betavoltaic battery is constructed based on this exponential decay law. • The theoretical model can be applied to betavoltaic batteries whose radioactive source has an energy spectrum similar to that of 63 Ni, such as 147 Pm
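The exponential decay law at the heart of the model implies that the fraction of beta energy deposited within depth x of the surface is 1 − exp(−μx). The sketch below illustrates this relationship; the absorption coefficient is an assumed illustrative value, not the paper's fitted parameter.

```python
import math

# Sketch of the exponential energy-deposition law mentioned above:
# deposited energy density ~ exp(-MU * x), so the cumulative fraction
# absorbed within depth x is 1 - exp(-MU * x). MU is an assumed
# illustrative value, not a fitted parameter from the paper.

MU = 2.0e3  # effective absorption coefficient, 1/cm (assumed)

def deposited_fraction(depth_cm):
    """Fraction of beta energy deposited within `depth_cm` of the surface."""
    return 1.0 - math.exp(-MU * depth_cm)

# Most of the energy is absorbed within the first few microns:
print(round(deposited_fraction(5e-4), 3))   # within 5 um
print(round(deposited_fraction(20e-4), 3))  # within 20 um
```

This shallow deposition profile is why the junction depth matters so much for the short-circuit current of a betavoltaic cell.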

  1. A simple model for simultaneous methanogenic-denitrification systems

    DEFF Research Database (Denmark)

    Garibay-Orijel, C.; Ahring, Birgitte Kiær; Rinderknecht-Seijas, N.

    2006-01-01

    We describe a useful and simple model for studies of simultaneous methanogenic-denitrification (M-D) systems. One equation predicts an inverse relationship between the percentage of electron donor channeled into dissimilatory denitrification and the loading ratio X given by grams degradable COD per...

  2. Integrated modeling of software cost and quality

    International Nuclear Information System (INIS)

    Rone, K.Y.; Olson, K.M.

    1994-01-01

    In modeling the cost and quality of software systems, the relationship between cost and quality must be considered. This explicit relationship is dictated by the criticality of the software being developed. The balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and the developers with respect to the processes being employed

  3. Observations and models of simple nocturnal slope flows

    International Nuclear Information System (INIS)

    Doran, J.C.; Horst, J.W.

    1983-01-01

    Measurements of simple nocturnal slope winds were taken on Rattlesnake Mountain, a nearly ideal two-dimensional ridge. Tower and tethered balloon instrumentation allowed the determination of the wind and temperature characteristics of the katabatic layer as well as the ambient conditions. Two cases were chosen for study; these were marked by well-defined surface-based temperature inversions and a low-level maximum in the downslope wind component. The downslope development of the slope flow could be determined from the tower measurements, and showed a progressive strengthening of the katabatic layer. Hydraulic models developed by Manins and Sawford (1979a) and Briggs (1981) gave useful estimates of drainage layer depths, but were not otherwise applicable. A simple numerical model that relates the eddy diffusivity to the local turbulent kinetic energy was found to give good agreement with the observed wind and temperature profiles of the slope flows

  4. Simple model for low-frequency guitar function

    DEFF Research Database (Denmark)

    Christensen, Ove; Vistisen, Bo B.

    1980-01-01

    The frequency response of sound pressure and top plate mobility is studied around the two first resonances of the guitar. These resonances are shown to result from a coupling between the fundamental top plate mode and the Helmholtz resonance of the cavity. A simple model is proposed for low-frequency guitar function. The model predicts frequency response of sound pressure and top plate mobility which are in close quantitative agreement with experimental responses. The absolute sound pressure level and mobility level are predicted to within a few decibels, and the equivalent piston area ...

  5. Simple models of equilibrium and nonequilibrium phenomena

    International Nuclear Information System (INIS)

    Lebowitz, J.L.

    1987-01-01

    This volume consists of two chapters of particular interest to researchers in the field of statistical mechanics. The first chapter is based on the premise that the best way to understand the qualitative properties that characterize many-body (i.e. macroscopic) systems is to study 'a number of the more significant model systems which, at least in principle are susceptible of complete analysis'. The second chapter deals exclusively with nonequilibrium phenomena. It reviews the theory of fluctuations in open systems to which they have made important contributions. Simple but interesting model examples are emphasised

  6. Oil and gas pipeline construction cost analysis and developing regression models for cost estimation

    Science.gov (United States)

    Thaduri, Ravi Kiran

    In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, labor, ROW and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on different pipeline lengths, diameter, location, pipeline volume and year of completion. In a pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The compressor stations are analyzed based on the capacity, year of completion and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of compressor stations, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor stations for various capacities and locations.

  7. Simple, fast and accurate two-diode model for photovoltaic modules

    Energy Technology Data Exchange (ETDEWEB)

    Ishaque, Kashif; Salam, Zainal; Taheri, Hamed [Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM 81310, Skudai, Johor Bahru (Malaysia)

    2011-02-15

    This paper proposes an improved modeling approach for the two-diode model of photovoltaic (PV) module. The main contribution of this work is the simplification of the current equation, in which only four parameters are required, compared to six or more in the previously developed two-diode models. Furthermore the values of the series and parallel resistances are computed using a simple and fast iterative method. To validate the accuracy of the proposed model, six PV modules of different types (multi-crystalline, mono-crystalline and thin-film) from various manufacturers are tested. The performance of the model is evaluated against the popular single diode models. It is found that the proposed model is superior when subjected to irradiance and temperature variations. In particular the model matches very accurately for all important points of the I-V curves, i.e. the peak power, short-circuit current and open circuit voltage. The modeling method is useful for PV power converter designers and circuit simulator developers who require simple, fast yet accurate model for the PV module. (author)
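
    As a rough sketch of how an implicit two-diode equation of this kind can be evaluated: the paper uses its own simplified four-parameter formulation and a fast iterative method for the resistances, which are not reproduced here; this sketch instead solves the standard two-diode relation I = Ipv - Id1 - Id2 - (V + I·Rs)/Rp by plain bisection, with all parameter values chosen as illustrative placeholders rather than taken from the paper.

    ```python
    import math

    def two_diode_current(v, i_pv=8.21, i01=4e-10, i02=4e-10,
                          rs=0.2, rp=300.0, a1=1.0, a2=1.2,
                          vt=0.0259, ns=36):
        """Terminal current I of a PV module from the two-diode equation
        I = Ipv - Id1 - Id2 - (V + I*Rs)/Rp, solved by bisection.
        All parameter values are illustrative placeholders, not the paper's."""
        def residual(i):
            vd = v + i * rs  # junction voltage seen by both diodes
            return (i_pv
                    - i01 * (math.exp(vd / (a1 * ns * vt)) - 1.0)
                    - i02 * (math.exp(vd / (a2 * ns * vt)) - 1.0)
                    - vd / rp
                    - i)
        # residual is strictly decreasing in i, so a wide bracket suffices
        lo, hi = -100.0, 2.0 * i_pv
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if residual(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    ```

    With these placeholder parameters the current stays near the photocurrent at short circuit, falls off along the I-V knee, and crosses zero at the open-circuit voltage, the three regions the paper emphasizes for model accuracy.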

  8. A simple model to estimate the optimal doping of p-type oxide superconductors

    Directory of Open Access Journals (Sweden)

    Adir Moysés Luiz

    2008-12-01

    Full Text Available Oxygen doping of superconductors is discussed. Doping high-Tc superconductors with oxygen seems to be more efficient than other doping procedures. Using the assumption of double valence fluctuations, we present a simple model to estimate the optimal doping of p-type oxide superconductors. The experimental values of oxygen content for optimal doping of the most important p-type oxide superconductors can be accounted for adequately using this simple model. We expect that our simple model will encourage further experimental and theoretical research on superconducting materials.

  9. Simple and cost-effective method of highly conductive and elastic carbon nanotube/polydimethylsiloxane composite for wearable electronics.

    Science.gov (United States)

    Kim, Jeong Hun; Hwang, Ji-Young; Hwang, Ha Ryeon; Kim, Han Seop; Lee, Joong Hoon; Seo, Jae-Won; Shin, Ueon Sang; Lee, Sang-Hoon

    2018-01-22

    The development of various flexible and stretchable materials has attracted interest for promising applications in biomedical engineering and electronics industries. This interest in wearable electronics, stretchable circuits, and flexible displays has created a demand for stable, easily manufactured, and cheap materials. However, the construction of flexible and elastic electronics, on which commercial electronic components can be mounted through simple and cost-effective processing, remains challenging. We have developed a nanocomposite of carbon nanotubes (CNTs) and polydimethylsiloxane (PDMS) elastomer. To achieve uniform distributions of CNTs within the polymer, an optimized dispersion process was developed using isopropyl alcohol (IPA) and methyl-terminated PDMS in combination with ultrasonication. After vaporizing the IPA, various shapes and sizes can be easily created with the nanocomposite, depending on the mold. The material provides high flexibility, elasticity, and electrical conductivity without requiring a sandwich structure. It is also biocompatible and mechanically stable, as demonstrated by cytotoxicity assays and cyclic strain tests (over 10,000 cycles). We demonstrate the potential for the healthcare field through strain sensors, flexible electric circuits, and biopotential measurements such as EEG, ECG, and EMG. This simple and cost-effective fabrication method for CNT/PDMS composites provides a promising process and material for various applications of wearable electronics.

  10. Some Simple Arguments about Cost Externalization and its Relevance to the Price of Fusion Energy

    International Nuclear Information System (INIS)

    Budny, R.; Winfree, R.

    1999-01-01

    The primary goal of fusion energy research is to develop a source of energy that is less harmful to the environment than the present sources. A concern often expressed by critics of fusion research is that fusion energy will never be economically competitive with fossil fuels, which in 1997 provided 75% of the world's energy. Indeed, studies of fusion electricity generation generally project fusion costs to be higher than those of conventional methods. Yet it is widely agreed that the environmental costs of fossil fuel use are high. Because these costs aren't included in the market price, and furthermore because many governments subsidize fossil fuel production, fossil fuels seem less expensive than they really are. Here we review some simple arguments about cost externalization which provide a useful background for discussion of energy prices. The collectively self-destructive behavior that is the root of many environmental problems, including fossil fuel use, was termed ''the tragedy of the commons'' by the biologist G. Hardin. Hardin's metaphor is that of a grazing commons that is open to all. Each herdsman, in deciding whether to add a cow to his herd, compares the benefit of doing so, which accrues to him alone, to the cost, which is shared by all the herdsmen using the commons, and therefore adds his cow. In this way individually rational behavior leads to the collective destruction of the shared resource. As Hardin pointed out, pollution is one kind of tragedy of the commons. CO2 emissions and global warming are in this sense classic tragedies of the commons.

  11. Modeling reproductive decisions with simple heuristics

    Directory of Open Access Journals (Sweden)

    Peter Todd

    2013-10-01

    Full Text Available BACKGROUND Many of the reproductive decisions that humans make happen without much planning or forethought, arising instead through the use of simple choice rules or heuristics that involve relatively little information and processing. Nonetheless, these heuristic-guided decisions are typically beneficial, owing to humans' ecological rationality - the evolved fit between our constrained decision mechanisms and the adaptive problems we face. OBJECTIVE This paper reviews research on the ecological rationality of human decision making in the domain of reproduction, showing how fertility-related decisions are commonly made using various simple heuristics matched to the structure of the environment in which they are applied, rather than being made with information-hungry mechanisms based on optimization or rational economic choice. METHODS First, heuristics for sequential mate search are covered; these heuristics determine when to stop the process of mate search by deciding that a good-enough mate who is also mutually interested has been found, using a process of aspiration-level setting and assessing. These models are tested via computer simulation and comparison to demographic age-at-first-marriage data. Next, a heuristic process of feature-based mate comparison and choice is discussed, in which mate choices are determined by a simple process of feature-matching with relaxing standards over time. Parental investment heuristics used to divide resources among offspring are summarized. Finally, methods for testing the use of such mate choice heuristics in a specific population over time are then described.

  12. Some simple applications of probability models to birth intervals

    International Nuclear Information System (INIS)

    Shrestha, G.

    1987-07-01

    An attempt has been made in this paper to apply some simple probability models to birth intervals under the assumption of constant fecundability and varying fecundability among women. The parameters of the probability models are estimated by using the method of moments and the method of maximum likelihood. (author). 9 refs, 2 tabs
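
    The method of moments mentioned in the abstract can be illustrated with the simplest constant-fecundability case: if each month carries the same probability p of conception, the waiting time to conception is geometric with mean 1/p, so p can be estimated as the reciprocal of the sample mean. The sketch below uses simulated waiting times (the paper's data and its varying-fecundability models are not reproduced); all names and values are hypothetical.

    ```python
    import random

    def estimate_fecundability(waiting_times):
        """Method-of-moments estimate under a geometric waiting-time model
        with constant monthly fecundability p: E[T] = 1/p, so p_hat = 1/mean."""
        mean = sum(waiting_times) / len(waiting_times)
        return 1.0 / mean

    def months_to_conception(p, rng):
        """Draw one geometric waiting time: months until the first success."""
        k = 1
        while rng.random() >= p:
            k += 1
        return k

    rng = random.Random(42)
    sample = [months_to_conception(0.2, rng) for _ in range(5000)]
    p_hat = estimate_fecundability(sample)
    ```

    With 5000 simulated intervals the estimate lands close to the true p = 0.2, which is the point of the moment method: a closed-form estimator from a single sample statistic.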

  13. Performance of semiconductor radiation sensors for simple and low-cost radiation detector

    International Nuclear Information System (INIS)

    Tanimura, Yoshihiko; Birumachi, Atsushi; Yoshida, Makoto; Watanabe, Tamaki

    2008-01-01

    In order to develop a simple but reliable radiation detector for the general public, the photon detection performance of radiation sensors has been studied in photon calibration fields and by Monte Carlo simulations. A silicon p-i-n photodiode and a CdTe detector were selected as low-cost sensors. Their energy responses to ambient dose equivalent H*(10) were evaluated over the energy range from 60 keV to 2 MeV. The response of the CdTe detector decreases markedly with increasing photon energy. On the other hand, the photodiode has the advantage of an almost flat response above 150 keV. The sensitivities of these sensors are 4 to 6 cpm for natural radiation. Detection limits of the radiation level are low enough to detect the large increases in radiation that would accompany emergency situations at nuclear power plants, fuel treatment facilities and so on. (author)

  14. Specific heat of the simple-cubic Ising model

    NARCIS (Netherlands)

    Feng, X.; Blöte, H.W.J.

    2010-01-01

    We provide an expression quantitatively describing the specific heat of the Ising model on the simple-cubic lattice in the critical region. This expression is based on finite-size scaling of numerical results obtained by means of a Monte Carlo method. It agrees satisfactorily with series expansions

  15. Simple, cost effective & result oriented framework for supplier performance measurement in sports goods manufacturing industry

    Directory of Open Access Journals (Sweden)

    2011-09-01

    Full Text Available The emergence of global markets has increased competition worldwide. For the sports goods manufacturing industry, an intensive supplier-base industry with limited resources to sustain itself in what is already a very competitive market, there is a need for the entire supply chain, viz. raw material and machinery suppliers and manufacturers, to measure supplier performance in order to reduce business risks and revenue losses. The main aim of this research paper is to show how to design and execute a simple, cost-effective and result-oriented framework for supplier performance measurement in sports goods manufacturing small and medium enterprises.

  16. Simple model for decay of laser generated shock waves

    International Nuclear Information System (INIS)

    Trainor, R.J.

    1980-01-01

    A simple model is derived to calculate the hydrodynamic decay of laser-generated shock waves. Comparison with detailed hydrocode simulations shows good agreement between calculated time evolution of shock pressure, position, and instantaneous pressure profile. Reliability of the model decreases in regions of the target where superthermal-electron preheat effects become comparable to shock effects

  17. The productivity and cost-efficiency of models for involving nurse practitioners in primary care: a perspective from queueing analysis.

    Science.gov (United States)

    Liu, Nan; D'Aunno, Thomas

    2012-04-01

    To develop simple stylized models for evaluating the productivity and cost-efficiencies of different practice models to involve nurse practitioners (NPs) in primary care, and in particular to generate insights on what affects the performance of these models and how. The productivity of a practice model is defined as the maximum number of patients that can be accounted for by the model under a given timeliness-to-care requirement; cost-efficiency is measured by the corresponding annual cost per patient in that model. Appropriate queueing analysis is conducted to generate formulas and values for these two performance measures. Model parameters for the analysis are extracted from the previous literature and survey reports. Sensitivity analysis is conducted to investigate the model performance under different scenarios and to verify the robustness of findings. Employing an NP, whose salary is usually lower than a primary care physician, may not be cost-efficient, in particular when the NP's capacity is underutilized. Besides provider service rates, workload allocation among providers is one of the most important determinants for the cost-efficiency of a practice model involving NPs. Capacity pooling among providers could be a helpful strategy to improve efficiency in care delivery. The productivity and cost-efficiency of a practice model depend heavily on how providers organize their work and a variety of other factors related to the practice environment. Queueing theory provides useful tools to take into account these factors in making strategic decisions on staffing and panel size selection for a practice model. © Health Research and Educational Trust.
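
    The queueing logic described above can be sketched with the most elementary single-server model. Assuming an M/M/1 provider (the paper's stylized models may differ), mean time in system is W = 1/(mu - lambda), so the largest arrival rate meeting a timeliness requirement W <= T is lambda = mu - 1/T; dividing annual provider cost by the patients served gives a cost-efficiency proxy. Service rates, salary, and hours below are hypothetical placeholders.

    ```python
    def max_panel_arrival_rate(service_rate, max_time_in_system):
        """Largest Poisson arrival rate (patients/hour) an M/M/1 provider
        can sustain while keeping mean time in system W = 1/(mu - lam)
        within max_time_in_system (hours)."""
        return max(service_rate - 1.0 / max_time_in_system, 0.0)

    def annual_cost_per_patient(annual_salary, arrival_rate, hours_per_year=2000):
        """Cost-efficiency proxy: provider cost spread over patients served."""
        return annual_salary / (arrival_rate * hours_per_year)
    ```

    For example, a provider serving 4 patients/hour under a 1-hour timeliness requirement sustains at most 3 arrivals/hour; a lower-salaried provider with a slower service rate sustains fewer, which is how the salary advantage can vanish when capacity is underutilized, as the abstract notes.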

  18. Total inpatient treatment costs in patients with severe burns: towards a more accurate reimbursement model.

    Science.gov (United States)

    Mehra, Tarun; Koljonen, Virve; Seifert, Burkhardt; Volbracht, Jörk; Giovanoli, Pietro; Plock, Jan; Moos, Rudolf Maria

    2015-01-01

    Reimbursement systems have difficulties depicting the actual cost of burn treatment, leaving care providers with a significant financial burden. Our aim was to establish a simple and accurate reimbursement model compatible with prospective payment systems. A total of 370 966 electronic medical records of patients discharged in 2012 to 2013 from Swiss university hospitals were reviewed. A total of 828 cases of burns including 109 cases of severe burns were retained. Costs, revenues and earnings for severe and nonsevere burns were analysed and a linear regression model predicting total inpatient treatment costs was established. The median total costs per case for severe burns was tenfold higher than for nonsevere burns (179 949 CHF [167 353 EUR] vs 11 312 CHF [10 520 EUR], interquartile ranges 96 782-328 618 CHF vs 4 874-27 783 CHF, p <0.001). The median of earnings per case for nonsevere burns was 588 CHF (547 EUR) (interquartile range -6 720 - 5 354 CHF) whereas severe burns incurred a large financial loss to care providers, with median earnings of -33 178 CHF (-30 856 EUR) (interquartile range -95 533 - 23 662 CHF). Differences were highly significant (p <0.001). Our linear regression model predicting total costs per case with length of stay (LOS) as independent variable had an adjusted R2 of 0.67 (p <0.001 for LOS). Severe burns are systematically underfunded within the Swiss reimbursement system. Flat-rate DRG-based refunds poorly reflect the actual treatment costs. In conclusion, we suggest a reimbursement model based on a per diem rate for treatment of severe burns.
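
    A single-predictor cost model of this type is ordinary least squares on length of stay. The sketch below fits cost ~ b0 + b1 * LOS on synthetic data (the coefficients, noise level, and sample are invented for illustration, not the paper's Swiss data) and reports the R-squared, the fit statistic the abstract cites.

    ```python
    import random

    random.seed(0)
    n = 200
    los = [random.uniform(1, 120) for _ in range(n)]                 # length of stay, days
    cost = [5000 + 1800 * x + random.gauss(0, 15000) for x in los]   # synthetic CHF

    # Ordinary least squares for cost ~ b0 + b1 * LOS
    mx = sum(los) / n
    my = sum(cost) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(los, cost))
          / sum((x - mx) ** 2 for x in los))
    b0 = my - b1 * mx

    pred = [b0 + b1 * x for x in los]
    ss_res = sum((y - p) ** 2 for y, p in zip(cost, pred))
    ss_tot = sum((y - my) ** 2 for y in cost)
    r2 = 1 - ss_res / ss_tot
    ```

    With a strong LOS signal relative to the noise, the recovered slope is close to the generating value and R-squared is high; the paper's reported adjusted R-squared of 0.67 reflects the noisier real-world relationship.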

  19. A simple model of bedform migration

    DEFF Research Database (Denmark)

    Bartholdy, Jesper; Ernstsen, Verner Brandbyge; Flemming, Burg W

    2010-01-01

    A model linking subaqueous dune migration to the effective (grain related) shear stress is calibrated by means of flume data for bedform dimensions and migration rates. The effective shear stress is calculated on the basis of a new method assuming a near-bed layer above the mean bed level in which the current velocity accelerates towards the bedform crest. As a consequence, the effective bed shear stress corresponds to the shear stress acting directly on top of the bedform. The model operates with the critical Shields stress as a function of grain size, and predicts the deposition (volume per unit time and width) of naturally-packed bed material on the bedform lee side, qb(crest). The model is simple, built on a rational description of simplified sediment mechanics, and its calibration constant can be explained in accordance with estimated values of the physical constants on which it is based. Predicted ...

  20. Characteristics and Properties of a Simple Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Kowal Robert

    2016-12-01

    Full Text Available A simple linear regression model is one of the pillars of classic econometrics. Despite the passage of time, it continues to raise interest both from the theoretical side and from the application side. One of the many fundamental questions in the model concerns determining derivative characteristics and studying the properties existing in their scope; this paper addresses the first of these aspects. The literature of the subject provides several classic solutions in that regard. In the paper, a completely new design is proposed, based on the direct application of variance and its properties, resulting from the non-correlation of certain estimators with the mean, within the scope of which some fundamental dependencies of the model characteristics are obtained in a much more compact manner. The apparatus allows for a simple and uniform demonstration of multiple dependencies and fundamental properties of the model, and it does so in an intuitive manner. The results were obtained in a classic, traditional area where everything, as it might seem, has already been thoroughly studied and discovered.

  1. Safeguards First Principle Initiative (SFPI) Cost Model

    International Nuclear Information System (INIS)

    Price, Mary Alice

    2010-01-01

    The Nevada Test Site (NTS) began operating Material Control and Accountability (MC and A) under the Safeguards First Principle Initiative (SFPI), a risk-based and cost-effective program, in December 2006. The NTS SFPI Comprehensive Assessment of Safeguards Systems (COMPASS) Model is made up of specific elements (MC and A plan, graded safeguards, accounting systems, measurements, containment, surveillance, physical inventories, shipper/receiver differences, assessments/performance tests) and various sub-elements, which are each assigned effectiveness and contribution factors that when weighted and rated reflect the health of the MC and A program. The MC and A Cost Model, using an Excel workbook, calculates budget and/or actual costs using these same elements/sub-elements resulting in total costs and effectiveness costs per element/sub-element. These calculations allow management to identify how costs are distributed for each element/sub-element. The Cost Model, as part of the SFPI program review process, enables management to determine if spending is appropriate for each element/sub-element.

  2. Advanced fuel cycle cost estimation model and its cost estimation results for three nuclear fuel cycles using a dynamic model in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sungki, E-mail: sgkim1@kaeri.re.kr [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Ko, Wonil [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Youn, Saerom; Gao, Ruxing [University of Science and Technology, 217 Gajungro, Yuseong-gu, Daejeon 305-350 (Korea, Republic of); Bang, Sungsig, E-mail: ssbang@kaist.ac.kr [Korea Advanced Institute of Science and Technology, Department of Business and Technology Management, 291 Deahak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2015-11-15

    Highlights: • The nuclear fuel cycle cost using a new cost estimation model was analyzed. • The material flows of three nuclear fuel cycle options were calculated. • The generation cost of once-through was estimated to be 66.88 mills/kW h. • The generation cost of pyro-SFR recycling was estimated to be 78.06 mills/kW h. • The reactor cost was identified as the main cost driver of pyro-SFR recycling. - Abstract: The present study analyzes advanced nuclear fuel cycle cost estimation models such as the different discount rate model and its cost estimation results. To do so, an analysis of the nuclear fuel cycle cost of three options (direct disposal (once through), PWR–MOX (Mixed OXide fuel), and Pyro-SFR (Sodium-cooled Fast Reactor)) from the viewpoint of economic sense, focusing on the cost estimation model, was conducted using a dynamic model. From an analysis of the fuel cycle cost estimation results, it was found that some cost gap exists between the traditional same discount rate model and the advanced different discount rate model. However, this gap does not change the priority of the nuclear fuel cycle option from the viewpoint of economics. In addition, the fuel cycle costs of OT (Once-Through) and Pyro-SFR recycling based on the most likely value using a probabilistic cost estimation except for reactor costs were calculated to be 8.75 mills/kW h and 8.30 mills/kW h, respectively. Namely, the Pyro-SFR recycling option was more economical than the direct disposal option. However, if the reactor cost is considered, the economic sense in the generation cost between the two options (direct disposal vs. Pyro-SFR recycling) can be changed because of the high reactor cost of an SFR.

  3. NASA Instrument Cost/Schedule Model

    Science.gov (United States)

    Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George

    2011-01-01

    NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system level cost estimation tool; a subsystem level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.

  4. A simple geometrical model describing shapes of soap films suspended on two rings

    Science.gov (United States)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

    We measured and analysed the stability of two types of soap films suspended on two rings using a simple conical frusta-based model, where we use the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced the well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match the experimental data and known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
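
    A frusta-based soap-film model of this kind can be sketched in a few lines: stack conical frusta between the two rings, sum their lateral areas pi*(r1 + r2)*slant, and relax the interior radii to minimize total area. The sketch below uses coordinate-wise gradient descent instead of a spreadsheet solver; the ring geometry, grid size, and step size are arbitrary illustrative choices, not values from the paper.

    ```python
    import math

    def frusta_area(radii, dz):
        """Total lateral area of a stack of conical frusta of height dz:
        each contributes pi * (r1 + r2) * slant, slant = hypot(r2 - r1, dz)."""
        return sum(math.pi * (r1 + r2) * math.hypot(r2 - r1, dz)
                   for r1, r2 in zip(radii, radii[1:]))

    def relax_soap_film(ring_radius=1.0, half_height=0.4, n=20,
                        sweeps=2000, step=1e-3):
        """Minimize frusta area with both end radii pinned to the rings,
        via coordinate-wise numerical gradient descent on interior radii."""
        dz = 2.0 * half_height / n
        r = [ring_radius] * (n + 1)   # start from a cylinder
        for _ in range(sweeps):
            for i in range(1, n):
                r[i] += 1e-6
                up = frusta_area(r, dz)
                r[i] -= 2e-6
                down = frusta_area(r, dz)
                r[i] += 1e-6                        # restore
                r[i] -= step * (up - down) / 2e-6   # descend the gradient
        return r, frusta_area(r, dz)
    ```

    For rings of radius 1 separated by 0.8, the relaxed profile necks inward toward the catenoid shape and its area drops below the starting cylinder's 2·pi·R·2h, reproducing the catenoid behavior the abstract describes.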

  5. A simple flow-concentration modelling method for integrating water ...

    African Journals Online (AJOL)

    A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...

  6. Plural Governance: A Modified Transaction Cost Model

    DEFF Research Database (Denmark)

    Mols, Niels Peter; Menard, Claude

    2014-01-01

    Plural governance is a form of governance where a firm both makes and buys similar goods or services. Despite a widespread use of plural governance there are no transaction cost models of how plural governance affects performance. This paper reviews the literature about plural forms and proposes a model relating transaction cost and resource-based variables to the cost of the plural form. The model is then used to analyze when the plural form is efficient compared to alternative governance structures. We also use the model to discuss the strength of three plural form synergies.

  7. Cost functions of greenhouse models

    International Nuclear Information System (INIS)

    Linderoth, H.

    2000-01-01

    The benchmark is equal to the cost (D) caused by an increase in temperature since the middle of the nineteenth century (T) of nearly 2.5 deg. C. According to mainstream economists, the benchmark is 1-2% of GDP, but very different estimates can also be found. Even though there appears to be agreement among a number of economists that the benchmark is 1-2% of GDP, major differences exist when it comes to estimating D for different sectors. One of the main problems is how to estimate non-market activities. Normally, the benchmark is the best guess, but due to the possibility of catastrophic events this can be considerably smaller than the mean. Certainly, the cost function is skewed to the right. The benchmark is just one point on the cost curve. To a great extent, cost functions are alike in greenhouse models (D = α·T^λ). Cost functions are region and sector dependent in several models. In any case, both α (benchmark) and λ are rough estimates. Besides being dependent on α and λ, the marginal emission cost depends on the discount rate. In fact, because emissions have effects continuing for many years, the discount rate is clearly the most important parameter. (au)
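
    The two ingredients of the abstract, a power-law damage function D = α·T^λ and discounting of a long damage stream, can be illustrated numerically. In the sketch below α is chosen so that D(2.5 deg. C) equals 1.5% of GDP, the middle of the 1-2% benchmark, and λ = 2; both are rough illustrative values, exactly as the text stresses.

    ```python
    def damage_share_of_gdp(t, alpha=0.0024, lam=2.0):
        """Greenhouse damage cost function D = alpha * T**lam (share of GDP).
        alpha = 0.0024 makes D(2.5) = 1.5%; both parameters are rough guesses."""
        return alpha * t ** lam

    def present_value(annual_damage, years=100, discount_rate=0.03):
        """Discounted sum of a constant annual damage stream. Because damages
        run over many years, the discount rate dominates the result."""
        return sum(annual_damage / (1.0 + discount_rate) ** y
                   for y in range(1, years + 1))
    ```

    Doubling the temperature quadruples the damage under λ = 2, and raising the discount rate from 3% to 6% roughly halves the present value of a century-long damage stream, which is why the discount rate is the most important parameter in these models.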

  8. Swimming near the substrate: a simple robotic model of stingray locomotion

    International Nuclear Information System (INIS)

    Blevins, Erin; Lauder, George V

    2013-01-01

    Studies of aquatic locomotion typically assume that organisms move through unbounded fluid. However, benthic fishes swim close to the substrate and will experience significant ground effects, which will be greatest for fishes with wide spans such as benthic batoids and flatfishes. Ground effects on fixed-wing flight are well understood, but these models are insufficient to describe the dynamic interactions between substrates and undulating, oscillating fish. Live fish alter their swimming behavior in ground effect, complicating comparisons of near-ground and freestream swimming performance. In this study, a simple, stingray-inspired physical model offers insights into ground effects on undulatory swimmers, contrasting the self-propelled swimming speed, power requirements, and hydrodynamics of fins swimming with fixed kinematics near and far from a solid boundary. Contrary to findings for gliding birds and other fixed-wing fliers, ground effect does not necessarily enhance the performance of undulating fins. Under most kinematic conditions, fins do not swim faster in ground effect, power requirements increase, and the cost of transport can increase by up to 10%. The influence of ground effect varies with kinematics, suggesting that benthic fish might modulate their swimming behavior to minimize locomotor penalties and incur benefits from swimming near a substrate. (paper)

  9. Model checking exact cost for attack scenarios

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Nielson, Flemming

    2017-01-01

    Attack trees constitute a powerful tool for modelling security threats. Many security analyses of attack trees can be seamlessly expressed as model checking of Markov Decision Processes obtained from the attack trees, thus reaping the benefits of a coherent framework and mature tool support. However, current model checking does not encompass the exact cost analysis of an attack, which is standard for attack trees. Our first contribution is the logic erPCTL with cost-related operators. The extended logic allows one to analyse the probability of an event satisfying given cost bounds and to compute the exact cost of an event. Our second contribution is the model checking algorithm for erPCTL. Finally, we apply our framework to the analysis of attack trees.

  10. Simple models for the simulation of submarine melt for a Greenland glacial system model

    Science.gov (United States)

    Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey

    2018-01-01

Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and grounding-line position of these outlet glaciers. As the ocean warms, submarine melt is expected to increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the need for models with extremely high resolution, of the order of a few hundred meters. That requirement applies not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundreds of meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates qualitatively similar behavior to 3-D general circulation models. To match the results of the 3-D models quantitatively, a scaling factor of order 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data. Our results show that the line plume model is more appropriate than the cone plume model for simulating
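The scaling at the heart of such line-plume parameterizations can be sketched in a few lines. This is a minimal sketch, not the authors' formulation: the function name, the constant `c_melt` (the order-one scaling factor mentioned in the abstract), and the cube-root dependence on subglacial discharge are assumptions drawn from the general plume-theory literature.

```python
def line_plume_melt(q_sg, thermal_forcing, c_melt=1.0):
    """Illustrative submarine melt rate for a line plume, using the
    commonly cited scaling melt ~ q^(1/3) * TF, where q_sg is subglacial
    discharge and TF is ocean thermal forcing. c_melt is the order-one
    tuning factor; its value here is purely illustrative."""
    if q_sg < 0 or thermal_forcing < 0:
        raise ValueError("discharge and thermal forcing must be non-negative")
    return c_melt * q_sg ** (1.0 / 3.0) * thermal_forcing
```

Under this scaling, melt grows linearly with thermal forcing but sublinearly with discharge, which is why warming fjord waters can dominate the melt response.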

  11. Global transportation cost modeling for long range planning

    International Nuclear Information System (INIS)

    Pope, R.B.; Michelhaugh, R.D.; Singley, P.T.; Lester, P.B.

    1998-01-01

The U.S. Department of Energy (DOE) is preparing to perform significant remediation activities at the sites for which it is responsible. To accomplish this, it is preparing a corporate global plan focused on activities over the next decade. Significant in these planned activities is the transportation of the waste arising from the remediation. The costs of this transportation are expected to be large. To support the initial assessment of the plan, a cost-estimating model was developed, peer-reviewed against other available packaging and transportation cost data, and applied to a significant number of shipping campaigns of radioactive waste. This cost-estimating model, known as the Ten-Year Plan Transportation Cost Model (TEPTRAM), can be used to model radioactive material shipments between DOE sites or from DOE sites to non-DOE destinations. The model considers the costs for recovering and processing of the wastes, packaging the wastes for transport, and the carriage of the waste. It also provides a rough order-of-magnitude estimate of labor costs associated with preparing and undertaking the shipments. At the user's direction, the model can also consider the cost of DOE's interactions with its external stakeholders (e.g., state and local governments and tribal entities) and the cost associated with tracking and communicating with the shipments. By considering all of these sources of costs, it provides a mechanism for assessing and comparing the costs of various waste processing and shipping campaign alternatives to help guide decision-making. Recent analyses of specific planned shipments of transuranic (TRU) waste which consider alternative packaging options are described. These analyses show that options are available for significantly reducing total costs while still satisfying regulatory requirements. (authors)
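A campaign cost estimate of this kind composes additively from the cost sources the abstract lists. The following is a minimal sketch of that structure; the function name, the linear per-ton terms, and the fixed labor term are assumptions for illustration, not the actual TEPTRAM formulas.

```python
def campaign_cost(tons, processing_per_ton, packaging_per_ton,
                  carriage_per_ton, labor_fixed,
                  stakeholder_cost=0.0, tracking_cost=0.0):
    """Rough order-of-magnitude cost of one shipping campaign, in the
    spirit of TEPTRAM: per-ton processing, packaging, and carriage
    costs, plus a fixed labor estimate and optional user-directed
    stakeholder-interaction and shipment-tracking costs."""
    per_ton = processing_per_ton + packaging_per_ton + carriage_per_ton
    return tons * per_ton + labor_fixed + stakeholder_cost + tracking_cost
```

Comparing alternatives then amounts to evaluating this sum for each packaging or routing option and ranking the totals.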

  12. Simple inflationary quintessential model. II. Power law potentials

    Science.gov (United States)

    de Haro, Jaume; Amorós, Jaume; Pan, Supriya

    2016-09-01

The present work is a sequel to our previous work [Phys. Rev. D 93, 084018 (2016)], which depicted a simple version of an inflationary quintessential model whose inflationary stage was described by a Higgs-type potential and whose quintessential phase was driven by an exponential potential. Additionally, the model predicted a nonsingular universe in the past which was geodesically past incomplete. Further, it was also found that the model is in agreement with the Planck 2013 data when running is allowed. But this model provides a theoretical value of the running which is far smaller than the central value of the best fit in the ns, r, αs≡d ns/d ln k parameter space, where ns, r, αs respectively denote the spectral index, tensor-to-scalar ratio, and running of the spectral index associated with any inflationary model; consequently, to analyze the viability of the model one has to focus on the two-dimensional marginalized confidence level in the allowed domain of the plane (ns, r) without taking into account the running. Unfortunately, such analysis shows that this model does not pass this test. However, in this sequel we propose a family of models governed by a single parameter α ∈[0,1] which constitutes another "inflationary quintessential model", where the inflation and quintessence regimes are respectively described by a power law potential and a cosmological constant. The model is also nonsingular, although geodesically past incomplete as in the cited model. Moreover, the present one is simpler than the previous model and is in excellent agreement with the observational data. In fact, we note that, unlike the previous model, a large number of the models of this family with α ∈[0,1/2) match both Planck 2013 and Planck 2015 data without allowing the running. Thus, the properties in the current family of models compared to its past companion justify its need for a better cosmological model with the successive

  13. Evaluation of Cost Models and Needs & Gaps Analysis

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad

    2014-01-01

This report, 'D3.1—Evaluation of Cost Models and Needs & Gaps Analysis', provides an analysis of existing research related to the economics of digital curation and cost & benefit modelling. It reports upon the investigation of how well current models and tools meet stakeholders' needs for calculating and comparing financial information, and how they break down costs. This is followed by an in-depth analysis of stakeholders' needs for financial information derived from the 4C project stakeholder consultation. The stakeholders' needs analysis indicated that models should: • support accounting, but more importantly enable budgeting • be able... Based on this evaluation, the report aims to point out gaps that need to be bridged in order to increase the uptake of cost & benefit modelling and good practices that will enable costing and comparison of the costs of alternative scenarios—which in turn provides a starting point...

  14. A simple multistage closed-(box+reservoir) model of chemical evolution

    Directory of Open Access Journals (Sweden)

    Caimmi R.

    2011-01-01

    Full Text Available Simple closed-box (CB) models of chemical evolution are extended in two respects, namely (i) simple closed-(box+reservoir) (CBR) models allowing gas outflow from the box into the reservoir (Hartwick 1976) or gas inflow into the box from the reservoir (Caimmi 2007) with rate proportional to the star formation rate, and (ii) simple multistage closed-(box+reservoir) (MCBR) models allowing different stages of evolution characterized by different inflow or outflow rates. The theoretical differential oxygen abundance distribution (TDOD) predicted by the model remains close to a continuous broken straight line. An application is made where a fictitious sample is built up from two distinct samples of halo stars and taken as representative of the inner Galactic halo. The related empirical differential oxygen abundance distribution (EDOD) is represented, to an acceptable extent, as a continuous broken line for two viable [O/H]-[Fe/H] empirical relations. The slopes and the intercepts of the regression lines are determined, and then used as input parameters to MCBR models. Within the errors (±σ), regression line slopes correspond to a large inflow during the earlier stage of evolution and to low or moderate outflow during the subsequent stages. A possible inner halo - outer (metal-poor) bulge connection is also briefly discussed. Quantitative results cannot be considered for applications to the inner Galactic halo, unless selection effects and disk contamination are removed from halo samples, and discrepancies between different oxygen abundance determination methods are explained.

  15. Genealogies in simple models of evolution

    International Nuclear Information System (INIS)

    Brunet, Éric; Derrida, Bernard

    2013-01-01

    We review the statistical properties of the genealogies of a few models of evolution. In the asexual case, selection leads to coalescence times which grow logarithmically with the size of the population, in contrast with the linear growth of the neutral case. Moreover for a whole class of models, the statistics of the genealogies are those of the Bolthausen–Sznitman coalescent rather than the Kingman coalescent in the neutral case. For sexual reproduction in the neutral case, the time to reach the first common ancestors for the whole population and the time for all individuals to have all their ancestors in common are also logarithmic in the population size, as predicted by Chang in 1999. We discuss how these times are modified by introducing selection in a simple way. (paper)

  16. Offshore Wind Energy Cost Modeling Installation and Decommissioning

    CERN Document Server

    Kaiser, Mark J

    2012-01-01

    Offshore wind energy is one of the most promising and fastest growing alternative energy sources in the world. Offshore Wind Energy Cost Modeling provides a methodological framework to assess installation and decommissioning costs, and using examples from the European experience, provides a broad review of existing processes and systems used in the offshore wind industry. Offshore Wind Energy Cost Modeling provides a step-by-step guide to modeling costs over four sections. These sections cover: ·Background and introductory material, ·Installation processes and vessel requirements, ·Installation cost estimation, and ·Decommissioning methods and cost estimation.  This self-contained and detailed treatment of the key principles in offshore wind development is supported throughout by visual aids and data tables. Offshore Wind Energy Cost Modeling is a key resource for anyone interested in the offshore wind industry, particularly those interested in the technical and economic aspects of installation and decom...

  17. pyhector: A Python interface for the simple climate model Hector

    Energy Technology Data Exchange (ETDEWEB)

    N Willner, Sven; Hartin, Corinne; Gieseke, Robert

    2017-04-01

    Pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015) developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM1, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object oriented, simple global climate carbon cycle model. Its carbon cycle consists of a one pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system (Hartin et al. 2016). The model input is time series of greenhouse gas emissions; as example scenarios for these the Pyhector package contains the Representative Concentration Pathways (RCPs)2. These were developed to cover the range of baseline and mitigation emissions scenarios and are widely used in climate change research and model intercomparison projects. Using DataFrames from the Python library Pandas (McKinney 2010) as a data structure for the scenarios simplifies generating and adapting scenarios. Other parameters of the Hector model can easily be modified when running the model. Pyhector can be installed using pip from the Python Package Index.3 Source code and issue tracker are available in Pyhector's GitHub repository4. Documentation is provided through Readthedocs5. Usage examples are also contained in the repository as a Jupyter Notebook (Pérez and Granger 2007; Kluyver et al. 2016). Courtesy of the Mybinder project6, the example Notebook can also be executed and modified without installing Pyhector locally.

  18. Simple implementation of general dark energy models

    International Nuclear Information System (INIS)

    Bloomfield, Jolyon K.; Pearson, Jonathan A.

    2014-01-01

    We present a formalism for the numerical implementation of general theories of dark energy, combining the computational simplicity of the equation of state for perturbations approach with the generality of the effective field theory approach. An effective fluid description is employed, based on a general action describing single-scalar field models. The formalism is developed from first principles, and constructed keeping the goal of a simple implementation into CAMB in mind. Benefits of this approach include its straightforward implementation, the generality of the underlying theory, the fact that the evolved variables are physical quantities, and that model-independent phenomenological descriptions may be straightforwardly investigated. We hope this formulation will provide a powerful tool for the comparison of theoretical models of dark energy with observational data

  19. Parametric Cost and Schedule Modeling for Early Technology Development

    Science.gov (United States)

    2018-04-02

    Research Note, National Security Report: Parametric Cost and Schedule Modeling for Early Technology Development, by Chuck Alexander.

  20. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    Science.gov (United States)

    Xiao, T.

    2012-12-01

    One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sampling size is crucial in implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sample method. The results of this study can be applied to other environmental studies that require spatial sampling.
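The coupling of sample size design and sampling cost described above can be sketched with the standard sample-size formula for estimating a proportion (accuracy) to a given margin of error. The per-site cost components follow the abstract's three categories; the function names and the flat per-site cost structure are illustrative assumptions, not the study's actual model.

```python
import math

def sample_size(z, p, d):
    """Samples needed to estimate a proportion p (e.g. expected map
    accuracy) to within margin d at the confidence level implied by
    z (z = 1.96 for 95%). Standard formula: n = z^2 * p(1-p) / d^2."""
    return math.ceil(z * z * p * (1.0 - p) / (d * d))

def assessment_cost(n, transport_per_site, field_per_site, lab_per_site):
    """Total assessment cost: transportation, field data collection,
    and laboratory analysis, assumed flat per sample site."""
    return n * (transport_per_site + field_per_site + lab_per_site)
```

For example, a 95% confidence level with the conservative p = 0.5 and a 5% margin requires 385 sites; tightening the margin drives cost up quadratically.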

  1. Operations and support cost modeling of conceptual space vehicles

    Science.gov (United States)

    Ebeling, Charles

    1994-01-01

    The University of Dayton is pleased to submit this annual report to the National Aeronautics and Space Administration (NASA) Langley Research Center which documents the development of an operations and support (O&S) cost model as part of a larger life cycle cost (LCC) structure. It is intended for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. This research effort changes the focus from that of the first two years, in which a reliability and maintainability model was developed, to the initial development of an operations and support life cycle cost model. Cost categories were initially patterned after NASA's three-axis work breakdown structure consisting of a configuration axis (vehicle), a function axis, and a cost axis. A revised cost element structure (CES), which is currently under study by NASA, was used to establish the basic cost elements used in the model. While the focus of the effort was on operations and maintenance costs and other recurring costs, the computerized model allowed for other cost categories such as RDT&E and production costs to be addressed. Secondary tasks performed concurrent with the development of the costing model included support and upgrades to the reliability and maintainability (R&M) model. The primary result of the current research has been a methodology and a computer implementation of the methodology to provide for timely operations and support cost analysis during the conceptual design activities.

  2. Thermal margin comparison between DAM and simple model

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Jeonghun; Yook, Daesik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2017-01-15

    The nuclear industry in Korea has considered using a detailed analysis model (DAM), which describes each rod, to gain more thermal margin in the design of a dry storage facility for nuclear spent fuel (NSF). A DAM is proposed, and a thermal analysis to determine cladding integrity is performed using test conditions with a homogenized NSF assembly analysis model (simple model). The results show that, according to US safety criteria, the temperature of the canister surface must be kept below 500 K in normal conditions and 630 K in excess conditions. A commercial Computational Fluid Dynamics (CFD) code, ANSYS Fluent version 14.5, was used.

  3. A simple, low-cost conductive composite material for 3D printing of electronic sensors.

    Science.gov (United States)

    Leigh, Simon J; Bradley, Robert J; Purssell, Christopher P; Billson, Duncan R; Hutchins, David A

    2012-01-01

    3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base is now able to have access to desktop manufacturing platforms enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes.

  4. A simple, low-cost conductive composite material for 3D printing of electronic sensors.

    Directory of Open Access Journals (Sweden)

    Simon J Leigh

    Full Text Available 3D printing technology can produce complex objects directly from computer aided digital designs. The technology has traditionally been used by large companies to produce fit and form concept prototypes ('rapid prototyping') before production. In recent years, however, there has been a move to adopt the technology as a full-scale manufacturing solution. The advent of low-cost, desktop 3D printers such as the RepRap and Fab@Home has meant a wider user base is now able to have access to desktop manufacturing platforms enabling them to produce highly customised products for personal use and sale. This uptake in usage has been coupled with a demand for printing technology and materials able to print functional elements such as electronic sensors. Here we present formulation of a simple conductive thermoplastic composite we term 'carbomorph' and demonstrate how it can be used in an unmodified low-cost 3D printer to print electronic sensors able to sense mechanical flexing and capacitance changes. We show how this capability can be used to produce custom sensing devices and user interface devices along with printed objects with embedded sensing capability. This advance in low-cost 3D printing will offer a new paradigm in the 3D printing field, with printed sensors and electronics embedded inside 3D printed objects in a single build process, without requiring complex or expensive materials incorporating additives such as carbon nanotubes.

  5. Decentralized Pricing in Minimum Cost Spanning Trees

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Moulin, Hervé; Østerdal, Lars Peter

    In the minimum cost spanning tree model we consider decentralized pricing rules, i.e. rules that cover at least the efficient cost while the price charged to each user only depends upon his own connection costs. We define a canonical pricing rule and provide two axiomatic characterizations. First, the canonical pricing rule is the smallest among those that improve upon the Stand Alone bound, and are either superadditive or piece-wise linear in connection costs. Our second, direct characterization relies on two simple properties highlighting the special role of the source cost.
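The efficient cost that any such pricing rule must cover is the cost of a minimum cost spanning tree connecting all users to the source. A minimal sketch of computing it with Prim's algorithm (node 0 playing the role of the source; the function name and edge-list format are illustrative choices):

```python
import heapq

def mst_cost(n, edges):
    """Total cost of a minimum cost spanning tree on nodes 0..n-1,
    grown from node 0 (the source) via Prim's algorithm.
    edges: iterable of (u, v, cost) for an undirected connected graph."""
    adj = {u: [] for u in range(n)}
    for u, v, c in edges:
        adj[u].append((c, v))
        adj[v].append((c, u))
    seen, total = {0}, 0
    heap = list(adj[0])
    heapq.heapify(heap)
    while heap and len(seen) < n:
        c, v = heapq.heappop(heap)       # cheapest edge leaving the tree
        if v not in seen:
            seen.add(v)
            total += c
            for edge in adj[v]:
                heapq.heappush(heap, edge)
    return total
```

A pricing rule in the sense above must then raise at least this total from the users, with each user's price depending only on their own connection-cost vector.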

  6. TTS-Polttopuu - cost calculation model for fuelwood

    International Nuclear Information System (INIS)

    Naett, H.; Ryynaenen, S.

    1999-01-01

    The TTS Institute's Forestry Department has developed a computer-based cost calculation model, 'TTS-Polttopuu', for the calculation of unit costs and resource needs in the harvesting systems for wood chips and split firewood. The model determines the productivity and device cost per operating hour for each working stage of the harvesting system. The calculation model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and chipping and chopping of longwood at storage. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for this cost calculation model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. (orig.)
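The core calculation such a model performs per working stage reduces to dividing hourly cost by hourly productivity, then summing over the chain. This is a minimal sketch under that assumption; the function names and the simple additive chain structure are illustrative, not the tool's actual formulas.

```python
def stage_unit_cost(cost_per_hour, productivity_m3_per_hour):
    """Unit cost (currency per solid m3) of one working stage:
    hourly device + labour cost divided by hourly productivity."""
    return cost_per_hour / productivity_m3_per_hour

def chain_unit_cost(stages):
    """Unit cost of the whole harvesting chain (e.g. cutting, forest
    haulage, road transport, chipping): sum of per-stage unit costs.
    stages: iterable of (cost_per_hour, productivity_m3_per_hour)."""
    return sum(stage_unit_cost(c, p) for c, p in stages)
```

This makes the sensitivity analysis the abstract mentions straightforward: changing one stage's productivity changes only that stage's term in the sum.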

  7. Identification of Super Phenix steam generator by a simple polynomial model

    International Nuclear Information System (INIS)

    Rousseau, I.

    1981-01-01

    This note suggests an identification method for the steam generator of the Super-Phenix fast neutron power plant using simple polynomial models. This approach is justified by the selection of adaptive control. The identification algorithms presented will be applied to multivariable input-output behaviours. The results obtained with the representation in auto-regressive form and with simple polynomial models will be compared, and the effect of perturbations on the output signal will be tested, in order to select a good identification algorithm for multivariable adaptive regulation [fr
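The simplest instance of such polynomial-model identification is fitting a first-order model y[k] = a·y[k-1] + b·u[k-1] to input-output data by least squares. The sketch below solves the 2x2 normal equations directly; the single-input single-output form and the function name are illustrative simplifications of the multivariable case the note treats.

```python
def identify_first_order(y, u):
    """Least-squares estimates (a, b) of the simple polynomial model
    y[k] = a*y[k-1] + b*u[k-1], via the 2x2 normal equations.
    y: output samples y[0..N]; u: input samples u[0..N-1]."""
    S11 = S12 = S22 = T1 = T2 = 0.0
    for k in range(1, len(y)):
        yp, up = y[k - 1], u[k - 1]
        S11 += yp * yp
        S12 += yp * up
        S22 += up * up
        T1 += y[k] * yp
        T2 += y[k] * up
    det = S11 * S22 - S12 * S12   # nonzero if the input excites the system
    a = (T1 * S22 - T2 * S12) / det
    b = (T2 * S11 - T1 * S12) / det
    return a, b
```

With noiseless data from a true first-order system, the estimates recover the true parameters exactly; perturbations on the output, as studied in the note, bias or scatter them.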

  8. Global transportation cost modeling for long-range planning

    International Nuclear Information System (INIS)

    Pope, R.B.; Michelhaugh, R.D.; Singley, P.T.; Lester, P.B.

    1998-02-01

    The US Department of Energy (DOE) is preparing to perform significant remediation activities at the sites for which it is responsible. To accomplish this, it is preparing a corporate global plan focused on activities over the next decade. Significant in these planned activities is the transportation of the waste arising from the remediation. The costs of this transportation are expected to be large. To support the initial assessment of the plan, a cost estimating model was developed, peer-reviewed against other available packaging and transportation cost data, and applied to a significant number of shipping campaigns of radioactive waste. This cost estimating model, known as the Ten-year Plan Transportation Cost Model (TEPTRAM), can be used to model radioactive material shipments between DOE sites or from DOE sites to non-DOE destinations. The model considers the costs for (a) recovering and processing of the wastes, (b) packaging the wastes for transport, and (c) the carriage of the waste. It also provides a rough order of magnitude estimate of labor costs associated with preparing and undertaking the shipments. At the user's direction, the model can also consider the cost of DOE's interactions with its external stakeholders (e.g., state and local governments and tribal entities) and the cost associated with tracking and communicating with the shipments. By considering all of these sources of costs, it provides a mechanism for assessing and comparing the costs of various waste processing and shipping campaign alternatives to help guide decision-making. Recent analyses of specific planned shipments of transuranic (TRU) waste which consider alternative packaging options are described. These analyses show that options are available for significantly reducing total costs while still satisfying regulatory requirements

  9. Added value of cost-utility analysis in simple diagnostic studies of accuracy: (18)F-fluoromethylcholine PET/CT in prostate cancer staging.

    Science.gov (United States)

    Gerke, Oke; Poulsen, Mads H; Høilund-Carlsen, Poul Flemming

    2015-01-01

    Diagnostic studies of accuracy targeting sensitivity and specificity are commonly done in a paired design in which all modalities are applied in each patient, whereas cost-effectiveness and cost-utility analyses are usually assessed either directly alongside or indirectly by means of stochastic modeling based on larger randomized controlled trials (RCTs). However, the conduct of RCTs is hampered in an environment such as ours, in which technology is rapidly evolving; as such, there is a relatively limited number of RCTs. Therefore, we investigated to what extent paired diagnostic studies of accuracy can also be used to shed light on economic implications when considering a new diagnostic test. We propose a simple decision tree model-based cost-utility analysis of a diagnostic test when compared to the current standard procedure and exemplify this approach with published data from lymph node staging of prostate cancer. Average procedure costs were taken from the Danish Diagnosis Related Groups Tariff in 2013, and life expectancy was estimated for an ideal 60-year-old patient based on prostate cancer stage and prostatectomy or radiation and chemotherapy. Quality-adjusted life-years (QALYs) were deduced from the literature, and an incremental cost-effectiveness ratio (ICER) was used to compare lymph node dissection with respective histopathological examination (reference standard) and (18)F-fluoromethylcholine positron emission tomography/computed tomography (FCH-PET/CT). Lower bounds of sensitivity and specificity of FCH-PET/CT were established at which the replacement of the reference standard by FCH-PET/CT comes with a trade-off between worse effectiveness and lower costs. Compared to the reference standard in a diagnostic accuracy study, any imperfections in accuracy of a diagnostic test imply that replacing the reference standard generates a loss in effectiveness and utility. We conclude that diagnostic studies of accuracy can be put to a more extensive use
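The ICER comparison at the heart of such a cost-utility analysis is a single ratio of cost and effectiveness differences. A minimal sketch (the function name and sign conventions are illustrative; the actual analysis runs this inside a decision tree):

```python
def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """Incremental cost-effectiveness ratio: incremental cost per
    incremental QALY of the new test versus the reference standard.
    When the new test is cheaper but less effective (the trade-off
    described in the abstract), both differences are negative and the
    ratio reads as cost saved per QALY forgone."""
    d_cost = cost_new - cost_ref
    d_effect = qaly_new - qaly_ref
    if d_effect == 0:
        raise ValueError("equal effectiveness: compare costs directly")
    return d_cost / d_effect
```

The decision then hinges on comparing this ratio against a willingness-to-pay threshold per QALY.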

  10. A simple statistical model for geomagnetic reversals

    Science.gov (United States)

    Constable, Catherine

    1990-01-01

    The diversity of paleomagnetic records of geomagnetic reversals now available indicate that the field configuration during transitions cannot be adequately described by simple zonal or standing field models. A new model described here is based on statistical properties inferred from the present field and is capable of simulating field transitions like those observed. Some insight is obtained into what one can hope to learn from paleomagnetic records. In particular, it is crucial that the effects of smoothing in the remanence acquisition process be separated from true geomagnetic field behavior. This might enable us to determine the time constants associated with the dominant field configuration during a reversal.

  11. The ASAC Flight Segment and Network Cost Models

    Science.gov (United States)

    Kaplan, Bruce J.; Lee, David A.; Retina, Nusrat; Wingrove, Earl R., III; Malone, Brett; Hall, Stephen G.; Houser, Scott A.

    1997-01-01

    To assist NASA in identifying research areas with the greatest potential for improving the air transportation system, two models were developed as part of its Aviation System Analysis Capability (ASAC). The ASAC Flight Segment Cost Model (FSCM) is used to predict aircraft trajectories, resource consumption, and variable operating costs for one or more flight segments. The Network Cost Model can either summarize the costs for a network of flight segments processed by the FSCM or can be used to independently estimate the variable operating costs of flying a fleet of equipment given the number of departures and average flight stage lengths.

  12. A simple dynamic energy capacity model

    International Nuclear Information System (INIS)

    Gander, James P.

    2012-01-01

    I develop a simple dynamic model showing how total energy capacity is allocated to two different uses and how these uses and their corresponding energy flows are related and behave through time. The control variable of the model determines the allocation. All the variables of the model are in terms of a composite energy equivalent measured in BTU's. A key focus is on the shadow price of energy capacity and its behavior through time. Another key focus is on the behavior of the control variable that determines the allocation of overall energy capacity. The matching or linking of the model's variables to real world U.S. energy data is undertaken. In spite of some limitations of the data, the model and its behavior fit the data fairly well. Some energy policy implications are discussed. - Highlights: ► The model shows how energy capacity is allocated to current output production versus added energy capacity production. ► Two variables in the allocation are the shadow price of capacity and the control variable that determines the allocation. ► The model was linked to U.S. historical energy data and fit the data quite well. ► In particular, the policy control variable was cyclical and consistent with the model. ► Policy implications relevant to the allocation of energy capacity are discussed briefly.
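A toy discrete-time analogue of the allocation described above can make the role of the control variable concrete. This is a hypothetical sketch, not the author's continuous-time model: the parameter names `gamma` (capacity-building efficiency) and `delta` (depreciation), and the linear update rule, are all assumptions for illustration.

```python
def simulate_capacity(k0, u, gamma, delta, steps):
    """Toy allocation dynamics: each period, a fraction u of energy
    capacity K is devoted to building new capacity (with efficiency
    gamma) and the remaining fraction (1 - u) produces current output;
    capacity depreciates at rate delta. Returns (final K, total output),
    both in composite energy-equivalent units (BTU-like)."""
    k, output = k0, 0.0
    for _ in range(steps):
        output += (1.0 - u) * k          # energy flow to current use
        k += gamma * u * k - delta * k   # capacity added minus depreciation
    return k, output
```

Running it with different values of u shows the model's basic trade-off: a higher allocation to capacity-building sacrifices current output for a larger future capacity stock.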

  13. Renormalization group analysis of a simple hierarchical fermion model

    International Nuclear Information System (INIS)

    Dorlas, T.C.

    1991-01-01

    A simple hierarchical fermion model is constructed which gives rise to an exact renormalization transformation in a 2-dimensional parameter space. The behaviour of this transformation is studied. It has two hyperbolic fixed points for which the existence of a global critical line is proven. The asymptotic behaviour of the transformation is used to prove the existence of the thermodynamic limit in a certain domain in parameter space. Also the existence of a continuum limit for these theories is investigated using information about the asymptotic renormalization behaviour. It turns out that the 'trivial' fixed point gives rise to a two-parameter family of continuum limits corresponding to that part of parameter space where the renormalization trajectories originate at this fixed point. Although the model is not very realistic it serves as a simple example of the appliclation of the renormalization group to proving the existence of the thermodynamic limit and the continuum limit of lattice models. Moreover, it illustrates possible complications that can arise in global renormalization group behaviour, and that might also be present in other models where no global analysis of the renormalization transformation has yet been achieved. (orig.)

  14. Cost effectiveness of recycling: A systems model

    Energy Technology Data Exchange (ETDEWEB)

    Tonjes, David J., E-mail: david.tonjes@stonybrook.edu [Department of Technology and Society, College of Engineering and Applied Sciences, Stony Brook University, Stony Brook, NY 11794-3560 (United States); Waste Reduction and Management Institute, School of Marine and Atmospheric Sciences, Stony Brook University, Stony Brook, NY 11794-5000 (United States); Center for Bioenergy Research and Development, Advanced Energy Research and Technology Center, Stony Brook University, 1000 Innovation Rd., Stony Brook, NY 11794-6044 (United States); Mallikarjun, Sreekanth, E-mail: sreekanth.mallikarjun@stonybrook.edu [Department of Technology and Society, College of Engineering and Applied Sciences, Stony Brook University, Stony Brook, NY 11794-3560 (United States)

    2013-11-15

    Highlights: • Curbside collection of recyclables reduces overall system costs over a range of conditions. • When avoided costs for recyclables are large, even high collection costs are supported. • When avoided costs for recyclables are not great, there are reduced opportunities for savings. • For common waste compositions, maximizing curbside recyclables collection always saves money. - Abstract: Financial analytical models of waste management systems have often found that recycling costs exceed direct benefits, and in order to economically justify recycling activities, externalities such as household expenses or environmental impacts must be invoked. Certain more empirically based studies have also found that recycling is more expensive than disposal. Other work, based on both models and surveys, has found otherwise. Here we present an empirical systems model, largely drawn from a suburban Long Island municipality. The model accounts for changes in distribution of effort as recycling tonnages displace disposal tonnages, and the seven different cases examined all show that curbside collection programs that manage up to between 31% and 37% of the waste stream should result in overall system savings. These savings accrue partially because of assumed cost differences in tip fees for recyclables and disposed wastes, and also because recycling can result in a more efficient, cost-effective collection program. These results imply that increases in recycling are justifiable due to cost savings alone, rather than on more difficult-to-measure factors that may not impact program budgets.

  15. A Simple Exercise Reveals the Way Students Think about Scientific Modeling

    Science.gov (United States)

    Ruebush, Laura; Sulikowski, Michelle; North, Simon

    2009-01-01

    Scientific modeling is an integral part of contemporary science, yet many students have little understanding of how models are developed, validated, and used to predict and explain phenomena. A simple modeling exercise led to significant gains in understanding key attributes of scientific modeling while revealing some stubborn misconceptions.…

  16. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    Science.gov (United States)

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
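The downward bias described above can be demonstrated in a few lines. The sketch below is illustrative only: the animal counts and variance components are invented, and averaging within animals stands in for a full mixed effects fit (the authors used proper mixed models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 animals, 20 neurons each; neuron-level measurements
# share an animal-level random effect (intra-class correlation).
n_animals, n_per = 8, 20
animal_effect = rng.normal(0.0, 2.0, n_animals)          # between-animal variation
y = (animal_effect[:, None] + rng.normal(0.0, 1.0, (n_animals, n_per))).ravel()

# Naive SE: treats all 160 neurons as independent observations.
se_naive = y.std(ddof=1) / np.sqrt(y.size)

# Cluster-aware SE: average within each animal first, then use the animal
# means as the independent units (the logic a mixed effects model formalizes).
means = y.reshape(n_animals, n_per).mean(axis=1)
se_cluster = means.std(ddof=1) / np.sqrt(n_animals)

# With positive intra-class correlation the naive SE is biased downwards,
# which is what inflates false positives under simple linear models.
print(se_naive, se_cluster)
```

Running this shows the naive standard error is substantially smaller than the cluster-aware one, mirroring the downward bias the abstract reports.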

  17. Modelling simple helically delivered dose distributions

    International Nuclear Information System (INIS)

    Fenwick, John D; Tome, Wolfgang A; Kissick, Michael W; Mackie, T Rock

    2005-01-01

    In a previous paper, we described quality assurance procedures for Hi-Art helical tomotherapy machines. Here, we develop further some ideas discussed briefly in that paper. Simple helically generated dose distributions are modelled, and relationships between these dose distributions and underlying characteristics of Hi-Art treatment systems are elucidated. In particular, we describe the dependence of dose levels along the central axis of a cylinder aligned coaxially with a Hi-Art machine on fan beam width, couch velocity and helical delivery lengths. The impact on these dose levels of angular variations in gantry speed or output per linear accelerator pulse is also explored

  18. Computerized cost model for pressurized water reactors

    International Nuclear Information System (INIS)

    Meneely, T.K.; Tabata, Hiroaki; Labourey, P.

    1999-01-01

    A computerized cost model has been developed in order to allow utility users to improve their familiarity with pressurized water reactor overnight capital costs and the various factors which influence them. This model organizes its cost data in the standard format of the Energy Economic Data Base (EEDB), and encapsulates simplified relationships between physical plant design information and capital cost information in a computer code. Model calculations are initiated from a base case, which was established using traditional cost calculation techniques. The user enters a set of plant design parameters, selected to allow consideration of plant models throughout the typical three- and four-loop PWR power range, and for plant sites in Japan, Europe, and the United States. Calculation of the new capital cost is then performed in a very brief time. The presentation of the program's output allows comparison of various cases with each other or with separately calculated baseline data. The user can start at a high level summary, and by selecting values of interest on a display grid show progressively more and more detailed information, including links to background information such as individual cost driver accounts and physical plant variables for each case. Graphical presentation of the comparison summaries is provided, and the numerical results may be exported to a spreadsheet for further processing. (author)

  19. A Simple Model for Nonlinear Confocal Ultrasonic Beams

    Science.gov (United States)

    Zhang, Dong; Zhou, Lin; Si, Li-Sheng; Gong, Xiu-Fen

    2007-01-01

    A confocally and coaxially arranged pair of focused transmitter and receiver represents one of the best geometries for medical ultrasonic imaging and non-invasive detection. We develop a simple theoretical model for describing the nonlinear propagation of a confocal ultrasonic beam in biological tissues. On the basis of the parabolic approximation and quasi-linear approximation, the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is solved by using the angular spectrum approach. Gaussian superposition technique is applied to simplify the solution, and an analytical solution for the second harmonics in the confocal ultrasonic beam is presented. Measurements are performed to examine the validity of the theoretical model. This model provides a preliminary model for acoustic nonlinear microscopy.

  20. A simple and cost-effective method for fabrication of integrated electronic-microfluidic devices using a laser-patterned PDMS layer

    KAUST Repository

    Li, Ming

    2011-12-03

    We report a simple and cost-effective method for fabricating integrated electronic-microfluidic devices with multilayer configurations. A CO2 laser plotter was employed to directly write patterns on a transferred polydimethylsiloxane (PDMS) layer, which served as both a bonding and a working layer. The integration of electronics in microfluidic devices was achieved by an alignment bonding of top and bottom electrode-patterned substrates fabricated with conventional lithography, sputtering and lift-off techniques. Processes of the developed fabrication method were illustrated. Major issues associated with this method, such as PDMS surface treatment and characterization, thickness control of the transferred PDMS layer, and laser parameter optimization, were discussed, along with the examination and testing of bonding with two representative materials (glass and silicon). The capability of this method was further demonstrated by fabricating a microfluidic chip with sputter-coated electrodes on the top and bottom substrates. The device, functioning as a microparticle focusing and trapping chip, was experimentally verified. It is confirmed that the proposed method has many advantages, including a simple and fast fabrication process, low cost, easy integration of electronics, strong bonding strength, and chemical and biological compatibility. © Springer-Verlag 2011.

  1. Cost-Sensitive Estimation of ARMA Models for Financial Asset Return Data

    Directory of Open Access Journals (Sweden)

    Minyoung Kim

    2015-01-01

    Full Text Available The autoregressive moving average (ARMA model is a simple but powerful model in financial engineering to represent time-series with long-range statistical dependency. However, the traditional maximum likelihood (ML estimator aims to minimize a loss function that is inherently symmetric due to Gaussianity. The consequence is that when the data of interest are asset returns, and the main goal is to maximize profit by accurate forecasting, the ML objective may be less appropriate potentially leading to a suboptimal solution. Rather, it is more reasonable to adopt an asymmetric loss where the model's prediction, as long as it is in the same direction as the true return, is penalized less than the prediction in the opposite direction. We propose a quite sensible asymmetric cost-sensitive loss function and incorporate it into the ARMA model estimation. On the online portfolio selection problem with real stock return data, we demonstrate that the investment strategy based on predictions by the proposed estimator can be significantly more profitable than the traditional ML estimator.
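As a rough illustration of the idea (not the paper's actual loss or estimator), the sketch below fits an AR(1) coefficient under a hypothetical asymmetric loss that up-weights predictions whose direction disagrees with the realized return, and compares it with the symmetric ML/OLS fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AR(1) return series: r_t = 0.3 * r_{t-1} + noise.
r = np.zeros(500)
for t in range(1, r.size):
    r[t] = 0.3 * r[t - 1] + rng.normal(0, 0.01)

def asymmetric_loss(phi, r, k=3.0):
    """Squared error, weighted k times more heavily when the prediction's
    direction disagrees with the realized return (an illustrative stand-in
    for the paper's cost-sensitive loss)."""
    pred = phi * r[:-1]
    err = r[1:] - pred
    weights = np.where(np.sign(pred) != np.sign(r[1:]), k, 1.0)
    return np.mean(weights * err ** 2)

# Classical ML/OLS estimate (symmetric Gaussian loss).
phi_ml = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)

# Cost-sensitive estimate: brute-force search, with phi_ml appended so the
# comparison below is over the same candidate set.
candidates = np.append(np.linspace(-0.9, 0.9, 181), phi_ml)
phi_cs = candidates[np.argmin([asymmetric_loss(p, r) for p in candidates])]

# The cost-sensitive fit does at least as well as ML under the asymmetric
# criterion; the two criteria generally select different coefficients.
print(round(phi_ml, 3), round(phi_cs, 3))
```

The point is only that the two criteria disagree: the coefficient that minimizes symmetric squared error is not the one that minimizes the direction-sensitive loss.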

  2. Simple models of the thermal structure of the Venusian ionosphere

    International Nuclear Information System (INIS)

    Whitten, R.C.; Knudsen, W.C.

    1980-01-01

    Analytical and numerical models of plasma temperatures in the Venusian ionosphere are proposed. The magnitudes of plasma thermal parameters are calculated using thermal-structure data obtained by the Pioneer Venus Orbiter. The simple models are found to be in good agreement with the more detailed models of thermal balance. Daytime and nighttime temperature data along with corresponding temperature profiles are provided

  3. The System Cost Model: A tool for life cycle cost and risk analysis

    International Nuclear Information System (INIS)

    Hsu, K.; Lundeen, A.; Shropshire, D.; Sherick, M.

    1996-01-01

    In May of 1994, Lockheed Idaho Technologies Company (LITCO) in Idaho Falls, Idaho and subcontractors began development of the System Cost Model (SCM) application. The SCM estimates life cycle costs of the entire US Department of Energy (DOE) complex for designing; constructing; operating; and decommissioning treatment, storage, and disposal (TSD) facilities for mixed low-level, low-level, and transuranic waste. The SCM uses parametric cost functions to estimate life cycle costs for various treatment, storage, and disposal modules which reflect planned and existing waste management facilities at DOE installations. In addition, SCM can model new TSD facilities based on capacity needs over the program life cycle. The user can provide input data (default data is included in the SCM) including the volume and nature of waste to be managed, the time period over which the waste is to be managed, and the configuration of the waste management complex (i.e., where each installation's generated waste will be treated, stored, and disposed). Then the SCM uses parametric cost equations to estimate the costs of pre-operations (designing), construction, operations and maintenance, and decommissioning of these waste management facilities. The SCM also provides transportation costs for DOE wastes. Transportation costs are provided for truck and rail and include transport of contact-handled, remote-handled, and alpha (transuranic) wastes. A complement to the SCM is the System Cost Model-Risk (SCM-R) model, which provides relative Environmental, Safety, and Health (ES and H) risk information. A relative ES and H risk basis has been developed and applied by LITCO at the INEL. The risk basis is now being automated in the SCM-R to facilitate rapid risk analysis of system alternatives. The added risk functionality will allow combined cost and risk evaluation of EM alternatives.

  4. Waste management facilities cost information: System cost model product description. Revision 2

    International Nuclear Information System (INIS)

    Lundeen, A.S.; Hsu, K.M.; Shropshire, D.E.

    1996-02-01

    In May of 1994, Lockheed Idaho Technologies Company (LITCO) in Idaho Falls, Idaho and subcontractors developed the System Cost Model (SCM) application. The SCM estimates life-cycle costs of the entire US Department of Energy (DOE) complex for designing; constructing; operating; and decommissioning treatment, storage, and disposal (TSD) facilities for mixed low-level, low-level, transuranic, and mixed transuranic waste. The SCM uses parametric cost functions to estimate life-cycle costs for various treatment, storage, and disposal modules which reflect planned and existing facilities at DOE installations. In addition, SCM can model new facilities based on capacity needs over the program life cycle. The SCM also provides transportation costs for DOE wastes. Transportation costs are provided for truck and rail and include transport of contact-handled, remote-handled, and alpha (transuranic) wastes. The user can provide input data (default data is included in the SCM) including the volume and nature of waste to be managed, the time period over which the waste is to be managed, and the configuration of the waste management complex (i.e., where each installation's generated waste will be treated, stored, and disposed). Then the SCM uses parametric cost equations to estimate the costs of pre-operations (designing), construction, operations and maintenance, and decommissioning of these waste management facilities.

  5. A case-mix classification system for explaining healthcare costs using administrative data in Italy.

    Science.gov (United States)

    Corti, Maria Chiara; Avossa, Francesco; Schievano, Elena; Gallina, Pietro; Ferroni, Eliana; Alba, Natalia; Dotto, Matilde; Basso, Cristina; Netti, Silvia Tiozzo; Fedeli, Ugo; Mantoan, Domenico

    2018-03-04

    The Italian National Health Service (NHS) provides universal coverage to all citizens, granting primary and hospital care with a copayment system for outpatient and drug services. Financing of Local Health Trusts (LHTs) is based on a capitation system adjusted only for age, gender and area of residence. We applied a risk-adjustment system (Johns Hopkins Adjusted Clinical Groups System, ACG® System) in order to explain health care costs using routinely collected administrative data in the Veneto Region (North-eastern Italy). All residents in the Veneto Region were included in the study. The ACG system was applied to classify the regional population based on the following information sources for the year 2015: hospital discharges, emergency room visits, the chronic disease registry for copayment exemptions, ambulatory visits, medications, the home care database, and drug prescriptions. Simple linear regressions were used to contrast an age-gender model with models incorporating more comprehensive risk measures aimed at predicting health care costs. A simple age-gender model explained only 8% of the variance of 2015 total costs. Adding diagnosis-related variables provided a 23% increase, while pharmacy-based variables provided an additional 17% increase in explained variance. The adjusted R-squared of the comprehensive model was 6 times that of the simple age-gender model. The ACG System provides substantial improvement in predicting health care costs when compared to simple age-gender adjustments. Aging itself is not the main determinant of the increase of health care costs, which is better explained by the accumulation of chronic conditions and the resulting multimorbidity. Copyright © 2018. Published by Elsevier B.V.
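The gain in explained variance from adding morbidity information can be mimicked on synthetic data. Everything below is invented for illustration (coefficients, population structure); it only reproduces the qualitative finding that an age-gender model explains far less cost variance than one that also sees chronic-condition counts:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical population: costs driven mainly by morbidity burden,
# only weakly by age and gender directly.
age = rng.integers(18, 95, n)
sex = rng.integers(0, 2, n)
chronic = rng.poisson(0.03 * (age - 18))          # conditions accumulate with age
cost = 500 + 5 * age + 100 * sex + 2000 * chronic + rng.normal(0, 3000, n)

def r2(X, y):
    """Unadjusted R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_agesex = r2(np.column_stack([age, sex]), cost)
r2_full = r2(np.column_stack([age, sex, chronic]), cost)
print(round(r2_agesex, 2), round(r2_full, 2))
```

As in the study, the age-gender model captures only the cost variation that age proxies through morbidity, while the richer model captures the morbidity effect directly.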

  6. From complex to simple: interdisciplinary stochastic models

    International Nuclear Information System (INIS)

    Mazilu, D A; Zamora, G; Mazilu, I

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
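The first model's use of one-dimensional random walks can be illustrated with a minimal simulation (parameters are arbitrary, not from the paper): the mean squared displacement of unbiased walkers grows linearly in time, which recovers the diffusion coefficient:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative ensemble: 2000 independent one-dimensional random walkers,
# unit steps left/right with equal probability, observed for 400 steps.
n_walkers, n_steps = 2000, 400
steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
x = steps.cumsum(axis=1)

# For an unbiased walk <x^2> = 2 D t; with unit steps per unit time D = 1/2.
msd = (x[:, -1] ** 2).mean()
D_est = msd / (2 * n_steps)
print(round(D_est, 2))
```

The estimated diffusion coefficient converges to 1/2 as the ensemble grows, the kind of analytical quantity the abstract says can be extracted from these models.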

  7. A simple and low-cost fully 3D-printed non-planar emulsion generator

    KAUST Repository

    Zhang, Jiaming

    2015-12-23

    Droplet-based microfluidic devices provide a powerful platform for material, chemical and biological applications based on droplet templates. The technique traditionally utilized to fabricate microfluidic emulsion generators, i.e. soft lithography, is complex and expensive for producing three-dimensional (3D) structures. The emergent 3D printing technology provides an attractive alternative due to its simplicity and low cost. Recently a handful of studies have already demonstrated droplet production through 3D-printed microfluidic devices. However, these devices invariably use purely two-dimensional (2D) flow structures. Herein we apply 3D printing technology to fabricate simple and low-cost 3D miniaturized fluidic devices for droplet generation (single emulsion) and droplet-in-droplet (double emulsion) without the need for surface treatment of the channel walls. This is accomplished by varying the channel diameters at the junction, so the inner liquid does not touch the outer walls. This 3D-printed emulsion generator has been successfully tested over a range of conditions. We also formulate and demonstrate, for the first time, uniform scaling laws for the emulsion drop sizes generated in different regimes, by incorporating the dynamic contact angle effects during the drop formation. Magnetically responsive microspheres are also produced with our emulsion templates, demonstrating the potential applications of this 3D emulsion generator in chemical and material engineering.

  8. Modeling Simple Driving Tasks with a One-Boundary Diffusion Model

    Science.gov (United States)

    Ratcliff, Roger; Strayer, David

    2014-01-01

    A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks which suggests common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
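A minimal simulation of a one-boundary diffusion process shows the mechanism the authors describe: lowering the drift rate (their interpretation of distraction) lengthens first-passage times. Parameter values below are illustrative, not fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_rt(drift, boundary, n=300, dt=0.001, sigma=1.0, t_nd=0.3):
    """First-passage times of a one-boundary diffusion (Euler scheme):
    evidence starts at 0 and accumulates toward a single boundary;
    t_nd is a fixed nondecision time. Illustrative parameters only."""
    rts = np.empty(n)
    for i in range(n):
        x, t = 0.0, 0.0
        while x < boundary:
            x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        rts[i] = t + t_nd
    return rts

fast = simulate_rt(drift=3.0, boundary=1.0)
slow = simulate_rt(drift=1.5, boundary=1.0)   # "distracted": lower drift rate
print(round(fast.mean(), 2), round(slow.mean(), 2))
```

Halving the drift rate roughly doubles the decision component of the mean RT and produces the right-skewed RT distributions the diffusion model is known for.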

  9. Cost estimation model for advanced planetary programs, fourth edition

    Science.gov (United States)

    Spadoni, D. J.

    1983-01-01

    The development of the planetary program cost model is discussed. The Model was updated to incorporate cost data from the most recent US planetary flight projects and extensively revised to more accurately capture the information in the historical cost data base. This data base is comprised of the historical cost data for 13 unmanned lunar and planetary flight programs. The revision was made with a twofold objective: to increase the flexibility of the model in its ability to deal with the broad scope of scenarios under consideration for future missions, and to maintain and possibly improve upon the confidence in the model's capabilities with an expected accuracy of 20%. The Model development included a labor/cost proxy analysis, selection of the functional forms of the estimating relationships, and test statistics. An analysis of the Model is discussed and two sample applications of the cost model are presented.

  10. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model such as the I-star model in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance.
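The parametric-versus-nonparametric comparison can be sketched on synthetic impact data. The square-root law and k-nearest-neighbour regression below are simplified stand-ins for the paper's I-star benchmark and its machine learning models (neural networks, GPs, SVR); all data are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical market-impact data: impact grows with normalized order size,
# with curvature that a fixed square-root law cannot fully capture.
size = rng.uniform(0.001, 0.2, 600)                 # order size / daily volume
impact = 0.8 * size ** 0.7 + rng.normal(0, 0.005, 600)
train, test = np.arange(0, 400), np.arange(400, 600)

# Parametric benchmark: square-root impact law, scale fit by least squares
# (minimizing sum (y - s*sqrt(x))^2 gives s = sum(y*sqrt(x)) / sum(x)).
scale = np.sum(impact[train] * np.sqrt(size[train])) / np.sum(size[train])
pred_param = scale * np.sqrt(size[test])

# Nonparametric alternative: k-nearest-neighbour regression on order size.
def knn_predict(x, x_train, y_train, k=15):
    idx = np.argsort(np.abs(x_train[:, None] - x[None, :]), axis=0)[:k]
    return y_train[idx].mean(axis=0)

pred_knn = knn_predict(size[test], size[train], impact[train])

mse_param = np.mean((impact[test] - pred_param) ** 2)
mse_knn = np.mean((impact[test] - pred_knn) ** 2)
print(mse_param, mse_knn)
```

Because the nonparametric fit adapts to the true curvature instead of assuming a functional form, its out-of-sample error is lower, mirroring the paper's qualitative result.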

  11. A model of proteostatic energy cost and its use in analysis of proteome trends and sequence evolution.

    Directory of Open Access Journals (Sweden)

    Kasper P Kepp

    Full Text Available A model of proteome-associated chemical energetic costs of cells is derived from protein-turnover kinetics and protein folding. Minimization of the proteostatic maintenance cost can explain a range of trends of proteomes and combines both protein function, stability, size, proteostatic cost, temperature, resource availability, and turnover rates in one simple framework. We then explore the ansatz that the chemical energy remaining after proteostatic maintenance is available for reproduction (or cell division and thus, proportional to organism fitness. Selection for lower proteostatic costs is then shown to be significant vs. typical effective population sizes of yeast. The model explains and quantifies evolutionary conservation of highly abundant proteins as arising both from functional mutations and from changes in other properties such as stability, cost, or turnover rates. We show that typical hypomorphic mutations can be selected against due to increased cost of compensatory protein expression (both in the mutated gene and in related genes, i.e. epistasis rather than compromised function itself, although this compensation depends on the protein's importance. Such mutations exhibit larger selective disadvantage in abundant, large, synthetically costly, and/or short-lived proteins. Selection against increased turnover costs of less stable proteins rather than misfolding toxicity per se can explain equilibrium protein stability distributions, in agreement with recent findings in E. coli. The proteostatic selection pressure is stronger at low metabolic rates (i.e. scarce environments and in hot habitats, explaining proteome adaptations towards rough environments as a question of energy. The model may also explain several trade-offs observed in protein evolution and suggests how protein properties can coevolve to maintain low proteostatic cost.

  12. Simple and cost-effective fabrication of highly flexible, transparent superhydrophobic films with hierarchical surface design.

    Science.gov (United States)

    Kim, Tae-Hyun; Ha, Sung-Hun; Jang, Nam-Su; Kim, Jeonghyo; Kim, Ji Hoon; Park, Jong-Kweon; Lee, Deug-Woo; Lee, Jaebeom; Kim, Soo-Hyung; Kim, Jong-Man

    2015-03-11

    Optical transparency and mechanical flexibility are both of great importance for significantly expanding the applicability of superhydrophobic surfaces. Such features make it possible for functional surfaces to be applied to various glass-based products with different curvatures. In this work, we report on the simple and potentially cost-effective fabrication of highly flexible and transparent superhydrophobic films based on hierarchical surface design. The hierarchical surface morphology was easily fabricated by the simple transfer of a porous alumina membrane to the top surface of UV-imprinted polymeric micropillar arrays and subsequent chemical treatments. Through optimization of the hierarchical surface design, the resultant superhydrophobic films showed superior surface wetting properties (with a static contact angle of >170° and low contact angle hysteresis) and an optical transmittance of 82% at 550 nm wavelength. The superhydrophobic films were also experimentally found to be robust, without significant degradation in the superhydrophobicity even under repetitive bending and pressing for up to 2000 cycles. Finally, the practical usability of the proposed superhydrophobic films was clearly demonstrated by examining the antiwetting performance in real time while pouring water on the film and submerging the film in water.

  13. Process-Improvement Cost Model for the Emergency Department.

    Science.gov (United States)

    Dyas, Sheila R; Greenfield, Eric; Messimer, Sherri; Thotakura, Swati; Gholston, Sampson; Doughty, Tracy; Hays, Mary; Ivey, Richard; Spalding, Joseph; Phillips, Robin

    2015-01-01

    The objective of this report is to present a simplified, activity-based costing approach for hospital emergency departments (EDs) to use with Lean Six Sigma cost-benefit analyses. The cost model complexity is reduced by removing diagnostic and condition-specific costs, thereby revealing the underlying process activities' cost inefficiencies. Examples are provided for evaluating the cost savings from reducing discharge delays and the cost impact of keeping patients in the ED (boarding) after the decision to admit has been made. The process-improvement cost model provides a needed tool in selecting, prioritizing, and validating Lean process-improvement projects in the ED and other areas of patient care that involve multiple dissimilar diagnoses.

  14. Comparison of blood biochemics between acute myocardial infarction models with blood stasis and simple acute myocardial infarction models in rats

    International Nuclear Information System (INIS)

    Qu Shaochun; Yu Xiaofeng; Wang Jia; Zhou Jinying; Xie Haolin; Sui Dayun

    2010-01-01

    Objective: To construct acute myocardial infarction models in rats with blood stasis and to study the differences in blood biochemistry between acute myocardial infarction models with blood stasis and simple acute myocardial infarction models. Methods: Wistar rats were randomly divided into a control group, an acute blood stasis model group, an acute myocardial infarction sham operation group, an acute myocardial infarction model group and an acute myocardial infarction with blood stasis model group. Acute myocardial infarction models under the status of acute blood stasis in rats were set up. The serum malondialdehyde (MDA), nitric oxide (NO), free fatty acid (FFA) and tumor necrosis factor-α (TNF-α) levels were detected, and the activities of serum superoxide dismutase (SOD) and glutathione peroxidase (GSH-Px) and the levels of prostacyclin (PGI2), thromboxane A2 (TXA2) and endothelin (ET) in plasma were determined. Results: There were no obvious differences in MDA, SOD, GSH-Px and FFA between the acute myocardial infarction models with blood stasis in rats and the simple acute myocardial infarction models (P > 0.05). The decreases in PGI2 and NO, and the increases in TXA2, ET and TNF-α, were greater in the acute myocardial infarction models in rats with blood stasis than in the simple acute myocardial infarction models (P < 0.05). Conclusion: The differences in some indexes, such as PGI2 and NO, are significant when the acute myocardial infarction models in rats with blood stasis and the simple acute myocardial infarction models are compared. The results show that it is defective to evaluate the pharmacodynamics of a traditional Chinese drug with only simple acute myocardial infarction models. (authors)

  15. Climate stability and sensitivity in some simple conceptual models

    Energy Technology Data Exchange (ETDEWEB)

    Bates, J. Ray [University College Dublin, Meteorology and Climate Centre, School of Mathematical Sciences, Dublin (Ireland)

    2012-02-15

    A theoretical investigation of climate stability and sensitivity is carried out using three simple linearized models based on the top-of-the-atmosphere energy budget. The simplest is the zero-dimensional model (ZDM) commonly used as a conceptual basis for climate sensitivity and feedback studies. The others are two-zone models with tropics and extratropics of equal area; in the first of these (Model A), the dynamical heat transport (DHT) between the zones is implicit, in the second (Model B) it is explicitly parameterized. It is found that the stability and sensitivity properties of the ZDM and Model A are very similar, both depending only on the global-mean radiative response coefficient and the global-mean forcing. The corresponding properties of Model B are more complex, depending asymmetrically on the separate tropical and extratropical values of these quantities, as well as on the DHT coefficient. Adopting Model B as a benchmark, conditions are found under which the validity of the ZDM and Model A as climate sensitivity models holds. It is shown that parameter ranges of physical interest exist for which such validity may not hold. The 2×CO2 sensitivities of the simple models are studied and compared. Possible implications of the results for sensitivities derived from GCMs and palaeoclimate data are suggested. Sensitivities for more general scenarios that include negative forcing in the tropics (due to aerosols, inadvertent or geoengineered) are also studied. Some unexpected outcomes are found in this case. These include the possibility of a negative global-mean temperature response to a positive global-mean forcing, and vice versa. (orig.)
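The two model classes can be made concrete in a few lines. The sketch below solves the linearized equilibrium response of the ZDM and of a Model B-style two-zone system with explicit DHT; all coefficient values are assumed for illustration, not taken from the paper:

```python
import numpy as np

# Zero-dimensional model (ZDM): lam * dT = F  =>  dT = F / lam.
# Illustrative values: F = 3.7 W m^-2 (~2xCO2), lam = 1.2 W m^-2 K^-1.
F, lam = 3.7, 1.2
dT_zdm = F / lam

# Two-zone system in the spirit of Model B: equal-area tropics (1) and
# extratropics (2), with dynamical heat transport (DHT) proportional to
# the inter-zone temperature difference.  Per-zone energy balance:
#   lam_i * dT_i + d * (dT_i - dT_j) = F_i
lam1, lam2, d = 1.0, 1.4, 2.0       # zonal response and DHT coefficients (assumed)
F1, F2 = 3.7, 3.7
A = np.array([[lam1 + d, -d],
              [-d, lam2 + d]])
dT1, dT2 = np.linalg.solve(A, np.array([F1, F2]))
print(round(dT_zdm, 2), round(dT1, 2), round(dT2, 2))
```

With asymmetric zonal response coefficients the tropical and extratropical responses differ even under uniform forcing, which is the extra structure the paper shows the ZDM cannot represent.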

  16. Development of an EVA systems cost model. Volume 3: EVA systems cost model

    Science.gov (United States)

    1975-01-01

    The EVA systems cost model presented is based on proposed EVA equipment for the space shuttle program. General information on EVA crewman requirements in a weightless environment and an EVA capabilities overview are provided.

  17. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Market impact cost is the most significant portion of implicit transaction costs; reducing it lowers the overall transaction cost, yet it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, Bayesian neural networks, Gaussian processes, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market via the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, on four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
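A minimal sketch of one of the nonparametric approaches named above: the posterior mean of a Gaussian process (equivalently, kernel ridge regression) fitted to three input features. The features and the synthetic data are placeholders; the paper's actual Bloomberg features and models are not reproduced here.

```python
# Gaussian-process-style (kernel ridge) predictor sketch for market impact cost.
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Squared-exponential kernel between two sets of points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def gp_mean(X_train, y_train, X_test, noise=1e-2):
    """GP posterior mean with a small noise term for numerical stability."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_test, X_train) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 3))            # e.g. trade size, volatility, turnover
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] ** 2   # synthetic "impact cost" target
pred = gp_mean(X, y, X[:5])
print(pred.shape)  # one predicted cost per test point
```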

  18. A Response Surface-Based Cost Model for Wind Farm Design

    International Nuclear Information System (INIS)

    Zhang Jie; Chowdhury, Souma; Messac, Achille; Castillo, Luciano

    2012-01-01

    A Response Surface-Based Wind Farm Cost (RS-WFC) model is developed for the engineering planning of wind farms. The RS-WFC model is developed using Extended Radial Basis Functions (E-RBF) for onshore wind farms in the U.S. This model is then used to explore the influences of different design and economic parameters, including number of turbines, rotor diameter and labor cost, on the cost of a wind farm. The RS-WFC model is composed of three components that estimate the effects of engineering and economic factors on (i) the installation cost, (ii) the annual Operation and Maintenance (O and M) cost, and (iii) the total annual cost of a wind farm. The accuracy of the cost model is favorably established through comparison with pertinent commercial data. The final RS-WFC model provided interesting insights into cost variation with respect to critical engineering and economic parameters. In addition, a newly developed analytical wind farm engineering model is used to determine the power generated by the farm, and the subsequent Cost of Energy (COE). This COE is optimized for a unidirectional uniform “incoming wind speed” scenario using Particle Swarm Optimization (PSO). We found that the COE could be appreciably minimized through layout optimization, thereby yielding significant cost savings. - Highlights: ► We present a Response Surface-Based Wind Farm Cost (RS-WFC) model for wind farm design. ► The model could estimate installation cost, Operation and Maintenance cost, and total annual cost of a wind farm. ► The Cost of Energy is optimized using Particle Swarm Optimization. ► Layout optimization could yield significant cost savings.
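The optimization step mentioned above can be illustrated with a generic Particle Swarm Optimization loop. The quadratic objective below is a stand-in for the RS-WFC-based Cost of Energy; the real model's wake and cost terms are not reproduced here.

```python
# Particle Swarm Optimization sketch minimizing a toy COE surrogate.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                        # velocities
    pbest = x.copy()                            # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()    # global best
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(objective, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

coe = lambda layout: ((layout - 2.0) ** 2).sum()  # toy COE, minimum at [2, 2]
best = pso(coe, dim=2)
print(np.round(best, 2))  # should land near [2. 2.]
```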

  19. A New Proposed Cost Model for List Accessing Problem using Buffering

    OpenAIRE

    Mohanty, Rakesh; Bhoi, Seetaya; Tripathy, Sasmita

    2011-01-01

    There are many existing well known cost models for the list accessing problem. The standard cost model developed by Sleator and Tarjan is most widely used. In this paper, we have made a comprehensive study of the existing cost models and proposed a new cost model for the list accessing problem. In our proposed cost model, for calculating the processing cost of request sequence using a singly linked list, we consider the access cost, matching cost and replacement cost. The cost of processing a...

  20. Cost evaluation of clinical laboratory in Taiwan's National Health System by using activity-based costing.

    Science.gov (United States)

    Su, Bin-Guang; Chen, Shao-Fen; Yeh, Shu-Hsing; Shih, Po-Wen; Lin, Ching-Chiang

    2016-11-01

    To cope with the government's policies to reduce medical costs, Taiwan's healthcare service providers are striving to survive by pursuing profit maximization through cost control. This article aimed to present the results of a cost evaluation using activity-based costing performed in the laboratory, in order to throw light on the differences between costs and the payment system of National Health Insurance (NHI). This study analyzed the cost and income data of the clinical laboratory. Direct costs belong to their respective sections of the department. The department's shared costs, including public expenses and administrative assigned costs, were allocated to the department's respective sections. A simple regression equation was created to predict profit and loss, and to evaluate the department's break-even point, fixed cost, and contribution margin ratio. In the clinical chemistry and seroimmunology sections, the cost per test was lower than the NHI payment and their major laboratory tests had revenues with a profitability ratio of 8.7%, while the other sections had a higher cost per test than the NHI payment and their major tests were in deficit. The study found a simple linear regression model as follows: "Balance = -84,995 + 0.543 × income" (R² = 0.544). In order to avoid deficit, laboratories are suggested to increase test volumes, enhance laboratory test specialization, and reach marginal scale. A hospital could integrate with regional medical institutions through alliances or OEM methods to increase volumes to reach marginal scale and reduce laboratory costs, enhancing the level and quality of laboratory medicine.
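The reported regression directly implies a break-even income: the balance is zero when income equals the fixed cost divided by the contribution margin ratio. The figures below come straight from the abstract.

```python
# Break-even income implied by the regression Balance = -84,995 + 0.543 * income.

fixed_cost = 84_995            # intercept magnitude (fixed cost)
contribution_margin = 0.543    # slope (contribution margin ratio)

break_even_income = fixed_cost / contribution_margin
print(round(break_even_income))  # 156529
```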

  1. TTS-Polttopuu - cost calculation model for fuelwood

    International Nuclear Information System (INIS)

    Naett, H.; Ryynaenen, S.

    1998-01-01

    The TTS Institute's Forestry Department has developed a computer-based cost calculation model, 'TTS-Polttopuu', for calculating unit costs and resource needs in harvesting systems for wood chips and split firewood. The model makes it possible to determine the productivity and device cost per operating hour for each working stage of the harvesting system. It also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and chipping and chopping of longwood at storage. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for the cost calculation model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY Research Programme. (orig.)
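The per-stage accounting described above can be sketched as a sum of stage costs per unit of output: each working stage contributes its hourly device cost divided by its productivity. The stage figures below are invented for illustration only, not TTS-Polttopuu defaults.

```python
# Unit-cost sketch in the spirit of TTS-Polttopuu: the system unit cost is the
# sum over working stages of (cost per operating hour) / (productivity).

def unit_cost(stages):
    """stages: list of (cost_per_hour, productivity_m3_per_hour) tuples."""
    return sum(cost / prod for cost, prod in stages)

harvesting_chain = [
    (45.0, 3.0),   # cutting: EUR/h, loose-m3/h (illustrative)
    (60.0, 8.0),   # forest haulage
    (80.0, 20.0),  # road transportation
    (95.0, 25.0),  # chipping at storage
]
print(round(unit_cost(harvesting_chain), 2))  # 30.3 EUR per loose-m3
```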

  2. A simple model of bipartite cooperation for ecological and organizational networks.

    Science.gov (United States)

    Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian

    2009-01-22

    In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.
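A generic stochastic bipartite-network generator in the same spirit can be sketched as follows. This is illustrative only: the actual specialization and interaction rules of the model above are not reproduced. Here each plant and animal receives a trait in [0, 1], and an interaction occurs with probability decaying with trait distance, which tends to produce nested structure.

```python
# Generic stochastic bipartite cooperation-network sketch (illustrative rules).
import numpy as np

def bipartite_network(n_rows, n_cols, decay=5.0, seed=1):
    rng = np.random.default_rng(seed)
    r = rng.uniform(size=n_rows)    # traits of one class (e.g. plants)
    c = rng.uniform(size=n_cols)    # traits of the other class (e.g. animals)
    # interaction probability decays with trait distance
    p = np.exp(-decay * np.abs(r[:, None] - c[None, :]))
    return (rng.uniform(size=(n_rows, n_cols)) < p).astype(int)

net = bipartite_network(20, 30)
print(net.shape, int(net.sum()))  # adjacency matrix shape and number of links
```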

  3. Simple model with damping of the mode-coupling instability

    Energy Technology Data Exchange (ETDEWEB)

    Pestrikov, D V [AN SSSR, Novosibirsk (Russian Federation). Inst. Yadernoj Fiziki

    1996-08-01

    In this paper we use a simple model to study the suppression of the transverse mode-coupling instability. Two possibilities are considered. One is due to the damping of particular synchrobetatron modes, and another - due to Landau damping, caused by the nonlinearity of betatron oscillations. (author)

  4. A parametric costing model for wave energy technology

    International Nuclear Information System (INIS)

    1992-01-01

    This document describes the philosophy and technical approach to a parametric cost model for offshore wave energy systems. Consideration is given both to existing known devices and other devices yet to be conceptualised. The report is complementary to a spreadsheet based cost estimating model. The latter permits users to derive capital cost estimates using either inherent default data or user provided data, if a particular scheme provides sufficient design definition for more accurate estimation. The model relies on design default data obtained from wave energy device designs and a set of specifically collected cost data. (author)

  5. Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty

    DEFF Research Database (Denmark)

    Doherty, John; Christensen, Steen

    2011-01-01

    … of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights…

  6. Experimental Determination of Demand Response Control Models and Cost of Control for Ensembles of Window-Mount Air Conditioners

    Energy Technology Data Exchange (ETDEWEB)

    Geller, Drew Adam [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-29

    Control of consumer electrical devices for providing electrical grid services is expanding in both the scope and the diversity of loads that are engaged in control, but there are few experimentally-based models of these devices suitable for control designs and for assessing the cost of control. A laboratory-scale test system is developed to experimentally evaluate the use of a simple window-mount air conditioner for electrical grid regulation services. The experimental test bed is a single, isolated air conditioner embedded in a test system that both emulates the thermodynamics of an air conditioned room and also isolates the air conditioner from the real-world external environmental and human variables that perturb the careful measurements required to capture a model that fully characterizes both the control response functions and the cost of control. The control response functions and cost of control are measured using harmonic perturbation of the temperature set point and a test protocol that further isolates the air conditioner from low frequency environmental variability.

  7. A simple cost-effective and eco-friendly wet chemical process for the fabrication of superhydrophobic cotton fabrics

    International Nuclear Information System (INIS)

    Richard, Edna; Lakshmi, R.V.; Aruna, S.T.; Basu, Bharathibai J.

    2013-01-01

    Superhydrophobic surfaces were created on hydrophilic cotton fabrics by a simple wet chemical process. The fabric was immersed in a colloidal suspension of zinc hydroxide followed by subsequent hydrophobization with stearic acid. The wettability of the modified cotton fabric sample was studied by water contact angle (WCA) and water shedding angle (WSA) measurements. The modified cotton fabrics exhibited superhydrophobicity with a WCA of 151° for 8 μL water droplet and a WSA of 5–10° for 40 μL water droplet. The superhydrophobic cotton sample was also characterized by field emission scanning electron microscopy (FESEM) and energy dispersive X-ray spectroscopy (EDX). The method is simple, eco-friendly and cost-effective and can be applied to large area of cotton fabric materials. It was shown that superhydrophobicity of the fabric was due to the combined effect of surface roughness imparted by zinc hydroxide and the low surface energy of stearic acid.

  8. A simple cost-effective and eco-friendly wet chemical process for the fabrication of superhydrophobic cotton fabrics

    Energy Technology Data Exchange (ETDEWEB)

    Richard, Edna; Lakshmi, R.V.; Aruna, S.T., E-mail: aruna_reddy@nal.res.in; Basu, Bharathibai J.

    2013-07-15

    Superhydrophobic surfaces were created on hydrophilic cotton fabrics by a simple wet chemical process. The fabric was immersed in a colloidal suspension of zinc hydroxide followed by subsequent hydrophobization with stearic acid. The wettability of the modified cotton fabric sample was studied by water contact angle (WCA) and water shedding angle (WSA) measurements. The modified cotton fabrics exhibited superhydrophobicity with a WCA of 151° for 8 μL water droplet and a WSA of 5–10° for 40 μL water droplet. The superhydrophobic cotton sample was also characterized by field emission scanning electron microscopy (FESEM) and energy dispersive X-ray spectroscopy (EDX). The method is simple, eco-friendly and cost-effective and can be applied to large area of cotton fabric materials. It was shown that superhydrophobicity of the fabric was due to the combined effect of surface roughness imparted by zinc hydroxide and the low surface energy of stearic acid.

  9. Structure of simple liquids; Structure des liquides simples

    Energy Technology Data Exchange (ETDEWEB)

    Blain, J F [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires

    1969-07-01

    The results obtained by applying to argon and sodium the two principal methods of studying the structure of liquids, X-ray scattering and neutron scattering, are presented. The principal models employed for reconstituting the structure of simple liquids are then described: mathematical models, lattice models and their derivatives, and experimental models. (author)

  10. A simple model for behaviour change in epidemics

    Directory of Open Access Journals (Sweden)

    Brauer Fred

    2011-02-01

    Background: People change their behaviour during an epidemic. Infectious members of a population may reduce the number of contacts they make with other people because of the physical effects of their illness, and possibly because of public health announcements asking them to do so in order to decrease the number of new infections, while susceptible members of the population may reduce the number of contacts they make in order to try to avoid becoming infected. Methods: We consider a simple epidemic model in which susceptible and infectious members respond to a disease outbreak by reducing contacts by different fractions, and analyze the effect of such contact reductions on the size of the epidemic. We assume constant fractional reductions, without attempting to consider the way in which susceptible members might respond to information about the epidemic. Results: We are able to derive upper and lower bounds for the final size of an epidemic, both for simple and staged-progression models. Conclusions: The responses of uninfected and infected individuals in a disease outbreak are different, and this difference affects estimates of epidemic size.
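The effect of constant fractional contact reductions on final epidemic size can be sketched with the classical final-size relation. Assuming susceptibles reduce contacts by a fraction `a` and infectives by `b` (so the effective reproduction number is (1-a)(1-b)R0, a simplification of the model above), the relation ln(S0/S_inf) = R_eff(1 - S_inf/N) is solved by fixed-point iteration. Parameter values are illustrative.

```python
# Final-size sketch for an SIR outbreak with constant fractional contact reductions.
import math

def final_size(R0, a=0.0, b=0.0, tol=1e-12):
    """Attack rate from the final-size relation with R_eff = (1-a)(1-b)R0."""
    R_eff = (1 - a) * (1 - b) * R0
    s = 0.5  # fraction still susceptible at the end (initial guess)
    for _ in range(1000):
        s_new = math.exp(-R_eff * (1 - s))  # fixed-point update
        if abs(s_new - s) < tol:
            break
        s = s_new
    return 1 - s  # fraction ever infected

print(round(final_size(2.5), 3))                # ≈ 0.893, no behaviour change
print(round(final_size(2.5, a=0.2, b=0.3), 3))  # ≈ 0.511, contacts reduced
```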

  11. Development of a simple, low cost, indirect ion beam fluence measurement system for ion implanters, accelerators

    Science.gov (United States)

    Suresh, K.; Balaji, S.; Saravanan, K.; Navas, J.; David, C.; Panigrahi, B. K.

    2018-02-01

    We developed a simple, low-cost, user-friendly automated indirect ion beam fluence measurement system for ion irradiation and analysis experiments requiring indirect beam fluence measurements unperturbed by sample conditions such as low temperature, high temperature and sample biasing, as well as for regular ion implantation experiments in ion implanters and electrostatic accelerators with continuous beam. The system, which uses simple, low-cost, off-the-shelf components/systems and two distinct layers of in-house built software, not only eliminates the need for costly data acquisition systems but also overcomes difficulties in using proprietary software. The hardware of the system is centered around a personal computer, a PIC16F887-based embedded system, a Faraday cup drive cum monitor circuit, a pair of Faraday cups and a beam current integrator, and the in-house developed software includes C-based microcontroller firmware and LabVIEW-based virtual instrument automation software. The automatic fluence measurement involves two important phases: a current sampling phase lasting 20-30 seconds, during which the ion beam current is continuously measured by intercepting the ion beam and the averaged beam current value is computed, and a subsequent charge computation phase lasting 700-900 seconds, during which the ion beam irradiates the samples and the incremental fluence received by the sample is estimated using the latest averaged beam current value from the current sampling phase. The cycle of current sampling and charge computation is repeated till the required fluence is reached. Besides simplicity and cost-effectiveness, other important advantages of the developed system include easy reconfiguration of the system to suit customisation of experiments, scalability, easy debug and maintenance of the hardware/software, and the ability to work as a standalone system. The system was tested with different sets of samples and ion fluences and the results were verified using

  12. Proposals for software analysis of cost effectiveness and cost-benefit for optimisation of radiation protection

    International Nuclear Information System (INIS)

    Schieber, C.; Lombard, J.; Lefaure, C.

    1990-06-01

    The objective of this report is to present the principles of decision-making software for radiation protection options, applying the ALARA principle. The choice of optimum options is performed by applying cost-effectiveness and cost-benefit models. Radiation protection options are described by two indicators: a simple economic indicator (the cost of radiation protection) and a dosimetric indicator (the collective dose associated with the protection). For both analyses the software enables sensitivity analysis. The software could be extended by integrating a module that takes into account combinations of two options, since options are not independent.
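The cost-benefit ranking described above can be sketched as minimizing protection cost plus the monetary value assigned to the residual collective dose. The alpha value and the option figures below are invented for illustration, not values from the report.

```python
# ALARA cost-benefit sketch: pick the option minimizing cost + alpha * dose.

ALPHA = 100_000  # currency units per man-Sv (assumed reference value)

options = {
    "no extra shielding": (0,       2.0),   # (protection cost, residual man-Sv)
    "partial shielding":  (100_000, 0.8),
    "full shielding":     (300_000, 0.5),
}

def total_cost(cost, dose, alpha=ALPHA):
    """Protection cost plus monetized collective dose."""
    return cost + alpha * dose

best = min(options, key=lambda k: total_cost(*options[k]))
print(best)  # partial shielding (180,000 vs 200,000 and 350,000)
```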

  13. Los Alamos Waste Management Cost Estimation Model

    International Nuclear Information System (INIS)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs

  14. Validation of the replica trick for simple models

    Science.gov (United States)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
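The analytic continuation at the heart of the replica trick rests on the standard identity, stated here in its usual form for completeness (not quoted from the paper): the quenched average of log Z is recovered from the integer moments of Z continued in n.

```latex
% Quenched average of log Z from moments of Z, continued in n and taken as n -> 0:
\mathbb{E}[\log Z]
  = \lim_{n \to 0} \frac{\mathbb{E}[Z^n] - 1}{n}
  = \lim_{n \to 0} \frac{\partial}{\partial n} \log \mathbb{E}[Z^n]
```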

  15. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    Science.gov (United States)

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used
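The numerical allocation search described above can be sketched for a simplified two-stage version of the nested design: k subjects measured on m occasions each, with variance of the mean V = s2_b/k + s2_w/(k·m) and power-function costs. All numbers are illustrative, not taken from the study's 225 scenarios.

```python
# Grid-search sketch of cost-efficient allocation under a constrained budget.

def variance(k, m, s2_b=1.0, s2_w=4.0):
    """Variance of the exposure mean: between-subject + within-subject parts."""
    return s2_b / k + s2_w / (k * m)

def cost(k, m, c_subj=100.0, c_meas=20.0, e_subj=1.0, e_meas=0.8):
    """Total cost: power functions of subject count and measurement count."""
    return c_subj * k ** e_subj + c_meas * (k * m) ** e_meas

def best_allocation(budget):
    feasible = [(variance(k, m), k, m)
                for k in range(1, 200) for m in range(1, 20)
                if cost(k, m) <= budget]
    return min(feasible)[1:]  # (k, m) with the smallest variance

k, m = best_allocation(5000.0)
print(k, m)  # optimal subject and occasion counts for this scenario
```

With a non-linear measurement-cost exponent (e_meas < 1), more than one occasion per subject can become optimal, which is the kind of deviation the study reports.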

  16. A Simple, Realistic Stochastic Model of Gastric Emptying.

    Directory of Open Access Journals (Sweden)

    Jiraphat Yokrattanasak

    Several models of Gastric Emptying (GE) have been employed in the past to represent the rate of delivery of stomach contents to the duodenum and jejunum. These models have all used a deterministic form (algebraic equations or ordinary differential equations), considering GE as a continuous, smooth process in time. However, GE is known to occur as a sequence of spurts, irregular both in size and in timing. Hence, we formulate a simple stochastic process model, able to represent the irregular decrements of gastric contents after a meal. The model is calibrated on existing literature data and provides consistent predictions of the observed variability in the emptying trajectories. This approach may be useful in metabolic modeling, since it describes well and explains the apparently heterogeneous GE experimental results in situations where common gastric mechanics across subjects would be expected.
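A minimal stochastic emptying trajectory in this spirit can be simulated as content decreasing by random spurts at exponentially distributed intervals. The rates and spurt sizes below are illustrative placeholders, not the paper's calibrated values.

```python
# Stochastic gastric-emptying sketch: irregular spurts in size and timing.
import random

def emptying_trajectory(content=500.0, rate=0.8, mean_spurt=25.0,
                        t_end=60.0, seed=7):
    """Simulate (time, remaining content) pairs until empty or t_end."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, content)]
    while t < t_end and content > 0:
        t += rng.expovariate(rate)                # waiting time to next spurt
        spurt = rng.expovariate(1.0 / mean_spurt) # random spurt size
        content = max(0.0, content - spurt)
        traj.append((t, content))
    return traj

traj = emptying_trajectory()
print(len(traj), round(traj[-1][1], 1))  # number of events, final content
```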

  17. Two point function for a simple general relativistic quantum model

    OpenAIRE

    Colosi, Daniele

    2007-01-01

    We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.

  18. Parabolic Trough Reference Plant for Cost Modeling with the Solar Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    Turchi, C.

    2010-07-01

    This report describes a component-based cost model developed for parabolic trough solar power plants. The cost model was developed by the National Renewable Energy Laboratory (NREL), assisted by WorleyParsons Group Inc., for use with NREL's Solar Advisor Model (SAM). This report includes an overview and explanation of the model, two summary contract reports from WorleyParsons, and an Excel spreadsheet for use with SAM. The cost study uses a reference plant with a 100-MWe capacity and six hours of thermal energy storage. Wet-cooling and dry-cooling configurations are considered. The spreadsheet includes capital and operating cost by component to allow users to estimate the impact of changes in component costs.

  19. Operations and support cost modeling using Markov chains

    Science.gov (United States)

    Unal, Resit

    1989-01-01

    Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future, due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
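The Markov-chain costing idea above can be sketched with an absorbing chain: each transient state of a vehicle turnaround (inspection, minor repair, major repair) carries a cost per visit, and the expected O&S cost follows from the fundamental matrix N = (I - Q)^-1. The transition probabilities and costs below are invented for illustration.

```python
# Absorbing Markov chain sketch of operations-and-support cost per turnaround.
import numpy as np

# Transient states: 0 = inspection, 1 = minor repair, 2 = major repair.
# Row sums below 1; the remainder is the probability of completing turnaround.
Q = np.array([[0.0, 0.25, 0.05],   # from inspection
              [0.6, 0.0,  0.1 ],   # from minor repair (back to inspection)
              [0.7, 0.2,  0.0 ]])  # from major repair
cost_per_visit = np.array([10.0, 50.0, 400.0])  # $k per visit to each state

N = np.linalg.inv(np.eye(3) - Q)        # expected visits to each transient state
expected_cost = N[0] @ cost_per_visit   # starting from inspection
print(round(float(expected_cost), 1))   # ≈ 68.4 ($k)
```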

  20. Simple model of surface roughness for binary collision sputtering simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, Sloan J. [Institute of Solid-State Electronics, TU Wien, Floragasse 7, A-1040 Wien (Austria); Hobler, Gerhard, E-mail: gerhard.hobler@tuwien.ac.at [Institute of Solid-State Electronics, TU Wien, Floragasse 7, A-1040 Wien (Austria); Maciążek, Dawid; Postawa, Zbigniew [Institute of Physics, Jagiellonian University, ul. Lojasiewicza 11, 30348 Kraków (Poland)

    2017-02-15

    Highlights: • A simple model of surface roughness is proposed. • Its key feature is a linearly varying target density at the surface. • The model can be used in 1D/2D/3D Monte Carlo binary collision simulations. • The model fits well experimental glancing incidence sputtering yield data. - Abstract: It has been shown that surface roughness can strongly influence the sputtering yield – especially at glancing incidence angles where the inclusion of surface roughness leads to an increase in sputtering yields. In this work, we propose a simple one-parameter model (the “density gradient model”) which imitates surface roughness effects. In the model, the target’s atomic density is assumed to vary linearly between the actual material density and zero. The layer width is the sole model parameter. The model has been implemented in the binary collision simulator IMSIL and has been evaluated against various geometric surface models for 5 keV Ga ions impinging an amorphous Si target. To aid the construction of a realistic rough surface topography, we have performed MD simulations of sequential 5 keV Ga impacts on an initially crystalline Si target. We show that our new model effectively reproduces the sputtering yield, with only minor variations in the energy and angular distributions of sputtered particles. The success of the density gradient model is attributed to a reduction of the reflection coefficient – leading to increased sputtering yields, similar in effect to surface roughness.
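The density gradient model's single parameter can be illustrated directly: the atomic density ramps linearly from zero at the nominal surface to the bulk value over a layer of width w. The bulk density and layer width below are illustrative, not fitted values from the paper.

```python
# Density-gradient model sketch: linear density ramp over a layer of width w.

def density(depth, bulk_density=4.98e22, w=1.0):
    """Atomic density (atoms/cm^3) vs depth (nm); bulk is roughly amorphous Si."""
    if depth <= 0.0:
        return 0.0               # above the nominal surface: vacuum
    if depth >= w:
        return bulk_density      # below the transition layer: bulk material
    return bulk_density * depth / w  # linear ramp inside the layer

print(density(0.5, w=1.0) / 4.98e22)  # 0.5 of bulk density at mid-layer
```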

  1. Simple model of surface roughness for binary collision sputtering simulations

    International Nuclear Information System (INIS)

    Lindsey, Sloan J.; Hobler, Gerhard; Maciążek, Dawid; Postawa, Zbigniew

    2017-01-01

Highlights: • A simple model of surface roughness is proposed. • Its key feature is a linearly varying target density at the surface. • The model can be used in 1D/2D/3D Monte Carlo binary collision simulations. • The model fits experimental glancing incidence sputtering yield data well. - Abstract: It has been shown that surface roughness can strongly influence the sputtering yield – especially at glancing incidence angles, where the inclusion of surface roughness leads to an increase in sputtering yields. In this work, we propose a simple one-parameter model (the “density gradient model”) which imitates surface roughness effects. In the model, the target’s atomic density is assumed to vary linearly between the actual material density and zero. The layer width is the sole model parameter. The model has been implemented in the binary collision simulator IMSIL and has been evaluated against various geometric surface models for 5 keV Ga ions impinging on an amorphous Si target. To aid the construction of a realistic rough surface topography, we have performed MD simulations of sequential 5 keV Ga impacts on an initially crystalline Si target. We show that our new model effectively reproduces the sputtering yield, with only minor variations in the energy and angular distributions of sputtered particles. The success of the density gradient model is attributed to a reduction of the reflection coefficient – leading to increased sputtering yields, similar in effect to surface roughness.
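The linear density profile described above can be sketched as a one-parameter function; the depth convention, function name, and unit normalization below are illustrative assumptions, not taken from IMSIL:

```python
def density(z, w, rho_bulk=1.0):
    """Density gradient model: target atomic density as a function of
    depth z below the nominal surface. The density rises linearly from
    zero at the surface (z = 0) to the bulk value at depth w, which is
    the model's sole parameter."""
    if z <= 0.0:
        return 0.0              # vacuum above the surface
    if z >= w:
        return rho_bulk         # undisturbed bulk material
    return rho_bulk * z / w     # linear transition layer
```

Halfway through the layer, the density is half the bulk value, e.g. `density(1.0, 2.0)` returns `0.5`.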

  2. Application of Simple CFD Models in Smoke Ventilation Design

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter Vilhelm; la Cour-Harbo, Hans

    2004-01-01

The paper examines the possibilities of using simple CFD models in practical smoke ventilation design. The aim is to assess whether it is possible, with reasonable accuracy, to predict the behaviour of smoke transport in the case of a fire. A CFD code mainly applicable for “ordinary” ventilation design...

  3. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  4. Chaos from simple models to complex systems

    CERN Document Server

    Cencini, Massimo; Vulpiani, Angelo

    2010-01-01

Chaos: from simple models to complex systems aims to guide science and engineering students through chaos and nonlinear dynamics from classical examples to the most recent fields of research. The first part, intended for undergraduate and graduate students, is a gentle and self-contained introduction to the concepts and main tools for the characterization of deterministic chaotic systems, with emphasis on statistical approaches. The second part can be used as a reference by researchers as it focuses on more advanced topics including the characterization of chaos with tools of information theor

  5. On production costs in vertical differentiation models

    OpenAIRE

    Dorothée Brécard

    2009-01-01

In this paper, we analyse the effects of introducing a unit production cost alongside a fixed cost of quality improvement in a duopoly model of vertical product differentiation. Using an original methodology, we show that a low unit cost tends to reduce product differentiation and thus prices, whereas a high unit cost widens product differentiation and increases prices.

  6. Simple Regge pole model for Compton scattering of protons

    International Nuclear Information System (INIS)

    Saleem, M.; Fazal-e-Aleem

    1978-01-01

It is shown that by a phenomenological choice of the residue functions, the differential cross section for γp → γp, including the very recent measurements up to -t = 4.3 (GeV/c)², can be explained at all measured energies greater than 2 GeV with a simple Regge pole model

  7. Fast and simple model for atmospheric radiative transfer

    NARCIS (Netherlands)

    Seidel, F.C.; Kokhanovsky, A.A.; Schaepman, M.E.

    2010-01-01

Radiative transfer models (RTMs) are of utmost importance for quantitative remote sensing, especially for compensating atmospheric perturbation. A persistent trade-off exists between approaches that prefer accuracy at the cost of computational complexity and those favouring simplicity at the

  8. Simple Model-Free Controller for the Stabilization of Planetary Inverted Pendulum

    Directory of Open Access Journals (Sweden)

    Huanhuan Mai

    2012-01-01

Full Text Available A simple model-free controller is presented for solving nonlinear dynamic control problems. As an example of the problem, a planetary gear-type inverted pendulum (PIP) is discussed. To control the inherently unstable system, which requires real-time control responses, the design of a smart and simple controller is necessary. The proposed model-free controller includes a swing-up controller part and a stabilization controller part; neither controller has any information about the PIP. Since the input/output scaling parameters of the fuzzy controller are highly sensitive, we use a genetic algorithm (GA) to obtain the optimal control parameters. The experimental results show the effectiveness and robustness of the present controller.

  9. Landau-Zener transitions and Dykhne formula in a simple continuum model

    Science.gov (United States)

    Dunham, Yujin; Garmon, Savannah

The Landau-Zener model describing the interaction between two linearly driven discrete levels is useful in describing many simple dynamical systems; however, no system is completely isolated from its surrounding environment. Here we examine a generalization of the original Landau-Zener model to study simple environmental influences. We consider a model in which one of the discrete levels is replaced with an energy continuum, and find that the survival probability for the initially occupied diabatic level is unaffected by the presence of the continuum. This result can be predicted by assuming that each step in the evolution of the diabatic state evolves independently according to the Landau-Zener formula, even in the continuum limit. We also show that, at least for the simplest model, this result can be predicted with the natural generalization of the Dykhne formula for open systems. We also observe dissipation, as the non-escape probability from the discrete levels is no longer equal to one.
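The diabatic survival probability invoked above follows the standard Landau-Zener formula; a minimal sketch in dimensionless units (the parameter names `coupling` and `sweep_rate` are illustrative):

```python
import math

def lz_survival(coupling, sweep_rate, hbar=1.0):
    """Landau-Zener probability of remaining in the initial diabatic
    state after a single linear sweep through the crossing:
    P = exp(-2*pi*Delta^2 / (hbar * |dE/dt|))."""
    return math.exp(-2.0 * math.pi * coupling ** 2 / (hbar * abs(sweep_rate)))
```

Stronger coupling or slower sweeps suppress survival; because the exponent is additive, survival probabilities of independent evolution steps multiply, which is the composition property the abstract relies on.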

  10. Obtaining natural-like flow releases in diverted river reaches from simple riparian benefit economic models.

    Science.gov (United States)

    Perona, Paolo; Dürrenmatt, David J; Characklis, Gregory W

    2013-03-30

We propose a theoretical river modeling framework for generating variable flow patterns in diverted streams (i.e., no reservoir). Using a simple economic model and the principle of equal marginal utility in an inverse fashion, we first quantify the benefit of the water that goes to the environment relative to that of the anthropic activity. Then, we obtain exact expressions for optimal water allocation rules between the two competing uses, as well as the related statistical distributions. These rules are applied using both synthetic and observed streamflow data, to demonstrate that this approach may be useful in 1) generating more natural flow patterns in the river reach downstream of the diversion, thus reducing the ecodeficit; 2) obtaining a more enlightened economic interpretation of Minimum Flow Release (MFR) strategies; and 3) comparing the long-term costs and benefits of variable versus MFR policies and showing the greater ecological sustainability of this new approach. Copyright © 2013 Elsevier Ltd. All rights reserved.
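The equal-marginal-utility allocation rule can be sketched numerically: split an available flow Q between environmental and anthropic uses so that the two marginal benefits coincide. This is a generic sketch of the principle, not the paper's model; the logarithmic-style benefit curves in the usage note are illustrative assumptions.

```python
def allocate(Q, mb_env, mb_use, tol=1e-9):
    """Split total flow Q so that the marginal benefit of environmental
    flow equals the marginal benefit of diverted (anthropic) flow.
    mb_env and mb_use are decreasing marginal-benefit functions; the
    root of f(q) = mb_env(q) - mb_use(Q - q) is found by bisection."""
    f = lambda q: mb_env(q) - mb_use(Q - q)
    lo, hi = 0.0, Q
    if f(lo) <= 0.0:    # anthropic use dominates everywhere: no env. flow
        return lo
    if f(hi) >= 0.0:    # environmental flow dominates everywhere
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with marginal benefits 2/(1+q) for the environment and 1/(1+q) for the anthropic use and Q = 5, the rule allocates q = 11/3 to the environment.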

  11. Cost model validation: a technical and cultural approach

    Science.gov (United States)

    Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.

    2001-01-01

    This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.

  12. Identifying and quantifying energy savings on fired plant using low cost modelling techniques

    International Nuclear Information System (INIS)

    Tucker, Robert; Ward, John

    2012-01-01

Research highlights: → Furnace models based on the zone method for radiation calculation are described. → Validated steady-state and transient models have been developed. → We show how these simple models can identify the best options for saving energy. → High emissivity coatings predicted to give performance enhancement on a fired heater. → Optimal heat recovery strategies on a steel reheating furnace are predicted. -- Abstract: Combustion in fired heaters, boilers and furnaces often accounts for the major energy consumption in industrial processes. Small improvements in efficiency can result in large reductions in energy consumption, CO2 emissions, and operating costs. This paper will describe some useful low cost modelling techniques based on the zone method to help identify energy saving opportunities on high temperature fuel-fired process plant. The zone method has for many decades been successfully applied to small batch furnaces through to large steel-reheating furnaces, glass tanks, boilers and fired heaters on petrochemical plant. Zone models can simulate both steady-state furnace operation and the more complex transient operation typical of a production environment. These models can be used to predict thermal efficiency and performance, and more importantly, to assist in identifying and predicting energy saving opportunities from such measures as: improving air/fuel ratio and temperature controls; improved insulation; use of oxygen or oxygen enrichment; air preheating via flue gas heat recovery; and modification to furnace geometry and hearth loading. There is also increasing interest in the application of refractory coatings for increasing surface radiation in fired plant. All of these techniques can yield savings ranging from a few percent upwards and can deliver rapid financial payback, but their evaluation often requires robust and reliable models in order to increase confidence in making financial investment decisions. This paper gives

  13. A simple procedure to model water level fluctuations in partially inundated wetlands

    NARCIS (Netherlands)

    Spieksma, JFM; Schouwenaars, JM

    When modelling groundwater behaviour in wetlands, there are specific problems related to the presence of open water in small-sized mosaic patterns. A simple quasi two-dimensional model to predict water level fluctuations in partially inundated wetlands is presented. In this model, the ratio between

  14. Water nanoelectrolysis: A simple model

    Science.gov (United States)

    Olives, Juan; Hammadi, Zoubida; Morin, Roger; Lapena, Laurent

    2017-12-01

A simple model of water nanoelectrolysis—defined as the nanolocalization at a single point of any electrolysis phenomenon—is presented. It is based on the electron tunneling assisted by the electric field through the thin film of water molecules (~0.3 nm thick) at the surface of a tip-shaped nanoelectrode (micrometric to nanometric curvature radius at the apex). By applying, e.g., an electric potential V1 during a finite time t1, and then the potential -V1 during the same time t1, we show that there are three distinct regions in the plane (t1, V1): one for the nanolocalization (at the apex of the nanoelectrode) of the electrolysis oxidation reaction, the second one for the nanolocalization of the reduction reaction, and the third one for the nanolocalization of the production of bubbles. These parameters t1 and V1 completely control the time at which the electrolysis reaction (of oxidation or reduction) begins, the duration of this reaction, the electrolysis current intensity (i.e., the tunneling current), the number of produced O2 or H2 molecules, and the radius of the nanolocalized bubbles. The model is in good agreement with our experiments.

  15. Costs of health care across primary care models in Ontario.

    Science.gov (United States)

    Laberge, Maude; Wodchis, Walter P; Barnsley, Jan; Laporte, Audrey

    2017-08-01

The purpose of this study is to analyze the relationship between newly introduced primary care models in Ontario, Canada, and patients' primary care and total health care costs. A specific focus is on the payment mechanisms for primary care physicians, i.e. fee-for-service (FFS), enhanced-FFS, and blended capitation, and whether providers practiced as part of a multidisciplinary team. Utilization data for a one-year period were measured using administrative databases for a 10% sample selected at random from the Ontario adult population. Primary care and total health care costs were calculated at the individual level and included costs from physician services, hospital visits and admissions, long term care, drugs, home care, lab tests, and visits to non-medical health care providers. Generalized linear model regressions were conducted to assess the differences in costs between primary care models. Patients not enrolled with a primary care physician were younger, more likely to be males and of lower socio-economic status. Patients in blended capitation models were healthier and wealthier than FFS and enhanced-FFS patients. Primary care and total health care costs were significantly different across Ontario primary care models. Using the traditional FFS as the reference, we found that patients in the enhanced-FFS models had the lowest total health care costs, and also the lowest primary care costs. Patients in the blended capitation models had higher primary care costs but lower total health care costs. Patients that were in multidisciplinary teams (FHT), where physicians are also paid on a blended capitation basis, had higher total health care costs than non-FHT patients but still lower than the FFS reference group. Primary care and total health care costs increased with patients' age, morbidity, and lower income quintile across all primary care payment types. 
The new primary care models were associated with lower total health care costs for patients compared to the

  16. The simple modelling method for storm- and grey-water quality ...

    African Journals Online (AJOL)

    The simple modelling method for storm- and grey-water quality management applied to Alexandra settlement. ... objectives optimally consist of educational programmes, erosion and sediment control, street sweeping, removal of sanitation system overflows, impervious cover reduction, downspout disconnections, removal of ...

  17. A simple model of hysteresis behavior using spreadsheet analysis

    Science.gov (United States)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

Hysteresis loops occur in many scientific and technical problems, especially as field dependent magnetization of ferromagnetic materials, but also as stress-strain-curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus an easy macro code, can be used by students to understand how these systems work and how the parameters influence the reactions of the system to an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas, in which similar hysteresis loops occur.
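A step-by-step update rule of the kind described above can be sketched with a "play" (backlash) operator, one of the standard elementary hysteresis models; the half-width r and the triangular field sweep are illustrative choices, not the paper's spreadsheet formulas:

```python
def play_step(h, m_prev, r):
    """One update step of a play (backlash) hysteresis operator: the
    output m may lag the input field h by at most the half-width r."""
    return max(h - r, min(h + r, m_prev))

def sweep(fields, r, m0=0.0):
    """Run the operator over a field sequence, returning the output at
    each step; ascending and descending branches differ, i.e. a loop."""
    out, m = [], m0
    for h in fields:
        m = play_step(h, m, r)
        out.append(m)
    return out
```

Sweeping h from -3 to 3 and back with r = 1 gives m = -1 at h = 0 on the ascending branch but m = +1 at h = 0 on the descending branch: the two branches enclose a loop.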

  18. A simple model of hysteresis behavior using spreadsheet analysis

    International Nuclear Information System (INIS)

    Ehrmann, A; Blachowicz, T

    2015-01-01

Hysteresis loops occur in many scientific and technical problems, especially as field dependent magnetization of ferromagnetic materials, but also as stress-strain-curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus an easy macro code, can be used by students to understand how these systems work and how the parameters influence the reactions of the system to an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas, in which similar hysteresis loops occur

  19. Analysis and modeling of rail maintenance costs

    Directory of Open Access Journals (Sweden)

    Amir Ali Bakhshi

    2012-01-01

Full Text Available Railroad maintenance engineering plays an important role in the availability of the rail network and in reducing the cost of railroad incidents. Rail is one of the most important components of the railroad industry; it needs regular maintenance, which accounts for a significant part of total maintenance cost. Any attempt to optimize the total cost of maintenance could substantially reduce the cost of the railroad system and of the industry as a whole. The paper presents a new method to estimate the cost of rail failure using different cost components, such as the cost of inspection and the cost of risk associated with possible accidents. The proposed model is applied to a real-world case study of railroad transportation in the Tehran region and the results are analyzed.

  20. A simple model explaining super-resolution in absolute optical instruments

    Science.gov (United States)

    Leonhardt, Ulf; Sahebdivan, Sahar; Kogan, Alex; Tyc, Tomáš

    2015-05-01

We develop a simple, one-dimensional model for super-resolution in absolute optical instruments that is able to describe the interplay between sources and detectors. Our model explains the subwavelength sensitivity of a point detector to a point source reported in previous computer simulations and experiments (Miñano 2011 New J. Phys. 13 125009; Miñano 2014 New J. Phys. 16 033015).

  1. Melanoma costs: a dynamic model comparing estimated overall costs of various clinical stages.

    Science.gov (United States)

    Alexandrescu, Doru Traian

    2009-11-15

The rapidly increasing incidence of melanoma occurs at the same time as an increase in general healthcare costs, particularly the expenses associated with cancer care. Previous cost estimates in melanoma have not utilized a dynamic model considering the evolution of the disease and have not integrated the multiple costs associated with different aspects of medical interventions and patient-related factors. Furthermore, previous calculations have not been updated to reflect the modern tendencies in healthcare costs. We designed a comprehensive model of expenses in melanoma that considers the dynamic costs generated by the natural progression of the disease, which produces costs associated with treatment, surveillance, loss of income, and terminal care. The complete range of initial clinical (TNM) stages of the disease and initial tumor stages were analyzed in this model and the total healthcare costs for the five years following melanoma presentation at each particular stage were calculated. We have observed dramatic incremental total costs associated with progressively higher initial stages of the disease, ranging from a total of $4,648.48 for in situ tumors to $159,808.17 for Stage IV melanoma. By stage, early lesions incur 30-55 percent of their costs in treating the primary tumor, due to a low rate of recurrence (local, regional, or distant), which limits the need for additional interventions. For in situ melanoma, T1a, and T1b, surveillance is an important contributor to the medical costs, accounting for more than 25 percent of the total cost over 5 years. In contrast, late lesions incur a much larger proportion of their associated costs (up to 80-85%) from the diagnosis and treatment of metastatic disease because of the increased propensity of those lesions to disseminate. This cost increases with increasing tumor stage (from $2,442.17 for T1a to $6,678.00 for T4b). 
The most expensive items in the medical care of patients with melanoma consist of

  2. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables and then a questionnaire survey was employed to collect professionals’ views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified, including “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material increment Index” and “Depreciation amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level on prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardization design to reduce capital cost. Hence, collaborative management is necessary for cost management of prefabrication. Innovation and detailed design were needed to improve cost performance. The new form of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimized the capital cost and improved the cost performance through providing an evaluation and optimization model, which helps managers to

  3. Molten Salt Power Tower Cost Model for the System Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    Turchi, C. S.; Heath, G. A.

    2013-02-01

    This report describes a component-based cost model developed for molten-salt power tower solar power plants. The cost model was developed by the National Renewable Energy Laboratory (NREL), using data from several prior studies, including a contracted analysis from WorleyParsons Group, which is included herein as an Appendix. The WorleyParsons' analysis also estimated material composition and mass for the plant to facilitate a life cycle analysis of the molten salt power tower technology. Details of the life cycle assessment have been published elsewhere. The cost model provides a reference plant that interfaces with NREL's System Advisor Model or SAM. The reference plant assumes a nominal 100-MWe (net) power tower running with a nitrate salt heat transfer fluid (HTF). Thermal energy storage is provided by direct storage of the HTF in a two-tank system. The design assumes dry-cooling. The model includes a spreadsheet that interfaces with SAM via the Excel Exchange option in SAM. The spreadsheet allows users to estimate the costs of different-size plants and to take into account changes in commodity prices. This report and the accompanying Excel spreadsheet can be downloaded at https://sam.nrel.gov/cost.

  4. Simple and robust determination of the activity signature of key carbohydrate metabolism enzymes for physiological phenotyping in model and crop plants

    DEFF Research Database (Denmark)

    Jammer, Alexandra; Gasperl, Anna; Luschin-Ebengreuth, Nora

    2015-01-01

The analysis of physiological parameters is important to understand the link between plant phenotypes and their genetic bases, and therefore is needed as an important element in the analysis of model and crop plants. The activities of enzymes involved in primary carbohydrate metabolism have been shown to be strongly associated with growth performance, crop yield, and quality, as well as stress responses. A simple, fast, and cost-effective method to determine activities for 13 key enzymes involved in carbohydrate metabolism has been established, mainly based on coupled spectrophotometric kinetic...

  5. Simple steps help minimize costs, risks in project contracts

    International Nuclear Information System (INIS)

    Camps, J.A.

    1996-01-01

    Contrary to prevailing opinion, risks and project financing costs can be higher for lump sum (LS) project contracts than under reimbursable-type contracts. An element-by-element analysis of the risks and costs associated with a project enables investors to develop variations of reimbursable contracts. Project managers can use this three-step procedure, along with other recommendations, to measure the hidden project costs and risks associated with LS contracts. The author bases his conclusions on case studies of recent projects in the petroleum refining and petrochemical industries. The findings, however, are general enough to be applicable in other industrial sectors

  6. A cost-based empirical model of the aggregate price determination for the Turkish economy: A multivariate cointegration approach

    Directory of Open Access Journals (Sweden)

    Zeren Fatma

    2010-01-01

Full Text Available This paper examines the long-run relationships between aggregate consumer prices and some cost-based components for the Turkish economy. Based on a simple economic model of macro-scale price formation, multivariate cointegration techniques have been applied to test whether the real data support the a priori model construction. The results reveal that all of the factors related to price determination have a positive impact on consumer prices, as expected. We find that the most significant component contributing to price setting is nominal exchange rate depreciation. We also cannot reject linear homogeneity of the sum of all the price data with respect to domestic inflation. The paper concludes that Turkish consumer prices have in fact a strong cost-push component that contributes to aggregate pricing.

  7. SUPPLIES COSTS: AN EXPLORATORY STUDY WITH APPLICATION OF MEASUREMENT MODEL OF LOGISTICS COSTS

    OpenAIRE

    Ana Paula Ferreira Alves; José Vanderlei Silva Borba; Gilberto Tavares dos Santos; Artur Roberto Gibbon

    2013-01-01

One of the main obstacles to adopting an integrated method for calculating logistics costs is still the lack of adequate cost information. Managing the supply chain and identifying its costs can provide managers with information for decision making, generating competitive advantage. Some models for calculating logistics costs are proposed by Uelze (1974), Dias (1996), Goldratt (2002), Christopher (2007), Castiglioni (2009) and Borba & Gibbon (2009...

  8. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performance than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
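The two-phase scheme can be sketched in one dimension: winner-take-all competitive learning to place the centres, then a Gaussian RBF fit on those centres. This is a toy sketch of the general technique, not the authors' implementation; all function names, the learning rate, and the ridge term are illustrative assumptions.

```python
import math, random

def scl_centers(xs, k, epochs=50, lr=0.1, seed=0):
    """Phase 1, simple competitive learning: for each sample, move the
    nearest (winning) centre a fraction lr toward it."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    for _ in range(epochs):
        for x in xs:
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            centers[i] += lr * (x - centers[i])
    return centers

def _solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, centers, width, ridge=1e-8):
    """Phase 2: fit the output weights of a Gaussian RBF network by
    solving the (ridge-regularized) normal equations."""
    phi = [[math.exp(-((x - c) / width) ** 2) for c in centers] for x in xs]
    k = len(centers)
    A = [[sum(p[a] * p[b] for p in phi) + (ridge if a == b else 0.0)
          for b in range(k)] for a in range(k)]
    rhs = [sum(p[a] * y for p, y in zip(phi, ys)) for a in range(k)]
    return _solve(A, rhs)

def rbf_predict(x, centers, width, w):
    return sum(wi * math.exp(-((x - c) / width) ** 2)
               for c, wi in zip(centers, w))
```

With two well-separated sample clusters, the learned centres settle near the cluster locations, and the RBF weights recover a function generated from known weights on the same centres.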

  9. Explaining Home Bias in Trade: The Role of Time Costs

    Directory of Open Access Journals (Sweden)

    Inkoo Lee

    2010-12-01

    Full Text Available We study how time costs, combined with elasticity of substitution across home and foreign goods, can explain the home bias puzzle in a framework of flexible prices. Using a simple two-country model, we show that introducing time costs to an otherwise standard competitive model improves its ability to rationalize home bias in trade. Our analysis suggests that home bias and corresponding incomplete risk-sharing naturally arise in the presence of time costs, even under the assumption of complete financial markets and low elasticity of substitution between home and foreign goods.

  10. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycles in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, when evaluating the economic efficiency, any existing uncertainty needs to be removed when possible to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (High-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising out of a nuclear fuel cycle cost evaluation from the viewpoint of a cost estimation model. Compared to the same discount rate model, the nuclear fuel cycle cost of a different discount rate model is reduced because the generation quantity in the denominator of the cost equation has been discounted. Namely, if the discount rate is reduced in the back-end process of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole.
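The effect described above, discounting the cost stream and the generation stream at possibly different rates, can be sketched as a levelized-cost calculation (the function and parameter names are illustrative, not the paper's notation):

```python
def levelized_cost(costs, generation, r_cost, r_gen):
    """Levelized fuel cycle cost: discounted costs divided by discounted
    electricity generation. Using different rates for the two streams
    corresponds to a 'different discount rate' model; r_cost == r_gen
    recovers the conventional single-rate model."""
    num = sum(c / (1.0 + r_cost) ** t for t, c in enumerate(costs))
    den = sum(g / (1.0 + r_gen) ** t for t, g in enumerate(generation))
    return num / den
```

With zero discounting, `levelized_cost([100, 100], [10, 10], 0.0, 0.0)` is simply 200/20 = 10; discounting the generation denominator shrinks it and so raises the unit cost relative to leaving generation undiscounted.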

  11. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle costs may vary from country to country, and the estimated cost usually prevails over the real cost, any existing uncertainty should be removed where possible when evaluating economic efficiency, so as to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in a nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with a model that applies the same discount rate throughout, a model with different discount rates yields a lower nuclear fuel cycle cost, because the generation quantity in the denominator of the levelized-cost equation is also discounted. Namely, if the discount rate is reduced in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is reduced as well. Further, it was found that, overall, the same discount rate model overestimates the cost compared with the different discount rate model.

  12. Simple mechanical parameters identification of induction machine using voltage sensor only

    International Nuclear Information System (INIS)

    Horen, Yoram; Strajnikov, Pavel; Kuperman, Alon

    2015-01-01

    Highlights: • A simple low-cost algorithm for induction motor mechanical parameter estimation is proposed. • Only voltage sensing is performed; a speed sensor is not required. • The method is suitable for both wound rotor and squirrel cage motors. - Abstract: A simple low-cost algorithm for estimating induction motor mechanical parameters without a speed sensor is presented in this paper. Estimation is carried out by recording the stator terminal voltage during natural braking and performing subsequent offline curve fitting. The algorithm allows accurate reconstruction of the mechanical time constant as well as the speed dependency of the load torque. Although the mathematical basis of the presented method is developed for wound rotor motors, it is shown to be suitable for squirrel cage motors as well. The algorithm is first tested by reconstruction of simulation model parameters and then by processing measurement results of several motors. Simulation and experimental results support the validity of the proposed algorithm.
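The offline curve-fitting step can be illustrated with a minimal sketch. Here a synthetic exponential voltage envelope stands in for the recorded braking data (the exponential form and the time-constant value are assumptions for illustration, not the paper's actual braking model), and log-linear least squares recovers the time constant:

```python
import math

def fit_time_constant(t, v):
    """Fit log v = log v0 - t / tau by ordinary least squares and return tau."""
    n = len(t)
    logs = [math.log(x) for x in v]
    t_mean = sum(t) / n
    l_mean = sum(logs) / n
    slope = (sum((ti - t_mean) * (li - l_mean) for ti, li in zip(t, logs))
             / sum((ti - t_mean) ** 2 for ti in t))
    return -1.0 / slope

t = [i * 0.03 for i in range(100)]               # 3 s of samples
v = [100.0 * math.exp(-ti / 1.3) for ti in t]    # synthetic envelope, tau = 1.3 s
print(round(fit_time_constant(t, v), 3))
```

Real measurements would be noisy, in which case the same least-squares fit averages the noise out over the record.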

  13. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized simple, but fast, semi-analytical algorithm for computation of stationary wind farm wind fields, with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the NS equations. The wake flow fields are assumed ... With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind located turbines and is subsequently successively solved in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit ...

  14. Total life cycle cost model for electric power stations

    International Nuclear Information System (INIS)

    Cardullo, M.W.

    1995-01-01

    The Total Life Cycle Cost (TLCC) model for electric power stations was developed to provide a technology screening model. The TLCC analysis involves normalizing cost estimates with respect to performance standards and financial assumptions and preparing a profile of all costs over the service life of the power station. When levelized, these costs yield a value expressed as a utility electricity rate. Comparing this cost with the price of electricity for a utility shows whether a viable project exists. Cost components include both internal and external costs. Internal costs are direct costs associated with the purchase and operation of the power station and include initial capital costs and operating and maintenance costs. External costs result from societal and/or environmental impacts that are external to the marketplace and can include air quality impacts due to emissions, infrastructure costs, and other impacts. The cost stream is summed (current dollars) or discounted (constant dollars) to some base year to yield an overall TLCC of each power station technology on a common basis. While minimizing life cycle cost is an important consideration, it may not always be the preferred method for utilities that prefer minimizing capital costs. Such a preference does not always result in technology penetration in a marketplace such as the utility sector. Under various regulatory climates, the utility is likely to weigh initial capital costs heavily while giving limited consideration to other costs such as societal costs. Policy makers considering external costs, such as those resulting from environmental impacts, may reach significantly different conclusions about which technologies are most advantageous to society. The TLCC analysis model for power stations was developed to facilitate consideration of all of these perspectives.
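The levelization arithmetic described above can be sketched in a few lines. All figures below (capital, O&M, externality, output, discount rate) are invented placeholders, not values from the model:

```python
def tlcc_levelized(capital, annual_om, annual_ext, annual_kwh, life, r):
    """Discount a cost stream (capital in year 0, O&M plus externalities in
    years 1..life) to the base year and divide by discounted energy output."""
    costs = [capital] + [annual_om + annual_ext] * life
    pv_cost = sum(c / (1 + r) ** t for t, c in enumerate(costs))
    pv_kwh = sum(annual_kwh / (1 + r) ** t for t in range(1, life + 1))
    return pv_cost / pv_kwh   # $/kWh

rate = tlcc_levelized(capital=1200.0, annual_om=40.0, annual_ext=10.0,
                      annual_kwh=1000.0, life=30, r=0.07)
print(round(rate, 4))
```

Note how a higher discount rate weights the up-front capital more heavily relative to the lifetime energy, raising the levelized rate; this is the capital-cost bias the record attributes to some regulatory climates.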

  15. Microalgal CO2 sequestering – Modeling microalgae production costs

    International Nuclear Information System (INIS)

    Bilanovic, Dragoljub; Holland, Mark; Armon, Robert

    2012-01-01

    Highlights: ► Microalgae production costs were modeled as a function of specific expenses. ► The effects of uncontrollable expenses/factors were incorporated into the model. ► Modeled microalgae production costs were in the range $102–1503 t⁻¹ ha⁻¹ y⁻¹. - Abstract: Microalgae CO2 sequestering facilities might become an industrial reality if microalgae biomass could be produced at a cost below $500.00 t⁻¹. We develop a model for estimating the total production costs of microalgae as a function of known production-specific expenses, and incorporate into the model the effects of uncontrollable factors which affect those expenses. Random fluctuations were intentionally incorporated into the model, and consequently into the generated cost/technology scenarios, because every logically interconnected piece of equipment or operation used in the design, construction, operation and maintenance of a production process is inevitably subject to random cost/price fluctuations which can neither be eliminated nor controlled a priori. A total of 152 cost/technology scenarios were evaluated, yielding 44 scenarios in which the predicted total production cost of microalgae (PTPCM) was in the range $200–500 t⁻¹ ha⁻¹ y⁻¹. An additional 24 scenarios were found with PTPCM in the range $102–200 t⁻¹ ha⁻¹ y⁻¹. These findings suggest that microalgae CO2 sequestering and the production of commercial compounds from microalgal biomass can be an economically viable venture even today, when microalgae production technology is still far from its optimum.
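The scenario screening can be mimicked with a toy Monte Carlo. The expense items, their base values, and the ±30% fluctuation band below are invented for illustration; only the structure (perturb each expense, count scenarios landing in a target cost band) follows the record:

```python
import random

random.seed(42)   # deterministic toy run

base_items = {"nutrients": 120.0, "energy": 90.0, "labor": 60.0,
              "harvesting": 80.0, "capital": 150.0}   # $ t^-1 ha^-1 y^-1, assumed

def scenario_cost(items, spread=0.3):
    # each expense fluctuates uniformly within +/- spread of its base value
    return sum(v * random.uniform(1 - spread, 1 + spread) for v in items.values())

scenarios = [scenario_cost(base_items) for _ in range(152)]
in_band = sum(1 for c in scenarios if 200.0 <= c <= 500.0)
print(0 < in_band < 152)
```

Each scenario is one draw of all fluctuating expenses; counting the draws that fall in the viability band gives the kind of scenario tally the record reports.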

  16. User Delay Cost Model and Facilities Maintenance Cost Model for a Terminal Control Area : Volume 1. Model Formulation and Demonstration

    Science.gov (United States)

    1978-05-01

    The User Delay Cost Model (UDCM) is a Monte Carlo computer simulation of essential aspects of Terminal Control Area (TCA) air traffic movements that would be affected by facility outages. The model can also evaluate delay effects due to other factors...

  17. Activity-Based Costing Model for Assessing Economic Performance.

    Science.gov (United States)

    DeHayes, Daniel W.; Lovrinic, Joseph G.

    1994-01-01

    An economic model for evaluating the cost performance of academic and administrative programs in higher education is described. Examples from its application at Indiana University-Purdue University Indianapolis are used to illustrate how the model has been used to control costs and reengineer processes. (Author/MSE)

  18. Automated cost modeling for coal combustion systems

    International Nuclear Information System (INIS)

    Rowe, R.M.; Anast, K.R.

    1991-01-01

    This paper reports on cost information developed at the AMAX R and D Center for coal-water slurry production, implemented in an automated spreadsheet (Lotus 123) for personal computer use. The spreadsheet format allows the user to evaluate the impacts of various process options, coal feedstock characteristics, fuel characteristics, plant location sites, and plant sizes on fuel cost. Model flexibility reduces the time and labor required to determine fuel costs and provides a basis to compare fuels manufactured by different processes. The model input includes coal characteristics, plant flowsheet definition, plant size, and market location. Based on these inputs, selected unit operations are chosen for coal processing.

  19. Life Cycle Costing Model for Solid Waste Management

    DEFF Research Database (Denmark)

    Martinez-Sanchez, Veronica; Astrup, Thomas Fruergaard

    2014-01-01

    To ensure sustainability of solid waste management, there is a need for cost assessment models which are consistent with environmental and social assessments. However, there is a current lack of standardized terminology and methodology to evaluate economic performances, and this complicates ... LCC, e.g. waste generator, waste operator and public finances, and the perspective often defines the system boundaries of the study: waste operators often focus on their own cost, i.e. technology based, whereas waste generators and public finances often focus on the entire waste system, i.e. system based. Figure 1 illustrates the proposed modeling framework, which distinguishes between a) budget costs, b) externality costs and c) transfers, and defines unit costs of each technology (per ton of input waste). Unit costs are afterwards combined with a mass balance to calculate the technology cost...

  20. Wilderness Recreation Demand: A Comparison of Travel Cost and On-Site Cost Models

    Science.gov (United States)

    J.M. Bowker; A. Askew; L. Seymour; J.P. Zhu; D. English; C.M. Starbuck

    2009-01-01

    This study used travel cost and on-site day cost models, coupled with the Forest Service’s National Visitor Use Monitoring data, to examine the demand for and value of recreation access to designated Wilderness.

  1. Ship Repair Workflow Cost Model

    National Research Council Canada - National Science Library

    McDevitt, Mike

    2003-01-01

    The effects of intermittent work patterns and funding on the costs of ship repair and maintenance were modeled for the San Diego region in 2002 for Supervisor of Shipbuilding and Repair (SUPSHIP) San Diego...

  2. Model reduction by weighted Component Cost Analysis

    Science.gov (United States)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions for component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very-high-order systems. A numerical example for the MINIMAST system is presented.
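For a diagonal (modal) system driven by unit white noise, the component-cost idea reduces to simple arithmetic: a mode dx_i = -a_i x_i + w has steady-state variance 1/(2 a_i), and its cost contribution is the output weight times that variance. The sketch below, with invented decay rates and weights, keeps the modes with the largest costs; it is a hedged illustration of the selection rule, not the paper's general (non-diagonal, colored-noise) formulation:

```python
def component_costs(decay_rates, output_weights):
    # steady-state variance of dx = -a*x + unit white noise is 1/(2a)
    return [c / (2.0 * a) for a, c in zip(decay_rates, output_weights)]

def reduce_model(decay_rates, output_weights, keep):
    """Indices of the `keep` modes with the largest component costs."""
    costs = component_costs(decay_rates, output_weights)
    order = sorted(range(len(costs)), key=lambda i: costs[i], reverse=True)
    return sorted(order[:keep])

a = [0.1, 1.0, 5.0, 20.0]      # modal decay rates (illustrative)
c = [1.0, 1.0, 1.0, 1.0]       # output weights (illustrative)
print(reduce_model(a, c, keep=2))
```

Slow, weakly damped modes accumulate the most variance and hence the most cost, so they survive the truncation.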

  3. Modeling the lowest-cost splitting of a herd of cows by optimizing a cost function

    Science.gov (United States)

    Gajamannage, Kelum; Bollt, Erik M.; Porter, Mason A.; Dawkins, Marian S.

    2017-06-01

    Animals live in groups to defend against predation and to obtain food. However, for some animals—especially ones that spend long periods of time feeding—there are costs if a group chooses to move on before their nutritional needs are satisfied. If the conflict between feeding and keeping up with a group becomes too large, it may be advantageous for some groups of animals to split into subgroups with similar nutritional needs. We model the costs and benefits of splitting in a herd of cows using a cost function that quantifies individual variation in hunger, desire to lie down, and predation risk. We model the costs associated with hunger and lying desire as the standard deviations of individuals within a group, and we model predation risk as an inverse exponential function of the group size. We minimize the cost function over all plausible groups that can arise from a given herd and study the dynamics of group splitting. We examine how the cow dynamics and cost function depend on the parameters in the model and consider two biologically-motivated examples: (1) group switching and group fission in a herd of relatively homogeneous cows, and (2) a herd with an equal number of adult males (larger animals) and adult females (smaller animals).
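The cost function described above can be sketched directly: within-group standard deviations of hunger and lying desire, plus a predation-risk term that decays exponentially in group size. The weights and risk constant below are assumptions for illustration; the example checks that splitting a heterogeneous herd lowers the total cost:

```python
import math
from statistics import pstdev

def group_cost(hunger, lying, risk_scale=2.0):
    """Heterogeneity costs (std devs within the group) plus predation risk,
    modeled as an inverse exponential of group size; risk_scale is assumed."""
    n = len(hunger)
    return pstdev(hunger) + pstdev(lying) + risk_scale * math.exp(-n)

def herd_cost(groups):
    return sum(group_cost(h, l) for h, l in groups)

# a small herd with two clearly different nutritional states
hunger = [0.9, 0.8, 0.85, 0.1, 0.15, 0.2]
lying = [0.2, 0.25, 0.15, 0.8, 0.9, 0.85]

together = herd_cost([(hunger, lying)])
split = herd_cost([(hunger[:3], lying[:3]), (hunger[3:], lying[3:])])
print(split < together)
```

The trade-off is visible in the two terms: splitting shrinks the heterogeneity cost but raises the predation term, since smaller groups carry more risk.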

  4. Response of Simple, Model Systems to Extreme Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, Rodney C. [Univ. of Michigan, Ann Arbor, MI (United States); Lang, Maik [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-07-30

    The focus of the research was on the application of high-pressure/high-temperature techniques, together with intense energetic ion beams, to the study of the behavior of simple oxide systems (e.g., SiO2, GeO2, CeO2, TiO2, HfO2, SnO2, ZnO and ZrO2) under extreme conditions. These simple stoichiometries provide unique model systems for the analysis of structural responses to pressure up to and above 1 Mbar, temperatures of up to several thousands of kelvin, and the extreme energy density generated by energetic heavy ions (tens of keV/atom). The investigations included systematic studies of radiation- and pressure-induced amorphization of high P-T polymorphs. By studying the response of simple stoichiometries that have multiple structural “outcomes”, we have established the basic knowledge required for the prediction of the response of more complex structures to extreme conditions. We especially focused on the amorphous state and characterized the different non-crystalline structure-types that result from the interplay of radiation and pressure. For such experiments, we made use of recent technological developments, such as the perforated diamond-anvil cell and in situ investigation using synchrotron x-ray sources. We have been particularly interested in using extreme pressures to alter the electronic structure of a solid prior to irradiation. We expected that the effects of modified band structure would be evident in the track structure and morphology, information which is much needed to describe theoretically the fundamental physics of track-formation. Finally, we investigated the behavior of different simple-oxide, composite nanomaterials (e.g., uncoated nanoparticles vs. core/shell systems) under coupled, extreme conditions. This provided insight into surface and boundary effects on phase stability under extreme conditions.

  5. Cost and cost-effectiveness of tuberculosis treatment shortening: a model-based analysis.

    Science.gov (United States)

    Gomez, G B; Dowdy, D W; Bastos, M L; Zwerling, A; Sweeney, S; Foster, N; Trajman, A; Islam, M A; Kapiga, S; Sinanovic, E; Knight, G M; White, R G; Wells, W A; Cobelens, F G; Vassall, A

    2016-12-01

    Despite improvements in treatment success rates for tuberculosis (TB), the current six-month regimen duration remains a challenge for many National TB Programmes, health systems, and patients. There is increasing investment in the development of shortened regimens, with a number of candidates in phase 3 trials. We developed an individual-based decision analytic model to assess the cost-effectiveness of a hypothetical four-month regimen for first-line treatment of TB, assuming non-inferiority to current regimens of six-month duration. The model was populated using extensive, empirically collected data to estimate the economic impact on both health systems and patients of regimen shortening for first-line TB treatment in South Africa, Brazil, Bangladesh, and Tanzania. We explicitly considered 'real world' constraints such as sub-optimal guideline adherence. From a societal perspective, a shortened regimen, priced at USD1 per day, could be a cost-saving option in South Africa, Brazil, and Tanzania, but would not be cost-effective in Bangladesh when compared to one gross domestic product (GDP) per capita. Incorporating 'real world' constraints reduces cost-effectiveness. Patient-incurred costs could be reduced in all settings. From a health service perspective, increased drug costs need to be balanced against decreased delivery costs. The new regimen would remain a cost-effective option, when compared to each country's GDP per capita, even if new drugs cost up to USD7.5 and USD53.8 per day in South Africa and Brazil, respectively; this threshold was above USD1 in Tanzania and under USD1 in Bangladesh. Reducing the duration of first-line TB treatment has the potential for substantial economic gains from a patient perspective. The potential economic gains for health services may also be important, but will be context-specific and dependent on the appropriate pricing of any new regimen.
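The willingness-to-pay threshold logic used in such analyses is simple arithmetic: an intervention that costs more is judged cost-effective when its incremental cost per unit of health gained stays under the threshold (here, GDP per capita). All numbers below are invented placeholders, not the study's estimates:

```python
def icer(cost_new, cost_old, daly_new, daly_old):
    # incremental cost-effectiveness ratio, $ per DALY averted
    return (cost_new - cost_old) / (daly_old - daly_new)

def cost_effective(cost_new, cost_old, daly_new, daly_old, gdp_per_capita):
    if cost_new <= cost_old:          # cheaper and non-inferior: cost-saving
        return True
    return icer(cost_new, cost_old, daly_new, daly_old) <= gdp_per_capita

# shorter regimen: higher drug cost, lower delivery cost, fewer DALYs
print(cost_effective(cost_new=480.0, cost_old=450.0,
                     daly_new=0.95, daly_old=1.00,
                     gdp_per_capita=1500.0))
```

Raising the new regimen's price pushes the ICER up until it crosses the threshold, which is the mechanism behind the per-day price ceilings reported above.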

  6. Hi-Plex for Simple, Accurate, and Cost-Effective Amplicon-based Targeted DNA Sequencing.

    Science.gov (United States)

    Pope, Bernard J; Hammet, Fleur; Nguyen-Dumont, Tu; Park, Daniel J

    2018-01-01

    Hi-Plex is a suite of methods to enable simple, accurate, and cost-effective highly multiplex PCR-based targeted sequencing (Nguyen-Dumont et al., Biotechniques 58:33-36, 2015). At its core is the principle of using gene-specific primers (GSPs) to "seed" (or target) the reaction and universal primers to "drive" the majority of the reaction. In this manner, effects on amplification efficiencies across the target amplicons can, to a large extent, be restricted to early seeding cycles. Product sizes are defined within a relatively narrow range to enable high-specificity size selection, replication uniformity across target sites (including in the context of fragmented input DNA such as that derived from fixed tumor specimens (Nguyen-Dumont et al., Biotechniques 55:69-74, 2013; Nguyen-Dumont et al., Anal Biochem 470:48-51, 2015), and application of high-specificity genetic variant calling algorithms (Pope et al., Source Code Biol Med 9:3, 2014; Park et al., BMC Bioinformatics 17:165, 2016). Hi-Plex offers a streamlined workflow that is suitable for testing large numbers of specimens without the need for automation.

  7. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models
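The distinction between the two model types can be illustrated with made-up hourly data. With a single flat marginal cost the two representations give identical annual totals, consistent with the small differences reported above; real differences come from unit-commitment constraints (start-ups, ramping) that only a chronological model sees:

```python
load = [50, 60, 80, 70, 55, 65, 90, 75]   # MW, chronological hourly loads (assumed)
wind = [10, 5, 0, 20, 15, 10, 5, 15]      # MW, wind output in the same hours (assumed)

net_chrono = [l - w for l, w in zip(load, wind)]   # chronological net load
net_ldc = sorted(net_chrono, reverse=True)         # load-duration-curve ordering

marginal_cost = 30.0   # $/MWh, one thermal unit, no commitment constraints
cost_chrono = marginal_cost * sum(net_chrono)
cost_ldc = marginal_cost * sum(net_ldc)
print(cost_chrono == cost_ldc)
```

Sorting rearranges the hours but preserves the energy total, so any purely energy-based cost agrees; the LDC model simply discards the time order that commitment decisions depend on.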

  8. Liquid-liquid critical point in a simple analytical model of water

    Science.gov (United States)

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. At a simple level, this model describes the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: one part is the gas region, the second a high-density liquid, and the third a low-density liquid.

  9. The attentional drift-diffusion model extends to simple purchasing decisions.

    Science.gov (United States)

    Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio

    2012-01-01

    How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions.
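The mechanism can be sketched with a minimal drift-diffusion simulation of a buy/no-buy decision: evidence drifts toward value minus price, and whichever quantity is momentarily unattended is discounted. The attention discount, drift, noise, and fixation schedule below are illustrative assumptions, not the fitted aDDM parameters:

```python
import random

def simulate_purchase(value, price, theta=0.5, drift=0.002, noise=0.02,
                      threshold=1.0, rng=None):
    """Accumulate relative evidence for 'buy' vs 'no buy'; the momentarily
    unattended quantity is discounted by theta, as in attention-weighted DDMs."""
    rng = rng or random.Random(0)
    x, t, looking_at_item = 0.0, 0, True
    while abs(x) < threshold:
        v = value if looking_at_item else theta * value
        p = theta * price if looking_at_item else price
        x += drift * (v - p) + rng.gauss(0.0, noise)
        t += 1
        if t % 300 == 0:            # alternate fixation every 300 steps
            looking_at_item = not looking_at_item
    return x > 0, t                 # (buy?, reaction time in steps)

rng = random.Random(1)
buys = sum(simulate_purchase(10.0, 4.0, rng=rng)[0] for _ in range(200))
print(buys > 100)
```

With value well above price the drift is positive under either fixation, so almost every trial ends in a purchase; shrinking theta toward 1 removes the fixation-driven choice bias the record discusses.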

  10. System cost model user's manual, version 1.2

    International Nuclear Information System (INIS)

    Shropshire, D.

    1995-06-01

    The System Cost Model (SCM) was developed by Lockheed Martin Idaho Technologies in Idaho Falls, Idaho and MK-Environmental Services in San Francisco, California to support the Baseline Environmental Management Report sensitivity analysis for the U.S. Department of Energy (DOE). The SCM serves the needs of the entire DOE complex for treatment, storage, and disposal (TSD) of mixed low-level, low-level, and transuranic waste. The model can be used to evaluate total complex costs based on various configuration options or to evaluate site-specific options. The site-specific cost estimates are based on generic assumptions such as waste loads and densities, treatment processing schemes, existing facilities capacities and functions, storage and disposal requirements, schedules, and cost factors. The SCM allows customization of the data for detailed site-specific estimates. There are approximately forty TSD module designs that have been further customized to account for design differences for nonalpha, alpha, remote-handled, and transuranic wastes. The SCM generates cost profiles based on the model default parameters or customized user-defined input and also generates costs for transporting waste from generators to TSD sites

  11. Modeling the cost and benefit of proteome regulation in a growing bacterial cell

    Science.gov (United States)

    Sharma, Pooja; Pratim Pandey, Parth; Jain, Sanjay

    2018-07-01

    Escherichia coli cells differentially regulate the production of metabolic and ribosomal proteins in order to stay close to an optimal growth rate in different environments, and exhibit the bacterial growth laws as a consequence. We present a simple mathematical model of a growing-dividing cell in which an internal dynamical mechanism regulates the allocation of proteomic resources between different protein sectors. The model allows an endogenous determination of the growth rate of the cell as a function of cellular and environmental parameters, and reproduces the bacterial growth laws. We use the model and its variants to study the balance between the cost and benefit of regulation. A cost is incurred because cellular resources are diverted to produce the regulatory apparatus. We show that there is a window of environments or a ‘niche’ in which the unregulated cell has a higher fitness than the regulated cell. Outside this niche there is a large space of constant and time varying environments in which regulation is an advantage. A knowledge of the ‘niche boundaries’ allows one to gain an intuitive understanding of the class of environments in which regulation is an advantage for the organism and which would therefore favour the evolution of regulation. The model allows us to determine the ‘niche boundaries’ as a function of cellular parameters such as the size of the burden of the regulatory apparatus. This class of models may be useful in elucidating various tradeoffs in cells and in making in-silico predictions relevant for synthetic biology.
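A toy allocation calculation conveys the flavor of the cost-benefit trade-off. The min-of-two-sectors growth law and the rate constants below are assumptions for illustration, not the paper's dynamical model; the point is that the optimal split between metabolic and ribosomal sectors shifts with the environment, which is what regulation buys the cell:

```python
def growth_rate(x, k_metabolic, k_ribosomal):
    # growth is set by the slower of nutrient uptake (metabolic fraction x)
    # and translation (ribosomal fraction 1 - x); toy law, not the paper's
    return min(k_metabolic * x, k_ribosomal * (1.0 - x))

def best_allocation(k_m, k_r, steps=10000):
    """Grid search for the metabolic fraction that maximizes growth."""
    xs = [i / steps for i in range(steps + 1)]
    return max(xs, key=lambda x: growth_rate(x, k_m, k_r))

# richer medium (larger k_m) shifts the optimum toward ribosomes (smaller x)
poor = best_allocation(1.0, 2.0)
rich = best_allocation(4.0, 2.0)
print(poor > rich)
```

An unregulated cell is stuck with one fixed x, optimal only inside its 'niche'; a regulated cell tracks the moving optimum at the price of building the regulatory apparatus.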

  12. A simple shear limited, single size, time dependent flocculation model

    Science.gov (United States)

    Kuprenas, R.; Tran, D. A.; Strom, K.

    2017-12-01

    This research focuses on the modeling of flocculation of cohesive sediment due to turbulent shear, specifically investigating the dependency of flocculation on the concentration of cohesive sediment. Flocculation is important in larger sediment transport models because cohesive particles can form aggregates that are orders of magnitude larger than their unflocculated state. As the settling velocity of each particle is determined by the sediment size, density, and shape, accounting for this aggregation is important in determining where the sediment is deposited. This study provides a new formulation for flocculation of cohesive sediment by modifying the Winterwerp (1998) flocculation model (W98) so that it limits floc size to that of the Kolmogorov micro length scale. The W98 model is a simple approach that calculates the average floc size as a function of time. Because of its simplicity, the W98 model is ideal for implementing into larger sediment transport models; however, it tends to overpredict the dependency of the floc size on concentration. It was found that modifying the coefficients within the original model did not allow the model to capture the dependency on concentration. Therefore, a new term was added within the breakup kernel of the W98 formulation. The result is a single-size, shear-limited, time-dependent flocculation model that effectively captures the dependency of the equilibrium floc size on both suspended sediment concentration and the time to equilibrium. The overall behavior of the new model is explored and shown to align well with other studies of flocculation. Winterwerp, J. C. (1998). A simple model for turbulence-induced flocculation of cohesive sediment. Journal of Hydraulic Research, 36(3):309-326.
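The single-size, shear-limited idea can be sketched as one rate equation stepped in time: growth scales with concentration and shear, breakup grows steeply with floc size, and floc size is capped at the Kolmogorov microscale. The growth and breakup terms and all coefficients below are illustrative stand-ins, not the calibrated W98 kernels, and the shear limit is enforced crudely by a hard cap:

```python
def step_floc(L, c, G, dt, ka=0.5, kb=0.05, Lp=4e-6, eta=1e-4):
    """One explicit Euler step for mean floc size L: aggregation ~ c*G*L,
    breakup grows steeply above the primary-particle size Lp; min() crudely
    enforces the Kolmogorov-scale cap eta. All coefficients are assumed."""
    growth = ka * c * G * L
    breakup = kb * G ** 1.5 * (L - Lp) * L ** 2 / Lp
    return min(L + dt * (growth - breakup), eta)

L = 5e-6                      # initial floc size (m)
for _ in range(2000):         # 20 s of model time at dt = 0.01 s
    L = step_floc(L, c=0.1, G=10.0, dt=0.01)
print(L <= 1e-4)
```

Starting from the primary-particle scale, the size grows toward equilibrium and is held at the microscale cap, which is the qualitative behavior the modified model is built to produce.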

  13. Preliminary Multi-Variable Parametric Cost Model for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Hendrichs, Todd

    2010-01-01

    This slide presentation reviews creating a preliminary multi-variable cost model for the contract costs of making a space telescope. There is discussion of the methodology for collecting the data, definition of the statistical analysis methodology, single variable model results, testing of historical models and an introduction of the multi variable models.
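For a multi-variable parametric model of power-law form, exponents can be recovered from differences of log-costs. The sketch below uses synthetic noise-free data generated from an assumed law (cost as a power law in aperture and mass, with invented exponents); fitting real historical data would call for a multi-variable least-squares regression in log space instead:

```python
import math

def cost(aperture, mass, k=100.0, p=1.7, q=0.5):
    # assumed power-law cost model: cost = k * aperture^p * mass^q
    return k * aperture ** p * mass ** q

# exponents recovered from log-cost differences (exact for noise-free data)
p_est = (math.log(cost(2.0, 100.0)) - math.log(cost(1.0, 100.0))) / math.log(2.0)
q_est = (math.log(cost(1.0, 400.0)) - math.log(cost(1.0, 100.0))) / math.log(4.0)
print(round(p_est, 6), round(q_est, 6))
```

Doubling one driver while holding the other fixed isolates that driver's exponent, which is the single-variable step that a multi-variable regression generalizes.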

  14. Alternative wind power modeling methods using chronological and load duration curve production cost models

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M R

    1996-04-01

    Because wind is an intermittent resource, capturing the temporal variation in wind power is an important issue in the context of utility production cost modeling. Many production cost models use a method that creates a cumulative probability distribution outside the time domain. The purpose of this report is to examine two production cost models that represent the two major model types: chronological and load duration curve models. This report is part of the ongoing research undertaken by the Wind Technology Division of the National Renewable Energy Laboratory in utility modeling and wind system integration.

  15. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    Science.gov (United States)

    Devi, Y. D.; Kota, V. K. B.

    1993-07-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.

  16. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    International Nuclear Information System (INIS)

    Devi, Y.D.; Kota, V.K.B.

    1993-01-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150 Nd

  17. A Simple Model to Study Tau Pathology

    Directory of Open Access Journals (Sweden)

    Alexander L. Houck

    2016-01-01

    Tau proteins play a role in the stabilization of microtubules, but in pathological conditions (tauopathies), tau is modified by phosphorylation and can form aberrant aggregates. These aggregates could be toxic to cells, and different cell models have been used to test for compounds that might prevent these tau modifications. Here, we have used a cell model involving the overexpression of human tau in human embryonic kidney 293 cells. In human embryonic kidney 293 cells stably expressing tau, we have been able to replicate the phosphorylation of intracellular tau. This intracellular tau increases its own level of phosphorylation and aggregates, likely due to the regulatory effect of some growth factors on specific tau kinases such as GSK3. Under these conditions, a change in secreted tau was observed. Reversal of tau phosphorylation and aggregation was achieved with lithium, a GSK3 inhibitor. We therefore propose this as a simple cell model in which to study tau pathology in nonneuronal cells, given their viability and ease of use.

  18. A maintenance and operations cost model for DSN

    Science.gov (United States)

    Burt, R. W.; Kirkbride, H. L.

    1977-01-01

    A cost model for the DSN is developed which is useful in analyzing the 10-year Life Cycle Cost of the Bent Pipe Project. The philosophy behind the development and the use made of a computer data base are detailed; the applicability of this model to other projects is discussed.
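
    At its core, a 10-year Life Cycle Cost analysis like the one described discounts an annual operations and maintenance cost stream to present value. A minimal sketch (the horizon, discount rate, and cost figures below are hypothetical, not taken from the DSN model):

```python
def life_cycle_cost(annual_costs, rate):
    """Present value of an annual O&M cost stream (year-end discounting).
    A generic NPV sketch, not the DSN model's actual structure."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(annual_costs, start=1))

# Hypothetical flat $120k/year O&M cost over a 10-year horizon at 7% discount.
lcc = life_cycle_cost([120.0] * 10, 0.07)
```

    With a flat cost stream this reduces to an ordinary annuity factor times the annual cost, so discounting brings the nominal 1200 total down substantially.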

  19. Swarming behavior of simple model squirmers

    International Nuclear Information System (INIS)

    Thutupalli, Shashi; Seemann, Ralf; Herminghaus, Stephan

    2011-01-01

    We have studied experimentally the collective behavior of self-propelling liquid droplets, which closely mimic the locomotion of some protozoal organisms, the so-called squirmers. For the sake of simplicity, we concentrate on quasi-two-dimensional (2D) settings, although our swimmers provide a fully 3D propulsion scheme. At an areal density of 0.46, we find strong polar correlation of the locomotion velocities of neighboring droplets, which decays over less than one droplet diameter. When the areal density is increased to 0.78, distinct peaks show up in the angular correlation function, which point to the formation of ordered rafts. This shows that pronounced textures, beyond what has been seen in simulations so far, may show up in crowds of simple model squirmers, despite the simplicity of their (purely physical) mutual interaction.

  20. Swarming behavior of simple model squirmers

    Energy Technology Data Exchange (ETDEWEB)

    Thutupalli, Shashi; Seemann, Ralf; Herminghaus, Stephan, E-mail: shashi.thutupalli@ds.mpg.de, E-mail: stephan.herminghaus@ds.mpg.de [Max Planck Institute for Dynamics and Self-Organization, Bunsenstrasse 10, 37073 Goettingen (Germany)

    2011-07-15

    We have studied experimentally the collective behavior of self-propelling liquid droplets, which closely mimic the locomotion of some protozoal organisms, the so-called squirmers. For the sake of simplicity, we concentrate on quasi-two-dimensional (2D) settings, although our swimmers provide a fully 3D propulsion scheme. At an areal density of 0.46, we find strong polar correlation of the locomotion velocities of neighboring droplets, which decays over less than one droplet diameter. When the areal density is increased to 0.78, distinct peaks show up in the angular correlation function, which point to the formation of ordered rafts. This shows that pronounced textures, beyond what has been seen in simulations so far, may show up in crowds of simple model squirmers, despite the simplicity of their (purely physical) mutual interaction.

  1. An improved COCOMO software cost estimation model | Duke ...

    African Journals Online (AJOL)

    In this paper, we discuss the methodologies adopted previously in software cost estimation using the COnstructive COst MOdels (COCOMOs). From our analysis, COCOMOs produce very high software development efforts, which eventually produce high software development costs. Consequently, we propose its extension, ...

  2. Modelling the phonotactic structure of natural language words with simple recurrent networks

    NARCIS (Netherlands)

    Stoianov; Nerbonne, J; Bouma, H; Coppen, PA; vanHalteren, H; Teunissen, L

    1998-01-01

    Simple Recurrent Networks (SRN) are Neural Network (connectionist) models able to process natural language. Phonotactics concerns the order of symbols in words. We continued an earlier unsuccessful trial to model the phonotactics of Dutch words with SRNs. In order to overcome the previously reported

  3. An Environmentally Oriented Constructive Cost Model In Information ...

    African Journals Online (AJOL)

    A model was designed to assist software developers in Nigeria to estimate software effort, duration and cost, as a result of the difficulties in understanding the parameters of the traditional Constructive Cost Model II (COCOMO II), which was designed for a specific environment, using Source Lines of Code (SLOC). Results ...

  4. Local Telephone Costs and the Design of Rate Structures,

    Science.gov (United States)

    1981-05-01

    basic principles developed from this theory. These principles call for provisionally pricing each of the firm’s outputs at its marginal cost, testing ... rule--prices are increased above marginal costs in inverse proportion to the individual price elasticities of demand. This paper applies ratemaking ... The following sections develop a series of simple models that successively incorporate these basic elements. Throughout the paper I make several

  5. Trophic dynamics of a simple model ecosystem.

    Science.gov (United States)

    Bell, Graham; Fortier-Dubois, Étienne

    2017-09-13

    We have constructed a model of community dynamics that is simple enough to enumerate all possible food webs, yet complex enough to represent a wide range of ecological processes. We use the transition matrix to predict the outcome of succession and then investigate how the transition probabilities are governed by resource supply and immigration. Low-input regimes lead to simple communities whereas trophically complex communities develop when there is an adequate supply of both resources and immigrants. Our interpretation of trophic dynamics in complex communities hinges on a new principle of mutual replenishment, defined as the reciprocal alternation of state in a pair of communities linked by the invasion and extinction of a shared species. Such neutral couples are the outcome of succession under local dispersal and imply that food webs will often be made up of suites of trophically equivalent species. When immigrants arrive from an external pool of fixed composition a similar principle predicts a dynamic core of webs constituting a neutral interchange network, although communities may express an extensive range of other webs whose membership is only in part predictable. The food web is not in general predictable from whole-community properties such as productivity or stability, although it may profoundly influence these properties. © 2017 The Author(s).
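
    The transition-matrix view of succession described above can be made concrete with a toy Markov chain over community states. The three states and all transition probabilities below are invented for illustration; they are not from the paper:

```python
def stationary(P, steps=2000):
    """Power-iterate a row-stochastic transition matrix from the uniform
    distribution until the state distribution settles at its stationary value."""
    n = len(P)
    dist = [1.0 / n] * n
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

# Hypothetical states: 0 = bare substrate, 1 = simple grazer community,
# 2 = trophically complex web.
P = [
    [0.60, 0.30, 0.10],
    [0.10, 0.60, 0.30],
    [0.05, 0.15, 0.80],
]

pi = stationary(P)  # long-run fraction of time spent in each community state
```

    Under these illustrative numbers the complex web dominates the long run, echoing the record's point that adequate resource supply and immigration favour trophically complex communities.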

  6. Grotoco@SLAM: Second Language Acquisition Modeling with Simple Features, Learners and Task-wise Models

    DEFF Research Database (Denmark)

    Klerke, Sigrid; Martínez Alonso, Héctor; Plank, Barbara

    2018-01-01

    We present our submission to the 2018 Duolingo Shared Task on Second Language Acquisition Modeling (SLAM). We focus on evaluating a range of features for the task, including user-derived measures, while examining how far we can get with a simple linear classifier. Our analysis reveals that errors...

  7. Modelling the Costs of Preserving Digital Assets

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad; Nielsen, Anders Bo; Thirifays, Alex

    2012-01-01

    Information is increasingly being produced in digital form, and some of it must be preserved for the long term. Digital preservation includes a series of actively managed activities that require on-going funding. To obtain sufficient resources, there is a need for assessing the costs ... and the benefits accrued by preserving the assets. Cost data is also needed for optimizing activities and comparing the costs of different preservation alternatives. The purpose of this study is to analyse generic requirements for modelling the cost of preserving digital assets. The analysis was based ...

  8. Scenario Analysis With Economic-Energy Systems Models Coupled to Simple Climate Models

    Science.gov (United States)

    Hanson, D. A.; Kotamarthi, V. R.; Foster, I. T.; Franklin, M.; Zhu, E.; Patel, D. M.

    2008-12-01

    Here, we compare two scenarios based on Stanford University's Energy Modeling Forum Study 22 on global cooperative and non-cooperative climate policies. In the former, efficient transition paths are implemented, including technology Research and Development effort, energy conservation programs, and price signals for greenhouse gas (GHG) emissions. In the non-cooperative case, some countries try to relax their regulations and be free riders. Total emissions and costs are higher in the non-cooperative scenario. The simulations, including climate impacts, run to the year 2100. We use the Argonne AMIGA-MARS economic-energy systems model, Texas A&M University's Forest and Agricultural Sector Optimization Model (FASOM), and the University of Illinois's Integrated Science Assessment Model (ISAM), with offline coupling between FASOM and AMIGA-MARS and online coupling between AMIGA-MARS and ISAM. This set of models captures the interaction of terrestrial systems, land use, crops and forests, climate change, human activity, and energy systems. Our scenario simulations represent dynamic paths over which all the climate, terrestrial, economic, and energy technology equations are solved simultaneously. Special attention is paid to biofuels and how they interact with conventional gasoline/diesel fuel markets. Possible low-carbon penetration paths are based on estimated costs for new technologies, including cellulosic biomass, coal-to-liquids, plug-in electric vehicles, solar and nuclear energy. We explicitly explore key uncertainties that affect mitigation and adaptation scenarios.

  9. A simple model for calculating air pollution within street canyons

    Science.gov (United States)

    Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.

    2014-04-01

    This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons, employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is good agreement between estimated and observed hourly concentrations (e.g. fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model performs better for wind speeds >2 m s-1 than for lower wind speeds, and better for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
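
    The scaling the record describes (concentration set by emission rate, canyon width, a dispersive velocity scale and background) can be sketched as a box model. The functional form and parameter value below are assumptions for illustration, not the published SEUS parameterisation:

```python
import math

def canyon_concentration(Q, W, U, sigma_traffic, c_b, a=0.1):
    """Box-model scaling c = c_b + Q / (W * u_d), where the dispersive
    velocity scale u_d combines wind- and traffic-induced turbulence in
    quadrature. `a` plays the role of a dimensionless empirical parameter;
    its value here is illustrative.
    Q: emission rate per unit canyon length, W: canyon width,
    U: roof-level wind speed, sigma_traffic: traffic-produced turbulence,
    c_b: background concentration."""
    u_d = math.sqrt((a * U) ** 2 + sigma_traffic ** 2)
    return c_b + Q / (W * u_d)
```

    Concentrations rise with emissions and fall with wind speed, which is the qualitative dependence the evaluation above examines.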

  10. Improving Power System Modeling. A Tool to Link Capacity Expansion and Production Cost Models

    Energy Technology Data Exchange (ETDEWEB)

    Diakov, Victor [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cole, Wesley [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sullivan, Patrick [National Renewable Energy Lab. (NREL), Golden, CO (United States); Brinkman, Gregory [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-11-01

    Capacity expansion models (CEM) provide a high-level, long-term view of the prospects of the evolving power system. In simulating the possibilities of long-term capacity expansion, it is important to maintain the viability of power system operation on short-term (daily, hourly and sub-hourly) scales. Production cost models (PCM) simulate routine power system operation on these shorter time scales using detailed load, transmission and generation fleet data, minimizing production costs while observing reliability requirements. When based on CEM 'predictions' about generating unit retirements and buildup, PCM provide a more detailed simulation of short-term system operation and, consequently, may confirm the validity of capacity expansion predictions. Further, production cost model simulations of a system that is based on a capacity expansion model solution are 'evolutionarily' sound: the generator mix is the result of a logical sequence of unit retirements and buildup resulting from policy and incentives. The above has motivated us to bridge CEM with PCM by building a capacity expansion - to - production cost model Linking Tool (CEPCoLT). The Linking Tool is built to map capacity expansion model prescriptions onto production cost model inputs. NREL's ReEDS and Energy Exemplar's PLEXOS are the capacity expansion and the production cost models, respectively. Via the Linking Tool, PLEXOS provides details of operation for the regionally defined ReEDS scenarios.

  11. Oscillations in a simple climate–vegetation model

    Directory of Open Access Journals (Sweden)

    J. Rombouts

    2015-05-01

    Full Text Available We formulate and analyze a simple dynamical systems model for climate–vegetation interaction. The planet we consider consists of a large ocean and a land surface on which vegetation can grow. The temperature affects vegetation growth on land and the amount of sea ice on the ocean. Conversely, vegetation and sea ice change the albedo of the planet, which in turn changes its energy balance and hence the temperature evolution. Our highly idealized, conceptual model is governed by two nonlinear, coupled ordinary differential equations, one for global temperature, the other for vegetation cover. The model exhibits either bistability between a vegetated and a desert state or oscillatory behavior. The oscillations arise through a Hopf bifurcation off the vegetated state, when the death rate of vegetation is low enough. These oscillations are anharmonic and exhibit a sawtooth shape that is characteristic of relaxation oscillations, as well as suggestive of the sharp deglaciations of the Quaternary. Our model's behavior can be compared, on the one hand, with the bistability of even simpler, Daisyworld-style climate–vegetation models. On the other hand, it can be integrated into the hierarchy of models trying to simulate and explain oscillatory behavior in the climate system. Rigorous mathematical results are obtained that link the nature of the feedbacks with the nature and the stability of the solutions. The relevance of model results to climate variability on various timescales is discussed.

  12. Oscillations in a simple climate-vegetation model

    Science.gov (United States)

    Rombouts, J.; Ghil, M.

    2015-05-01

    We formulate and analyze a simple dynamical systems model for climate-vegetation interaction. The planet we consider consists of a large ocean and a land surface on which vegetation can grow. The temperature affects vegetation growth on land and the amount of sea ice on the ocean. Conversely, vegetation and sea ice change the albedo of the planet, which in turn changes its energy balance and hence the temperature evolution. Our highly idealized, conceptual model is governed by two nonlinear, coupled ordinary differential equations, one for global temperature, the other for vegetation cover. The model exhibits either bistability between a vegetated and a desert state or oscillatory behavior. The oscillations arise through a Hopf bifurcation off the vegetated state, when the death rate of vegetation is low enough. These oscillations are anharmonic and exhibit a sawtooth shape that is characteristic of relaxation oscillations, as well as suggestive of the sharp deglaciations of the Quaternary. Our model's behavior can be compared, on the one hand, with the bistability of even simpler, Daisyworld-style climate-vegetation models. On the other hand, it can be integrated into the hierarchy of models trying to simulate and explain oscillatory behavior in the climate system. Rigorous mathematical results are obtained that link the nature of the feedbacks with the nature and the stability of the solutions. The relevance of model results to climate variability on various timescales is discussed.

  13. Cost-optimization of the IPv4 zeroconf protocol

    NARCIS (Netherlands)

    Bohnenkamp, H.C.; van der Stok, Peter; Hermanns, H.; Vaandrager, Frits

    2003-01-01

    This paper investigates the tradeoff between reliability and effectiveness for the IPv4 Zeroconf protocol, proposed by Cheshire/Adoba/Guttman in 2002, dedicated to the selfconfiguration of IP network interfaces. We develop a simple stochastic cost model of the protocol, where reliability is measured
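
    The reliability/effectiveness tradeoff studied in this record can be stylized as choosing the number of probes n: each probe adds delay, while an address conflict goes undetected only if every probe is lost. The cost structure and all parameter values below are illustrative, not those of the cited stochastic model:

```python
def expected_cost(n, probe_delay=0.1, q=0.5, p_loss=0.1, penalty=100.0):
    """Expected cost of configuring with n probes: n * probe_delay seconds
    of waiting, plus a large penalty if the chosen address is already in
    use (probability q) and all n probes are lost (probability p_loss**n)."""
    return n * probe_delay + q * (p_loss ** n) * penalty

# Sweep n to find the cost-optimal number of probes under these parameters.
best_n = min(range(1, 10), key=expected_cost)
```

    The sweep exhibits the tradeoff directly: too few probes risk the penalty, too many waste time, and an interior n minimizes expected cost.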

  14. Cost Calculation Model for Logistics Service Providers

    Directory of Open Access Journals (Sweden)

    Zoltán Bokor

    2012-11-01

    Full Text Available The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within the logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in case of complex and heterogeneous logistics service structures. So this paper intends to explore the ways of improving the cost calculation regimes of logistics service providers and show how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested by using estimated input data. Based on the theoretical findings and the experiences of the pilot project it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly
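
    The multi-level full cost allocation technique discussed above can be sketched as distributing indirect cost pools to services in proportion to measured cost drivers, then adding direct costs. All service names, pools, drivers, and figures below are hypothetical:

```python
# Direct costs per logistics service (hypothetical figures).
direct = {"warehousing": 120_000.0, "transport": 200_000.0}

# Indirect cost pools and the driver shares used to allocate them.
pools = {"admin": 50_000.0, "it": 30_000.0}
drivers = {
    "admin": {"warehousing": 40, "transport": 60},   # e.g. staff headcount
    "it":    {"warehousing": 25, "transport": 75},   # e.g. system users
}

# Allocate each pool in proportion to its driver, on top of direct costs.
full_cost = dict(direct)
for pool, amount in pools.items():
    total_driver = sum(drivers[pool].values())
    for service, share in drivers[pool].items():
        full_cost[service] += amount * share / total_driver
```

    Because every pool is fully distributed, the allocated totals reconcile exactly with direct plus indirect costs, which is what makes the costing transparent.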

  15. Manufacturing Cost Levelization Model – A User’s Guide

    Energy Technology Data Exchange (ETDEWEB)

    Morrow, William R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shehabi, Arman [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Smith, Sarah Josephine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-08-01

    The Manufacturing Cost Levelization Model is a cost-performance techno-economic model that estimates the total large-scale manufacturing costs necessary to produce a given product. It is designed to provide production cost estimates for technology researchers to help guide technology research and development towards an eventually cost-effective product. The model presented in this user’s guide is generic and can be tailored to the manufacturing of any product, including the generation of electricity (as a product). This flexibility, however, requires the user to develop the processes and process efficiencies that represent a full-scale manufacturing facility. The generic model is comprised of several modules that estimate variable costs (material, labor, and operating), fixed costs (capital & maintenance), financing structures (debt and equity financing), and tax implications (taxable income after equipment and building depreciation, debt interest payments, and expenses) of a notional manufacturing plant. A cash-flow method is used to estimate a selling price necessary for the manufacturing plant to recover its total cost of production. A levelized unit sales price ($ per unit of product) is determined by dividing the net present value of the manufacturing plant’s expenses ($) by the net present value of its product output. A user-defined production schedule drives the cash-flow method that determines the levelized unit price. In addition, an analyst can increase the levelized unit price to include a gross profit margin to estimate a product sales price. This model allows an analyst to understand the effect that any input variable could have on the cost of manufacturing a product. In addition, the tool is able to perform sensitivity analysis, which can be used to identify the key variables and assumptions that have the greatest influence on the levelized costs. This component is intended to help technology researchers focus their research attention on tasks
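
    The levelized unit sales price described above (net present value of expenses divided by net present value of product output) can be computed directly. A minimal sketch with hypothetical figures:

```python
def levelized_unit_price(expenses, units, rate):
    """Levelized price = NPV of expenses / NPV of units produced,
    discounting both streams at the same rate with a year-end convention.
    An illustrative reduction of the cash-flow method described above."""
    npv = lambda xs: sum(x / (1.0 + rate) ** t for t, x in enumerate(xs, start=1))
    return npv(expenses) / npv(units)

# Hypothetical plant: constant costs and output over a 3-year schedule.
price = levelized_unit_price([100.0] * 3, [50.0] * 3, 0.08)
```

    With constant annual expenses and output the discount factors cancel, so the levelized price reduces to cost per unit (here 100 / 50 = 2); uneven production schedules are where discounting changes the answer.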

  16. A Simple Model of Wings in Heavy-Ion Collisions

    CERN Document Server

    Parikh, Aditya

    2015-01-01

    We create a simple model of heavy-ion collisions, independent of any generators, as a way of investigating a possible source of the wings seen in data. As a first test, we reproduce a standard correlations plot to verify the integrity of the model. We then proceed to test whether an η-dependent v2 could be a source of the wings, taking projections along multiple Δφ intervals and comparing with data. Other variations of the model are tested by having dN/dφ and v2 depend on η, as well as by including pions and protons in the model to make it more realistic. Comparisons with data seem to indicate that an η-dependent v2 is not the main source of the wings.

  17. Trans and cis influences and effects in cobalamins and in their simple models.

    Science.gov (United States)

    De March, Matteo; Demitri, Nicola; Geremia, Silvano; Hickey, Neal; Randaccio, Lucio

    2012-11-01

    The interligand interactions in coordination compounds have been principally interpreted in terms of cis and trans influences and effects, which can be defined as the ability of a ligand X to affect the bond of another ligand, cis or trans to X, to the metal. This review analyzes these effects/influences in cobalamins (XCbl) and their simple models, the cobaloximes LCo(chel)X. Important properties of these complexes, such as geometry, stability, and reactivity, can be rationalized in terms of steric and electronic factors of the ligands. Experimental evidence of normal and inverse trans influence is described in alkylcobaloximes for the first time. The study of simple B12 models has complemented that of the more complex cobalamins, with particular emphasis on the properties of the axial L-Co-X moiety. Some of the conclusions reached for the axial fragment of simple models have also been qualitatively detected in cobalamins and have furnished new insight into the as yet unestablished mechanism for the homolytic cleavage of the Co-C bond in the AdoCbl-based enzymes. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. A simple operational gas release and swelling model. Pt. 1

    International Nuclear Information System (INIS)

    Wood, M.H.; Matthews, J.R.

    1980-01-01

    A new and simple model of fission gas release and swelling has been developed for oxide nuclear fuel under operational conditions. The model, which is to be incorporated into a fuel element behaviour code, is physically based and applicable to fuel at both thermal and fast reactor ratings. In this paper we present that part of the model describing the behaviour of intragranular gas: a future paper will detail the treatment of the grain boundary gas. The results of model calculations are compared with recent experimental observations of intragranular bubble concentrations and sizes, and gas release from fuel irradiated under isothermal conditions. Good agreement is found between experiment and theory. (orig.)

  19. The fermion content of the Standard Model from a simple world-line theory

    Energy Technology Data Exchange (ETDEWEB)

    Mansfield, Paul, E-mail: P.R.W.Mansfield@durham.ac.uk

    2015-04-09

    We describe a simple model that automatically generates the sum over gauge group representations and chiralities of a single generation of fermions in the Standard Model, augmented by a sterile neutrino. The model is a modification of the world-line approach to chiral fermions.

  20. Discounted cost model for condition-based maintenance optimization

    International Nuclear Information System (INIS)

    Weide, J.A.M. van der; Pandey, M.D.; Noortwijk, J.M. van

    2010-01-01

    This paper presents methods to evaluate the reliability and optimize the maintenance of engineering systems that are damaged by shocks or transients arriving randomly in time and overall degradation is modeled as a cumulative stochastic point process. The paper presents a conceptually clear and comprehensive derivation of formulas for computing the discounted cost associated with a maintenance policy combining both condition-based and age-based criteria for preventive maintenance. The proposed discounted cost model provides a more realistic basis for optimizing the maintenance policies than those based on the asymptotic, non-discounted cost rate criterion.
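
    The discounted-cost criterion the paper advocates can be illustrated in its simplest deterministic form: the total discounted cost of replacing a component every T years, which is a geometric series. This is a sketch of the criterion only, not the paper's shock-and-degradation model:

```python
def discounted_replacement_cost(c, T, r):
    """Total discounted cost of replacing a component every T years at
    cost c, with annual discount rate r, over an infinite horizon:
    sum_{k>=1} c * alpha**(k*T) = c * alpha**T / (1 - alpha**T),
    where alpha = 1/(1+r) is the one-year discount factor."""
    alpha = 1.0 / (1.0 + r)
    x = alpha ** T
    return c * x / (1.0 - x)
```

    For c = 1, T = 1 and r = 5% this gives 1/r = 20, and lengthening the replacement interval lowers the discounted total, which is the kind of tradeoff the optimization balances against rising failure risk.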

  1. Overview of SDCM - The Spacecraft Design and Cost Model

    Science.gov (United States)

    Ferebee, Melvin J.; Farmer, Jeffery T.; Andersen, Gregory C.; Flamm, Jeffery D.; Badi, Deborah M.

    1988-01-01

    The Spacecraft Design and Cost Model (SDCM) is a computer-aided design and analysis tool for synthesizing spacecraft configurations, integrating their subsystems, and generating information concerning on-orbit servicing and costs. SDCM uses a bottom-up method in which the cost and performance parameters for subsystem components are first calculated; the model then sums the contributions from individual components in order to obtain an estimate of sizes and costs for each candidate configuration within a selected spacecraft system. An optimum spacecraft configuration can then be selected.

  2. Renewable Energy Cost Modeling. A Toolkit for Establishing Cost-Based Incentives in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Gifford, Jason S. [Sustainable Energy Advantage, LLC, Framington, MA (United States); Grace, Robert C. [Sustainable Energy Advantage, LLC, Framington, MA (United States); Rickerson, Wilson H. [Meister Consultants Group, Inc., Boston, MA (United States)

    2011-05-01

    This report serves as a resource for policymakers who wish to learn more about levelized cost of energy (LCOE) calculations, including cost-based incentives. The report identifies key renewable energy cost modeling options, highlights the policy implications of choosing one approach over the other, and presents recommendations on the optimal characteristics of a model to calculate rates for cost-based incentives, FITs, or similar policies. These recommendations shaped the design of NREL's Cost of Renewable Energy Spreadsheet Tool (CREST), which is used by state policymakers, regulators, utilities, developers, and other stakeholders to assist with analyses of policy and renewable energy incentive payment structures. Authored by Jason S. Gifford and Robert C. Grace of Sustainable Energy Advantage LLC and Wilson H. Rickerson of Meister Consultants Group, Inc.

  3. Simple Electromagnetic Modeling of Small Airplanes: Neural Network Approach

    OpenAIRE

    Koudelka, V.; Raida, Zbyněk; Tobola, P.

    2009-01-01

    The paper deals with the development of simple electromagnetic models of small airplanes, which can contain composite materials in their construction. Electromagnetic waves can penetrate through the surface of the aircraft due to the specific electromagnetic properties of the composite materials, which can increase the intensity of fields inside the airplane and can negatively influence the functionality of the sensitive avionics. The airplane is simulated by two parallel dielectric layers (t...

  4. Cost Analysis of Prenatal Care Using the Activity-Based Costing Model: A Pilot Study

    Science.gov (United States)

    Gesse, Theresa; Golembeski, Susan; Potter, Jonell

    1999-01-01

    The cost of prenatal care in a private nurse-midwifery practice was examined using the activity-based costing system. Findings suggest that the activities of the nurse-midwife (the health care provider) constitute the major cost driver of this practice and that the model of care and associated, time-related activities influence the cost. This pilot study information will be used in the development of a comparative study of prenatal care, client education, and self care. PMID:22945985

  5. Cost analysis of prenatal care using the activity-based costing model: a pilot study.

    Science.gov (United States)

    Gesse, T; Golembeski, S; Potter, J

    1999-01-01

    The cost of prenatal care in a private nurse-midwifery practice was examined using the activity-based costing system. Findings suggest that the activities of the nurse-midwife (the health care provider) constitute the major cost driver of this practice and that the model of care and associated, time-related activities influence the cost. This pilot study information will be used in the development of a comparative study of prenatal care, client education, and self care.
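
    Activity-based costing, as used in the two records above, prices a service by summing activity times weighted by provider cost rates; the provider whose activities dominate that sum is the cost driver. The activities, durations, and rates below are hypothetical:

```python
# Hypothetical provider cost rates in $ per minute.
rates_per_min = {"nurse_midwife": 1.50, "assistant": 0.50}

# Hypothetical activities for one prenatal visit: (activity, provider, minutes).
activities = [
    ("history and exam",  "nurse_midwife", 25),
    ("client education",  "nurse_midwife", 15),
    ("vitals and intake", "assistant",     10),
]

# Activity-based cost of the visit: sum of time * rate over all activities.
visit_cost = sum(minutes * rates_per_min[role] for _, role, minutes in activities)
```

    Here the nurse-midwife's activities account for most of the total, mirroring the study's finding that the health care provider's time is the major cost driver.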

  6. Simple mathematical models of gene regulatory dynamics

    CERN Document Server

    Mackey, Michael C; Tyran-Kamińska, Marta; Zeron, Eduardo S

    2016-01-01

    This is a short and self-contained introduction to the field of mathematical modeling of gene-networks in bacteria. As an entry point to the field, we focus on the analysis of simple gene-network dynamics. The notes commence with an introduction to the deterministic modeling of gene-networks, with extensive reference to applicable results coming from dynamical systems theory. The second part of the notes treats extensively several approaches to the study of gene-network dynamics in the presence of noise—either arising from low numbers of molecules involved, or due to noise external to the regulatory process. The third and final part of the notes gives a detailed treatment of three well studied and concrete examples of gene-network dynamics by considering the lactose operon, the tryptophan operon, and the lysis-lysogeny switch. The notes contain an index for easy location of particular topics as well as an extensive bibliography of the current literature. The target audience of these notes are mainly graduat...
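
    The deterministic modeling of gene networks introduced in these notes typically starts from rate equations with Hill-type regulation. A generic toy example in that spirit (a self-repressing gene integrated by Euler's method); the equation form and parameter values are arbitrary illustrations, not a specific system from the notes:

```python
def simulate_repressor(beta=10.0, K=1.0, n=4, gamma=1.0, dt=0.01, steps=5000):
    """Euler integration of a self-repressing gene:
        dx/dt = beta / (1 + (x/K)**n) - gamma * x
    where beta is maximal production, K the repression threshold,
    n the Hill coefficient and gamma the degradation rate."""
    x = 0.0
    for _ in range(steps):
        x += dt * (beta / (1.0 + (x / K) ** n) - gamma * x)
    return x

x_ss = simulate_repressor()  # converges to the stable steady state
```

    The trajectory settles at the fixed point of the rate equation (x satisfying beta/(1 + x**n) = gamma*x), the kind of steady-state analysis the deterministic part of the notes develops before adding noise.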

  7. Operating cost budgeting methods: quantitative methods to improve the process

    Directory of Open Access Journals (Sweden)

    José Olegário Rodrigues da Silva

    Full Text Available Operating cost forecasts are used in economic feasibility studies of projects and in the budgeting process. Studies have pointed out that some companies are not satisfied with the budgeting process, and chief executive officers want updates more frequently. In these cases, the main problem lies in the costs versus the benefits. Companies seek simple and cheap forecasting methods without, at the same time, conceding in terms of the quality of the resulting information. This study aims to compare operating cost forecasting models to identify the ones that are relatively easy to implement and produce smaller deviations. For this purpose, we applied ARIMA (autoregressive integrated moving average) and distributed dynamic lag models to data from a Brazilian petroleum company. The results suggest that the models have potential application, and that multivariate models fitted the data better and proved to be a better way to forecast costs than univariate models.

  8. Analysis of divertor asymmetry using a simple five-point model

    International Nuclear Information System (INIS)

    Hayashi, Nobuhiko; Takizuka, Tomonori; Hatayama, Akiyoshi; Ogasawara, Masatada.

    1997-03-01

    A simple five-point model of the scrape-off layer (SOL) plasma outside the separatrix of a diverted tokamak has been developed to study the inside/outside divertor asymmetry. The SOL current, gas pumping/puffing in the divertor region, and divertor plate biasing are included in this model. Gas pumping/puffing and biasing are shown to control divertor asymmetry. In addition, the SOL current is found to form asymmetric solutions without external controls of gas pumping/puffing and biasing. (author)

  9. A simple conceptual model of abrupt glacial climate events

    Directory of Open Access Journals (Sweden)

    H. Braun

    2007-11-01

    Full Text Available Here we use a very simple conceptual model in an attempt to reduce essential parts of the complex nonlinearity of abrupt glacial climate changes (the so-called Dansgaard-Oeschger events) to a few simple principles, namely (i) the existence of two different climate states, (ii) a threshold process and (iii) an overshooting in the stability of the system at the start and the end of the events, which is followed by a millennial-scale relaxation. By comparison with a so-called Earth system model of intermediate complexity (CLIMBER-2), in which the events represent oscillations between two climate states corresponding to two fundamentally different modes of deep-water formation in the North Atlantic, we demonstrate that the conceptual model captures fundamental aspects of the nonlinearity of the events in that model. We use the conceptual model in order to reproduce and reanalyse nonlinear resonance mechanisms that were already suggested in order to explain the characteristic time scale of Dansgaard-Oeschger events. In doing so we identify a new form of stochastic resonance (i.e. an overshooting stochastic resonance) and provide the first explicitly reported manifestation of ghost resonance in a geosystem, i.e. of a mechanism which could be relevant for other systems with thresholds and with multiple states of operation. Our work enables us to explicitly simulate realistic probability measures of Dansgaard-Oeschger events (e.g. waiting time distributions), which are a prerequisite for statistical analyses on the regularity of the events by means of Monte-Carlo simulations. We thus think that our study is an important advance in order to develop more adequate methods to test the statistical significance and the origin of the proposed glacial 1470-year climate cycle.

  10. A CASKCOM: A cask life cycle cost model

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    CASKCOM (cask cost model) is a computerized model which calculates the life cycle costs (LCC) associated with specific transportation cask designs and discounts those costs, if the user so chooses, to a net present value. The model has been used to help analyze and compare the life cycle economics of burnup credit and nonburnup credit cask designs being considered as conditions for a new generation of spent fuel transportation casks. CASKCOM is parametric in the sense that its input data can be easily changed in order to analyze and compare the life cycle cost implications arising from alternative assumptions. The input data themselves are organized into two main groupings. The first grouping comprises a set of data which is independent of cask design. This first grouping does not change from the analysis of one cask design to another. The second grouping of data is specific to each individual cask design. This second grouping thus changes each time a new cask design is analyzed.
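
    The discounting step that CASKCOM performs can be illustrated with a generic net-present-value helper. This is a sketch of the standard calculation, not CASKCOM's actual code, and the lifecycle numbers are made up:

```python
def net_present_value(annual_costs, discount_rate):
    """Discount a stream of annual costs (year 0 first) to present value."""
    return sum(c / (1.0 + discount_rate) ** t
               for t, c in enumerate(annual_costs))

# A hypothetical cask: capital cost now, then two years of operating cost
lifecycle = [500.0, 50.0, 50.0]
print(round(net_present_value(lifecycle, 0.05), 2))  # -> 592.97
```

    With a zero discount rate the result reduces to the simple sum of the costs, which is a useful sanity check when comparing cask designs with different cost timing.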

  11. A dynamic model for costing disaster mitigation policies.

    Science.gov (United States)

    Altay, Nezih; Prasad, Sameer; Tata, Jasmine

    2013-07-01

    The optimal level of investment in mitigation strategies is usually difficult to ascertain in the context of disaster planning. This research develops a model to provide such direction by relying on cost of quality literature. This paper begins by introducing a static approach inspired by Joseph M. Juran's cost of quality management model (Juran, 1951) to demonstrate the non-linear trade-offs in disaster management expenditure. Next it presents a dynamic model that includes the impact of dynamic interactions of the changing level of risk, the cost of living, and the learning/investments that may alter over time. It illustrates that there is an optimal point that minimises the total cost of disaster management, and that this optimal point moves as governments learn from experience or as states get richer. It is hoped that the propositions contained herein will help policymakers to plan, evaluate, and justify voluntary disaster mitigation expenditures. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.
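
    The non-linear trade-off the authors describe can be sketched as minimising total cost = mitigation spending + expected disaster loss. The functional forms and numbers below are illustrative assumptions, not taken from the paper:

```python
import math

def total_cost(x, loss=100.0, decay=0.1):
    """Mitigation spending x plus an expected loss that decays with x."""
    return x + loss * math.exp(-decay * x)

# Grid search for the cost-minimising level of mitigation investment
grid = [i / 100.0 for i in range(0, 10001)]
x_opt = min(grid, key=total_cost)
print(round(x_opt, 1))  # analytic optimum is 10*ln(10), about 23.0
```

    As the paper argues, the optimum shifts when the parameters change: a larger potential loss or faster-decaying risk moves the cost-minimising investment level, which is the dynamic the authors capture by letting risk, cost of living, and learning vary over time.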

  12. Modelling the cost effectiveness of antidepressant treatment in primary care.

    Science.gov (United States)

    Revicki, D A; Brown, R E; Palmer, W; Bakish, D; Rosser, W W; Anton, S F; Feeny, D

    1995-12-01

    The aim of this study was to estimate the cost effectiveness of nefazodone compared with imipramine or fluoxetine in treating women with major depressive disorder. Clinical decision analysis and a Markov state-transition model were used to estimate the lifetime health outcomes and medical costs of 3 antidepressant treatments. The model, which represents ideal primary care practice, compares treatment with nefazodone to treatment with either imipramine or fluoxetine. The economic analysis was based on the healthcare system of the Canadian province of Ontario, and considered only direct medical costs. Health outcomes were expressed as quality-adjusted life years (QALYs) and costs were in 1993 Canadian dollars ($Can; $Can1 = $US0.75, September 1995). Incremental cost-utility ratios were calculated comparing the relative lifetime discounted medical costs and QALYs associated with nefazodone with those of imipramine or fluoxetine. Data for constructing the model and estimating necessary parameters were derived from the medical literature, clinical trial data, and physician judgement. Data included information on: Ontario primary care physicians' clinical management of major depression; medical resource use and costs; probabilities of recurrence of depression; suicide rates; compliance rates; and health utilities. Estimates of utilities for depression-related hypothetical health states were obtained from patients with major depression (n = 70). Medical costs and QALYs were discounted to present value using a 5% rate. Sensitivity analyses tested the assumptions of the model by varying the discount rate, depression recurrence rates, compliance rates, and the duration of the model. The base case analysis found that nefazodone treatment costs $Can1447 less per patient than imipramine treatment (discounted lifetime medical costs were $Can50,664 vs $Can52,111) and increases the number of QALYs by 0.72 (13.90 vs 13.18). Nefazodone treatment costs $Can14 less than fluoxetine

  13. Cost and Performance Model for Photovoltaic Systems

    Science.gov (United States)

    Borden, C. S.; Smith, J. H.; Davisson, M. C.; Reiter, L. J.

    1986-01-01

    Lifetime cost and performance (LCP) model assists in assessment of design options for photovoltaic systems. LCP is simulation of performance, cost, and revenue streams associated with photovoltaic power systems connected to electric-utility grid. LCP provides user with substantial flexibility in specifying technical and economic environment of application.

  14. Simple Predictive Models for Saturated Hydraulic Conductivity of Technosands

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Møldrup, Per

    2012-01-01

    Accurate estimation of saturated hydraulic conductivity (Ks) of technosands (gravel-free, coarse sands with negligible organic matter content) is important for irrigation and drainage management of athletic fields and golf courses. In this study, we developed two simple models for predicting Ks......-Rammler particle size distribution (PSD) function. The Ks and PSD data of 14 golf course sands from literature as well as newly measured data for a size fraction of Lunar Regolith Simulant, packed at three different dry bulk densities, were used for model evaluation. The pore network tortuosity......-connectivity parameter (m) obtained for pure coarse sand after fitting to measured Ks data was 1.68 for both models and in good agreement with m values obtained from recent solute and gas diffusion studies. Both the modified K-C and R-C models are easy to use and require limited parameter input, and both models gave...

  15. Determination of particle-release conditions in microfiltration: A simple single-particle model tested on a model membrane

    NARCIS (Netherlands)

    Kuiper, S.; van Rijn, C.J.M.; Nijdam, W.; Krijnen, Gijsbertus J.M.; Elwenspoek, Michael Curt

    2000-01-01

    A simple single-particle model was developed for cross-flow microfiltration with microsieves. The model describes the cross-flow conditions required to release a trapped spherical particle from a circular pore. All equations are derived in a fully analytical way without any fitting parameters.

  16. Bus Lifecycle Cost Model for Federal Land Management Agencies.

    Science.gov (United States)

    2011-09-30

    The Bus Lifecycle Cost Model is a spreadsheet-based planning tool that estimates capital, operating, and maintenance costs for various bus types over the full lifecycle of the vehicle. The model is based on a number of operating characteristics, incl...

  17. Application of a simple analytical model to estimate effectiveness of radiation shielding for neutrons

    International Nuclear Information System (INIS)

    Frankle, S.C.; Fitzgerald, D.H.; Hutson, R.L.; Macek, R.J.; Wilkinson, C.A.

    1993-01-01

    Neutron dose equivalent rates have been measured for 800-MeV proton beam spills at the Los Alamos Meson Physics Facility. Neutron detectors were used to measure the neutron dose levels at a number of locations for each beam-spill test, and neutron energy spectra were measured for several beam-spill tests. Estimates of expected levels for various detector locations were made using a simple analytical model developed for 800-MeV proton beam spills. A comparison of measurements and model estimates indicates that the model is reasonably accurate in estimating the neutron dose equivalent rate for simple shielding geometries. The model fails for more complicated shielding geometries, where indirect contributions to the dose equivalent rate can dominate

  18. Make or buy analysis model based on tolerance allocation to minimize manufacturing cost and fuzzy quality loss

    Science.gov (United States)

    Rosyidi, C. N.; Puspitoingrum, W.; Jauhari, W. A.; Suhardi, B.; Hamada, K.

    2016-02-01

    The specification of tolerances has a significant impact on product quality and final production cost. The company should pay careful attention to component and product tolerances so that it can produce a good quality product at the lowest cost. Tolerance allocation has been widely used to solve the problem of selecting a particular process or supplier. Before getting into the selection process, however, the company must first analyse whether a component should be made in house (make), purchased from a supplier (buy), or sourced through a combination of both. This paper discusses an optimization model of process and supplier selection that minimizes manufacturing costs and fuzzy quality loss. The model can also be used to determine the allocation of components to the selected processes or suppliers. Tolerance, process capability and production capacity are three important constraints that affect the decision. A fuzzy quality loss function is used in this paper to describe the semantics of quality, in which the product quality level is divided into several grades. The implementation of the proposed model is demonstrated by solving a numerical example that uses a simple assembly product consisting of three components. A metaheuristic approach was implemented with the OptQuest software from Oracle Crystal Ball in order to obtain the optimal solution of the numerical example.

  19. Waste Management facilities cost information: System Cost Model Software Quality Assurance Plan. Revision 2

    International Nuclear Information System (INIS)

    Peterson, B.L.; Lundeen, A.S.

    1996-02-01

    In May of 1994, Lockheed Idaho Technologies Company (LITCO) in Idaho Falls, Idaho and subcontractors developed the System Cost Model (SCM) application. The SCM estimates life-cycle costs of the entire US Department of Energy (DOE) complex for designing; constructing; operating; and decommissioning treatment, storage, and disposal (TSD) facilities for mixed low-level, low-level, transuranic, and mixed transuranic waste. The SCM uses parametric cost functions to estimate life-cycle costs for various treatment, storage, and disposal modules which reflect planned and existing facilities at DOE installations. In addition, SCM can model new facilities based on capacity needs over the program life cycle. The SCM also provides transportation costs for truck and rail, which include transport of contact-handled, remote-handled, and alpha (transuranic) wastes. The user can provide input data (default data is included in the SCM) including the volume and nature of waste to be managed, the time period over which the waste is to be managed, and the configuration of the waste management complex (i.e., where each installation's generated waste will be treated, stored, and disposed). Then the SCM uses parametric cost equations to estimate the costs of pre-operations (designing), construction costs, operation management, and decommissioning these waste management facilities. For the product to be effective and useful the SCM users must have a high level of confidence in the data generated by the software model. The SCM Software Quality Assurance Plan is part of the overall SCM project management effort to ensure that the SCM is maintained as a quality product and can be relied on to produce viable planning data. This document defines tasks and deliverables to ensure continued product integrity, provide increased confidence in the accuracy of the data generated, and meet the LITCO's quality standards during the software maintenance phase. 8 refs., 1 tab

  20. Waste Management facilities cost information: System Cost Model Software Quality Assurance Plan. Revision 2

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, B.L.; Lundeen, A.S.

    1996-02-01

    In May of 1994, Lockheed Idaho Technologies Company (LITCO) in Idaho Falls, Idaho and subcontractors developed the System Cost Model (SCM) application. The SCM estimates life-cycle costs of the entire US Department of Energy (DOE) complex for designing; constructing; operating; and decommissioning treatment, storage, and disposal (TSD) facilities for mixed low-level, low-level, transuranic, and mixed transuranic waste. The SCM uses parametric cost functions to estimate life-cycle costs for various treatment, storage, and disposal modules which reflect planned and existing facilities at DOE installations. In addition, SCM can model new facilities based on capacity needs over the program life cycle. The SCM also provides transportation costs for truck and rail, which include transport of contact-handled, remote-handled, and alpha (transuranic) wastes. The user can provide input data (default data is included in the SCM) including the volume and nature of waste to be managed, the time period over which the waste is to be managed, and the configuration of the waste management complex (i.e., where each installation's generated waste will be treated, stored, and disposed). Then the SCM uses parametric cost equations to estimate the costs of pre-operations (designing), construction costs, operation management, and decommissioning these waste management facilities. For the product to be effective and useful the SCM users must have a high level of confidence in the data generated by the software model. The SCM Software Quality Assurance Plan is part of the overall SCM project management effort to ensure that the SCM is maintained as a quality product and can be relied on to produce viable planning data. This document defines tasks and deliverables to ensure continued product integrity, provide increased confidence in the accuracy of the data generated, and meet the LITCO's quality standards during the software maintenance phase. 8 refs., 1 tab.

  1. An equivalent marginal cost-pricing model for the district heating market

    International Nuclear Information System (INIS)

    Zhang, Junli; Ge, Bin; Xu, Hongsheng

    2013-01-01

    District heating pricing is a core element in reforming the heating market. Existing district heating pricing methods, such as the cost-plus pricing method and the conventional marginal-cost pricing method, cannot simultaneously provide both high efficiency and sufficient investment cost return. To solve this problem, the paper presents a new pricing model, namely the Equivalent Marginal Cost Pricing (EMCP) model, which is based on the EVE pricing theory and the unique characteristics of heat products and district heating. The EMCP model uses exergy as the measurement of heating product value and places products from different district heating regions on the same competition platform. In the proposed model, the return on investment cost is closely related to the quoted cost, and within the limitations of the Heating Capacity Cost Reference and the maximum compensated shadow capacity cost, both lower and higher price speculations of heat producers are restricted. Simulation results show that the model can guide heat producers to bid according to their production costs and to provide reasonable returns on investment, which contributes to stimulating the role of price leverage and to promoting the optimal allocation of heat resources. - Highlights: • Presents a new district heating pricing model. • Provides both high market efficiency and sufficient investment cost return. • Provides a competition mechanism for various products from different DH regions. • Both lower and higher price speculations are restricted in the new model

  2. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs.

    Science.gov (United States)

    Ishak, K Jack; Stolar, Marilyn; Hu, Ming-yi; Alvarez, Piedad; Wang, Yamei; Getsios, Denis; Williams, Gregory C

    2012-12-01

    Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in early days of the stay is usually more intense, however, and thus, the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs.Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models including those based on simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques were considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control. The MA Case Mix dataset included data

  3. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs

    Directory of Open Access Journals (Sweden)

    Ishak K

    2012-12-01

    Full Text Available Abstract Background Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in early days of the stay is usually more intense, however, and thus, the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. Methods An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models including those based on simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques were considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood
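
    A minimal version of the logarithmic PD-cost mapping the authors consider can be fitted with ordinary least squares in pure Python. The data below are synthetic, generated from an assumed PD = 3000 - 800*ln(LOS), not MA Case Mix figures:

```python
import math

def fit_log_model(los, pd_cost):
    """Least-squares fit of PD = a + b * ln(LOS)."""
    z = [math.log(d) for d in los]
    n = len(z)
    zbar = sum(z) / n
    ybar = sum(pd_cost) / n
    b = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, pd_cost))
         / sum((zi - zbar) ** 2 for zi in z))
    a = ybar - b * zbar
    return a, b

# Per-diem cost falls steeply for short stays, then plateaus
los = [1, 2, 3, 5, 8, 13]
pd_cost = [3000.0 - 800.0 * math.log(d) for d in los]
a, b = fit_log_model(los, pd_cost)
print(round(a), round(b))  # recovers 3000 and -800 on this exact data
```

    Multiplying the fitted LOS-specific PD cost by LOS then gives a total hospitalization cost that, unlike the flat-PD convention, reflects the front-loading of resource use.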

  4. Cost optimization model and its heuristic genetic algorithms

    International Nuclear Information System (INIS)

    Liu Wei; Wang Yongqing; Guo Jilin

    1999-01-01

    Interest and escalation account for a large share of the cost of nuclear power plant construction. In order to optimize the cost, a mathematical model of cost optimization for nuclear power plant construction was proposed, which takes the maximum net present value as the optimization goal. The model is based on the activity networks of the project and is an NP problem. Heuristic genetic algorithms (HGAs) for the model were introduced. In the algorithms, a solution is represented with a string of numbers, each of which denotes the priority of each activity for assigned resources. The HGAs with this encoding method can overcome the difficulty of obtaining feasible solutions that arises when traditional GAs are used to solve the model. The critical path of the activity networks is figured out with the concept of a predecessor matrix. An example was computed with the HGAs programmed in the C language. The results indicate that the model is suitable for the objective and the algorithms are effective in solving the model

  5. Distinguishing Little-Higgs product and simple group models at the LHC and ILC

    International Nuclear Information System (INIS)

    Kilian, W.; Rainwater, D.

    2006-09-01

    We propose a means to discriminate between the two basic variants of Little Higgs models, the Product Group and Simple Group models, at the next generation of colliders. It relies on a special coupling of light pseudoscalar particles present in Little Higgs models, the pseudo-axions, to the Z and the Higgs boson, which is present only in Simple Group models. We discuss the collider phenomenology of the pseudo-axion in the presence of such a coupling at the LHC, where resonant production and decay of either the Higgs or the pseudo-axion induced by that coupling can be observed for much of parameter space. The full allowed range of parameters, including regions where the observability is limited at the LHC, is covered by a future ILC, where double scalar production would be a golden channel to look for. (orig.)

  6. Distinguishing Little-Higgs product and simple group models at the LHC and ILC

    Energy Technology Data Exchange (ETDEWEB)

    Kilian, W. [Siegen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik]|[Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Rainwater, D. [Rochester Univ., NY (United States). Dept. of Physics and Astronomy; Reuter, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2006-09-15

    We propose a means to discriminate between the two basic variants of Little Higgs models, the Product Group and Simple Group models, at the next generation of colliders. It relies on a special coupling of light pseudoscalar particles present in Little Higgs models, the pseudo-axions, to the Z and the Higgs boson, which is present only in Simple Group models. We discuss the collider phenomenology of the pseudo-axion in the presence of such a coupling at the LHC, where resonant production and decay of either the Higgs or the pseudo-axion induced by that coupling can be observed for much of parameter space. The full allowed range of parameters, including regions where the observability is limited at the LHC, is covered by a future ILC, where double scalar production would be a golden channel to look for. (orig.)

  7. NEARSOL - a simple program to model actinide speciation and solubility under waste disposal conditions

    International Nuclear Information System (INIS)

    Leach, S.J.; Pryke, D.C.

    1986-05-01

    A simple program, NearSol, has been written in Fortran 77 on the Harwell Central Computer to model the aqueous speciation and solubility of actinides under near-field conditions for disposal using a simple thermodynamic approach. The methodology and running of the program are described together with a worked example. (author)

  8. A Simple Singlet Fermionic Dark-Matter Model Revisited

    International Nuclear Information System (INIS)

    Qin Hong-Yi; Wang Wen-Yu; Xiong Zhao-Hua

    2011-01-01

    We evaluate the spin-independent elastic dark matter-nucleon scattering cross section in the framework of the simple singlet fermionic dark matter extension of the standard model and constrain the model parameter space with the following considerations: (i) new dark matter measurement, in which, apart from WMAP and CDMS, the results from the XENON experiment are also used in constraining the model; (ii) new fitted value of the quark fractions in nucleons, in which the updated value of f_Ts from the recent lattice simulation is much smaller than the previous one and may reduce the scattering rate significantly; (iii) new dark matter annihilation channels, in which the scenario where top quark and Higgs pairs produced by dark matter annihilation was not included in the previous works. We find that unlike in the minimal supersymmetric standard model, the cross section is just reduced by a factor of about 1/4 and dark matter lighter than 100 GeV is not favored by the WMAP, CDMS and XENON experiments. (the physics of elementary particles and fields)

  9. A Simple Hybrid Model for Short-Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Suseelatha Annamareddi

    2013-01-01

    Full Text Available The paper proposes a simple hybrid model to forecast the electrical load data based on the wavelet transform technique and double exponential smoothing. The historical noisy load series data is decomposed into deterministic and fluctuation components using suitable wavelet coefficient thresholds and wavelet reconstruction method. The variation characteristics of the resulting series are analyzed to arrive at reasonable thresholds that yield good denoising results. The constitutive series are then forecasted using appropriate exponential adaptive smoothing models. A case study performed on California energy market data demonstrates that the proposed method can offer high forecasting precision for very short-term forecasts, considering a time horizon of two weeks.
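
    Double exponential smoothing of the kind the paper applies to each wavelet component can be sketched in a few lines. This is a generic Holt implementation with illustrative smoothing constants, not the authors' code:

```python
def holt_forecast(series, alpha=0.5, beta=0.5):
    """Holt's double exponential smoothing; returns a one-step-ahead forecast."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend

# On a perfectly linear load series the forecast continues the line
print(holt_forecast([10.0, 12.0, 14.0, 16.0, 18.0]))  # -> 20.0
```

    In the hybrid scheme described above, the wavelet transform first splits the noisy load series into deterministic and fluctuation components, and a smoother of this form is then applied to each component before recombining the forecasts.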

  10. A Simple Model of Offshore Outsourcing, Technology Upgrading and Welfare

    OpenAIRE

    Jung , Jaewon; Mercenier , Jean

    2009-01-01

    We adapt Yeaple's (2005) heterogeneous agents framework to model firms in the North as making explicit offshore outsourcing decisions to cheap-labor economies. Globalization results from a lowering of the set-up costs incurred when engaging in offshore activities. We highlight how firms' technology transformations due to globalization will induce skill upgrading in the North, increase aggregate productivity, average wages and therefore total welfare at the cost of increased wage inequaliti...

  11. A simple model for skewed species-lifetime distributions

    KAUST Repository

    Murase, Yohsuke

    2010-06-11

    A simple model of a biological community assembly is studied. Communities are assembled by successive migrations and extinctions of species. In the model, species are interacting with each other. The intensity of the interaction between each pair of species is denoted by an interaction coefficient. At each time step, a new species is introduced to the system with randomly assigned interaction coefficients. If the sum of the coefficients, which we call the fitness of a species, is negative, the species goes extinct. The species-lifetime distribution is found to be well characterized by a stretched exponential function with an exponent close to 1/2. This profile agrees not only with more realistic population dynamics models but also with fossil records. We also find that an age-independent and inversely diversity-dependent mortality, which is confirmed in the simulation, is a key mechanism accounting for the distribution. © IOP Publishing Ltd and Deutsche Physikalische Gesellschaft.
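
    The assembly rule in this abstract (add a species with random interaction coefficients; any species whose summed coefficients, its fitness, turns negative goes extinct) can be sketched as a toy simulation. Parameters, coefficient ranges, and the seed are illustrative assumptions:

```python
import random

def simulate(steps=200, seed=42):
    """Toy community assembly: each step one species migrates in; any species
    whose fitness (sum of its interaction coefficients) is negative dies."""
    rng = random.Random(seed)
    coeffs = {}      # (i, j) -> coefficient species i feels from species j
    birth = {}       # species id -> arrival step
    alive = set()
    lifetimes = []

    def fitness(i):
        return sum(coeffs[(i, j)] for j in alive if j != i)

    for t in range(steps):
        new = t
        for j in alive:
            coeffs[(new, j)] = rng.uniform(-1.0, 1.0)
            coeffs[(j, new)] = rng.uniform(-1.0, 1.0)
        alive.add(new)
        birth[new] = t
        # extinction cascade: keep removing until every fitness is >= 0
        changed = True
        while changed:
            changed = False
            for i in sorted(alive):
                if fitness(i) < 0:
                    alive.remove(i)
                    lifetimes.append(t - birth[i])
                    changed = True
                    break
    return lifetimes, len(alive)

lifetimes, diversity = simulate()
print(len(lifetimes), diversity)
```

    Collecting the `lifetimes` list over long runs is what yields the species-lifetime distribution; the paper reports that it is well fitted by a stretched exponential with exponent near 1/2.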

  12. A simple statistical signal loss model for deep underground garage

    DEFF Research Database (Denmark)

    Nguyen, Huan Cong; Gimenez, Lucas Chavarria; Kovacs, Istvan

    2016-01-01

    In this paper we address the channel modeling aspects for a deep-indoor scenario with extreme coverage conditions in terms of signal losses, namely underground garage areas. We provide an in-depth analysis in terms of path loss (gain) and large scale signal shadowing, and propose a simple...... propagation model which can be used to predict cellular signal levels in similar deep-indoor scenarios. The proposed frequency-independent floor attenuation factor (FAF) is shown to be in the range of 5.2 dB per meter of depth.
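
    A floor-attenuation-factor model of this kind amounts to adding a depth-proportional term to the outdoor path loss. The sketch below uses the paper's 5.2 dB/m figure, but the link-budget numbers are made up:

```python
def deep_indoor_path_loss(outdoor_loss_db, depth_m, faf_db_per_m=5.2):
    """Outdoor path loss plus a frequency-independent per-metre floor attenuation."""
    return outdoor_loss_db + faf_db_per_m * depth_m

def received_power_dbm(tx_power_dbm, outdoor_loss_db, depth_m):
    """Simple link budget: transmit power minus total path loss."""
    return tx_power_dbm - deep_indoor_path_loss(outdoor_loss_db, depth_m)

# 10 m underground adds 52 dB of loss on top of the outdoor path loss
print(deep_indoor_path_loss(100.0, 10.0))     # -> 152.0
print(received_power_dbm(43.0, 100.0, 10.0))  # -> -109.0
```

    Because the FAF is reported as frequency independent, the same per-metre term can be reused across cellular bands when predicting coverage in comparable underground areas.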

  13. Characterization of simple wireless neurostimulators and sensors.

    Science.gov (United States)

    Gulick, Daniel W; Towe, Bruce C

    2014-01-01

    A single diode with a wireless power source and electrodes can act as an implantable stimulator or sensor. We have built such devices using RF and ultrasound power coupling. These simple devices could drastically reduce the size, weight, and cost of implants for applications where efficiency is not critical. However, a shortcoming has been a lack of control: any movement of the external power source would change the power coupling, thereby changing the stimulation current or modulating the sensor response. To correct for changes in power and signal coupling, we propose to use harmonic signals from the device. The diode acts as a frequency multiplier, and the harmonics it emits contain information about the drive level and bias. A simplified model suggests that estimation of power, electrode bias, and electrode resistance is possible from information contained in radiated harmonics even in the presence of significant noise. We also built a simple RF-powered stimulator with an onboard voltage limiter.

  14. Counting the costs of overweight and obesity: modeling clinical and cost outcomes.

    Science.gov (United States)

    Tucker, Daniel M D; Palmer, Andrew J; Valentine, William J; Roze, Stéphane; Ray, Joshua A

    2006-03-01

    To quantify changes in clinical and cost outcomes associated with increasing levels of body mass index (BMI) in a US setting. A semi-Markov model was developed to project and compare life expectancy (LE), quality-adjusted life expectancy (QALE) and direct medical costs associated with distinct levels of BMI in simulated adult cohorts over a lifetime horizon. Cohort definitions included age (20-65 years), gender, race, and BMI (24-45 kg/m^2). Cohorts were exclusively male or female and either Caucasian or African-American. Mortality rates were adjusted according to these factors using published data. BMI progression over time was modeled. BMI-dependent US direct medical costs were derived from published sources and inflated to year 2004 values. A third party reimbursement perspective was taken. QALE and costs were discounted at 3% per annum. In young Caucasian cohorts LE decreased as BMI increased. However, in older Caucasian cohorts the BMI associated with greatest longevity was higher than 25 kg/m^2. A similar pattern was observed in young adult African-American cohorts. A survival paradox was projected in older African-American cohorts, with some BMI levels in the obese category associated with greatest longevity. QALE in all four race/gender cohorts followed similar patterns to LE. Sensitivity analyses demonstrated that simulating BMI progression over time had an important impact on results. Direct costs in all four cohorts increased with BMI, with a few exceptions. Optimal BMI, in terms of longevity, varied between race/gender cohorts and within these cohorts, according to age, contributing to the debate over what BMI level or distribution should be considered ideal in terms of mortality risk. Simulating BMI progression over time had a substantial impact on health outcomes and should be modeled in future health economic analyses of overweight and obesity.

  15. Simple model for multiple-choice collective decision making.

    Science.gov (United States)

    Lee, Ching Hua; Lucas, Andrew

    2014-11-01

    We describe a simple model of heterogeneous, interacting agents making decisions between n≥2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E. We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.

  16. A simple model of EG and G reverse reach-through APDs

    CERN Document Server

    Musienko, Y; Swain, J D

    2000-01-01

    A simple model of reverse reach-through APDs is described. APD parameters, including the dependence of the electric field and gain on the bias voltage and the dependence of gain on wavelength, are calculated using the McIntyre approach and an assumed doping profile of the APD.

  17. A simple model of EG and G reverse reach-through APDs

    Energy Technology Data Exchange (ETDEWEB)

    Musienko, Y. E-mail: iouri.moussienko@cern.ch; Reucroft, S.; Swain, J

    2000-03-11

    A simple model of reverse reach-through APDs is described. APD parameters, including the dependence of the electric field and gain on the bias voltage and the dependence of gain on wavelength, are calculated using the McIntyre approach and an assumed doping profile of the APD.

  18. A Simple Model of the Variability of Soil Depths

    Directory of Open Access Journals (Sweden)

    Fang Yu

    2017-06-01

    Full Text Available Soil depth tends to vary from a few centimeters to several meters, depending on many natural and environmental factors. We hypothesize that the cumulative effect of these factors on soil depth, which is chiefly dependent on the process of biogeochemical weathering, is particularly affected by soil porewater (i.e., solute transport and infiltration from the land surface. Taking into account evidence for a non-Gaussian distribution of rock weathering rates, we propose a simple mathematical model to describe the relationship between soil depth and infiltration flux. The model was tested using several areas in mostly semi-arid climate zones. The application of this model demonstrates the use of fundamental principles of physics to quantify the coupled effects of the five principal soil-forming factors of Dokuchaev.

  19. Solid waste integrated cost analysis model: 1991 project year report

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    The purpose of the City of Houston's 1991 Solid Waste Integrated Cost Analysis Model (SWICAM) project was to continue the development of a computerized cost analysis model. This model is to provide solid waste managers with a tool to evaluate the dollar cost of real or hypothetical solid waste management choices. Those choices have become complicated by the implementation of Subtitle D of the Resource Conservation and Recovery Act (RCRA) and the EPA's Integrated Approach to managing municipal solid waste: that is, minimize generation, maximize recycling, reduce volume (incinerate), and then bury (landfill) only the remainder. Implementation of an integrated solid waste management system involving all or some of the options of recycling, waste-to-energy, composting, and landfilling is extremely complicated. Factors such as hauling distances, markets and prices for recyclables, and the costs and benefits of transfer stations and material recovery facilities must all be considered. A jurisdiction must determine the cost impacts of implementing a number of various possibilities for managing, handling, processing, and disposing of waste. SWICAM employs a single Lotus 1-2-3 spreadsheet to enable a jurisdiction to predict or assess the costs of its waste management system. It allows the user to select his or her own process flow for waste material and to manipulate the model to include as few or as many options as he or she chooses. The model will calculate the estimated cost for those choices selected. The user can then change the model to include or exclude waste stream components until the mix of choices suits the user. Graphs can be produced as a visual communication aid in presenting the results of the cost analysis. SWICAM also allows future cost projections to be made.

  20. Surface tension: experimental model with simple materials

    Directory of Open Access Journals (Sweden)

    Tintori Ferreira, María Alejandra

    2012-09-01

    Full Text Available This work presents a didactic proposal based on an experimental activity that uses very low-cost materials, aimed at helping students understand and interpret the phenomenon of surface tension, together with the importance of modeling in science. Its principal educational aim is to introduce students to the mechanics of static fluids and to intermolecular forces, combining scientific content with questions close to the students' experience, which provides additional motivation for reflecting on scientific inquiry.

  1. Modeling Exposure to Heat Stress with a Simple Urban Model

    Directory of Open Access Journals (Sweden)

    Peter Hoffmann

    2018-01-01

    Full Text Available As a first step in modeling health-related urban well-being (UrbWellth), a mathematical model is constructed that dynamically simulates heat stress exposure of commuters in an idealized city. This is done by coupling the Simple Urban Radiation Model (SURM), which computes the mean radiant temperature (Tmrt), with a newly developed multi-class multi-mode traffic model. Simulation results with parameters chosen for the city of Hamburg for a hot summer day show that commuters are potentially most exposed to heat stress in the early afternoon, when Tmrt has its maximum. Varying the morphology with respect to street width and building height shows that a more compact city configuration reduces Tmrt and therefore the exposure to heat stress. The impact of changes in the city structure on traffic is simulated to determine the time spent outside during the commute. While the time in traffic jams increases for compact cities, the total commuting time decreases due to shorter distances between home and workplace. Concerning adaptation measures, it is shown that increases in the albedo of urban surfaces lead to an increase in daytime heat stress. Dramatic increases in heat stress exposure are found when both wall and street albedo are increased.

  2. Simple model for the dynamics towards metastable states

    International Nuclear Information System (INIS)

    Meijer, P.H.E.; Keskin, M.; Bodegom, E.

    1986-01-01

    Circumstances under which a quenched system will freeze in a metastable state are studied in simple systems with long-range order. The model used is the time-dependent pair approximation, based on the most probable path (MPP) method. The time dependence of the solution is shown by means of flow diagrams. The fixed points and other features of the differential equations in time are independent of the choice of the rate constants. It is explained qualitatively how the system behaves under varying descending temperatures: the role of the initial conditions, the dependence on the quenching rate, and the response to precooling

  3. BRICK v0.2, a simple, accessible, and transparent model framework for climate and regional sea-level projections

    Science.gov (United States)

    Wong, Tony E.; Bakker, Alexander M. R.; Ruckert, Kelsey; Applegate, Patrick; Slangen, Aimée B. A.; Keller, Klaus

    2017-07-01

    Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regards to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.

  4. Use of travel cost models in planning: A case study

    Science.gov (United States)

    Allan Marsinko; William T. Zawacki; J. Michael Bowker

    2002-01-01

    This article examines the use of the travel cost method in tourism-related decision making in the area of nonconsumptive wildlife-associated recreation. A travel cost model of nonconsumptive wildlife-associated recreation, developed by Zawacki, Marsinko, and Bowker, is used as a case study for this analysis. The travel cost model estimates the demand for the activity...

  5. Modeling of Construction Cost of Villas in Oman

    Directory of Open Access Journals (Sweden)

    MA Al-Mohsin

    2014-06-01

    Full Text Available In this research, a model for estimating the construction cost of villas is presented. The model takes into account four major factors affecting a villa's cost, namely: built-up area, number of toilets, number of bedrooms and number of stories. A field survey was conducted to collect the information required for such a model, using a data collection form designed by the researchers. Information about 150 villas was collected from six well-experienced consultants in the field of villa design and supervision in Oman. The collected data was analyzed to develop the suggested model, which consists of two main levels of estimate. The first level is at the conceptual design stage, where the client presents his/her need for space and basic information about the available plot for construction. The second level of cost estimation is carried out after the preliminary design stage, where the client has to decide on the finishes and type of structure. At the second level of estimation, the client should be able to decide whether or not to proceed with construction, according to his/her budget. The model is general and can be used anywhere, and it was validated to an acceptable degree of confidence using the actual cost of 112 executed villa projects in Oman. The villas included in this study were owned by clients from both high and low income brackets and had different types of finishing material. The developed equations showed good correlation between the selected variables and the actual cost, with R^2 = 0.79 in the case of the conceptual estimate and R^2 = 0.601 for the preliminary estimate.
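The paper's estimates are regression-based. As a sketch of the idea, a one-variable least-squares fit of cost against built-up area, with an R^2 check, looks like the following; the data pairs are invented for illustration and are not the published coefficients:

```python
def fit_simple_ols(x, y):
    """One-variable least squares (cost vs. built-up area) as a stand-in for
    the paper's multi-variable estimate. Returns (slope, intercept, R^2)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # coefficient of determination R^2
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

# hypothetical (built-up area m^2, cost) pairs, illustrative only
areas = [200.0, 250.0, 300.0, 350.0, 400.0]
costs = [60000.0, 72000.0, 86000.0, 99000.0, 115000.0]
slope, intercept, r2 = fit_simple_ols(areas, costs)
```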

  6. WEB-DHM: A distributed biosphere hydrological model developed by coupling a simple biosphere scheme with a hillslope hydrological model

    Science.gov (United States)

    The coupling of land surface models and hydrological models potentially improves the land surface representation, benefiting both the streamflow prediction capabilities as well as providing improved estimates of water and energy fluxes into the atmosphere. In this study, the simple biosphere model 2...

  7. On the Cost-vs-Quality Tradeoff in Make-or-Buy Decisions

    OpenAIRE

    Andersson, Fredrik

    2010-01-01

    The make-or-buy decision is analyzed in a simple two-task principal-agent model. There is a cost-saving/quality tradeoff in effort provision. The principal faces a dichotomous choice between weak ("make") and strong ("buy") cost-saving incentives for the agent; the dichotomy is due to an incomplete-contracting limitation necessitating that one party be residual claimant. Choosing "buy" rather than "make" leads to higher cost-saving effort and -- in a plausible "main case" -- to lower quality ...

  8. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    Science.gov (United States)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.

  9. A simple analytical model for dynamics of time-varying target leverage ratios

    Science.gov (United States)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
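The abstract describes coupled Ito SDEs for an ensemble of leverage ratios. As a minimal stand-in, one can integrate a single mean-reverting SDE, dL = kappa*(theta - L)dt + sigma*dW, with the Euler-Maruyama scheme; this is not the paper's Fokker-Planck treatment, and all parameter values are illustrative assumptions:

```python
import math
import random

def simulate_leverage(l0, target, kappa, sigma, dt=1/252, steps=252, seed=7):
    """Euler-Maruyama path of a mean-reverting leverage ratio
    dL = kappa*(target - L)*dt + sigma*dW. A minimal sketch of dynamics
    toward a target ratio; parameters are illustrative, not calibrated."""
    rng = random.Random(seed)
    l = l0
    path = [l]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        l += kappa * (target - l) * dt + sigma * dw
        path.append(l)
    return path

path = simulate_leverage(l0=0.8, target=0.5, kappa=2.0, sigma=0.05)
```

With sigma set to zero the path decays deterministically toward the target, which makes the adjustment pace kappa easy to see in isolation.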

  10. A simple data loss model for positron camera systems

    International Nuclear Information System (INIS)

    Eriksson, L.; Dahlbom, M.

    1994-01-01

    A simple model to describe data losses in PET cameras is presented. The model is not intended to be used primarily for dead time corrections in existing scanners, although this is possible. Instead the model is intended to be used for data simulations in order to determine the figures of merit of future camera systems, based on state-of-the-art data handling solutions. The model assumes the data loss to be factorized into two components, one describing the detector or block-detector performance and the other the remaining data handling, such as coincidence determination, data transfer and data storage. Two modern positron camera systems have been investigated in terms of this model: the Siemens-CTI ECAT EXACT and ECAT EXACT HR systems, which both have an axial field-of-view (FOV) of about 15 cm. They both have retractable septa, can acquire data from the whole volume within the FOV, and can reconstruct volume image data. An example is given of how to use the model for live time calculation in a futuristic large axial FOV cylindrical system.
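The factorization in the abstract (total live time as the product of a detector/block component and a data-handling component) can be sketched with standard dead-time forms; the paralyzable/non-paralyzable choices and the time constants below are our own assumptions, not the paper's:

```python
import math

def live_time_fraction(true_rate, tau_block, tau_handling):
    """Factorized data-loss model in the spirit of the abstract: total live
    time is the product of a detector/block component (here paralyzable,
    exp(-R*tau)) and a data-handling component (here non-paralyzable,
    1/(1 + R*tau)). Dead-time forms and constants are illustrative."""
    block = math.exp(-true_rate * tau_block)
    handling = 1.0 / (1.0 + true_rate * tau_handling)
    return block * handling

def measured_rate(true_rate, tau_block=2e-6, tau_handling=1e-6):
    # observed count rate after both loss mechanisms
    return true_rate * live_time_fraction(true_rate, tau_block, tau_handling)
```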

  11. A simple model of gas flow in a porous powder compact

    Energy Technology Data Exchange (ETDEWEB)

    Shugard, Andrew D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Robinson, David B. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-04-01

    This report describes a simple model for ideal gas flow from a vessel through a bed of porous material into another vessel. It assumes constant temperature and uniform porosity. Transport is treated as a combination of viscous and molecular flow, with no inertial contribution (low Reynolds number). This model can be used to fit data to obtain permeability values, determine flow rates, understand the relative contributions of viscous and molecular flow, and verify volume calibrations. It draws upon the Dusty Gas Model and other detailed studies of gas flow through porous media.
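A hedged sketch of the combined viscous-plus-molecular transport picture from the report: the effective transport coefficient is a Darcy term that grows with mean pressure plus a pressure-independent Knudsen term, in the spirit of the Dusty Gas Model. Symbols and values below are illustrative, not the report's calibration:

```python
def molar_flow(p_up, p_down, area, length, b0, mu, dk, rt):
    """Isothermal ideal-gas molar flow (mol/s) through a porous plug.
    Transport = viscous (Darcy, scales with mean pressure) + molecular
    (Knudsen, pressure-independent). Symbols: b0 permeability [m^2],
    mu viscosity [Pa s], dk Knudsen diffusivity [m^2/s], rt = R*T [J/mol]."""
    p_mean = 0.5 * (p_up + p_down)
    dp = p_up - p_down
    # effective transport coefficient: viscous term grows with mean pressure
    coeff = b0 * p_mean / mu + dk
    return coeff * area * dp / (length * rt)
```

The relative size of the two terms in `coeff` indicates whether a given bed at a given pressure is in the viscous- or molecular-flow regime.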

  12. Benchmarking in pathology: development of an activity-based costing model.

    Science.gov (United States)

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
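The hierarchical tree-structured costing described above can be sketched as a recursive rollup of avoidable costs, from which unit indicators such as cost per test follow. The tree shape and the figures below are invented for illustration and are not BiP data:

```python
def rollup(node):
    """Roll avoidable costs up a hierarchical activity tree. Each node is a
    dict: {'cost': direct avoidable cost, 'children': [...]}; the subtree
    total is the node's own cost plus its children's totals."""
    return node.get("cost", 0.0) + sum(rollup(c) for c in node.get("children", []))

# hypothetical laboratory tree: site overhead -> departments -> activities
lab = {
    "cost": 1000.0,
    "children": [
        {"cost": 400.0, "children": [{"cost": 50.0, "children": []}]},
        {"cost": 250.0, "children": []},
    ],
}
total = rollup(lab)

def cost_per_test(total_cost, n_tests):
    # unit-cost indicator comparable across benchmarking participants
    return total_cost / n_tests
```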

  13. A simple analytic treatment of rescattering effects in the Deck model

    International Nuclear Information System (INIS)

    Bowler, M.G.

    1979-01-01

    A simple application of old-fashioned final-state interaction theory is shown to give the result that rescattering in the Deck model of diffraction dissociation is well represented by multiplying the bare amplitude by exp(iδ)cos δ. The physical reasons for this result emerge particularly clearly in this formulation. (author)
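The quoted rescattering factor is directly computable; a two-line sketch of multiplying a bare amplitude by exp(iδ)cos δ:

```python
import cmath
import math

def rescattered(bare_amplitude, delta):
    """Apply the final-state-interaction factor exp(i*delta)*cos(delta)
    quoted in the abstract to a bare (complex) Deck amplitude."""
    return bare_amplitude * cmath.exp(1j * delta) * math.cos(delta)
```

At δ = 0 the amplitude is unchanged, and near δ = π/2 the factor suppresses it entirely, which makes the phase-shift dependence easy to inspect numerically.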

  14. Development of an integrated cost model for nuclear plant decommissioning

    International Nuclear Information System (INIS)

    Amos, G.; Roy, R.

    2003-01-01

    A need for an integrated cost estimating tool for nuclear decommissioning and the associated waste processing and storage facilities for Intermediate Level Waste (ILW) was identified during the author's recent MSc studies. In order to close this gap, a prototype tool was developed using logically derived CERs and cost driver variables. The challenge was to produce a model capable of generating realistic cost estimates from the limited historic cost data available for analysis. The model is an Excel-based tool, supported by three-point risk estimating output, and is suitable for producing strategic or optional cost estimates (±30%) early in the conceptual stage of a decommissioning project. The model was validated using a small number of case studies supported by expert opinion. The model provides an enhanced approach for integrated decommissioning estimates, which can be produced concurrently with strategic options analysis on a nuclear site

  15. Reduced cost mission design using surrogate models

    Science.gov (United States)

    Feldhacker, Juliana D.; Jones, Brandon A.; Doostan, Alireza; Hampton, Jerrad

    2016-01-01

    This paper uses surrogate models to reduce the computational cost associated with spacecraft mission design in three-body dynamical systems. Sampling-based least squares regression is used to project the system response onto a set of orthogonal bases, providing a representation of the ΔV required for rendezvous as a reduced-order surrogate model. Models are presented for mid-field rendezvous of spacecraft in orbits in the Earth-Moon circular restricted three-body problem, including a halo orbit about the Earth-Moon L2 libration point (EML-2) and a distant retrograde orbit (DRO) about the Moon. In each case, the initial position of the spacecraft, the time of flight, and the separation between the chaser and the target vehicles are all considered as design inputs. The results show that sample sizes on the order of 10^2 are sufficient to produce accurate surrogates, with RMS errors reaching 0.2 m/s for the halo orbit and falling below 0.01 m/s for the DRO. A single function call to the resulting surrogate is up to two orders of magnitude faster than computing the same solution using full fidelity propagators. The expansion coefficients solved for in the surrogates are then used to conduct a global sensitivity analysis of the ΔV on each of the input parameters, which identifies the separation between the spacecraft as the primary contributor to the ΔV cost. Finally, the models are demonstrated to be useful for cheap evaluation of the cost function in constrained optimization problems seeking to minimize the ΔV required for rendezvous. These surrogate models show significant advantages for mission design in three-body systems, in terms of both computational cost and capabilities, over traditional Monte Carlo methods.
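The surrogate idea (project sampled responses of an expensive model onto a polynomial basis by least squares, then evaluate the cheap fit) can be sketched in one dimension. The monomial basis, sample points, and the stand-in "expensive" function below are our own choices, far simpler than the paper's multi-input ΔV models:

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_surrogate(xs, ys, degree=2):
    """Least-squares projection of sampled responses onto a polynomial basis
    (via the normal equations). Returns a cheap callable surrogate."""
    nb = degree + 1
    phi = [[x ** j for j in range(nb)] for x in xs]
    ata = [[sum(p[i] * p[j] for p in phi) for j in range(nb)] for i in range(nb)]
    atb = [sum(p[i] * y for p, y in zip(phi, ys)) for i in range(nb)]
    coeffs = solve(ata, atb)
    return lambda x: sum(c * x ** j for j, c in enumerate(coeffs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.0 + 2.0 * x + 0.5 * x * x for x in xs]   # stand-in "expensive" model
surrogate = fit_surrogate(xs, ys)
```

Once fitted, each surrogate evaluation is a handful of multiplications, which is the source of the speedup the abstract reports over full-fidelity propagation.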

  16. [Threshold value for reimbursement of costs of new drugs: cost-effectiveness research and modelling are essential links].

    Science.gov (United States)

    Frederix, Geert W J; Hövels, Anke M; Severens, Johan L; Raaijmakers, Jan A M; Schellens, Jan H M

    2015-01-01

    There is increasing discussion in the Netherlands about the introduction of a threshold value for the costs per extra year of life when reimbursing costs of new drugs. The Medicines Committee ('Commissie Geneesmiddelen'), a division of the Netherlands National Healthcare Institute ('Zorginstituut Nederland'), advises on reimbursement of costs of new drugs. This advice is based upon the determination of the therapeutic value of the drug and the results of economic evaluations. Mathematical models that predict future costs and effectiveness are often used in economic evaluations; these models can vary greatly in transparency and quality due to author assumptions. Standardisation of cost-effectiveness models is one solution to overcome the unwanted variation in quality. Discussions about the introduction of a threshold value can only be meaningful if all involved are adequately informed and supported by high-quality cost-effectiveness research and, particularly, economic evaluations. Collaboration and discussion between medical specialists, patients or patient organisations, health economists and policy makers, both in development of methods and in standardisation, are essential to improve the quality of decision making.

  17. A simple 2D biofilm model yields a variety of morphological features.

    Science.gov (United States)

    Hermanowicz, S W

    2001-01-01

    A two-dimensional biofilm model was developed based on the concept of cellular automata. Three simple, generic processes were included in the model: cell growth, internal and external mass transport and cell detachment (erosion). The model generated a diverse range of biofilm morphologies (from dense layers to open, mushroom-like forms) similar to those observed in real biofilm systems. Bulk nutrient concentration and external mass transfer resistance had a large influence on the biofilm structure.
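A drastically reduced cellular-automaton sketch with only growth and erosion; the paper's model also includes internal and external mass transport, which is omitted here, and the probabilities and grid size are arbitrary:

```python
import random

def step(grid, rng, growth_p=0.5, erosion_p=0.05):
    """One synchronous update of a toy 2D biofilm cellular automaton:
    an occupied cell may divide into an empty 4-neighbor (growth), and an
    occupied cell with an empty neighbor may detach (erosion). Nutrient
    transport is only implicit, so this is far simpler than the paper."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            empties = [(rr, cc) for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                       if 0 <= rr < rows and 0 <= cc < cols and not grid[rr][cc]]
            if empties and rng.random() < growth_p:
                rr, cc = rng.choice(empties)
                new[rr][cc] = 1
            if empties and rng.random() < erosion_p:
                new[r][c] = 0   # surface cell erodes
    return new

rng = random.Random(0)
grid = [[0] * 20 for _ in range(10)]
grid[0][10] = 1  # a single attached cell seeds the biofilm
for _ in range(15):
    grid = step(grid, rng)
biomass = sum(map(sum, grid))
```

Varying `growth_p` and `erosion_p` already shifts the cluster between compact and ragged shapes, a crude analogue of the morphology range the abstract describes.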

  18. A simple parameter can switch between different weak-noise-induced phenomena in a simple neuron model

    Science.gov (United States)

    Yamakou, Marius E.; Jost, Jürgen

    2017-10-01

    In recent years, several, apparently quite different, weak-noise-induced resonance phenomena have been discovered. Here, we show that at least two of them, self-induced stochastic resonance (SISR) and inverse stochastic resonance (ISR), can be related by a simple parameter switch in one of the simplest models, the FitzHugh-Nagumo (FHN) neuron model. We consider a FHN model with a unique fixed point perturbed by synaptic noise. Depending on the stability of this fixed point and whether it is located to either the left or right of the fold point of the critical manifold, two distinct weak-noise-induced phenomena, either SISR or ISR, may emerge. SISR is more robust to parametric perturbations than ISR, and the coherent spike train generated by SISR is more robust than that generated deterministically. ISR also depends on the location of initial conditions and on the time-scale separation parameter of the model equation. Our results could also explain why real biological neurons having similar physiological features and synaptic inputs may encode very different information.

  19. Simple interphase drag model for numerical two-fluid modeling of two-phase flow systems

    International Nuclear Information System (INIS)

    Chow, H.; Ransom, V.H.

    1984-01-01

    The interphase drag model that has been developed for RELAP5/MOD2 is based on a simple formulation having flow regime maps for both horizontal and vertical flows. The model is based on a conventional semi-empirical formulation that includes the product of drag coefficient, interfacial area, and relative dynamic pressure. The interphase drag model is implemented in the RELAP5/MOD2 light water reactor transient analysis code and has been used to simulate a variety of separate effects experiments to assess the model accuracy. The results from three of these simulations, the General Electric Company small vessel blowdown experiment, Dukler and Smith's counter-current flow experiment, and a Westinghouse Electric Company FLECHT-SEASET forced reflood experiment, are presented and discussed
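The conventional semi-empirical form named in the abstract, drag coefficient times interfacial area times relative dynamic pressure, can be written directly; the sign convention and the numbers in the usage check are our own illustration, not RELAP5 coefficients:

```python
def interphase_drag_force(cd, interfacial_area, rho, v_gas, v_liquid):
    """Semi-empirical interphase drag: drag coefficient * interfacial area *
    relative dynamic pressure (0.5 * rho * |vr| * vr). The sign follows the
    relative velocity, so the force opposes the slip between phases."""
    vr = v_gas - v_liquid
    return cd * interfacial_area * 0.5 * rho * abs(vr) * vr
```

In a flow-regime-based model like the one described, `cd` and `interfacial_area` would themselves be supplied by the horizontal- or vertical-flow regime maps.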

  20. Simple non-Markovian microscopic models for the depolarizing channel of a single qubit

    International Nuclear Information System (INIS)

    Fonseca Romero, K M; Lo Franco, R

    2012-01-01

    The archetypal one-qubit noisy channels - depolarizing, phase-damping and amplitude-damping channels - describe both Markovian and non-Markovian evolution. Simple microscopic models for the depolarizing channel, both classical and quantum, are considered. Microscopic models that describe phase-damping and amplitude-damping channels are briefly reviewed.

  1. Simple standard model extension by heavy charged scalar

    Science.gov (United States)

    Boos, E.; Volobuev, I.

    2018-05-01

    We consider a Standard Model (SM) extension by a heavy charged scalar gauged only under the U(1)Y weak hypercharge gauge group. Such an extension, being gauge invariant with respect to the SM gauge group, is a simple special case of the well-known Zee model. Since the interactions of the charged scalar with the Standard Model fermions turn out to be significantly suppressed compared to the Standard Model interactions, the charged scalar provides an example of a long-lived charged particle that is interesting to search for at the LHC. We present the pair and single production cross sections of the charged scalar at different colliders and the possible decay widths for various boson masses. It is shown that the current ATLAS and CMS searches at 8 and 13 TeV collision energy lead to bounds on the scalar boson mass of about 300-320 GeV. The limits are expected to be much larger for higher collision energies and, assuming 15 ab^-1 integrated luminosity, reach about 2.7 TeV at the future 27 TeV LHC, thus covering the most interesting mass region.

  2. A simple non-linear model of immune response

    International Nuclear Information System (INIS)

    Gutnikov, Sergei; Melnikov, Yuri

    2003-01-01

    It is still unknown why the adaptive immune response in the natural immune system, based on clonal proliferation of lymphocytes, requires the interaction of at least two different cell types with the same antigen. We present a simple mathematical model illustrating that a system with separate types of cells for antigen recognition and pathogen destruction provides more robust adaptive immunity than a system where just one cell type is responsible for both recognition and destruction. The model is over-simplified, as we did not intend to describe the natural immune system. However, our model provides a tool for testing the proposed approach through qualitative analysis of immune system dynamics in order to construct more sophisticated models of the immune systems that exist in living nature. It also opens the possibility to explore specific features of highly non-linear dynamics in nature-inspired computational paradigms like artificial immune systems and immunocomputing. We expect this paper to be of interest not only for mathematicians but also for biologists; therefore we made an effort to explain the mathematics in sufficient detail for readers without a professional mathematical background.

  3. Surface pressure model for simple delta wings at high angles of attack

    Indian Academy of Sciences (India)


    polynomial function approach, splines with limited support and neural network models are ... for thin streamlined bodies, the normal force and pitching moment .... eter, a simple point vortex over an infinite plate is used to derive some results.

  4. Introduction of a simple-model-based land surface dataset for Europe

    Science.gov (United States)

    Orth, Rene; Seneviratne, Sonia I.

    2015-04-01

    Land surface hydrology can play a crucial role during extreme events such as droughts, floods and even heat waves. We introduce in this study a new hydrological dataset for Europe that consists of soil moisture, runoff and evapotranspiration (ET). It is derived with a simple water balance model (SWBM) forced with precipitation, temperature and net radiation. The SWBM dataset extends over the period 1984-2013 with a daily time step and 0.5° × 0.5° resolution. We employ a novel calibration approach, in which we consider 300 random parameter sets chosen from an observation-based range. Using several independent validation datasets representing soil moisture (or terrestrial water content), ET and streamflow, we identify the best performing parameter set and hence the new dataset. To illustrate its usefulness, the SWBM dataset is compared against several state-of-the-art datasets (ERA-Interim/Land, MERRA-Land, GLDAS-2-Noah, simulations of the Community Land Model Version 4), using all validation datasets as reference. For soil moisture dynamics it outperforms the benchmarks. Therefore the SWBM soil moisture dataset constitutes a reasonable alternative to sparse measurements, little validated model results, or proxy data such as precipitation indices. Also in terms of runoff the SWBM dataset performs well, whereas the evaluation of the SWBM ET dataset is overall satisfactory, but the dynamics are less well captured for this variable. This highlights the limitations of the dataset, as it is based on a simple model that uses uniform parameter values. Hence some processes impacting ET dynamics may not be captured, and quality issues may occur in regions with complex terrain. Even though the SWBM is well calibrated, it cannot replace more sophisticated models; but as their calibration is a complex task the present dataset may serve as a benchmark in future. In addition we investigate the sources of skill of the SWBM dataset and find that the parameter set has a similar
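A conceptual daily bucket model in the spirit of a simple water balance model can be sketched as below. The functional forms and parameter values are illustrative assumptions, not the calibrated SWBM formulation:

```python
def swb_step(soil, precip, net_rad, s_max=120.0, alpha=2.0, beta=0.7, gamma=1.0):
    """One daily step of a conceptual bucket water balance: runoff and ET
    scale with relative soil moisture w = S/Smax (illustrative forms)."""
    w = soil / s_max
    runoff = precip * w ** alpha          # wetter soil -> more of the rain runs off
    et = net_rad * beta * w ** gamma      # ET limited by both moisture and energy
    soil_new = max(0.0, min(s_max, soil + precip - runoff - et))
    return soil_new, runoff, et

soil = 60.0  # mm, starting storage (assumed)
for day, (p, rn) in enumerate([(10.0, 2.0), (0.0, 4.0), (25.0, 1.0)], start=1):
    soil, q, et = swb_step(soil, p, rn)
    print(f"day {day}: soil={soil:.1f} runoff={q:.2f} et={et:.2f}")
```

Calibration of such a model amounts to searching parameter sets (here s_max, alpha, beta, gamma) against observed soil moisture, ET, and streamflow, which mirrors the 300-member random search described in the abstract.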

  5. Introduction to Solar Motion Geometry on the Basis of a Simple Model

    Science.gov (United States)

    Khavrus, Vyacheslav; Shelevytsky, Ihor

    2010-01-01

    By means of a simple mathematical model developed by the authors, the apparent movement of the Sun can be studied for arbitrary latitudes. Using this model, it is easy to gain insight into various phenomena, such as the passage of the seasons, dependences of position and time of sunrise or sunset on a specific day of year, day duration for…
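The kind of solar-position calculation such a model supports can be illustrated with the standard spherical-astronomy elevation relation sin(alt) = sin(φ)sin(δ) + cos(φ)cos(δ)cos(H) (a textbook formula, not the authors' code):

```python
import math

def solar_elevation(lat_deg, decl_deg, hour_angle_deg):
    """Solar elevation angle (degrees) from the standard relation
    sin(alt) = sin(lat)*sin(decl) + cos(lat)*cos(decl)*cos(H)."""
    lat, decl, h = (math.radians(v) for v in (lat_deg, decl_deg, hour_angle_deg))
    s = math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(h)
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))

# Noon (H = 0) on an equinox (decl = 0): elevation = 90 - |latitude|.
print(round(solar_elevation(0.0, 0.0, 0.0), 3))    # -> 90.0
print(round(solar_elevation(50.0, 0.0, 0.0), 3))   # -> 40.0
```

Sweeping the hour angle H over a day, and the declination δ over the year, reproduces the seasonal sunrise/sunset and day-length phenomena the abstract mentions.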

  6. The Cost-Effectiveness of Low-Cost Essential Antihypertensive Medicines for Hypertension Control in China: A Modelling Study.

    Directory of Open Access Journals (Sweden)

    Dongfeng Gu

    2015-08-01

    Full Text Available Hypertension is China's leading cardiovascular disease risk factor. Improved hypertension control in China would result in enormous health gains in the world's largest population. A computer simulation model projected the cost-effectiveness of hypertension treatment in Chinese adults, assuming a range of essential medicines list drug costs. The Cardiovascular Disease Policy Model-China, a Markov-style computer simulation model, simulated hypertension screening, essential medicines program implementation, hypertension control program administration, drug treatment and monitoring costs, disease-related costs, and quality-adjusted life years (QALYs) gained by preventing cardiovascular disease or lost because of drug side effects in untreated hypertensive adults aged 35-84 y over 2015-2025. Cost-effectiveness was assessed in cardiovascular disease patients (secondary prevention) and for two blood pressure ranges in primary prevention (stage one, 140-159/90-99 mm Hg; stage two, ≥160/≥100 mm Hg). Treatment of isolated systolic hypertension and combined systolic and diastolic hypertension were modeled as a reduction in systolic blood pressure; treatment of isolated diastolic hypertension was modeled as a reduction in diastolic blood pressure. One-way and probabilistic sensitivity analyses explored ranges of antihypertensive drug effectiveness and costs, monitoring frequency, medication adherence, side effect severity, background hypertension prevalence, antihypertensive medication treatment, case fatality, incidence and prevalence, and cardiovascular disease treatment costs. Median antihypertensive costs from Shanghai and Yunnan province were entered into the model in order to estimate the effects of very low and high drug prices. Incremental cost-effectiveness ratios less than the per capita gross domestic product of China (11,900 international dollars [Int$] in 2015) were considered cost-effective. Treating hypertensive adults with prior

  7. Capital cost models for geothermal power plants and fluid transmission systems. [GEOCOST

    Energy Technology Data Exchange (ETDEWEB)

    Schulte, S.C.

    1977-09-01

    The GEOCOST computer program is a simulation model for evaluating the economics of developing geothermal resources. The model was found to be both an accurate predictor of geothermal power production facility costs and a valid designer of such facilities. GEOCOST first designs a facility using thermodynamic optimization routines and then estimates costs for the selected design using cost models. Costs generated in this manner appear to correspond closely with detailed cost estimates made by industry planning groups. Through the use of this model, geothermal power production costs can be rapidly and accurately estimated for many alternative sites making the evaluation process much simpler yet more meaningful.

  8. Simple suggestions for including vertical physics in oil spill models

    International Nuclear Information System (INIS)

    D'Asaro, Eric; University of Washington, Seattle, WA

    2001-01-01

    Current models of oil spills include no vertical physics. They neglect the effect of vertical water motions on the transport and concentration of floating oil. Some simple ways to introduce vertical physics are suggested here. The major suggestion is to routinely measure the density stratification of the upper ocean during oil spills in order to develop a database on the effect of stratification. (Author)

  9. A life cycle cost economics model for projects with uniformly varying operating costs. [management planning

    Science.gov (United States)

    Remer, D. S.

    1977-01-01

    A mathematical model is developed for calculating the life cycle costs for a project where the operating costs increase or decrease in a linear manner with time. The life cycle cost is shown to be a function of the investment costs, initial operating costs, operating cost gradient, project life time, interest rate for capital and salvage value. The results show that the life cycle cost for a project can be grossly underestimated (or overestimated) if the operating costs increase (or decrease) uniformly over time rather than being constant as is often assumed in project economic evaluations. The following range of variables is examined: (1) project life from 2 to 30 years; (2) interest rate from 0 to 15 percent per year; and (3) operating cost gradient from 5 to 90 percent of the initial operating costs. A numerical example plus tables and graphs is given to help calculate project life cycle costs over a wide range of variables.
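The paper's point, that assuming constant operating costs can grossly misestimate life cycle cost when the costs actually vary linearly, can be reproduced with the standard uniform-series (P/A) and arithmetic-gradient (P/G) discount factors; the example figures below are mine, not the paper's:

```python
def life_cycle_cost(invest, op0, gradient, life, rate, salvage=0.0):
    """Present-worth life cycle cost when the year-k operating cost is
    op0 + gradient*(k-1), discounted at `rate` over `life` years."""
    i, n = rate, life
    pa = ((1 + i) ** n - 1) / (i * (1 + i) ** n)               # (P/A, i, n)
    pg = ((1 + i) ** n - i * n - 1) / (i ** 2 * (1 + i) ** n)  # (P/G, i, n)
    return invest + op0 * pa + gradient * pg - salvage / (1 + i) ** n

flat = life_cycle_cost(1000.0, 100.0, 0.0, 10, 0.08)
rising = life_cycle_cost(1000.0, 100.0, 10.0, 10, 0.08)
print(round(flat, 2), round(rising, 2))  # the constant-cost assumption undershoots
```

A negative gradient gives the opposite error: the constant-cost assumption then overestimates the life cycle cost.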

  10. Payload maintenance cost model for the space telescope

    Science.gov (United States)

    White, W. L.

    1980-01-01

    An optimum maintenance cost model for the space telescope over a fifteen year mission cycle was developed. Various document reviews and subsequent updates of failure rates and configurations were made. The reliability of the space telescope after one year, two and one half years, and five years was determined using the failure rates and configurations. The failure rates and configurations were also used in the maintenance simulation computer model, which simulates the failure patterns for the fifteen year mission life of the space telescope. Cost algorithms associated with the maintenance options indicated by the failure patterns were developed and integrated into the model.

  11. Stochastic Modeling of Traffic Air Pollution

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    2014-01-01

    In this paper, modeling of traffic air pollution is discussed with special reference to infrastructures. A number of subjects related to health effects of air pollution and the different types of pollutants are briefly presented. A simple model for estimating the social cost of traffic-related air pollution is derived. Several authors have published papers on this very complicated subject, but no stochastic modelling procedure has obtained general acceptance. The subject is discussed on the basis of a deterministic model. However, it is straightforward to modify this model to include uncertain parameters and to use simple Monte Carlo techniques to obtain a stochastic estimate of the costs of traffic air pollution for infrastructures.
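The suggested Monte Carlo extension, replacing point estimates in a deterministic cost model with distributions and sampling, can be sketched as follows; the cost structure and every distribution below are hypothetical placeholders, not values from the paper:

```python
import random

def social_cost_sample(rng):
    """One draw of annual social cost = traffic volume x emission factor x
    unit damage cost. All distributions here are illustrative assumptions."""
    vehicles = max(rng.gauss(50_000, 5_000), 0.0)   # vehicles per day
    grams_per_vehicle = rng.uniform(0.5, 1.5)       # pollutant per vehicle, g/day
    cost_per_gram = rng.uniform(0.01, 0.03)         # damage cost, EUR per g
    return vehicles * grams_per_vehicle * cost_per_gram * 365

rng = random.Random(42)
draws = [social_cost_sample(rng) for _ in range(10_000)]
mean = sum(draws) / len(draws)
p95 = sorted(draws)[int(0.95 * len(draws))]
print(f"mean ~ EUR {mean:,.0f}/yr, 95th percentile ~ EUR {p95:,.0f}/yr")
```

The deterministic model is recovered by collapsing each distribution to its central value; the Monte Carlo version additionally yields percentiles and other uncertainty measures.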

  12. A simple analytical infiltration model for short-duration rainfall

    Science.gov (United States)

    Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming

    2017-12-01

    Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, i.e., the Short-duration Infiltration Process model (SHIP model). The infiltration simulated by 5 models (i.e., SHIP (high) model, SHIP (middle) model, SHIP (low) model, Philip model and Parlange model) was compared based on numerical experiments and soil column experiments. In numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in soil column experiments, the infiltration rate fluctuated in a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
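For reference, the classical Philip model used as one of the benchmarks above has a simple closed form: cumulative infiltration I(t) = S·√t + A·t, with sorptivity S and a steady term A. A sketch with illustrative (not fitted) parameter values:

```python
import math

def philip_cumulative(t, sorptivity, a_term):
    """Philip's two-term cumulative infiltration: I(t) = S*sqrt(t) + A*t."""
    return sorptivity * math.sqrt(t) + a_term * t

def philip_rate(t, sorptivity, a_term):
    """Infiltration rate i(t) = dI/dt = S/(2*sqrt(t)) + A."""
    return sorptivity / (2.0 * math.sqrt(t)) + a_term

S, A = 2.0, 0.5   # cm/h^0.5 and cm/h; illustrative values, not fitted ones
for t in (0.25, 1.0, 4.0):
    print(t, round(philip_cumulative(t, S, A), 3), round(philip_rate(t, S, A), 3))
```

The rate decays toward A as t grows, which is why short-duration events, where the √t term dominates and initial conditions matter most, are the hard case the SHIP model targets.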

  13. A new methodology for modeling of direct landslide costs for transportation infrastructures

    Science.gov (United States)

    Klose, Martin; Terhorst, Birgit

    2014-05-01

    The world's transportation infrastructure is at risk of landslides in many areas across the globe. Safe and affordable operation of traffic routes are the two main criteria for transportation planning in landslide-prone areas. The right balancing of these often conflicting priorities requires, amongst other things, profound knowledge of the direct costs of landslide damage. These costs include capital investments for landslide repair and mitigation as well as operational expenditures for first response and maintenance works. This contribution presents a new methodology for ex post assessment of direct landslide costs for transportation infrastructures. The methodology includes tools to compile, model, and extrapolate landslide losses on different spatial scales over time. A landslide susceptibility model enables regional cost extrapolation by means of a cost figure obtained from local cost compilation for representative case study areas. On the local level, cost survey is closely linked with cost modeling, a toolset for cost estimation based on landslide databases. Cost modeling uses Landslide Disaster Management Process Models (LDMMs) and cost modules to simulate and monetize cost factors for certain types of landslide damage. The landslide susceptibility model provides a regional exposure index and updates the cost figure to a cost index which describes the costs per km of traffic route at risk of landslides. Both indexes enable the regionalization of local landslide losses. The methodology is applied and tested in a cost assessment for highways in the Lower Saxon Uplands, NW Germany, in the period 1980 to 2010. The basis of this research is a regional subset of a landslide database for the Federal Republic of Germany. In the 7,000 km² Lower Saxon Uplands, 77 km of highway are located in potential landslide hazard area. Annual average costs of 52k per km of highway at risk of landslides are identified as the cost index for a local case study area in this region. The

  14. Solar PV Manufacturing Cost Model Group: Installed Solar PV System Prices (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Goodrich, A. C.; Woodhouse, M.; James, T.

    2011-02-01

    EERE's Solar Energy Technologies Program is charged with leading the Secretary's SunShot Initiative to reduce the cost of electricity from solar by 75% to be cost competitive with conventional energy sources without subsidy by the end of the decade. As part of this Initiative, the program has funded the National Renewable Energy Laboratory (NREL) to develop module manufacturing and solar PV system installation cost models to ensure that the program's cost reduction targets are carefully aligned with current and near-term industry costs. The NREL cost analysis team has leveraged the laboratory's extensive experience in the areas of project finance and deployment, as well as industry partnerships, to develop cost models that mirror the project cost analysis tools used by project managers at leading U.S. installers. The cost models are constructed through a "bottom-up" assessment of each major cost element, beginning with the system's bill of materials, labor requirements (type and hours) by component, site-specific charges, and soft costs. In addition to the relevant engineering, procurement, and construction costs, the models also consider all relevant costs to an installer, including labor burdens and overhead rates, supply chain costs, and overhead and materials inventory costs, and assume market-specific profits.

  15. Environmental Parametric Cost Model in Oil and Gas EPC Contracts

    Directory of Open Access Journals (Sweden)

    Madjid Abbaspour

    2018-01-01

    Full Text Available This study aims at identifying the parameters that govern environmental costs in oil and gas projects. An initial conceptual model was proposed. Next, the costs of environmental management work packages were estimated separately and applied in project control tools (WBS/CBS). Then, an environmental parametric cost model was designed to determine the environmental costs and relevant weighting factors. The suggested model can be considered an innovative approach to designating environmental indicators in oil and gas projects. The validity of the variables was investigated based on the Delphi method. The results indicated that the project environmental management weighting factor is 0.87% of the total project weighting factor.

  16. On Transaction-Cost Models in Continuous-Time Markets

    Directory of Open Access Journals (Sweden)

    Thomas Poufinas

    2015-04-01

    Full Text Available Transaction-cost models in continuous-time markets are considered. Given that investors decide to buy or sell at certain time instants, we study the existence of trading strategies that reach a certain final wealth level in continuous-time markets, under the assumption that transaction costs, built in certain recommended ways, have to be paid. Markets prove to behave in manners that resemble those of complete ones for a wide variety of transaction-cost types. The results are important, but not exclusively, for the pricing of options with transaction costs.

  17. Dynamic modeling and simulation of power transformer maintenance costs

    Directory of Open Access Journals (Sweden)

    Ristić Olga

    2016-01-01

    Full Text Available The paper presents a dynamic model of the maintenance costs of power transformer functional components. Reliability is modeled by combining the exponential and Weibull distributions. The simulation covers corrective maintenance together with installation of a continuous monitoring system for the most critical components. The Simulation Dynamic System (SDS) method and VENSIM PLE software were used to simulate the costs. In this way, significant savings in maintenance costs can be achieved with a small initial investment. [Project of the Ministry of Science of the Republic of Serbia, no. III 41025 and no. OI 171007]

  18. A drug cost model for injuries due to road traffic accidents.

    Directory of Open Access Journals (Sweden)

    Riewpaiboon A

    2008-03-01

    Full Text Available Objective: This study aimed to develop a drug cost model for injuries due to road traffic accidents for patients receiving treatment at a regional hospital in Thailand. Methods: The study was designed as a retrospective, descriptive analysis. The cases were all road traffic accident patients receiving treatment at a public regional hospital in the fiscal year 2004. Results: Three thousand seven hundred and twenty-three road accident patients were included in the study. The mean drug cost per case was USD18.20 (SD=73.49, median=2.36). The fitted drug cost model had an adjusted R2 of 0.449. The significant positive predictor variables of drug costs were prolonged length of stay, age over 30 years old, male sex, the Universal Health Coverage Scheme, time of accident during 18:00-24:00, and motorcycle compared to bus. To forecast the drug budget for 2006, two approaches were identified: the mean drug cost and the predicted average drug cost. The predicted average drug cost was calculated based on the forecasted values of statistically significant (p<0.05) predictor variables included in the fitted model; the predicted total drug cost was USD44,334. Alternatively, based on the mean cost, the predicted total drug cost in 2006 was USD63,408. This was 43% higher than the figure based on the predicted cost approach. Conclusions: The planned drug budget based on the mean cost and the predicted average cost were meaningfully different. The application of a predicted average cost model could result in more accurate budget planning than that of a mean statistic approach.
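The gap between the two budgeting approaches can be illustrated with hypothetical numbers (the strata and per-case costs below are invented for illustration, not the paper's data):

```python
def budget_from_mean(mean_cost, n_cases):
    """Approach 1: mean drug cost per case times the expected caseload."""
    return mean_cost * n_cases

def budget_from_model(groups):
    """Approach 2: sum over patient strata of predicted cost per case times
    the expected number of cases in that stratum."""
    return sum(cost * n for cost, n in groups)

# Hypothetical strata: a few long-stay cases pull the mean far above the median.
strata = [(2.5, 3000), (25.0, 600), (180.0, 120)]   # (USD per case, cases)
n_total = sum(n for _, n in strata)
mean_cost = budget_from_model(strata) / n_total      # observed mean cost per case
print(round(budget_from_mean(mean_cost, 4000), 2))   # mean-based forecast
print(budget_from_model([(2.5, 3200), (25.0, 650), (180.0, 150)]))  # model-based
```

Because drug costs are heavily right-skewed (mean USD18.20 vs median USD2.36 in the abstract), scaling the mean by the caseload and summing stratum-level predictions generally give different totals, which is exactly the 43% discrepancy the study reports.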

  19. Process setting models for the minimization of costs of defectives

    African Journals Online (AJOL)

    Dr Obe

    determine the mean setting so as to minimise the total loss through under-limit complaints and loss of sales and goodwill as well as over-limit losses through excess materials and rework costs. Models are developed for the two types of setting of the mean so that the minimum costs of losses are achieved. Also, a model is ...
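A minimal numerical version of this mean-setting problem, under an assumed cost structure (a normally distributed process, a fixed penalty for each under-limit item, and material cost proportional to the mean setting), can be sketched as:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def expected_cost(mu, sigma, limit, under_cost, material_cost):
    """Expected cost per item: under-limit penalty plus material cost
    proportional to the mean setting (an illustrative cost structure)."""
    p_under = norm_cdf((limit - mu) / sigma)
    return under_cost * p_under + material_cost * mu

def best_setting(sigma, limit, under_cost, material_cost):
    """Grid search for the cost-minimizing mean over limit .. limit + 5*sigma."""
    grid = [limit + k * sigma / 100.0 for k in range(0, 501)]
    return min(grid, key=lambda m: expected_cost(m, sigma, limit, under_cost, material_cost))

mu_star = best_setting(sigma=1.0, limit=100.0, under_cost=50.0, material_cost=0.2)
print(round(mu_star, 2))  # the optimal mean sits a few sigma above the limit
```

The optimum balances the marginal under-limit loss against the marginal material cost, which is the trade-off the abstract describes between complaint/goodwill losses and excess-material/rework losses.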

  20. Simple model for polar cap convection patterns and generation of theta auroras

    International Nuclear Information System (INIS)

    Lyons, L.R.

    1985-01-01

    The simple addition of a uniform interplanetary magnetic field and the Earth's dipole magnetic field is used to evaluate electric field convection patterns over the polar caps that result from solar wind flow across open geomagnetic field lines. This model is found to account for observed polar-cap convection patterns as a function of the interplanetary magnetic field components B_y and B_z. In particular, the model offers an explanation for sunward and antisunward convection over the polar caps for B_z > 0. Observed field-aligned current patterns within the polar cap and observed auroral arcs across the polar cap are also explained by the model. In addition, the model gives several predictions concerning the polar cap that should be testable. Effects of solar wind pressure and magnetospheric currents on magnetospheric electric and magnetic fields are neglected. That observed polar cap features are reproduced suggests that the neglected effects do not modify the large-scale topology of magnetospheric electric and magnetic fields along open polar cap field lines. Of course, the neglected effects significantly modify the magnetic geometry, so the results of this paper are not quantitatively realistic and many details may be incorrect. Nevertheless, the model provides a simple explanation for many qualitative features of polar cap convection
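The superposition at the heart of the model, a point dipole plus a uniform interplanetary field, is easy to reproduce numerically; the dipole moment, evaluation point, and IMF vector below are illustrative values, not ones from the paper:

```python
import math

MU0_4PI = 1e-7  # mu0 / (4*pi) in SI units (T*m/A)

def dipole_field(m, r):
    """Dipole field B = (mu0/4pi) * (3(m.rhat)rhat - m) / |r|^3 at point r."""
    rm = math.sqrt(sum(c * c for c in r))
    rhat = [c / rm for c in r]
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    return [MU0_4PI * (3.0 * mdotr * rh - mi) / rm ** 3 for rh, mi in zip(rhat, m)]

def total_field(m, r, b_imf):
    """The model's superposition: dipole field plus a uniform IMF."""
    return [bd + bu for bd, bu in zip(dipole_field(m, r), b_imf)]

m = [0.0, 0.0, -8.0e22]   # A*m^2, roughly Earth's dipole moment (sign convention illustrative)
r = [6.371e7, 0.0, 0.0]   # 10 Earth radii along x
b = total_field(m, r, [0.0, 2e-9, -5e-9])   # an IMF with B_y > 0, B_z < 0 (assumed values)
print([f"{c:.2e}" for c in b])  # tens of nT from the dipole at this distance
```

Tracing field lines of this summed field separates closed dipole-like lines from open lines connected to the IMF, which is what the convection-pattern analysis in the abstract exploits.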

  1. Simple unification

    International Nuclear Information System (INIS)

    Ponce, W.A.; Zepeda, A.

    1987-08-01

    We present the results obtained from our systematic search of a simple Lie group that unifies weak and electromagnetic interactions in a single truly unified theory. We work with fractionally charged quarks, and allow for particles and antiparticles to belong to the same irreducible representation. We found that models based on SU(6), SU(7), SU(8) and SU(10) are viable candidates for simple unification. (author). 23 refs

  2. A simple 1D model with thermomechanical coupling for superelastic SMAs

    International Nuclear Information System (INIS)

    Zaki, W; Morin, C; Moumni, Z

    2010-01-01

    This paper presents an outline for a new uniaxial model for shape memory alloys that accounts for thermomechanical coupling. The coupling provides an explanation of the dependence of SMA behavior on the loading rate. 1D simulations are carried out in Matlab using a simple finite-difference discretization of the mechanical and thermal equations.

  3. X-1 to X-Wings: Developing a Parametric Cost Model

    Science.gov (United States)

    Sterk, Steve; McAtee, Aaron

    2015-01-01

    In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough order of magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper takes a look at the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.
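A CER of the common power-law form cost = a·x^b is typically derived by least squares on log-transformed historical data; the data points below are hypothetical, not from the NASA database:

```python
import math

def fit_power_law(xs, ys):
    """Fit cost = a * x**b by ordinary least squares in log-log space,
    the standard way simple single-variable CERs are derived."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical historical X-plane data points: (empty weight in klb, cost in $M)
weights = [5.0, 10.0, 20.0, 40.0]
costs = [120.0, 200.0, 340.0, 560.0]
a, b = fit_power_law(weights, costs)
print(f"CER: cost ~ {a:.1f} * W^{b:.2f}")  # exponent < 1 suggests economies of scale
```

Plugging a new concept's driver value into the fitted relationship gives the rough order of magnitude estimate the abstract calls for.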

  4. Cost Analysis of a Digital Health Care Model in Sweden.

    Science.gov (United States)

    Ekman, Björn

    2017-09-22

    Digital technologies in health care are expected to increase in scope and to affect ever more parts of the health care system. It is important to enhance the knowledge of whether new digital methods and innovations provide value for money compared with traditional models of care. The objective of the study was to evaluate whether a digital health care model for primary care is a less costly alternative compared with traditional in-office primary care in Sweden. Cost data for the two care models were collected and analyzed to obtain a measure in local currency per care contact. The comparison showed that the total economic cost of a digital consultation is 1960 Swedish krona (SEK) (SEK100 = US$11.29; February 2017) compared with SEK3348 for a traditional consultation at a health care clinic. Cost differences arose on both the provider side and on the user side. The digital health care model may be a less costly alternative to the traditional health care model. Depending on the rate of digital substitution, gross economic cost savings of between SEK1 billion and SEK10 billion per year could be realized if more digital consultations were made. Further studies are needed to validate the findings, assess the types of care most suitable for digital care, and also to obtain various quality-adjusted outcome measures.
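Using the two per-contact costs reported above, the SEK1-10 billion gross-saving range can be reproduced with a back-of-the-envelope calculation; the annual consultation volume is an assumed round figure, not a number from the study:

```python
# Per-contact economic costs from the study: SEK1960 digital vs SEK3348 in-office.
COST_DIGITAL, COST_OFFICE = 1960, 3348

def gross_saving(total_consultations, substitution_rate):
    """Gross saving if a share of consultations shifts to the digital model."""
    shifted = total_consultations * substitution_rate
    return shifted * (COST_OFFICE - COST_DIGITAL)

# ~14 million primary-care consultations per year is an assumed round figure:
for rate in (0.1, 0.5):
    print(f"{rate:.0%}: SEK {gross_saving(14_000_000, rate) / 1e9:.1f} billion")
```

At a SEK1388 difference per contact, substitution rates of roughly 10% and 50% bracket the study's stated SEK1-10 billion range.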

  5. INTEGRATED COST MODEL FOR IMPROVING THE PRODUCTION IN COMPANIES

    Directory of Open Access Journals (Sweden)

    Zuzana Hajduova

    2014-12-01

    Full Text Available Purpose: All processes in the company play an important role in ensuring a functional integrated management system. We point out the importance of a systematic approach to the use of quantitative, and especially statistical, methods for modelling the cost of the improvement activities that are part of an integrated management system. The development of integrated management systems worldwide leads towards building systematic procedures for the implementation, maintenance and improvement of all systems according to the requirements of all the parties involved. Methodology: Statistical evaluation of the economic indicators of improvement costs, and a systematic approach to their management in terms of integrated management systems, play a key role in the management of processes in the company Cu Drôt, a.s. The aim of this publication is to highlight the importance of proper implementation of statistical methods in the process of improvement cost management in an integrated management system under current market conditions, and to document the legitimacy of a systematic approach in the area of monitoring and analysing indicators of improvement, with the aim of efficient process management of the company. We provide a specific example of the implementation of appropriate statistical methods in the production of copper wire in the company Cu Drôt, a.s. This publication also aims to create a model for the estimation of integrated improvement costs, which, through the use of statistical methods in the company Cu Drôt, a.s., is used to support decision-making on improving efficiency. Findings: In the present publication, a method for modelling the improvement process in an integrated manner is proposed. It is a method in which the basic attributes of improvement in quality, safety and environment are considered and synergistically combined in the same improvement project. The work examines the use of sophisticated quantitative, especially

  6. Multivariable Parametric Cost Model for Ground Optical Telescope Assembly

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2005-01-01

    A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter are derived.

  7. Multivariable Parametric Cost Model for Ground Optical: Telescope Assembly

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e. multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.

  8. The deployment of electricity generation from renewable energies in Germany and Spain: A comparative analysis based on a simple model

    International Nuclear Information System (INIS)

    Fernández Fernández, Pablo; Ortiz, Eunice Villicaña; Bernat, Jorge Xiberta

    2013-01-01

    The fulfilment of the aims set by the European Union in the deployment of renewable energy sources for electricity generation (RES-E) has counted and must continue to count on public funding from the member states, which promote private investment in this type of facilities. This funding guarantees a cost-oriented remuneration which, being higher than the market price, means an additional cost to the electricity system. With the aim of minimizing the economic impact as the weight of RES-E in the electricity mix increases, the generation costs of renewable units must approach those of the market, which are expected to increase according to fossil fuel price forecasts. The present study analyzes RES-E development and deployment in Spain and Germany, two pioneering countries worldwide with very similar electricity systems. Based on their national action plans and a simple model, this analysis estimates the RES-E surcharge, comparing and contrasting the results obtained in both countries. - Highlights: ► Policies must be assessed according to the surcharge caused per unit generated. ► Surcharge evolution is well fitted by an Erlang-like distribution. ► About two-thirds of the decade's surcharge shall be devoted to units commissioned by 2010. ► Germany focused on technology development, while Spain focused on deployment

  9. A simple model of discontinuous firm’s growth

    OpenAIRE

    D'Elia, Enrico

    2011-01-01

    Typically, firms change their size through a row of discrete leaps over time. Sunk costs, regulatory, financial and organizational constraints, talent distribution and other factors may explain this fact. However, firms tend to grow or fall discontinuously even if those inertial factors were removed. For instance, a very essential model of discontinuous growth can be based on a couple of assumptions concerning only technology and entrepreneurs’ strategy, that is: (a) in the short run, the...

  10. NUMERICAL SIMULATION OF FLOW OVER TWO-DIMENSIONAL MOUNTAIN RIDGE USING SIMPLE ISENTROPIC MODEL

    Directory of Open Access Journals (Sweden)

    Siswanto Siswanto

    2009-07-01

    Full Text Available Model sederhana isentropis telah diaplikasikan untuk mengidentifikasi perilaku aliran masa udara melewati topografi sebuah gunung. Dalam model isentropis, temperature potensial θ digunakan sebagai koordinat vertikal dalam rezim aliran adiabatis. Medan angin dalam arah vertikal dihilangkan dalam koordinat isentropis sehingga mereduksi sistim tiga dimensi menjadi sistim dua dimensi lapisan θ. Skema komputasi beda hingga tengah telah digunakan untuk memformulasikan model adveksi. Paper ini membahas aplikasi sederhana dari model isentropis untuk mempelajari gelombang gravitasi dan fenomena angin gunung  dengan desain komputasi periodik dan kondisi batas lateral serta simulasi dengan topografi yang berbeda.   The aim of this work is to study turbulent flow over two-dimensional hill using a simple isentropic model. The isentropic model is represented by applying the potential temperature θ, as the vertical coordinate and is conversed in adiabatic flow regimes. This implies a vanishing vertical wind in isentropic coordinates which reduces the three dimensional system to a stack of two dimensional θ–layers. The equations for each isentropic layer are formally identical with the shallow water equation. A computational scheme of centered finite differences is used to formulate an advective model. This work reviews a simple isentropic model application to investigate gravity wave and mountain wave phenomena regard to different experimental design of computation and topographic height.

  11. The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making

    OpenAIRE

    Gabriela Tavares; Pietro Perona; Antonio Rangel

    2017-01-01

    Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find e...

  12. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    Science.gov (United States)

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
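    The core arithmetic of a time-driven ABC model, as described above, can be sketched in a few lines. The service names, activity times, throughput, and labor rate below are hypothetical placeholders, not the laboratory's actual figures; only the 8,400 min/wk practical labor capacity is taken from the abstract.

```python
# Time-driven activity-based costing: cost of a service unit equals the
# minutes the activity requires times the unit cost of a labor minute.
# All activity times, volumes, and the labor rate are illustrative.

CAPACITY_MIN_PER_WEEK = 8400  # practical labor capacity (min/wk), from the abstract

def activity_cost(minutes_per_unit, cost_per_minute):
    """Cost of delivering one unit of a service under time-driven ABC."""
    return minutes_per_unit * cost_per_minute

services = {
    # service name: (minutes per unit, units performed per week) -- assumed
    "dna_microinjection": (300, 10),
    "embryo_transfer": (120, 25),
}

cost_per_minute = 1.25  # fully loaded employee cost per labor minute (assumed)

total_minutes = sum(m * n for m, n in services.values())
weekly_cost = sum(activity_cost(m, cost_per_minute) * n
                  for m, n in services.values())
overloaded = total_minutes > CAPACITY_MIN_PER_WEEK
```

With these placeholder inputs the weekly labor demand (6,000 min) stays below capacity; the laboratory in the study, by contrast, found demand of 10,645 min/wk against a capacity of 8,400 min/wk.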

  13. The Cost of Cache-Oblivious Searching

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gert Stølting; Fagerberg, Rolf

    2003-01-01

    Tight bounds on the cost of cache-oblivious searching are proved. It is shown that no cache-oblivious search structure can guarantee that a search performs fewer than lg e log B N block transfers between any two levels of the memory hierarchy. This lower bound holds even if all of the block sizes ... the random placement of the first element of the structure in memory. As searching in the Disk Access Model (DAM) can be performed in log B N + 1 block transfers, this result shows a separation between the 2-level DAM and cache-oblivious memory-hierarchy models. By extending the DAM model to k levels, multilevel memory hierarchies can be modelled. It is shown that as k grows, the search costs of the optimal k-level DAM search structure and of the optimal cache-oblivious search structure rapidly converge. This demonstrates that for a multilevel memory hierarchy, a simple cache-oblivious structure almost ...

  14. User Delay Cost Model and Facilities Maintenance Cost Model for a Terminal Control Area : Volume 2. User's Manual and Program Documentation for the User Delay Cost Model

    Science.gov (United States)

    1978-05-01

    The User Delay Cost Model (UDCM) is a Monte Carlo simulation of certain classes of movement of air traffic in the Boston Terminal Control Area (TCA). It incorporates a weather module, an aircraft generation module, a facilities module, and an air con...

  15. An analysis of electric utility embedded power supply costs

    International Nuclear Information System (INIS)

    Kahal, M.; Brown, D.

    1998-01-01

    There is little doubt that for the vast majority of electric utilities the embedded costs of power supply exceed market prices, giving rise to the stranded cost problem. Beyond that simple generalization, there are a number of crucial questions, which this study attempts to answer. What are the regional patterns of embedded cost differences? To what extent is the cost problem attributable to nuclear power? How does the cost of purchased power compare to the cost of utility self-generation? What is the breakdown of utility embedded generation costs between operating costs, which are potentially avoidable, and ownership costs, which by definition are ''sunk'' and therefore not avoidable? How will embedded generation costs and market prices compare over time? These are the crucial questions for states as they address retail-restructuring proposals. This study presents an analysis of generation costs that addresses these key questions. A computerized costing model was developed and applied using FERC Form 1 data for 1995. The model analyzed embedded power supply costs (i.e., self-generation plus purchased power) for two groups of investor-owned utilities: 49 non-nuclear and 63 nuclear. These two subsamples represent substantially the entire US investor-owned electric utility industry. For each utility, embedded cost is estimated both at the busbar and at the meter.

  16. How much does it cost? The LIFE Project - Costing Models for Digital Curation and Preservation

    Directory of Open Access Journals (Sweden)

    Richard Davies

    2007-11-01

    Full Text Available Digital preservation is concerned with the long-term safekeeping of electronic resources. How can we be confident of their permanence, if we do not know the cost of preservation? The LIFE (Lifecycle Information for E-Literature) Project has made a major step forward in understanding the long-term costs in this complex area. The LIFE Project has developed a methodology to model the digital lifecycle and to calculate the costs of preserving digital information for the next 5, 10 or 100 years. National and higher education (HE) libraries can now apply this process and plan effectively for the preservation of their digital collections. Based on previous work undertaken on the lifecycles of paper-based materials, the LIFE Project created a lifecycle model and applied it to real-life digital collections across a diverse subject range. Three case studies examined the everyday operations, processes and costs involved in their respective activities. The results were then used to calculate the direct costs for each element of the digital lifecycle. The Project has made major advances in costing preservation activities, as well as making detailed costs of real digital preservation activities available. The second phase of LIFE (LIFE2), which recently started, aims to refine the lifecycle methodology and to add a greater range and breadth to the project with additional exemplar case studies.

  17. Cost Effective Community Based Dementia Screening: A Markov Model Simulation

    Directory of Open Access Journals (Sweden)

    Erin Saito

    2014-01-01

    Full Text Available Background. Given the dementia epidemic and the increasing cost of healthcare, there is a need to assess the economic benefit of community-based dementia screening programs. Materials and Methods. Markov model simulations were generated using data obtained from a community-based dementia screening program over a one-year period. The models simulated yearly costs of caring for patients based on clinical transitions beginning in pre-dementia and extending for 10 years. Results. A total of 93 individuals (74 female, 19 male) were screened for dementia, and 12 meeting clinical criteria for either mild cognitive impairment (n=7) or dementia (n=5) were identified. Assuming early therapeutic intervention beginning during the year of dementia detection, Markov model simulations demonstrated a 9.8% reduction in the cost of dementia care over a ten-year simulation period, primarily through increased duration in mild stages and reduced time in the more costly moderate and severe stages. Discussion. Community-based dementia screening can reduce healthcare costs associated with caring for demented individuals through earlier detection and treatment, resulting in proportionately reduced time in the more costly advanced stages.
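    A Markov cohort simulation of the kind described can be sketched as follows. The states, transition probabilities, and yearly stage costs are hypothetical illustrations, not the study's fitted values.

```python
# Markov cohort simulation of yearly dementia-care costs.
# States, one-year transition probabilities, and per-stage yearly costs
# are hypothetical placeholders.

STATES = ["mild", "moderate", "severe", "dead"]

# TRANSITION[i][j]: probability of moving from state i to state j in one year
TRANSITION = [
    [0.70, 0.20, 0.05, 0.05],  # mild
    [0.00, 0.60, 0.30, 0.10],  # moderate
    [0.00, 0.00, 0.80, 0.20],  # severe
    [0.00, 0.00, 0.00, 1.00],  # dead (absorbing, no cost)
]

YEARLY_COST = [10_000, 30_000, 60_000, 0]  # cost of care per person-year

def simulate(start=(1.0, 0.0, 0.0, 0.0), years=10):
    """Expected cumulative cost per person over the simulation horizon."""
    dist, total = list(start), 0.0
    for _ in range(years):
        total += sum(p * c for p, c in zip(dist, YEARLY_COST))
        dist = [sum(dist[i] * TRANSITION[i][j] for i in range(4))
                for j in range(4)]
    return total
```

Early intervention can then be modeled by lowering the mild-to-moderate transition probability, which lengthens time spent in the cheaper mild stage and lowers the cumulative cost, mirroring the mechanism the abstract describes.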

  18. Update on Parametric Cost Models for Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2011-01-01

    Since the June 2010 Astronomy Conference, an independent review of our cost database discovered some inaccuracies and inconsistencies which can modify our previously reported results. This paper will review changes to the database, our confidence in those changes, and their effect on various parametric cost models.

  19. Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks

    International Nuclear Information System (INIS)

    Lin Min; Wang Gang; Chen Tianlun

    2006-01-01

    A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlation. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.

  20. On the relation between cost and service models for general inventory systems

    NARCIS (Netherlands)

    Houtum, van G.J.J.A.N.; Zijm, W.H.M.

    2000-01-01

    In this paper, we present a systematic overview of possible relations between cost and service models for fairly general single- and multi-stage inventory systems. In particular, we relate various types of penalty costs in pure cost models to equivalent types of service measures in service models.

  1. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity on a large scale. We use a Shuttle Radar Topography Mission based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion for a falling body on an inclined plane. We assume an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
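    The inelastic-collision assumption at a junction amounts to momentum conservation for the two merging water masses. A minimal sketch (the function name and the mass/velocity framing are ours, not the paper's):

```python
# Merged flow velocity at a river junction, treating the confluence as a
# perfectly inelastic collision: total momentum is conserved while the two
# incoming masses combine into one.

def merged_velocity(m1, v1, m2, v2):
    """Velocity after branches with masses m1, m2 and velocities v1, v2 merge."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)
```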

  2. A simple model for the dynamic analysis of deteriorating structures

    International Nuclear Information System (INIS)

    Andreaus, U.; Ceradini, G.; D'Asdia, P.

    1983-01-01

    A simple model exhibiting a multi-linear constitutive law is presented which describes the behaviour of structural members and subassemblages under severe cyclic loading. The proposed model allows for: 1) the pinched form of force-displacement diagrams due to, e.g., cracks in reinforced concrete members and masonry panels; 2) slippage effects due to lack of bond of steel bars in reinforced concrete and clearances in steel bolted connections; 3) post-buckling behaviour of subassemblages with unstable members; 4) cumulative damage affecting strength and/or stiffness at low-cycle fatigue. The parameters governing the model behaviour have to be estimated on the basis of experimental results. The model is well suited for analysis under statically applied cyclic displacements and forces, and under earthquake excitation. An X-type bracing system is then worked out where the member behaviour is schematized according to the proposed model. (orig.)

  3. Latest NASA Instrument Cost Model (NICM): Version VI

    Science.gov (United States)

    Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary

    2014-01-01

    The NASA Instrument Cost Model, NICM, is a suite of tools which allow for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) the NICM-E cost estimating relationship, which is applicable for instruments flying on Explorer-like class missions; 2) the new cluster analysis ability which, alongside the results of the parametric cost estimation for the user's instrument, also provides a visualization of the user's instrument's similarity to previously flown instruments; and 3) new cost estimating relationships for in-situ instruments.

  4. Linear versus quadratic portfolio optimization model with transaction cost

    Science.gov (United States)

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    Optimization models have become one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the best model that can fulfil their investment goals with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has proven significant and popular. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction costs when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over another and shed some light in the quest to find the best decision-making tool in investment for individual investors.
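    The effect of transaction costs on portfolio return, as discussed above, can be sketched with a proportional-cost model. The weights, expected returns, and cost rate below are illustrative placeholders, not figures from the Bursa Malaysia data.

```python
# Net portfolio return after proportional transaction costs: the gross
# expected return of the new allocation minus a cost proportional to the
# turnover (total absolute change in weights). All inputs are illustrative.

def net_return(old_w, new_w, returns, cost_rate=0.005):
    """Gross expected return of new_w minus proportional reallocation costs."""
    gross = sum(w * r for w, r in zip(new_w, returns))
    turnover = sum(abs(a - b) for a, b in zip(new_w, old_w))
    return gross - cost_rate * turnover

current = [0.5, 0.5]        # current portfolio weights (assumed)
candidate = [0.8, 0.2]      # proposed reallocation (assumed)
expected = [0.10, 0.04]     # expected asset returns (assumed)

with_cost = net_return(current, candidate, expected)
no_move = net_return(current, current, expected)  # staying put costs nothing
```

Ignoring the cost term makes the candidate allocation look better than it is, which is exactly why the abstract argues transaction costs must enter the reallocation decision.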

  5. BRICK v0.2, a simple, accessible, and transparent model framework for climate and regional sea-level projections

    Directory of Open Access Journals (Sweden)

    T. E. Wong

    2017-07-01

    Full Text Available Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regard to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.

  6. Rotational modes of a simple Earth model

    Science.gov (United States)

    Seyed-Mahmoud, B.; Rochester, M. G.; Rogister, Y. J. G.

    2017-12-01

    We study the tilt-over mode (TOM), the spin-over mode (SOM), the free core nutation (FCN), and their relationships to each other using a simple Earth model with a homogeneous and incompressible liquid core and a rigid mantle. Analytical solutions for the periods of these modes, as well as that of the Chandler wobble, are found for the Earth model. We show that the FCN is the same mode as the SOM of a wobbling Earth. The reduced pressure, in terms of which the vector momentum equation is known to reduce to a scalar second-order differential equation (the so-called Poincaré equation), is used as the independent variable. Analytical solutions are then found for the displacement eigenfunctions in a meridional plane of the liquid core for the aforementioned modes. We show that the magnitude of motion in the mantle during the FCN is comparable to that in the liquid core, and hence very small. The displacement eigenfunctions for these modes, as well as those for the free inner core nutation (FICN), computed numerically, are also given for a three-layer Earth model which also includes a rigid inner core that is capable of wobbling. We discuss the slow convergence of the period of the FICN in terms of the characteristic surfaces of the Poincaré equation.

  7. Calibration of a simple and a complex model of global marine biogeochemistry

    Science.gov (United States)

    Kriest, Iris

    2017-11-01

    The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.

  8. Understanding the Shape of the Land and Watersheds Using Simple Models in the Classroom

    Science.gov (United States)

    Gardiner, L.; Johnson, R.; Russell, R.; Bergman, J.; Genyuk, J.; Lagrave, M.

    2006-12-01

    Middle school students can gain essential understandings of the Earth and its processes in the classroom by making and manipulating simple models. While no substitute for field experiences, simple models made of easily-obtained materials can foster student understanding of natural environments. Through this collection of hands-on activities, students build and manipulate simple models that demonstrate (1) tectonic processes that shape the land, (2) the shape of the land surface, (3) how the shape of the land influences the distribution of waterways and watersheds, and (4) how the human communities within a watershed are interconnected through use of surface water. The classroom activities described in this presentation are available on Windows to the Universe (www.windows.ucar.edu), a project of the University Corporation for Atmospheric Research Office of Education and Outreach. Windows to the Universe, a long-standing Web resource supporting Earth and space science education, provides users with content about the Earth and space sciences at three levels (beginner, intermediate, and advanced) in English and Spanish. Approximately 80 hands-on classroom activities appropriate for K-12 classrooms are available within the teacher resources section of the Windows to the Universe.

  9. Renewable Energy Cost Modeling: A Toolkit for Establishing Cost-Based Incentives in the United States; March 2010 -- March 2011

    Energy Technology Data Exchange (ETDEWEB)

    Gifford, J. S.; Grace, R. C.; Rickerson, W. H.

    2011-05-01

    This report is intended to serve as a resource for policymakers who wish to learn more about establishing cost-based incentives. The report will identify key renewable energy cost modeling options, highlight the policy implications of choosing one approach over the other, and present recommendations on the optimal characteristics of a model to calculate rates for cost-based incentives, feed-in tariffs (FITs), or similar policies. These recommendations will be utilized in designing the Cost of Renewable Energy Spreadsheet Tool (CREST). Three CREST models will be publicly available and capable of analyzing the cost of energy associated with solar, wind, and geothermal electricity generators. The CREST models will be developed for use by state policymakers, regulators, utilities, developers, and other stakeholders to assist them in current and future rate-setting processes for both FIT and other renewable energy incentive payment structures and policy analyses.

  10. A Probabilistic Cost Estimation Model for Unexploded Ordnance Removal

    National Research Council Canada - National Science Library

    Poppe, Peter

    1999-01-01

    ...) contaminated sites that the services must decontaminate. Existing models for estimating the cost of UXO removal often require a high level of expertise and provide only a point estimate for the costs...

  11. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    Science.gov (United States)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

    Simple cells in the primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model combined with non-classical receptive fields was proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model is better able to imitate a simple cell's physiological structure by taking into account the facilitation and suppression of non-classical receptive fields. On this basis, an edge detection algorithm was proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  12. A simple model for low energy ion-solid interactions

    International Nuclear Information System (INIS)

    Mohajerzadeh, S.; Selvakumar, C.R.

    1997-01-01

    A simple analytical model for ion-solid interactions, suitable for low energy beam depositions, is reported. An approximation for the nuclear stopping power is used to obtain the analytic solution for the deposited energy in the solid. The ratio of the deposited energy in the bulk to the energy deposited in the surface yields a ceiling for the beam energy above which more defects are generated in the bulk resulting in defective films. The numerical evaluations agree with the existing results in the literature. copyright 1997 American Institute of Physics

  13. Model improves oil field operating cost estimates

    International Nuclear Information System (INIS)

    Glaeser, J.L.

    1996-01-01

    A detailed operating cost model that forecasts operating cost profiles toward the end of a field's life should be constructed for testing depletion strategies and plans for major oil fields. Developing a good understanding of future operating cost trends is important. Incorrectly forecasting the trend can result in bad decision making regarding investments and reservoir operating strategies. Recent projects show that significant operating expense reductions can be made in the latter stages of field depletion without significantly reducing the expected ultimate recoverable reserves. Predicting future operating cost trends is especially important for operators who are currently producing a field and must forecast the economic limit of the property. For reasons presented in this article, it is usually not correct to assume either that operating expense stays fixed in dollar terms throughout the lifetime of a field or that operating costs stay fixed on a dollar-per-barrel basis.

  14. A Layered Decision Model for Cost-Effective System Security

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Huaqiang; Alves-Foss, James; Soule, Terry; Pforsich, Hugh; Zhang, Du; Frincke, Deborah A.

    2008-10-01

    System security involves decisions in at least three areas: identification of well-defined security policies, selection of cost-effective defence strategies, and implementation of real-time defence tactics. Although choices made in each of these areas affect the others, existing decision models typically handle these three decision areas in isolation. There is no comprehensive tool that can integrate them to provide a single efficient model for safeguarding a network. In addition, there is no clear way to determine which particular combinations of defence decisions result in cost-effective solutions. To address these problems, this paper introduces a Layered Decision Model (LDM) for use in deciding how to address defence decisions based on their cost-effectiveness. To validate the LDM and illustrate how it is used, we used simulation to test model rationality and applied the LDM to the design of system security for an e-commerce business case.

  15. Cost benefit analysis cost effectiveness analysis

    International Nuclear Information System (INIS)

    Lombard, J.

    1986-09-01

    The comparison of various protection options in order to determine which is the best compromise between the cost of protection and the residual risk is the purpose of the ALARA procedure. The use of decision-aiding techniques is valuable as an aid to selection procedures. The purpose of this study is to introduce two rather simple and well-known decision-aiding techniques: cost-effectiveness analysis and cost-benefit analysis. These two techniques are relevant for the greater part of ALARA decisions, which need the use of a quantitative technique. The study is based on a hypothetical case of 10 protection options. Four methods are applied to the data.
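    The cost-benefit comparison of protection options can be sketched as follows. The option data and the monetary value per unit of dose averted (alpha) are hypothetical, not taken from the study's 10-option case.

```python
# Cost-benefit comparison of radiation-protection options:
# net benefit = alpha * collective dose averted - cost of the option,
# where alpha is the monetary value assigned to a unit of dose averted.
# Options and alpha below are hypothetical illustrations.

ALPHA = 100_000  # currency units per man-sievert averted (assumed)

def net_benefit(dose_averted_man_sv, cost, alpha=ALPHA):
    """Net monetary benefit of a protection option."""
    return alpha * dose_averted_man_sv - cost

# option name: (collective dose averted in man-Sv, protection cost)
options = {"A": (2.0, 50_000), "B": (3.0, 250_000), "C": (3.5, 400_000)}

best = max(options, key=lambda k: net_benefit(*options[k]))
```

Note that the option averting the most dose is not necessarily the best compromise: here the cheapest option wins once cost is weighed against benefit, which is the point of the ALARA trade-off.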

  16. Economic-Mathematical Modeling of the Impact of the Prime Cost of Products on the Effectiveness of the Activity of Entrepreneurial Establishments

    Directory of Open Access Journals (Sweden)

    Mihail N. Dudin

    2014-09-01

    Full Text Available Subject/topic. One of the key elements in managing the operating activity of organizations is managing expenditure, since expenditure, i.e. the payments that must be effected to engage and retain economic resources, is one of the major factors that determine the organization's financial results, the cost-effectiveness of capital investments, and, ultimately, the value of the business. Aim/objectives. This work aims to investigate the impact of the structure of the product's prime cost on the indicator of the product's cost-effectiveness. Methodology. In putting this article together, the author employed legal, comparative, economic-statistical, and correlational methods of analysis. Inferences/significance. The practical significance of this work lies in that the author refines the concept and composition of the prime cost of products and establishes simple linear regression equations between the share of costs in the composition of the prime cost and the level of cost-effectiveness of the product across various types of economic activity in the Russian Federation (RF) in 2012. Knowing the share of costs in the structure of the product's prime cost across the various types of economic activity in the RF in 2012, the derived models can be used to assess the average level of the product's cost-effectiveness.
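    Fitting a simple linear regression between a cost share and cost-effectiveness, as the study does, reduces to ordinary least squares with one predictor. The data points below are hypothetical, not the 2012 Russian statistics.

```python
# Ordinary least-squares fit of a simple linear regression y = a + b*x
# between the share of a cost item in the prime cost (x) and the product's
# cost-effectiveness (y). Data points are hypothetical placeholders.

def ols(xs, ys):
    """Return intercept a and slope b of the least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

share = [0.2, 0.4, 0.6]      # share of a cost item in prime cost (assumed)
profit = [0.30, 0.20, 0.10]  # cost-effectiveness of the product (assumed)

a, b = ols(share, profit)    # here the fitted slope is negative:
                             # a larger cost share, lower cost-effectiveness
```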

  17. Cost-effectiveness of female human papillomavirus vaccination in 179 countries: a PRIME modelling study.

    Science.gov (United States)

    Jit, Mark; Brisson, Marc; Portnoy, Allison; Hutubessy, Raymond

    2014-07-01

    Introduction of human papillomavirus (HPV) vaccination in settings with the highest burden of HPV is not universal, partly because of the absence of quantitative estimates of country-specific effects on health and economic costs. We aimed to develop and validate a simple generic model of such effects that could be used and understood in a range of settings with little external support. We developed the Papillomavirus Rapid Interface for Modelling and Economics (PRIME) model to assess cost-effectiveness and health effects of vaccination of girls against HPV before sexual debut in terms of burden of cervical cancer and mortality. PRIME models incidence according to proposed vaccine efficacy against HPV 16/18, vaccine coverage, cervical cancer incidence and mortality, and HPV type distribution. It assumes lifelong vaccine protection and no changes to other screening programmes or vaccine uptake. We validated PRIME against existing reports of HPV vaccination cost-effectiveness, projected outcomes for 179 countries (assuming full vaccination of 12-year-old girls), and outcomes for 71 phase 2 GAVI-eligible countries (using vaccine uptake data from the GAVI Alliance). We assessed differences between countries in terms of cost-effectiveness and health effects. In validation, PRIME reproduced cost-effectiveness conclusions for 24 of 26 countries from 17 published studies, and for all 72 countries in a published study of GAVI-eligible countries. Vaccination of a cohort of 58 million 12-year-old girls in 179 countries prevented 690,000 cases of cervical cancer and 420,000 deaths during their lifetime (mostly in low-income or middle-income countries), at a net cost of US$4 billion. HPV vaccination was very cost effective (with every disability-adjusted life-year averted costing less than the gross domestic product per head) in 156 (87%) of 179 countries. Introduction of the vaccine in countries without national HPV vaccination at present would prevent substantially more cases
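    The static, proportional structure attributed to PRIME lends itself to a back-of-envelope sketch of lifetime cases prevented in a birth cohort. All the input values below are illustrative assumptions, not PRIME's country-specific data.

```python
# Back-of-envelope PRIME-style burden calculation: lifetime cervical-cancer
# cases prevented in a cohort of girls vaccinated before sexual debut.
# Every input value is an illustrative assumption.

def cases_prevented(cohort, lifetime_risk, hpv1618_fraction,
                    efficacy, coverage):
    """Lifetime cases averted, assuming lifelong protection and no
    change to screening (as the model described in the abstract does)."""
    return cohort * lifetime_risk * hpv1618_fraction * efficacy * coverage

prevented = cases_prevented(
    cohort=1_000_000,      # 12-year-old girls in the cohort (assumed)
    lifetime_risk=0.02,    # lifetime risk of cervical cancer (assumed)
    hpv1618_fraction=0.7,  # fraction of cases attributable to HPV 16/18 (assumed)
    efficacy=0.95,         # vaccine efficacy against HPV 16/18 (assumed)
    coverage=0.8,          # vaccination coverage (assumed)
)
```

A cost-effectiveness ratio would then divide the net programme cost by the disability-adjusted life-years these averted cases represent.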

  18. Balancing the benefits and costs of antibiotic drugs: the TREAT model.

    Science.gov (United States)

    Leibovici, L; Paul, M; Andreassen, S

    2010-12-01

    TREAT is a computerized decision support system aimed at improving empirical antibiotic treatment of inpatients with suspected bacterial infections. It contains a model that balances, for each antibiotic choice (including 'no antibiotics'), expected benefit and expected costs. The main benefit afforded by appropriate, empirical, early antibiotic treatment in moderate to severe infections is a better chance of survival. Each antibiotic drug was assigned three cost components: cost of the drug and administration; cost of side effects; and costs of future resistance. 'No treatment' incurs no costs. The model worked well for decision support. Its analysis showed, yet again, that for moderate to severe infections, a model that does not include costs of resistance to future patients will always return maximum antibiotic treatment. Two major moral decisions are hidden in the model: how to take into account the limited life expectancy and limited quality of life of old or very sick patients; and how to assign a value for a life-year of a future, unnamed patient vs. the present, individual patient. © 2010 The Authors. Clinical Microbiology and Infection © 2010 European Society of Clinical Microbiology and Infectious Diseases.
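The balance the abstract describes, expected benefit minus the three cost components for each choice, can be sketched as follows. This is not the actual TREAT system; choices and numbers are hypothetical, chosen only to show why omitting the resistance term always favors maximal treatment.

```python
# Illustrative net-benefit comparison across antibiotic choices.
# Benefit = monetized value of improved survival; costs follow the three
# components named in the abstract. 'No antibiotics' incurs no costs.

def net_benefit(benefit, drug_cost, side_effect_cost, resistance_cost):
    return benefit - (drug_cost + side_effect_cost + resistance_cost)

choices = {
    # choice: (benefit, drug+admin cost, side-effect cost, future-resistance cost)
    "no antibiotics":  (0.0,     0.0,   0.0,    0.0),
    "narrow spectrum": (800.0,  50.0,  20.0,   30.0),
    "broad spectrum":  (900.0, 120.0,  40.0,  400.0),
}

best = max(choices, key=lambda c: net_benefit(*choices[c]))
print(best)  # narrow spectrum

# Drop the resistance component and the ranking flips to maximum treatment:
best_no_resistance = max(
    choices, key=lambda c: net_benefit(choices[c][0], choices[c][1], choices[c][2], 0.0)
)
print(best_no_resistance)  # broad spectrum
```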

  19. Simple models of the hydrofracture process

    KAUST Repository

    Marder, M.

    2015-12-29

    Hydrofracturing to recover natural gas and oil relies on the creation of a fracture network with pressurized water. We analyze the creation of the network in two ways. First, we assemble a collection of analytical estimates for pressure-driven crack motion in simple geometries, including crack speed as a function of length, energy dissipated by fluid viscosity and used to break rock, and the conditions under which a second crack will initiate while a first is running. Second, we develop a pseudo-three-dimensional numerical model that couples fluid motion with solid mechanics and can generate branching crack structures not specified in advance. One of our main conclusions is that the typical spacing between fractures must be on the order of a meter, and this conclusion arises in two separate ways. First, it arises from analysis of gas production rates, given the diffusion constants for gas in the rock. Second, it arises from the number of fractures that should be generated given the scale of the affected region and the amounts of water pumped into the rock.

  20. Simple models of the hydrofracture process

    KAUST Repository

    Marder, M.; Chen, Chih-Hung; Patzek, Tadeusz

    2015-01-01

    Hydrofracturing to recover natural gas and oil relies on the creation of a fracture network with pressurized water. We analyze the creation of the network in two ways. First, we assemble a collection of analytical estimates for pressure-driven crack motion in simple geometries, including crack speed as a function of length, energy dissipated by fluid viscosity and used to break rock, and the conditions under which a second crack will initiate while a first is running. Second, we develop a pseudo-three-dimensional numerical model that couples fluid motion with solid mechanics and can generate branching crack structures not specified in advance. One of our main conclusions is that the typical spacing between fractures must be on the order of a meter, and this conclusion arises in two separate ways. First, it arises from analysis of gas production rates, given the diffusion constants for gas in the rock. Second, it arises from the number of fractures that should be generated given the scale of the affected region and the amounts of water pumped into the rock.

  1. Development of a simple model for the simultaneous degradation of concrete and clay in contact

    International Nuclear Information System (INIS)

    Neretnieks, Ivars

    2014-01-01

    Highlights: • The rate at which concrete and bentonite in contact degrade each other is modelled. • In both portlandite and bentonite, receding degradation fronts develop and move. • The model results compare well with results from complex models. - Abstract: In nuclear waste repositories concrete and bentonite are used, sometimes in contact with each other. The rate of mutual degradation of concrete and bentonite by alkaline fluids from concrete is explored using a simple model. The model considers dissolution of a soluble compound in the concrete (e.g. portlandite), which is gradually dissolved as the solubilised hydroxide and the cation(s) diffuse towards and into the bentonite, in which smectite degrades by interaction with the solutes. Accounting for only the diffusion resistances in concrete and clay, the solubility of the concrete compound and the hydroxide consumption capacity of the smectite results in a very simple analytical model. The model is tested against several published modelling results that account for reaction kinetics, reactive surface, and equilibrium data for tens to many tens of different secondary minerals. In the models that include several specified minerals, assumptions often need to be made about which minerals can form. This introduces subjective assumptions. The degradation rates using the simple model are within the range of results obtained by the complex models. In reviewing the data used in these models it was found that the uncertainties in thermodynamic data are considerable and can give contradictory information on the conditions under which smectite degrades. Some smectite models and thermodynamic data suggest that smectite will transform to other minerals spontaneously if there were no kinetic restrictions.
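A generic quasi-steady receding-front model of the kind described above can be written in a few lines: the consumption capacity of the clay times the front velocity equals the diffusive flux through the already-degraded layer, giving a square-root-of-time front depth. This is a textbook moving-boundary sketch, not Neretnieks' actual expressions, and every parameter value below is illustrative.

```python
# Quasi-steady shrinking-front sketch: q * dx/dt = De * c_sat / x,
# which integrates to x(t) = sqrt(2 * De * c_sat * t / q).
import math

def front_depth_m(De, c_sat, capacity, t_seconds):
    """Degradation front depth (m) after time t.
    De       - effective diffusivity in the degraded layer (m^2/s)
    c_sat    - solubility of the dissolving compound (mol/m^3)
    capacity - hydroxide consumption capacity of the clay (mol/m^3)
    """
    return math.sqrt(2.0 * De * c_sat * t_seconds / capacity)

# Illustrative numbers: De = 1e-10 m^2/s, hydroxide solubility 20 mol/m^3,
# smectite consumption capacity 4000 mol/m^3, t = 10,000 years.
t = 10_000 * 365.25 * 24 * 3600.0
print(round(front_depth_m(1e-10, 20.0, 4000.0, t), 3))  # ~0.56 m
```

The square-root dependence is the signature of diffusion-limited degradation: doubling the repository assessment time extends the front by only a factor of about 1.4.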

  2. Incidence and cost of rotavirus hospitalizations in Denmark

    DEFF Research Database (Denmark)

    Fischer, Thea Kølsen; Nielsen, Nete Munk; Wohlfahrt, Jan

    2007-01-01

    In anticipation of licensure and introduction of rotavirus vaccine into the western market, we used modeling of national hospital registry data to determine the incidence and direct medical costs of annual rotavirus-associated admissions over >11 years in Denmark. Diarrhea-associated hospitalizations coded as nonspecified viral or presumed infectious have demonstrated a marked winter peak similar to that of rotavirus-associated hospitalizations, which suggests that the registered rotavirus-coded admissions are grossly underestimated. We therefore obtained more realistic estimates by 2 different models, which indicated 2.4 and 2.5 rotavirus-associated admissions per 1,000 children per year, respectively. These admissions amount to associated direct medical costs of US $1.7-1.8 million per year.

  3. Optimal Vehicle Design Using the Integrated System and Cost Modeling Tool Suite

    Science.gov (United States)

    2010-08-01

    [Fragmentary abstract; only module and tool names are recoverable: space vehicle design (SMAD), space vehicle propulsion, orbit propagation, space vehicle costing (ACEIT), new small-satellite development and production cost and O&M cost modules, radiation exposure and radiation detector response, and reliability/availability/risk analysis, using tools including CEA, an SRM model, POST, ACEIT, an inflation model, a rotor blade design tool, Microsoft Project, ATSV, STK, and SOAP.]

  4. Comparing the relative cost-effectiveness of diagnostic studies: a new model

    International Nuclear Information System (INIS)

    Patton, D.D.; Woolfenden, J.M.; Wellish, K.L.

    1986-01-01

    We have developed a model to compare the relative cost-effectiveness of two or more diagnostic tests. The model defines a cost-effectiveness ratio (CER) for a diagnostic test as the ratio of effective cost to base cost, considering only dollar costs. Effective cost includes base cost, cost of dealing with expected side effects, and wastage due to imperfect test performance. Test performance is measured by diagnostic utility (DU), a measure of test outcomes incorporating the decision-analytic variables sensitivity, specificity, equivocal fraction, disease probability, and outcome utility. Each of these factors affecting DU, and hence CER, is a local, not universal, value; these local values strongly affect CER, which in effect becomes a property of the local medical setting. When DU = +1 and there are no adverse effects, CER = 1 and the patient benefits from the test dollar for dollar. When there are adverse effects, effective cost exceeds base cost, and for an imperfect test DU < 1, so that CER > 1. As DU approaches 0 (worthless test), CER approaches infinity (no effectiveness at any cost). If DU is negative, indicating that doing the test at all would be detrimental, CER also becomes negative. We conclude that the CER model is a useful preliminary method for ranking the relative cost-effectiveness of diagnostic tests, and that the comparisons would best be done using local values; different groups might well arrive at different rankings. (Author)
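One formula consistent with the limiting cases stated in the abstract (CER = 1 for a perfect test with no adverse effects, CER → ∞ as DU → 0, CER < 0 for negative DU) treats effective cost as dollars spent per unit of realized diagnostic utility. The paper's exact expression may differ; this is only a sketch of the behaviour described.

```python
# Hypothetical CER formula reproducing the limits in the abstract:
# effective cost = (base + side-effect cost) / DU, so
# CER = effective cost / base cost.

def cer(base_cost, side_effect_cost, du):
    """Cost-effectiveness ratio: effective cost over base cost."""
    return (base_cost + side_effect_cost) / (base_cost * du)

print(cer(500.0, 0.0, 1.0))    # 1.0  (perfect test, no adverse effects)
print(cer(500.0, 100.0, 0.8))  # 1.5  (imperfect test with side effects: CER > 1)
print(cer(500.0, 0.0, -0.5))   # -2.0 (detrimental test: CER negative)
```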

  5. A RECREATION OPTIMIZATION MODEL BASED ON THE TRAVEL COST METHOD

    OpenAIRE

    Hof, John G.; Loomis, John B.

    1983-01-01

    A recreation allocation model is developed which efficiently selects recreation areas and degree of development from an array of proposed and existing sites. The model does this by maximizing the difference between gross recreation benefits and travel, investment, management, and site-opportunity costs. The model presented uses the Travel Cost Method for estimating recreation benefits within an operations research framework. The model is applied to selection of potential wilderness areas in C...

  6. Adoption of an activity based costing model in an Indian steel plant

    Directory of Open Access Journals (Sweden)

    Rishi Dwivedi

    2016-10-01

    Full Text Available In the age of relentless global competition, constantly improving technology and better information systems, managers are often compelled to devise new strategies to maintain sustained competitive advantage while adopting new business management approaches. So, in this paper, an activity based costing (ABC) model is proposed for a raw material handling section of an Indian steel plant. The results obtained from ABC model application in the said department facilitate quantification of the unit cost of each process, analysis of various activities in order to identify inefficiency, setting up of better budget allocation, initiation of a cost minimization procedure and establishment of an efficient resource requirement plan. Moreover, the cost information derived from the ABC model is compared with that extracted from the traditional costing system to demonstrate that the ABC model can significantly minimize the product cost distortion resulting from unsystematic allocation of overhead costs. This paper also discusses the practical implication of the implemented ABC model with respect to its critical role in effective resource control, improved strategic and operational decision making, and aid in continuous improvement through internal cost minimization in the department.
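The cost distortion the abstract refers to can be shown in a toy comparison: traditional costing spreads overhead by a single volume measure, while ABC traces overhead to activities and assigns it by each product's consumption of the activity's cost driver. Products, activities, drivers and figures below are hypothetical, not the steel plant's data.

```python
# Traditional volume-based allocation vs. activity-based costing (ABC).
overhead = 100_000.0

# Traditional costing: spread all overhead by machine hours.
machine_hours = {"A": 800, "B": 200}
traditional = {p: overhead * h / sum(machine_hours.values())
               for p, h in machine_hours.items()}

# ABC: trace overhead to activities, then assign by driver consumption.
activity_cost = {"material handling": 60_000.0, "setups": 40_000.0}
driver_use = {
    "material handling": {"A": 300, "B": 300},  # number of moves
    "setups":            {"A": 10,  "B": 40},   # number of setups
}
abc = {p: 0.0 for p in machine_hours}
for activity, cost in activity_cost.items():
    total = sum(driver_use[activity].values())
    for product, use in driver_use[activity].items():
        abc[product] += cost * use / total

print(traditional)  # {'A': 80000.0, 'B': 20000.0}
print(abc)          # {'A': 38000.0, 'B': 62000.0} -- B was heavily undercosted
```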

  7. A cost-performance model for ground-based optical communications receiving telescopes

    Science.gov (United States)

    Lesh, J. R.; Robinson, D. L.

    1986-01-01

    An analytical cost-performance model for a ground-based optical communications receiving telescope is presented. The model considers costs of existing telescopes as a function of diameter and field of view. This, coupled with communication performance as a function of receiver diameter and field of view, yields the appropriate telescope cost versus communication performance curve.
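The cost-versus-performance tradeoff described above is often summarized with a power-law cost model in aperture diameter. The exponent 2.7 below is a commonly quoted scaling for ground-based telescope cost; the paper's fitted model (which also includes field of view) may differ, so treat this purely as an illustration.

```python
# Hypothetical power-law telescope cost model: cost ~ c0 * D**2.7.

def telescope_cost(diameter_m, c0=1.0, exponent=2.7):
    """Relative cost of a telescope with a given aperture diameter."""
    return c0 * diameter_m ** exponent

# Doubling the aperture multiplies cost by 2**2.7 (~6.5), while the
# collecting area, a proxy for communication performance, only quadruples.
print(round(telescope_cost(2.0) / telescope_cost(1.0), 2))
```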

  8. Comparison study on models for calculation of NPP’s levelized unit electricity cost

    International Nuclear Information System (INIS)

    Nuryanti; Mochamad Nasrullah; Suparman

    2014-01-01

    Economic analysis, generally done through the calculation of the Levelized Unit Electricity Cost (LUEC), is crucial prior to any investment decision on a nuclear power plant (NPP) project. There are several models that can be used to calculate LUEC: the R&D PT. PLN (Persero) model, the Mini G4ECONS model and the Levelized Cost model. This study aimed to perform a comparison between the three models. The comparison was made by tracing the assumptions used in each model and then applying all three to a LUEC calculation for a 2 x 100 MW SMR NPP. The result showed that the R&D PT. PLN (Persero) model shares a common principle with the Mini G4ECONS model: both use a Capital Recovery Factor (CRF) to convert the investment cost into an annuity over the life of the plant. LUEC in both models is calculated by dividing the sum of the annualized investment cost and the cost of operating the NPP by the annual electricity production. The Levelized Cost model is instead based on annual cash flows: total annual costs and annual electricity production are discounted to the first year of construction in order to obtain the total discounted annual cost and the total discounted energy generation, and LUEC is obtained by dividing the two discounted values. The three models produce the following LUEC values: 14.5942 cents US$/kWh for the R&D PT. PLN (Persero) model, 15.056 cents US$/kWh for the Mini G4ECONS model and 14.240 cents US$/kWh for the Levelized Cost model. (author)
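The two calculation principles compared above can be sketched side by side with made-up inputs: a CRF-based annuity method and a discounted-cash-flow levelized method. With constant annual costs and output the two are algebraically identical (the annuity factor is the reciprocal of the CRF); differences between the published models come from their differing cost inputs and time profiles.

```python
# CRF/annuity method vs. discounted-cash-flow levelized method for LUEC.

def crf(r, n):
    """Capital recovery factor for discount rate r over n years."""
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

def luec_crf(capital, annual_om, annual_mwh, r, n):
    """Annuitize overnight capital, divide annual costs by annual output."""
    return (capital * crf(r, n) + annual_om) / annual_mwh

def luec_levelized(capital, annual_om, annual_mwh, r, n):
    """Discount yearly costs and energy to year zero, then divide."""
    disc_cost = capital + sum(annual_om / (1 + r) ** t for t in range(1, n + 1))
    disc_mwh = sum(annual_mwh / (1 + r) ** t for t in range(1, n + 1))
    return disc_cost / disc_mwh

# Illustrative plant: $1M overnight cost, $30k/yr O&M, 50 GWh/yr, 7%, 30 yr.
cap, om, mwh, r, n = 1_000_000.0, 30_000.0, 50_000.0, 0.07, 30
print(round(luec_crf(cap, om, mwh, r, n), 4))        # $/MWh
print(round(luec_levelized(cap, om, mwh, r, n), 4))  # identical for flat profiles
```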

  9. Semer: a simple calculational tool for the economic evaluations of reactor systems and associated innovations

    International Nuclear Information System (INIS)

    Nisan, S.; Rouyer, J.L.

    2001-01-01

    This paper summarises part of our on-going investigations on the economic evaluations of various nuclear and fossil energy systems and related innovations. These investigations are principally concerned with the development of the code system SEMER and its validation. SEMER has been developed to furnish top management and project leaders with a simple tool for cost evaluations, enabling a choice between competing technological options. The cost evaluation models currently integrated in the SEMER system already cover a very wide range of electricity producing systems and, where relevant, their associated fuel cycles. The ''global models'' allow rapid but relatively approximate overall cost estimations (about 15% error). These include: almost all the electricity producing systems using fossil energies (oil, coal, gas, including gas turbines with combined cycles); and nuclear reactor systems including all the French PWRs, HTRs, compact PWRs, and PWRs for nuclear propulsion systems. (author)

  10. Simple intake and pharmacokinetic modeling to characterize exposure of Americans to perfluorooctanoic acid, PFOA.

    Science.gov (United States)

    Lorber, Matthew; Egeghy, Peter P

    2011-10-01

    Models for assessing intakes of perfluorooctanoic acid, PFOA, are described and applied. One model is based on exposure media concentrations and contact rates. This model is applied to general population exposures for adults and 2-year old children. The other model is a simple one-compartment, first-order pharmacokinetic (PK) model. Parameters for this model include a rate of elimination of PFOA and a blood volume of distribution. The model was applied to data from the National Health and Nutritional Examination Survey, NHANES, to backcalculate intakes. The central tendency intake estimates for adults and children based on exposure media concentrations and contact rates were 70 and 26 ng/day, respectively. The central tendency adult intake derived from NHANES data was 56 and 37 ng/day for males and females, respectively. Variability and uncertainty discussions regarding the intake modeling focus on lack of data on direct exposure to PFOA used in consumer products, precursor compounds, and food. Discussions regarding PK modeling focus on the range of blood measurements in NHANES, the appropriateness of the simple PK model, and the uncertainties associated with model parameters. Using the PK model, the 10th and 95th percentile long-term average adult intakes of PFOA are 15 and 130 ng/day.
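The backcalculation described above follows from the one-compartment, first-order model: at steady state, daily intake equals the amount eliminated per day, intake = C_serum × Vd × k. The parameter values below are round illustrative numbers, not the paper's calibration.

```python
# Minimal one-compartment steady-state backcalculation sketch.
import math

def intake_ng_per_day(serum_ng_per_ml, vd_ml_per_kg, body_kg, half_life_years):
    """Steady-state intake = serum concentration * distribution volume * k."""
    k_per_day = math.log(2) / (half_life_years * 365.0)  # first-order elimination
    return serum_ng_per_ml * vd_ml_per_kg * body_kg * k_per_day

# Illustrative adult: serum 4 ng/mL, Vd 170 mL/kg, 70 kg, 3.5-year half-life.
est = intake_ng_per_day(4.0, 170.0, 70.0, 3.5)
print(round(est, 1))  # tens of ng/day, the same order as the NHANES-based estimates
```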

  11. A Simple Physics-Based Model Predicts Oil Production from Thousands of Horizontal Wells in Shales

    KAUST Repository

    Patzek, Tadeusz; Saputra, Wardana; Kirati, Wissem

    2017-01-01

    and ultimate recovery in shale wells. Here we introduce a simple model of producing oil and solution gas from the horizontal hydrofractured wells. This model is consistent with the basic physics and geometry of the extraction process. We then apply our model

  12. Simple Electromagnetic Modeling of Small Airplanes: Neural Network Approach

    Directory of Open Access Journals (Sweden)

    P. Tobola

    2009-04-01

    Full Text Available The paper deals with the development of simple electromagnetic models of small airplanes, which can contain composite materials in their construction. Electromagnetic waves can penetrate through the surface of the aircraft due to the specific electromagnetic properties of the composite materials, which can increase the intensity of fields inside the airplane and can negatively influence the functionality of the sensitive avionics. The airplane is simulated by two parallel dielectric layers (the left-hand side wall and the right-hand side wall of the airplane). The layers are put into a rectangular metallic waveguide terminated by the absorber in order to simulate the illumination of the airplane by the external wave (both harmonic and pulsed). Thanks to the simplicity of the model, a parametric analysis can be performed, and the results can be used to train an artificial neural network. The trained networks further reduce the CPU-time demands of airplane modeling.

  13. Working long hours: less productive but less costly? Firm-level evidence from Belgium

    OpenAIRE

    DELMEZ, Françoise; Vandenberghe, Vincent

    2017-01-01

    From the point of view of a profit-maximizing firm, the optimal number of working hours depends not only on the marginal productivity of hours but also on the marginal labour cost. This paper develops and assesses empirically a simple model of firms' decision making where productivity varies with hours and where the firm faces labour costs per worker that are invariant to the number of hours worked: i.e. quasi-fixed labour costs. Using Belgian firm-level data on production, labour costs, work...

  14. A Simple Model to Demonstrate the Balance of Forces at Functional Residual Capacity

    Science.gov (United States)

    Kanthakumar, Praghalathan; Oommen, Vinay

    2012-01-01

    Numerous models have been constructed to aid teaching respiratory mechanics. A simple model using a syringe and a water-filled bottle has been described by Thomas Sherman to explain inspiration and expiration. The elastic recoil of the chest wall and lungs has been described using a coat hanger or by using rods and rubber bands. A more complex…

  15. Context-dependent decision-making: a simple Bayesian model.

    Science.gov (United States)

    Lloyd, Kevin; Leslie, David S

    2013-05-06

    Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or 'contexts' allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects.
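The action-selection step described above (Thompson sampling) can be illustrated in isolation on a two-alternative choice task with Bernoulli rewards and Beta posteriors. The context-inference machinery (Chinese restaurant process, sequential hypothesis sampling) is omitted; arm probabilities below are hypothetical.

```python
# Thompson sampling on a two-armed Bernoulli bandit: sample a success
# probability from each arm's posterior, act greedily on the draw, update.
import random

random.seed(0)
true_p = [0.2, 0.8]             # true reward probability of each arm
alpha = [1, 1]; beta = [1, 1]   # Beta(1, 1) priors

for _ in range(2000):
    samples = [random.betavariate(alpha[a], beta[a]) for a in (0, 1)]
    arm = samples.index(max(samples))   # uncertainty drives exploration
    reward = random.random() < true_p[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

# Posterior means concentrate on the better arm.
means = [alpha[a] / (alpha[a] + beta[a]) for a in (0, 1)]
print(means[1] > means[0])  # True
```

Because exploration is driven entirely by posterior uncertainty, no separate exploration schedule is needed, which is the "straightforward manner" the abstract refers to.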

  16. Simple standard problem for the Preisach moving model

    International Nuclear Information System (INIS)

    Morentin, F.J.; Alejos, O.; Francisco, C. de; Munoz, J.M.; Hernandez-Gomez, P.; Torres, C.

    2004-01-01

    The present work proposes a simple magnetic system as a candidate for a standard problem for Preisach-based models. The system consists of a regular square array of magnetic particles totally oriented along the direction of application of an external magnetic field. The behavior of such a system was numerically simulated for different values of the interaction between particles and of the standard deviation of the critical fields of the particles. The characteristic parameters of the Preisach moving model were worked out during simulations, i.e., the mean value and the standard deviation of the interaction field. For this system, results reveal that the mean interaction field depends linearly on the system magnetization, as the Preisach moving model predicts. Nevertheless, the standard deviation cannot be considered as independent of the magnetization. In fact, the standard deviation shows a maximum at demagnetization and two minima at magnetization saturation. Furthermore, not all the demagnetization states are equivalent. The plot of standard deviation vs. magnetization is a multi-valued curve when the system undergoes an AC demagnetization procedure. In this way, the standard deviation increases as the system goes from coercivity to the AC demagnetized state.

  17. Synergistic Role of Balanced Scorecard/Activity Based Costing and Goal Programming Combined Model on Strategic Cost Management

    OpenAIRE

    Taleghani, Mohammad

    2017-01-01

    During the past few years, we have seen a significant shift in cost accounting and management. In the new business environment, cost management has become a critical skill, but it is not sufficient simply to reduce costs; instead, costs must be managed strategically. Application of a successful Strategic Cost Management (StraCM) system plays a significant role in the success of organizational performance. In this study, we want to illustrate how the goal programming model in combination with t...

  18. A simple model for retrieving bare soil moisture from radar-scattering coefficients

    International Nuclear Information System (INIS)

    Chen, K.S.; Yen, S.K.; Huang, W.P.

    1995-01-01

    A simple algorithm based on a rough surface scattering model was developed to invert the bare soil moisture content from active microwave remote sensing data. In the algorithm development, a frequency mixing model was used to relate soil moisture to the dielectric constant. In particular, the Integral Equation Model (IEM) was used over a wide range of surface roughness and radar frequencies. To derive the algorithm, a sensitivity analysis was performed using a Monte Carlo simulation to study the effects of surface parameters, including height variance, correlation length, and dielectric constant. Because radar return is inherently dependent on both moisture content and surface roughness, the purpose of the sensitivity testing was to select the proper radar parameters so as to optimally decouple these two factors, in an attempt to minimize the effects of one while the other was observed. As a result, the optimal radar parameter ranges can be chosen for the purpose of soil moisture content inversion. One thousand samples were then generated with the IEM model followed by multivariate linear regression analysis to obtain an empirical soil moisture model. Numerical comparisons were made to illustrate the inversion performance using experimental measurements. Results indicate that the present algorithm is simple and accurate, and can be a useful tool for the remote sensing of bare soil surfaces. (author)

  19. Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow

    Science.gov (United States)

    Kerner, Boris S.; Klenov, Sergey L.; Schreckenberg, Michael

    2011-10-01

    We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules “acceleration,” “deceleration,” “randomization,” and “motion” of the Nagel-Schreckenberg CA model as well as “overacceleration through lane changing to the faster lane,” “comparison of vehicle gap with the synchronization gap,” and “speed adaptation within the synchronization gap” of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
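The four base rules the model above inherits from the Nagel-Schreckenberg CA (acceleration, deceleration, randomization, motion) can be coded in a few lines for a single-lane periodic road; the two-lane and three-phase extensions of the paper are omitted, and the density and slowdown probability below are illustrative.

```python
# Minimal Nagel-Schreckenberg cellular automaton on a circular road,
# with parallel update (all rules applied against the old positions).
import random

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=random):
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % road_len
        v = min(vel[i] + 1, v_max)          # 1. acceleration
        v = min(v, gap)                     # 2. deceleration to the gap
        if v > 0 and rng.random() < p_slow:
            v -= 1                          # 3. randomization
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len  # 4. motion
    return new_pos, new_vel

random.seed(1)
pos, vel = list(range(0, 100, 10)), [0] * 10   # 10 cars, density 0.1
for _ in range(100):
    pos, vel = nasch_step(pos, vel, 100)

mean_speed = sum(vel) / len(vel)
print(round(mean_speed, 2))  # mean speed between 0 and v_max
```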

  20. PACCOM: A nuclear waste packaging facility cost model: Draft technical report

    International Nuclear Information System (INIS)

    Dippold, D.G.; Tzemos, S.; Smith, D.J.

    1985-05-01

    PACCOM is a computerized, parametric model used to estimate the capital, operating, and decommissioning costs of a variety of nuclear waste packaging facility configurations. The model is based upon a modular waste packaging facility concept from which functional components of the overall facility have been identified and their design and costs related to various parameters such as waste type, waste throughput, and the number of operational shifts employed. The model may be used either to estimate the cost of a particular waste packaging facility configuration or to explore the cost tradeoff between plant capital and labor. That is, one may use the model to search for the particular facility sizes and associated cost which, when coupled with a particular number of shifts, and thus staffing level, lead to the lowest overall total cost. The functional components which the model considers include hot cells and their supporting facilities, transportation, cask handling facilities, transuranic waste handling facilities, and administrative facilities such as warehouses, security buildings, maintenance buildings, etc. The cost of each of these functional components is related either directly or indirectly to the various independent design parameters. Staffing by shift is broken down into direct and indirect support labor. These staffing levels are in turn related to the waste type, waste throughput, etc. 2 refs., 11 figs., 3 tabs

  1. A Simple Model of Trade with Heterogeneous Firms and Trade Policy

    OpenAIRE

    Fukushima, Marcelo; Kikuchi, Toru

    2008-01-01

    This paper builds a Ricardian-Chamberlinian two-country model with heterogeneous firms in a monopolistically competitive sector in which every new entrant faces increasing fixed costs of production. There are efficiency gaps between countries in marginal and fixed costs, and a country unilaterally imposes an import tariff. It is shown that an increase in the tariff increases the number of firms in the tariff-imposing country while decreasing the number of firms in the tariff-imposed country, 

  2. DYNAMIC SURFACE BOUNDARY-CONDITIONS - A SIMPLE BOUNDARY MODEL FOR MOLECULAR-DYNAMICS SIMULATIONS

    NARCIS (Netherlands)

    JUFFER, AH; BERENDSEN, HJC

    1993-01-01

    A simple model for the treatment of boundaries in molecular dynamics simulations is presented. The method involves the positioning of boundary atoms on a surface that surrounds a system of interest. The boundary atoms interact with the inner region and represent the effect of atoms outside the

  3. The model for estimation production cost of embroidery handicraft

    Science.gov (United States)

    Nofierni; Sriwana, IK; Septriani, Y.

    2017-12-01

    The embroidery industry is a type of micro industry that produces embroidery handicrafts. These industries are emerging in some rural areas of Indonesia. Embroidery products such as scarves and clothes reflect the cultural value of a particular region. The owner of an enterprise must calculate the cost of production before deciding how many orders to accept from customers. An approach to production cost analysis is therefore needed to assess the feasibility of each incoming order. This study proposes the design of an expert system (ES) to improve production management in the embroidery industry. The model uses a fuzzy inference system to estimate production cost. The research was conducted through surveys and knowledge acquisition from stakeholders in the supply chain of the embroidery handicraft industry at Bukittinggi, West Sumatera, Indonesia. The model takes fuzzy inputs, namely quality, design complexity, and the working hours required, and its results are useful for managing production cost in embroidery production.

  4. Modeling the stylized facts in finance through simple nonlinear adaptive systems

    Science.gov (United States)

    Hommes, Cars H.

    2002-01-01

    Recent work on adaptive systems for modeling financial markets is discussed. Financial markets are viewed as evolutionary systems between different, competing trading strategies. Agents are boundedly rational in the sense that they tend to follow strategies that have performed well, according to realized profits or accumulated wealth, in the recent past. Simple technical trading rules may survive evolutionary competition in a heterogeneous world where prices and beliefs co-evolve over time. Evolutionary models can explain important stylized facts, such as fat tails, clustered volatility, and long memory, of real financial series. PMID:12011401

  5. An approximate fractional Gaussian noise model with $\mathcal{O}(n)$ computational cost

    KAUST Repository

    Sørbye, Sigrunn H.

    2017-09-18

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood-based approach is $\mathcal{O}(n^{2})$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to $\mathcal{O}(n^{3})$. This paper presents an approximate fGn model of $\mathcal{O}(n)$ computational cost, both with direct or indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
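A toy version of the approximation idea above: match the fGn autocorrelation function with a weighted sum of AR(1) autocorrelations. The paper fits four components; this sketch fits only the weight of a two-component mixture with fixed, hypothetical AR coefficients, just to show the principle.

```python
# Fit the weight of a two-AR(1) mixture to the fGn autocorrelation function.

def fgn_acf(h, H):
    """Autocorrelation of fractional Gaussian noise at lag h, Hurst index H."""
    return 0.5 * (abs(h + 1) ** (2 * H) - 2 * abs(h) ** (2 * H) + abs(h - 1) ** (2 * H))

H = 0.9
phi1, phi2 = 0.5, 0.99   # fixed (hypothetical) AR(1) coefficients
lags = range(1, 51)

# With weights (w, 1-w), minimize sum_h (w*phi1^h + (1-w)*phi2^h - rho(h))^2;
# the least-squares solution for the single free weight w is closed-form:
num = sum((fgn_acf(h, H) - phi2 ** h) * (phi1 ** h - phi2 ** h) for h in lags)
den = sum((phi1 ** h - phi2 ** h) ** 2 for h in lags)
w = num / den

max_err = max(abs(w * phi1 ** h + (1 - w) * phi2 ** h - fgn_acf(h, H)) for h in lags)
print(round(w, 3), round(max_err, 3))  # modest max error even with 2 components
```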

  6. MONNIE 2000: A description of a model to calculate environmental costs

    International Nuclear Information System (INIS)

    Hanemaaijer, A.H.; Kirkx, M.C.A.P.

    2001-02-01

    A new model (MONNIE 2000) was developed by the RIVM in the Netherlands in 2000 to calculate environmental costs at the macro level. The model, its theoretical background and its technical aspects are described, making the report useful to both the user and the designer of the model. A user manual on how to calculate with the model is included. The basic principle of the model is the use of a harmonised method for calculating environmental costs, which provides the user with output that can easily be compared with and used in other economic statistics and macro-economic models in the Netherlands. Inputs to the model are yearly figures on operational costs, investments and savings from environmental measures. With MONNIE 2000, calculated environmental costs can be shown per policy target group, economic sector and theme. The burden of environmental measures on the economic sectors and the environmental expenditures of the government can be presented as well. MONNIE 2000 is developed in Visual Basic, and by using Excel for input and output a user-friendly data exchange is realised. 12 refs
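    The harmonised costing principle described above can be illustrated with a minimal sketch: annual environmental cost as annualised investment plus operational cost minus savings. The capital recovery factor, discount rate and all figures below are illustrative assumptions, not RIVM's actual parameters.

```python
def capital_recovery_factor(rate, years):
    """Annuity factor that spreads an investment evenly over its lifetime."""
    if rate == 0:
        return 1.0 / years
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def annual_environmental_cost(investment, lifetime, rate, operational, savings):
    """Yearly environmental cost of a measure, comparable across sectors."""
    return investment * capital_recovery_factor(rate, lifetime) + operational - savings

# Example: a 1,000 kEUR end-of-pipe measure with a 10-year lifetime and a 5%
# discount rate, 50 kEUR/year operating cost and 20 kEUR/year energy savings.
cost = annual_environmental_cost(1000.0, 10, 0.05, 50.0, 20.0)  # kEUR per year
```

Annualising in this way is what makes measures with different lifetimes and up-front investments comparable in a single yearly figure.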

  7. Utilising temperature differences as constraints for estimating parameters in a simple climate model

    International Nuclear Information System (INIS)

    Bodman, Roger W; Karoly, David J; Enting, Ian G

    2010-01-01

    Simple climate models can be used to estimate the global temperature response to increasing greenhouse gases. Changes in the energy balance of the global climate system are represented by equations that necessitate the use of uncertain parameters. The values of these parameters can be estimated from historical observations, model testing, and tuning to more complex models. Efforts have been made at estimating the possible ranges for these parameters. This study continues this process, but demonstrates two new constraints. Previous studies have shown that land-ocean temperature differences are only weakly correlated with global mean temperature for natural internal climate variations. Hence, these temperature differences provide additional information that can be used to help constrain model parameters. In addition, an ocean heat content ratio can also provide a further constraint. A pulse response technique was used to identify relative parameter sensitivity which confirmed the importance of climate sensitivity and ocean vertical diffusivity, but the land-ocean warming ratio and the land-ocean heat exchange coefficient were also found to be important. Experiments demonstrate the utility of the land-ocean temperature difference and ocean heat content ratio for setting parameter values. This work is based on investigations with MAGICC (Model for the Assessment of Greenhouse-gas Induced Climate Change) as the simple climate model.
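    A deliberately minimal two-box (land/ocean) energy balance model of the kind discussed above can be sketched as follows; the land-ocean temperature difference it produces is exactly the sort of diagnostic the study uses as a constraint. All parameter values are illustrative assumptions; this is not MAGICC.

```python
def run_two_box(years=100, dF=0.04, lam_land=1.0, lam_ocean=1.3,
                k_ex=0.5, c_land=2.0, c_ocean=20.0):
    """Integrate land/ocean temperature anomalies under a linear forcing ramp."""
    T_land = T_ocean = 0.0
    for t in range(years):
        F = dF * (t + 1)                  # radiative forcing (W/m^2), rising linearly
        flux = k_ex * (T_land - T_ocean)  # land-ocean heat exchange
        T_land += (F - lam_land * T_land - flux) / c_land    # yearly Euler step
        T_ocean += (F - lam_ocean * T_ocean + flux) / c_ocean
    return T_land, T_ocean

T_land, T_ocean = run_two_box()
warming_ratio = T_land / T_ocean  # land-ocean contrast used as an extra constraint
```

Because the ocean box has a much larger heat capacity, it lags the forcing, so land warms faster; a parameter set that cannot reproduce the observed ratio can be rejected even if it matches global mean temperature.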

  8. Simple model of variation of the signature of a space-time metric

    International Nuclear Information System (INIS)

    Konstantinov, M.Yu.

    2004-01-01

    The problem of changes in the signature of the space-time metric is discussed. A simple model is proposed in which the signature of the space-time metric is determined by a nonlinear scalar field. It is shown that both classical and quantum descriptions of signature change are possible within the framework of the considered model; the most characteristic features of, and differences between, the classical and quantum descriptions are also briefly noted [ru

  9. New Approaches in Reusable Booster System Life Cycle Cost Modeling

    Science.gov (United States)

    Zapata, Edgar

    2013-01-01

    This paper presents the results of a 2012 life cycle cost (LCC) study of hybrid Reusable Booster Systems (RBS) conducted by NASA Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC analysis, building on past work where applicable, but emphasizing the integration of new approaches in life cycle cost estimation. Specifically, the inclusion of industry processes/practices and indirect costs was a new and significant part of the analysis. The focus of LCC estimation has traditionally been from the perspective of technology, design characteristics, and related factors such as reliability. Technology has informed the cost related support to decision makers interested in risk and budget insight. This traditional emphasis on technology occurs even though it is well established that complex aerospace systems costs are mostly about indirect costs, with likely only partial influence in these indirect costs being due to the more visible technology products. Organizational considerations, processes/practices, and indirect costs are traditionally derived ("wrapped") only by relationship to tangible product characteristics. This traditional approach works well as long as it is understood that no significant changes, and by relation no significant improvements, are being pursued in the area of either the government acquisition or industry's indirect costs. In this sense then, most launch systems cost models ignore most costs. The alternative was implemented in this LCC study, whereby the approach considered technology and process/practices in balance, with as much detail for one as the other. This RBS LCC study has avoided point-designs, for now, instead emphasizing exploring the trade-space of potential technology advances joined with potential process/practice advances. Given the range of decisions, and all their combinations, it was necessary to create a model of the original model

  11. Analysing uncertainty around costs of innovative medical technologies: the case of fibrin sealant (QUIXIL) for total knee replacement.

    NARCIS (Netherlands)

    Steuten, Lotte Maria Gertruda; Vallejo-Torres, Laura; Bastide, Philippe; Buxton, Martin J.

    2009-01-01

    This paper presents a relatively simple cost model comparing the costs of using a commercial fibrin sealant (QUIXIL®) in addition to conventional haemostatic treatment vs. conventional treatment alone in total knee replacement (TKR) surgery, and demonstrates and discusses how one- and two-way

  12. Fixed transaction costs and modelling limited dependent variables

    NARCIS (Netherlands)

    Hempenius, A.L.

    1994-01-01

    As an alternative to the Tobit model for vectors of limited dependent variables, I suggest a model that follows from explicitly including fixed costs, where appropriate, in the utility function of the decision-maker.

  13. Construction cost prediction model for conventional and sustainable college buildings in North America

    Directory of Open Access Journals (Sweden)

    Othman Subhi Alshamrani

    2017-03-01

    The literature lacks initial cost prediction models for college buildings, especially models comparing the costs of sustainable and conventional buildings. A multi-regression model was developed for conceptual initial cost estimation of conventional and sustainable college buildings in North America. RS Means was used to estimate the national average of construction costs for 2014, which was subsequently utilized to develop the model. The model can predict the initial cost per square foot for two structure types, steel and concrete. The other predictor variables were building area, number of floors and floor height. The model was developed in three major stages: preliminary diagnostics on data quality, model development, and validation. The developed model was successfully tested and validated with real-time data.
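    A multi-regression model of this shape can be re-created in miniature with ordinary least squares on the record's predictors (area, floors, structure type). The coefficients and synthetic data below are invented for illustration; they are not RS Means values.

```python
def solve(A, b):
    """Solve the small normal-equation system by Gaussian elimination."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y."""
    m = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(m)] for i in range(m)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(m)]
    return solve(XtX, Xty)

# Synthetic, noise-free data: cost/ft2 = 180 + 1.0*area_k + 4*floors + 25*steel,
# with building area expressed in thousands of square feet (area_k).
rows, costs = [], []
for area_k in (20.0, 40.0, 60.0):
    for floors in (2.0, 4.0, 6.0):
        for steel in (0.0, 1.0):
            rows.append([1.0, area_k, floors, steel])
            costs.append(180.0 + 1.0 * area_k + 4.0 * floors + 25.0 * steel)
beta = ols(rows, costs)  # recovers [180, 1, 4, 25] on this exact data
```

With real data the fit would of course not be exact; the point is only the structure: one dummy variable for structure type plus continuous size predictors.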

  14. A Costing Analysis for Decision Making Grid Model in Failure-Based Maintenance

    Directory of Open Access Journals (Sweden)

    Burhanuddin M. A.

    2011-01-01

    Background. In the current economic downturn, industries have to keep production costs under tight control to maintain their profit margins. The maintenance department, as a vital unit in industry, should gather all maintenance data, process the information promptly, and transform it into useful decisions, then act on the chosen alternative to reduce production cost. The Decision Making Grid model is used to identify maintenance decision strategies. However, the model is limited in that it considers only two factors: downtime and frequency of failures. In this study we consider a third factor, cost, for failure-based maintenance. The objective of this paper is to introduce formulae for estimating maintenance cost. Methods. Fishbone analysis with the Ishikawa model and Decision Making Grid methods are used to reveal underlying risk factors that delay failure-based maintenance. The goal of the study is to estimate the risk factor, i.e. repair cost, so that it can be fitted into the Decision Making Grid model. The Decision Making Grid model considers two variables, frequency of failure and downtime, in the analysis; this paper introduces a third variable, repair cost. This approach gives better results in categorizing the machines, reducing cost, and boosting earnings for the manufacturing plant. Results. We collected data from a food processing factory in Malaysia. From our empirical results, Machine C, Machine D, Machine F, and Machine I must be included in the Decision Making Grid model on the basis of the costing analysis, even though their frequencies of failure and downtime are lower than those of Machine B and Machine N. The case study and experimental results show that the cost analysis in the Decision Making Grid model yields more promising strategies for failure-based maintenance. Conclusions. The improvement of the Decision Making Grid model for decision analysis with costing analysis is the contribution of this paper.
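    The paper's extension can be sketched as ranking machines not only by failure frequency and downtime (the classic grid axes) but by an estimated repair cost. The machine data, hourly downtime rate, cost formula and threshold below are all hypothetical.

```python
DOWNTIME_RATE = 100.0  # cost of one hour of lost production, illustrative

def maintenance_cost(failures, downtime_hours, repair_cost_per_failure):
    """Estimated failure-based maintenance cost over the review period."""
    return failures * repair_cost_per_failure + downtime_hours * DOWNTIME_RATE

machines = {
    # name: (failure count, total downtime hours, repair cost per failure)
    "B": (12, 40.0, 50.0),
    "C": (4, 10.0, 1200.0),
    "D": (3, 8.0, 1500.0),
    "N": (10, 35.0, 60.0),
}

costs = {name: maintenance_cost(*data) for name, data in machines.items()}
# Machines whose estimated cost exceeds the threshold enter the grid's
# priority cells, even when frequency and downtime alone would exclude them.
priority = sorted(name for name, c in costs.items() if c >= 4500.0)
```

Here "C" and "D" fail rarely and briefly but their repairs are expensive, so the cost-augmented criterion flags them while "N", with more frequent failures, stays out, which is the behaviour the abstract describes.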

  15. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
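    The comparison criterion used above, AUC, can be computed directly as the probability that a randomly chosen faller receives a higher risk score than a randomly chosen non-faller. The scores and outcomes below are made up purely to show the mechanics of comparing a simple and an extended model.

```python
def auc(scores, fell):
    """Area under the ROC curve via the Mann-Whitney pairwise comparison."""
    pos = [s for s, y in zip(scores, fell) if y == 1]
    neg = [s for s, y in zip(scores, fell) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

fell           = [1, 1, 1, 0, 0, 0]                # 1-year fall outcome
simple_score   = [0.9, 0.4, 0.6, 0.5, 0.2, 0.1]    # self-report based model
extended_score = [0.9, 0.7, 0.6, 0.5, 0.2, 0.1]    # adds performance testing

auc_simple = auc(simple_score, fell)
auc_extended = auc(extended_score, fell)
```

If `auc_extended` is only marginally above `auc_simple`, the cheaper self-report model is the practical choice, which is the study's conclusion.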

  16. Cost calculation model concerning small-scale production of chips and split firewood

    International Nuclear Information System (INIS)

    Ryynaenen, S.; Naett, H.; Valkonen, J.

    1995-01-01

    The TTS-Institute's Forestry Department has developed a computer-based cost calculation model for the production of wood chips and split firewood. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. The calculation model eases and speeds up the calculation of unit costs and resource needs in harvesting systems for wood chips and split firewood. The model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit costs of the system as a whole. The undertaking was composed of the following parts: clarification and modification of productivity bases for application in the model as mathematical models, clarification of machine and device cost bases, design of the structure and functions of the calculation model, construction and testing of the model's 0-version, model calculations concerning typical chains, review of calculation bases, and charting of development needs focusing on the model. The calculation model was developed to serve research needs, but with further development it could be a useful tool in forestry and agricultural extension work, in related schools and colleges, and in the hands of firewood producers. (author)

  17. EFFICIENCY AND COST MODELLING OF THERMAL POWER PLANTS

    Directory of Open Access Journals (Sweden)

    Péter Bihari

    2010-01-01

    The proper characterization of energy suppliers is one of the most important components in modelling the supply/demand relations of the electricity market. Power generation capacity, i.e. the power plants, constitutes the supply side of the electricity market. The supply of power stations develops as the power stations attempt to achieve the greatest possible profit under the given prices and other constraints. The cost of operation and the cost of load increment are thus the most important characteristics of their behaviour on the market. Most electricity market models, however, do not take into account that the efficiency of a power station also depends on the level of load, on the type and age of the power plant, and on environmental considerations. Trade in electricity on the free market cannot rely on models in which these essential parameters are omitted. Such an incomplete model could lead to a situation where a particular power station would either be run only at full capacity or be deactivated entirely, depending on the prices prevailing on the free market. In reality, the marginal cost of power generation is better described by a function derived from the efficiency function. The derived marginal cost function gives the supply curve of the power station. The load-dependent efficiency function can be used not only for market modelling, but also for determining the pollutant and CO2 emissions of the power station, as well as shedding light on the conditions for successfully entering the market. Based on measurement data, our paper presents mathematical models that can be used to determine the load-dependent efficiency functions of coal-, oil-, or gas-fuelled power stations (steam turbine, gas turbine, combined cycle) and IC-engine-based combined heat and power stations. These efficiency functions could also contribute to modelling market conditions and determining the
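    The link between load-dependent efficiency and marginal cost can be shown with a small sketch: a quadratic fuel input-output curve F(P) implies an efficiency P/F(P) that peaks below full load and a marginal cost proportional to F'(P). The curve coefficients and fuel price are illustrative, not fitted measurement data.

```python
FUEL_PRICE = 25.0            # EUR per MWh of fuel energy, illustrative
A, B, C = 30.0, 1.8, 0.002   # fuel input F(P) = A + B*P + C*P**2 (MW fuel)

def fuel_input(p):
    return A + B * p + C * p ** 2

def efficiency(p):
    """Electric efficiency at load p (MW): output over fuel input."""
    return p / fuel_input(p)

def marginal_cost(p):
    """EUR/MWh for one more unit of load: fuel price times dF/dP."""
    return FUEL_PRICE * (B + 2 * C * p)

# Efficiency changes with load and marginal cost rises with load, which is
# exactly what a constant-efficiency market model misses.
eff_half, eff_full = efficiency(100.0), efficiency(200.0)
mc_half, mc_full = marginal_cost(100.0), marginal_cost(200.0)
```

The rising marginal cost function is the station's supply curve; a flat-efficiency model would instead produce the all-or-nothing dispatch behaviour criticised in the abstract.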

  18. Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2012-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.

  19. Radioimmunoassay evaluation and quality control by use of a simple computer program for a low cost desk top calculator

    International Nuclear Information System (INIS)

    Schwarz, S.

    1980-01-01

    A simple computer program for the data processing and quality control of radioimmunoassays is presented. It is written for a low-cost programmable desk-top calculator (Hewlett-Packard 97), which smaller laboratories can afford. The untreated counts from the scintillation spectrometer are entered manually; the printout gives the following results: initial data, logit-log transformed calibration points, parameters for the goodness of fit and the position of the standard curve, and dose estimates for control and unknown samples (mean value from single-dose interpolations and scatter of replicates), together with the automatic calculation of within-assay variance and, by use of magnetic cards holding the control parameters of all previous assays, between-assay variance. (orig.) [de
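    The evaluation step such a program performs can be sketched as: logit-log transform the calibration points, fit a straight line, then invert the fitted standard curve to interpolate unknown doses. The calibration parameters below are invented for the example.

```python
import math

def logit(y):
    return math.log(y / (1.0 - y))

def fit_line(xs, ys):
    """Closed-form least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

def dose_from_response(y, intercept, slope):
    """Invert the fitted standard curve: logit(y) = a + b*log(dose)."""
    return math.exp((logit(y) - intercept) / slope)

# Synthetic standard curve with true a = 2.0, b = -1.2 (binding falls with dose).
a_true, b_true = 2.0, -1.2
doses = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]
responses = [1.0 / (1.0 + math.exp(-(a_true + b_true * math.log(d)))) for d in doses]

a_fit, b_fit = fit_line([math.log(d) for d in doses], [logit(y) for y in responses])
unknown_dose = dose_from_response(responses[3], a_fit, b_fit)  # recovers ~5.0
```

Replicate scatter around this fitted line is what feeds the within-assay variance, and storing (a, b) per assay run is what enables the between-assay comparison the record describes.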

  20. Spin decoherence in electron storage rings. More from a simple model

    Energy Technology Data Exchange (ETDEWEB)

    Barber, D.P. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Heinemann, K. [The Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Mathematics and Statistics

    2015-06-15

    This is an addendum to the paper ''Some models of spin coherence and decoherence in storage rings'' by one of the authors (K. Heinemann, DESY Report 97-166 (1997)), in which spin diffusion in simple electron storage rings is studied. In particular, we illustrate in a compact way that the exact formalism of that article delivers a rate of depolarisation which can differ from the rate obtained by conventional treatments of spin diffusion relying on the use of the derivative ∂n/∂η. As a vehicle we consider a ring with a Siberian Snake and electron polarisation in the plane of the ring. For this simple setup, with its one-dimensional spin motion, we avoid having to deal directly with the Bloch equation for the polarisation density. Our treatment, which is deliberately pedagogical, shows that the use of ∂n/∂η provides a very good approximation to the rate of spin depolarisation in the model considered. It then shows that the exact rate of depolarisation can be obtained by replacing ∂n/∂η with another derivative, and gives a heuristic justification for the new derivative.

  1. A simple electromagnetic model for the light clock of special relativity

    International Nuclear Information System (INIS)

    Smith, Glenn S

    2011-01-01

    Thought experiments involving a light clock are common in introductory treatments of special relativity, because they provide a simple way of demonstrating the non-intuitive phenomenon of time dilation. The properties of the ray or pulse of light that is continuously reflected between the parallel mirrors of the clock are often stated vaguely and sometimes involve implicitly other relativistic effects, such as aberration. While this approach is adequate for an introduction, it should be supplemented by a more accurate analysis of the light clock once the formulae for the Lorentz transformation and the transformation of the electromagnetic field have been developed. A simple yet accurate electromagnetic model for the light clock is presented for this purpose. In this model, the ray of light in the qualitative treatment is replaced by a guided wave in a parallel-plate waveguide. Expressions for the electromagnetic field and energy density within the waveguide are determined in the inertial frame in which the clock is at rest and the laboratory frame in which the clock is moving with constant velocity. The analytical expressions and graphical results obtained clearly demonstrate the operation of the clock and time dilation, as well as other interesting relativistic effects.

  2. The Cost-Effectiveness of Dual Mobility Implants for Primary Total Hip Arthroplasty: A Computer-Based Cost-Utility Model.

    Science.gov (United States)

    Barlow, Brian T; McLawhorn, Alexander S; Westrich, Geoffrey H

    2017-05-03

    Dislocation remains a clinically important problem following primary total hip arthroplasty, and it is a common reason for revision total hip arthroplasty. Dual mobility (DM) implants decrease the risk of dislocation but can be more expensive than conventional implants and have idiosyncratic failure mechanisms. The purpose of this study was to investigate the cost-effectiveness of DM implants compared with conventional bearings for primary total hip arthroplasty. Markov model analysis was conducted from the societal perspective with use of direct and indirect costs. Costs, expressed in 2013 U.S. dollars, were derived from the literature, the National Inpatient Sample, and the Centers for Medicare & Medicaid Services. Effectiveness was expressed in quality-adjusted life years (QALYs). The model was populated with health state utilities and state transition probabilities derived from previously published literature. The analysis was performed for a patient's lifetime, and costs and effectiveness were discounted at 3% annually. The principal outcome was the incremental cost-effectiveness ratio (ICER), with a willingness-to-pay threshold of $100,000/QALY. Sensitivity analyses were performed to explore relevant uncertainty. In the base case, DM total hip arthroplasty showed absolute dominance over conventional total hip arthroplasty, with lower accrued costs ($39,008 versus $40,031 U.S. dollars) and higher accrued utility (13.18 versus 13.13 QALYs) indicating cost-savings. DM total hip arthroplasty ceased being cost-saving when its implant costs exceeded those of conventional total hip arthroplasty by $1,023, and the cost-effectiveness threshold for DM implants was $5,287 greater than that for conventional implants. DM was not cost-effective when the annualized incremental probability of revision from any unforeseen failure mechanism or mechanisms exceeded 0.29%. The probability of intraprosthetic dislocation exerted the most influence on model results. This model
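    The Markov cohort logic behind such a cost-utility analysis can be sketched with a toy three-state model (well / revised / dead). All probabilities, costs and utilities below are invented for illustration; they are not the study's inputs, and the study's actual model is considerably richer.

```python
def run_cohort(implant_cost, p_revise, p_die=0.02, years=30, discount=0.03,
               revision_cost=25000.0, u_well=0.85, u_revised=0.75):
    """Return discounted lifetime cost and QALYs for one implant strategy."""
    well, revised = 1.0, 0.0          # cohort fractions by health state
    cost, qalys = implant_cost, 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t          # 3% annual discounting
        qalys += d * (well * u_well + revised * u_revised)
        newly_revised = well * p_revise
        cost += d * newly_revised * revision_cost
        well = well * (1 - p_revise - p_die)
        revised = (revised + newly_revised) * (1 - p_die)
    return cost, qalys

# Hypothetical strategies: dual mobility costs more up front but revises less.
cost_dm, qaly_dm = run_cohort(implant_cost=6000.0, p_revise=0.005)
cost_conv, qaly_conv = run_cohort(implant_cost=5000.0, p_revise=0.012)
# If one strategy costs less AND yields more QALYs it dominates; otherwise
# report the ICER = (cost_dm - cost_conv) / (qaly_dm - qaly_conv).
```

With these made-up numbers the lower revision rate more than repays the implant premium, mirroring the "absolute dominance" result in the base case above.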

  3. NASA/Air Force Cost Model: NAFCOM

    Science.gov (United States)

    Winn, Sharon D.; Hamcher, John W. (Technical Monitor)

    2002-01-01

    The NASA/Air Force Cost Model (NAFCOM) is a parametric estimating tool for space hardware. It is based on historical NASA and Air Force space projects and is primarily used in the very early phases of a development project. NAFCOM can be used at the subsystem or component levels.

  4. A Simple Model to Teach Business Cycle Macroeconomics for Emerging Market and Developing Economies

    Science.gov (United States)

    Duncan, Roberto

    2015-01-01

    The canonical neoclassical model is insufficient to understand business cycle fluctuations in emerging market and developing economies. The author reformulates the model proposed by Aguiar and Gopinath (2007) in a simple setting that can be used to teach business cycle macroeconomics for emerging market and developing economies at the…

  5. The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making

    Directory of Open Access Journals (Sweden)

    Gabriela Tavares

    2017-08-01

    Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
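    The aDDM's core mechanism can be sketched in a seeded Monte Carlo simulation: while one stimulus is fixated, evidence for it accumulates at full rate and the unattended stimulus is discounted by a factor theta. The parameter values and fixation schedule below are illustrative assumptions, not fitted values from the paper.

```python
import random

def simulate_addm(r_left, r_right, d=0.002, theta=0.3, sigma=0.02,
                  fix_len=300, max_steps=20000, rng=random):
    """Return 'left' or 'right' when the relative decision value hits +/-1."""
    rdv, attend_left = 0.0, True
    for step in range(max_steps):
        if step > 0 and step % fix_len == 0:
            attend_left = not attend_left          # alternate fixations
        if attend_left:
            drift = d * (r_left - theta * r_right)  # unattended item discounted
        else:
            drift = d * (theta * r_left - r_right)
        rdv += drift + rng.gauss(0.0, sigma)
        if rdv >= 1.0:
            return "left"
        if rdv <= -1.0:
            return "right"
    return "left" if rdv >= 0 else "right"

rng = random.Random(7)
choices = [simulate_addm(3.0, 1.0, rng=rng) for _ in range(500)]
p_left = choices.count("left") / len(choices)
```

Because the discount only bites for the unattended item, the model predicts a bias toward whichever stimulus is fixated more, which is the attentional choice bias the study measures.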

  6. The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.

    Science.gov (United States)

    Tavares, Gabriela; Perona, Pietro; Rangel, Antonio

    2017-01-01

    Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.

  7. Least squares estimation in a simple random coefficient autoregressive model

    DEFF Research Database (Denmark)

    Johansen, S; Lange, T

    2013-01-01

    The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by $y_t = s_t \rho y_{t-1} + \varepsilon_t$, $t = 1, \dots, n$, where $s_t$ is an i.i.d. binary variable with $P(s_t = 1) = p$, independent of $\varepsilon_t$. We prove a curious result on the limit of the least squares estimator of $\rho$. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of the relevant sums and hence their limit.
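    The model's mechanism for long swings can be seen in a seeded simulation of $y_t = s_t \rho y_{t-1} + \varepsilon_t$: while the indicator stays at 1 the series behaves like an explosive AR(1), and when it hits 0 the series collapses back to pure noise. The values of rho and p below are illustrative choices, not values from the paper.

```python
import random

def simulate_rca(n, rho=1.05, p=0.95, seed=3):
    """Simulate a random coefficient AR(1) with an i.i.d. Bernoulli(p) switch."""
    rng = random.Random(seed)
    y, series = 0.0, []
    for _ in range(n):
        s = 1 if rng.random() < p else 0   # s_t = 0 resets the autoregression
        y = s * rho * y + rng.gauss(0.0, 1.0)
        series.append(y)
    return series

series = simulate_rca(2000)
largest_swing = max(abs(v) for v in series)  # occasional large excursions
```

Long runs of $s_t = 1$ amplify shocks geometrically, producing the persistent swings, while the occasional reset keeps the process from diverging permanently.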

  8. Cost and cost effectiveness of long-lasting insecticide-treated bed nets - a model-based analysis

    Directory of Open Access Journals (Sweden)

    Pulkki-Brännström Anni-Maria

    2012-04-01

    Background: The World Health Organization recommends that national malaria programmes universally distribute long-lasting insecticide-treated bed nets (LLINs). LLINs provide effective insecticide protection for at least three years, while conventional nets must be retreated every 6-12 months. LLINs may also promise longer physical durability (lifespan), but at a higher unit price. No prospective data currently available are sufficient to calculate the comparative cost effectiveness of different net types. We thus constructed a model to explore the cost effectiveness of LLINs, asking how a longer lifespan affects the relative cost effectiveness of nets, and if, when and why LLINs might be preferred to conventional insecticide-treated nets. An innovation of our model is that we also considered the replenishment need, i.e. the loss of nets over time. Methods: We modelled the choice of net over a 10-year period to facilitate the comparison of nets with different lifespans (and/or prices) and replenishment needs over time. Our base case represents a large-scale programme which achieves high coverage and usage throughout the population by distributing either LLINs or conventional nets through existing health services, and retreats a large proportion of conventional nets regularly at low cost. We identified the determinants of bed net programme cost effectiveness and parameter values for usage rate, delivery and retreatment cost from the literature. One-way sensitivity analysis was conducted to explicitly compare the differential effect of changing parameters such as price, lifespan, usage and replenishment need. Results: If conventional and long-lasting bed nets have the same physical lifespan (3 years), LLINs are more cost effective unless they are priced at more than USD 1.5 above the price of conventional nets. Because a longer lifespan brings delivery cost savings, each one-year increase in lifespan can be accompanied by a USD 1 or more increase in price
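    The model's core comparison, the 10-year cost of keeping one net in use for nets that differ in price, lifespan, retreatment need and replenishment (annual loss), can be sketched as below. All figures are illustrative assumptions, not the study's parameter values.

```python
def expected_service_years(lifespan, annual_loss):
    """Years a net is actually in use, allowing for loss before wear-out."""
    return sum((1.0 - annual_loss) ** t for t in range(int(lifespan)))

def ten_year_cost(price, delivery, lifespan, retreat_per_year, annual_loss=0.1):
    """Cost of keeping one net in continuous use over a 10-year programme."""
    nets_needed = 10.0 / expected_service_years(lifespan, annual_loss)
    return nets_needed * (price + delivery) + 10.0 * retreat_per_year

# Same 3-year physical lifespan; LLIN carries a USD 1 price premium but
# needs no retreatment, while the conventional net is retreated yearly.
llin = ten_year_cost(price=7.0, delivery=2.0, lifespan=3, retreat_per_year=0.0)
conv = ten_year_cost(price=6.0, delivery=2.0, lifespan=3, retreat_per_year=0.5)
```

With these numbers the LLIN is cheaper; pushing the premium past roughly USD 1.5 flips the ordering, echoing the threshold reported in the Results.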

  9. A Simple model for breach formation by overtopping

    Energy Technology Data Exchange (ETDEWEB)

    Trousseau, P. [Hydro-Quebec, Montreal, PQ (Canada); Kahawita, R. [Ecole Polytechnique, Montreal, PQ (Canada)

    2006-07-01

    Failures in earth or rockfill dams are often caused by overtopping of the crest, leading to initiation and uncontrolled progression of a breach. Overtopping may occur because of large inflows into the reservoir caused by excessive rainfall or by the failure of an upstream dam that causes a large volume of water to suddenly arrive at the downstream reservoir thus rapidly exceeding the storage and spillway evacuation capacity. Breach formation in a rockfill or earthfill dike due to overtopping of the crest is a complex process as it involves interaction between the hydraulics of the flow and the erosion characteristics of the fill material. This paper presented a description and validation of a simple parametric model for breach formation due to overtopping. A study was conducted to model, as closely as possible, the physical processes involved within the restriction of the simplified analysis. The objective of the study was to predict the relevant timescales for the phenomenon leading to a prediction of the outflow hydrograph. The model has been validated on the Oros dam failure in Brazil as well as on embankment tests conducted at Rosvatn, Norway. It was concluded that the major impediment to the development of breach erosion models for use as predictive tools is in the characterization of the erosion behaviour. 19 refs., 2 tabs., 9 figs.

  10. Fermionic dark matter in a simple t-channel model

    International Nuclear Information System (INIS)

    Goyal, Ashok; Kumar, Mukesh

    2016-01-01

    We consider a fermionic dark matter (DM) particle with renormalizable Standard Model (SM) gauge interactions in a simple t-channel model. The DM particle interacts with SM fermions through the exchange of scalar and vector mediators which carry colour or lepton number. In the case of the coloured mediators considered in this study, we find that if the DM is thermally produced and accounts for the observed relic density, almost the entire parameter space is ruled out by the direct detection observations. The bounds from the monojet plus missing energy searches at the Large Hadron Collider are less stringent in this case. In contrast, for Majorana DM, we obtain strong bounds from the monojet searches which rule out DM particles of mass less than about a few hundred GeV for both scalar and vector mediators.

  11. Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models

    Science.gov (United States)

    Ruiz-Torres, Alex J.; McCleskey, Carey

    2000-01-01

    The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles for all life cycle activities, from design and development, through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture, and estimating the resulting costs and flight rate. This paper proposes the use of Activity Based Costing (ABC) modeling for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach provides several advantages over current approaches to vehicle architecture assessment, including easier validation, and allows vehicle designers to understand the cost and cycle time drivers.
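The ABC idea, activities whose durations and costs are driven by design characteristics, can be illustrated with a minimal sketch. The activity names, rates, and the single driver (dry mass) below are invented for illustration, not taken from the paper:

```python
def abc_estimate(activities, dry_mass_t):
    """Activity-based cost sketch: each ground-support activity has a
    duration driven by a vehicle characteristic (here dry mass, tonnes)
    and a daily cost; returns (total_cost, cycle_time_days)."""
    total_cost = 0.0
    cycle_time = 0.0
    for name, (base_days, days_per_tonne, daily_cost) in activities.items():
        days = base_days + days_per_tonne * dry_mass_t
        total_cost += days * daily_cost
        cycle_time += days
    return total_cost, cycle_time

# Hypothetical expert-elicited activity table.
activities = {
    "vehicle_inspection":  (2.0, 0.10, 50_000.0),
    "propellant_loading":  (1.0, 0.05, 80_000.0),
    "payload_integration": (3.0, 0.00, 60_000.0),
}
cost, days = abc_estimate(activities, dry_mass_t=20.0)
flights_per_year = 365.0 / days  # crude flight-rate bound for one flow
```

The cycle time drives the achievable flight rate, which is why the paper treats cost and cycle-time drivers together.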

  12. European Union-28: An annualised cost-of-illness model for venous thromboembolism.

    Science.gov (United States)

    Barco, Stefano; Woersching, Alex L; Spyropoulos, Alex C; Piovella, Franco; Mahan, Charles E

    2016-04-01

    Annual costs for venous thromboembolism (VTE) have been defined within the United States (US) demonstrating a large opportunity for cost savings. Costs for the European Union-28 (EU-28) have never been defined. A literature search was conducted to evaluate EU-28 cost sources. Median costs were defined for each cost input and costs were inflated to 2014 Euros (€) in the study country and adjusted for Purchasing Power Parity between EU countries. Adjusted costs were used to populate previously published cost-models based on adult incidence-based events. In the base model, annual expenditures for total, hospital-associated, preventable, and indirect costs were €1.5-2.2 billion, €1.0-1.5 billion, €0.5-1.1 billion and €0.2-0.3 billion, respectively (indirect costs: 12 % of expenditures). In the long-term attack rate model, total, hospital-associated, preventable, and indirect costs were €1.8-3.3 billion, €1.2-2.4 billion, €0.6-1.8 billion and €0.2-0.7 billion (indirect costs: 13 % of expenditures). In the multiway sensitivity analysis, annual expenditures for total, hospital-associated, preventable, and indirect costs were €3.0-8.5 billion, €2.2-6.2 billion, €1.1-4.6 billion and €0.5-1.4 billion (indirect costs: 22 % of expenditures). When the value of a premature life lost increased slightly, aggregate costs rose considerably since these costs are higher than the direct medical costs. When evaluating the models aggregately for costs, the results suggest total, hospital-associated, preventable, and indirect costs ranging from €1.5-13.2 billion, €1.0-9.7 billion, €0.5-7.3 billion and €0.2-6.1 billion, respectively. Our study demonstrates that VTE costs have a large financial impact upon the EU-28's healthcare systems and that significant savings could be realised if better preventive measures are applied.
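The cost-adjustment step described here, inflating to base-year currency and then converting with Purchasing Power Parity, can be sketched as follows. The CPI and PPP index values are hypothetical placeholders, not figures from the study:

```python
def adjust_cost(cost, year_cpi, base_cpi, ppp_source, ppp_target):
    """Inflate a historical cost to base-year terms via a CPI ratio,
    then convert between countries using PPP conversion factors."""
    inflated = cost * (base_cpi / year_cpi)
    return inflated * (ppp_target / ppp_source)

# e.g. a EUR 1,000 cost from a year with CPI 95, expressed in 2014 terms
# (CPI 100) and moved from a country with PPP index 0.9 to one with 1.2:
adjusted = adjust_cost(1000.0, year_cpi=95.0, base_cpi=100.0,
                       ppp_source=0.9, ppp_target=1.2)
```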

  13. Towards a Multi-Variable Parametric Cost Model for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd

    2016-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ~ X D^(1.75 +/- 0.05) lambda^(-0.5 +/- 0.25) T^(-0.25) e^(-0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advances and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e. multiple-aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
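The hypothesized cost-estimating relationship can be evaluated numerically. A minimal sketch using the central exponents; the multiplicative constant X is not given in the abstract, so only ratios between configurations are meaningful, and the origin of the year count Y is assumed, not stated:

```python
import math

def ota_relative_cost(aperture_m, wavelength_um, temperature_k, years):
    """Relative OTA cost from the CER quoted above, with central
    exponents 1.75 (diameter), -0.5 (wavelength), -0.25 (temperature)
    and a -0.04/year technology-advance factor."""
    return (aperture_m ** 1.75
            * wavelength_um ** -0.5
            * temperature_k ** -0.25
            * math.exp(-0.04 * years))

# Doubling the aperture, everything else fixed, multiplies cost by
# 2**1.75, about 3.4x -- diameter dominates, as the findings state.
ratio = (ota_relative_cost(4.0, 0.5, 280.0, 0.0)
         / ota_relative_cost(2.0, 0.5, 280.0, 0.0))
```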

  14. Systematic review of model-based analyses reporting the cost-effectiveness and cost-utility of cardiovascular disease management programs.

    Science.gov (United States)

    Maru, Shoko; Byrnes, Joshua; Whitty, Jennifer A; Carrington, Melinda J; Stewart, Simon; Scuffham, Paul A

    2015-02-01

    The reported cost effectiveness of cardiovascular disease management programs (CVD-MPs) is highly variable, potentially leading to different funding decisions. This systematic review evaluates published modeled analyses to compare study methods and quality. Articles were included if an incremental cost-effectiveness ratio (ICER) or cost-utility ratio (ICUR) was reported, the intervention was multi-component and designed to manage or prevent a cardiovascular disease condition, and it addressed all domains specified in the American Heart Association Taxonomy for Disease Management. Nine articles (reporting 10 clinical outcomes) were included. Eight cost-utility and two cost-effectiveness analyses targeted hypertension (n=4), coronary heart disease (n=2), coronary heart disease plus stroke (n=1), heart failure (n=2) and hyperlipidemia (n=1). Study perspectives included the healthcare system (n=5), societal and fund holders (n=1), a third party payer (n=3), or was not explicitly stated (n=1). All analyses were modeled based on interventions of one to two years' duration. Time horizons were two years (n=1), 10 years (n=1) or lifetime (n=8). Model structures included Markov models (n=8), 'decision analytic models' (n=1), or were not explicitly stated (n=1). Considerable variation was observed in clinical and economic assumptions and reporting practices. Of all ICERs/ICURs reported, including those of subgroups (n=16), four were above a US$50,000 acceptability threshold, six were below and six were dominant. The majority of CVD-MPs were reported to have favorable economic outcomes, but 25% were at unacceptably high cost for the outcomes. Use of standardized reporting tools should increase transparency and inform what drives the cost-effectiveness of CVD-MPs. © The European Society of Cardiology 2014.
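The ICER that gates inclusion in this review is a simple ratio, including the "dominant" case counted above (an intervention that is both cheaper and more effective, for which no ratio is reported). A minimal sketch with hypothetical numbers:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (e.g. per QALY). Returns None when the new programme
    dominates (cheaper and at least as effective)."""
    d_cost = cost_new - cost_old
    d_effect = effect_new - effect_old
    if d_cost <= 0 and d_effect >= 0 and (d_cost < 0 or d_effect > 0):
        return None  # dominant: no ratio is meaningful
    return d_cost / d_effect

# Hypothetical programme: USD 12,000 extra cost for 0.3 extra QALYs
# gives USD 40,000/QALY, under the US$50,000 acceptability threshold.
value = icer(52_000, 40_000, 2.3, 2.0)
dominant = icer(30_000, 40_000, 2.3, 2.0)
```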

  15. Simple model of surface roughness for binary collision sputtering simulations

    Science.gov (United States)

    Lindsey, Sloan J.; Hobler, Gerhard; Maciążek, Dawid; Postawa, Zbigniew

    2017-02-01

    It has been shown that surface roughness can strongly influence the sputtering yield - especially at glancing incidence angles, where the inclusion of surface roughness leads to an increase in sputtering yields. In this work, we propose a simple one-parameter model (the "density gradient model") which imitates surface roughness effects. In the model, the target's atomic density is assumed to vary linearly between the actual material density and zero. The layer width is the sole model parameter. The model has been implemented in the binary collision simulator IMSIL and has been evaluated against various geometric surface models for 5 keV Ga ions impinging on an amorphous Si target. To aid the construction of a realistic rough surface topography, we have performed MD simulations of sequential 5 keV Ga impacts on an initially crystalline Si target. We show that our new model effectively reproduces the sputtering yield, with only minor variations in the energy and angular distributions of sputtered particles. The success of the density gradient model is attributed to a reduction of the reflection coefficient - leading to increased sputtering yields, similar in effect to surface roughness.
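One way to read the one-parameter density profile: zero density on the vacuum side, the bulk value below the transition layer, and a linear ramp in between whose width is the sole free parameter. A sketch of that profile (the interpretation of depth = 0 as the nominal surface is an assumption):

```python
def density_profile(depth, layer_width, bulk_density=1.0):
    """Density gradient model: atomic density rises linearly from zero
    at the nominal surface (depth 0) to the bulk value at
    depth = layer_width, the model's single parameter."""
    if depth <= 0.0:
        return 0.0          # vacuum side
    if depth >= layer_width:
        return bulk_density  # bulk material
    return bulk_density * depth / layer_width

# e.g. with a 2 nm transition layer, half the bulk density at 1 nm depth:
half = density_profile(1.0, 2.0)
```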

  16. Aluminium alloyed iron-silicide/silicon solar cells: A simple approach for low cost environmental-friendly photovoltaic technology.

    Science.gov (United States)

    Kumar Dalapati, Goutam; Masudy-Panah, Saeid; Kumar, Avishek; Cheh Tan, Cheng; Ru Tan, Hui; Chi, Dongzhi

    2015-12-03

    This work demonstrates the fabrication of silicide/silicon based solar cells towards the development of a low cost and environmentally friendly photovoltaic technology. A heterostructure solar cell using metallic alpha phase (α-phase) aluminum alloyed iron silicide (FeSi(Al)) on n-type silicon was fabricated with an efficiency of 0.8%. The fabricated device has an open circuit voltage and fill factor of 240 mV and 60%, respectively. Performance of the device was improved by about 7-fold to 5.1% through interface engineering. The α-phase FeSi(Al)/silicon solar cell devices have promising photovoltaic characteristics with an open circuit voltage, short-circuit current and a fill factor (FF) of 425 mV, 18.5 mA/cm(2), and 64%, respectively. The significant improvement of the α-phase FeSi(Al)/n-Si solar cells is due to the formation of a p(+)-n homojunction via a re-grown crystalline silicon layer (~5-10 nm) at the silicide/silicon interface. The thickness of the re-grown silicon layer is crucial for silicide/silicon based photovoltaic devices. Performance of the α-FeSi(Al)/n-Si solar cells depends significantly on the thickness of the α-FeSi(Al) layer and the process temperature during device fabrication. This study will open up new opportunities for Si based photovoltaic technology using a simple, sustainable, and low cost method.
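The reported device parameters are internally consistent with the standard efficiency relation eta = Voc x Jsc x FF / Pin, assuming standard AM1.5G illumination of 100 mW/cm^2 (the illumination condition is an assumption, not stated in the abstract):

```python
def cell_efficiency(voc_v, jsc_ma_cm2, ff, p_in_mw_cm2=100.0):
    """Power conversion efficiency (%) from open-circuit voltage (V),
    short-circuit current density (mA/cm^2) and fill factor, assuming
    a given incident power density (default: AM1.5G, 100 mW/cm^2)."""
    return 100.0 * voc_v * jsc_ma_cm2 * ff / p_in_mw_cm2

# Reported values for the improved alpha-FeSi(Al)/n-Si device:
eta = cell_efficiency(voc_v=0.425, jsc_ma_cm2=18.5, ff=0.64)
```

This evaluates to about 5.0%, consistent with the ~5.1% quoted above.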

  17. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    Science.gov (United States)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation to the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  18. COST OF QUALITY MODELS AND THEIR IMPLEMENTATION IN MANUFACTURING FIRMS

    Directory of Open Access Journals (Sweden)

    N.M. Vaxevanidis

    2009-03-01

    Full Text Available In order to improve quality, an organization must take into account the costs associated with achieving quality, since the objective of continuous improvement programs is not only to meet customer requirements, but also to do so at the lowest possible cost. This can only be obtained by reducing the costs needed to achieve quality, and the reduction of these costs is only possible if they are identified and measured. Therefore, measuring and reporting the cost of quality (CoQ should be considered an important issue for achieving quality excellence. To collect quality costs, an organization needs to adopt a framework to classify costs; however, there is no general agreement on a single broad definition of quality costs. CoQ is usually understood as the sum of conformance plus non-conformance costs, where cost of conformance is the price paid for prevention of poor quality (for example, inspection and quality appraisal and cost of non-conformance is the cost of poor quality caused by product and service failure (for example, rework and returns. The objective of this paper is to give a survey of research articles on the topic of CoQ; it opens with a literature review focused on existing CoQ models; then, it briefly presents the most common CoQ parameters and the metrics (indices used for monitoring CoQ. Finally, the use of CoQ models in practice, i.e., the implementation of a quality costing system and cost of quality reporting in companies is discussed, with emphasis on cases concerning manufacturing firms.
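The conformance/non-conformance split described here is easy to make concrete in the common prevention-appraisal-failure scheme. The cost figures below are hypothetical:

```python
def cost_of_quality(prevention, appraisal,
                    internal_failure, external_failure):
    """Return (conformance, non_conformance, total) cost of quality:
    conformance = prevention + appraisal (price of preventing poor
    quality); non-conformance = internal + external failure costs."""
    conformance = prevention + appraisal
    non_conformance = internal_failure + external_failure
    return conformance, non_conformance, conformance + non_conformance

conf, nonconf, total = cost_of_quality(
    prevention=40_000,        # e.g. training, process design
    appraisal=25_000,         # e.g. inspection, testing
    internal_failure=30_000,  # e.g. scrap, rework
    external_failure=15_000,  # e.g. returns, warranty claims
)
```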

  19. Analysis of financial cost models of strategic planning

    Directory of Open Access Journals (Sweden)

    Vorobev Aleksei Viacheslavovich

    2013-11-01

    Full Text Available This article analyzes financial cost models for strategic planning. It shows the strengths and weaknesses of the economic value added (EVA) model and argues for the further development of methods for determining financial policy priorities.
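The EVA measure discussed in the article is a one-line computation: operating profit after tax minus a charge for the capital employed. A minimal sketch with hypothetical figures:

```python
def economic_value_added(nopat, invested_capital, wacc):
    """EVA = net operating profit after tax minus a capital charge
    (invested capital times the weighted average cost of capital)."""
    return nopat - invested_capital * wacc

# Hypothetical firm: NOPAT of 1.2m on 8m invested capital at a 10% WACC
# leaves 0.4m of value created above the cost of capital.
eva = economic_value_added(1_200_000, 8_000_000, 0.10)
```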

  20. Competition and fragmentation: a simple model generating lognormal-like distributions

    International Nuclear Information System (INIS)

    Schwaemmle, V; Queiros, S M D; Brigatti, E; Tchumatchenko, T

    2009-01-01

    The current distribution of language size in terms of speaker population is generally described using a lognormal distribution. Analyzing the original real data, we show how the double-Pareto lognormal distribution can give an alternative fit that indicates the existence of a power law tail. A simple Monte Carlo model is constructed based on the processes of competition and fragmentation. The results reproduce the power law tails of the real distribution well and give better results for a poorly connected topology of interactions.
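The competition/fragmentation mechanism can be illustrated with a toy Monte Carlo. The rules below (uniform multiplicative growth for competition, occasional binary splits for fragmentation) are a simplified stand-in for the authors' model, not a reproduction of it; multiplicative noise alone tends toward a lognormal, and fragmentation perturbs the tails:

```python
import random

def simulate_sizes(n_groups=200, steps=200, frag_prob=0.01, seed=1):
    """Toy competition/fragmentation Monte Carlo: each step, every
    group's size is multiplied by a random growth factor (competition),
    and with small probability a group splits in two (fragmentation)."""
    rng = random.Random(seed)
    sizes = [1000.0] * n_groups
    for _ in range(steps):
        # competition: random multiplicative growth/shrinkage
        sizes = [s * rng.uniform(0.9, 1.1) for s in sizes]
        # fragmentation: occasional binary splits
        next_sizes = []
        for s in sizes:
            if rng.random() < frag_prob:
                f = rng.uniform(0.2, 0.8)
                next_sizes.extend([s * f, s * (1.0 - f)])
            else:
                next_sizes.append(s)
        sizes = next_sizes
    return sizes

sizes = simulate_sizes()
```

A histogram of `sizes` on log axes is the natural way to inspect whether the tail is heavier than lognormal.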